Superconducting motor to increase power density

The scientists' experimental setup: (1) stationary cryostat; (2) induction motor; (3) belts; (4) sliding contacts—(a) brushes, (b) rings. Image credit: Ailam, et al. ©IEEE 2007.

The field of electric motors has recently entered a new era. The electric motors that you see today in everything from washing machines to toys and fans use the same basic principles as motors from 50 years ago. But with the realization of using superconducting wire to replace conventional copper coils, motors are becoming more compact, more energy efficient, and less expensive, which will have advantages particularly for large industrial applications.

Recently, scientists El Hadj Ailam and colleagues working at the Université Henri Poincaré in Nancy, France and the Center for Advanced Power Systems in Tallahassee, Florida, have designed and tested a superconducting rotating machine based on an unconventional topology. Their results, which are published in IEEE Transactions on Applied Superconductivity, show promising opportunities for the motor.

"This work has two goals," Ailam, who is currently with the Centre Universitaire Khemis Miliana in Algeria, told PhysOrg.com. "The first is to show the feasibility of an electrical motor based on the magnetic flux density, and the second is to demonstrate that superconductors can significantly improve electrical machine performance."

Building on high-temperature motors designed over the past few years, Ailam et al.'s motor is a low-temperature, eight-pole machine with a stationary superconducting inductor. Unlike copper coils, the niobium-titanium (NbTi) inductor coils in this design have no electrical resistance, which is one of the greatest advantages of superconductors. When the two NbTi coils are fed with currents moving in opposite directions, the currents create a magnetic field. Located between the two coils, four superconducting bulk plates (made of YBaCuO, or yttrium barium copper oxide) shape and distribute the magnetic flux lines, which then induce an alternating electromagnetic field based on the magnetic concentration. A rotating armature wound with copper wires then converts the electrical energy to mechanical energy, which is eventually transferred to an application.

In this design, the entire inductor is cooled to 4.2 K using liquid helium to enable zero electrical resistance in the coils. (The scientists explain that high-temperature wires could also work in this configuration.) As with all superconducting motors, the superconducting wire can carry larger amounts of current than copper wire, and therefore create more powerful magnetic fields in a smaller amount of space than conventional motors.

"For the majority of electrical superconducting machines, the structure is a classical one, and the magnetic flux is a radial one," Ailam explained. "[However,] for our machine, the inductor magnetic flux is an axial one."

To test the performance of the motor, the scientists calculated the magnetic scalar potential, which tells the strength of a magnetic field in a certain area, and then determined the magnetic flux density, which is the quantity of magnetism in that area. As the scientists explained, the maximal value of the flux density exists between two of the bulk plates, while the minimum value exists behind the plates; a large difference in magnetic flux density maximizes the motor's performance by generating a more powerful magnetic field.

The group experimentally demonstrated a generated voltage of 118.8 volts for the motor. Further, they calculated a theoretical generated voltage of 172.5 volts, and explained that the difference is due to an uncertain value for the difference between the maximal and minimal values of the magnetic fields around the bulk plates, which was not directly measured. Improving this difference in magnetic flux density will hopefully increase the motor's voltage.
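As a rough reading of those two voltage figures (an illustrative sketch, not a calculation from the paper, and it assumes the generated voltage scales roughly linearly with the flux-density difference across the bulk plates), the measured-to-predicted ratio indicates how much of the assumed field contrast was actually achieved:

```python
# Rough consistency check of the reported voltages (illustrative, not from the paper).
# Assumption: the open-circuit voltage scales roughly linearly with the flux-density
# difference dB = Bmax - Bmin across the bulk plates, so the measured/predicted ratio
# estimates how much of the assumed field contrast was actually realized.

V_MEASURED = 118.8   # volts, reported experimental value
V_PREDICTED = 172.5  # volts, reported theoretical value

ratio = V_MEASURED / V_PREDICTED
print(f"Measured/predicted voltage ratio: {ratio:.2f}")                          # ~0.69
print(f"Shortfall attributed to the unmeasured flux contrast: {1 - ratio:.0%}")  # ~31%
```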
"As we demonstrate in another paper, under realization, using this structure with several superconducting wires and 20 mW generated power decreases the inductor volume 20-50 percent in comparison to a classical electrical machine," Ailam said. In the near future, the group plans to design and construct a 100 kW superconducting machine using the same configuration.

"The major advantages of these motors are a high power-volume density and a high torque-volume density, and less vibration than for conventional motors," Ailam said. "I think that maritime propulsion and electrical traction in general can benefit principally from these motors."

Citation: Ailam, El Hadj, Netter, Denis, Lévêque, Jean, Douine, Bruno, Masson, Philippe J., and Rezzoug, Abderrezak. "Design and Testing of a Superconducting Rotating Machine." IEEE Transactions on Applied Superconductivity, Vol. 17, No. 1, March 2007.

Cambridge Nights: a late night show for scientists

(PhysOrg.com) — While it's not uncommon to see scientists on TV, most of the time it's just for a few minutes on the news to comment on a recent event or major discovery. A new late night show called "Cambridge Nights" coming out of MIT's Media Lab is changing that by providing an outlet for researchers to talk about their work in a slower-paced, conversational setting. The first episodes of the show are being posted at http://cambridgenights.media.mit.edu.

A screenshot during the intro to "Cambridge Nights." Image credit: MIT Media Lab

Similar to how Leno, Letterman, and Jon Stewart interview interesting people in pop culture, Cesar Hidalgo, ABC Career Development Professor at MIT's Media Lab, interviews academic professionals about their research, their life stories, and their views of the world. So far, eight episodes have been filmed, each about 30-45 minutes long. The episodes are being released every Wednesday, with the fourth episode appearing this week. The three episodes that have been released so far feature interviews with Marc Vidal, Professor of Genetics at Harvard Medical School; Geoffrey West, former President of the Santa Fe Institute; and Albert-László Barabási, director of Northeastern University's Center for Complex Networks Research.

Due to the laid-back setting, the guests are able to tell stories that span their careers, peppered with interesting bits of trivia. For instance, as West discusses his research on how metabolism scales with an organism's body mass, he notes that life is often marveled at for its diversity, but no less intriguing is how the characteristics of all known life forms follow some simple physical and mathematical laws. Even the arrangement of trees in a forest follows a formula, despite looking random, he explains. As the shows are not pressed for time or commercial breaks, the guests are allowed to take their time while talking without being cut short by frequent interruptions or confrontational questions.

"Guests are not asked to simplify or condense their narratives," according to the "Cambridge Nights" philosophy. "We invite them because we want to hear what they have to say, and we want to give them the time to say it comfortably. There are many high-speed formats out there. 'Cambridge Nights' is an alternative where thoughts can be developed and reflected upon without the need to rush."

For these reasons, the researchers who have appeared on the show so far have given positive feedback about the new outlet. "The guests have loved the format," Hidalgo said. "Scientists tend to be long-winded, since they have a lot to say and are careful about making distinctions. An open and relaxed format has suited them well and they have been very happy."

While the rest of the episodes of the first season continue to be released through the end of November, the show is currently preparing for the next season. "We are also getting ready to film season two," Hidalgo said. "The plan is to film during the winter and release again next fall. We are shooting for 8 to 10 episodes for season two.
Currently, our plan is to continue doing this yearly."

More information: http://cambridgenights.media.mit.edu

Researchers find new source for cold ocean water abyssal layer

(Phys.org)—An international team of ocean researchers has found a fourth source of Antarctic bottom water (AABW)—the very cold, highly saline layer of water that lies at the bottom of the ocean. In their paper published in the journal Nature Geoscience, the team describes how they discovered the site in the Cape Darnley polynya.

Intense sea-ice production in the Cape Darnley polynya (CDP), revealed from satellite data. Credit: Nature Geoscience (2013) doi:10.1038/ngeo1738

AABW is important because as it moves north from its source, it creates ocean currents that have a major impact on global climate. Until now, however, scientists have only been able to identify three major sources—not nearly enough to explain the amount of AABW seen in the ocean. In this new research, the team suspected that a different type of source might be at play—one that came about in a polynya (an area of open water that can't freeze over due to rapid wind and water movement), rather than directly offshore of shelf ice. To find out, they employed traditional undersea sensors and, less traditionally, sensors attached to the heads of elephant seals.

Ocean currents result from AABW due to the way it's formed. When seawater freezes, much of the salt in the ice is pushed back into the water, giving it a very high salinity—and because it's also very cold, it tends to sink. As it hits the bottom, it joins other cold water that slowly seeps toward the edge of the continental shelf, where it falls over into the abyss, rather like an under-the-ocean waterfall. That falling water is what generates the currents that flow north.

Researchers had suspected for years that a source for AABW existed somewhere near what they call the Weddell Gyre, but had not been able to find it. In this new research, the team used satellite data to pick a likely polynya, and settled on Cape Darnley. There they sank sensors and studied data supplied by the elephant seal sensors. It was the data from the seals, the researchers report, gathered in areas where they swam—at times as deep as 1,800 meters—that revealed the layer of cold, dense water the researchers were looking for: the fourth AABW source.

After analyzing the Cape Darnley polynya source, the researchers have concluded that it is likely responsible for 6 to 13 percent of circumpolar AABW totals, which suggests, they say, that other similar sources are out there still waiting to be found.
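As a quick back-of-the-envelope check of how those figures fit together (illustrative arithmetic only, not a calculation from the paper), dividing the estimated Cape Darnley transport by the share it is said to represent gives the implied circumpolar AABW total:

```python
# Back-of-envelope consistency check of the reported figures (illustrative only).
# 1 sverdrup (Sv) = 1e6 cubic metres of water per second.

cape_darnley_sv = (0.3, 0.7)   # reported AABW production at Cape Darnley, in Sv
share = (0.06, 0.13)           # reported fraction of the circumpolar total

implied_low = cape_darnley_sv[0] / share[0]    # 0.3 Sv / 6%  -> ~5.0 Sv
implied_high = cape_darnley_sv[1] / share[1]   # 0.7 Sv / 13% -> ~5.4 Sv
print(f"Implied circumpolar AABW production: {implied_low:.1f}-{implied_high:.1f} Sv")
```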
More information: Antarctic Bottom Water production by intense sea-ice formation in the Cape Darnley polynya, Nature Geoscience (2013) doi:10.1038/ngeo1738

Abstract: The formation of Antarctic Bottom Water—the cold, dense water that occupies the abyssal layer of the global ocean—is a key process in global ocean circulation. This water mass is formed as dense shelf water sinks to depth. Three regions around Antarctica where this process takes place have been previously documented. The presence of another source has been identified in hydrographic and tracer data, although the site of formation is not well constrained. Here we document the formation of dense shelf water in the Cape Darnley polynya (65°–69° E) and its subsequent transformation into bottom water using data from moorings and instrumented elephant seals (Mirounga leonina). Unlike the previously identified sources of Antarctic Bottom Water, which require the presence of an ice shelf or a large storage volume, bottom water production at the Cape Darnley polynya is driven primarily by the flux of salt released by sea-ice formation. We estimate that about 0.3–0.7 × 10⁶ m³ s⁻¹ of dense shelf water produced by the Cape Darnley polynya is transformed into Antarctic Bottom Water. The transformation of this water mass, which we term Cape Darnley Bottom Water, accounts for 6–13% of the circumpolar total.

Optimal stem cell reprogramming through sequential protocols

(Phys.org) —Gaining control of the ability of mature tissues to generate stem cells is the central medical challenge of our day. From taming cancer to providing compatible cell banks for replacement organs, knowledge of how cells interconvert between stable points on the complex cellulo-genetic landscape will deliver to the doctor the same mastery the programmer now holds over bits. While researchers often speak of "reprogramming" cells, most recipes today consist only of a crude and partial ingredient list, with little consideration of sequence, quantity or prior state. We recently took stock of the latest in stem cell technology and reviewed the four major factors used to revert adult cells back into omnipotent progenitors. We also just reported on further attempts to rigorously define the appropriate levels of factors to supply. Researchers from China have now reported that stem cell generation can be regulated by the precise temporal expression of these factors. Publishing in the journal Nature Cell Biology, they show that the efficiency and yield of stem cells can be optimized by controlling the sequencing of the transforming factors, and furthermore provide a theoretical exploration of the possible mechanisms going on behind the scenes.

Stem Cell Induction. Credit: stemcellschool.org

Back in 2006, researchers in Japan were able to effectively generate stem cells from skin cells without the need for oocytes (eggs) or other embryonic cells. By expressing the four transcription factors Oct4, Sox2, Klf4 and c-Myc, they could generate cells that, at least in theory, could turn into any other kind of cell. Unfortunately, not only was the overall yield of viable stem cells low, but the "rejuvenated" cells that could be extracted were generally unsuitable for subsequent patient treatment. The problem is that even when transplanting stem cells obtained from a person's own skin, immune rejection or tumor formation still occurred. This disharmony results from the fact that the immune system, while trained over a lifetime, can be confused by the "dissonance" in expression of youthful protein isoforms, particularly when encountered alongside those of more adult cells.

By inducing the expression of all four transformation factors at different times, the Chinese researchers eventually hit upon the optimal sequence. In a nutshell, by introducing a combination of Oct4 and Klf4 first, followed later by c-Myc, and then finally Sox2, the maximal yield could be obtained. They were surprised to find that this sequential protocol activated an epithelial-to-mesenchymal transition (EMT), which was then followed by a delayed reverse (MET) transition. It had been known for some time that in mouse fibroblasts, reprogramming to the pluripotent stem cell state begins by going through a MET conversion. Therefore, finding upregulation of the proteins SLUG and N-cadherin, factors generally associated with an EMT, was not anticipated.

In embryogenesis, cells interconvert between epithelial and mesenchymal phenotypes as they lay out the basic body plan. In the epithelial state, cells possess inherent polarity and show preferential adhesion, while in the mesenchymal state, these properties are lost as cells become migratory and invasive. This game of run-the-bases is recapitulated as more option-constrained cells later rough out the critical form of each organ. Each time cells alight in either camp, they express part of an overlapping subset of various state indicators, but their genetic arrangements are never quite the same.

The authors looked at a few additional factors that might help explain the appearance of a brief mesenchymal state in the sequential procedure. By applying TGF-beta to the simultaneous factor expression protocol early on, they were able to mimic the appearance of the mesenchymal state. This was found to be accompanied by an enhancement in the reprogramming yield, but the effect disappeared when the TGF-beta was applied using a 12-day treatment protocol.

TGF-beta is a whole new can of worms since it is expressed by many cells and does many things, even opposite things in different cells. It is traditionally termed a cytokine, although the distinction between that and a hormone is becoming increasingly blurred. Generally, hormones are active at nanomolar concentrations in the blood and vary by less than an order of magnitude. Cytokines, by contrast, often circulate at less than picomolar concentration and ramp up 1,000-fold when called upon during injury or infection.

Capturing the essential behavior of the thousands of downstream regulators, or even just four transcription factors, is just not realistic with a flowchart or state diagram.
Beyond a certain level of complexity, if the transition probabilities are too low, or the branch points and exceptions too numerous, new constraints are needed before any sensible algorithmic description might be attempted. In the absence of any such obvious constraints, the authors hypothesized that while multiple pathways exist for conversion between epithelial and mesenchymal states, some are shorter or easier to access than others. They believe that their sequential recipe tips the balance toward a brief mesenchymal state, which ultimately leads to a better stem cell yield.

More information: Sequential introduction of reprogramming factors reveals a time-sensitive requirement for individual factors and a sequential EMT–MET mechanism for optimal reprogramming, Nature Cell Biology (2013) doi:10.1038/ncb2765

Abstract: Present practices for reprogramming somatic cells to induced pluripotent stem cells involve simultaneous introduction of reprogramming factors. Here we report that a sequential introduction protocol (Oct4–Klf4 first, then c-Myc and finally Sox2) outperforms the simultaneous one. Surprisingly, the sequential protocol activates an early epithelial-to-mesenchymal transition (EMT) as indicated by the upregulation of Slug and N-cadherin followed by a delayed mesenchymal-to-epithelial transition (MET). An early EMT induced by 1.5-day TGF-β treatment enhances reprogramming with the simultaneous protocol, whereas 12-day treatment blocks reprogramming. Consistent results were obtained when the TGF-β antagonist Repsox was applied in the sequential protocol. These results reveal a time-sensitive role of individual factors for optimal reprogramming and a sequential EMT–MET mechanism at the start of reprogramming. Our studies provide a rationale for further optimizing reprogramming, and introduce the concept of a sequential EMT–MET mechanism for cell fate decision that should be investigated further in other systems, both in vitro and in vivo.

Naturally-occurring protein enables slower-melting ice cream

(Phys.org)—Scientists have developed a slower-melting ice cream—consider the advantages the next time a hot summer day turns your child's cone with its dream-like mound of orange, vanilla and lemon swirls with chocolate flecks into multi-colored sludge riddled with fly-like flecks.

The BslA protein binds together the air, fat and water in ice cream, creating a super-smooth consistency that should keep it from melting as quickly.

The answer lies in a naturally occurring protein explored by researchers at the universities of Dundee and Edinburgh. The protein's plus points are not only slow melting but also a smoother texture in ice cream, with no gritty ice crystals forming. Another plus will interest weight-watchers: the development could allow for products to be made with lower levels of saturated fat and fewer calories. For example, the protein could be used in chocolate mousse and mayonnaise to help reduce the calories.

The protein in focus is BslA. In making ice cream, it works by binding together the air, fat and water. Because of BslA, the team replaced some of the fat molecules that are used to stabilize oil and water mixtures, cutting the fat content. The protein was developed with support from the Engineering and Physical Sciences Research Council and the Biotechnology and Biological Sciences Research Council, said a press item from the University of Dundee.

Yes, the ice cream will melt eventually, but the University of Edinburgh's Prof. Cait MacPhee said in a BBC News report on Monday that hopefully, by keeping it stable for longer, "it will stop the drips." She is from the University of Edinburgh's school of physics and astronomy, and she led the project. She told BBC Radio 5 live: "This is a natural protein already in the food chain. It's already used to ferment some foods so it's a natural product rather than being a 'Frankenstein' food." The team estimated such a slow-melting ice cream could be available in three to five years.

Radio New Zealand News said the protein occurs in friendly bacteria and works by adhering to fat droplets and air bubbles, making them more stable in a mixture.

Matthew Humphries in Geek.com spelled out what this could mean if the research were to reach the manufacturing stage: "For manufacturers it's a fantastic find. It can be added to ice cream without altering the taste or mouth feel, it also means the finished ice cream can be stored at slightly higher (yet still very cold) temperatures, which will save on energy costs. The protein can also reduce the level of saturated fat required. As long as the taste isn't affected by that, it means the ice cream you love will contain less calories."

The ice cream news is yet another example of why researchers are keenly interested in the behavior of proteins—as MacPhee said in discussing her research interests, "the molecules that are responsible for the vast majority of functions in living organisms." She noted that self-assembly of proteins underpins the texture of foodstuffs including egg, meat and milk products. "It is understanding this process of self-assembly – to prevent or reverse disease, or to drive the development of new materials and foodstuffs – that forms the focus of my research efforts," she stated.

More information: www.dundee.ac.uk/news/2015/slo … o-new-ingredient.php

The path to perfection: Quantum dots in electrically-controlled cavities yield bright, nearly identical photons

Optical quantum technologies are based on the interactions of atoms and photons at the single-particle level, and so require sources of single photons that are highly indistinguishable – that is, as identical as possible. Current single-photon sources using semiconductor quantum dots inserted into photonic structures produce photons that are ultrabright but have limited indistinguishability due to charge noise, which results in a fluctuating electric field. Conversely, parametric down-conversion sources yield photons that, while highly indistinguishable, have very low brightness. Recently, however, scientists at CNRS – Université Paris-Saclay, Marcoussis, France; Université Paris Diderot, Paris, France; University of Queensland, Brisbane, Australia; and Université Grenoble Alpes, CNRS, Institut Néel, Grenoble, France, have developed devices made of quantum dots in electrically-controlled cavities that provide large numbers of highly indistinguishable photons with strongly reduced charge noise and that are 20 times brighter than any source of equal quality. The researchers state that by demonstrating efficient generation of a pure single photon with near-unity indistinguishability, their novel approach promises significant advances in optical quantum technology complexity and scalability.

Figure 1. a, Schematic of the sources: a single semiconductor quantum dot, represented by a red dot, is positioned within 50 nm from the center of the cavity, which consists of a 3 µm pillar connected to a circular frame through 1.3 µm wide waveguides. The top electrical contact is defined on a large mesa adjacent to the circular frame. By applying a bias to the cavity, the wavelength of the emitted photons can be tuned and the charge noise strongly reduced. b, Emission map of the device: the strong signal coming from the quantum dot located at the center of the cavity demonstrates the precise positioning of the quantum dot in the cavity and the enhanced collection efficiency obtained by accelerating the quantum dot spontaneous emission. Credit: Courtesy of Dr. Pascale Senellart.

Figure 2. a, Photon correlation histogram measuring the indistinguishability of photons successively emitted by one of the devices. The area of the peak at zero delay allows measuring the photon indistinguishability: it should be zero for fully indistinguishable photons. We test here two configurations: the coalescence of photons with orthogonal polarization (fully distinguishable – blue curve) and the coalescence of photons with the same polarization (red curve). The disappearance of the zero-delay peak in the latter case shows the near-unity indistinguishability of the emitted photons. b, Graph summarizing all the source characteristics as a function of excitation power: brightness (probability of collecting a photon per pulse – red – right scale), autocorrelation function g(2)(0) (characterizing the probability of emitting more than one photon – blue – left bottom scale), indistinguishability M (purple – left top scale). Credit: Courtesy of Dr. Pascale Senellart.

Dr. Pascale Senellart and Phys.org discussed the paper, Near-optimal single-photon sources in the solid state, that she and her colleagues published in Nature Photonics, which reports the design and fabrication of the first optoelectronic devices made of quantum dots in electrically controlled cavities that provide a bright source generating pure single photons with near-unity indistinguishability. "The ideal single photon source is a device that produces light pulses, each of them containing exactly one, and no more than one, photon. Moreover, all the photons should be identical in spatial shape, wavelength, polarization, and a spectrum that is the Fourier transform of its temporal profile," Senellart tells Phys.org. "As a result, to obtain near optimal single photon sources in an optoelectronic device, we had to solve many scientific and technological challenges, leading to an achievement that is the result of more than seven years of research."

While quantum dots can be considered artificial atoms that therefore emit photons one by one, she explains, due to the high refractive index of any semiconductor device, most single photons emitted by the quantum dot do not exit the semiconductor and therefore cannot be used. "We solved this problem by coupling the quantum dot to a microcavity in order to engineer the electromagnetic field around the emitter and force it to emit in a well-defined mode of the optical field," Senellart points out.
"To do so, we need to position the quantum dot with nanometer-scale accuracy in the microcavity." Senellart notes that this was the first challenge the researchers had to address, since quantum dots grow at random spatial positions. "Our team solved this issue in 2008 [1] by proposing a new technology, in-situ lithography, which allows measuring the quantum dot position optically and drawing a pillar cavity around it. With this technique, we can position a single quantum dot with 50 nm accuracy at the center of a micron-sized pillar." In these cavities, two distributed Bragg reflectors confine the optical field in the vertical direction, and the contrast of the index of refraction between the air and the semiconductor provides the lateral confinement of the light. "Prior to this technology, the fabrication yield of quantum dot cavity devices was in the 10⁻⁴ range – but today it is larger than 50%." The scientists used this technique to demonstrate the fabrication of bright single-photon sources in 2013 [2], showing that the device can generate light pulses containing a single photon with a probability of 80% – but while all photons had the same spatial shape and wavelength, they were not perfectly identical.
"Indeed, for the photons to be fully indistinguishable, the emitter should be highly isolated from any source of decoherence induced by the solid-state environment. However, our study showed that collisions of the carriers with phonons and fluctuation of charges around the quantum dot were the main limitations." To solve this problem, the scientists added an electrical control to the device [3], such that the application of an electric field stabilized the charges around the quantum dot by sweeping out any free charge. This in turn removed the noise. Moreover, she adds, this electrical control allows tuning the quantum dot wavelength – a process that was previously done by increasing temperature at the expense of increasing vibration. "I'd like to underline here that the technology described above is unique worldwide," Senellart stresses. "Our group is the only one with such full control of all of the quantum dot properties. That is, we control emission wavelength, emission lifetime and coupling to the environment, all in a fully deterministic and scalable way."

Specifically, implementing control of the charge environment for quantum dots in connected pillar cavities, and applying an electric field on a cavity structure optimally coupled to a quantum dot, required significant attention. "We had strong indications back in 2013 that the indistinguishability of our photons was limited by some charge fluctuations around the quantum dot: Even in the highest-quality semiconductors, charges bound to defects fluctuate and create a fluctuating electric field. In the meantime, several colleagues were observing very low charge noise in structures where an electric field was applied to the quantum dot – but this was not combined with a cavity structure." The challenge, Senellart explains, was to define a metallic contact on a microcavity (which is typically a cylinder with a diameter of 2-3 microns) without covering the pillar's top surface. "We solved this problem by proposing a new kind of cavity – that is, we showed that we can actually connect the cylinder to a bigger frame using some one-dimensional bridges without modifying too much the confinement of the optical field." This geometry, which the researchers call connected pillars, allows having the same optical confinement as an isolated pillar while defining the metallic contact far from the pillar itself. Senellart says that the connected pillars geometry was the key to both controlling the quantum dot wavelength and efficiently collecting its emission [3].

In demonstrating the efficient generation of a pure single photon with near-unity indistinguishability, Senellart continues, the researchers had one last step – combining high photon extraction efficiency and perfect indistinguishability – which they did by implementing a resonant excitation scheme of the quantum dot. "In 2013, Prof. Chao-Yang Lu's team in Hefei, China showed that one could obtain photons with 96% indistinguishability by exciting the quantum dot state in a strictly resonant way [4]. Their result was beautiful, but again, not combined with an efficient extraction of the photons. The experimental challenge here is to suppress the scattered light from the laser and collect only the single photons radiated by the quantum dot."

Senellart adds that while removing scattered photons when transmitting light in processed microstructures is typically complicated, in their case this step was straightforward. "Because the quantum dot is inserted in a cavity, the probability of the incident laser light to interact with the quantum dot is actually very high. It turns out that we send only a few photons – that is, less than 10 – on the device to have the quantum dot emitting one photon. This beautiful efficiency, also demonstrated in the excitation process, which we report in another paper [5], made this step quite easy."

The devices reported in the paper have a number of implications for future technologies, one being the ability to achieve strongly reduced charge noise by applying an electrical bias. "Charge noise has been extensively investigated in quantum dot structures," Senellart says, "especially by Richard Warburton's group." Warburton and his team demonstrated that in the best quantum dot samples, the charge noise could take place on a time scale of a few microseconds [6] – which is actually very good, since the quantum dot emission lifetime is around 1 nanosecond. However, this was no longer the case in etched structures, where a strong charge noise is always measured on a very short time scale – less than 1 ns – that prevents the photons from being indistinguishable. "I think the idea we had – that this problem would be solved by applying an electric field – was an important one," Senellart notes. "The time scale of this charge noise does not only determine the degree of indistinguishability of the photons, it also determines how many indistinguishable photons one can generate with the same device. Therefore, this number will determine the complexity of any quantum computation or simulation scheme one can implement." Senellart adds that in a follow-up study [7] the scientists generated long streams that can contain more than 200 photons indistinguishable by more than 88%.

In addressing how these de novo devices may lead to new levels of complexity and scalability in optical quantum technologies, Senellart first discusses the historical sources used to develop optical quantum technologies. She makes the point that all previous implementations of optical quantum simulation or computing have been implemented using spontaneous parametric down-conversion (SPDC) sources, in which pairs of photons are generated by the nonlinear interaction of a laser on a nonlinear crystal, wherein one photon of the pair is detected to announce the presence of the other photon.
This so-called heralded source can present strongly indistinguishable photons, but only at the cost of extremely low brightness. "Indeed, the difficulty here is that one pulse does not contain a single pair only, but some of the time several pairs," Senellart explains. "To reduce the probability of having several pairs generated that would degrade the fidelity of a quantum simulation, calculation or the security of a quantum communication, the sources are strongly attenuated, to the point where the probability of having one pair in a pulse is below 1%. Nevertheless, with these sources, the quantum optics community has demonstrated many beautiful proofs of concept of optical quantum technologies, including long-distance teleportation, quantum computing of simple chemical or physical systems, and quantum simulations like BosonSampling." (A BosonSampling device is a quantum machine expected to perform tasks intractable for a classical computer, yet requiring minimal non-classical resources compared to full-scale quantum computers.) "Yet, the low efficiency of these sources limits the manipulation to low photon numbers: It takes typically hundreds of hours to manipulate three photons, and the measurement time increases exponentially with the number of photons. Obviously, with the possibility to generate many more indistinguishable photons with an efficiency more than one order of magnitude greater than SPDC sources, our devices have the potential to bring optical quantum technologies to a whole new level."
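To see why the measurement time "increases exponentially with the number of photons," here is an illustrative scaling sketch (the efficiency advantage below is an assumed round number in the range quoted above, not a figure from the paper): if each of n photons is delivered with probability p per pulse, an n-photon event occurs with probability p raised to the n-th power, so a source that is k times more efficient per photon speeds an n-photon measurement up by a factor of k to the n.

```python
# Illustrative n-photon rate scaling (assumed round numbers, not figures from the paper).
# If each of n photons arrives with per-pulse probability p, the n-fold coincidence
# probability is p**n, so a source k times more efficient per photon is k**n times
# faster for an n-photon experiment.

def speedup(k: float, n: int) -> float:
    """Relative n-photon rate gain for a source k times more efficient per photon."""
    return k ** n

K = 15  # assumed per-photon efficiency advantage, roughly "one order of magnitude or more"
for n in (2, 3, 5):
    print(f"n = {n}: a {K}x more efficient source is ~{speedup(K, n):,.0f}x faster")
```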
Other potential applications of the newly demonstrated devices will focus on meeting near-future challenges in optical quantum technologies, including scalability of photonic quantum computers and intermediate quantum computing tasks. "The sources presented here can be used immediately to implement quantum computing and intermediate quantum computing tasks. Actually, very recently – in the first demonstration of the superiority of our new single photon sources – our colleagues in Brisbane made use of such bright indistinguishable quantum dot-based single photon sources to demonstrate a three-photon BosonSampling experiment [8], where the solid-state multi-photon source was one to two orders of magnitude more efficient than down-conversion sources, allowing the experiment to be completed faster than those performed with SPDC sources. Moreover, this is a first step; we'll progressively increase the number of manipulated photons, in both quantum simulation and quantum computing tasks."

Another target area is the quantum communications transfer rate. "Such bright single photon sources could also drastically change the rate of quantum communication protocols that are currently using attenuated laser sources or SPDC sources. Yet, right now, our sources operate at 930 nm when 1.3 µm or 1.55 µm sources are needed for long distance communications. Our technique can be transferred to the 1.3 µm range, a range at which single photon emission has been successfully demonstrated – in particular by the Toshiba research group – slightly changing the quantum dot material. Reaching the 1.55 µm range will be more challenging using quantum dots, as it appears that the single photon emission is difficult to obtain at this wavelength. Nevertheless, there's a very promising alternative possibility: the use of a 900 nm bright source, like the one we report here, to perform quantum frequency conversion of the single photons. Such efficient frequency conversion of single photons has recently been demonstrated, for example, in the lab of Prof. Yoshihisa Yamamoto at Stanford [9]."

Regarding future research, Senellart says, "There are many things to do from this point. On the technology side, we will try to improve our devices by further increasing the source brightness. For that, a new excitation scheme will be implemented to excite the device from the side, as was done by Prof. Valia Voliotis and her colleagues on the Nanostructures and Quantum Systems team at Pierre and Marie Curie University in Paris and Prof. Glenn Solomon's group at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. Applying this technique to our cavities should allow gaining another factor of four on source brightness. In addition, operating at another wavelength would be another important feature for our devices, since as discussed above, this would allow using the source for quantum telecommunication. For example, a shorter wavelength, in the visible/near-infrared range, would open new possibilities to interconnect various quantum systems, including ions or atoms through their interaction with photons, as well as applications in quantum imaging and related fields."

The researchers also want to profit from the full potential of these sources and head to high photon-number manipulation in, for instance, quantum simulation schemes. "We're aiming at performing BosonSampling measurements with 20-30 photons, with the objective of testing the extended Church–Turing thesis and proving the superiority of a quantum computer over a classical one." The original Church–Turing thesis, based on investigations of Alonzo Church and Alan Turing into computable functions, states that, ignoring resource limitations, a function on the natural numbers is computable by a human being following an algorithm if and only if it is computable by a Turing machine.

Another promising impact on future optical quantum technologies is the generation of entangled photon pairs. "A quantum dot can also generate entangled photon pairs, and in 2010 we demonstrated that we could use the in situ lithography to obtain the brightest source of entangled photon pairs [10]. That being said, photon indistinguishability needs to be combined with high pair brightness – and this is the next challenge we plan to tackle. Such a device would play an important role in developing quantum relays for long distance communication and quantum computing tasks."

Senellart tells Phys.org that other areas of research might well benefit from their findings, in that devices similar to the one the scientists developed to fabricate single photon sources could also provide nonlinearities at the low photon count scale. This capability could in turn allow the implementation of deterministic quantum gates, a new optical quantum computing paradigm in which reversible quantum logic gates – for example, Toffoli or CNOT (controlled NOT) gates – can simulate irreversible classical logic gates, thereby allowing quantum computers to perform any computation which can be performed by a classical deterministic computer. "Single photons can also be used to probe the mechanical modes of a mechanical resonator and develop quantum sensing with macroscopic objects. Other applications," she concludes, "could benefit from the possibility to have very efficient single photon sources, such as an imaging system with single photon sources that could allow dramatically increased imaging sensitivity.
Such a technique could have applications in biology, where the lower the photon flux, the better for exploring in vivo samples."

More information: Near-optimal single-photon sources in the solid state, Nature Photonics 10, 340–345 (2016), doi:10.1038/nphoton.2016.23

Related:
1. Controlled light–matter coupling for a single quantum dot embedded in a pillar microcavity using far-field optical lithography, Physical Review Letters 101, 267404 (2008), doi:10.1103/PhysRevLett.101.267404
2. Bright solid-state sources of indistinguishable single photons, Nature Communications 4, 1425 (2013), doi:10.1038/ncomms2434
3. Deterministic and electrically tunable bright single-photon source, Nature Communications 5, 3240 (2014), doi:10.1038/ncomms4240
4. On-demand semiconductor single-photon source with near-unity indistinguishability, Nature Nanotechnology 8, 213–217 (2013), doi:10.1038/nnano.2012.262
5. Coherent control of a solid-state quantum bit with few-photon pulses, arXiv:1512.04725 [quant-ph]
6. Charge noise and spin noise in a semiconductor quantum device, Nature Physics 9, 570–575 (2013), doi:10.1038/nphys2688
7. Scalable performance in solid-state single-photon sources, Optica 3, 433-440 (2016), doi:10.1364/OPTICA.3.000433
8. BosonSampling with single-photon Fock states from a bright solid-state source, arXiv:1603.00054 [quant-ph]
9. Downconversion quantum interface for a single quantum dot spin and 1550-nm single-photon channel, Optics Express Vol. 20, Issue 25, pp. 27510-27519 (2012), doi:10.1364/OE.20.027510
10. Ultrabright source of entangled photon pairs, Nature 466, 217–220 (08 July 2010), doi:10.1038/nature09148

More precise measurements of phosphorene suggest it has advantages over other 2-D materials

(Phys.org)—A large team of researchers from China, the U.S. and Japan has developed a more precise means for measuring the various band gaps in layered phosphorene, and in so doing, has found that it possesses advantages over other 2-D materials. In their paper published in the journal Nature Nanotechnology, the group describes their technique and what they observed during their measurements.

Direct observation of the layer-dependent electronic structure in phosphorene. a, The puckered honeycomb lattice of monolayer phosphorene; x and y denote the armchair and zigzag crystal orientations, respectively. b,c, Optical images of few-layer phosphorene samples. The images were recorded with a CCD camera attached to an optical microscope. The number of layers (indicated in the figure) is determined by the optical contrast in the red channel of the CCD image. d, Optical contrast profile in the red channel of the CCD images along the line cuts marked in b,c. Each additional layer increases the contrast by around 7%, up to tetralayer, as guided with the dashed lines. Credit: Likai Li et al. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.171

Scientists have been studying phosphorene (single-layered black phosphorus) for some time because they believe it might be useful for creating new or better types of 2-D optoelectronic devices, similar in some respects to research efforts looking into graphene. Though phosphorus itself was first discovered in 1669, phosphorene was not actually isolated until 2014. Since that time, researchers have attempted to study the band gaps (the energy differences between the top of the valence band and the bottom of the conduction band) that exist under various layering conditions, because each may represent a unique opportunity for using the material.

Prior efforts to find the band gaps relied on fluorescence spectroscopy, but that technique has not offered the accuracy needed for building devices. In this new effort, the researchers took a new approach called optical absorption spectroscopy, which works by measuring the absorption of radiation as it interacts with a sample. By conducting multiple experiments, the researchers found that the electronic structure of the material varied significantly when looking at materials created from a range of layers, which, they noted, was consistent with prior theories.

In using the new technique, the researchers found that different band gaps aligned well with different applications: 1.15 eV, for example, would match well with the band gap of silicon, and 0.83 eV could be used in optoelectronics because of its similarity to a telecom photon wavelength. Also, they noted that the 0.35 eV band gap could prove useful in creating infrared devices.
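As a quick sanity check of those spectral ranges (an illustrative conversion, not a calculation from the paper), a band gap E in electronvolts corresponds to a photon wavelength of roughly 1.24 µm·eV divided by E:

```python
# Convert band-gap energies (eV) to the corresponding photon wavelengths (micrometres).
# Rough check of the quoted spectral ranges; illustrative only, not from the paper.

HC_EV_UM = 1.2398  # h*c expressed in eV·µm

for gap_ev, note in [(1.15, "close to silicon's ~1.12 eV gap"),
                     (0.83, "near-infrared, close to the telecom bands"),
                     (0.35, "mid-infrared")]:
    wavelength_um = HC_EV_UM / gap_ev
    print(f"{gap_ev:.2f} eV  ->  {wavelength_um:.2f} um  ({note})")
```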
Overall, they found that the structure of layered phosphorene gives it advantages over other 2-D materials—including, in some cases, graphene—for creating new devices. The researchers next plan to actually use their results to create various optoelectronic devices, though they acknowledge that there are still some challenges involved, such as figuring out a way to deal with the tiny flakes and the instability involved in trying to use the material.

More information: Likai Li et al. Direct observation of the layer-dependent electronic structure in phosphorene, Nature Nanotechnology (2016). DOI: 10.1038/nnano.2016.171

Abstract: Phosphorene, a single atomic layer of black phosphorus, has recently emerged as a new two-dimensional (2D) material that holds promise for electronic and photonic technologies. Here we experimentally demonstrate that the electronic structure of few-layer phosphorene varies significantly with the number of layers, in good agreement with theoretical predictions. The interband optical transitions cover a wide, technologically important spectral range from the visible to the mid-infrared. In addition, we observe strong photoluminescence in few-layer phosphorene at energies that closely match the absorption edge, indicating that they are direct bandgap semiconductors. The strongly layer-dependent electronic structure of phosphorene, in combination with its high electrical mobility, gives it distinct advantages over other 2D materials in electronic and opto-electronic applications.

Field study suggests wealthy less willing to tax rich when poor people are around

(Phys.org)—A study conducted by a researcher with Harvard University suggests that wealthy people are less likely to support income redistribution through a tax on the very rich after having recently been exposed to an obviously poor person. In her paper published in Proceedings of the National Academy of Sciences, Melissa Sands describes a study she carried out using volunteers in wealthy neighborhoods, what she found, and her opinions regarding the impact it could be having on domestic policy decisions.

Credit: George Hodan/public domain

Most people are aware of the growing divide between the very wealthy (the so-called 1 percent) and everyone else in the United States. The issue has led some to call for income redistribution by forcing the very rich to pay more taxes, with the extra money going to help the poor. For such actions to actually happen, ordinary people would have to support such an initiative led by politicians. To learn more about how people might react to such an initiative in the form of a petition in a public place, Sands enlisted the assistance of several volunteers.

The study consisted of having male volunteers (some white, some black) pose as either a reasonably affluent person or as someone obviously very poor. The volunteers were stationed in affluent areas in places where affluent people would have to walk past them to reach their destination; just before arriving, passersby would be asked by another volunteer dressed as an affluent person to sign one of two petitions. One of the petitions supported a way to reduce the use of plastic bags (the control), while the other sought support for a 4 percent income tax increase for anyone making more than a million dollars a year. The idea was to see if people felt differently about signing a petition to tax millionaires after exposure to a rich or poor person.

The results, drawn from 2,591 solicitations, showed that, contrary to what might seem logical to some, affluent people were less likely to support taxing millionaires after having encountered a poor person than if they had just seen someone more affluent—those surveyed were approximately twice as likely to sign the tax petition after seeing an affluent man than after seeing a poor white man. Interestingly, they were less affected by the sight of a poor black man. Sands suggests the sight of a poor white man may have caused the affluent passersby to be more judgmental, due to a feeling that such a person should be doing better without assistance.
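To make the "approximately twice as likely" comparison concrete, the sketch below shows how signing rates in two exposure conditions can be compared with a standard two-proportion z-test. The counts used here are invented placeholders purely for illustration; the actual condition-level numbers are reported in Sands' paper.

```python
# Hypothetical two-proportion comparison (placeholder counts, NOT the study's data).
from math import sqrt

def signing_rate_comparison(signed_a, total_a, signed_b, total_b):
    """Compare petition-signing rates between two exposure conditions with a z-test."""
    p_a, p_b = signed_a / total_a, signed_b / total_b
    pooled = (signed_a + signed_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return p_a, p_b, (p_a - p_b) / se

# e.g. affluent-confederate condition vs poor-white-confederate condition (made-up counts)
p_a, p_b, z = signing_rate_comparison(signed_a=120, total_a=400, signed_b=60, total_b=400)
print(f"rates: {p_a:.0%} vs {p_b:.0%}, ratio ~{p_a / p_b:.1f}x, z = {z:.2f}")
```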
More information: Melissa L. Sands. Exposure to inequality affects support for redistribution, Proceedings of the National Academy of Sciences (2017). DOI: 10.1073/pnas.1615010113

Abstract: The distribution of wealth in the United States and countries around the world is highly skewed. How does visible economic inequality affect well-off individuals' support for redistribution? Using a placebo-controlled field experiment, I randomize the presence of poverty-stricken people in public spaces frequented by the affluent. Passersby were asked to sign a petition calling for greater redistribution through a "millionaire's tax." Results from 2,591 solicitations show that in a real-world setting, exposure to inequality decreases affluent individuals' willingness to redistribute. The finding that exposure to inequality begets inequality has fundamental implications for policymakers and informs our understanding of the effects of poverty, inequality, and economic segregation. Confederate race and socioeconomic status, both of which were randomized, are shown to interact such that treatment effects vary according to the race, as well as gender, of the subject.

Genetic study of 15th century samples shows adaptive changes in bacteria that cause relapsing fever

A team of researchers with members from the University of Oslo and the Norwegian Institute for Cultural Heritage Research has conducted a genetic analysis of the bacteria that cause relapsing fever, obtained from 15th century skeletons in Norway. In their paper published in Proceedings of the National Academy of Sciences, the group describes their study and what they found when they compared their results with the genome of modern bacteria.

The skeleton (right) excavated at the St. Nikolay Church in Oslo, which carried sequences for the pathogen of louse-borne relapsing fever. Credit: PNAS

Relapsing fever, as its name implies, is an ailment whereby an infected person experiences a fever several times following a single infection. If untreated, it is fatal in roughly 10 to 40 percent of cases. It is transmitted by lice and ticks. Back in the 15th century, it was responsible for killing millions of people in Europe—today, it is mostly confined to several countries in Africa. In this new effort, the researchers conducted a genetic analysis of the bacteria that caused the disease 600 years ago and compared it to the bacteria causing the same disease today. Samples of Borrelia recurrentis were retrieved from skeletons excavated from St. Nikolai Cemetery in Old Oslo—they have been dated to between 1430 and 1465.

After generating a genetic assembly, the researchers compared it with genetic assemblies created by prior researchers studying the genome of the modern form of the bacteria. This allowed them to see how the bacteria have evolved over time.

The researchers report that they were able to sequence approximately 17 percent of the bacterial genome from skeletal bones, which they bolstered by sequencing samples taken from teeth. Using data from both, they were able to sequence approximately 98.2 percent of the main chromosome. Comparing the findings with modern strains, they found that the earlier strains lacked three variable short protein genes and one plasmid found in modern strains. Prior research has shown that the proteins act as proinflammatory agents for the bacteria, which, the researchers note, are key elements of the relapsing nature of the disease. They note further that such changes likely account for the differences in relapse rates—the disease tended to relapse just once or twice back in the 1400s, but is known to relapse up to five times in people afflicted today.

More information: Meriam Guellil et al. Genomic blueprint of a relapsing fever pathogen in 15th century Scandinavia, Proceedings of the National Academy of Sciences (2018). DOI: 10.1073/pnas.1807266115

Nanoscopic protein motion on a live cell membrane

Cellular functions are dictated by the intricate motion of proteins in membranes, spanning length scales from nanometers to micrometers and time-frames from microseconds to minutes. This rich parameter space is inaccessible to fluorescence microscopy, but it is within reach of interferometric scattering (iSCAT) particle tracking. The iSCAT technique is, however, so sensitive to single, unlabelled proteins that non-specific background scattering becomes a substantial challenge during cellular imaging.

In a recent study, Richard W. Taylor and colleagues at the interdisciplinary departments of Physics and Biology in Germany developed a new image-processing approach to overcome this difficulty. They used the method to track the transmembrane epidermal growth factor receptor (EGFR) with nanometer-scale precision in three dimensions (3-D), and the technique allowed imaging across microseconds to minutes. The scientists provided examples of nanoscale motion and confinement by imaging ubiquitous processes such as diffusion in plasma membranes, transport in filopodia and rotational motion during endocytosis. The results are published in Nature Photonics.

While steady progress in fluorescence microscopy has allowed scientists to monitor cellular events at the nanometer scale, a great deal still remains to be accomplished with advanced imaging systems. The challenge of fluorescence microscopy lies in the finite emission rate of a fluorescent source (a dye molecule or a semiconductor quantum dot): too few photons are emitted within a very short time-frame for effective or prolonged imaging. The central difficulty of scattering-based microscopy is that the signal of a nanoscopic probe competes against background noise at a low signal-to-noise ratio (SNR), limiting the achievable localization precision to a few nanometers in high-speed tracking experiments.
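To make the signal-to-noise argument above concrete, here is a minimal sketch of the standard interferometric-detection relation. It is a general illustration rather than code or numbers from the study: the function name and the assumed reference-relative scattering amplitude s_48nm are arbitrary placeholders.

```python
# Rough, illustrative sketch (not from the paper): in iSCAT the camera
# records |E_r + E_s|^2, so the information-carrying signal is the
# interference cross-term 2*|E_r||E_s|*cos(phi). Relative to the reference
# intensity, the contrast of a small sphere therefore scales with its
# scattering amplitude (~ d^3), not its scattered intensity (~ d^6),
# which is what keeps tiny probes detectable above the background.

import numpy as np

def iscat_contrast(d_nm, phi=0.0, s_48nm=1e-3):
    """Relative interferometric contrast of a sphere of diameter d_nm.

    s_48nm is an assumed scattered-field amplitude (relative to the
    reference field) for a 48 nm particle; only the d**3 scaling matters.
    """
    s = s_48nm * (d_nm / 48.0) ** 3      # scattered-field amplitude ~ d^3
    return 2.0 * s * np.cos(phi)         # interference (cross) term

for d in (48, 20, 10):
    print(f"{d:>2} nm probe: relative contrast ~ {iscat_contrast(d):.1e}")
```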
In the present work, Taylor et al. used iSCAT microscopy to track proteins in live cell membranes. The method can visualize probe-cell interactions to reveal the interplay between diffusion and local topology. During the experiments, the scientists used gold nanoparticles (GNPs) to label epidermal growth factor receptors (EGFRs) in HeLa cells. EGFRs are type I transmembrane proteins that sense and respond to extracellular signals, and their aberrant signaling is linked to a variety of diseases. Taylor et al. used the GNP-labelled protein as a 'nano-rover' to map the nano-topology of cellular features such as membrane terrains, filopodia and clathrin structures, and provided examples of subdiffusion and nanoscopic confinement of a protein in 3-D at high temporal resolution over long measurement times.

iSCAT microscopy on live cells: (a) Experimental arrangement of the iSCAT microscope for live-cell imaging. Cells are plated in a glass-bottomed dish under Leibowitz medium. A micropipette delivers the EGF–GNP probes directly onto the cell culture, where they specifically target the EGFR protein in the cell membrane. The bright-field illumination channel from above assists in inspecting the culture but is not required for iSCAT imaging. L1–L3, lenses; O1, ×100 objective; BS, 90:10 beam splitter; DM, 590 nm short-pass dichroic mirror. iSCAT imaging was performed with illumination intensities of 1–8 kW cm⁻², which are known to be viable for HeLa cells at the wavelength of interest. Inset, wavefronts of the fields contributing to the iSCAT signal. (b) A section of the HeLa cell membrane before labelling, viewed via reflection iSCAT. (c) iSCAT image of the cell membrane including a bound EGF–GNP probe. (d) The PSF extracted from c. Scale bars in b–d are 1 μm. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

In the experiments, Taylor et al. introduced the epidermal growth factor–gold nanoparticle (EGF–GNP) probes into the sample chamber of the microscope using a micropipette to label the EGFRs on HeLa cells, and verified that the probes stimulated the EGFRs. Previous studies had already indicated that probe size can influence rates of lipid diffusion in synthetic membranes, although it does not affect the mode of diffusion, and that in live cells molecular crowding is negligible for particles equal to or smaller than 50 nm. Taylor et al. verified these two points in the present work by comparing GNPs of 48 nm and 20 nm diameter. The scientists also conducted fluorescence and biochemical studies suggesting that the EGF-coated GNPs activated EGFR signaling much like freely available EGF, indicating that the label did not hinder biological function.

To overcome the background problem in molecular imaging, the scientists implemented a new algorithm that extracts the full iSCAT point spread function (iSCAT-PSF) directly from each frame. Since existing techniques cannot visualize features at high spatial and temporal resolution simultaneously, many details of intracellular activity remain a matter of debate; the new method by Taylor et al. revealed a wealth of dynamic heterogeneities in 3-D, shedding light on intracellular protein motion.

The scientists first quantitatively studied subdiffusion in the plasma membrane by considering a 2-D example of the EGFR journey on the membrane of a living HeLa cell. For this, they computed the mean square displacement (MSD) for the whole trajectory of motion, without having to make assumptions about the nature of diffusion or its geographic landscape. They gauged the occurrence of biological diffractive barriers and confinements by observing the degree of directional correlation between two vectorial steps across a time span.
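The following is a minimal sketch, not the authors' code, of the two trajectory statistics just described: a lag-averaged MSD with an anomalous diffusion exponent α (α < 1 indicating subdiffusion), and a step-direction correlation whose negative values flag barriers or confinement. The synthetic trajectory, frame interval and lag values are placeholder assumptions.

```python
# Illustrative trajectory analysis on synthetic random-walk data.
import numpy as np

rng = np.random.default_rng(0)
dt = 250e-6                                        # assumed frame interval (s)
steps = rng.normal(scale=5e-9, size=(20000, 2))    # synthetic 2-D steps (m)
traj = np.cumsum(steps, axis=0)                    # x, y positions over time

def msd(traj, max_lag=200):
    """MSD(tau), averaged over all start times, for lags of 1..max_lag frames."""
    lags = np.arange(1, max_lag + 1)
    values = [np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1)) for l in lags]
    return lags, np.array(values)

def anomalous_exponent(lags, msd_values, dt):
    """Fit MSD ~ t**alpha on a log-log scale; alpha < 1 indicates subdiffusion."""
    alpha, _ = np.polyfit(np.log(lags * dt), np.log(msd_values), 1)
    return alpha

def step_direction_correlation(traj, lag=5):
    """Mean cosine of the angle between step vectors separated by `lag` frames;
    values below zero suggest the walker keeps bouncing back (confinement)."""
    v = np.diff(traj, axis=0)
    a, b = v[:-lag], v[lag:]
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.mean(cos)

lags, m = msd(traj)
print("anomalous exponent alpha ~", round(anomalous_exponent(lags, m, dt), 2))
print("step-direction correlation ~", round(step_direction_correlation(traj), 3))
```

For free Brownian motion such as this synthetic walk, the fit returns α close to 1 and a correlation near zero; applied to a confined trajectory, both statistics would drop.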
Raw video of an epidermal growth factor–gold nanoparticle (EGFR–GNP) probe diffusing on a HeLa cell membrane. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

Diffusion on the plasma membrane. (a) A lateral diffusional trajectory (17.5 μs exposure time; see color scale for chronology). (b) MSD (mean square displacement) versus τ. The blue curve shows the MSD of a. The black curve is simulated normal diffusion (α = 1), with the grey envelope indicating the uncertainty. (c) The diffusional exponent of rolling windows (color scale) over the trajectory. Regions of subdiffusion (α < 1) are indicated by darker shades. (d) αi through time. The grey shading represents a mean uncertainty of 7 ± 4%, corresponding to a 95% confidence interval for a window of 100 ms (1,000 frames) and τ = 250 μs. The points marked with the asterisk correspond to the circle in c. (e) The step-direction Ci for rolling windows along the trajectory. (f) The step-direction Ci plotted through time, with the shading denoting uncertainty. (g) ATOM occupation plot with residency time (colour scale). The bin size corresponds to the localization error. Noteworthy regions of extended occupation, marked as loops and whirls (i)–(iii), are indicative of persistent nanoscopic structures. The enclosed region represents a dense patch of notable subdiffusion. Scale bars, 100 nm. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

The scientists then assessed how often each region of space was visited by introducing an accumulated temporal occupancy map (ATOM). In this technique, they divided the lateral plane of the trajectory into nanometer-sized bins and counted the occurrences of the particle in each bin. The results indicated the arrangement of nanostructures in loops and whirls with a minimal lifetime of 250 nanoseconds (5,000 frames), potentially portraying a pre-endocytic step. In total, the observations showed how protein diffusion is affected by the substructure of the cell.
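As a rough sketch of the binning idea just described, and not the authors' implementation, the example below accumulates residency time on a 2-D grid. The bin size, frame time and synthetic trajectory are placeholder assumptions chosen only for illustration.

```python
# Illustrative accumulated temporal occupancy map (ATOM) on synthetic data:
# bin the lateral (x, y) trajectory into nanometer-sized pixels and count
# how long the particle resides in each bin.
import numpy as np

rng = np.random.default_rng(1)
frame_time = 1e-3                                   # assumed time per frame (s)
traj = np.cumsum(rng.normal(scale=5e-9, size=(50000, 2)), axis=0)  # metres

def atom_map(traj, bin_size=2e-9):
    """2-D histogram of residency time: occupancy (seconds) per spatial bin."""
    x, y = traj[:, 0], traj[:, 1]
    x_edges = np.arange(x.min(), x.max() + bin_size, bin_size)
    y_edges = np.arange(y.min(), y.max() + bin_size, bin_size)
    counts, _, _ = np.histogram2d(x, y, bins=(x_edges, y_edges))
    return counts * frame_time                      # frame counts -> seconds

occupancy = atom_map(traj)
print("most-visited bin residency: %.3f s" % occupancy.max())
# Persistent 'loops and whirls' would appear as connected ridges of high
# residency; a Boltzmann-style inversion, -kT*ln(occupancy), converts long
# residency in the occupied bins into a low effective potential energy.
```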
The iSCAT microscopy technique allows recording over very long periods of time, which the scientists used together with its 3-D imaging capabilities to follow EGFRs on a filopodium. Filopodia are rod-like cellular protrusions containing bundles of actin filaments, measuring 100 to 300 nm in diameter and up to 100 µm in length. These nanostructures can sense mechanical stimuli for chemoattraction or repulsion in the cellular microenvironment while providing sites for cell attachment. Ligand binding and EGFR activation on filopodia occurred at low concentrations of EGF, followed by association with actin filaments and retrograde transport of the EGFR to the cell body.

Diffusion on a filopodium. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

The scientists thus gained insight into the nanoscopic details of diffusion along the filopodium, recording data across 13 minutes. They analyzed the 3-D trajectory to reconstruct the filopodium topography, using the gold nanoparticle as a 'nano-rover' to map the surface topology of cellular structures for deeper examination. They plotted the trajectory's ATOM and found that the 3-D representation was consistent with the biological step of pre-endocytic membrane invagination.

LEFT: (a) A TEM (transmission electron microscope) image of a filopodium including an EGFR–GNP. (b) A filopodium surface reconstructed from 780,000 trajectory points with a localization error of σx,y = 2 nm, recorded at 1,000 fps. Inset, cross-sectional slice that depicts a cylindrical surface of diameter 150 nm after accounting for the size of the GNP. (c) A raw 13 min trajectory (left) broken into four subsequent pieces that reveal the journey to and from the tip, with arrows marking the direction of net motion. (d) An ATOM plot of c, corrected for filopodium drift. (e) A surface interpolation from the final 80 s. The ring-like confinement in the final phase (marked with a triangle) is a 3D pit. The scale bars are 200 nm (a), 1 μm (x, y) and 200 nm (z) (b), 1,000 nm (c) and 100 nm (x, y) and 50 nm (z) (e). RIGHT: (a) A lateral trajectory of a 48 nm GNP probe. Scale bar, 100 nm. A lower temporal sampling of this confinement would have underestimated the extent of bounding. (b) Ci of the trajectory (using a time lag of five frames), which shows partially hindered diffusion with a propensity for freer diffusion in the centre. (c) An ATOM plot of a. (d) A cut through the 3D-ATOM plot along the line of the black triangle in c shows that occupancy favours an innermost disk-like region. The axes denote 100 nm in both c and d. (e) Conversion of the temporal 2D occupation from c into an effective potential energy distribution. (f–j) Equivalent to a–e, but for a 20 nm GNP probe. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

High-speed microscopy techniques such as iSCAT are necessary to obtain high-resolution temporal information and to prevent blurring during nanoparticle localization-based imaging. The scientists demonstrated this by recording confined diffusion at 30,000 fps (frames per second) with 48 nm and 20 nm GNPs, and followed up with ultra-high-speed 3-D tracking of proteins at 66,000 fps, using a short exposure time of 10 µs over a duration of 3.5 seconds. Fast iSCAT imaging also revealed intricate features of endocytic events related to clathrin-mediated endocytosis in HeLa cells stimulated by low concentrations of EGF. In this way, Taylor et al. showed that the new technique can faithfully record nano-topographical information. The results matched observations made with transmission electron microscopy (TEM), without significant differences upon reducing the probe size from 48 nm to 20 nm, while providing new insights: details of subdiffusion, nanoscopic confinement, and 3-D contours of filopodia and clathrin structures at the nanoscale.

The scientists intend to combine iSCAT with in situ super-resolution fluorescence microscopy to understand the trajectories of proteins, viruses and other nanoscopic biological entities. Taylor et al. aim to advance their image-analysis methods to track GNPs smaller than 20 nm, and believe that the new technology, with additional optimization, will allow them to follow the life cycle of viruses without using an external label for tracking.

More information: Richard W. Taylor et al. Interferometric scattering microscopy reveals microsecond nanoscopic protein motion on a live cell membrane, Nature Photonics (2019). DOI: 10.1038/s41566-019-0414-6
Philipp Kukura et al. High-speed nanoscopic tracking of the position and orientation of a single virus, Nature Methods (2009). DOI: 10.1038/nmeth.1395
Jordan A. Krall et al. High- and Low-Affinity Epidermal Growth Factor Receptor-Ligand Interactions Activate Distinct Signaling Pathways, PLoS ONE (2011). DOI: 10.1371/journal.pone.0015945

Citation: Nanoscopic protein motion on a live cell membrane (2019, May 22) retrieved 18 August 2019 from https://phys.org/news/2019-05-nanoscopic-protein-motion-cell-membrane.html