Superconducting motor to increase power density

The field of electric motors has recently entered a new era. The electric motors you see today in everything from washing machines to toys and fans use the same basic principles as motors from 50 years ago. But with the realization that superconducting wire can replace conventional copper coils, motors are becoming more compact, more energy efficient and less expensive, which will have advantages particularly for large industrial applications.

Recently, scientists El Hadj Ailam and colleagues working at the Université Henri Poincaré in Nancy, France, and the Center for Advanced Power Systems in Tallahassee, Florida, designed and tested a superconducting rotating machine based on an unconventional topology. Their results, published in IEEE Transactions on Applied Superconductivity, show promising opportunities for the motor.

"This work has two goals," Ailam, who is currently with the Centre Universitaire Khemis Miliana in Algeria, told PhysOrg.com. "The first is to show the feasibility of an electrical motor based on the magnetic flux density, and the second is to demonstrate that superconductors can significantly ameliorate the electrical machine performances."

Building on high-temperature motors designed over the past few years, Ailam et al.'s motor is a low-temperature, eight-pole machine with a stationary superconducting inductor. Unlike copper coils, the niobium-titanium (NbTi) inductor coils in this design have no electrical resistance, which is one of the greatest advantages of superconductors. When the two NbTi coils are fed with currents flowing in opposite directions, the currents create a magnetic field. Located between the two coils, four superconducting bulk plates (made of YBaCuO, or yttrium barium copper oxide) shape and distribute the magnetic flux lines, which then induce an alternating electromagnetic field based on the magnetic concentration. A rotating armature wound with copper wires then converts the electrical energy to mechanical energy, which is eventually transferred to an application.

In this design, the entire inductor is cooled to 4.2 K using liquid helium to enable zero electrical resistance in the coils. (The scientists explain that high-temperature wires could also work in this configuration.) As with all superconducting motors, the superconducting wire can carry larger currents than copper wire, and can therefore create more powerful magnetic fields in a smaller amount of space than conventional motors.

"For the majority of electrical superconducting machines, the structure is a classical one, and the magnetic flux is a radial one," Ailam explained. "[However,] for our machine, the inductor magnetic flux is an axial one."

The scientists' experimental setup: (1) stationary cryostat; (2) induction motor; (3) belts; (4) sliding contacts—(a) brushes, (b) rings. Image credit: Ailam, et al. ©IEEE 2007.

To test the performance of the motor, the scientists calculated the magnetic scalar potential, which gives the strength of the magnetic field in a given region, and from it determined the magnetic flux density, the quantity of magnetism in that region. As the scientists explained, the flux density reaches its maximum between two of the bulk plates and its minimum behind the plates; a large difference in magnetic flux density maximizes the motor's performance by generating a more powerful magnetic field.

The group experimentally demonstrated a generated voltage of 118.8 volts for the motor. They had calculated a theoretical generated voltage of 172.5 volts, and explained that the discrepancy is due to an uncertain value for the difference between the maximal and minimal magnetic flux densities around the bulk plates, which was not directly measured. Improving this difference in magnetic flux density should increase the motor's voltage.
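A quick sanity check on those two numbers (simple arithmetic on the reported values, not a calculation taken from the paper): if the generated voltage scales roughly linearly with the flux-density swing seen by the armature windings, the measured-to-theoretical ratio gives a rough idea of how much of the predicted flux concentration was actually realized.

```python
# Rough comparison of measured vs. predicted generated voltage, used here as a
# proxy for how much of the calculated flux-density swing was achieved.
# Assumes a simple linear EMF-vs-flux-swing relation, for illustration only.
v_measured = 118.8    # V, reported experimental value
v_predicted = 172.5   # V, reported theoretical value

ratio = v_measured / v_predicted
print(f"measured/predicted = {ratio:.2f} "
      f"(about {100 * (1 - ratio):.0f}% below the theoretical value)")
```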
"As we demonstrate in another paper, under realization, using this structure with several superconducting wires and 20 mW generated power decreases the inductor volume 20-50 percent in comparison to a classical electrical machine," Ailam said.

In the near future, the group plans to design and construct a 100 kW superconducting machine using the same configuration.

"The major advantages of these motors are a high power-volume density and a high torque-volume density, and less vibration than for conventional motors," Ailam said. "I think that maritime propulsion, and electrical traction generally, can benefit principally from these motors."

Citation: Ailam, El Hadj, Netter, Denis, Lévêque, Jean, Douine, Bruno, Masson, Philippe J., and Rezzoug, Abderrezak. "Design and Testing of a Superconducting Rotating Machine." IEEE Transactions on Applied Superconductivity, Vol. 17, No. 1, March 2007.

'Cambridge Nights': a late night show for scientists

(PhysOrg.com) — While it's not uncommon to see scientists on TV, most of the time it's just for a few minutes on the news to comment on a recent event or major discovery. A new late night show called "Cambridge Nights," coming out of MIT's Media Lab, is changing that by providing an outlet for researchers to talk about their work in a slower-paced, conversational setting. The first episodes of the show are being posted at http://cambridgenights.media.mit.edu.

A screenshot during the intro to "Cambridge Nights." Image credit: MIT Media Lab

Similar to how Leno, Letterman and Jon Stewart interview interesting people in pop culture, Cesar Hidalgo, ABC Career Development Professor at MIT's Media Lab, interviews academic professionals about their research, their life stories and their views of the world. So far, eight episodes have been filmed, each about 30-45 minutes long. The episodes are being released every Wednesday, with the fourth episode appearing this week. The three episodes released so far feature interviews with Marc Vidal, Professor of Genetics at Harvard Medical School; Geoffrey West, former President of the Santa Fe Institute; and Albert-László Barabási, director of Northeastern University's Center for Complex Networks Research.

Due to the laid-back setting, the guests are able to tell stories that span their careers, peppered with interesting bits of trivia. For instance, as West discusses his research on how metabolism scales with an organism's body mass, he notes that life is often marveled at for its diversity, but no less intriguing is how the characteristics of all known life forms follow some simple physical and mathematical laws. Even the arrangement of trees in a forest follows a formula, despite looking random, he explains.
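West's best-known result of this kind is Kleiber's law, the roughly 3/4-power scaling of basal metabolic rate with body mass. As a rough illustration (the 70 kcal/day prefactor is the classic mammalian fit, an approximation not taken from the show):

```python
# Kleiber's law (approximate): basal metabolic rate scales as mass^(3/4).
# The prefactor ~70 kcal/day per kg^0.75 is the classic rough fit for mammals.
def metabolic_rate_kcal_per_day(mass_kg, prefactor=70.0, exponent=0.75):
    return prefactor * mass_kg ** exponent

for animal, mass in [("mouse", 0.03), ("human", 70.0), ("elephant", 4000.0)]:
    rate = metabolic_rate_kcal_per_day(mass)
    print(f"{animal:8s} {mass:8.2f} kg  ->  ~{rate:7.0f} kcal/day")
```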
As the shows are not pressed for time or commercial breaks, the guests are allowed to take their time while talking, without being cut short by frequent interruptions or confrontational questions.

"Guests are not asked to simplify or condense their narratives," according to the "Cambridge Nights" philosophy. "We invite them because we want to hear what they have to say, and we want to give them the time to say it comfortably. There are many high-speed formats out there. 'Cambridge Nights' is an alternative where thoughts can be developed and reflected upon without the need to rush."

For these reasons, the researchers who have appeared on the show so far have given positive feedback about the new outlet.

"The guests have loved the format," Hidalgo said. "Scientists tend to be long-winded, since they have a lot to say and are careful about making distinctions. An open and relaxed format has suited them well and they have been very happy."

While the rest of the episodes of the first season continue to be released through the end of November, the show is already preparing for the next season.

"We are also getting ready to film season two," Hidalgo said. "The plan is to film during the winter and release again next fall. We are shooting for 8 to 10 episodes for season two. Currently, our plan is to continue doing this yearly."

More information: http://cambridgenights.media.mit.edu

Researchers find new source for cold ocean water abyssal layer

(Phys.org)—An international team of ocean researchers has found a fourth source of Antarctic bottom water (AABW)—the very cold, highly saline layer of water that lies at the bottom of the ocean. In their paper published in the journal Nature Geoscience, the team describes how they discovered the site in the Cape Darnley polynya.

Intense sea-ice production in the CDP, revealed from satellite data. Credit: Nature Geoscience (2013) doi:10.1038/ngeo1738

AABW is important because, as it moves north from its source, it creates ocean currents that have a major impact on global climate. Until now, however, scientists had only been able to identify three major sources—not nearly enough to explain the amount of AABW seen in the ocean. In this new research, the team suspected that a different type of source might be at play—one arising in a polynya (an area of open water that does not freeze over, owing to rapid wind and water movement) rather than directly offshore of shelf ice. To find out, they employed traditional undersea sensors and, less traditionally, sensors attached to the heads of elephant seals.

Ocean currents result from AABW because of the way it is formed. When seawater freezes, much of the salt in the ice is pushed back into the water, giving that water a very high salinity—and because it is also very cold, it tends to sink. As it reaches the bottom, it joins other cold water that slowly seeps toward the edge of the continental shelf, where it falls over into the abyss, rather like an under-the-ocean waterfall. That falling water is what generates the currents that flow north.
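To see why brine rejection makes the shelf water sink, a linearized equation of state gives a rough estimate; the expansion and contraction coefficients below are representative textbook values near the freezing point, not figures from the study.

```python
# Linearized seawater equation of state near the freezing point:
#   rho ~ rho0 * (1 + beta*(S - S0) - alpha*(T - T0))
# Coefficients are representative values only (assumed, not from the paper).
rho0  = 1027.0   # kg/m^3, reference density of cold shelf water
beta  = 7.8e-4   # 1/psu, haline contraction coefficient
alpha = 5.0e-5   # 1/K, thermal expansion coefficient (small near freezing)

def density(salinity_psu, temp_c, s0=34.5, t0=-1.8):
    return rho0 * (1.0 + beta * (salinity_psu - s0) - alpha * (temp_c - t0))

ambient = density(34.5, -1.8)   # shelf water before intense ice production
salty   = density(35.5, -1.8)   # after brine rejection raises salinity ~1 psu
print(f"density increase: {salty - ambient:.2f} kg/m^3")  # roughly 0.8 kg/m^3
```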
Researchers had suspected for years that a source of AABW existed somewhere near what they call the Weddell Gyre, but had not been able to find it. In this new research, the team used satellite data to pick a likely polynya, and settled on Cape Darnley. There they sank sensors and studied data supplied by the elephant seal sensors. It was the data from the seals, the researchers report—the animals at times swam as deep as 1,800 meters—that revealed the layer of cold, dense water the researchers were looking for: the fourth AABW source.

After analyzing the Cape Darnley polynya source, the researchers concluded that it is likely responsible for 6 to 13 percent of the circumpolar AABW total, which suggests, they say, that other similar sources are still out there waiting to be found.

More information: Antarctic Bottom Water production by intense sea-ice formation in the Cape Darnley polynya, Nature Geoscience (2013) doi:10.1038/ngeo1738

Abstract
The formation of Antarctic Bottom Water—the cold, dense water that occupies the abyssal layer of the global ocean—is a key process in global ocean circulation. This water mass is formed as dense shelf water sinks to depth. Three regions around Antarctica where this process takes place have been previously documented. The presence of another source has been identified in hydrographic and tracer data, although the site of formation is not well constrained. Here we document the formation of dense shelf water in the Cape Darnley polynya (65°–69° E) and its subsequent transformation into bottom water using data from moorings and instrumented elephant seals (Mirounga leonina). Unlike the previously identified sources of Antarctic Bottom Water, which require the presence of an ice shelf or a large storage volume, bottom water production at the Cape Darnley polynya is driven primarily by the flux of salt released by sea-ice formation. We estimate that about 0.3–0.7 × 10⁶ m³ s⁻¹ of dense shelf water produced by the Cape Darnley polynya is transformed into Antarctic Bottom Water. The transformation of this water mass, which we term Cape Darnley Bottom Water, accounts for 6–13% of the circumpolar total.

Optimal stem cell reprogramming through sequential protocols

(Phys.org) —Gaining control of the ability of mature tissues to generate stem cells is the central medical challenge of our day. From taming cancer to providing compatible cell banks for replacement organs, knowledge of how cells interconvert between stable points on the complex cellulo-genetic landscape will give the doctor the same mastery the programmer now holds over bits. While researchers often speak of "reprogramming" cells, most recipes today consist only of a crude and partial ingredient list, with little consideration of sequence, quantity or prior state. We recently took stock of the latest in stem cell technology and reviewed the four major factors used to revert adult cells back into omnipotent progenitors. We also just reported on further attempts to rigorously define the appropriate levels of factors to supply. Researchers from China have now reported that stem cell generation can be regulated by the precise temporal expression of these factors. Publishing in the journal Nature Cell Biology, they show that the efficiency and yield of stem cells can be optimized by controlling the sequencing of the transforming factors, and they also provide a theoretical exploration of the possible mechanisms at work behind the scenes.

Stem Cell Induction. Credit: stemcellschool.org

Back in 2006, researchers in Japan were able to generate stem cells from skin cells without the need for oocytes (eggs) or other embryonic cells. By expressing the four transcription factors Oct4, Sox2, Klf4 and c-Myc, they could generate cells that, at least in theory, could turn into any other kind of cell. Unfortunately, not only was the overall yield of viable stem cells low, but the "rejuvenated" cells that could be extracted were generally unsuitable for subsequent patient treatment. The problem is that even when transplanting stem cells obtained from a person's own skin, immune rejection or tumor formation still occurred. This disharmony results from the fact that the immune system, while trained over a lifetime, can be confused by the "dissonance" in expression of youthful protein isoforms, particularly when they are encountered alongside those of more adult cells.

By inducing the expression of the four transformation factors at different times, the Chinese researchers eventually hit upon the optimal sequence. In a nutshell, by introducing a combination of Oct4 and Klf4 first, followed later by c-Myc, and then finally Sox2, the maximal yield could be obtained. They were surprised to find that this sequential protocol activated an epithelial-to-mesenchymal transition (EMT), which was then followed by a delayed reverse transition (MET). It had been known for some time that in mouse fibroblasts, reprogramming to the pluripotent stem cell state begins by going through a MET conversion. Therefore, finding upregulation of the proteins SLUG and N-cadherin, factors generally associated with an EMT, was not anticipated.

In embryogenesis, cells interconvert between epithelial and mesenchymal phenotypes as they lay out the basic body plan. In the epithelial state, cells possess inherent polarity and show preferential adhesion, while in the mesenchymal state, these properties are lost as cells become migratory and invasive. This game of run-the-bases is recapitulated as more option-constrained cells later rough out the critical form of each organ. Each time cells alight in either camp, they express part of an overlapping subset of various state indicators, but their genetic arrangements are never quite the same.

The authors looked at a few additional factors that might help explain the appearance of a brief mesenchymal state in the sequential procedure. By applying TGF-beta early on in the simultaneous factor expression protocol, they were able to mimic the appearance of the mesenchymal state. This was accompanied by an enhancement in the reprogramming yield, but reprogramming was blocked when the TGF-beta was applied using a 12-day treatment protocol. TGF-beta is a whole new can of worms, since it is expressed by many cells and does many things—even opposite things in different cells. It is traditionally termed a cytokine, although the distinction between a cytokine and a hormone is becoming increasingly blurred. Generally, hormones are active at nanomolar concentrations in the blood and vary by less than an order of magnitude. Cytokines, by contrast, often circulate at less than picomolar concentrations and ramp up 1,000-fold when called upon during injury or infection.

Capturing the essential behavior of the thousands of downstream regulators, or even of just four transcription factors, is simply not realistic with a flowchart or state diagram. Beyond a certain level of complexity, if the transition probabilities are too low, or the branch points and exceptions too numerous, new constraints are needed before any sensible algorithmic description can be attempted. In the absence of any such obvious constraints, the authors hypothesized that while multiple pathways exist for conversion between epithelial and mesenchymal states, some are shorter or easier to access than others. They believe that their sequential recipe tips the balance toward a brief mesenchymal state, which ultimately leads to a better stem cell yield.
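A toy numerical illustration of that hypothesis (this is not a model from the paper, and every probability below is invented purely for illustration): treat reprogramming as a Markov chain in which cells can reach the pluripotent state either directly or via a transient mesenchymal-like intermediate, and compare the yield when the indirect route is weakly versus strongly favored.

```python
import numpy as np

# Toy Markov chain for the "some pathways are easier to access" idea.
# States: 0 = fibroblast-like (F), 1 = mesenchymal-like intermediate (M),
#         2 = reprogrammed (P, absorbing). All rates are made-up placeholders.
def reprogrammed_fraction(steps, p_fm, p_fp=0.0005, p_mp=0.01, p_mf=0.02):
    T = np.array([[1 - p_fm - p_fp, p_fm,            p_fp],
                  [p_mf,            1 - p_mf - p_mp, p_mp],
                  [0.0,             0.0,             1.0 ]])
    population = np.array([1.0, 0.0, 0.0])   # start as pure fibroblasts
    for _ in range(steps):
        population = population @ T
    return population[2]                     # fraction that reached P

print(f"weak F->M route:   {reprogrammed_fraction(300, p_fm=0.002):.3f}")
print(f"strong F->M route: {reprogrammed_fraction(300, p_fm=0.020):.3f}")
```

Because the intermediate converts to the final state faster than the direct route does, funneling more cells through it raises the yield after a fixed number of steps, which is the qualitative point of the authors' "easier pathway" picture.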
More information: Sequential introduction of reprogramming factors reveals a time-sensitive requirement for individual factors and a sequential EMT–MET mechanism for optimal reprogramming, Nature Cell Biology (2013) doi:10.1038/ncb2765

Abstract
Present practices for reprogramming somatic cells to induced pluripotent stem cells involve simultaneous introduction of reprogramming factors. Here we report that a sequential introduction protocol (Oct4–Klf4 first, then c-Myc and finally Sox2) outperforms the simultaneous one. Surprisingly, the sequential protocol activates an early epithelial-to-mesenchymal transition (EMT) as indicated by the upregulation of Slug and N-cadherin followed by a delayed mesenchymal-to-epithelial transition (MET). An early EMT induced by 1.5-day TGF-β treatment enhances reprogramming with the simultaneous protocol, whereas 12-day treatment blocks reprogramming. Consistent results were obtained when the TGF-β antagonist Repsox was applied in the sequential protocol. These results reveal a time-sensitive role of individual factors for optimal reprogramming and a sequential EMT–MET mechanism at the start of reprogramming. Our studies provide a rationale for further optimizing reprogramming, and introduce the concept of a sequential EMT–MET mechanism for cell fate decision that should be investigated further in other systems, both in vitro and in vivo.

Naturally-occurring protein enables slower-melting ice cream

(Phys.org)—Scientists have developed a slower-melting ice cream—consider the advantages the next time a hot summer day turns your child's cone, with its dream-like mound of orange, vanilla and lemon swirls with chocolate flecks, into multi-colored sludge riddled with fly-like flecks.

The BslA protein binds together the air, fat and water in ice cream, creating a super-smooth consistency that should keep it from melting as quickly.

The answer lies in a naturally occurring protein explored by researchers at the universities of Dundee and Edinburgh. The protein's plus points are not only slower melting but also a smoother texture in ice cream, with no gritty ice crystals forming. Another plus will interest weight-watchers: the development could allow products to be made with lower levels of saturated fat and fewer calories. For example, the protein could be used in chocolate mousse and mayonnaise to help reduce calories.

The protein in focus is BslA. In ice cream, it works by binding together the air, fat and water. Because of BslA, the team could replace some of the fat molecules that are normally used to stabilize oil and water mixtures, cutting the fat content. The protein was developed with support from the Engineering and Physical Sciences Research Council and the Biotechnology and Biological Sciences Research Council, said a press item from the University of Dundee.

Yes, the ice cream will melt eventually, but Prof. Cait MacPhee said in a BBC News report on Monday that, hopefully, by keeping it stable for longer "it will stop the drips." MacPhee, of the University of Edinburgh's school of physics and astronomy, led the project. She told BBC Radio 5 live: "This is a natural protein already in the food chain. It's already used to ferment some foods so it's a natural product rather than being a 'Frankenstein' food." The team estimated such a slow-melting ice cream could be available in three to five years.

Radio New Zealand News said the protein occurs in friendly bacteria and works by adhering to fat droplets and air bubbles, making them more stable in a mixture.

Matthew Humphries in Geek.com spelled out what this could mean if the research were to reach the manufacturing stage: "For manufacturers it's a fantastic find. It can be added to ice cream without altering the taste or mouth feel, it also means the finished ice cream can be stored at slightly higher (yet still very cold) temperatures, which will save on energy costs. The protein can also reduce the level of saturated fat required. As long as the taste isn't affected by that, it means the ice cream you love will contain less calories."

The ice cream news is yet another example of why researchers are keenly interested in the behavior of proteins—as MacPhee said in discussing her research interests, "the molecules that are responsible for the vast majority of functions in living organisms." She noted that self-assembly of proteins underpins the texture of foodstuffs including egg, meat and milk products. "It is understanding this process of self-assembly – to prevent or reverse disease, or to drive the development of new materials and foodstuffs – that forms the focus of my research efforts," she stated.

More information: www.dundee.ac.uk/news/2015/slo … o-new-ingredient.php

The path to perfection: Quantum dots in electrically-controlled cavities yield bright, nearly identical photons

Optical quantum technologies are based on the interactions of atoms and photons at the single-particle level, and so require sources of single photons that are highly indistinguishable – that is, as identical as possible. Current single-photon sources using semiconductor quantum dots inserted into photonic structures produce photons that are ultrabright but have limited indistinguishability due to charge noise, which results in a fluctuating electric field. Conversely, parametric down-conversion sources yield photons that, while highly indistinguishable, have very low brightness. Recently, however, scientists at CNRS – Université Paris-Saclay, Marcoussis, France; Université Paris Diderot, Paris, France; University of Queensland, Brisbane, Australia; and Université Grenoble Alpes, CNRS, Institut Néel, Grenoble, France, have developed devices made of quantum dots in electrically-controlled cavities that provide large numbers of highly indistinguishable photons with strongly reduced charge noise that are 20 times brighter than any source of equal quality. The researchers state that by demonstrating efficient generation of a pure single photon with near-unity indistinguishability, their novel approach promises significant advances in optical quantum technology complexity and scalability.

Figure 1. a, Schematic of the sources: a single semiconductor quantum dot, represented by a red dot, is positioned within 50 nm of the center of the cavity, which consists of a 3 µm pillar connected to a circular frame through 1.3 µm wide waveguides. The top electrical contact is defined on a large mesa adjacent to the circular frame. By applying a bias to the cavity, the wavelength of the emitted photons can be tuned and the charge noise strongly reduced. b, Emission map of the device: the strong signal coming from the quantum dot located at the center of the cavity demonstrates the precise positioning of the quantum dot in the cavity and the enhanced collection efficiency obtained by accelerating the quantum dot spontaneous emission. Credit: Courtesy of Dr. Pascale Senellart.

Figure 2. a, Photon correlation histogram measuring the indistinguishability of photons successively emitted by one of the devices. The area of the peak at zero delay allows measuring the photon indistinguishability: it should be zero for fully indistinguishable photons. Two configurations are tested: the coalescence of photons with orthogonal polarization (fully distinguishable – blue curve) and the coalescence of photons with the same polarization (red curve). The disappearance of the zero-delay peak in the latter case shows the near-unity indistinguishability of the emitted photons. b, Graph summarizing all the source characteristics as a function of excitation power: brightness (probability of collecting a photon per pulse – red – right scale), autocorrelation function g(2)(0) (characterizing the probability of emitting more than one photon – blue – left bottom scale), indistinguishability M (purple – left top scale). Credit: Courtesy of Dr. Pascale Senellart.

Dr. Pascale Senellart discussed with Phys.org the paper, "Near-optimal single-photon sources in the solid state," that she and her colleagues published in Nature Photonics, which reports the design and fabrication of the first optoelectronic devices made of quantum dots in electrically controlled cavities that provide bright sources generating pure single photons with near-unity indistinguishability. "The ideal single photon source is a device that produces light pulses, each of them containing exactly one, and no more than one, photon. Moreover, all the photons should be identical in spatial shape, wavelength, polarization, and a spectrum that is the Fourier transform of its temporal profile," Senellart tells Phys.org. "As a result, to obtain near optimal single photon sources in an optoelectronic device, we had to solve many scientific and technological challenges, leading to an achievement that is the result of more than seven years of research."

While quantum dots can be considered artificial atoms that therefore emit photons one by one, she explains, due to the high refractive index of any semiconductor device, most single photons emitted by the quantum dot do not exit the semiconductor and therefore cannot be used. "We solved this problem by coupling the quantum dot to a microcavity in order to engineer the electromagnetic field around the emitter and force it to emit in a well-defined mode of the optical field," Senellart points out.
"To do so, we need to position the quantum dot with nanometer-scale accuracy in the microcavity."

Senellart notes that this was the first challenge the researchers had to address, since quantum dots grow at random spatial positions. "Our team solved this issue in 2008 [1] by proposing a new technology, in-situ lithography, which allows measuring the quantum dot position optically and drawing a pillar cavity around it. With this technique, we can position a single quantum dot with 50 nm accuracy at the center of a micron-sized pillar." In these cavities, two distributed Bragg reflectors confine the optical field in the vertical direction, and the contrast of the index of refraction between the air and the semiconductor provides the lateral confinement of the light. "Prior to this technology, the fabrication yield of quantum dot cavity devices was of the order of 10⁻⁴ – but today it is larger than 50%." The scientists used this technique to demonstrate the fabrication of bright single photon sources in 2013 [2], showing that the device can generate light pulses containing a single photon with a probability of 80% – but while all photons had the same spatial shape and wavelength, they were not perfectly identical.

"Indeed, for the photons to be fully indistinguishable, the emitter should be highly isolated from any source of decoherence induced by the solid-state environment. However, our study showed that collisions of the carriers with phonons and fluctuations of charges around the quantum dot were the main limitations." To solve this problem, the scientists added an electrical control to the device [3], such that the application of an electric field stabilized the charges around the quantum dot by sweeping out any free charge. This in turn removed the noise. Moreover, she adds, this electrical control allows tuning the quantum dot wavelength – a process that was previously done by increasing temperature, at the expense of increasing vibration. "I'd like to underline here that the technology described above is unique worldwide," Senellart stresses. "Our group is the only one with such full control of all of the quantum dot properties. That is, we control emission wavelength, emission lifetime and coupling to the environment, all in a fully deterministic and scalable way."

Specifically, implementing control of the charge environment for quantum dots in connected pillar cavities, and applying an electric field on a cavity structure optimally coupled to a quantum dot, required significant attention. "We had strong indications back in 2013 that the indistinguishability of our photons was limited by some charge fluctuations around the quantum dot: Even in the highest-quality semiconductors, charges bound to defects fluctuate and create a fluctuating electric field. In the meantime, several colleagues were observing very low charge noise in structures where an electric field was applied to the quantum dot – but this was not combined with a cavity structure." The challenge, Senellart explains, was to define a metallic contact on a microcavity (typically a cylinder 2-3 microns in diameter) without covering the pillar's top surface. "We solved this problem by proposing a new kind of cavity – that is, we showed that we can actually connect the cylinder to a bigger frame using some one-dimensional bridges without modifying too much the confinement of the optical field." This geometry, which the researchers call connected pillars, allows having the same optical confinement as an isolated pillar while defining the metallic contact far from the pillar itself. Senellart says that the connected pillars geometry was the key to both controlling the quantum dot wavelength and efficiently collecting its emission [3].

In demonstrating the efficient generation of a pure single photon with near-unity indistinguishability, Senellart continues, the researchers had one last step – combining high photon extraction efficiency and perfect indistinguishability – which they accomplished by implementing a resonant excitation scheme of the quantum dot. "In 2013, Prof. Chao-Yang Lu's team in Hefei, China showed that one could obtain photons with 96% indistinguishability by exciting the quantum dot state in a strictly resonant way [4]. Their result was beautiful, but again, not combined with an efficient extraction of the photons. The experimental challenge here is to suppress the scattered light from the laser and collect only the single photons radiated by the quantum dot."
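A commonly used, simplified way to turn a correlation histogram like the one in Figure 2a into a single indistinguishability number is to compare the zero-delay peak areas recorded with co-polarized and cross-polarized photons. The sketch below shows only that simplified estimator; the published analysis also corrects for residual multiphoton emission (g(2)(0)) and interferometer imperfections, and the numbers here are placeholders.

```python
# Simplified Hong-Ou-Mandel estimate of photon indistinguishability.
# M ~ 1 - A_parallel / A_orthogonal, where the A's are zero-delay coincidence
# peak areas for co-polarized (interfering) and cross-polarized
# (distinguishable) photons. Ignores g2(0) and setup corrections.
def indistinguishability(area_parallel, area_orthogonal):
    return 1.0 - area_parallel / area_orthogonal

# Placeholder areas chosen only to show the calculation:
print(indistinguishability(area_parallel=0.02, area_orthogonal=0.50))  # -> 0.96
```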
Senellart adds that while removing scattered photons when transmitting light in processed microstructures is typically complicated, in their case this step was straightforward. "Because the quantum dot is inserted in a cavity, the probability of the incident laser light to interact with the quantum dot is actually very high. It turns out that we send only a few photons – that is, fewer than 10 – onto the device to have the quantum dot emitting one photon. This beautiful efficiency, also demonstrated in the excitation process, which we report in another paper [5], made this step quite easy."

The devices reported in the paper have a number of implications for future technologies, one being the ability to achieve strongly-reduced charge noise by applying an electrical bias. "Charge noise has been extensively investigated in quantum dot structures," Senellart says, "especially by Richard Warburton's group." Warburton and his team demonstrated that in the best quantum dot samples, the charge noise takes place on a time scale of a few microseconds [6] – which is actually very good, since the quantum dot emission lifetime is around 1 nanosecond. However, this was no longer the case in etched structures, where strong charge noise is always measured on a very short time scale – less than 1 ns – that prevents the photons from being indistinguishable. "I think the idea we had – that this problem would be solved by applying an electric field – was an important one," Senellart notes. "The time scale of this charge noise does not only determine the degree of indistinguishability of the photons, it also determines how many indistinguishable photons one can generate with the same device. Therefore, this number will determine the complexity of any quantum computation or simulation scheme one can implement." Senellart adds that in a follow-up study [7] the scientists generated long streams of photons that can contain more than 200 photons that are indistinguishable by more than 88%.

In addressing how these de novo devices may lead to new levels of complexity and scalability in optical quantum technologies, Senellart first discusses the sources historically used to develop optical quantum technologies. She makes the point that all previous implementations of optical quantum simulation or computing have been carried out with spontaneous parametric down-conversion (SPDC) sources, in which pairs of photons are generated by the nonlinear interaction of a laser with a nonlinear crystal, and one photon of the pair is detected to announce the presence of the other photon.
This so-called heralded source can present strongly indistinguishable photons, but only at the cost of extremely low brightness. "Indeed, the difficulty here is that one pulse does not contain a single pair only, but some of the time several pairs," Senellart explains. "To reduce the probability of having several pairs generated, which would degrade the fidelity of a quantum simulation or calculation, or the security of a quantum communication, the sources are strongly attenuated, to the point where the probability of having one pair in a pulse is below 1%. Nevertheless, with these sources, the quantum optics community has demonstrated many beautiful proofs of concept of optical quantum technologies, including long-distance teleportation, quantum computing of simple chemical or physical systems, and quantum simulations like BosonSampling." (A BosonSampling device is a quantum machine expected to perform tasks intractable for a classical computer, yet requiring minimal non-classical resources compared to full-scale quantum computers.) "Yet, the low efficiency of these sources limits the manipulation to low photon numbers: It takes typically hundreds of hours to manipulate three photons, and the measurement time increases exponentially with the number of photons. Obviously, with the possibility to generate many more indistinguishable photons with an efficiency more than one order of magnitude greater than SPDC sources, our devices have the potential to bring optical quantum technologies to a whole new level."
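A back-of-the-envelope sketch of why that exponential scaling makes brightness so decisive (the repetition rate and the absolute per-pulse efficiencies below are assumptions chosen only for illustration; the 20x ratio echoes the brightness advantage quoted above): an n-photon coincidence rate falls off as the per-pulse source efficiency raised to the power n.

```python
# n-photon rate comparison: rate ~ repetition_rate * efficiency**n.
# All numbers are illustrative assumptions, not measured values.
rep_rate = 80e6      # pulses per second (typical pulsed laser, assumed)
eta_spdc = 0.01      # heralded SPDC source, photons per pulse (assumed)
eta_qd   = 0.20      # bright quantum-dot source, photons per pulse (assumed, ~20x)

for n in (2, 3, 5):
    advantage = (eta_qd / eta_spdc) ** n
    print(f"n={n}: SPDC ~ {rep_rate * eta_spdc**n:8.2e} /s, "
          f"QD ~ {rep_rate * eta_qd**n:8.2e} /s, advantage ~ {advantage:.0f}x")
```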
Other potential applications of the newly-demonstrated devices will focus on meeting near-future challenges in optical quantum technologies, including the scalability of photonic quantum computers and intermediate quantum computing tasks. "The sources presented here can be used immediately to implement quantum computing and intermediate quantum computing tasks. Actually, very recently – in the first demonstration of the superiority of our new single photon sources – our colleagues in Brisbane made use of such bright indistinguishable quantum dot-based single photon sources to demonstrate a three-photon BosonSampling experiment [8], where the solid-state multi-photon source was one to two orders of magnitude more efficient than down-conversion sources, allowing the experiment to be completed faster than those performed with SPDC sources. Moreover, this is a first step; we'll progressively increase the number of manipulated photons, in both quantum simulation and quantum computing tasks."

Another target area is the quantum communications transfer rate. "Such bright single photon sources could also drastically change the rate of quantum communication protocols that currently use attenuated laser sources or SPDC sources. Yet, right now, our sources operate at 930 nm when 1.3 µm or 1.55 µm sources are needed for long distance communications. Our technique can be transferred to the 1.3 µm range, a range at which single photon emission has been successfully demonstrated – in particular by the Toshiba research group – by slightly changing the quantum dot material. Reaching the 1.55 µm range will be more challenging using quantum dots, as it appears that single photon emission is difficult to obtain at this wavelength. Nevertheless, there's a very promising alternative possibility: the use of a 900 nm bright source, like the one we report here, to perform quantum frequency conversion of the single photons. Such efficient frequency conversion of single photons has recently been demonstrated, for example, in the lab of Prof. Yoshihisa Yamamoto at Stanford [9]."

Regarding future research, Senellart says, "There are many things to do from this point. On the technology side, we will try to improve our devices by further increasing the source brightness. For that, a new excitation scheme will be implemented to excite the device from the side, as was done by Prof. Valia Voliotis and her colleagues on the Nanostructures and Quantum Systems team at Pierre and Marie Curie University in Paris and Prof. Glenn Solomon's group at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. Applying this technique to our cavities should allow gaining another factor of four in source brightness. In addition, operating at another wavelength would be another important feature for our devices, since, as discussed above, this would allow using the source for quantum telecommunication. For example, a shorter wavelength, in the visible/near-infrared range, would open new possibilities to interconnect various quantum systems, including ions or atoms, through their interaction with photons, as well as applications in quantum imaging and related fields." The researchers also want to exploit the full potential of these sources and head toward high photon number manipulation in, for instance, quantum simulation schemes. "We're aiming at performing BosonSampling measurements with 20-30 photons, with the objective of testing the extended Church-Turing thesis and proving the superiority of a quantum computer over a classical one." The original Church-Turing thesis, based on investigations by Alonzo Church and Alan Turing into computable functions, states that, ignoring resource limitations, a function on the natural numbers is computable by a human being following an algorithm if and only if it is computable by a Turing machine.

Another promising impact on future optical quantum technologies is the generation of entangled photon pairs. "A quantum dot can also generate entangled photon pairs, and in 2010 we demonstrated that we could use in-situ lithography to obtain the brightest source of entangled photon pairs [10]. That being said, photon indistinguishability needs to be combined with high pair brightness – and this is the next challenge we plan to tackle. Such a device would play an important role in developing quantum relays for long distance communication and quantum computing tasks."

Senellart tells Phys.org that other areas of research might well benefit from these findings, in that devices similar to the one the scientists developed to fabricate single photon sources could also provide nonlinearities at the few-photon scale. This capability could in turn allow the implementation of deterministic quantum gates, a new optical quantum computing paradigm in which reversible quantum logic gates – for example, Toffoli or CNOT (controlled NOT) gates – can simulate irreversible classical logic gates, thereby allowing quantum computers to perform any computation that can be performed by a classical deterministic computer. "Single photons can also be used to probe the mechanical modes of mechanical resonators and develop quantum sensing with macroscopic objects. Other applications," she concludes, "could benefit from the possibility to have very efficient single photon sources, such as an imaging system with single photon sources that could allow dramatically increased imaging sensitivity.
Such a technique could have applications in biology, where the lower the photon flux, the better for exploring in vivo samples."

More information: Near-optimal single-photon sources in the solid state, Nature Photonics 10, 340–345 (2016), doi:10.1038/nphoton.2016.23

Related:
[1] Controlled light–matter coupling for a single quantum dot embedded in a pillar microcavity using far-field optical lithography, Physical Review Letters 101, 267404 (2008), doi:10.1103/PhysRevLett.101.267404
[2] Bright solid-state sources of indistinguishable single photons, Nature Communications 4, 1425 (2013), doi:10.1038/ncomms2434
[3] Deterministic and electrically tunable bright single-photon source, Nature Communications 5, 3240 (2014), doi:10.1038/ncomms4240
[4] On-demand semiconductor single-photon source with near-unity indistinguishability, Nature Nanotechnology 8, 213–217 (2013), doi:10.1038/nnano.2012.262
[5] Coherent control of a solid-state quantum bit with few-photon pulses, arXiv:1512.04725 [quant-ph]
[6] Charge noise and spin noise in a semiconductor quantum device, Nature Physics 9, 570–575 (2013), doi:10.1038/nphys2688
[7] Scalable performance in solid-state single-photon sources, Optica 3, 433-440 (2016), doi:10.1364/OPTICA.3.000433
[8] BosonSampling with single-photon Fock states from a bright solid-state source, arXiv:1603.00054 [quant-ph]
[9] Downconversion quantum interface for a single quantum dot spin and 1550-nm single-photon channel, Optics Express 20, 27510-27519 (2012), doi:10.1364/OE.20.027510
[10] Ultrabright source of entangled photon pairs, Nature 466, 217–220 (2010), doi:10.1038/nature09148

More precise measurements of phosphorene suggest it has advantages over other 2-D materials

(Phys.org)—A large team of researchers from China, the U.S. and Japan has developed a more precise means of measuring the various band gaps in layered phosphorene, and in so doing has found that it possesses advantages over other 2-D materials. In their paper published in the journal Nature Nanotechnology, the group describes their technique and what they observed during their measurements.

Direct observation of the layer-dependent electronic structure in phosphorene. a, The puckered honeycomb lattice of monolayer phosphorene; x and y denote the armchair and zigzag crystal orientations, respectively. b,c, Optical images of few-layer phosphorene samples. The images were recorded with a CCD camera attached to an optical microscope. The number of layers (indicated in the figure) is determined by the optical contrast in the red channel of the CCD image. d, Optical contrast profile in the red channel of the CCD images along the line cuts marked in b,c. Each additional layer increases the contrast by around 7%, up to tetralayer, as guided with the dashed lines. Credit: Likai Li et al. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.171

Scientists have been studying phosphorene (single-layered black phosphorus) for some time because they believe it might be useful for creating new or better types of 2-D optoelectronic devices, similar in some respects to research efforts looking into graphene. Though phosphorus itself was first discovered in 1669, phosphorene was not actually isolated until 2014. Since that time, researchers have attempted to study the band gaps (the energy differences between the tops of the valence bands and the bottoms of the conduction bands) that exist under various layering conditions, because each may represent a unique opportunity for using the material.

Prior efforts to find the band gaps relied on fluorescence spectroscopy, but that technique has not offered the accuracy needed for building devices. In this new effort, the researchers took a different approach, optical absorption spectroscopy, which works by measuring the absorption of radiation as it interacts with a sample. By conducting multiple experiments, the researchers found that the electronic structure of the material varied significantly across samples made with different numbers of layers, which, they noted, was consistent with prior theories.

In using the new technique, the researchers found that the different band gaps align well with different applications: 1.15 eV, for example, matches well with the band gap of silicon, and 0.83 eV could be used in optoelectronics because of its similarity to a telecom photon wavelength. They also noted that the 0.35 eV band gap could prove useful in creating infrared devices. Overall, they found that the structure of layered phosphorene gives it advantages over other 2-D materials for creating new devices—including, in some cases, graphene.
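For context on why those particular gaps map onto those applications, the photon wavelength corresponding to a band gap follows from λ = hc/E. A quick conversion, using only standard constants and the gap values quoted above, shows where each gap sits in the spectrum.

```python
# Convert band-gap energies (eV) to the corresponding photon wavelengths (µm)
# via lambda = h*c/E. Gap values are the ones quoted in the article.
H_C_EV_UM = 1.2398   # h*c expressed in eV·µm

for gap_ev in (1.15, 0.83, 0.35):
    wavelength_um = H_C_EV_UM / gap_ev
    print(f"{gap_ev:.2f} eV  ->  {wavelength_um:.2f} µm")
# 1.15 eV ~ 1.08 µm (near-infrared, close to silicon's gap),
# 0.83 eV ~ 1.49 µm (near the 1.55 µm telecom band),
# 0.35 eV ~ 3.5 µm (mid-infrared).
```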
The researchers next plan to use their results to create various optoelectronic devices, though they acknowledge that there are still some challenges involved, such as figuring out how to handle the tiny flakes and how to deal with the material's instability.

More information: Likai Li et al. Direct observation of the layer-dependent electronic structure in phosphorene, Nature Nanotechnology (2016). DOI: 10.1038/nnano.2016.171

Abstract
Phosphorene, a single atomic layer of black phosphorus, has recently emerged as a new two-dimensional (2D) material that holds promise for electronic and photonic technologies. Here we experimentally demonstrate that the electronic structure of few-layer phosphorene varies significantly with the number of layers, in good agreement with theoretical predictions. The interband optical transitions cover a wide, technologically important spectral range from the visible to the mid-infrared. In addition, we observe strong photoluminescence in few-layer phosphorene at energies that closely match the absorption edge, indicating that they are direct bandgap semiconductors. The strongly layer-dependent electronic structure of phosphorene, in combination with its high electrical mobility, gives it distinct advantages over other 2D materials in electronic and opto-electronic applications.

Field study suggests wealthy less willing to tax rich when poor people are around

(Phys.org)—A study conducted by a researcher at Harvard University suggests that wealthy people are less likely to support income redistribution through a tax on the very rich after having recently been exposed to an obviously poor person. In her paper published in Proceedings of the National Academy of Sciences, Melissa Sands describes a study she carried out using volunteers in wealthy neighborhoods, what she found, and her opinions regarding the impact it could be having on domestic policy decisions.

Most people are aware of the growing divide between the very wealthy (the so-called 1 percent) and everyone else in the United States. The issue has led some to call for income redistribution by requiring the very rich to pay more taxes, with the extra money going to help the poor. For such actions to actually happen, ordinary people would have to support such an initiative led by politicians. To learn more about how people might react to such an initiative in the form of a petition in a public place, Sands enlisted the assistance of several volunteers.

The study consisted of having male volunteers (some white, some black) pose as either a reasonably affluent person or as someone obviously very poor. The volunteers were stationed in affluent areas, in places where affluent people would have to walk past them to reach their destination; just before arriving, passersby would be asked by another volunteer, dressed as an affluent person, to sign one of two petitions. One petition supported a way to reduce the use of plastic bags (the control), while the other sought support for a 4 percent income tax increase for anyone making more than a million dollars a year. The idea was to see whether people felt differently about signing a petition to tax millionaires after exposure to a rich or a poor person.

Across 2,591 solicitations, Sands found that, contrary to what might seem logical to some, affluent people were less likely to support taxing millionaires after having encountered a poor person than if they had just seen someone more affluent—passersby were approximately twice as likely to sign the tax petition after seeing an affluent man than after seeing a poor white man. Interestingly, they were less affected by the sight of a poor black man.
Sands suggests the sight of a poor white man may have caused the affluent passersby to be more judgmental, owing to a feeling that such a person should be doing better without assistance.

More information: Melissa L. Sands. Exposure to inequality affects support for redistribution, Proceedings of the National Academy of Sciences (2017). DOI: 10.1073/pnas.1615010113

Abstract
The distribution of wealth in the United States and countries around the world is highly skewed. How does visible economic inequality affect well-off individuals' support for redistribution? Using a placebo-controlled field experiment, I randomize the presence of poverty-stricken people in public spaces frequented by the affluent. Passersby were asked to sign a petition calling for greater redistribution through a "millionaire's tax." Results from 2,591 solicitations show that in a real-world setting exposure to inequality decreases affluent individuals' willingness to redistribute. The finding that exposure to inequality begets inequality has fundamental implications for policymakers and informs our understanding of the effects of poverty, inequality, and economic segregation. Confederate race and socioeconomic status, both of which were randomized, are shown to interact such that treatment effects vary according to the race, as well as gender, of the subject.

Genetic study of 15th century samples shows adaptive changes in bacteria that cause relapsing fever

A team of researchers with members from the University of Oslo and the Norwegian Institute for Cultural Heritage Research has conducted a genetic analysis of the bacteria that cause relapsing fever, obtained from 15th century skeletons in Norway. In their paper published in Proceedings of the National Academy of Sciences, the group describes their study and what they found when they compared their results with the genome of modern bacteria.

The skeleton (right) excavated at the St. Nikolay Church in Oslo, which carried sequences for the pathogen of louse-borne relapsing fever. Credit: PNAS

Relapsing fever, as its name implies, is an ailment in which an infected person experiences fever several times following a single infection. If untreated, it is fatal in roughly 10 to 40 percent of cases. It is transmitted by lice and ticks. Back in the 15th century, it was responsible for killing millions of people in Europe—today, it is mostly confined to several countries in Africa. In this new effort, the researchers conducted a genetic analysis of the bacteria that caused the disease 600 years ago and compared it to the bacteria causing the same disease today. Samples of Borrelia recurrentis were retrieved from skeletons excavated from St. Nikolai Cemetery in Old Oslo—they have been dated to between 1430 and 1465.

After generating a genetic assembly, the researchers compared it with assemblies created by prior researchers studying the genome of the modern form of the bacteria. This allowed them to see how the bacteria have evolved over time. The researchers report that they were able to sequence approximately 17 percent of the bacterial genome from skeletal bones, which they bolstered by sequencing samples taken from teeth. Using data from both, they were able to sequence approximately 98.2 percent of the main chromosome. Comparing the findings with modern strains, they found that the earlier strains lacked three variable short protein genes and one plasmid found in modern strains. Prior research has shown that the proteins act as proinflammatory agents for the bacteria and, the researchers note, are key elements of the relapsing nature of the disease. They note further that such changes likely account for the differences in relapse rates—the disease tended to relapse just once or twice back in the 1400s, but is known to relapse up to five times in people afflicted today.

More information: Meriam Guellil et al. Genomic blueprint of a relapsing fever pathogen in 15th century Scandinavia, Proceedings of the National Academy of Sciences (2018). DOI: 10.1073/pnas.1807266115

Nanoscopic protein motion on a live cell membrane

Cellular functions are dictated by the intricate motion of proteins in membranes that span a scale of nanometers to micrometers, within a time-frame of microseconds to minutes. This rich parameter space is inaccessible to fluorescence microscopy, but it is within reach of interferometric scattering (iSCAT) particle tracking. iSCAT is, however, so sensitive that it also picks up single, unlabelled proteins, which makes non-specific background a substantial challenge during cellular imaging.

While steady progress in fluorescence microscopy has allowed scientists to monitor cellular events at the nanometer scale, a great deal remains out of reach even for advanced imaging systems. The central limitation of fluorescence microscopy is the finite emission rate of a fluorescent source (a dye molecule or a semiconductor quantum dot): too few photons are emitted within a very short time window for effective or prolonged imaging. Scattering-based microscopy faces a different difficulty: the weak signal of a nanoscopic probe competes against background noise, and the resulting low signal-to-noise ratio (SNR) limits the localization precision to a few nanometers in high-speed tracking experiments.

In a recent study, Richard W. Taylor and colleagues at the interdisciplinary departments of Physics and Biology in Germany developed a new image-processing approach to overcome this difficulty. They used the method to track the transmembrane epidermal growth factor receptor (EGFR) with nanometer-scale precision in three dimensions (3-D), across timescales from microseconds to minutes. The scientists provided examples of nanoscale motion and confinement, using the method to image ubiquitous processes such as diffusion in plasma membranes, transport in filopodia and rotational motion during endocytosis. The results are now published in Nature Photonics.

In the present work, Taylor et al. used iSCAT microscopy to track proteins in live cell membranes and to visualize probe–cell interactions, revealing the interplay between diffusion and local topology. The scientists used gold nanoparticles (GNPs) to label EGFRs in HeLa cells. EGFRs are type I transmembrane proteins that sense and respond to extracellular signals, and their aberrant signaling is linked to a variety of diseases. Taylor et al. treated the GNP-labelled protein as a 'nano-rover' that mapped the nano-topology of cellular features such as membrane terrains, filopodia and clathrin structures, and they demonstrated subdiffusion and nanoscopic confinement of a protein in 3-D at high temporal resolution and over long observation times.

LEFT: (a) A TEM (transmission electron microscope) image of a filopodium including an EGFR–GNP. (b) A filopodium surface reconstructed from 780,000 trajectory points with a localization error of σx,y = 2 nm, recorded at 1,000 fps. (c) A raw 13-minute trajectory broken into four pieces that reveal the journey to and from the tip. (d) An ATOM plot of c, corrected for filopodium drift. (e) A surface interpolation from the final 80 s, in which a ring-like confinement marks a 3-D pit. RIGHT: lateral trajectories, temporal-occupancy (ATOM) plots and effective potential-energy distributions for 48 nm and 20 nm GNP probes, showing partially hindered diffusion with a propensity for freer diffusion in the centre. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6
iSCAT microscopy on live cells. (a) Experimental arrangement of the iSCAT microscope for live-cell imaging: cells are plated in a glass-bottomed dish under Leibowitz medium, and a micropipette delivers the EGF–GNP probes directly onto the cell culture, where they specifically target the EGFR protein in the cell membrane. The bright-field illumination channel from above assists in inspecting the culture but is not required for iSCAT imaging (L1–L3, lenses; O1, ×100 objective; BS, 90:10 beam splitter; DM, 590 nm short-pass dichroic mirror). iSCAT imaging was performed with illumination intensities of 1–8 kW cm−2, which are known to be viable for HeLa at the wavelength of interest. (b) A section of the HeLa cell membrane before labelling, viewed via reflection iSCAT. (c) iSCAT image of the cell membrane including a bound EGF–GNP probe. (d) The PSF extracted from c. Scale bars in b–d are 1 μm. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

In the experiments, Taylor et al. introduced the epidermal growth factor–gold nanoparticle (EGF–GNP) probes into the sample chamber of the microscope with a micropipette to label the EGFRs on HeLa cells, and verified that the probes stimulated the EGFRs. Previous studies had already indicated that probe size can influence the rate of lipid diffusion in synthetic membranes, although it does not affect the mode of diffusion, and that in live cells molecular crowding is negligible for particles of 50 nm or smaller. Taylor et al. verified these two concrete cases in the present work by comparing GNPs of 48 nm and 20 nm diameter. Fluorescence and biochemical studies then suggested that the EGF-coated GNPs activated EGFR signaling much like freely available EGF, indicating that the label did not hinder biological function. To overcome the background noise that plagues molecular imaging, the scientists implemented a new algorithm that extracts the full iSCAT point spread function (iSCAT-PSF) directly from each frame.

Raw video of an epidermal growth factor-gold nanoparticle (EGFR–GNP) diffusing on a HeLa cell membrane. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

Because existing techniques are unable to visualize such features at high spatial and temporal resolution, many details of intracellular activity remain a matter of debate. The new method offers a wealth of dynamic heterogeneities in 3-D that shed light on protein motion. The scientists first studied subdiffusion in the plasma membrane quantitatively, considering a 2-D example of the EGFR's journey on the membrane of a living HeLa cell. For this, they computed the mean square displacement (MSD) for the whole trajectory, without making assumptions about the nature of the diffusion or its geographic landscape. They then gauged the occurrence of diffusive barriers and confinements by observing the degree of directional correlation between two vectorial steps separated by a given time span.
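To make these quantities concrete, here is a minimal sketch of how an MSD, a diffusional exponent and a step-direction correlation can be computed from a tracked 2-D trajectory. This is not the authors' analysis code: the frame time, step sizes and function names are hypothetical, and a synthetic random walk stands in for real data.

```python
import numpy as np

def mean_square_displacement(xy, max_lag):
    """MSD(tau) for a 2-D trajectory xy of shape (N, 2), averaged over all start times."""
    return np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

def diffusion_exponent(msd, dt):
    """Fit MSD ~ tau^alpha on a log-log scale; alpha < 1 indicates subdiffusion."""
    lags = np.arange(1, len(msd) + 1) * dt
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

def step_direction_correlation(xy, lag):
    """Mean cosine between step vectors separated by `lag` frames."""
    steps = np.diff(xy, axis=0)
    a, b = steps[:-lag], steps[lag:]
    cosines = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
    return float(np.mean(cosines))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dt = 17.5e-6                                                      # hypothetical frame time, s
    free = np.cumsum(rng.normal(0, 5e-9, size=(20_000, 2)), axis=0)   # synthetic free 2-D walk, m
    msd = mean_square_displacement(free, max_lag=200)
    print(f"alpha ~ {diffusion_exponent(msd, dt):.2f}")               # ~1 for free diffusion
    print(f"step correlation ~ {step_direction_correlation(free, 5):+.3f}")
```

For free Brownian motion the fitted exponent alpha is close to 1; values well below 1, or clearly negative step correlations, are the signatures of the subdiffusion and confinement discussed above.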
The scientists then assessed how heavily each pixel of the trajectory was visited by introducing an accumulated temporal occupancy map (ATOM). In this technique, they divided the lateral plane of the trajectory into nanometer-sized bins and counted the occurrences of the particle in each bin. The results indicated nanostructures arranged in loops and whirls, persisting for at least 250 milliseconds (5,000 frames), which potentially portray a pre-endocytic step. Taken together, the observations showed how protein diffusion is affected by the substructure of the cell.

The iSCAT technique also allows recordings over very long periods, which the team combined with its 3-D imaging capability to follow EGFRs on a filopodium. Filopodia are rod-like cellular protrusions containing bundles of actin filaments, up to 100 to 300 nm in diameter and 100 μm in length; they sense mechanical stimuli for chemoattraction or repulsion in the cellular microenvironment while providing sites for cell attachment. Ligand binding and EGFR activation on filopodia occurred at low concentrations of EGF, followed by association with actin filaments and retrograde transport of EGFR to the cell body. The scientists thus gained insight into the nanoscopic details of diffusion along the filopodium, recording data across 13 minutes.

Diffusion on a filopodium. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

They analyzed the 3-D trajectory to reconstruct the filopodium topography, using the gold nanoparticle as a 'nano-rover' to map the surface topology of cellular structures for deeper examination. When they plotted the trajectory's accumulated temporal occupancy map, the 3-D representation was consistent with the biological step of pre-endocytic membrane invagination.
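The ATOM described above is essentially a 2-D occupancy histogram of the trajectory. The sketch below illustrates the binning idea, plus the conversion of occupancy into an effective potential that appears in the paper's figures; it is not the authors' implementation, and the bin size, step size and toy confinement are hypothetical.

```python
import numpy as np

def accumulated_temporal_occupancy_map(xy, bin_size):
    """Histogram a lateral trajectory (N, 2) into square bins of width `bin_size`;
    each bin counts how many frames the particle spent there (residency)."""
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / bin_size).astype(int)
    atom = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(atom, (idx[:, 0], idx[:, 1]), 1)
    return atom

def occupancy_to_potential(atom, kT=1.0):
    """Boltzmann inversion U = -kT * ln(p); bins never visited are left as NaN."""
    p = atom / atom.sum()
    with np.errstate(divide="ignore"):
        U = -kT * np.log(p)
    U[atom == 0] = np.nan
    return U

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical confined walk: 50,000 frames of ~2 nm steps wrapped into a
    # 100 nm box (a crude periodic stand-in for confinement).
    steps = rng.normal(0, 2e-9, size=(50_000, 2))
    xy = np.mod(np.cumsum(steps, axis=0), 100e-9)
    atom = accumulated_temporal_occupancy_map(xy, bin_size=2e-9)   # 2 nm bins
    print("occupancy grid:", atom.shape, "| max residency:", atom.max(), "frames")
```

Regions of unusually high residency in such a map are what show up as the loops, whirls and ring-like pits described in the article.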
High-speed microscopy techniques such as iSCAT are necessary to obtain high-resolution temporal information and to prevent blurring during nanoparticle localization-based imaging. The scientists demonstrated this feature by recording confined diffusion at 30,000 fps (frames per second) with 48 nm and 20 nm GNPs, and followed up with ultra-high-speed 3-D tracking of proteins at 66,000 fps, using a short exposure time of 10 μs over a duration of 3.5 seconds. Fast iSCAT imaging provided further evidence of the intricate features of endocytic events related to clathrin-mediated endocytosis in HeLa cells stimulated by low concentrations of EGF. In this way, Taylor et al. showed that the new technique can faithfully record nano-topographical information. The results matched observations made with transmission electron microscopy (TEM), with no significant differences when the probe size was reduced from 48 nm to 20 nm, while providing new insights: details of subdiffusion, nanoscopic confinement, and the 3-D contours of filopodia and clathrin structures at the nanoscale.

The scientists intend to combine iSCAT with in situ super-resolution fluorescence microscopy to understand the trajectories of proteins, viruses and other nanoscopic biological entities. They aim to advance their image-analysis methods to track GNPs smaller than 20 nm, and believe that with additional optimization the technology will allow them to follow the life cycle of viruses without using an external label for tracking.

Diffusion on the plasma membrane. (a) A lateral diffusional trajectory recorded with a 17.5 μs exposure time. (b) Its MSD versus τ compared with simulated normal diffusion (α = 1). (c, d) The diffusional exponent of rolling windows along the trajectory and through time, with darker shades marking regions of subdiffusion (α < 1). (e, f) The step-direction correlation Ci along the trajectory and through time. (g) An ATOM occupancy plot with residency time, in which regions of extended occupation, marked as loops and whirls, indicate persistent nanoscopic structures. Scale bars, 100 nm. Credit: Nature Photonics, doi: 10.1038/s41566-019-0414-6

More information: Richard W. Taylor et al. Interferometric scattering microscopy reveals microsecond nanoscopic protein motion on a live cell membrane, Nature Photonics (2019). DOI: 10.1038/s41566-019-0414-6
Philipp Kukura et al. High-speed nanoscopic tracking of the position and orientation of a single virus, Nature Methods (2009). DOI: 10.1038/nmeth.1395
Jordan A. Krall et al. High- and Low-Affinity Epidermal Growth Factor Receptor-Ligand Interactions Activate Distinct Signaling Pathways, PLoS ONE (2011). DOI: 10.1371/journal.pone.0015945

Journal information: Nature Photonics, Nature Methods, PLoS ONE

© 2019 Science X Network

Citation: Nanoscopic protein motion on a live cell membrane (2019, May 22) retrieved 18 August 2019 from https://phys.org/news/2019-05-nanoscopic-protein-motion-cell-membrane.html

Youngsters at play

Theatre lovers of the city have reason to rejoice. Sahitya Kala Parishad, Department of Art, Culture and Language, Government of Delhi, has organised the Yuva Natya Samaroh 2013 under the ideology of 'amateurs of today, masters tomorrow'. The 10-day festival, which started on February 11, will stage plays by writers such as Vijay Tendulkar, Mohan Rakesh, Surendra Sharma, Neil Simon, Avinash Chander Mishra, Bhisham Sahni and Mahakavi Bhaas.

While plays like Ashadh ka Ek Din, Kabira Khada Bazaar Mein and Qaid-e-Hayat are about the lives of Kalidas, the Indian saint-poet Kabir and Ghalib, plays like Khamosh — Adalat Jaari Hain deal with social and family problems. Aadhe Adhoore, directed by Chander Shekhar Sharma, is a family drama about the values and principles governing a particular family, whereas Khamosh — Adalat Jaari Hain, directed by Rohit Tripathi, focuses on patriarchal society and how male-dominated politics tears down a woman's life, exposing her private life to moralistic critique. The other plays to be performed are Bada Natakia Kaun, directed by Prakash Chander Jha, Khusar Fusar (the Hindi translation of Neil Simon's Silence) and Pratigya Yogandhrayad, directed by Bhumikeshwar Singh.

Chander Shekhar Sharma, director of Aadhe Adhoore, feels that even though the play was written many years ago, it has a contemporary feel and is as relevant today as it was back then; the male-female relationship it delves into so intensely rings true even now. Talking about Khamosh — Adalat Jaari Hain, director Rohit Tripathi calls it 'a drama within a drama which has been portrayed in a very realistic form.' Referring to live performance, he said: 'More and more audiences are getting attracted towards theatre. So even though movies have a bigger reach, live performance will never lose its sheen.'

The variety in the plays' content is sure to keep the audience interested, and one will get to see music and lights used from a totally different perspective. The festival is also a form of encouragement and motivation for young and aspiring artistes. 'Through the medium of the play, youngsters will get to know more about their past and about the lives of famous entities. The festival also provides a huge platform for the actors as well as the directors to explore their talent,' said Bhupen Joshi, director of Ashadh Ka Ek Din.

Go watch.

DETAIL
At: Shri Ram Centre, 4 Safdar Hashmi Marg, Mandi House
On till: 21 February
Timings: 6.30 pm onwards

Irate students call for scrapping of exam following NEET question paper fiasco

Kolkata: After the question paper fiasco in the National Eligibility cum Entrance Test (NEET) 2018, medical aspirants from Bengal, including HS toppers of the previous academic year who appeared for the medical entrance examination with Bengali as the medium, demanded scrapping of the exam. They also urged the state government to conduct its own medical entrance examination, as was done earlier. It may be mentioned here that before the introduction of NEET, the West Bengal Joint Entrance Examination Board used to conduct the medical entrance examination.

Many of the candidates who topped the HS examination last year appeared for NEET 2018, but faced difficulties as the Bengali translation of most of the questions was done wrongly. Vehemently opposing the implementation of NEET, many of the Bengali medium students have decided to appeal to the Chief Minister to deal strongly with the issue. They will urge Chief Minister Mamata Banerjee to take up the issue with the Centre and take necessary steps to cancel the examination. They also demanded that the Centre allow the respective state governments to hold their own examinations, as was done earlier. It may be mentioned that following the question paper debacle in NEET 2018, held last Sunday, the Chief Minister has written to Union Human Resources Development minister Prakash Javadekar, demanding a re-examination.

Debasish Saha, a medical aspirant who had ranked eighth in HS last year, said that if the Centre continues with this single-level entrance examination across the country, it would be disastrous for students appearing in the vernacular languages. He alleged that the state boards were being forced to emulate the CBSE syllabus in the name of implementing NEET, and that Bengali medium students, who always achieve good results in the state-level medical entrance exam, perform pathetically in NEET.

Noureen Hossain, another top ranker in HS last year, said that NEET's design is starkly favourable to the CBSE syllabus and hence creates an educational imbalance. Students from state educational systems will be deprived in this format of examination, she added.

Many of the Bengali medium students appearing for NEET this year said that it was devised in English and extended to Hindi, and that those educated in the vernaculars are the worst sufferers. They also said that the common entrance exam may spell doom for the majority of medical aspirants from the state boards.

Shreya Dutta, who appeared for the examination with Bengali as the medium, said: "The poor students who were unable to afford the exorbitant training required to be successful in national examinations would not be able to compete with the urban students studying in CBSE board schools. A single-level examination cannot be implemented throughout the country, where there are multiple languages and different cultures."

Sidharth, Alia wrap up Kapoor and Sons

Actors Sidharth Malhotra, Alia Bhatt and Fawad Khan have wrapped up the shooting for Shakun Batra's Kapoor and Sons. The trio will soon release the first look of the film.

"#KapoorAndSons wrapped up….and coming soon @aliaa08 @_fawadakhan_ @shakunbatra @karanjohar," Sidharth, 30, posted on Twitter.

The 22-year-old Highway actor also tweeted about the upcoming first look. "Coming soon @s1dharthm ifawadkhan #kapoorandsonssince 1921," she wrote.

Produced by Karan Johar, the romantic comedy drama is Alia and Sidharth's second film together after their debut movie Student of the Year. Both actors are working with the Pakistani heartthrob for the first time.

The film will hit theaters on March 18, 2016. Kapoor and Sons also stars Rishi Kapoor in a major role.

Rage Against the Machines: The Beginnings of the Anti-Industrial Movement

It is most likely that Ned Ludd was a mythical figure, but even so, the folklore story of "General Ludd" who lived in Sherwood Forest might have inspired one of the most passionate and disruptive uprisings in 19th century Britain. Ned Ludd's story first appeared in The Nottingham Review on December 20, 1811. In the story, Ned Ludd is described as a weaver from Anstey (near Leicester, England). In 1779, after being beaten for not working hard enough (some accounts say he was taunted by local children), Ludd flew into a "fit of passion" and smashed two knitting frames.

Engraving of Ned Ludd, Leader of the Luddites, 1812.

While this story is mostly fictional, it has since been cited as the inspiration for what soon followed: the Luddite movement. The movement began on March 11, 1811, in Arnold, Nottingham. At first, the "Luddites" were a small band of passionate, disenchanted weavers, textile workers, and other laborers. They were frustrated and enraged at the thought of being replaced by industrial machines.

Sherwood Forest.

They would meet in small numbers on the moors of Nottinghamshire. There, they practiced destroying industrial machinery by the light of the pale moon. Then they would don masks and sneak into the towns and cities to carry out their attacks. It only took two years for the movement to pick up speed and sweep across England, with Luddite groups cropping up in the West Riding of Yorkshire in 1813.

Blue Plaque on White Lion public house, Westhoughton, commemorating the burning of Westhoughton Mill in 1812.

Luddite handloom weavers burned mills. They sabotaged and wrecked factory machinery. The Luddite textile workers destroyed industrial equipment and even began attacking those who owned and operated it.

Later interpretation of machine breaking (1812), showing two men superimposed on an 1844 engraving from the Penny magazine which shows a post-1820s Jacquard loom.

One such incident was carried out by a small band of Luddites led by George Mellor. They planned to ambush and assassinate William Horsfall, a wool and textile manufacturer from Ottiwells Mill in Marsden, West Yorkshire, at Crosland Moor in Huddersfield. Horsfall owned a textile factory and was an outspoken critic of the Luddite movement. Mellor and his group ambushed Horsfall at night near Crosland Moor. After Horsfall made a scornful remark to his attackers, Mellor shot him in the groin. Horsfall died of the wound, and Mellor and his fellow Luddites were arrested.

An 1812 handbill asks for information on armed men who destroyed five machines. Photo by The National Archives OGL

British Parliament had already passed legislation to combat what they called Industrial Sabotage. In 1721, they had criminalized the act of "Machine Breaking," and they had passed the Protection of Stocking Frames Act in 1788. Clearly, the Luddite movement proved that the penalties for these crimes weren't severe enough.
To solve this problem, Parliament passed the Frame Breaking Act of 1812, which made machine-breaking a capital offense punishable by hanging. So in 1813, when the British Government arrested George Mellor, they brought 60 other Luddites to a trial in York, where they were charged with various crimes in an attempt to quash the uprising.

John Kay, Inventor of the Fly Shuttle, by Ford Madox Brown, depicting the inventor John Kay in 1753 kissing his wife goodbye as men carry him away from his home to escape a mob angry about his labor-saving mechanical loom.

However, many of those who were charged were only self-proclaimed Luddites and not connected to the movement. Because of this, 30 of them were acquitted due to a lack of evidence. The Luddite movement lost a lot of momentum after this trial. In 1817, Jeremiah Brandreth led the Pentrich Rising, where a few hundred stockingers, quarrymen, and ironworkers carried out what is now considered the last action of the Luddite movement.

Jeremiah Brandreth (1790 – 7 November 1817) was an out-of-work stocking maker who lived in Sutton-in-Ashfield, Nottinghamshire.

In 1861, Parliament continued to crack down on acts of machine sabotage by passing the Malicious Damage Act. While the Luddite movement might sound like a minor conflict, the British Army at one point deployed more soldiers in the fight against the Luddites than it had in its fight against Napoleon Bonaparte.

The First Dog in American History to be Awarded Military Rank

Sergeant Stubby was a hero of World War I. He led a very successful military career and was the first dog in the history of the US Army to be granted military rank. He was awarded multiple decorations for his heroism and bravery, and not just by the US — he also received a medal from France. As well as his military service, Stubby met three sitting presidents, attended numerous veterans' commemorations, and was involved in community organizations. He was known by millions of Americans and remained a celebrity until his death in 1926.

Ambulance dog during WWI

Stubby's story begins in 1917, when a young private, J. Robert Conroy, found a brindle puppy with a short tail at Camp Yale, where his unit was undergoing basic training, according to the Smithsonian. Conroy named the puppy Stubby, and the pup was soon the unofficial mascot of Conroy's unit, the 102nd Infantry, 26th Yankee Division. Stubby was soon training with the soldiers, learning drills and bugle calls, and even a doggy version of a salute.

Decoration of regimental colors by General Passaga, 32nd French Army Corps.

Because Stubby had a good effect on the soldiers' morale, he was allowed to stay, even though dogs were forbidden in the camp. When the division left for France in October 1917, Stubby went too, hidden in the coal bin until the ship was well out to sea. When the division arrived in France, Stubby was smuggled off the ship.

General John J. Pershing awards Sergeant Stubby with a medal in 1921.

He was soon discovered by Conroy's commanding officer, but the CO let him stay after being charmed by the saluting dog. When the soldiers headed for the front lines, Stubby accompanied them, and soon became used to the noise and chaos of gunfire and artillery.

The American army dog Sergeant Stubby (c. 1916-1926)

Stubby's first war injury was from gas exposure. While he was taken to a field hospital and eventually recovered, that early exposure left him very sensitive to even the smallest amounts of gas in the air. When his division was the subject of an early morning gas attack, Stubby tore through the trench, waking sleeping soldiers to sound the gas alarm and preventing many men from being injured. Stubby would also help find wounded men who were between the trenches. He would listen for the sound of English being spoken and follow that to the wounded soldiers, barking for the paramedics, or, if the wounded were mobile, he would lead them back to camp.

Sergeant Stubby wearing military uniform and decorations.

According to the Connecticut Military Department, Stubby himself was injured again in April 1918, when the 102nd was part of a raid on a German-held town. As the Germans were withdrawing, they were also lobbing hand grenades at the pursuing American forces, and Stubby was wounded in a foreleg when one of the grenades went off. After that, the women of the town made him a blanket, embroidered with the flags of all the allies, to show their gratitude. The blanket also held his various stripes and awards. Once he even found and captured a German spy. The spy was attempting to map the American trenches when Stubby spotted him.

Sergeant Stubby's brick at the Liberty Memorial

The German tried to soothe the dog, but Stubby attacked the spy and bit his legs, causing him to fall over. He continued his attacks until his own soldiers could arrive. That capture is what earned him his promotion to the rank of sergeant, given by the commander of the 102nd. At the war's end, Stubby was smuggled back to the United States, but his career wasn't over.
On his return, he was made a lifetime member of the American Legion, attending their conventions and marching in their parades until his death. He was also made a lifetime member of the Red Cross and the YMCA, and he regularly went recruiting for members for the Red Cross. In addition to all his other awards, in 1921 he received a gold medal from the Humane Society, which was presented to him by General John Pershing, the Commanding General of the United States Army.

J. Robert Conroy, Stubby's owner, eventually went on to study law at Georgetown University and, again, Stubby went too. He was made the mascot of the Georgetown Hoyas and remained so until his death.

The Longest Boxing Match in History went 110 Rounds and Lasted over 7 Hours

On April 6, 1893, Andy Bowen and Jack Burke fought the longest gloved boxing match in history at the Olympic Club in New Orleans, Louisiana. The bout lasted seven hours and 19 minutes, from 9:00 pm until early the next morning, going 110 rounds. The prize was the Lightweight Championship of the South and a purse of $2,500. Burke was the favorite in the beginning, winning the first 25 rounds, but "Iron Bowen" refused to be knocked out.

Boxer Andy Bowen (1864-1894)

He knocked Burke down in the 25th round, but the bell rang before Burke could be counted out. At some point during the match both of Burke's hands were broken, and the two opponents grew so tired that their boxing talents made no difference. Most of the crowd had left by midnight, and many who hadn't were asleep in their chairs. By the 108th round, no punches were being thrown – the men just circled each other over and over. In the 110th round, the referee, John Duffy, called the match a draw and suggested the two men split the purse.

Jack Burke on February 10, 1904

According to Encyclopaedia Britannica, modern boxing observes 12 rules which attempt to make the sport more humane. They were written by Londoner John Graham Chambers and published by John Sholto Douglas, the ninth Marquess of Queensberry, in 1867. The "Queensberry Rules" are still in force today and, among other things, limit rounds to three minutes with a one-minute break between rounds.

A painting of Minoan youths boxing, from an Akrotiri fresco circa 1650 BC. This is the earliest documented use of boxing gloves.

The earliest record of boxing was found in ancient Sumerian relief sculptures from Mesopotamia. An Egyptian relief from 1350 BC shows barefisted boxers and an audience. Fighting with gloves had emerged by 1500 to 1400 BC, shown by evidence from Minoan artwork discovered on the Greek island of Crete. The 23rd Olympiad of 688 BC set the first rules for the sport.

A boxing scene depicted on a Panathenaic amphora from ancient Greece, circa 336 BC, British Museum

Ancient Greece had no weight categories or rounds, and the opponents fought until one either gave up or was killed. When boxing came to the Romans, the gloves were modified with pointy metal studs, making the event full of blood and gore; the spectacle entertained the Romans until the sport was banned around 393 AD, and it disappeared from many parts of Europe.

After the fall of the Roman Empire, spectator boxing became popular again in about the 12th century. According to Ancient Origins, bare-knuckle boxing became popular in Great Britain in the early 16th century.

Tom Cribb vs Tom Molineaux in a re-match for the heavyweight championship of England, 1811

On January 6, 1681, Christopher Monck, Second Duke of Albemarle, set up a boxing match between his butcher and his butler — with the butcher coming out ahead. There were still no set rules, and the boxers often resorted to headbutting, choking, eye gouging, kicking, biting, and hitting a man who was down in order to win.

Amateur Boxing Club, Wales, 1963

In 1743, boxer Jack Broughton put forth "Broughton's rules" in an attempt to curb deaths in the ring. They gave a fighter 30 seconds to get back up after being knocked down, and they included the "no hitting below the belt" rule. By 1882, bare-knuckle boxing was illegal.
The first modern boxing match was held at the Pelican Athletic Club in New Orleans in 1892, when "Gentleman Jim" Corbett defeated John Lawrence Sullivan in a heavyweight bout.

Corbett training for his fight with Jeffries

Boxing has become a multi-million dollar sport thanks to such athletes as Muhammad Ali, considered by some to be the greatest boxer of all time, Joe Frazier, George Foreman, Sugar Ray Robinson, Sugar Ray Leonard, Evander Holyfield, Rocky Marciano, Mike Tyson and Marvin Hagler, just to name a few. The sport has also gained a huge audience from such films as the Rocky series, which Sylvester Stallone famously turned down large sums of money in order to star in.

1960 Olympians: Ali won gold against Zbigniew Pietrzykowski (1956 and 1964 bronze medalist)

In 1927, a fight between Jack Dempsey and Gene Tunney grossed over $2.7 million without the aid of a television audience. In a controversial decision, Tunney won in Dempsey's last fight. In 2015, a bout between Floyd Mayweather and Manny Pacquiao brought a payout of more than $100 million to both fighters. The match went 12 rounds, with Mayweather declared the winner.

For this writer, promoter Don King's hair was the best part of boxing.

The Transformation of Ted Geisel into Dr. Seuss – Why did he Choose the Name?

first_imgIt’s undeniable that Theodor Seuss Geisel, better known by his pen name “Dr. Seuss” is one of the most celebrated writers in history. Not only does his work constitute the bulk of popular children’s literature, but he is also widely praised for his achievements as a political cartoonist during the second world war. That said, his choice to pen most of his critically acclaimed material under the name “Dr. Seuss” didn’t occur by chance. Actually, that name has quite a bit of history attached to it, some of which centers around a father’s wish that his son would have gotten a Ph.D. Theodor started calling himself “Dr.” in his pen name as his way of recognizing his father’s dream.When it comes to the second half of his name “Seuss,” he adopted it shortly after he was relieved of his duties as the editor for The Dartmouth Jack-O-Lantern, a college humor magazine.Children’s book author Theodor Seuss Geisel. Photo by John Bryson/The LIFE Images Collection/Getty ImagesHe had contributed to the magazine for quite some time. In “The Beginnings of Dr. Seuss – An Informal Reminiscence,” originally published in Dartmouth Alumni Magazine’s April 1976 edition, he stated, “almost every night I’d be working in the Jack-O-Lantern office.” However, his time ran out in 1925 when he was caught drinking, which was illegal at the time which caused the then-Dean to suspend him of all of his duties related to the magazine.AdChoices广告inRead invented by TeadsTheodor Seuss Geisel“The night before Easter of my senior year there were ten of us gathered in my room at the Randall Club,” he said. “We had a pint for ten people, so that proves nobody was really drinking.” In any case, he, along with the other offenders, was brought before a disciplinary committee which subsequently administered his punishment.However, despite no longer being officially affiliated with the magazine, Theodor still continued to anonymously contribute to it extensively. After releasing a few of his publications under either fake names or without acknowledging the source, he then issued two with different names: “Seuss” and “T. Seuss” respectively.Ted Geisel (Dr. Seuss)At the time, he thought that it was truly an ingenious way to mask his identity while claiming ownership over his work, although he was later unconvinced. “To what extent this corny subterfuge fooled the dean, I never found out. But that’s how ‘Seuss’ first came to be used as my signature. The ‘Dr.’ was added later on.”Dr. Seuss working on ‘How the Grinch Stole Christmas!’ in early 1957His father, a man who managed one of the biggest brewing companies in New England at that time, really wanted Theodor to graduate with a PhD. Actually, he was on the road to doing so. In the 1920s, he was enrolled in the Ph.D. program in English at the University of Oxford with the hope of graduating. Unfortunately, he discontinued his studies and consequently never received the doctorate.The Hollywood Walk of Fame star of Dr. Seuss located on Hollywood Blvd. that was awarded in 2004 for achievement in motion picturesFunnily enough, despite dropping out of university, he was later awarded multiple honorary doctorates, thereby gaining something akin to a Ph.D. It’s safe to say that he was able to live up to his father’s wishes.That said, in The Beginnings of Dr. Seuss – An Informal Reminiscence, Theodor gave another reason. He said that he added the “Dr.” to his pen name in order to make him sound “more professional.”American author and illustrator Dr. 
At the time, he was working on a project called "Boids and Beasties," and he felt that the addition of "Dr." to his name would present him in a more professional light.

Initially, he used to sign either "Dr. Theophrastus" or "Dr. Theo." However, those titles were eventually supplanted by "Dr. Seuss," a name which would become his identification marker throughout the world for years to come.

Westbrook Shouldn't Get All The Blame If Houston Fails

Guests: Chris Broussard, Kendrick Perkins, Matt Barnes, Stephen Jackson, and Jason McIntyre

Russell Westbrook is headed to Houston to team up with James Harden, and Colin thinks the experiment will eventually implode. Even though he's been critical of Westbrook, he doesn't think Mr. Triple Double should get all the blame. Houston GM Daryl Morey knew exactly what Westbrook was when he traded for him, and thinking he's going to magically change his game at this point is unrealistic. That's not on Westbrook; it's on the Rockets.

Also:
-Sam Presti blew it with Harden and KD
-Colin's Top 12 NBA Duo Grades
-Former Harden and Westbrook teammate Kendrick Perkins on why he thinks the duo will work

Why Women Leave Tech: It Isn't Because Math is Hard

October 2, 2014

This story originally appeared on Fortune Magazine.

I knew something was up when Sandhya, a talented project manager I only knew slightly, asked me if we could have lunch. She had recently come back from maternity leave. In her note, she said she wanted some advice from another mom.

Over lunch, she confided in me that she was thinking of quitting. It was too hard to juggle everything. Her manager had pressured her to return from leave early, and was pushing her again to take a business trip and leave her nursing infant at home. She wasn't sleeping. She felt like she was failing her job and her child at the same time. I assured her that her feelings were normal and that much of it would pass. I encouraged her to say no to her manager. I offered to speak to him on her behalf. Although she earned more than her husband did, she quit two weeks later.

That was four years ago, and Sandhya still hasn't returned to the tech industry. She has no plans to. She has since had another baby. Her story has haunted me since. She came looking for support, and I felt like I failed her.

Over the last month, I have collected stories from 716 other women who have left the tech industry. Their average tenure in the industry was a little over seven years. All of them shared their single biggest reason for leaving, their current employment status, and their desire (or not) to return to tech.

Motherhood as just the final push

Like many of the women I surveyed, Annabelle is highly educated; she has a PhD in linguistics and a master's degree in computer science. She is one of 484 women to cite motherhood as a factor in her decision to leave tech. Unlike the 42 women who said they wanted to be stay-at-home mothers, Annabelle's decision to leave was not planned:

"I was the first and only person at my small company ever to take maternity leave. They had no parental leave policy previously even though they had been around for about a decade, and, having under 50 employees, weren't covered by FMLA. I (cluelessly!) agreed to go back to work part-time starting when my daughter was six weeks. There was no set place for me to pump [breast milk] while I was at work, so it was perpetually inconvenient and awkward to work at the office for longer than a couple hours at a time."

Eighty-five women cited maternity leave policy as a major factor in their decision to leave their tech jobs. That's over 10% of the women I surveyed. Caitlin, who worked as a data center developer for over a decade, said the following:

"I negotiated 12 unpaid weeks off when my son was born. Only it wasn't really time off. I didn't have to go to the office every day, but I was expected to maintain regular beeper duties and respond within 15 minutes any time there was a problem. I'd be nursing my screaming baby and freaking out that I was going to get fired if I didn't answer the beeping thing right away."

Many women said that it wasn't motherhood alone that did in their careers. Rather, it was the lack of flexible work arrangements, the unsupportive work environment, or a salary that was inadequate to pay for childcare. As Rebecca, a former motion graphics designer, put it, "Motherhood was just the amplifier. It made all the problems that I'd been putting up with forever actually intolerable."
"Everyone's the same, and no one's like me."

One hundred ninety-two women cited discomfort working in environments that felt overtly or implicitly discriminatory as a primary factor in their decision to leave tech. That's just over a quarter of the women surveyed. Several of them mention discrimination related to their age, race, or sexuality in addition to gender and motherhood. Dinah was a front-end developer for eight years before deciding to call it quits:

"Literally 28 of the 30 people in our company were white, straight men under 35. I was the only woman. I was one of only two gay people. I was the only person of color other than one guy from Japan. My coworkers called me Halle Berry. As in, 'Oh look, Halle Berry broke the website today.' I'm pretty sure for some of them I'm the only actual black person they've ever spoken to. Everyone was the same, and no one was like me. How could I stay in that situation?"

Never going back

Of the 716 women surveyed, 465 are not working today. Two hundred fifty-one are employed in non-tech jobs, and 45 of those are running their own companies. A whopping 625 women say they have no plans to return to tech. Only 22—that's 3%—say they would definitely like to. Stella, a senior leader with almost 20 years of experience in engineering, talks about her experience quitting and starting an ecotourism travel company:

"I love coding. I have a masters in CS [computer science]. I worked in tech for two decades. So many women like me, so highly trained and for what? It was hard enough being the only woman on most projects. Try being the only woman over 40. Doesn't matter how good you are, or even if your colleagues respect you. Eventually you get tired of being the odd duck. I took all my experience and started my own thing where I could make the rules. I'm never going back."

The pipeline isn't the problem

It is popular to characterize the gender gap in tech in terms of a pipeline problem: not enough girls studying math and science. However, there are several indications that this may no longer be the case, at least not to the extent that it once was. High school girls and boys participate about equally in STEM electives. Elite institutions like Stanford and Berkeley now report that about 50% of their introductory computer science students are women. Yet just last year, the U.S. Census Bureau reported that men are employed in STEM occupations at about twice the rate of women with the same qualifications.

Almost everyone I spoke with said that they had enjoyed the work itself. Most mothers added that they would have happily returned to their jobs a few months after giving birth, but their companies didn't offer maternity leave and they needed to quit in order to have their kids. Some women felt that their work environments were discriminatory, but most reported something milder: the simple discomfort of not fitting in in an otherwise homogeneous setting. It may not sound like a big deal if you're used to being in the majority, but it was enough to drive many qualified engineers to quit.

There may be work to do on the pipeline, but the pipeline isn't the problem. Women are leaving tech because they're unhappy with the work environment, not because they have lost interest in the work.

As cultural issues go, this is an incredibly expensive problem. Like my friend Sandhya, these women are educated, highly trained, and weren't planning to quit. We're losing them anyway.
And once we’ve lost them, we almost never get them back.last_img read more

Stanford Develops Computer That Literally Plugs Into People's Brains

February 23, 2017

I watched the first half of the video above before having to start it over from the beginning. I needed to confirm what my eyes were seeing. Yes, that's a computer, and yes, it's plugged directly into the top of that woman's head. Like an electrical outlet, except not.

I had been paying attention to what was being said. After all, the video is about a noble project by Stanford University researchers who developed a way for people with paralysis—caused by anything from Lou Gehrig's disease to a spinal cord injury—to be able to type and communicate. The school says the method it developed lets people do this at "the highest speeds and accuracy levels reported to date."

Pill-sized electrodes were placed in the subjects' brains to record signals from the motor cortex—the region of the brain that controls muscle movement. From here, things get interesting. The researchers developed a sort of power cable that's connected to a computer on one end and literally plugged into the subject's brain on the other end—right into the top of their head. Signals from the person's motor cortex were transmitted via the cable to the computer, where they were translated by algorithms into point-and-click commands.

As you can see in the video, those commands guide a cursor over characters on an on-screen keyboard. Enabling people who suffer from paralysis to communicate is amazing.

This is far from the only example of invasive brain-computer interfaces developed over the years. Regardless, seeing a person with what appears to be an electrical outlet on their skull is pretty far out there. See? That's an older example of another type of brain-computer interface, one that's designed to help people see. Wild, isn't it?
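For readers curious what "translated by algorithms into point-and-click commands" can look like in its simplest form, here is a minimal sketch of a linear decoder that turns simulated firing rates into a cursor velocity and a click decision. This is not the Stanford team's algorithm: the channel count, weights, smoothing constant and click threshold are hypothetical placeholders, and real systems fit these mappings on calibration data and typically use more sophisticated filters.

```python
import numpy as np

# Hypothetical linear "point-and-click" decoder; all parameters are placeholders.
N_CHANNELS = 96                                   # assumed electrode channel count
rng = np.random.default_rng(0)
W = rng.normal(0, 0.05, size=(2, N_CHANNELS))     # firing rates -> (vx, vy); learned in practice
w_click = rng.normal(0, 0.05, size=N_CHANNELS)    # firing rates -> click score

def decode_step(rates, cursor, smoothed, alpha=0.8, dt=0.02, click_threshold=1.5):
    """One 20 ms decoding step: smooth the firing rates, map them to a cursor
    velocity, integrate the position, and test for a click."""
    smoothed = alpha * smoothed + (1 - alpha) * rates
    velocity = W @ smoothed                        # toy units per second
    cursor = cursor + velocity * dt
    click = float(w_click @ smoothed) > click_threshold
    return cursor, smoothed, click

if __name__ == "__main__":
    cursor = np.zeros(2)
    smoothed = np.zeros(N_CHANNELS)
    for _ in range(50):                            # one second of simulated 20 ms bins
        rates = rng.poisson(5, N_CHANNELS).astype(float)   # fake spike counts
        cursor, smoothed, click = decode_step(rates, cursor, smoothed)
    print("cursor:", cursor.round(2), "| click:", click)
```

In a real interface, the weight matrix would be fit during a calibration phase in which the participant attempts cued movements, and the decoded cursor would drive an on-screen keyboard like the one described in the article.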