A group of scientists has succeeded in creating the first transistor made from a single molecule. The team, which includes researchers from Yale University and the Gwangju Institute of Science and Technology in South Korea, published their findings in the December 24 issue of the journal Nature.
The team, including Mark Reed, the Harold Hodgkinson Professor of Engineering & Applied Science at Yale, showed that a benzene molecule attached to gold contacts could behave just like a silicon transistor.
The researchers were able to manipulate the molecule’s different energy states depending on the voltage they applied to it through the contacts. By manipulating the energy states, they were able to control the current passing through the molecule.
“It’s like rolling a ball up and over a hill, where the ball represents electrical current and the height of the hill represents the molecule’s different energy states,” Reed said. “We were able to adjust the height of the hill, allowing current to get through when it was low, and stopping the current when it was high.” In this way, the team was able to use the molecule in much the same way as regular transistors are used.
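Reed's hill analogy can be put in rough numerical terms. The toy model below is an illustration only, not the team's actual physics: it treats current as thermally activated transport over a barrier whose height stands in for the molecule's energy state, so current falls off exponentially as the "hill" is raised.

```python
import math

def barrier_current(barrier_ev, i0=1.0, kt_ev=0.025):
    """Toy thermally activated current over an energy barrier.

    barrier_ev: barrier height in electron-volts (the 'hill')
    i0: prefactor current, arbitrary units
    kt_ev: thermal energy at room temperature (~25 meV)
    """
    return i0 * math.exp(-barrier_ev / kt_ev)

# Lowering the hill lets current through; raising it chokes the current off.
low_hill = barrier_current(0.05)
high_hill = barrier_current(0.30)
print(low_hill / high_hill)  # a low barrier passes vastly more current
```

The exponential sensitivity is the point of the analogy: a modest change in barrier height switches the current between "on" and "off", which is exactly the behavior a transistor needs.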
The work builds on research Reed did in the 1990s demonstrating that individual molecules could be trapped between electrical contacts. Since then, he and Takhee Lee, a former Yale postdoctoral associate and now a professor at the Gwangju Institute of Science and Technology, have developed techniques that allow them to “see” what is happening at the molecular level.
Being able to fabricate the electrical contacts on such small scales, identifying the ideal molecules to use, and figuring out where to place them and how to connect them to the contacts were also key components of the discovery. “There were a lot of technological advances and understanding we built up over many years to make this happen,” Reed said.
There is a lot of interest in using molecules in computer circuits because traditional transistors are not feasible at such small scales. But Reed stressed that this is strictly a scientific breakthrough and that practical applications such as smaller and faster “molecular computers”—if possible at all—are many decades away.
“We’re not about to create the next generation of integrated circuits,” he said. “But after many years of work gearing up to this, we have fulfilled a decade-long quest and shown that molecules can act as transistors.”
Archive for December 24th, 2009
Posted by Xeno on December 24, 2009
The faint tug of the sun and moon on the San Andreas Fault stimulates tremors deep underground, suggesting that the rock 15 miles below is lubricated with highly pressurized water that allows the rock to slip with little effort, according to a new study by University of California, Berkeley, seismologists.
“Tremors seem to be extremely sensitive to minute stress changes,” said Roland Bürgmann, UC Berkeley professor of earth and planetary science. “Seismic waves from the other side of the planet triggered tremors on the Cascadia subduction zone off the coast of Washington state after the Sumatra earthquake last year, while the Denali earthquake in 2002 triggered tremors on a number of faults in California. Now we also see that tides – the daily lunar and solar tides – very strongly modulate tremors.”
In a paper appearing in the Dec. 24 issue of the journal Nature, UC Berkeley graduate student Amanda M. Thomas, seismologist Robert Nadeau of the Berkeley Seismological Laboratory and Bürgmann argue that this extreme sensitivity to stress – and specifically to shearing stress along the fault – means that the water deep underground is under extreme pressure.
“The big finding is that there is very high fluid pressure down there, that is, lithostatic pressure, which means pressure equivalent to the load of all rock above it, 15 to 30 kilometers (10 to 20 miles) of rock,” Nadeau said. “Water under very high pressure essentially lubricates the rock, making the fault very weak.”
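Nadeau's figure can be checked against the standard lithostatic pressure formula, P = ρgh. A quick sketch, assuming a typical crustal rock density of about 2700 kg/m³ (the density value is an assumption for illustration, not from the study):

```python
def lithostatic_pressure(depth_m, rho=2700.0, g=9.81):
    """Pressure exerted by the overlying rock column, in pascals (P = rho * g * h)."""
    return rho * g * depth_m

# At 25 km depth, the middle of the 15-30 km tremor zone:
p = lithostatic_pressure(25_000)
print(p / 1e6)  # roughly 660 MPa, the same order as the ~600 MPa quoted later in the article
```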
Though tides raised in the Earth by the sun and moon are not known to trigger earthquakes directly, they can trigger swarms of deep tremors, which could increase the likelihood of quakes on the fault above the tremor zone, the researchers say. At other fault zones, such as at Cascadia, swarms of tremors in the ductile zone deep underground correlate with slip at depth as well as increased stress on the shallower “seismogenic zone,” where earthquakes are generated. The situation on the San Andreas Fault is not so clear, however.
“These tremors represent slip along the fault 25 kilometers (15 miles) underground, and this slip should push the fault zone above in a similar pattern,” Bürgmann said. “But it seems like it must be very subtle, because we actually don’t see a tidal signal in regular earthquakes. Even though the earthquake zone also sees the tidal stress and also feels the added periodic behavior of the tremor below, they don’t seem to be very bothered.”
Nevertheless, said Nadeau, “It is certainly in the realm of reasonable conjecture that tremors are stressing the fault zone above it. The deep San Andreas Fault is moving faster when tremors are more active, presumably stressing the seismogenic zone, loading the fault a little bit faster. And that may have a relationship to stimulating earthquake activity.”
Seismologists were surprised when tremors were first discovered more than seven years ago, since the rock at that depth – for the San Andreas Fault, between 15 and 30 kilometers (10 to 20 miles) underground – is not brittle and subject to fracture, but deformable, like peanut butter. They called them non-volcanic tremors to distinguish them from tremors caused by fluid – water or magma – fracturing and flowing through rock under volcanoes. It was not clear, however, what caused the non-volcanic tremors, which are on the order of a magnitude 1 earthquake.
To learn more about the source of these tremors, UC Berkeley seismologists began looking for tremors five years ago in seismic recordings from the Parkfield segment of the San Andreas Fault obtained from sensitive bore-hole seismometers placed underground as part of the UC Berkeley’s High-Resolution Seismic Network. Using eight years of tremor data, Thomas, Bürgmann and Nadeau correlated tremor activity with the effects of the sun and moon on the crust and with the effects of ocean tides, which are driven by the moon.
They found the strongest effect when the pull on the Earth from the sun and moon sheared the fault in the direction it normally breaks. Because the San Andreas Fault is a right-lateral strike-slip fault, the west side of the fault tends to break north-northwestward, dragging Los Angeles closer to San Francisco.
“When shear stress on a plane parallel to the San Andreas Fault most encourages slipping in its normal slip direction is when we see the maximum tremor rate,” Bürgmann said. “The stress is many, many orders of magnitude less than the pressure down there, which was really, really surprising. You essentially could push it with your hand and it would move.”
In fact, the shear stress from the sun, moon and ocean tides amounts to around 100 pascals, or one-thousandth of atmospheric pressure, whereas the pressure 25 kilometers underground is on the order of 600 megapascals, or 6 million times greater….
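The arithmetic behind those two comparisons is easy to verify:

```python
ATM = 101_325.0          # standard atmospheric pressure, Pa
tidal_shear = 100.0      # tidal shear stress on the fault, Pa
depth_pressure = 600e6   # fluid pressure ~25 km down, Pa

print(tidal_shear / ATM)             # ~0.001, i.e. one-thousandth of an atmosphere
print(depth_pressure / tidal_shear)  # 6,000,000: six million times the tidal stress
```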
Posted by Xeno on December 24, 2009
A team of researchers from the University of Girona and the Max Planck Institute in Germany has shown that some mathematical algorithms provide clues about the artistic style of a painting. The composition of colours or certain aesthetic measurements can already be quantified by a computer, but machines are still far from being able to interpret art in the way that people do.
How does one place an artwork in a particular artistic period? This is the question raised by scientists from the Laboratory of Graphics and Image at the University of Girona and the Max Planck Institute for Biological Cybernetics in Germany. The researchers have shown that, using certain artificial vision algorithms, a computer can be programmed to “understand” an image and differentiate between artistic styles based on low-level pictorial information. Human classification strategies, however, also draw on medium- and high-level concepts.
Low-level pictorial information encompasses aspects such as brush thickness, the type of material and the composition of the palette of colours. Medium-level information differentiates between certain objects and scenes appearing in a picture, as well as the type of painting (landscape, portrait, still life, etc.). High-level information takes into account the historical context and knowledge of the artists and artistic trends.
“It will never be possible to precisely determine mathematically an artistic period nor to measure the human response to a work of art, but we can look for trends”, Miquel Feixas, one of the authors of the study, published in the journal Computers and Graphics, tells SINC.
The researchers analysed various artificial vision algorithms used to classify art, and found that certain aesthetic measurements (calculating “the order” of the image based on analysing pixels and colour distribution), as well as the composition and diversity of the palette of colours, can be useful.
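One representative measure of palette diversity (a hypothetical example; the article does not give the study's exact formulas) is the Shannon entropy of the colour histogram, which is zero for a monochrome image and rises as colours are used more evenly:

```python
import math
from collections import Counter

def palette_entropy(pixels):
    """Shannon entropy (bits) of a list of colour values; higher = more diverse palette."""
    counts = Counter(pixels)
    n = len(pixels)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

monochrome = ["red"] * 100
varied = ["red", "blue", "green", "ochre"] * 25
print(palette_entropy(monochrome))  # 0 bits: a single colour carries no diversity
print(palette_entropy(varied))      # 2 bits: four equally used colours
```

Measures of this kind capture only low-level pictorial information, which is exactly why, as the researchers note, they can suggest a style but cannot interpret a painting the way a person does.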
The team also worked with people with little knowledge of art, showing them more than 500 paintings done by artists from 11 artistic periods. The participants were “surprisingly good” at linking the artworks with their corresponding artistic period, showing the high capacity of human perception.
Beyond the implications for philosophy and art, the scientists want to apply their research to developing image viewing and analysis tools, classifying and searching museum collections, creating informative and entertainment installations for the public, and better understanding the interactions between people, computers and works of art. …
Posted by Xeno on December 24, 2009
From beetles to barnacles, pikas to pine warblers, many species are already on the move in response to shifting climate regimes. But how fast will they – and their habitats – have to move to keep pace with global climate change over the next century? In a new study, a team of scientists including Dr. Healy Hamilton from the California Academy of Sciences has calculated that on average, ecosystems will need to shift about 0.42 kilometers per year (about a quarter mile per year) to keep pace with changing temperatures across the globe. Mountainous habitats will be able to move more slowly, since a modest move up or down slope can result in a large change in temperature. However, flatter ecosystems, such as flooded grasslands, mangroves, and deserts, will need to move much more rapidly to stay in their comfort zone – sometimes more than a kilometer per year. The team, which also included scientists from the Carnegie Institution for Science, Climate Central, and U.C. Berkeley, will publish their results in the December 24 issue of Nature.
“One of the most powerful aspects of this data is that it allows us to evaluate how our current protected area network will perform as we attempt to conserve biodiversity in the face of global climate change,” says Healy Hamilton, Director of the Center for Applied Biodiversity Informatics at the California Academy of Sciences. “When we look at residence times for protected areas, which we define as the amount of time it will take current climate conditions to move across and out of a given protected area, only 8% of our current protected areas have residence times of more than 100 years. If we want to improve these numbers, we need to both reduce our carbon emissions and work quickly toward expanding and connecting our global network of protected areas.”
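Residence time, as Hamilton defines it, follows directly from a protected area's width and the local velocity of climate change. A sketch with a made-up park size (the 20 km figure is an illustrative assumption, not from the study):

```python
def residence_time(width_km, velocity_km_per_yr):
    """Years for current climate conditions to move across and out of a protected area."""
    return width_km / velocity_km_per_yr

# A hypothetical reserve 20 km across, at the global average velocity of 0.42 km/yr:
print(residence_time(20, 0.42))  # about 48 years, well under the 100-year mark
```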
The team calculated the velocity of global climate change by combining data on current climate and temperature regimes worldwide with a large suite of climate model projections for the next century. Their calculations are based on an “intermediate” level of projected greenhouse gas emissions over the next century (the A1B emissions scenario from The Intergovernmental Panel on Climate Change). Under these emissions levels, the velocity of climate change is projected to be the slowest in tropical and subtropical coniferous forests (0.08 kilometers per year), temperate coniferous forests (0.11 kilometers per year), and montane grasslands and shrublands (0.11 kilometers per year). The velocity of climate change is expected to be the fastest in flatter areas, including deserts and xeric shrublands (0.71 kilometers per year), mangroves (0.95 kilometers per year), and flooded grasslands and savannas (1.26 kilometers per year). …
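The velocity of climate change is, in essence, the rate of warming divided by the local spatial temperature gradient: a steep gradient (a mountainside) means isotherms need only creep, while a gentle gradient (a flat desert) forces them to race. A minimal sketch of that calculation, using illustrative numbers rather than the study's data:

```python
def climate_velocity(warming_c_per_yr, gradient_c_per_km):
    """km/yr an isotherm must travel: temporal gradient divided by spatial gradient."""
    return warming_c_per_yr / gradient_c_per_km

# Flat terrain: 0.03 C/yr of warming against a gentle 0.04 C/km horizontal gradient
print(climate_velocity(0.03, 0.04))  # 0.75 km/yr, comparable to the desert figure above

# Mountain slope: the same warming against a steep ~6.5 C/km elevational lapse rate
print(climate_velocity(0.03, 6.5))   # under 0.005 km/yr, slow like montane habitats
```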
Posted by Xeno on December 24, 2009
When New Year’s Eve rolls around and you’re deciding whether to have another glass of champagne, your decision may be predicted by your perspective on the future.
A pair of Kansas State University researchers found that people who tend to think in the long term are more likely to make positive decisions about their health, whether it’s how much they drink, what they eat, or their decision to wear sunscreen.
“If you are more willing to pick later, larger rewards rather than taking the immediate payoff, you are more future-minded than present-minded,” said James Daugherty, a doctoral student in psychology who led the study. “You’re more likely to exercise and less likely to smoke and drink.”
Daugherty conducted the research with Gary Brase, K-State associate professor of psychology. The research was presented in November at the Society for Judgment and Decision Making conference in Boston. It also appears in the January 2010 issue of the journal Personality and Individual Differences.
In addition to comparing people’s perspectives on time with their health behaviors, the researchers also wanted to see what type of time perspective measurements are better at predicting health behaviors.
To answer both of these questions, Daugherty and Brase had subjects — college students with an average age of 19 — answer surveys about whether they think in the short term or the long term.
“College students tend to be more future-minded by definition because they go to college rather than get a job right out of high school,” Brase said.
One survey asked cognitive psychology questions like “Would you prefer $35 today or $45 in 35 days?” The other surveys used two types of social psychology methods. These included having the subjects rate the extent to which they agree with statements like “I am willing to sacrifice my immediate happiness or well-being in order to achieve future outcomes.”
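The “$35 today or $45 in 35 days” item is a classic delay-discounting question: the choice implies a per-day discount rate at the point of indifference. A quick sketch of that arithmetic (the interpretation is mine, not the authors' scoring method):

```python
def implied_daily_rate(now_amount, later_amount, delay_days):
    """Daily growth rate that makes the delayed reward match the immediate one."""
    return (later_amount / now_amount) ** (1 / delay_days) - 1

rate = implied_daily_rate(35, 45, 35)
print(rate)                    # about 0.7% per day
print((1 + rate) ** 365 - 1)   # compounds to well over 1000% per year
```

Anyone who turns down that implicit return to take the $35 now is, in this framing, heavily present-minded.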
The subjects then took surveys that asked questions like how often they ate breakfast, used tobacco and exercised, as well as their concerns with health risks like high cholesterol and contracting AIDS.
Daugherty and Brase found that the subjects who gave future-minded answers in the initial surveys were more likely to report healthy behaviors in the latter survey. They said this could have consequences for how people deal with negative health behaviors.
“There is a lot of potential for helping people make better health decisions,” Brase said. “People who tend to have a very present-minded perspective will have an easier time following through with a change if they can see rewards sooner. So if somebody goes into a weight loss center, the clinicians could measure a client’s time perspective. Then the clinicians would know the more effective way of helping the client reach his or her weight loss goal.”
Daugherty said a present-minded person could be encouraged by emphasizing minimal investment now for a quick payoff in the near future. He said it’s similar to exercise equipment commercials touting that by exercising 20 minutes a day, several times a week, you will see immediate payoffs.
“You promote the idea that you have to do very little and you’re going to see these great results,” Daugherty said.
He and Brase also found that by asking social psychology questions to determine whether someone was future-minded or present-minded, the researchers were better able to predict subjects’ health behaviors….
Posted by Xeno on December 24, 2009
Images of graphene oxide sheets deposited on a SiO2/Si substrate, acquired by atomic force microscopy (AFM), scanning electron microscopy (SEM), optical microscopy in reflectance mode, and the new fluorescence quenching microscopy (FQM). FQM offers contrast and layer resolution comparable to AFM and SEM.
It’s been used to dye the Chicago River green on St. Patrick’s Day. It’s been used to find latent blood stains at crime scenes. And now researchers at Northwestern University have used it to examine the thinnest material in the world.
The useful tool is the dye fluorescein, and Jiaxing Huang, assistant professor of materials science and engineering at the McCormick School of Engineering and Applied Science, and his research group have used the dye to create a new imaging technique to view graphene, a one-atom thick sheet that scientists believe could be used to produce low-cost carbon-based transparent and flexible electronics.
Their results were recently published in the Journal of the American Chemical Society.
Being the world’s thinnest materials, graphene and its derivatives such as graphene oxide are quite challenging to see. Current imaging methods for graphene materials typically involve expensive and time-consuming techniques. For example, atomic force microscopy (AFM), which scans materials with a tiny tip, is frequently used to obtain images of graphene materials. But it is a slow process that can only look at small areas on smooth surfaces. Scanning electron microscopy (SEM), which scans a surface with high-energy electrons, only works if the material is placed in vacuum. Some optical microscopy methods are available, but they require the use of special substrates, too.
“There are really no good techniques that are general enough to meet the diverse imaging needs in the research and development of this group of new materials,” Huang says. “For example, people have proposed putting graphene materials on plastic sheets for flexible electronics, but seeing them on plastic has been very challenging. If one cannot examine these materials, quality control is going to be difficult.”
Fluorescent labeling has been used routinely to image biological samples, typically by using fluorescent dyes that make the objects of interest light up under a fluorescence microscope. But such a technique doesn’t work for graphene materials because of a mechanism called fluorescence quenching: graphene materials can “turn off” the fluorescence of nearby dye molecules.
“So we thought, how about we just put dye everywhere?” Huang says. “That way, the whole background lights up, and wherever you have graphene will be dark. It’s an inverse strategy that turns out to work beautifully.”
When Huang and his group coated a graphene sample with fluorescein and put it under a fluorescence microscope — a much cheaper, readily available instrument — they obtained images as clear as those acquired with AFM and SEM.
Posted by Xeno on December 24, 2009
Are your 11- and 12-year-olds staying up later, then dozing off at school the next day? Parents and educators who notice poor sleeping patterns in their children should take note of new research from Tel Aviv University ― and prepare themselves for bigger changes to come.
Prof. Avi Sadeh of TAU’s Department of Psychology suggests that changes in children’s sleep patterns are evident just before the onset of physical changes associated with puberty. He counsels parents and educators to make sure that pre-pubescent children get the good, healthy sleep that their growing and changing bodies need.
“It is very important for parents to be aware of the importance of sleep for their developing children and to maintain their supervision throughout the adolescent years,” says Sadeh, who reported his research findings in a recent issue of the journal Sleep. “School health education should also provide children with compelling information on how insufficient sleep compromises their well-being, psychological functioning and school achievements.”
Every minute counts
Results of the study, supported by the Israel Science Foundation, show that over a two-year period, sleep onset was significantly delayed by an average of 50 minutes in the study subjects, and sleep time was significantly reduced by an average of 37 minutes. Girls also had higher sleep efficiency and reported fewer night wakings than boys. For both, initial levels of sleep predicted an increase in pubertal development over time. This suggests that the neurobehavioral changes associated with puberty may be seen earlier in sleep organization than in bodily changes.
“Biological factors have a significant influence on sleep during puberty, although psychosocial issues such as school demands, social activities and technological distractions can also lead to the development of bad sleep habits,” he explains.
According to Prof. Sadeh, sleep-wake organization undergoes significant changes during the transition to adolescence. These changes include a delayed sleep phase, which involves a tendency towards later bedtimes and risetimes; shorter sleep, which is associated with increased levels of daytime sleepiness; and irregular sleep patterns, which involve sleeping very little on weekdays and sleeping longer during weekends to compensate. During maturation, adolescents also develop a greater tolerance for sleep deprivation or extended wakefulness….