Wednesday, September 23, 2009

Tel Aviv University's 'Dust Alert' exposes dangerous invisible pollution, pollen and construction waste

Public release date: 22-Sep-2009

Contact: George Hunka
ghunka@aftau.org
212-742-9070
American Friends of Tel Aviv University
Tel Aviv University invention busts dust
TAU's 'Dust Alert' exposes dangerous invisible pollution, pollen and construction waste

Worried that dust from a nearby construction zone will harm your family's health? A new Tel Aviv University tool could either confirm your suspicions or, better yet, set your mind at rest.

Prof. Eyal Ben-Dor and his Ph.D. student Dr. Sandra Chudnovsky, of TAU's Department of Geography, have developed a sensor called "Dust Alert" ― the first of its kind ― to help families and authorities monitor the quality of the air they breathe. Like an ozone or carbon monoxide meter, it measures the concentration of small particles that may contaminate the air in your home. Scientific studies on "Dust Alert" appeared recently in the journal Science of the Total Environment and in Urban Air Pollution: Problems, Control Technologies and Management Practices.

"It works just like an ozone meter would," says Prof. Ben-Dor. "You put it in your home or office for three weeks, and it can give you real-time contamination levels in terms of dust, pollen and toxins." Functioning like a tiny chemistry lab, the device can precisely determine the chemical composition of the toxins, so homeowners, office managers and factories can act to improve air quality.

Using the measurements, Prof. Ben-Dor can sometimes find a quick remedy for a dusty or pollen-filled home. The solution could be as easy as keeping a window open, he says. "We've found through our ongoing research that some simple actions at home can have a profound effect on the quality of air we breathe."

Instant results

Based on a portable chemical analyzer called a spectrophotometer, the invention can be installed and begin to collect data within minutes, although several weeks' worth of samples produces the best assessment of air quality. The longer period allows for fluctuations in both internal and external environments, such as changing weather patterns.

The "Dust Alert" fills an important need. Polluted air, breathed in for weeks, months and sometimes years, can have fatal consequences, leading to asthma, bronchitis and lung cancer. With findings from Prof. Ben-Dor's invention, urban planners can provide better solutions and mitigate risks. "We can certainly give an accurate forecast about the health of a home or apartment for prospective home owners. If somebody in your family has an allergy, poor air quality can be a deal breaker ," says Prof. Ben-Dor.

Prof. Ben-Dor's device may be most useful in the aftermath of disasters, such as chemical fires, heavy dust storms, hurricanes or tragedies like 9/11. Survivors of these situations are usually unaware of the lingering environmental problems, and the government can't do enough to protect them because no accurate tools exist to define the risk. Using a Dust Alert, residents could be advised to vacate their homes and offices until the dust has cleared, or to take simple precautions such as aerating hazardous rooms in a flat, suggests Prof. Ben-Dor.

Putting dust on the map

According to Prof. Ben-Dor, the Dust Alert could also be used by cities and counties to develop "dust maps" that provide detailed environmental information about streets and neighborhoods, permitting government authorities like the EPA to more successfully identify and prosecute offenders. Currently, for example, there is no system for demonstrating how construction sites compromise people's health.

"Until now, people have had to grin and bear the polluted air they breathe," says Prof. Ben-Dor. "The Dust Alert could provide crucial reliable evidence of pollution, so that society at large can breathe easier. We can see the dust on the furniture and on the windows, but most of us can't see the dust we breathe. For the first time, we are able to detect it and measure its more dangerous components."

With their dust maps, TAU scientists have already correlated urban heat islands with high levels of particulate matter, giving urban planners crucial information for the development of green spaces and city parks. Prof. Ben-Dor also plans to develop his prototype into a home-and-office unit, while offering customized services that can help people decode what's left in the dust.

###

American Friends of Tel Aviv University (www.aftau.org) supports Israel's leading and most comprehensive center of higher learning. In independent rankings, TAU's innovations and discoveries are cited more often by the global scientific community than those of all but 20 other universities worldwide.

Internationally recognized for the scope and groundbreaking nature of its research programs, Tel Aviv University consistently produces work with profound implications for the future.


Diamonds may be the ultimate MRI probe, say quantum physicists

A search for quantum computers led to a medical application. One day we may get MRI-like devices that can probe individual drug molecules and living cells.

K. S. Parthasarathy


Public release date: 22-Sep-2009

Contact: Chad Boutin
boutin@nist.gov
301-975-4261
National Institute of Standards and Technology (NIST)
Diamonds may be the ultimate MRI probe, say quantum physicists

Diamonds, it has long been said, are a girl's best friend. But a research team including a physicist from the National Institute of Standards and Technology (NIST) has recently found* that the gems might turn out to be a patient's best friend as well.

The team's work has the long-term goal of developing quantum computers, but it has borne fruit that may have more immediate application in medical science. Their finding that a candidate "quantum bit" has great sensitivity to magnetic fields hints that MRI-like devices that can probe individual drug molecules and living cells may be possible.

The candidate system, formed from a nitrogen atom lodged within a diamond crystal, is promising not only because it can sense atomic-scale variations in magnetism, but also because it functions at room temperature. Most other such devices used either in quantum computation or for magnetic sensing must be cooled to nearly absolute zero to operate, making it difficult to place them near live tissue. However, using the nitrogen as a sensor or switch could sidestep that limitation.

Diamond, which is formed of pure carbon, occasionally has minute imperfections within its crystalline lattice. A common impurity is a "nitrogen vacancy", in which two carbon atoms are replaced by a single atom of nitrogen, leaving the other carbon atom's space vacant. Nitrogen vacancies are in part responsible for diamond's famed luster, for they are actually fluorescent: when green light strikes them, the nitrogen atom's two excitable unpaired electrons glow a brilliant red.

The team can use slight variations in this fluorescence to determine the magnetic spin of a single electron in the nitrogen. Spin is a quantum property that has a value of either "up" or "down," and therefore could represent one or zero in binary computation. The team's recent achievement was to transfer this quantum information repeatedly between the nitrogen electron and the nuclei of adjacent carbon atoms, forming a small circuit capable of logic operations. Reading a quantum bit's spin information—a fundamental task for a quantum computer—has been a daunting challenge, but the team demonstrated that by transferring the information back and forth between the electron and the nuclei, the information could be amplified, making it much easier to read.
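
The power of this repetitive readout can be illustrated with a toy model (not the team's actual analysis): if a single optical readout reports the electron's spin correctly only slightly more often than chance, repeating the measurement and taking a majority vote drives the error down quickly. A minimal Python sketch, with the single-shot fidelity of 0.6 chosen purely for illustration:

# Toy model of repetitive spin readout: each single-shot measurement
# reports the true spin with probability p; a majority vote over n
# repetitions is far more reliable. The value p = 0.6 is an
# illustrative assumption, not a figure from the paper.
from math import comb

def majority_vote_fidelity(p: float, n: int) -> float:
    """Probability that the majority of n independent readouts is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(f"{n:3d} repetitions -> fidelity {majority_vote_fidelity(0.6, n):.3f}")
# 1 -> 0.600, 11 -> ~0.75, 101 -> ~0.98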

Still, NIST theoretical physicist Jacob Taylor said the findings are "evolutionary, not revolutionary" for the quantum computing field and that the medical world may reap practical benefits from the discovery long before a working quantum computer is built. He envisions diamond-tipped sensors performing magnetic resonance tests on individual cells within the body, or on single molecules drug companies want to investigate—a sort of MRI scanner for the microscopic. "That's commonly thought not to be possible because in both of these cases the magnetic fields are so small," Taylor says. "But this technique has very low toxicity and can be done at room temperature. It could potentially look inside a single cell and allow us to visualize what's happening in different spots."

The Harvard University-based team also includes scientists from the Joint Quantum Institute (a partnership of NIST and the University of Maryland), the Massachusetts Institute of Technology and Texas A&M University.

###

* L. Jiang, J.S. Hodges, J.R. Maze, P. Maurer, J.M. Taylor, D.G. Cory, P.R. Hemmer, R.L. Walsworth, A. Yacoby, A.S. Zibrov and M.D. Lukin. Repetitive readout of a single electronic spin via quantum logic with nuclear spin ancillae. Science, DOI: 10.1126/science.1176496, published online Sept. 10, 2009.

See http://www.nist.gov/public_affairs/techbeat/tb2009_0922.htm#diamonds for illustration to accompany story.


Tuesday, September 15, 2009

Study identifies which children do not need CT scans after head trauma

There is general awareness that children must not be exposed to unwanted x-ray doses. This paper came at an opportune time. Bold implementation of the guidelines will be essential. Professional associations must review the work urgently and accept the guidelines, with modifications if any are required.

Dr. K. S. Parthasarathy

EurekAlert

Public release date: 14-Sep-2009

Contact: Charlie Casey
charles.casey@ucdmc.ucdavis.edu
916-734-9048
University of California - Davis - Health System
Study identifies which children do not need CT scans after head trauma
Research provides new guidelines to identify children with mild injuries and reduce radiation exposure from CT

A substantial percentage of children who get CT scans after apparently minor head trauma do not need them, and as a result are put at increased risk of cancer due to radiation exposure. After analyzing more than 42,000 children with head trauma, a national research team led by two UC Davis emergency department physicians has developed guidelines for doctors who care for children with head trauma aimed at reducing those risks.

Their findings appear in an article published online today and in an upcoming edition of The Lancet.

The collaborative study includes data collected at 25 hospitals from children who were evaluated for the possibility of serious brain injury following trauma to the head. Researchers found that one in five children over the age of 2 and nearly one-quarter of those under 2 who received CT scans following head trauma did not need them because they were at very low risk of having serious brain injuries. In these low-risk children, the risk of developing cancer due to radiation exposure outweighed the risk of serious brain injury.

"When you have a sample size this large, it is easier to get your hands on the truth," said Nathan Kuppermann, professor and chair of emergency medicine, professor of pediatrics at UC Davis Children's Hospital and lead author of the study. "We think our investigation provides the best available evidence regarding the use of CT scans in children with head trauma, and it indicates that CT use can be safely reduced by eliminating its application in those children who are at very low risk of serious brain injuries."

As part of the study, Kuppermann and his colleagues developed a set of rules for identifying low-risk patients who would not need a CT. The "prediction rules" for children under 2 and for those 2 and older depend on the presence or absence of various symptoms and circumstances, including the way the injury was sustained, a history of loss of consciousness, neurological status at the time of evaluation and clinical evidence of skull fracture for both age groups. The use of CT in patients who do not fall into the low-risk group identified by the prediction rules will depend on other considerations, such as the physician's experience and the severity and number of symptoms.

The Centers for Disease Control estimates that 435,000 children under 14 visit emergency rooms every year to be evaluated for traumatic brain injury (TBI). Not all head trauma results in a TBI. The severity of a brain injury may range from mild, causing brief change in mental status or consciousness, to severe, causing permanent symptoms and irreversible damage.

For years, studies have suggested that CT scans were being overused to rule out traumatic brain injuries. However, those studies were considered too small to be sufficiently accurate and not precise enough to be widely applicable to a general population. The sheer size of the current study, and the fact that the investigators created the accurate prediction rules with one large group of children with head trauma and then tested the rules on another large but separate group to demonstrate their validity, allows physicians to have confidence in the results. The researchers emphasized, however, that the rules are not intended to replace clinical judgment.

"We're arming the clinician with the best available evidence so that they can make the best decisions," said James Holmes, professor of emergency medicine at UC Davis School of Medicine and a co-author of the report. "There certainly are instances when the risks of radiation are worth it, such as in cases of blunt head trauma which result in changes in neurological status or clinical evidence of skull fractures. However, clinicians need reliable data to help them make those judgment calls when it is not clear whether or not a patient needs a CT. Until now, physicians haven't had data based on large and validated research."

The current study comes on the heels of an article published in late August by The New England Journal of Medicine that showed that at least 4 million Americans under age 65 are exposed to high doses of radiation each year from medical imaging tests, with CT scans accounting for almost one half of the total radiation dose. About 10 percent of those get more than the maximum annual exposure allowed for nuclear power plant employees or anyone else who works with radioactive material.

Studies show that exposure to radiation increases the risk of cancer. Radiation exposure to the brain of developing children is of particular concern and must be weighed carefully against the risk of traumatic brain injury that could cause permanent damage or death if not identified early. If the new guidelines are applied appropriately, the use of CT scans nationwide could be significantly reduced.

The effort was made possible by the Pediatric Emergency Care Applied Research Network (PECARN), which enabled the massive collection of data. Supported by the U.S. Department of Health and Human Services' Emergency Medical Services for Children Program, PECARN is the first federally funded, multi-institutional network for research in pediatric emergency medicine in the nation. The network conducts research into the prevention and management of acute illnesses and injuries in children and youth across the continuum of emergency medicine and health care.

"Children with medical and traumatic illnesses usually have good outcomes, but you need a lot of children to assess factors and treatments that predict both good and bad outcomes. By studying large numbers of children, in a variety of settings and from diverse populations, the results will more likely be applicable to the general population. That's the power of PECARN," Kuppermann said. "Combined, our network of emergency departments around the country evaluates approximately 1 million children per year."

Along with the UC Davis team, key PECARN researchers in the Lancet study included Peter S. Dayan, from New York-Presbyterian Hospital and Columbia University Medical Center in New York; John D. Hoyle, Jr., from Helen DeVos Children's Hospital in Grand Rapids; Shireen M. Atabaki, from Children's National Medical Center in Washington, D.C.; and Richard Holubkov from the PECARN Data Coordinating Center at the University of Utah.

In order to create the prediction rules, the PECARN investigators studied outcomes in more than 42,000 children with minor initial symptoms and signs of head trauma. CT scans were performed in nearly 15,000 of those patients. Serious brain injuries were diagnosed in 376 children, and 60 children underwent neurosurgery.

Using these data, the researchers developed two prediction rules for identifying mild cases that do not need CT scans. One rule was developed for children under the age of 2 and another for those 2 and over. It was important to study children under 2 separately because they cannot communicate their symptoms or offer information as well as older children, and they are more sensitive to the effects of radiation.

Children under 2 who fell into the low-risk group showed normal mental status, no scalp swelling, no significant loss of consciousness and no palpable skull fracture; were acting normally, according to a parent; and had sustained the injury in a non-severe way. Severe accidents, which excluded children from the low-risk group, included motor vehicle crashes in which the patient was ejected, and bicycle accidents involving automobiles in which the patient was not wearing a helmet. Key indicators for children older than 2 who were at low risk for brain injury included normal mental status, no loss of consciousness, no vomiting, no signs of fracture of the base of the skull, no severe headache, and an injury not sustained in a serious accident.
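
Criteria like these translate naturally into a simple screening check. The sketch below renders the under-2 criteria exactly as summarized in this release; it is a hypothetical illustration, not the validated PECARN instrument, and no clinical decision should rest on it.

# Hypothetical sketch of the under-2 low-risk screen described above.
# Field names are illustrative; the validated rule and its precise
# definitions come from the Lancet paper itself.
from dataclasses import dataclass

@dataclass
class HeadTraumaFindings:
    normal_mental_status: bool
    scalp_swelling: bool
    significant_loss_of_consciousness: bool
    palpable_skull_fracture: bool
    acting_normally_per_parent: bool
    severe_mechanism: bool  # e.g. ejection from a vehicle, unhelmeted bicycle-vs-car crash

def low_risk_under_2(f: HeadTraumaFindings) -> bool:
    """True only if every low-risk criterion listed above is met."""
    return (f.normal_mental_status
            and not f.scalp_swelling
            and not f.significant_loss_of_consciousness
            and not f.palpable_skull_fracture
            and f.acting_normally_per_parent
            and not f.severe_mechanism)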

The researchers then validated these rules by applying them to data from a second population of more than 8,600 children. In more than 99.9 percent of the cases, the rules accurately predicted children who were not diagnosed with serious brain injuries and were therefore indeed at low risk.

The researchers also identified and separated children at intermediate and high risk of serious brain injuries. Those in the high-risk group should receive CT scans, the researchers wrote. The PECARN team is currently working on refining recommendations for the use of CT scans in those at intermediate risk. Until now, emergency room physicians have relied mostly on instincts when deciding whether or not the symptoms of a child with head trauma warrant the use of CT.

"Now we have much better evidence to assist with making decisions regarding CT use," Kuppermann said.

###

UC Davis has been part of PECARN since its inception in 2001. It is the leading center in one of four PECARN Research Nodes, which also includes Children's Hospital of Philadelphia; St. Louis Children's Hospital; Children's Hospital of Wisconsin; Cincinnati Children's Hospital Medical Center; and Primary Children's Medical Center in Salt Lake City.

A total of 32 PECARN researchers were substantially involved in this study. This research was supported by the Emergency Medical Services for Children program of the Maternal and Child Health Bureau, and the Maternal and Child Health Bureau Research Program, Health Resources and Services Administration, U.S. Department of Health and Human Services.

Friday, September 11, 2009

Caltech scientists develop novel use of neurotechnology to solve classic social problem

Public release date: 10-Sep-2009


Contact: Lori Oliwenstein
lorio@caltech.edu
626-395-3631
California Institute of Technology
Caltech scientists develop novel use of neurotechnology to solve classic social problem
Research shows how brain imaging can be used to create new and improved solutions to the public-goods provision problem

PASADENA, Calif.—Economists and neuroscientists from the California Institute of Technology (Caltech) have shown that they can use information obtained through functional magnetic resonance imaging (fMRI) measurements of whole-brain activity to create feasible, efficient, and fair solutions to one of the stickiest dilemmas in economics, the public goods free-rider problem—long thought to be unsolvable.

This is one of the first-ever applications of neurotechnology to real-life economic problems, the researchers note. "We have shown that by applying tools from neuroscience to the public-goods problem, we can get solutions that are significantly better than those that can be obtained without brain data," says Antonio Rangel, associate professor of economics at Caltech and the paper's principal investigator.

The paper describing their work was published today in Science Express, the online edition of the journal Science.

Examples of public goods range from healthcare, education, and national defense to the weight room or heated pool that your condominium board decides to purchase. But how does the government or your condo board decide which public goods to spend its limited resources on? And how do these powers decide the best way to share the costs?

"In order to make the decision optimally and fairly," says Rangel, "a group needs to know how much everybody is willing to pay for the public good. This information is needed to know if the public good should be purchased and, in an ideal arrangement, how to split the costs in a fair way."

In such an ideal arrangement, someone who swims every day should be willing to pay more for a pool than someone who hardly ever swims. Likewise, someone who has kids in public school should have more of her taxes put toward education.

But providing public goods optimally and fairly is difficult, Rangel notes, because the group leadership doesn't have the necessary information. And when people are asked how much they value a particular public good—with that value measured in terms of how many of their own tax dollars, for instance, they'd be willing to put into it—their tendency is to lowball.

Why? "People can enjoy the good even if they don't pay for it," explains Rangel. "Underreporting its value to you will have a small effect on the final decision by the group on whether to buy the good, but it can have a large effect on how much you pay for it."

In other words, he says, "There's an incentive for you to lie about how much the good is worth to you."

That incentive to lie is at the heart of the free-rider problem, a fundamental quandary in economics, political science, law, and sociology. It's a problem that professionals in these fields have long assumed has no solution that is both efficient and fair.

In fact, for decades it's been assumed that there is no way to give people an incentive to be honest about the value they place on public goods while maintaining the fairness of the arrangement.

"But this result assumed that the group's leadership does not have direct information about people's valuations," says Rangel. "That's something that neurotechnology has now made feasible."

And so Rangel, along with Caltech graduate student Ian Krajbich and their colleagues, set out to apply neurotechnology to the public-goods problem.

In their series of experiments, the scientists tried to determine whether functional magnetic resonance imaging (fMRI) could allow them to construct informative measures of the value a person assigns to one or another public good. Once they'd determined that fMRI images—analyzed using pattern-classification techniques—can convey at least some information (albeit "noisy" and imprecise) about what a person values, they went on to test whether that information could help them solve the free-rider problem.

They did this by setting up a classic economic experiment, in which subjects would be rewarded (paid) based on the values they were assigned for an abstract public good.

As part of this experiment, volunteers were divided up into groups. "The entire group had to decide whether or not to spend their money purchasing a good from us," Rangel explains. "The good would cost a fixed amount of money to the group, but everybody would have a different benefit from it."

The subjects were asked to reveal how much they valued the good. The twist? Their brains were being imaged via fMRI as they made their decision. If there was a match between their decision and the value detected by the fMRI, they paid a lower tax than if there was a mismatch. It was, therefore, in all subjects' best interest to reveal how they truly valued a good; by doing so, they would on average pay a lower tax than if they lied.

"The rules of the experiment are such that if you tell the truth," notes Krajbich, who is the first author on the Science paper, "your expected tax will never exceed your benefit from the good."

In fact, the more cooperative subjects are when undergoing this entirely voluntary scanning procedure, "the more accurate the signal is," Krajbich says. "And that means the less likely they are to pay an inappropriate tax."
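
The shape of that incentive can be sketched numerically. The penalty form below (a charge that grows with the gap between the stated value and a noisy neural estimate) is an assumption made for illustration; the experiment's actual tax schedule is specified in the Science paper.

# Illustrative truth-telling incentive: the tax rises with the mismatch
# between the reported value and a noisy neural estimate of the true
# value. The base share, penalty weight and noise level are assumed
# numbers, not the experiment's parameters.
import random

def expected_tax(true_value: float, report: float, noise_sd: float = 2.0,
                 base_share: float = 5.0, penalty: float = 1.0,
                 trials: int = 100_000) -> float:
    total = 0.0
    for _ in range(trials):
        neural_estimate = random.gauss(true_value, noise_sd)  # noisy fMRI-style signal
        total += base_share + penalty * abs(report - neural_estimate)
    return total / trials

print(expected_tax(true_value=10, report=10))  # truthful report: lower expected tax
print(expected_tax(true_value=10, report=4))   # underreporting: higher expected tax

Under any penalty of this general shape, the expected tax is minimized by reporting honestly, which is the property the Caltech mechanism exploits.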

This changes the whole free-rider scenario, notes Rangel. "Now, given what we can do with the fMRI," he says, "everybody's best strategy in assigning value to a public good is to tell the truth, regardless of what you think everyone else in the group is doing."

And tell the truth they did—98 percent of the time, once the rules of the game had been established and participants realized what would happen if they lied. In this experiment, there is no free ride, and thus no free-rider problem.

"If I know something about your values, I can give you an incentive to be truthful by penalizing you when I think you are lying," says Rangel.

While the readings do give the researchers insight into the value subjects might assign to a particular public good, thus allowing them to know when those subjects are being dishonest about the amount they'd be willing to pay toward that good, Krajbich emphasizes that this is not actually a lie-detector test.

"It's not about detecting lies," he says. "It's about detecting values—and then comparing them to what the subjects say their values are."

"It's a socially desirable arrangement," adds Rangel. "No one is hurt by it, and we give people an incentive to cooperate with it and reveal the truth."

"There is mind reading going on here that can be put to good use," he says. "In the end, you get a good produced that has a high value for you."

From a scientific point of view, says Rangel, these experiments break new ground. "This is a powerful proof of concept of this technology; it shows that this is feasible and that it could have significant social gains."

And this is only the beginning. "The application of neural technologies to these sorts of problems can generate a quantum leap improvement in the solutions we can bring to them," he says.

Indeed, Rangel says, it is possible to imagine a future in which, instead of a vote on a proposition to fund a new highway, this technology is used to scan a random sample of the people who would benefit from the highway to see whether it's really worth the investment. "It would be an interesting alternative way to decide where to spend the government's money," he notes.

###

In addition to Rangel and Krajbich, other authors on the Science paper, "Using neural measures of economic value to solve the public goods free-rider problem," include Caltech's Colin Camerer, the Robert Kirby Professor of Behavioral Economics, and John Ledyard, the Allen and Lenabelle Davis Professor of Economics and Social Sciences. Their work was funded by grants from the National Science Foundation, the Gordon and Betty Moore Foundation, and the Human Frontier Science Program.



Environmental scientists estimate that China could meet its entire future energy needs by wind alone

Public release date: 10-Sep-2009

Contact: Michael Patrick Rutter
mrutter@seas.harvard.edu
617-496-3815
Harvard University
Environmental scientists estimate that China could meet its entire future energy needs by wind alone
Study suggests that wind is ecologically and economically practical and could reduce CO2 emissions

Cambridge, Mass. – September 10, 2009 – A team of environmental scientists from Harvard and Tsinghua University has demonstrated the enormous potential for wind-generated electricity in China. Using extensive meteorological data and incorporating the Chinese government's energy bidding and financial restrictions for delivering wind power, the researchers estimate that wind alone has the potential to meet the country's electricity demands projected for 2030.

The switch from coal and other fossil fuels to greener wind-based energy could also mitigate CO2 emissions, thereby reducing pollution. The report appeared as a cover story in the September 11th issue of Science.

"The world is struggling with the question of how do you make the switch from carbon-rich fuels to something carbon-free," said lead author Michael B. McElroy, Gilbert Butler Professor of Environmental Studies at Harvard's School of Engineering and Applied Sciences (SEAS).

China has become second only to the U.S. in its national power-generating capacity—792.5 gigawatts, expected to grow by about 10 percent annually—and is now the world's largest CO2 emitter. Thus, added McElroy, "the real question for the globe is: What alternatives does China have?"

While wind-generated energy accounts for only 0.4 percent of China's total current electricity supply, the country is rapidly becoming the world's fastest-growing market for wind power, trailing only the U.S., Germany, and Spain in terms of installed capacity of existing wind farms.

Development of renewable energy in China, especially wind, received an important boost with passage of the Renewable Energy Law in 2005; the law provides favorable tax status for alternative energy investments. The Chinese government also established a concession bidding process to guarantee a reasonable return for large wind projects.

"To determine the viability of wind-based energy for China we established a location-based economic model, incorporating the bidding process, and calculated the energy cost based on geography," said co-author Xi Lu, a graduate student in McElroy's group at SEAS. "Using the same model we also evaluated the total potentials for wind energy that could be realized at a certain cost level."

Specifically, the researchers used meteorological data from the Goddard Earth Observing System Data Assimilation System (GEOS) at NASA. Further, they assumed the wind energy would be produced from a set of land-based 1.5-megawatt turbines operating over non-forested, ice-free, rural areas with a slope of no more than 20 percent.

"By bringing the capabilities of atmospheric science to the study of energy we were able to view the wind resource in a total context," explained co-author Chris P. Nielsen, Executive Director of the Harvard China Project, based at SEAS.

The analysis indicated that a network of wind turbines operating at as little as 20 percent of their rated capacity could provide as much as 24.7 petawatt-hours of electricity annually, more than seven times China's current consumption. The researchers also determined that wind energy alone, at around 7.6 U.S. cents per kilowatt-hour, could accommodate the country's entire demand for electricity projected for 2030.
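
The capacity-factor arithmetic behind such estimates is easy to check. A rough cross-check in Python, using only the figures quoted in this release (the implied installed capacity is a derived, approximate number, not one stated by the authors):

# annual energy = installed capacity x capacity factor x hours per year
HOURS_PER_YEAR = 8760
capacity_factor = 0.20
annual_energy_pwh = 24.7  # petawatt-hours per year, from the study

# Implied installed capacity, in terawatts (1 PWh/h = 1,000 TW):
implied_capacity_tw = annual_energy_pwh * 1000 / (capacity_factor * HOURS_PER_YEAR)
print(f"Implied capacity: {implied_capacity_tw:.1f} TW")  # ~14.1 TW of turbines

# One 1.5-megawatt turbine at the same 20 percent capacity factor:
per_turbine_mwh = 1.5 * capacity_factor * HOURS_PER_YEAR
print(f"Per turbine: {per_turbine_mwh:.0f} MWh/year")     # 2,628 MWh/year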

"Wind farms would only need to take up land areas of 0.5 million square kilometers, or regions about three quarters of the size of Texas. The physical footprints of wind turbines would be even smaller, allowing the areas to remain agricultural," said Lu.

By contrast, to meet the increased demand for electricity during the next 20 years using fossil fuel-based energy sources, China would have to construct coal-fired power plants that could produce the equivalent of 800 gigawatts of electricity, resulting in a potential increase of 3.5 gigatons of CO2 per year. The use of cleaner wind energy could both meet future demands and, even if only used to supplement existing energy sources, significantly reduce carbon emissions.

Moving to a low-carbon energy future would require China to invest around $900 billion (at current prices) over the same twenty-year period. The scientists consider this a large but not unreasonable investment given the present size of the Chinese economy. Moreover, whatever the energy source, the country will need to build and support an expanded energy grid to accommodate the anticipated growth in power demand.

"We are trying to cut into the current defined demand for new electricity generation in China, which is roughly a gigawatt a week—or an enormous 50 gigawatts per year," said McElroy. "China is bringing on several coal fire power plants a week. By publicizing the opportunity for a different way to go we will hope to have a positive influence."

In the coming months, the researchers plan to conduct a more intensive wind study in China, taking advantage of 25 years of data with significantly higher spatial resolution for north Asian regions to investigate the geographical year-to-year variations of wind. The model used for assessing China could also be applied to assessing wind potential anywhere in the world, onshore and offshore, and could be extended to solar-generated electricity.

###

Yuxuan Wang, Associate Professor in the Department of Environmental Science and Engineering at Tsinghua University, Beijing, China, also contributed to the study. The team's research was supported by a grant from the National Science Foundation (NSF).

Carbon nanotubes could make efficient solar cells

Public release date: 10-Sep-2009


Contact: Blaine Friedlander
bpf2@cornell.edu
607-254-8093
Cornell University
Carbon nanotubes could make efficient solar cells

Using a carbon nanotube instead of traditional silicon, Cornell researchers have created the basic elements of a solar cell that they hope will lead to much more efficient ways of converting light to electricity than those now used in calculators and on rooftops.

The researchers fabricated, tested and measured a simple solar cell called a photodiode, formed from an individual carbon nanotube. In a paper published online Sept. 11 in the journal Science, the researchers -- led by Paul McEuen, the Goldwin Smith Professor of Physics, and Jiwoong Park, assistant professor of chemistry and chemical biology -- describe how their device converts light to electricity in an extremely efficient process that multiplies the amount of electrical current that flows. This process could prove important for next-generation high-efficiency solar cells, the researchers say.

"We are not only looking at a new material, but we actually put it into an application -- a true solar cell device," said first author Nathan Gabor, a graduate student in McEuen's lab.

The researchers used a single-walled carbon nanotube, which is essentially a rolled-up sheet of graphene, to create their solar cell. About the size of a DNA molecule, the nanotube was wired between two electrical contacts and close to two electrical gates, one negatively and one positively charged. Their work was inspired in part by previous research in which scientists created a diode, which is a simple transistor that allows current to flow in only one direction, using a single-walled nanotube. The Cornell team wanted to see what would happen if they built something similar, but this time shined light on it.

Shining lasers of different colors onto different areas of the nanotube, they found that higher levels of photon energy had a multiplying effect on how much electrical current was produced.

Further study revealed that the narrow, cylindrical structure of the carbon nanotube caused the electrons to be neatly squeezed through one by one. The electrons moving through the nanotube became excited and created new electrons that continued to flow. The nanotube, they discovered, may be a nearly ideal photovoltaic cell because it allowed electrons to create more electrons by utilizing the spare energy from the light.

This is unlike today's solar cells, in which extra energy is lost in the form of heat, and the cells require constant external cooling.
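
One crude way to picture the difference is an idealized energy budget in which a photon carrying several times the nanotube's band-gap energy yields several carriers instead of waste heat. The sketch below is a deliberately simplified model built on that assumption; it is not the Cornell group's analysis, and the 1.0 eV gap is an arbitrary illustrative value (real nanotube band gaps depend on tube diameter and chirality).

# Idealized carrier multiplication: each absorbed photon yields
# floor(E_photon / E_gap) carriers, so surplus photon energy makes
# extra electrons rather than heat. All numbers are illustrative.
def ideal_carriers_per_photon(photon_energy_ev: float, band_gap_ev: float = 1.0) -> int:
    return int(photon_energy_ev // band_gap_ev)

for energy_ev in (1.2, 2.5, 3.8):  # roughly infrared through ultraviolet
    print(f"{energy_ev:.1f} eV photon -> {ideal_carriers_per_photon(energy_ev)} carrier(s)")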

Though they have made a device, scaling it up to be inexpensive and reliable would be a serious challenge for engineers, Gabor said.

"What we've observed is that the physics is there," he said.

###

The research was supported by Cornell's Center for Nanoscale Systems and the Cornell NanoScale Science and Technology Facility, both National Science Foundation facilities, as well as the Microelectronics Advanced Research Corporation Focused Research Center on Materials, Structures and Devices. Research collaborators also included Zhaohui Zhong, of the University of Michigan, and Ken Bosnick, of the National Institute for Nanotechnology at University of Alberta.

(Text by Anne Ju, Cornell Chronicle)

Monday, September 7, 2009

Making more efficient fuel cells

Public release date: 6-Sep-2009

Contact: Dianne Stilwell
diannestilwell@me.com
44-795-720-0214
Society for General Microbiology
Making more efficient fuel cells

Bacteria that generate significant amounts of electricity could be used in microbial fuel cells to provide power in remote environments or to convert waste to electricity. Professor Derek Lovley from the University of Massachusetts, USA, isolated bacteria with large numbers of tiny projections called pili, which were more efficient at transferring electrons to generate power in fuel cells than bacteria with a smooth surface. The team's findings were reported at the Society for General Microbiology's meeting at Heriot-Watt University, Edinburgh, today (7 September).

The researchers isolated a strain of Geobacter sulfurreducens which they called KN400 that grew prolifically on the graphite anodes of fuel cells. The bacteria formed a thick biofilm on the anode surface, which conducted electricity. The researchers found large quantities of pilin, a protein that makes the tiny fibres that conduct electricity through the sticky biofilm.

"The filaments form microscopic projections called pili that act as microbial nanowires," said Professor Lovley, "using this bacterial strain in a fuel cell to generate electricity would greatly increase the cell's power output."

The pili on the bacteria's surface seemed to be primarily for electrical conduction rather than to help them to attach to the anode; mutant forms without pili were still able to stay attached.

Microbial fuel cells can be used in monitoring devices in environments where it is difficult to replace batteries if they fail, but to be successful they need an efficient and long-lasting source of power. Professor Lovley described how G. sulfurreducens strain KN400 might be used in sensors placed on the ocean floor to monitor the migration of turtles.

###

Using waste to recover waste uranium

Public release date: 6-Sep-2009

Contact: Dianne Stilwell
diannestilwell@me.com
44-795-720-0214
Society for General Microbiology
Using waste to recover waste uranium

Using bacteria and inositol phosphate, a chemical analogue of a cheap waste material from plants, researchers at Birmingham University have recovered uranium from the polluted waters of uranium mines. The same technology can also be used to clean up nuclear waste. Professor Lynne Macaskie presented the group's work this week (7-10 September) at the Society for General Microbiology's meeting at Heriot-Watt University, Edinburgh.

Bacteria, in this case E. coli, break down a source of inositol phosphate (also called phytic acid), a phosphate storage material in seeds, to free the phosphate molecules. The phosphate then binds to the uranium, forming a uranium phosphate precipitate on the bacterial cells that can be harvested to recover the uranium.

This process was first described in 1995, but at that time a more expensive additive was used, and that, combined with the then-low price of uranium, made the process uneconomic. The discovery that inositol phosphate is potentially six times more effective, as well as being a cheap waste material, means that the process becomes economically viable, especially as the world price of uranium is likely to increase as countries expand their nuclear technologies in a bid to produce low-carbon energy.

As an example, if pure inositol phosphate, bought from a commercial supplier, is used, the cost of the process is £1.72 per gram of uranium recovered. If a cheaper source of inositol phosphate is used (e.g. calcium phytate), the cost falls to £0.09 per gram of recovered uranium. At 2007 prices, uranium cost £0.211/g; it is currently £0.09/g. These figures make the process economic overall, since there is also an environmental protection benefit. Use of low-grade inositol phosphate from agricultural wastes would bring the cost down still further, and the economic benefit will grow as the price of uranium is forecast to rise again.
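
The economics reduce to a per-gram margin. A minimal sketch using only the figures quoted above (note that the environmental clean-up benefit is not priced in, which is why the break-even case can still be judged worthwhile):

# Per-gram economics of uranium recovery, GBP per gram, as quoted above.
costs = {"pure inositol phosphate": 1.72, "calcium phytate": 0.09}
uranium_price = {"2007": 0.211, "current": 0.09}

for feedstock, cost in costs.items():
    for year, price in uranium_price.items():
        print(f"{feedstock:>24} at {year:>7} price: margin {price - cost:+.3f} GBP/g")
# Only the cheap feedstock is viable; at current prices it breaks even on
# uranium value alone, before counting the environmental benefit.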

"The UK has no natural uranium reserves, although a significant amount of uranium is produced in nuclear wastes. There is no global shortage of uranium but from the point of view of energy security the EU needs to be able to recover as much uranium as possible from mine run-offs (which in any case pollute the environment) as well as recycling as much uranium as possible from nuclear wastes," commented Professor Macaskie, "By using a cheap feedstock easily obtained from plant wastes we have shown that an economic, scalable process for uranium recovery is possible".

###