Monday, December 21, 2009

Boston University researchers develop faster, cheaper DNA sequencing method

Researchers have found a new method of DNA sequencing that is claimed to be cheaper and faster than those used so far. The method requires only a tiny quantity of DNA, eliminating the expensive, time-consuming and error-prone step of DNA amplification. By boosting capture rates by a few orders of magnitude and reducing the volume of the sample chamber, the researchers cut the number of DNA molecules required by a factor of 10,000 – from about 1 billion sample molecules to 100,000.

K.S.Parthasarathy







Public release date: 20-Dec-2009

Contact: Mike Seele
mseele@bu.edu
617-353-9766
Boston University College of Engineering
Boston University researchers develop faster, cheaper DNA sequencing method






IMAGE: A team of researchers led by Boston University biomedical engineer Amit Meller is using electrical fields to efficiently draw long strands of DNA through nanopore sensors, drastically reducing the number...




(BOSTON) EMBARGOED UNTIL 1 P.M. EST 12/20/09 -- Boston University biomedical engineers have devised a method for making future genome sequencing faster and cheaper by dramatically reducing the amount of DNA required, thus eliminating the expensive, time-consuming and error-prone step of DNA amplification.

In a study published in the Dec. 20 online edition of Nature Nanotechnology, a team led by Boston University Biomedical Engineering Associate Professor Amit Meller details pioneering work in detecting DNA molecules as they pass through silicon nanopores. The technique uses electrical fields to feed long strands of DNA through four-nanometer-wide pores, much like threading a needle. The method uses sensitive electrical current measurements to detect single DNA molecules as they pass through the nanopores.
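In nanopore sensing of this kind, a passing molecule announces itself as a brief dip (a "blockade") in the ionic current flowing through the pore. As a rough illustration of that read-out, and not the authors' actual analysis, the following Python sketch detects such blockade events in a simulated current trace using invented numbers:

```python
import numpy as np

# Illustrative only: a DNA molecule in the pore shows up as a transient drop
# ("blockade") in the ionic current. All numbers here are invented.
rng = np.random.default_rng(0)
n_samples = 5000
open_pore_current = 4.0                                  # nA, open-pore baseline
trace = open_pore_current + 0.05 * rng.standard_normal(n_samples)

# Inject three artificial translocation events (current drops while DNA is inside)
for start, length in [(800, 40), (2500, 60), (4100, 30)]:
    trace[start:start + length] -= 1.5

# Simple threshold detector: an event is wherever the current stays well below baseline
threshold = 0.7 * open_pore_current
below = trace < threshold
edges = np.diff(below.astype(int))
starts = np.where(edges == 1)[0] + 1
ends = np.where(edges == -1)[0] + 1
print(f"detected {len(starts)} translocation events")
for s, e in zip(starts, ends):
    depth = open_pore_current - trace[s:e].mean()
    print(f"  samples {s}-{e}: mean blockade {depth:.2f} nA")
```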

"The current study shows that we can detect a much smaller amount of DNA sample than previously reported," said Meller. "When people start to implement genome sequencing or genome profiling using nanopores, they could use our nanopore capture approach to greatly reduce the number of copies used in those measurements."

Currently, genome sequencing utilizes DNA amplification to make billions of molecular copies in order to produce a sample large enough to be analyzed. In addition to the time and cost DNA amplification entails, some of the molecules – like photocopies of photocopies – come out less than perfect. Meller and his colleagues at BU, New York University and Bar-Ilan University in Israel have harnessed electrical fields surrounding the mouths of the nanopores to attract long, negatively charged strands of DNA and slide them through the nanopore where the DNA sequence can be detected. Since the DNA is drawn to the nanopores from a distance, far fewer copies of the molecule are needed.

Before creating this new method, the team had to develop an understanding of electro-physics at the nanoscale, where the rules that govern the larger world don't necessarily apply. They made a counterintuitive discovery: the longer the DNA strand, the more quickly it found the pore opening.

"That's really surprising," Meller said. "You'd expect that if you have a longer 'spaghetti,' then finding the end would be much harder. At the same time this discovery means that the nanopore system is optimized for the detection of long DNA strands -- tens of thousands basepairs, or even more. This could dramatically speed future genomic sequencing by allowing analysis of a long DNA strand in one swipe, rather than having to assemble results from many short snippets.

"DNA amplification technologies limit DNA molecule length to under a thousand basepairs," Meller added. "Because our method avoids amplification, it not only reduces the cost, time and error rate of DNA replication techniques, but also enables the analysis of very long strands of DNA, much longer than current limitations."

With this knowledge in hand, Meller and his team set out to optimize the effect. They used salt gradients to alter the electrical field around the pores, which increased the rate at which DNA molecules were captured and shortened the lag time between molecules, thus reducing the quantity of DNA needed for accurate measurements. Rather than floating around until they happened upon a nanopore, DNA strands were funneled into the openings.

By boosting capture rates by a few orders of magnitude, and reducing the volume of the sample chamber the researchers reduced the number of DNA molecules required by a factor of 10,000 – from about 1 billion sample molecules to 100,000.

###

The research was funded by the National Human Genome Research Institute of the National Institutes of Health and by the National Science Foundation. The article, "Electrostatic Focusing of Unlabelled DNA into Nanoscale Pores Using a Salt Gradient," will be available at the Nature web site beginning Dec. 20 at 1 p.m. at http://dx.doi.org/10.1038/nnano.2009.379.

Wednesday, November 25, 2009

Butterfly proboscis to sip cells

Nature does everything in the best possible way, and by observing it keenly we can find solutions to complex problems. This is substantiated by the following news story.

K.S.Parthasarathy



Public release date: 22-Nov-2009

Contact: Jason Bardi
jbardi@aip.org
301-209-3091
American Institute of Physics
Butterfly proboscis to sip cells
Nature-inspired probes to be presented at Fluid Dynamics Conference next week

WASHINGTON, D.C. November 18, 2009 -- A butterfly's proboscis looks like a straw -- long, slender, and used for sipping -- but it works more like a paper towel, according to Konstantin Kornev of Clemson University. He hopes to borrow the tricks of this piece of insect anatomy to make small probes that can sample the fluid inside of cells.

Kornev will present his work next week at the 62nd Annual Meeting of the American Physical Society's (APS) Division of Fluid Dynamics, which takes place November 22-24 at the Minneapolis Convention Center.

At the scales at which a butterfly or moth lives, liquid is so thick that it is able to form fibers. The insects' liquid food -- drops of water, animal tears, and the juice inside decomposed fruit -- spans nearly three orders of magnitude in viscosity. Pumping liquid through its feeding tube would require an enormous amount of pressure.

"No pump would support that kind of pressure," says Kornev. "The liquid would boil spontaneously."

Instead of pumping, Kornev's findings suggest that butterflies draw liquid upwards using capillary action -- the same force that pulls liquid across a paper towel. The proboscis resembles a rolled-up paper towel, with tiny grooves that pull the liquid upwards along the edges, carrying along the bead of liquid in the middle of the tube. This process is not nearly as affected by viscosity as pumping.
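The pressure argument can be made concrete with a back-of-envelope estimate. The Python sketch below uses the Hagen-Poiseuille relation with assumed proboscis dimensions and sipping rates (my numbers, not Kornev's measurements) to show how the suction needed for pumping quickly exceeds roughly one atmosphere, the point at which the liquid would cavitate, as viscosity climbs through the range the insects actually feed on:

```python
import math

# Back-of-envelope estimate with assumed numbers (not Kornev's data): the suction
# needed to pump liquids of increasing viscosity through a proboscis-sized tube,
# from the Hagen-Poiseuille relation  dp = 8*mu*L*Q / (pi*r^4).
radius = 25e-6        # m, assumed channel radius
length = 2e-2         # m, assumed proboscis length (~2 cm)
flow_rate = 1e-11     # m^3/s, assumed sipping rate (~10 nL per second)
atmosphere = 101e3    # Pa; suction much beyond this makes the liquid cavitate ("boil")

for name, viscosity in [("water", 1e-3), ("nectar", 5e-2), ("rotting-fruit juice", 1.0)]:
    dp = 8 * viscosity * length * flow_rate / (math.pi * radius ** 4)
    verdict = "beyond what suction can supply" if dp > atmosphere else "feasible by suction"
    print(f"{name:20s} {viscosity:6.3f} Pa*s -> {dp / 1e3:9.1f} kPa  ({verdict})")
```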

Kornev has recently been awarded an NSF grant to develop artificial probes made of nanofibers that use a similar principle to draw out the viscous liquid inside of cells and examine their contents.

The presentation, "Butterfly proboscis as a biomicrofluidic system" by Konstantin Kornev et al of Clemson University is at 12:01 p.m. on Sunday, November 22, 2009.

###

Abstract: http://meetings.aps.org/Meeting/DFD09/Event/110814

MORE MEETING INFORMATION
The 62nd Annual DFD Meeting will be held at the Minneapolis Convention Center in downtown Minneapolis. All meeting information, including directions to the Convention Center is at: http://www.dfd2009.umn.edu/

PRESS REGISTRATION
Credentialed full-time journalists and professional freelance journalists working on assignment for major publications or media outlets are invited to attend the conference free of charge. If you are a reporter and would like to attend, please contact Jason Bardi (jbardi@aip.org, 301-209-3091).

USEFUL LINKS
Main meeting Web site: http://meetings.aps.org/Meeting/DFD09/Content/1629
Searchable form: http://meetings.aps.org/Meeting/DFD09/SearchAbstract
Local Conference Meeting Website: http://www.dfd2009.umn.edu/
PDF of Meeting Abstracts: http://flux.aps.org/meetings/YR09/DFD09/all_DFD09.pdf
Division of Fluid Dynamics page: http://www.aps.org/units/dfd/
Virtual Press Room: SEE BELOW

VIRTUAL PRESS ROOM
The APS Division of Fluid Dynamics Virtual Press Room will contain tips on dozens of stories as well as stunning graphics and lay-language papers detailing some of the most interesting results at the meeting. Lay-language papers are roughly 500-word summaries written for a general audience by the authors of individual presentations, with accompanying graphics and multimedia files. The Virtual Press Room will serve as a starting point for journalists who are interested in covering the meeting but cannot attend in person. See: http://www.aps.org/units/dfd/pressroom/index.cfm

Currently, the Division of Fluid Dynamics Virtual Press Room contains information related to the 2008 meeting. In mid-November, the Virtual Press Room will be updated for this year's meeting, and another news release will be sent out at that time.

ONSITE WORKSPACE FOR REPORTERS
A reserved workspace with wireless internet connections will be available for use by reporters. It will be located in the meeting exhibition hall (Ballroom AB) at the Minneapolis Convention Center on Sunday and Monday from 8:00 a.m. to 5:00 p.m. and on Tuesday from 8:00 a.m. to noon. Press announcements and other news will be available in the Virtual Press Room.

GALLERY OF FLUID MOTION
Every year, the APS Division of Fluid Dynamics hosts posters and videos that show stunning images and graphics from either computational or experimental studies of flow phenomena. The outstanding entries, selected by a panel of referees for artistic content, originality and ability to convey information, will be honored during the meeting, placed on display at the Annual APS Meeting in March of 2010, and will appear in the annual Gallery of Fluid Motion article in the September 2010 issue of the journal Physics of Fluids.

This year, selected entries from the 27th Annual Gallery of Fluid Motion will be hosted as part of the Fluid Dynamics Virtual Press Room. In mid-November, when the Virtual Press Room is launched, another announcement will be sent out.

ABOUT THE APS DIVISION OF FLUID DYNAMICS
The Division of Fluid Dynamics of the American Physical Society exists for the advancement and diffusion of knowledge of the physics of fluids with special emphasis on the dynamical theories of the liquid, plastic and gaseous states of matter under all conditions of temperature and pressure. See: http://www.aps.org/units/dfd/



Monday, November 9, 2009

Diagnostic errors in medicine


The latest issue of the Archives of Internal Medicine [2009;169(20):1881-1887] published an interesting paper analyzing diagnostic errors in medicine. Physicians from several reputed hospitals in the USA noted that missed or delayed diagnoses are a common but understudied area in patient safety research. They surveyed clinicians to solicit perceived cases of missed and delayed diagnoses in order to better understand the types, causes, and prevention of such errors.

They administered a 6-item written survey at 20 grand rounds presentations across the United States and by mail at 2 collaborating institutions. They asked the respondents to report three cases of diagnostic errors and to describe their perceived causes, seriousness, and frequency.

Three hundred and ten physicians reported 669 cases from 22 institutions. Cases without diagnostic errors or lacking sufficient details were excluded. Of the 583 cases that remained, 162 errors (28%) were rated as major, 241 (41%) as moderate, and 180 (31%) as minor or insignificant. Pulmonary embolism was the most common missed or delayed diagnosis (26 cases [4.5% of the total]), followed by drug reactions or overdose (26 cases [4.5%]), lung cancer (23 cases [3.9%]), colorectal cancer (19 cases [3.3%]), acute coronary syndrome (18 cases [3.1%]), breast cancer (18 cases [3.1%]), and stroke (15 cases [2.6%]).
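For readers who like to check the arithmetic, the quoted percentages follow directly from the case counts; a short Python tally (the rounding here is mine, not the authors') reproduces them:

```python
# Sanity check of the quoted percentages (case counts from the press summary).
total = 583
severity = {"major": 162, "moderate": 241, "minor or insignificant": 180}
for label, count in severity.items():
    print(f"{label:25s} {count:3d} cases  ({100 * count / total:.0f}%)")

top_diagnoses = {"pulmonary embolism": 26, "drug reaction or overdose": 26,
                 "lung cancer": 23, "colorectal cancer": 19,
                 "acute coronary syndrome": 18, "breast cancer": 18, "stroke": 15}
for diagnosis, count in top_diagnoses.items():
    print(f"{diagnosis:27s} {count:2d} cases  ({100 * count / total:.1f}%)")
```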

Clinicians made errors most frequently in the testing phase (failure to order, report, or follow up laboratory results) (44%), followed by clinician assessment errors (failure to consider, or improperly weighing, competing diagnoses) (32%), history taking (10%), physical examination (10%), and referral or consultation errors and delays (3%).

The researchers concluded that physicians readily recalled multiple cases of diagnostic errors and were willing to share their experiences. Using a new taxonomy tool and aggregating cases by diagnosis and error type revealed patterns of diagnostic failures that suggested areas for improvement. Systematic solicitation and analysis of such errors can identify potential preventive strategies.


The authors were from Departments of Medicine (Drs Schiff and Kim and Mss Krosnjar and Wisniewski) and Emergency Medicine (Dr Cosby), Cook County Hospital, Chicago, Illinois; Division of General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts (Drs Schiff and Hasan); Department of Medicine, Rush University, Chicago (Drs Schiff, Abrams, Hasler, and McNutt and Mr Odwazny); Departments of Health Policy and Administration (Dr Kim) and Medical Education (Dr Elstein) and College of Pharmacy (Dr Lambert), University of Illinois at Chicago; and Department of Family and Preventive Medicine, University of California, San Diego (Dr Kabongo).

Thursday, October 1, 2009

Spallation Neutron Source first of its kind to reach megawatt power

Public release date: 29-Sep-2009
Great news.
Dr. K.S. Parthasarathy

Contact: Bill Cabage
cabagewh@ornl.gov
865-574-4399
DOE/Oak Ridge National Laboratory
Spallation Neutron Source first of its kind to reach megawatt power

OAK RIDGE, Tenn., Sept. 28, 2009 -- The Department of Energy's Spallation Neutron Source (SNS), already the world's most powerful facility for pulsed neutron scattering science, is now the first pulsed spallation neutron source to break the one-megawatt barrier.

"Advances in the materials sciences are fundamental to the development of clean and sustainable energy technologies. In reaching this milestone of operating power, the Spallation Neutron Source is providing scientists with an unmatched resource for unlocking the secrets of materials at the molecular level," said Dr. William F. Brinkman, Director of DOE's Office of Science.

SNS operators at DOE's Oak Ridge National Laboratory pushed the controls past the megawatt mark on September 18 as the SNS ramped up for its latest operational run.

"The attainment of one megawatt in beam power symbolizes the advancement in analytical resources that are now available to the neutron scattering community through the SNS," said ORNL Director Thom Mason, who led the SNS project during its construction. "This is a great achievement not only for DOE and Oak Ridge National Laboratory, but for the entire community of science."

Before the SNS, the world's spallation neutron sources operated in the hundred-kilowatt range. The SNS actually became a world-record holder in August 2007 when it reached 160 kilowatts, earning it an entry in the Guinness Book of World Records as the world's most powerful pulsed spallation neutron source.

Beam power isn't merely a numbers game. A more powerful beam means more neutrons are spalled from SNS's mercury target. For the researcher, the difference in beam intensity is comparable to the ability to see with a car's headlights versus a flashlight. More neutrons also enhance scientific opportunities, including flexibility for smaller samples and for real-time studies at shorter time scales. For example, experiments will be possible that use just one pulse of neutrons to illuminate the dynamics of scientific processes.

Eventually, the SNS will reach its design power of 1.4 megawatts. The gradual increase of beam power has been an ongoing process since the SNS was completed and activated in late April 2006.

In the meantime, scientists have been performing cutting-edge experiments and materials analysis as its eventual suite of 25 instruments comes on line. As DOE Office of Science user facilities, the SNS and its companion facility, the High Flux Isotope Reactor, host researchers from around the world for neutron scattering experiments.

###

ORNL is managed by UT-Battelle for the Department of Energy.

NOTE TO EDITORS: You may read other press releases from Oak Ridge National Laboratory or learn more about the lab at http://www.ornl.gov

Wednesday, September 23, 2009

Tel Aviv University's 'Dust Alert' exposes dangerous invisible pollution, pollen and construction waste

Public release date: 22-Sep-2009

Contact: George Hunka
ghunka@aftau.org
212-742-9070
American Friends of Tel Aviv University
Tel Aviv University invention busts dust
TAU's 'Dust Alert' exposes dangerous invisible pollution, pollen and construction waste

Worried that dust from a nearby construction zone will harm your family's health? A new Tel Aviv University tool could either confirm your suspicions or, better yet, set your mind at rest.

Prof. Eyal Ben-Dor and his Ph.D. student Dr. Sandra Chudnovsky, of TAU's Department of Geography, have developed a sensor called "Dust Alert" -- the first of its kind -- to help families and authorities monitor the quality of the air they breathe. Like an ozone gas or carbon monoxide meter, it measures the concentration of small particles that may contaminate the air in your home. Scientific studies on "Dust Alert" appeared recently in the journal Science of the Total Environment and in Urban Air Pollution: Problems, Control Technologies and Management Practices.

"It works just like an ozone meter would," says Prof. Ben-Dor. "You put it in your home or office for three weeks, and it can give you real-time contamination levels in terms of dust, pollen and toxins." Functioning like a tiny chemistry lab, the device can precisely determine the chemical composition of the toxins, so homeowners, office managers and factories can act to improve air quality.

Using the measurements, Prof. Ben-Dor can sometimes find a quick remedy for a dusty or pollen-filled home. The solution could be as easy as keeping a window open, he says. "We've found through our ongoing research that some simple actions at home can have a profound effect on the quality of air we breathe."

Instant results

Based on a portable chemical analyzer called a spectrophotometer, the invention can be installed and begin to collect data within minutes, although several weeks' worth of samples produces the best assessment of air quality. The longer period allows for fluctuations in both internal and external environments, such as changing weather patterns.

The "Dust Alert" fills an important need. Polluted air, breathed in for weeks, months and sometimes years, can have fatal consequences, leading to asthma, bronchitis and lung cancer. With findings from Prof. Ben-Dor's invention, urban planners can provide better solutions and mitigate risks. "We can certainly give an accurate forecast about the health of a home or apartment for prospective home owners. If somebody in your family has an allergy, poor air quality can be a deal breaker ," says Prof. Ben-Dor.

Prof. Ben-Dor's device may be most useful in the aftermath of disasters, such as chemical fires, heavy dust storms, hurricanes or tragedies like 9/11. Survivors of these situations are usually unaware of the lingering environmental problems, and the government can't do enough to protect them because no accurate tools exist to define the risk. Using a Dust Alert, residents could be advised to vacate their homes and offices until the dust has cleared, or to take simple precautions such as aerating hazardous rooms in a flat, suggests Prof. Ben-Dor.

Putting dust on the map

According to Prof. Ben-Dor, the Dust Alert could also be used by cities and counties to develop "dust maps" that provide detailed environmental information about streets and neighborhoods, permitting government authorities like the EPA to more successfully identify and prosecute offenders. Currently, for example, there is no system for demonstrating how construction sites compromise people's health.

"Until now, people have had to grin and bear the polluted air they breathe," says Prof. Ben-Dor. "The Dust Alert could provide crucial reliable evidence of pollution, so that society at large can breathe easier. We can see the dust on the furniture and on the windows, but most of us can't see the dust we breathe. For the first time, we are able to detect it and measure its more dangerous components."

With their dust maps, TAU scientists have already correlated urban heat islands with high levels of particulate matter, giving urban planners crucial information for the development of green spaces and city parks. Prof. Ben-Dor also plans to develop his prototype into a home-and-office unit, while offering customized services that can help people decode what's left in the dust.

###

American Friends of Tel Aviv University (www.aftau.org) supports Israel's leading and most comprehensive center of higher learning. In independent rankings, TAU's innovations and discoveries are cited more often by the global scientific community than all but 20 other universities worldwide.

Internationally recognized for the scope and groundbreaking nature of its research programs, Tel Aviv University consistently produces work with profound implications for the future.


Diamonds may be the ultimate MRI probe, say quantum physicists

The search for quantum computers has led to a medical application: one day we may have MRI-like devices that can probe individual drug molecules and living cells.

K.S.Parthasarathy


Public release date: 22-Sep-2009

Contact: Chad Boutin
boutin@nist.gov
301-975-4261
National Institute of Standards and Technology (NIST)
Diamonds may be the ultimate MRI probe, say quantum physicists

Diamonds, it has long been said, are a girl's best friend. But a research team including a physicist from the National Institute of Standards and Technology (NIST) has recently found* that the gems might turn out to be a patient's best friend as well.

The team's work has the long-term goal of developing quantum computers, but it has borne fruit that may have more immediate application in medical science. Their finding that a candidate "quantum bit" has great sensitivity to magnetic fields hints that MRI-like devices that can probe individual drug molecules and living cells may be possible.

The candidate system, formed from a nitrogen atom lodged within a diamond crystal, is promising not only because it can sense atomic-scale variations in magnetism, but also because it functions at room temperature. Most other such devices used either in quantum computation or for magnetic sensing must be cooled to nearly absolute zero to operate, making it difficult to place them near live tissue. However, using the nitrogen as a sensor or switch could sidestep that limitation.

Diamond, which is formed of pure carbon, occasionally has minute imperfections within its crystalline lattice. A common impurity is a "nitrogen vacancy", in which two carbon atoms are replaced by a single atom of nitrogen, leaving the other carbon atom's space vacant. Nitrogen vacancies are in part responsible for diamond's famed luster, for they are actually fluorescent: when green light strikes them, the nitrogen atom's two excitable unpaired electrons glow a brilliant red.

The team can use slight variations in this fluorescence to determine the magnetic spin of a single electron in the nitrogen. Spin is a quantum property that has a value of either "up" or "down," and therefore could represent one or zero in binary computation. The team's recent achievement was to transfer this quantum information repeatedly between the nitrogen electron and the nuclei of adjacent carbon atoms, forming a small circuit capable of logic operations. Reading a quantum bit's spin information—a fundamental task for a quantum computer—has been a daunting challenge, but the team demonstrated that by transferring the information back and forth between the electron and the nuclei, the information could be amplified, making it much easier to read.

Still, NIST theoretical physicist Jacob Taylor said the findings are "evolutionary, not revolutionary" for the quantum computing field and that the medical world may reap practical benefits from the discovery long before a working quantum computer is built. He envisions diamond-tipped sensors performing magnetic resonance tests on individual cells within the body, or on single molecules drug companies want to investigate—a sort of MRI scanner for the microscopic. "That's commonly thought not to be possible because in both of these cases the magnetic fields are so small," Taylor says. "But this technique has very low toxicity and can be done at room temperature. It could potentially look inside a single cell and allow us to visualize what's happening in different spots."

The Harvard University-based team also includes scientists from the Joint Quantum Institute (a partnership of NIST and the University of Maryland), the Massachusetts Institute of Technology and Texas A&M University.

###

* L. Jiang, J.S. Hodges, J.R. Maze, P. Maurer, J.M. Taylor, D.G. Cory, P.R. Hemmer, R.L. Walsworth, A. Yacoby, A.S. Zibrov and M.D. Lukin. Repetitive readout of a single electronic spin via quantum logic with nuclear spin ancillae. Science, DOI: 10.1126/science.1176496, published online Sept. 10, 2009.

See http://www.nist.gov/public_affairs/techbeat/tb2009_0922.htm#diamonds for illustration to accompany story.


Tuesday, September 15, 2009

Study identifies which children do not need CT scans after head trauma

There is general awareness that children must not be exposed to unnecessary x-ray doses. This paper has come at an opportune time. Bold implementation of the guidelines will be essential; professional associations must review the work urgently and adopt the guidelines, with modifications if any are required.

Dr. K.S. Parthasarathy

EurekAlert

Public release date: 14-Sep-2009

Contact: Charlie Casey
charles.casey@ucdmc.ucdavis.edu
916-734-9048
University of California - Davis - Health System
Study identifies which children do not need CT scans after head trauma
Research provides new guidelines to identify children with mild injuries and reduce radiation exposure from CT

A substantial percentage of children who get CT scans after apparently minor head trauma do not need them, and as a result are put at increased risk of cancer due to radiation exposure. After analyzing more than 42,000 children with head trauma, a national research team led by two UC Davis emergency department physicians has developed guidelines for doctors who care for children with head trauma aimed at reducing those risks.

Their findings appear in an article published online today and in an upcoming edition of The Lancet.

The collaborative study includes data collected at 25 hospitals from children who were evaluated for the possibility of serious brain injury following trauma to the head. Researchers found that one in five children over the age of 2 and nearly one-quarter of those under 2 who received CT scans following head trauma did not need them because they were at very low risk of having serious brain injuries. In these low-risk children, the risk of developing cancer due to radiation exposure outweighed the risk of serious brain injury.

"When you have a sample size this large, it is easier to get your hands on the truth," said Nathan Kuppermann, professor and chair of emergency medicine, professor of pediatrics at UC Davis Children's Hospital and lead author of the study. "We think our investigation provides the best available evidence regarding the use of CT scans in children with head trauma, and it indicates that CT use can be safely reduced by eliminating its application in those children who are at very low risk of serious brain injuries."

As part of the study, Kuppermann and his colleagues developed a set of rules for identifying low-risk patients who would not need a CT. The "prediction rules" for children under 2 and for those 2 and older depend on the presence or absence of various symptoms and circumstances, including the way the injury was sustained, a history of loss of consciousness, neurological status at the time of evaluation and clinical evidence of skull fracture for both age groups. The use of CT in patients who do not fall into the low-risk group identified by the prediction rules will depend on other considerations, such as the physician's experience and the severity and number of symptoms.

The Centers for Disease Control estimates that 435,000 children under 14 visit emergency rooms every year to be evaluated for traumatic brain injury (TBI). Not all head trauma results in a TBI. The severity of a brain injury may range from mild, causing brief change in mental status or consciousness, to severe, causing permanent symptoms and irreversible damage.

For years, studies have suggested that CT scans were being overused to rule out traumatic brain injuries. However, those studies were considered too small to be sufficiently accurate and not precise enough to be widely applicable to a general population. The sheer size of the current study, and the fact that the investigators created the accurate prediction rules with one large group of children with head trauma and then tested the rules on another large but separate group to demonstrate their validity, allows physicians to have confidence in the results. The researchers emphasized, however, that the rules are not intended to replace clinical judgment.

"We're arming the clinician with the best available evidence so that they can make the best decisions," said James Holmes, professor of emergency medicine at UC Davis School of Medicine and a co-author of the report. "There certainly are instances when the risks of radiation are worth it, such as in cases of blunt head trauma which result in changes in neurological status or clinical evidence of skull fractures. However, clinicians need reliable data to help them make those judgment calls when it is not clear whether or not a patient needs a CT. Until now, physicians haven't had data based on large and validated research."

The current study comes on the heels of an article published in late August by The New England Journal of Medicine that showed that at least 4 million Americans under age 65 are exposed to high doses of radiation each year from medical imaging tests, with CT scans accounting for almost one half of the total radiation dose. About 10 percent of those get more than the maximum annual exposure allowed for nuclear power plant employees or anyone else who works with radioactive material.

Studies show that exposure to radiation increases the risk of cancer. Radiation exposure to the brain of developing children is of particular concern and must be weighed carefully against the risk of traumatic brain injury that could cause permanent damage or death if not identified early. If the new guidelines are applied appropriately, the use of CT scans nationwide could be significantly reduced.

The effort was made possible by the Pediatric Emergency Care Applied Research Network (PECARN), which enabled the massive collection of data. Supported by the U.S. Department of Health and Human Services' Emergency Medical Services for Children Program, PECARN is the first federally-funded, multi-institutional network for research in pediatric emergency medicine in the nation. The network conducts research into the prevention and management of acute illnesses and injuries in children and youth across the continuum of emergency medicine and health care.

"Children with medical and traumatic illnesses usually have good outcomes, but you need a lot of children to assess factors and treatments that predict both good and bad outcomes. By studying large numbers of children, in a variety of settings and from diverse populations, the results will more likely be applicable to the general population. That's the power of PECARN," Kuppermann said. "Combined, our network of emergency departments around the country evaluates approximately 1 million children per year."

Along with the UC Davis team, key PECARN researchers in the Lancet study included Peter S. Dayan, from New York-Presbyterian Hospital and Columbia University Medical Center in New York; John D. Hoyle, Jr., from Helen DeVos Children's Hospital in Grand Rapids; Shireen M. Atabaki, from Children's National Medical Center in Washington, D.C.; and Richard Holubkov from the PECARN Data Coordinating Center at the University of Utah.

In order to create the prediction rules, the PECARN investigators studied outcomes in more than 42,000 children with minor initial symptoms and signs of head trauma. CT scans were performed in nearly 15,000 of those patients. Serious brain injuries were diagnosed in 376 children, and 60 children underwent neurosurgery.

Using these data, the researchers developed two prediction rules for identifying mild cases that do not need CT scans. One rule was developed for children under the age of 2 and another for those 2 and over. It was important to study children under 2 separately because they cannot communicate their symptoms or offer information as well as older children, and they are more sensitive to the effects of radiation.

Children under 2 who fell into the low-risk group showed normal mental status, no scalp swelling, no significant loss of consciousness, no palpable skull fracture, were normal-acting (according to the parent), and had an injury that was sustained in a non-severe way. Severe accidents, which excluded children from the low-risk group, included motor vehicle crashes in which the patient was ejected, and bicycle accidents involving automobiles, in which the patient was not wearing a helmet. Key indicators for children older than 2 who were at low-risk for brain injury included normal mental status, no loss of consciousness, no vomiting, no signs of fracture of the base of skull, no severe headache, and they did not sustain the injury in a serious accident.
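As a rough illustration of how such criteria translate into a bedside checklist, the sketch below encodes the low-risk conditions exactly as summarized above, with field names of my own choosing. It is emphatically not the validated PECARN rule and is not for clinical use:

```python
# Illustration only: the low-risk criteria as summarized in this release, with
# hypothetical field names. This is NOT the validated PECARN rule and must
# never be used to make clinical decisions.
def low_risk_under_2(child):
    return (child["normal_mental_status"]
            and not child["scalp_swelling"]
            and not child["significant_loss_of_consciousness"]
            and not child["palpable_skull_fracture"]
            and child["acting_normally_per_parent"]
            and not child["severe_injury_mechanism"])

def low_risk_2_and_over(child):
    return (child["normal_mental_status"]
            and not child["loss_of_consciousness"]
            and not child["vomiting"]
            and not child["signs_of_basilar_skull_fracture"]
            and not child["severe_headache"]
            and not child["severe_injury_mechanism"])

# Hypothetical example: a toddler after a minor household fall
toddler = {"normal_mental_status": True, "scalp_swelling": False,
           "significant_loss_of_consciousness": False, "palpable_skull_fracture": False,
           "acting_normally_per_parent": True, "severe_injury_mechanism": False}
print("low risk, CT likely avoidable:", low_risk_under_2(toddler))
```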

The researchers then validated these rules by applying them to data from a second population of more than 8,600 children. In more than 99.9 percent of the cases, the rules accurately predicted children who were not diagnosed with serious brain injuries and were therefore indeed at low risk.

The researchers also identified and separated children at intermediate and high risk of serious brain injuries. Those in the high-risk group should receive CT scans, the researchers wrote. The PECARN team is currently working on refining recommendations for the use of CT scans in those at intermediate risk. Until now, emergency room physicians have relied mostly on instincts when deciding whether or not the symptoms of a child with head trauma warrant the use of CT.

"Now we have much better evidence to assist with making decisions regarding CT use," Kuppermann said.

###

UC Davis has been part of PECARN since its inception in 2001. It is the leading center in one of four PECARN Research Nodes, which also includes Children's Hospital of Philadelphia; St. Louis Children's Hospital; Children's Hospital of Wisconsin; Cincinnati Children's Hospital Medical Center; and Primary Children's Medical Center in Salt Lake City.

A total of 32 PECARN researchers were substantially involved in this study. This research was supported by the Emergency Medical Services for Children program of the Maternal and Child Health Bureau, and the Maternal and Child Health Bureau Research Program, Health Resources and Services Administration, U.S. Department of Health and Human Services.

Friday, September 11, 2009

Caltech scientists develop novel use of neurotechnology to solve classic social problem

Public release date: 10-Sep-2009


Contact: Lori Oliwenstein
lorio@caltech.edu
626-395-3631
California Institute of Technology
Caltech scientists develop novel use of neurotechnology to solve classic social problem
Research shows how brain imaging can be used to create new and improved solutions to the public-goods provision problem

PASADENA, Calif.—Economists and neuroscientists from the California Institute of Technology (Caltech) have shown that they can use information obtained through functional magnetic resonance imaging (fMRI) measurements of whole-brain activity to create feasible, efficient, and fair solutions to one of the stickiest dilemmas in economics, the public goods free-rider problem—long thought to be unsolvable.

This is one of the first-ever applications of neurotechnology to real-life economic problems, the researchers note. "We have shown that by applying tools from neuroscience to the public-goods problem, we can get solutions that are significantly better than those that can be obtained without brain data," says Antonio Rangel, associate professor of economics at Caltech and the paper's principal investigator.

The paper describing their work was published today in Science Express, the online edition of the journal Science.

Examples of public goods range from healthcare, education, and national defense to the weight room or heated pool that your condominium board decides to purchase. But how does the government or your condo board decide which public goods to spend its limited resources on? And how do these powers decide the best way to share the costs?

"In order to make the decision optimally and fairly," says Rangel, "a group needs to know how much everybody is willing to pay for the public good. This information is needed to know if the public good should be purchased and, in an ideal arrangement, how to split the costs in a fair way."

In such an ideal arrangement, someone who swims every day should be willing to pay more for a pool than someone who hardly ever swims. Likewise, someone who has kids in public school should have more of her taxes put toward education.

But providing public goods optimally and fairly is difficult, Rangel notes, because the group leadership doesn't have the necessary information. And when people are asked how much they value a particular public good—with that value measured in terms of how many of their own tax dollars, for instance, they'd be willing to put into it—their tendency is to lowball.

Why? "People can enjoy the good even if they don't pay for it," explains Rangel. "Underreporting its value to you will have a small effect on the final decision by the group on whether to buy the good, but it can have a large effect on how much you pay for it."

In other words, he says, "There's an incentive for you to lie about how much the good is worth to you."

That incentive to lie is at the heart of the free-rider problem, a fundamental quandary in economics, political science, law, and sociology. It's a problem that professionals in these fields have long assumed has no solution that is both efficient and fair.

In fact, for decades it's been assumed that there is no way to give people an incentive to be honest about the value they place on public goods while maintaining the fairness of the arrangement.

"But this result assumed that the group's leadership does not have direct information about people's valuations," says Rangel. "That's something that neurotechnology has now made feasible."

And so Rangel, along with Caltech graduate student Ian Krajbich and their colleagues, set out to apply neurotechnology to the public-goods problem.

In their series of experiments, the scientists tried to determine whether functional magnetic resonance imaging (fMRI) could allow them to construct informative measures of the value a person assigns to one or another public good. Once they'd determined that fMRI images—analyzed using pattern-classification techniques—can confer at least some information (albeit "noisy" and imprecise) about what a person values, they went on to test whether that information could help them solve the free-rider problem.

They did this by setting up a classic economic experiment, in which subjects would be rewarded (paid) based on the values they were assigned for an abstract public good.

As part of this experiment, volunteers were divided up into groups. "The entire group had to decide whether or not to spend their money purchasing a good from us," Rangel explains. "The good would cost a fixed amount of money to the group, but everybody would have a different benefit from it."

The subjects were asked to reveal how much they valued the good. The twist? Their brains were being imaged via fMRI as they made their decision. If there was a match between their decision and the value detected by the fMRI, they paid a lower tax than if there was a mismatch. It was, therefore, in all subjects' best interest to reveal how they truly valued a good; by doing so, they would on average pay a lower tax than if they lied.

"The rules of the experiment are such that if you tell the truth," notes Krajbich, who is the first author on the Science paper, "your expected tax will never exceed your benefit from the good."

In fact, the more cooperative subjects are when undergoing this entirely voluntary scanning procedure, "the more accurate the signal is," Krajbich says. "And that means the less likely they are to pay an inappropriate tax."
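The incentive logic can be illustrated with a toy model (not the exact mechanism in the Science paper): suppose each subject's tax is a baseline cost share plus a penalty that grows with the gap between the reported value and a noisy fMRI-based estimate of the true valuation. Because the noise averages out, the expected penalty, and hence the expected tax, is smallest when the report is truthful:

```python
import numpy as np

# Toy model of the incentive, not the mechanism actually used in the paper.
rng = np.random.default_rng(1)
true_value = 10.0       # what the good is really worth to the subject (arbitrary units)
cost_share = 6.0        # baseline contribution if the good is purchased
penalty_weight = 0.5    # how strongly report/estimate mismatches are punished
noise_sd = 2.0          # imprecision of the fMRI-based estimate

def expected_tax(report, trials=100_000):
    neural_estimate = true_value + noise_sd * rng.standard_normal(trials)
    penalty = penalty_weight * (report - neural_estimate) ** 2
    return cost_share + penalty.mean()

for report in (4.0, 7.0, 10.0, 13.0):
    tag = "truthful" if report == true_value else "misreport"
    print(f"reported value {report:5.1f} ({tag:9s}) -> expected tax {expected_tax(report):6.2f}")
```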

This changes the whole free-rider scenario, notes Rangel. "Now, given what we can do with the fMRI," he says, "everybody's best strategy in assigning value to a public good is to tell the truth, regardless of what you think everyone else in the group is doing."

And tell the truth they did—98 percent of the time, once the rules of the game had been established and participants realized what would happen if they lied. In this experiment, there is no free ride, and thus no free-rider problem.

"If I know something about your values, I can give you an incentive to be truthful by penalizing you when I think you are lying," says Rangel.

While the readings do give the researchers insight into the value subjects might assign to a particular public good, thus allowing them to know when those subjects are being dishonest about the amount they'd be willing to pay toward that good, Krajbich emphasizes that this is not actually a lie-detector test.

"It's not about detecting lies," he says. "It's about detecting values—and then comparing them to what the subjects say their values are."

"It's a socially desirable arrangement," adds Rangel. "No one is hurt by it, and we give people an incentive to cooperate with it and reveal the truth."

"There is mind reading going on here that can be put to good use," he says. "In the end, you get a good produced that has a high value for you."

From a scientific point of view, says Rangel, these experiments break new ground. "This is a powerful proof of concept of this technology; it shows that this is feasible and that it could have significant social gains."

And this is only the beginning. "The application of neural technologies to these sorts of problems can generate a quantum leap improvement in the solutions we can bring to them," he says.

Indeed, Rangel says, it is possible to imagine a future in which, instead of a vote on a proposition to fund a new highway, this technology is used to scan a random sample of the people who would benefit from the highway to see whether it's really worth the investment. "It would be an interesting alternative way to decide where to spend the government's money," he notes.

###

In addition to Rangel and Krajbich, other authors on the Science paper, "Using neural measures of economic value to solve the public goods free-rider problem," include Caltech's Colin Camerer, the Robert Kirby Professor of Behavioral Economics, and John Ledyard, the Allen and Lenabelle Davis Professor of Economics and Social Sciences. Their work was funded by grants from the National Science Foundation, the Gordon and Betty Moore Foundation, and the Human Frontier Science Program.



Environmental scientists estimate that China could meet its entire future energy needs by wind alone

Public release date: 10-Sep-2009

Contact: Michael Patrick Rutter
mrutter@seas.harvard.edu
617-496-3815
Harvard University
Environmental scientists estimate that China could meet its entire future energy needs by wind alone
Study suggests that wind is ecologically and economically practical and could reduce CO2 emissions

Cambridge, Mass. – September 10, 2009 – A team of environmental scientists from Harvard and Tsinghua University demonstrated the enormous potential for wind-generated electricity in China. Using extensive meteorological data and incorporating the Chinese government's energy bidding and financial restrictions for delivering wind power, the researchers estimate that wind alone has the potential to meet the country's electricity demands projected for 2030.

The switch from coal and other fossil fuels to greener wind-based energy could also mitigate CO2 emissions, thereby reducing pollution. The report appeared as a cover story in the September 11th issue of Science.

"The world is struggling with the question of how do you make the switch from carbon-rich fuels to something carbon-free," said lead author Michael B. McElroy, Gilbert Butler Professor of Environmental Studies at Harvard's School of Engineering and Applied Sciences (SEAS).

China has become second only to the U.S. in its national power generating capacity (792.5 gigawatts, with an expected 10 percent annual increase) and is now the world's largest CO2 emitter. Thus, added McElroy, "the real question for the globe is: What alternatives does China have?"

While wind-generated energy accounts for only 0.4 percent of China's total current electricity supply, the country is rapidly becoming the world's fastest growing market for wind power, trailing only the U.S., Germany, and Spain in terms of installed capacities of existing wind farms.

Development of renewable energy in China, especially wind, received an important boost with passage of the Renewable Energy Law in 2005; the law provides favorable tax status for alternative energy investments. The Chinese government also established a concession bidding process to guarantee a reasonable return for large wind projects.

"To determine the viability of wind-based energy for China we established a location-based economic model, incorporating the bidding process, and calculated the energy cost based on geography," said co-author Xi Lu, a graduate student in McElroy's group at SEAS. "Using the same model we also evaluated the total potentials for wind energy that could be realized at a certain cost level."

Specifically, the researchers used meteorological data from NASA's Goddard Earth Observing System Data Assimilation System (GEOS). Further, they assumed the wind energy would be produced by a set of land-based 1.5-megawatt turbines operating over non-forested, ice-free, rural areas with slopes of no more than 20 percent.

"By bringing the capabilities of atmospheric science to the study of energy we were able to view the wind resource in a total context," explained co-author Chris P. Nielsen, Executive Director of the Harvard China Project, based at SEAS.

The analysis indicated that a network of wind turbines operating at as little as 20 percent of their rated capacity could provide potentially as much as 24.7 petawatt-hours of electricity annually, or more than seven times China's current consumption. The researchers also determined that wind energy alone, at around 7.6 U.S. cents per kilowatt-hour, could accommodate the country's entire demand for electricity projected for 2030.
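A quick back-of-envelope check (my arithmetic, not the study's) shows how these headline numbers hang together: the quoted 24.7 petawatt-hours and the "more than seven times current consumption" figure together imply an annual consumption of roughly 3.5 petawatt-hours and a deployment on the order of ten million 1.5-megawatt turbines:

```python
# Rough consistency check of the quoted figures (my arithmetic, not the study's).
turbine_mw = 1.5          # rated power of each land-based turbine
capacity_factor = 0.20    # "as little as 20 percent of their rated capacity"
hours_per_year = 8760

per_turbine_gwh = turbine_mw * capacity_factor * hours_per_year / 1000
potential_pwh = 24.7                           # quoted annual wind potential
implied_consumption = potential_pwh / 7        # "more than seven times" current use
turbines_implied = potential_pwh * 1e6 / per_turbine_gwh   # 1 PWh = 1e6 GWh

print(f"output per turbine            : {per_turbine_gwh:.2f} GWh per year")
print(f"implied current consumption   : about {implied_consumption:.1f} PWh per year")
print(f"turbines implied by 24.7 PWh  : about {turbines_implied / 1e6:.1f} million")
```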

"Wind farms would only need to take up land areas of 0.5 million square kilometers, or regions about three quarters of the size of Texas. The physical footprints of wind turbines would be even smaller, allowing the areas to remain agricultural," said Lu.

By contrast, to meet the increased demand for electricity during the next 20 years using fossil fuel-based energy sources, China would have to construct coal-fired power plants that could produce the equivalent of 800 gigawatts of electricity, resulting in a potential increase of 3.5 gigatons of CO2 per year. The use of cleaner wind energy could both meet future demands and, even if only used to supplement existing energy sources, significantly reduce carbon emissions.

Moving to a low-carbon energy future would require China to make an investment of around $900 billion (at current prices) over the same twenty-year period. The scientists consider this a large but not unreasonable investment given the present size of the Chinese economy. Moreover, whatever the energy source, the country will need to build and support an expanded energy grid to accommodate the anticipated growth in power demand.

"We are trying to cut into the current defined demand for new electricity generation in China, which is roughly a gigawatt a week—or an enormous 50 gigawatts per year," said McElroy. "China is bringing on several coal fire power plants a week. By publicizing the opportunity for a different way to go we will hope to have a positive influence."

In the coming months, the researchers plan to conduct a more intensive wind study in China, taking advantage of 25-year data with significantly higher spatial resolution for north Asian regions to investigate the geographical year-to-year variations of wind. The model used for assessing China could also be applied for assessing wind potential anywhere in the world, onshore and offshore, and could be extended to solar generated electricity.

###

Yuxuan Wang, Associate Professor in the Department of Environmental Science and Engineering at Tsinghua University, Beijing, China, also contributed to the study. The team's research was supported by a grant from the National Science Foundation (NSF).

Carbon nanotubes could make efficient solar cells

Public release date: 10-Sep-2009


Contact: Blaine Friedlander
bpf2@cornell.edu
607-254-8093
Cornell University
Carbon nanotubes could make efficient solar cells

Using a carbon nanotube instead of traditional silicon, Cornell researchers have created the basic elements of a solar cell that they hope will lead to much more efficient ways of converting light to electricity than those now used in calculators and on rooftops.

The researchers fabricated, tested and measured a simple solar cell called a photodiode, formed from an individual carbon nanotube. Reported online Sept. 11 in the journal Science, the researchers -- led by Paul McEuen, the Goldwin Smith Professor of Physics, and Jiwoong Park, assistant professor of chemistry and chemical biology -- describe how their device converts light to electricity in an extremely efficient process that multiplies the amount of electrical current that flows. This process could prove important for next-generation high efficiency solar cells, the researchers say.

"We are not only looking at a new material, but we actually put it into an application -- a true solar cell device," said first author Nathan Gabor, a graduate student in McEuen's lab.

The researchers used a single-walled carbon nanotube, which is essentially a rolled-up sheet of graphene, to create their solar cell. About the size of a DNA molecule, the nanotube was wired between two electrical contacts and close to two electrical gates, one negatively and one positively charged. Their work was inspired in part by previous research in which scientists created a diode, which is a simple transistor that allows current to flow in only one direction, using a single-walled nanotube. The Cornell team wanted to see what would happen if they built something similar, but this time shined light on it.

Shining lasers of different colors onto different areas of the nanotube, they found that higher levels of photon energy had a multiplying effect on how much electrical current was produced.

Further study revealed that the narrow, cylindrical structure of the carbon nanotube caused the electrons to be neatly squeezed through one by one. The electrons moving through the nanotube became excited and created new electrons that continued to flow. The nanotube, they discovered, may be a nearly ideal photovoltaic cell because it allowed electrons to create more electrons by utilizing the spare energy from the light.
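The step-like gain the team observed is reminiscent of carrier multiplication: once a photon carries several times the band-gap energy, the hot carrier can shed its excess energy by creating additional electron-hole pairs rather than heat. The toy sketch below, with an assumed band-gap value, only illustrates that counting argument and is not the Cornell group's model:

```python
# Toy counting argument with an assumed band gap (not the Cornell group's model):
# a photon carrying several times the band-gap energy can, at best, create that
# many electron-hole pairs instead of wasting the excess as heat.
band_gap_ev = 0.8   # assumed effective band gap of the nanotube, in electron-volts
for photon_ev in (0.9, 1.7, 2.5, 3.3):
    max_pairs = int(photon_ev // band_gap_ev)
    print(f"photon of {photon_ev:.1f} eV -> at most {max_pairs} electron-hole pair(s)")
```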

This is unlike today's solar cells, in which extra energy is lost in the form of heat, and the cells require constant external cooling.

Though they have made a device, scaling it up to be inexpensive and reliable would be a serious challenge for engineers, Gabor said.

"What we've observed is that the physics is there," he said.

###

The research was supported by Cornell's Center for Nanoscale Systems and the Cornell NanoScale Science and Technology Facility, both National Science Foundation facilities, as well as the Microelectronics Advanced Research Corporation Focused Research Center on Materials, Structures and Devices. Research collaborators also included Zhaohui Zhong, of the University of Michigan, and Ken Bosnick, of the National Institute for Nanotechnology at University of Alberta.

(Text by Anne Ju, Cornell Chronicle)

Monday, September 7, 2009

Making more efficient fuel cells

Public release date: 6-Sep-2009

Contact: Dianne Stilwell
diannestilwell@me.com
44-795-720-0214
Society for General Microbiology
Making more efficient fuel cells

Bacteria that generate significant amounts of electricity could be used in microbial fuel cells to provide power in remote environments or to convert waste to electricity. Professor Derek Lovley of the University of Massachusetts, USA, isolated bacteria with large numbers of tiny projections called pili, which were more efficient at transferring electrons to generate power in fuel cells than bacteria with a smooth surface. The team's findings were reported at the Society for General Microbiology's meeting at Heriot-Watt University, Edinburgh, today (7 September).

The researchers isolated a strain of Geobacter sulfurreducens which they called KN400 that grew prolifically on the graphite anodes of fuel cells. The bacteria formed a thick biofilm on the anode surface, which conducted electricity. The researchers found large quantities of pilin, a protein that makes the tiny fibres that conduct electricity through the sticky biofilm.

"The filaments form microscopic projections called pili that act as microbial nanowires," said Professor Lovley, "using this bacterial strain in a fuel cell to generate electricity would greatly increase the cell's power output."

The pili on the bacteria's surface seemed to be primarily for electrical conduction rather than to help them to attach to the anode; mutant forms without pili were still able to stay attached.

Microbial fuel cells can be used in monitoring devices in environments where it is difficult to replace batteries if they fail, but to be successful they need an efficient and long-lasting source of power. Professor Lovley described how G. sulfurreducens strain KN400 might be used in sensors placed on the ocean floor to monitor the migration of turtles.

###

Using waste to recover waste uranium

Public release date: 6-Sep-2009

Contact: Dianne Stilwell
diannestilwell@me.com
44-795-720-0214
Society for General Microbiology
Using waste to recover waste uranium

Using bacteria and inositol phosphate, a chemical analogue of a cheap waste material from plants, researchers at Birmingham University have recovered uranium from the polluted waters of uranium mines. The same technology can also be used to clean up nuclear waste. Professor Lynne Macaskie presented the group's work this week (7-10 September) at the Society for General Microbiology's meeting at Heriot-Watt University, Edinburgh.

Bacteria, in this case, E. coli, break down a source of inositol phosphate (also called phytic acid), a phosphate storage material in seeds, to free the phosphate molecules. The phosphate then binds to the uranium forming a uranium phosphate precipitate on the bacterial cells that can be harvested to recover the uranium.

This process was first described in 1995, but at that time it relied on a more expensive additive, which, combined with the then low price of uranium, made it uneconomic. The discovery that inositol phosphate is potentially six times more effective, as well as being a cheap waste material, makes the process economically viable, especially as the world price of uranium is likely to rise as countries expand their nuclear technologies in a bid to produce low-carbon energy.

As an example, if pure inositol phosphate bought from a commercial supplier is used, the process costs £1.72 per gram of uranium recovered. If a cheaper source of inositol phosphate is used (e.g. calcium phytate), the cost falls to £0.09 per gram of recovered uranium. At 2007 prices uranium cost £0.211/g; it is currently £0.09/g. At these prices the process is economic overall once the environmental protection benefit is also taken into account. Using low-grade inositol phosphate from agricultural wastes would bring the cost down still further, and the economic case will strengthen as the price of uranium is forecast to rise again.
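To make the arithmetic behind these figures explicit, here is a rough, hypothetical sketch in Python that compares the quoted recovery costs with the quoted uranium prices (all numbers are taken directly from the paragraph above; nothing else is assumed):

    # Cost comparison for uranium recovery, using only the figures quoted
    # in the release above (all values in GBP per gram of uranium).
    recovery_cost = {
        "pure inositol phosphate (commercial)": 1.72,
        "calcium phytate (cheap source)": 0.09,
    }
    uranium_price = {
        "2007 price": 0.211,
        "current price": 0.09,
    }

    for source, cost in recovery_cost.items():
        for label, price in uranium_price.items():
            margin = price - cost
            print(f"{source}, {label}: margin {margin:+.3f} GBP per gram")

On these numbers the calcium phytate route roughly breaks even at today's uranium price and is clearly in profit at 2007 prices, which is consistent with the release's point that the environmental benefit tips the overall balance.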

"The UK has no natural uranium reserves, although a significant amount of uranium is produced in nuclear wastes. There is no global shortage of uranium but from the point of view of energy security the EU needs to be able to recover as much uranium as possible from mine run-offs (which in any case pollute the environment) as well as recycling as much uranium as possible from nuclear wastes," commented Professor Macaskie, "By using a cheap feedstock easily obtained from plant wastes we have shown that an economic, scalable process for uranium recovery is possible".

###

Friday, July 3, 2009

Printable batteries

Public release date: 2-Jul-2009


Contact: Andreas Willert
andreas.willert@enas.fraunhofer.de
49-371-531-32109
Fraunhofer-Gesellschaft
Printable batteries

This release is available in German.



IMAGE: The small, thin battery comes out of the printer and can be applied to flexible substrates.




In the past, it was necessary to race to the bank for every money transfer and every bank statement. Today, bank transactions can be easily carried out at home. Now where is that piece of paper again with the TAN numbers? In the future you can spare yourself the search for the number. Simply touch your EC card and a small integrated display shows the TAN number to be used. Just type in the number and off you go. This is made possible by a printable battery that can be produced cost-effectively on a large scale. It was developed by a research team led by Prof. Dr. Reinhard Baumann of the Fraunhofer Research Institution for Electronic Nano Systems ENAS in Chemnitz together with colleagues from TU Chemnitz and Menippos GmbH. "Our goal is to be able to mass-produce the batteries at a price in the single-digit cent range each," states Dr. Andreas Willert, group manager at ENAS.

The characteristics of the battery differ significantly from those of conventional batteries. The printable version weighs less than one gram, is not even one millimeter thick and can therefore be integrated into bank cards, for example. The battery contains no mercury and is in this respect environmentally friendly. Its voltage is 1.5 V, which lies within the normal range. By placing several batteries in a row, voltages of 3 V, 4.5 V and 6 V can also be achieved. The new type of battery is composed of different layers: a zinc anode and a manganese cathode, among others. Zinc and manganese react with one another and produce electricity. However, the anode and the cathode layer are gradually used up during this chemical process. The battery is therefore suited to applications with a limited life span or a limited power requirement, for instance greeting cards.
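The higher voltages quoted above follow from simple series stacking of the printed 1.5 V cells; a minimal sketch (the cell counts are the only assumption):

    # Series stacking of printed 1.5 V zinc-manganese cells: the total
    # voltage is the single-cell voltage times the number of cells.
    CELL_VOLTAGE = 1.5  # volts, as stated in the release

    for n_cells in (1, 2, 3, 4):
        print(f"{n_cells} cell(s) in series -> {n_cells * CELL_VOLTAGE:.1f} V")
    # Prints 1.5 V, 3.0 V, 4.5 V and 6.0 V, matching the figures above.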

The batteries are printed using a silk-screen printing method similar to that used for t-shirts and signs. A kind of rubber lip presses the printing paste through a screen onto the substrate. A template covers the areas that are not to be printed on. Through this process it is possible to apply comparatively large quantities of printing paste, and the individual layers are slightly thicker than a hair. The researchers have already produced the batteries on a laboratory scale. At the end of this year, the first products could possibly be finished.

###

Thursday, May 21, 2009

MIT: Slow growth of nuclear could harm climate efforts

The rate of deployment of new nuclear power plants around the world has been much slower than needed in order to combat climate change, the Massachusetts Institute of Technology (MIT) said in an update of its in-depth study on the future of nuclear power.
K.S.Parthasarathy



WNN
Energy And Environment
MIT: Slow growth of nuclear could harm climate efforts
21 May 2009

The rate of deployment of new nuclear power plants around the world has been much slower than needed in order to combat climate change, the Massachusetts Institute of Technology (MIT) said in an update of its in-depth study on the future of nuclear power.

The 2003 edition of the Future of Nuclear Power report said that "in order to make a serious contribution to alleviating global climate change, the world would need new nuclear plants with a total capacity of at least a terawatt [1000 gigawatts] by 2050."

In its updated study, MIT says, "Since the 2003 report, interest in using electricity for plug-in hybrids and electric cars to replace motor gasoline has increased, thus placing an even greater importance on exploiting the use of carbon-free electricity generating technologies."

It added, "With regard to nuclear power, while there has been some progress since 2003, increased deployment of nuclear power has been slow both in the United States and globally, in relation to the illustrative scenario examined in the 2003 report."

MIT noted, "While the intent to build new plants has been made public in several countries, there are only a few firm commitments outside of Asia, in particular China, India, and Korea, to construction projects at this time. Even if all the announced plans for new nuclear power plant construction are realized, the total will be well behind that needed for reaching a thousand gigawatts of new capacity worldwide by 2050."

In its updated study, MIT says that, compared to 2003, "the motivation to make more use of nuclear power is greater, and more rapid progress is needed in enabling the option of nuclear power expansion to play a role in meeting the global warming challenge." It added, "The sober warning is that if more is not done, nuclear power will diminish as a practical and timely option for deployment at a scale that would constitute a material contribution to climate change risk mitigation."

Construction costs up

The latest study noted that, "Since 2003 construction costs for all types of large-scale engineered projects have escalated dramatically. The estimated cost of constructing a nuclear power plant has increased at a rate of 15% per year heading into the current economic downturn. This is based both on the cost of actual builds in Japan and Korea and on the projected cost of new plants planned for in the United States. Capital costs for both coal and natural gas have increased as well, although not by as much. The cost of natural gas and coal that peaked sharply is now receding. Taken together, these escalating costs leave the situation [of relative costs] close to where it was in 2003."

According to MIT's study, the overnight capital cost of constructing a nuclear power plant is $4000 per kilowatt (kW), in 2007 dollars. This compares with a figure of $2000/kW, in 2002 dollars, given in the original 2003 study.
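As a rough sanity check (a sketch that sets aside the finer distinction between 2002 and 2007 dollars), compounding the 2003 study's $2,000/kW figure at the quoted 15 per cent a year for the five years to 2007 lands very close to the updated $4,000/kW estimate:

    # Compound escalation of the nuclear overnight capital cost at 15%/year.
    cost_2002 = 2000.0   # USD per kW, from the 2003 study (2002 dollars)
    rate = 0.15          # 15% per year escalation quoted in the update
    years = 5            # 2002 -> 2007

    escalated = cost_2002 * (1 + rate) ** years
    print(f"Escalated cost after {years} years: ${escalated:,.0f}/kW")
    # About $4,023/kW, consistent with the update's $4,000/kW (2007 dollars).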

The updated study says that, applying the same cost of capital to nuclear as to coal and gas, nuclear came out at 6.6 c/kWh, coal at 8.3 cents and gas at 7.4 cents, assuming a carbon charge of $25 per tonne of CO2 on the two fossil fuels.
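To see roughly how a $25 per tonne CO2 charge feeds into those cents-per-kWh figures, here is a hedged sketch; the emission factors of about 1.0 tonne CO2 per MWh for coal and 0.4 for gas are typical textbook values assumed for illustration, not numbers taken from the MIT study:

    # Approximate levelized-cost penalty from a carbon charge.
    carbon_charge = 25.0   # USD per tonne of CO2, as in the MIT update
    emission_factor = {    # tonnes CO2 per MWh of electricity (assumed)
        "coal": 1.0,
        "gas": 0.4,
    }

    for fuel, factor in emission_factor.items():
        usd_per_mwh = carbon_charge * factor
        cents_per_kwh = usd_per_mwh / 10.0   # 1 USD/MWh equals 0.1 c/kWh
        print(f"{fuel}: carbon charge adds roughly {cents_per_kwh:.1f} c/kWh")
    # Roughly 2.5 c/kWh for coal and 1.0 c/kWh for gas, which accounts for a
    # large part of the gap between the 8.3 and 7.4 cent figures and
    # nuclear's 6.6 c/kWh.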



[The updated study can be downloaded from MIT's website]
Will this development help countries that have difficulty in getting electric power because of the peculiarities of their geography?
K.S.Parthasarathy



WNN
New Nuclear
Assembly of Russian floating plant starts
20 May 2009

A ceremony has been held to mark the start of the assembly of the world's first floating nuclear power plant in St Petersburg, Russia. Construction had earlier been transferred from Severodvinsk.


The keel was originally laid for the first floating plant - the Akademik Lomonosov - at the Sevmash shipyard in Severodvinsk in April 2007. However, in 2008, Rosatom said that it was to transfer its construction to the Baltiysky Zavod shipbuilding company in Saint Petersburg because Sevmash was inundated with military contracts.



Five floating reactors could go to Gazprom to power oil and gas extraction in Kola and Yamal, with four more used in northern Yakutia in connection with mining operations. Seven or eight units could be produced by 2015.


A contract was signed on 27 February 2009 between Rosatom and the Baltiysky Zavod shipyard for completion of the plant. The contract was valued at almost 10 billion roubles ($315 million). A new keel has now been laid at Saint Petersburg for the first floating plant. As part of the contract, Baltiysky Zavod will receive the incomplete floating plants started by Sevmash.

The first plant will house two 35 MW KLT-40S nuclear reactors, similar to those used in Russia's nuclear-powered icebreakers, and two generators, and will be capable of supplying a city of 200,000 people. OKBM will design and supply the reactors, while Kaluga Turbine Plant will supply the turbo-generators.
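The claim that the plant can supply a city of 200,000 people is consistent with simple per-capita arithmetic on its electrical output (a rough sketch; the reading of 35 MW as electrical output per reactor follows the article, while the judgement of what counts as a plausible per-person load is an assumption):

    # Per-capita electrical supply from the floating plant.
    reactors = 2
    output_per_reactor_mw = 35.0   # MW per KLT-40S reactor, as stated
    population = 200_000

    total_mw = reactors * output_per_reactor_mw
    watts_per_person = total_mw * 1e6 / population
    print(f"Total output: {total_mw:.0f} MW, about {watts_per_person:.0f} W per person")
    # About 350 W per person, a plausible average load for a small city.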



The Akademik Lomonosov was originally destined for the Archangelsk industrial shipyard, near Severodvinsk in northwestern Russia, but the vessel is now destined for Vilyuchinsk, in the Kamchatka region in Russia's far east.


Baltiysky Zavod is to complete the floating plant in 2011. It should then be ready for transportation by the second quarter of 2012 and is set to be handed over to Energoatom by the end of 2012. Rosatom is planning to construct seven further floating nuclear power plants in addition to the one now under construction, with several remote areas under consideration for their deployment. Gazprom is expected to use a number of the floating units to exploit oil and gas fields near the Kola and Yamal Peninsulas.

Speaking at the ceremony, Sergey Obozov, director general of Energoatom, said that construction of a second floating plant may start in the autumn of 2010. He said, "We already have agreement with the authorities of Chukotka to station the plant in Pevek."

Bacteria with a built-in thermometer

Enigmatic features of tiny creatures
K.S.Parthasarathy



Public release date: 20-May-2009


Contact: Dr. Bastian Dornbach
bastian.dornbach@helmholtz-hzi.de
49-053-161-811-407
Helmholtz Association of German Research Centres
Bacteria with a built-in thermometer
Researchers at the Helmholtz Center demonstrate how bacteria measure temperature and thereby control infection

Researchers in the Molecular Infection Biology group at the Helmholtz Centre for Infection Research (HZI) in Braunschweig, together with colleagues at the Braunschweig Technical University, have now demonstrated for the first time that bacteria of the Yersinia genus possess a unique protein thermometer, the protein RovA, which assists them in the infection process. RovA is a multi-functional sensor: it measures both the temperature of its host and the host's metabolic activity and nutrients. If these are suitable for the survival of the bacteria, the RovA protein activates the genes needed for the infection process to begin. These results have now been published in the current online edition of the journal PLoS Pathogens.

Yersinia can trigger a variety of diseases: the best known is Yersinia pestis, which caused the plague in medieval times and led to the death of around a third of Europe's population. The species Yersinia enterocolitica and Yersinia pseudotuberculosis cause an inflammation of the intestines following food poisoning: the bacteria infect the cells of the intestines, leading to heavy bouts of diarrhoea. To penetrate the intestinal cells, the Yersinia bacteria carry a surface protein called invasin. The immune cells quickly identify this so-called virulence factor as a danger and launch an immune response. To avoid this, the bacteria shed the invasin soon after entering the body. The germs then adapt their metabolism and feed on the nutrients prepared by the host cells. They also produce substances which kill off the body's defence cells, such as phagocytes. Until now, little was known about how Yersinia regulates these individual stages of infection.

Researchers at the HZI, led by Petra Dersch, have now identified how these mechanisms work. The RovA protein plays a key role: it reads the temperature for the bacteria. Depending on the bacteria's environment, it either switches on the factors required for the infection to begin or adapts the bacteria to life within the host. "The functioning of RovA in this way is unique among bacteria," says Petra Dersch.

At around 25°C, the RovA protein ensures that the Yersinia bacteria carry invasin on their surface. This means the Yersinia can penetrate the intestinal cells immediately upon reaching the 37°C intestine via food. In this warmer environment, RovA changes its shape and de-activates the gene for invasin production. Without invasin on their surface, the Yersinia bacteria are invisible to the body's immune system. In its new form, RovA can now activate other genes in the bacteria, adapting the Yersinia metabolism to that of the host.

Until now, little was known about RovA and the fact that it reacts to temperature, and the researchers were faced with a puzzle. "We have long been searching for the mechanisms which regulate RovA activity," says Petra Dersch. "It was therefore all the more surprising to discover that RovA controls various processes by acting as a thermometer and as such is self-regulating." At the end of the process, RovA is responsible for its own decomposition: once the initial stages of infection have succeeded, the Yersinia bacteria no longer need RovA, and in its modified 37°C form it can be attacked and broken down by the bacteria's own enzymes.

###

Original article: Herbst K, Bujara M, Heroven AK, Opitz W, Weichert M, et al. (2009) Intrinsic Thermal Sensing Controls Proteolysis of Yersinia Virulence Regulator RovA. PLoS Pathog 5(5): e1000435. doi:10.1371/journal.ppat.1000435

Sunday, April 26, 2009

Hydrogen protects nuclear fuel in final storage

An interesting development in nuclear waste management technology

Dr K.S.Parthasarathy





Public release date: 24-Apr-2009

Contact: Sofie Hebrand
sofie.hebrand@chalmers.se
46-317-728-464
Swedish Research Council
Hydrogen protects nuclear fuel in final storage

When Sweden's spent nuclear fuel is to be permanently stored, it will be protected by three different barriers. Even if all three barriers are damaged, the nuclear fuel will not dissolve into the groundwater, according to a new doctoral dissertation from Chalmers University of Technology in Sweden.

By Midsummer it will be announced where Sweden's spent nuclear fuel will be permanently stored. Ahead of the decision a debate is underway regarding how safe the method for final storage is, primarily in terms of the three barriers that are intended to keep radioactive material from leaking into the surrounding groundwater.

But according to the new doctoral dissertation, uranium would not be dissolved by the water even if all three barriers were compromised.

"This is a result of what we call the hydrogen effect," says Patrik Fors, who will defend his thesis in nuclear chemistry at Chalmers on Friday. "The hydrogen effect was discovered in 2000. It's a powerful effect that was not factored in when plans for permanent storage began to be forged, and now I have shown that it's even more powerful than was previously thought."

The hydrogen effect is predicated on the existence of large amounts of iron in connection with the nuclear fuel. In the Swedish method for final storage, the first barrier consists of a copper capsule that is reinforced with iron. The second barrier is a buffer of bentonite clay, and the third is 500 meters of granite bedrock. Some other countries have chosen to make the first barrier entirely of iron.

It is known that microorganisms and fissure minerals in the rock will consume all the oxygen in the groundwater. If all three barriers were to be damaged, the iron in the capsule would therefore be anaerobically corroded by the water, producing large amounts of hydrogen. In final storage at a depth of 500 meters, a pressure of at least 5 megapascals of hydrogen would be created.
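The 5 megapascal figure is in line with the ambient hydrostatic pressure at repository depth; a minimal sketch of the water-column estimate (standard values for water density and gravity are assumed, and the link to the quoted hydrogen pressure is an interpretation, not stated in the release):

    # Hydrostatic pressure of a 500 m groundwater column.
    rho = 1000.0    # kg/m^3, density of water (assumed standard value)
    g = 9.81        # m/s^2, gravitational acceleration
    depth = 500.0   # m, repository depth stated in the release

    pressure_pa = rho * g * depth
    print(f"Hydrostatic pressure at {depth:.0f} m: {pressure_pa / 1e6:.1f} MPa")
    # About 4.9 MPa, consistent with the 'at least 5 megapascals' quoted above.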

Patrik Fors has now recreated these conditions in the laboratory and examined three different types of spent nuclear fuel. All of the trials showed that the hydrogen protects the fuel from being dissolved in the water, even though the highly radioactive fuels create a corrosive environment in the water as a result of their radiation. The reason for the protective effect is that the hydrogen prevents the uranium from oxidizing and passing into solution.

Furthermore, the hydrogen makes oxidized uranium that is already dissolved in the water revert to a solid state. The outcome was that the amount of uranium found dissolved in the water, after experiments lasting several years, was lower than the natural levels in Swedish groundwater.

"The hydrogen effect will prevent the dissolution of nuclear fuel until the fuel's radioactivity is so low that it need no longer be considered a hazard," says Patrik Fors. The amount of iron in the capsules is so great that it would produce sufficient hydrogen to protect the fuel for tens of thousands of years.

###

Patrik Fors carried out his experiments at the Institute for Transuranium Elements in Karlsruhe, Germany, in a joint project with Chalmers. The institute is operated by the European Commission. The research was also funded by SKB, the Swedish Nuclear Fuel and Waste Management Company.

The dissertation "The effect of dissolved hydrogen on spent nuclear fuel corrosion" will be publicly defended on April 24 at 10 a.m. Place: Hall KE, Chemistry Building, Kemigården 4, Chalmers University of Technology, Gothenburg, Sweden.

For more information, please contact: Patrik Fors, Nuclear Chemistry, Department of Chemical and Biological Engineering, Chalmers University of Technology, Sweden

Tel: +46707-696 334 patrik.fors@chalmers.se

Supervisor: Kastriot Spahiu, Adjunct Professor, Department of Chemical and Biological Engineering, Chalmers University of Technology, Sweden

+468-459 8561 Kastriot.spahiu@skb.se

Tuesday, March 10, 2009

Inserting catheters without X-rays

X-ray imaging to locate the catheter can be avoided by using MRI, but the guide wire must then be made of plastic. A technique for producing such wires is now available and may be in clinical use shortly.

K.S.Parthasarathy


Public release date: 9-Mar-2009

Contact: Adrian Schütte
adrian.schuette@ipt.fraunhofer.de
49-241-890-4251
Fraunhofer-Gesellschaft
Inserting catheters without X-rays

This release is available in German.



Have the patient's coronary vessels, heart valves or heart muscle changed abnormally? Doctors can verify this and administer the necessary therapy with the help of a catheter, which is inserted into the body through a small incision in the groin area and pushed to the heart through the vascular system. A metal guide wire inside the catheter serves as a navigational aid: it is pulled and turned by the physician to steer and guide the catheter. At the same time the catheter's position in the vascular system has to be monitored. This task is performed by X-rays, which penetrate the patient and show exactly where the catheter is. The problem with this computed tomography method is that it exposes the patient to quite a high dose of radiation. In addition, a contrast medium has to be injected into the patient's body in order to make the vascular system and the soft tissue visible on the X-ray images.

Researchers at the Fraunhofer Institute for Production Technology IPT in Aachen have now found a way of avoiding both the radiation and the contrast medium. In collaboration with colleagues at Philips and University Hospital Aachen, they have developed a guide wire made of glass-fiber-reinforced plastic. "Because the guide wire is made of plastic the imaging can be performed by magnetic resonance tomography instead of computer tomography," says IPT scientist Adrian Schütte. "This is not possible with metal guide wires as the metal wire acts as an antenna and heats up too much – this would damage the vessels, and could cause proteins to clot." Magnetic resonance tomography has many advantages for doctors and patients. It does not produce ionizing radiation like computer tomography, and soft tissue is clearly visible, so there is no need for a contrast medium.

For the manufacture of the two-meter guide wires the researchers use the pultrusion method, which is the standard procedure for making continuous profiles from glass-fiber-reinforced plastic. "Diameters of half a millimeter or less are required for the guide wires – that's the absolute minimum," explains Schütte. The new guide wires will be presented at the JEC trade fair in Paris (Hall 1, Stand T18) from March 24 to 26 and will be used in hospitals for the first time in the next few months.

###