Wednesday, November 25, 2009

Butterfly proboscis to sip cells

Nature does everything the very best way. We can find solutions to complex problems by keenly observing nature. This is substantiated by the following news story.

K.S. Parthasarathy



Public release date: 22-Nov-2009

Contact: Jason Bardi
jbardi@aip.org
301-209-3091
American Institute of Physics
Butterfly proboscis to sip cells
Nature-inspired probes to be presented at Fluid Dynamics Conference next week

WASHINGTON, D.C. November 18, 2009 -- A butterfly's proboscis looks like a straw -- long, slender, and used for sipping -- but it works more like a paper towel, according to Konstantin Kornev of Clemson University. He hopes to borrow the tricks of this piece of insect anatomy to make small probes that can sample the fluid inside of cells.

Kornev will present his work next week at the 62nd Annual Meeting of the American Physical Society's (APS) Division of Fluid Dynamics, which will take place from November 22-24 at the Minneapolis Convention Center.

At the scales at which a butterfly or moth lives, liquid is so thick that it is able to form fibers. The insects' liquid food -- drops of water, animal tears, and the juice inside decomposed fruit -- spans nearly three orders of magnitude in viscosity. Pumping liquid through its feeding tube would require an enormous amount of pressure.

"No pump would support that kind of pressure," says Kornev. "The liquid would boil spontaneously."

Instead of pumping, Kornev's findings suggest that butterflies draw liquid upwards using capillary action -- the same force that pulls liquid across a paper towel. The proboscis resembles a rolled-up paper towel, with tiny grooves that pull the liquid upwards along the edges, carrying along the bead of liquid in the middle of the tube. This process is not nearly as affected by viscosity as pumping.
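The pressure argument can be made concrete with a rough back-of-the-envelope comparison. All numbers below are illustrative assumptions (a syrup-thick liquid in a tube of roughly proboscis-like dimensions), not measurements from Kornev's work; they only show why suction pumping at this scale is implausible compared with capillary transport:

```python
import math

# Assumed, illustrative values -- not from the article.
mu = 1.0       # dynamic viscosity, Pa*s (syrup-like; water is ~0.001)
L = 0.02       # tube length, m (~2 cm, a proboscis-scale length)
r = 25e-6      # tube radius, m (~25 micrometres)
Q = 1e-11      # volumetric flow rate, m^3/s (~10 nL/s)
gamma = 0.07   # surface tension, N/m (water-like)

# Hagen-Poiseuille pressure drop needed to pump the liquid through the tube
dp_pump = 8 * mu * L * Q / (math.pi * r**4)

# Laplace capillary pressure available in a fully wetting tube of the same radius
dp_cap = 2 * gamma / r

print(f"pumping pressure needed: {dp_pump:.0f} Pa")     # of order megapascals
print(f"capillary pressure available: {dp_cap:.0f} Pa") # of order kilopascals
```

With these assumed numbers, pumping a viscous liquid would demand pressures of order ten atmospheres, far beyond what a small insect pump could sustain without cavitation, which is the point behind Kornev's remark that "the liquid would boil spontaneously."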

Kornev was recently awarded an NSF grant to develop artificial probes made of nanofibers that use a similar principle to draw out the viscous liquid inside of cells and examine their contents.

The presentation, "Butterfly proboscis as a biomicrofluidic system" by Konstantin Kornev et al. of Clemson University, is at 12:01 p.m. on Sunday, November 22, 2009.

###

Abstract: http://meetings.aps.org/Meeting/DFD09/Event/110814

MORE MEETING INFORMATION
The 62nd Annual DFD Meeting will be held at the Minneapolis Convention Center in downtown Minneapolis. All meeting information, including directions to the Convention Center, is at: http://www.dfd2009.umn.edu/

PRESS REGISTRATION
Credentialed full-time journalists and professional freelance journalists working on assignment for major publications or media outlets are invited to attend the conference free of charge. If you are a reporter and would like to attend, please contact Jason Bardi (jbardi@aip.org, 301-209-3091).

USEFUL LINKS
Main meeting Web site: http://meetings.aps.org/Meeting/DFD09/Content/1629
Searchable form: http://meetings.aps.org/Meeting/DFD09/SearchAbstract
Local Conference Meeting Website: http://www.dfd2009.umn.edu/
PDF of Meeting Abstracts: http://flux.aps.org/meetings/YR09/DFD09/all_DFD09.pdf
Division of Fluid Dynamics page: http://www.aps.org/units/dfd/
Virtual Press Room: SEE BELOW

VIRTUAL PRESS ROOM
The APS Division of Fluid Dynamics Virtual Press Room will contain tips on dozens of stories as well as stunning graphics and lay-language papers detailing some of the most interesting results at the meeting. Lay-language papers are roughly 500-word summaries written for a general audience by the authors of individual presentations, with accompanying graphics and multimedia files. The Virtual Press Room will serve as a starting point for journalists who are interested in covering the meeting but cannot attend in person. See: http://www.aps.org/units/dfd/pressroom/index.cfm

Currently, the Division of Fluid Dynamics Virtual Press Room contains information related to the 2008 meeting. In mid-November, the Virtual Press Room will be updated for this year's meeting, and another news release will be sent out at that time.

ONSITE WORKSPACE FOR REPORTERS
A reserved workspace with wireless internet connections will be available for use by reporters. It will be located in the meeting exhibition hall (Ballroom AB) at the Minneapolis Convention Center on Sunday and Monday from 8:00 a.m. to 5:00 p.m. and on Tuesday from 8:00 a.m. to noon. Press announcements and other news will be available in the Virtual Press Room.

GALLERY OF FLUID MOTION
Every year, the APS Division of Fluid Dynamics hosts posters and videos that show stunning images and graphics from either computational or experimental studies of flow phenomena. The outstanding entries, selected by a panel of referees for artistic content, originality and ability to convey information, will be honored during the meeting, placed on display at the Annual APS Meeting in March of 2010, and will appear in the annual Gallery of Fluid Motion article in the September 2010 issue of the journal Physics of Fluids.

This year, selected entries from the 27th Annual Gallery of Fluid Motion will be hosted as part of the Fluid Dynamics Virtual Press Room. In mid-November, when the Virtual Press Room is launched, another announcement will be sent out.

ABOUT THE APS DIVISION OF FLUID DYNAMICS
The Division of Fluid Dynamics of the American Physical Society exists for the advancement and diffusion of knowledge of the physics of fluids with special emphasis on the dynamical theories of the liquid, plastic and gaseous states of matter under all conditions of temperature and pressure. See: http://www.aps.org/units/dfd/



Monday, November 9, 2009

Diagnostic errors in medicine

The latest issue of the Archives of Internal Medicine [2009;169(20):1881-1887] published an interesting paper analyzing diagnostic errors in medicine. Physicians from a few reputed hospitals in the USA found that missed or delayed diagnoses are a common but understudied area in patient safety research. They surveyed clinicians to solicit perceived cases of missed and delayed diagnoses to better understand the types, causes, and prevention of such errors.

They administered a 6-item written survey at 20 grand rounds presentations across the United States and by mail at 2 collaborating institutions. They asked the respondents to report three cases of diagnostic errors and to describe their perceived causes, seriousness, and frequency.

Three hundred and ten physicians reported 669 cases from 22 institutions. Cases without diagnostic errors or lacking sufficient details were excluded. Of the 583 cases that remained, 162 errors (28%) were rated as major, 241 (41%) as moderate, and 180 (31%) as minor or insignificant. Pulmonary embolism was the most common missed or delayed diagnosis (26 cases [4.5% of total]), followed by drug reactions or overdose (26 cases [4.5%]), lung cancer (23 cases [3.9%]), colorectal cancer (19 cases [3.3%]), acute coronary syndrome (18 cases [3.1%]), breast cancer (18 cases [3.1%]), and stroke (15 cases [2.6%]).
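The severity tallies quoted above can be checked for internal consistency with a few lines of arithmetic (the counts are taken from the article; the percentages are recomputed from them):

```python
# Severity counts as reported in the Archives of Internal Medicine paper.
total = 583
severity = {"major": 162, "moderate": 241, "minor or insignificant": 180}

# The three severity groups should account for every included case.
assert sum(severity.values()) == total

# Recompute the quoted percentages (28%, 41%, 31%).
for label, n in severity.items():
    print(f"{label}: {n} cases ({n / total:.0%})")
```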

Clinicians made errors most frequently in the testing phase (failure to order, report, and follow up on laboratory results) (44%), followed by clinician assessment errors (failure to consider or correctly weigh competing diagnoses) (32%), history taking (10%), physical examination (10%), and referral or consultation errors and delays (3%).

The researchers concluded that physicians readily recalled multiple cases of diagnostic errors and were willing to share their experiences. Using a new taxonomy tool and aggregating cases by diagnosis and error type revealed patterns of diagnostic failures that suggested areas for improvement. Systematic solicitation and analysis of such errors can identify potential preventive strategies.


The authors were from Departments of Medicine (Drs Schiff and Kim and Mss Krosnjar and Wisniewski) and Emergency Medicine (Dr Cosby), Cook County Hospital, Chicago, Illinois; Division of General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts (Drs Schiff and Hasan); Department of Medicine, Rush University, Chicago (Drs Schiff, Abrams, Hasler, and McNutt and Mr Odwazny); Departments of Health Policy and Administration (Dr Kim) and Medical Education (Dr Elstein) and College of Pharmacy (Dr Lambert), University of Illinois at Chicago; and Department of Family and Preventive Medicine, University of California, San Diego (Dr Kabongo).

Thursday, October 1, 2009

Spallation Neutron Source first of its kind to reach megawatt power

Public release date: 29-Sep-2009
Great news.

Dr. K.S. Parthasarathy

Contact: Bill Cabage
cabagewh@ornl.gov
865-574-4399
DOE/Oak Ridge National Laboratory
Spallation Neutron Source first of its kind to reach megawatt power

OAK RIDGE, Tenn., Sept. 28, 2009 -- The Department of Energy's Spallation Neutron Source (SNS), already the world's most powerful facility for pulsed neutron scattering science, is now the first pulsed spallation neutron source to break the one-megawatt barrier.

"Advances in the materials sciences are fundamental to the development of clean and sustainable energy technologies. In reaching this milestone of operating power, the Spallation Neutron Source is providing scientists with an unmatched resource for unlocking the secrets of materials at the molecular level," said Dr. William F. Brinkman, Director of DOE's Office of Science.

SNS operators at DOE's Oak Ridge National Laboratory pushed the controls past the megawatt mark on September 18 as the SNS ramped up for its latest operational run.

"The attainment of one megawatt in beam power symbolizes the advancement in analytical resources that are now available to the neutron scattering community through the SNS," said ORNL Director Thom Mason, who led the SNS project during its construction. "This is a great achievement not only for DOE and Oak Ridge National Laboratory, but for the entire community of science."

Before the SNS, the world's spallation neutron sources operated in the hundred-kilowatt range. The SNS actually became a world-record holder in August 2007 when it reached 160 kilowatts, earning it an entry in the Guinness Book of World Records as the world's most powerful pulsed spallation neutron source.

Beam power isn't merely a numbers game. A more powerful beam means more neutrons are spalled from SNS's mercury target. For the researcher, the difference in beam intensity is comparable to the ability to see with a car's headlights versus a flashlight. More neutrons also enhance scientific opportunities, including flexibility for smaller samples and for real-time studies at shorter time scales. For example, experiments will be possible that use just one pulse of neutrons to illuminate the dynamics of scientific processes.

Eventually, the SNS will reach its design power of 1.4 megawatts. The gradual increase of beam power has been an ongoing process since the SNS was completed and activated in late April 2006.

In the meantime, scientists have been performing cutting-edge experiments and materials analysis as its eventual suite of 25 instruments comes on line. As DOE Office of Science user facilities, the SNS and its companion facility, the High Flux Isotope Reactor, host researchers from around the world for neutron scattering experiments.

###

ORNL is managed by UT-Battelle for the Department of Energy.

NOTE TO EDITORS: You may read other press releases from Oak Ridge National Laboratory or learn more about the lab at http://www.ornl.gov

Wednesday, September 23, 2009

Tel Aviv University's 'Dust Alert' exposes dangerous invisible pollution, pollen and construction waste

Public release date: 22-Sep-2009

Contact: George Hunka
ghunka@aftau.org
212-742-9070
American Friends of Tel Aviv University
Tel Aviv University invention busts dust
TAU's 'Dust Alert' exposes dangerous invisible pollution, pollen and construction waste

Worried that dust from a nearby construction zone will harm your family's health? A new Tel Aviv University tool could either confirm your suspicions or, better yet, set your mind at rest.

Prof. Eyal Ben-Dor and his Ph.D. student Dr. Sandra Chudnovsky, of TAU's Department of Geography, have developed a sensor called "Dust Alert" ― the first of its kind ― to help families and authorities monitor the quality of the air they breathe. Like an ozone or carbon monoxide meter, it measures the concentration of small particles that may contaminate the air in your home. Scientific studies on "Dust Alert" appeared recently in the journal Science of the Total Environment, Urban Air Pollution: Problems, Control Technologies and Management Practices.

"It works just like an ozone meter would," says Prof. Ben-Dor. "You put it in your home or office for three weeks, and it can give you real-time contamination levels in terms of dust, pollen and toxins." Functioning like a tiny chemistry lab, the device can precisely determine the chemical composition of the toxins, so homeowners, office managers and factories can act to improve air quality.

Using the measurements, Prof. Ben-Dor can sometimes find a quick remedy for a dusty or pollen-filled home. The solution could be as easy as keeping a window open, he says. "We've found through our ongoing research that some simple actions at home can have a profound effect on the quality of air we breathe."

Instant results

Based on a portable chemical analyzer called a spectrophotometer, the invention can be installed and begin to collect data within minutes, although several weeks' worth of samples produces the best assessment of air quality. The longer period allows for fluctuations in both internal and external environments, such as changing weather patterns.

The "Dust Alert" fills an important need. Polluted air, breathed in for weeks, months and sometimes years, can have fatal consequences, leading to asthma, bronchitis and lung cancer. With findings from Prof. Ben-Dor's invention, urban planners can provide better solutions and mitigate risks. "We can certainly give an accurate forecast about the health of a home or apartment for prospective home owners. If somebody in your family has an allergy, poor air quality can be a deal breaker," says Prof. Ben-Dor.

Prof. Ben-Dor's device may be most useful in the aftermath of disasters, such as chemical fires, heavy dust storms, hurricanes or tragedies like 9/11. Survivors of these situations are usually unaware of the lingering environmental problems, and the government can't do enough to protect them because no accurate tools exist to define the risk. Using a Dust Alert, residents could be advised to vacate their homes and offices until the dust has cleared, or to take simple precautions such as aerating hazardous rooms in a flat, suggests Prof. Ben-Dor.

Putting dust on the map

According to Prof. Ben-Dor, the Dust Alert could also be used by cities and counties to develop "dust maps" that provide detailed environmental information about streets and neighborhoods, permitting government authorities like the EPA to more successfully identify and prosecute offenders. Currently, for example, there is no system for demonstrating how construction sites compromise people's health.

"Until now, people have had to grin and bear the polluted air they breathe," says Prof. Ben-Dor. "The Dust Alert could provide crucial reliable evidence of pollution, so that society at large can breathe easier. We can see the dust on the furniture and on the windows, but most of us can't see the dust we breathe. For the first time, we are able to detect it and measure its more dangerous components."

With their dust maps, TAU scientists have already correlated urban heat islands with high levels of particulate matter, giving urban planners crucial information for the development of green spaces and city parks. Prof. Ben-Dor also plans to develop his prototype into a home-and-office unit, while offering customized services that can help people decode what's left in the dust.

###

American Friends of Tel Aviv University (www.aftau.org) supports Israel's leading and most comprehensive center of higher learning. In independent rankings, TAU's innovations and discoveries are cited more often by the global scientific community than all but 20 other universities worldwide.

Internationally recognized for the scope and groundbreaking nature of its research programs, Tel Aviv University consistently produces work with profound implications for the future.


Diamonds may be the ultimate MRI probe, say quantum physicists

A search for quantum computers led to a medical application. One day we may get MRI-like devices that can probe individual drug molecules and living cells.

K.S. Parthasarathy


Public release date: 22-Sep-2009

Contact: Chad Boutin
boutin@nist.gov
301-975-4261
National Institute of Standards and Technology (NIST)
Diamonds may be the ultimate MRI probe, say quantum physicists

Diamonds, it has long been said, are a girl's best friend. But a research team including a physicist from the National Institute of Standards and Technology (NIST) has recently found* that the gems might turn out to be a patient's best friend as well.

The team's work has the long-term goal of developing quantum computers, but it has borne fruit that may have more immediate application in medical science. Their finding that a candidate "quantum bit" has great sensitivity to magnetic fields hints that MRI-like devices that can probe individual drug molecules and living cells may be possible.

The candidate system, formed from a nitrogen atom lodged within a diamond crystal, is promising not only because it can sense atomic-scale variations in magnetism, but also because it functions at room temperature. Most other such devices used either in quantum computation or for magnetic sensing must be cooled to nearly absolute zero to operate, making it difficult to place them near live tissue. However, using the nitrogen as a sensor or switch could sidestep that limitation.

Diamond, which is formed of pure carbon, occasionally has minute imperfections within its crystalline lattice. A common impurity is a "nitrogen vacancy", in which two carbon atoms are replaced by a single atom of nitrogen, leaving the other carbon atom's space vacant. Nitrogen vacancies are in part responsible for diamond's famed luster, for they are actually fluorescent: when green light strikes them, the nitrogen atom's two excitable unpaired electrons glow a brilliant red.

The team can use slight variations in this fluorescence to determine the magnetic spin of a single electron in the nitrogen. Spin is a quantum property that has a value of either "up" or "down," and therefore could represent one or zero in binary computation. The team's recent achievement was to transfer this quantum information repeatedly between the nitrogen electron and the nuclei of adjacent carbon atoms, forming a small circuit capable of logic operations. Reading a quantum bit's spin information—a fundamental task for a quantum computer—has been a daunting challenge, but the team demonstrated that by transferring the information back and forth between the electron and the nuclei, the information could be amplified, making it much easier to read.

Still, NIST theoretical physicist Jacob Taylor said the findings are "evolutionary, not revolutionary" for the quantum computing field and that the medical world may reap practical benefits from the discovery long before a working quantum computer is built. He envisions diamond-tipped sensors performing magnetic resonance tests on individual cells within the body, or on single molecules drug companies want to investigate—a sort of MRI scanner for the microscopic. "That's commonly thought not to be possible because in both of these cases the magnetic fields are so small," Taylor says. "But this technique has very low toxicity and can be done at room temperature. It could potentially look inside a single cell and allow us to visualize what's happening in different spots."

The Harvard University-based team also includes scientists from the Joint Quantum Institute (a partnership of NIST and the University of Maryland), the Massachusetts Institute of Technology and Texas A&M University.

###

* L. Jiang, J.S. Hodges, J.R. Maze, P. Maurer, J.M. Taylor, D.G. Cory, P.R. Hemmer, R.L. Walsworth, A. Yacoby, A.S. Zibrov and M.D. Lukin. Repetitive readout of a single electronic spin via quantum logic with nuclear spin ancillae. Science, DOI: 10.1126/science.1176496, published online Sept. 10, 2009.

See http://www.nist.gov/public_affairs/techbeat/tb2009_0922.htm#diamonds for illustration to accompany story.


Tuesday, September 15, 2009

Study identifies which children do not need CT scans after head trauma

There is general awareness that children must not be exposed to unwanted X-ray doses. This paper came at an opportune time. Bold implementation of the guidelines will be essential. Professional associations must review the work urgently and accept the guidelines, with modifications if any are required.

Dr. K.S. Parthasarathy

EurekAlert

Public release date: 14-Sep-2009

Contact: Charlie Casey
charles.casey@ucdmc.ucdavis.edu
916-734-9048
University of California - Davis - Health System
Study identifies which children do not need CT scans after head trauma
Research provides new guidelines to identify children with mild injuries and reduce radiation exposure from CT

A substantial percentage of children who get CT scans after apparently minor head trauma do not need them, and as a result are put at increased risk of cancer due to radiation exposure. After analyzing more than 42,000 children with head trauma, a national research team led by two UC Davis emergency department physicians has developed guidelines for doctors who care for children with head trauma aimed at reducing those risks.

Their findings appear in an article published online today and in an upcoming edition of The Lancet.

The collaborative study includes data collected at 25 hospitals from children who were evaluated for the possibility of serious brain injury following trauma to the head. Researchers found that one in five children over the age of 2 and nearly one-quarter of those under 2 who received CT scans following head trauma did not need them because they were at very low risk of having serious brain injuries. In these low-risk children, the risk of developing cancer due to radiation exposure outweighed the risk of serious brain injury.

"When you have a sample size this large, it is easier to get your hands on the truth," said Nathan Kuppermann, professor and chair of emergency medicine, professor of pediatrics at UC Davis Children's Hospital and lead author of the study. "We think our investigation provides the best available evidence regarding the use of CT scans in children with head trauma, and it indicates that CT use can be safely reduced by eliminating its application in those children who are at very low risk of serious brain injuries."

As part of the study, Kuppermann and his colleagues developed a set of rules for identifying low-risk patients who would not need a CT. The "prediction rules" for children under 2 and for those 2 and older depend on the presence or absence of various symptoms and circumstances, including the way the injury was sustained, a history of loss of consciousness, neurological status at the time of evaluation and clinical evidence of skull fracture for both age groups. The use of CT in patients who do not fall into the low-risk group identified by the prediction rules will depend on other factors, such as the physician's experience, the severity and number of symptoms, and other factors.

The Centers for Disease Control estimates that 435,000 children under 14 visit emergency rooms every year to be evaluated for traumatic brain injury (TBI). Not all head trauma results in a TBI. The severity of a brain injury may range from mild, causing brief change in mental status or consciousness, to severe, causing permanent symptoms and irreversible damage.

For years, studies have suggested that CT scans were being overused to rule out traumatic brain injuries. However, those studies were considered too small to be sufficiently accurate and not precise enough to be widely applicable to a general population. The sheer size of the current study, and the fact that the investigators created the accurate prediction rules with one large group of children with head trauma and then tested the rules on another large but separate group to demonstrate their validity, allows physicians to have confidence in the results. The researchers emphasized, however, that the rules are not intended to replace clinical judgment.

"We're arming the clinician with the best available evidence so that they can make the best decisions," said James Holmes, professor of emergency medicine at UC Davis School of Medicine and a co-author of the report. "There certainly are instances when the risks of radiation are worth it, such as in cases of blunt head trauma which result in changes in neurological status or clinical evidence of skull fractures. However, clinicians need reliable data to help them make those judgment calls when it is not clear whether or not a patient needs a CT. Until now, physicians haven't had data based on large and validated research."

The current study comes on the heels of an article published in late August by The New England Journal of Medicine that showed that at least 4 million Americans under age 65 are exposed to high doses of radiation each year from medical imaging tests, with CT scans accounting for almost one half of the total radiation dose. About 10 percent of those get more than the maximum annual exposure allowed for nuclear power plant employees or anyone else who works with radioactive material.

Studies show that exposure to radiation increases the risk of cancer. Radiation exposure to the brain of developing children is of particular concern and must be weighed carefully against the risk of traumatic brain injury that could cause permanent damage or death if not identified early. If the new guidelines are applied appropriately, the use of CT scans nationwide could be significantly reduced.

The effort was made possible by the Pediatric Emergency Care Applied Research Network (PECARN), which enabled the massive collection of data. Supported by the U.S. Department of Health and Human Services' Emergency Medical Services for Children Program, PECARN is the first federally-funded, multi-institutional network for research in pediatric emergency medicine in the nation. The network conducts research into the prevention and management of acute illnesses and injuries in children and youth across the continuum of emergency medicine and health care.

"Children with medical and traumatic illnesses usually have good outcomes, but you need a lot of children to assess factors and treatments that predict both good and bad outcomes. By studying large numbers of children, in a variety of settings and from diverse populations, the results will more likely be applicable to the general population. That's the power of PECARN," Kuppermann said. "Combined, our network of emergency departments around the country evaluates approximately 1 million children per year."

Along with the UC Davis team, key PECARN researchers in the Lancet study included Peter S. Dayan, from New York-Presbyterian Hospital and Columbia University Medical Center in New York; John D. Hoyle, Jr., from Helen DeVos Children's Hospital in Grand Rapids; Shireen M. Atabaki, from Children's National Medical Center in Washington, D.C.; and Richard Holubkov from the PECARN Data Coordinating Center at the University of Utah.

In order to create the prediction rules, the PECARN investigators studied outcomes in more than 42,000 children with minor initial symptoms and signs of head trauma. CT scans were performed in nearly 15,000 of those patients. Serious brain injuries were diagnosed in 376 children, and 60 children underwent neurosurgery.

Using these data, the researchers developed two prediction rules for identifying mild cases that do not need CT scans. One rule was developed for children under the age of 2 and another for those 2 and over. It was important to study children under 2 separately because they cannot communicate their symptoms or offer information as well as older children, and they are more sensitive to the effects of radiation.

Children under 2 who fell into the low-risk group showed normal mental status, no scalp swelling, no significant loss of consciousness, no palpable skull fracture, were normal-acting (according to the parent), and had an injury that was sustained in a non-severe way. Severe accidents, which excluded children from the low-risk group, included motor vehicle crashes in which the patient was ejected, and bicycle accidents involving automobiles, in which the patient was not wearing a helmet. Key indicators for children older than 2 who were at low-risk for brain injury included normal mental status, no loss of consciousness, no vomiting, no signs of fracture of the base of skull, no severe headache, and they did not sustain the injury in a serious accident.
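The low-risk checklist for children under 2 described above combines its criteria with a simple logical AND: a child is low risk only if every criterion is met. As an illustration of that structure only (a paraphrase of the article's wording, with parameter names of my own choosing; emphatically not a clinical decision tool), it can be sketched as:

```python
# Illustrative paraphrase of the PECARN low-risk criteria for children
# under 2, as summarized in the article. NOT for clinical use.

def low_risk_under_2(normal_mental_status, scalp_swelling,
                     significant_loss_of_consciousness,
                     palpable_skull_fracture,
                     acting_normally_per_parent,
                     severe_injury_mechanism):
    """Return True only when every low-risk criterion is satisfied."""
    return (normal_mental_status
            and not scalp_swelling
            and not significant_loss_of_consciousness
            and not palpable_skull_fracture
            and acting_normally_per_parent
            and not severe_injury_mechanism)

# A child meeting all criteria falls in the low-risk group:
print(low_risk_under_2(True, False, False, False, True, False))  # -> True
# Any single failed criterion (here, scalp swelling) removes low-risk status:
print(low_risk_under_2(True, True, False, False, True, False))   # -> False
```

The all-or-nothing structure is what gives the rule its reported sensitivity: a child is excluded from the low-risk group on any one positive finding.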

The researchers then validated these rules by applying them to data from a second population of more than 8,600 children. In more than 99.9 percent of the cases, the rules accurately predicted children who were not diagnosed with serious brain injuries and were therefore indeed at low risk.

The researchers also identified and separated children at intermediate and high risk of serious brain injuries. Those in the high-risk group should receive CT scans, the researchers wrote. The PECARN team is currently working on refining recommendations for the use of CT scans in those at intermediate risk. Until now, emergency room physicians have relied mostly on instincts when deciding whether or not the symptoms of a child with head trauma warrant the use of CT.

"Now we have much better evidence to assist with making decisions regarding CT use," Kuppermann said.

###

UC Davis has been part of PECARN since its inception in 2001. It is the leading center in one of four PECARN Research Nodes, which also includes Children's Hospital of Philadelphia; St. Louis Children's Hospital; Children's Hospital of Wisconsin; Cincinnati Children's Hospital Medical Center; and Primary Children's Medical Center in Salt Lake City.

A total of 32 PECARN researchers were substantially involved in this study. This research was supported by the Emergency Medical Services for Children program of the Maternal and Child Health Bureau, and the Maternal and Child Health Bureau Research Program, Health Resources and Services Administration, U.S. Department of Health and Human Services.

Friday, September 11, 2009

Caltech scientists develop novel use of neurotechnology to solve classic social problem

Public release date: 10-Sep-2009


Contact: Lori Oliwenstein
lorio@caltech.edu
626-395-3631
California Institute of Technology
Caltech scientists develop novel use of neurotechnology to solve classic social problem
Research shows how brain imaging can be used to create new and improved solutions to the public-goods provision problem

PASADENA, Calif.—Economists and neuroscientists from the California Institute of Technology (Caltech) have shown that they can use information obtained through functional magnetic resonance imaging (fMRI) measurements of whole-brain activity to create feasible, efficient, and fair solutions to one of the stickiest dilemmas in economics, the public goods free-rider problem—long thought to be unsolvable.

This is one of the first-ever applications of neurotechnology to real-life economic problems, the researchers note. "We have shown that by applying tools from neuroscience to the public-goods problem, we can get solutions that are significantly better than those that can be obtained without brain data," says Antonio Rangel, associate professor of economics at Caltech and the paper's principal investigator.

The paper describing their work was published today in Science Express, the online edition of the journal Science.

Examples of public goods range from healthcare, education, and national defense to the weight room or heated pool that your condominium board decides to purchase. But how does the government or your condo board decide which public goods to spend its limited resources on? And how do these powers decide the best way to share the costs?

"In order to make the decision optimally and fairly," says Rangel, "a group needs to know how much everybody is willing to pay for the public good. This information is needed to know if the public good should be purchased and, in an ideal arrangement, how to split the costs in a fair way."

In such an ideal arrangement, someone who swims every day should be willing to pay more for a pool than someone who hardly ever swims. Likewise, someone who has kids in public school should have more of her taxes put toward education.

But providing public goods optimally and fairly is difficult, Rangel notes, because the group leadership doesn't have the necessary information. And when people are asked how much they value a particular public good—with that value measured in terms of how many of their own tax dollars, for instance, they'd be willing to put into it—their tendency is to lowball.

Why? "People can enjoy the good even if they don't pay for it," explains Rangel. "Underreporting its value to you will have a small effect on the final decision by the group on whether to buy the good, but it can have a large effect on how much you pay for it."

In other words, he says, "There's an incentive for you to lie about how much the good is worth to you."

That incentive to lie is at the heart of the free-rider problem, a fundamental quandary in economics, political science, law, and sociology. It's a problem that professionals in these fields have long assumed has no solution that is both efficient and fair.

In fact, for decades it's been assumed that there is no way to give people an incentive to be honest about the value they place on public goods while maintaining the fairness of the arrangement.

"But this result assumed that the group's leadership does not have direct information about people's valuations," says Rangel. "That's something that neurotechnology has now made feasible."

And so Rangel, along with Caltech graduate student Ian Krajbich and their colleagues, set out to apply neurotechnology to the public-goods problem.

In their series of experiments, the scientists tried to determine whether fMRI could allow them to construct informative measures of the value a person assigns to one or another public good. Once they'd determined that fMRI images—analyzed using pattern-classification techniques—can confer at least some information (albeit "noisy" and imprecise) about what a person values, they went on to test whether that information could help them solve the free-rider problem.

They did this by setting up a classic economic experiment, in which subjects would be rewarded (paid) based on the values they were assigned for an abstract public good.

As part of this experiment, volunteers were divided up into groups. "The entire group had to decide whether or not to spend their money purchasing a good from us," Rangel explains. "The good would cost a fixed amount of money to the group, but everybody would have a different benefit from it."

The subjects were asked to reveal how much they valued the good. The twist? Their brains were being imaged via fMRI as they made their decision. If there was a match between their decision and the value detected by the fMRI, they paid a lower tax than if there was a mismatch. It was, therefore, in all subjects' best interest to reveal how they truly valued a good; by doing so, they would on average pay a lower tax than if they lied.

"The rules of the experiment are such that if you tell the truth," notes Krajbich, who is the first author on the Science paper, "your expected tax will never exceed your benefit from the good."

In fact, the more cooperative subjects are when undergoing this entirely voluntary scanning procedure, "the more accurate the signal is," Krajbich says. "And that means the less likely they are to pay an inappropriate tax."
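The logic of the incentive can be sketched with a toy simulation. In this sketch—which is an illustration of the general idea, not the actual mechanism or parameters from the Science paper—the fMRI reading is modeled as the subject's true value plus Gaussian noise, and the tax is a fixed cost share plus a penalty that grows with the gap between the stated value and the noisy neural signal. Under these assumed rules, understating your value raises, not lowers, your expected tax:

```python
import random

def expected_tax(true_value, report, penalty=0.5, noise_sd=1.0,
                 trials=20000, seed=0):
    """Monte Carlo estimate of the expected tax for a given report.

    Illustrative model only: the neural signal is the true value plus
    Gaussian noise, and the tax is a fixed cost share plus a penalty
    proportional to the mismatch between report and signal.
    """
    rng = random.Random(seed)
    base_share = 1.0
    total = 0.0
    for _ in range(trials):
        signal = true_value + rng.gauss(0.0, noise_sd)
        total += base_share + penalty * abs(report - signal)
    return total / trials

true_v = 4.0
honest = expected_tax(true_v, report=true_v)   # tell the truth
lowball = expected_tax(true_v, report=1.0)     # understate the value
print(honest < lowball)  # prints True: honesty minimizes the expected tax
```

Because the penalty is centered on the (noisy) signal, the report that minimizes the expected mismatch is the true value itself—which is the intuition behind Krajbich's point that a more accurate signal makes an inappropriate tax less likely.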

This changes the whole free-rider scenario, notes Rangel. "Now, given what we can do with the fMRI," he says, "everybody's best strategy in assigning value to a public good is to tell the truth, regardless of what you think everyone else in the group is doing."

And tell the truth they did—98 percent of the time, once the rules of the game had been established and participants realized what would happen if they lied. In this experiment, there is no free ride, and thus no free-rider problem.

"If I know something about your values, I can give you an incentive to be truthful by penalizing you when I think you are lying," says Rangel.

While the readings do give the researchers insight into the value subjects might assign to a particular public good, thus allowing them to know when those subjects are being dishonest about the amount they'd be willing to pay toward that good, Krajbich emphasizes that this is not actually a lie-detector test.

"It's not about detecting lies," he says. "It's about detecting values—and then comparing them to what the subjects say their values are."

"It's a socially desirable arrangement," adds Rangel. "No one is hurt by it, and we give people an incentive to cooperate with it and reveal the truth."

"There is mind reading going on here that can be put to good use," he says. "In the end, you get a good produced that has a high value for you."

From a scientific point of view, says Rangel, these experiments break new ground. "This is a powerful proof of concept of this technology; it shows that this is feasible and that it could have significant social gains."

And this is only the beginning. "The application of neural technologies to these sorts of problems can generate a quantum leap improvement in the solutions we can bring to them," he says.

Indeed, Rangel says, it is possible to imagine a future in which, instead of a vote on a proposition to fund a new highway, this technology is used to scan a random sample of the people who would benefit from the highway to see whether it's really worth the investment. "It would be an interesting alternative way to decide where to spend the government's money," he notes.

###

In addition to Rangel and Krajbich, other authors on the Science paper, "Using neural measures of economic value to solve the public goods free-rider problem," include Caltech's Colin Camerer, the Robert Kirby Professor of Behavioral Economics, and John Ledyard, the Allen and Lenabelle Davis Professor of Economics and Social Sciences. Their work was funded by grants from the National Science Foundation, the Gordon and Betty Moore Foundation, and the Human Frontier Science Program.
