Daily Science Journal (Jan. 31, 2008) — Arctic marine conditions contribute to an oil spill “response gap” that effectively limits the ability to clean up after a spill.

A new report commissioned by WWF concludes that the only way to avoid the potentially devastating environmental risks is to ensure that no more of the Arctic is opened up to oil development until the response gap is closed.

“The ability to effectively clean up an arctic marine oil spill is a critical component of the risk equation,” said Dr Neil Hamilton, Director of the WWF International Arctic Programme. “The fact that a catastrophic spill might exceed the operating limits of existing oil spill response technologies is a strong argument for a moratorium until the response gap is filled.”


According to the report Oil Spill Response Challenges in Arctic Waters, arctic conditions can affect both the probability that a spill will occur from oil and gas operations and the consequences of such a spill. The same conditions that contribute to oil spill risks (including lack of natural light, extreme cold, moving ice floes, high wind and low visibility) can also make spill response operations extremely difficult or totally ineffective.

“The Arctic offers the highest level of ecological sensitivity and the lowest level of capacity to clean up after an accident,” said James Leaton, Senior Policy Adviser, WWF-UK. “This combination makes it unacceptable to expose the Arctic to an unfettered scramble for oil.”

The report recognizes that significant efforts are ongoing to test and improve spill response technologies for use in arctic conditions. However, until such technologies are field-proven and market-ready, additional prevention and planning measures are required to eliminate oil spill risks during times when response operations are not feasible.

WWF has also called for a mandatory international instrument to regulate shipping in the Arctic, as shipping poses great risks to the Arctic environment. Routing, zero-discharge zones, areas to be avoided and obligations to keep a certain amount of “self-help” oil spill response equipment on board are among the needed measures.


Adapted from materials provided by World Wildlife Fund.




Daily Science Journal (Jan. 31, 2008) — ESA’s Cluster mission has, for the first time, observed the extent of the region that triggers magnetic reconnection, and it is much larger than previously thought. This gives future space missions a much better chance of studying it.

In a plasma (a gas of charged particles), during magnetic reconnection, magnetic field lines of opposite direction break and then reconnect, forming an X-line magnetic topology. The newly reconnected field lines accelerate the plasma away from the X-line. (Credit: Center for Visual Computing, University of California, Riverside)

Space is filled with plasma (a gas composed of ions and electrons, globally neutral) and is threaded by magnetic fields. These magnetic fields store energy which can be released explosively, in a process called magnetic reconnection.

This process plays a key role in numerous astrophysical phenomena: star formation, solar flares and intense aurorae, to name a few. On Earth, magnetic reconnection prevents the efficient production of electricity in controlled fusion reactors, potential sources of electricity for the future.


Schematic of magnetic field lines during reconnection

At the heart of magnetic reconnection is the ‘electron diffusion region’, where reconnection is thought to be triggered. Here, a kink in newly-reconnected magnetic field lines produces large-scale high-velocity jets of plasma.

“Understanding the structure of the diffusion region and its role in controlling the rate at which magnetic energy is converted into particle energy remains a key scientific challenge,” says Dr Michael Shay, University of Delaware, USA.

Until recently, theoretical scientists believed that the electron diffusion region was relatively tiny (width about 2 km, length about 10 km). In the vastness of space, the chance of a spacecraft encountering this region would therefore be exceedingly small.

With increased computational power, simulations showed electron diffusion regions that were a lot more elongated than those seen earlier. It was not possible to judge whether the new finding was real, because the length of the region kept increasing as the simulations became more powerful. Nor was it known whether such a layer would be stable in the real, 3D world.

Comparison between observations and simulation

On 14 January 2003, the four Cluster satellites were crossing the magnetosheath, a turbulent plasma region located just outside Earth’s magnetosphere, when they encountered an electron diffusion region. The observed region was 3000 km long: 300 times longer than the earlier theoretical expectations and four times longer than seen in recent simulations. Nevertheless, the observations strongly support the new simulations.

“These Cluster observations are very significant since they are the first measurements of the length of the electron diffusion region in the space environment. The finding drastically changes the way we understand the physics of reconnection,” noted Dr James Drake, University of Maryland, USA.

“This discovery of a large electron diffusion region gives future ESA and NASA missions a much better chance to study it,” said Tai Phan at the University of California at Berkeley, USA, lead author of the paper on the findings.

Magnetic reconnection simulation

Cluster was able to detect the region based on its high-resolution magnetic field, electric field and ion measurements. But to understand the fundamental physics of the electron diffusion region responsible for reconnection, higher time resolution measurements are needed to resolve the layer.

The four spacecraft of NASA’s Magnetospheric Multi-Scale mission, planned for launch in 2014, are being designed for such measurements. Cross-scale, a mission under study at ESA in collaboration with other space agencies, would use 12 spacecraft to probe the diffusion region, whilst simultaneously measuring the consequences of energy released by reconnection in the surrounding environment.

“With the higher probability of encountering the electron diffusion region, we can be confident that future missions will be able to fully understand magnetic reconnection,” said Dr Philippe Escoubet, ESA’s Cluster and Double Star Project Scientist and Cross-scale Study Scientist.

The findings appear in ‘Evidence for an elongated (> 60 ion skin depths) electron diffusion region during fast magnetic reconnection,’ by T. Phan, J. Drake, M. Shay, F. Mozer and J. Eastwood, published in Physical Review Letters on 21 December 2007.

Adapted from materials provided by European Space Agency.

------------------------------------------------------------------------------

Add-On Article:

Magnetic Fields Get Reconnected In Turbulent Plasma Too, Cluster Reveals

Using measurements from ESA's four Cluster satellites, a study published in Nature Physics presents pioneering experimental evidence that magnetic reconnection also occurs in turbulent 'plasma' around Earth.

This image provides a model of magnetic fields at the Sun's surface using SOHO data, showing irregular magnetic fields (the 'magnetic carpet') in the solar corona (the top layer of the Sun's atmosphere). Small-scale current sheets are likely to form in such a turbulent environment, and reconnection may occur there in a similar fashion to that in Earth's magnetosheath. This could be relevant to a better understanding of the heating of the solar corona. (Credit: Stanford-Lockheed Inst. for Space Research/NASA GSFC)

Magnetic reconnection – a phenomenon by which magnetic field lines get interconnected and reconfigure themselves – is a universal process in space that plays a key role in various astrophysical phenomena such as star formation, solar explosions or the entry of solar material into the Earth's environment. Reconnection has been observed at large-scale boundaries between different plasma environments, such as the boundary between Earth and interplanetary space. Plasma is a gas composed of charged particles.

Irregular behaviour of particle flows and magnetic fields causes plasma turbulence, within which many small-scale boundaries can form; modelling has predicted that reconnection occurs at such boundaries. Thanks to Cluster, however, this is the first time it has been directly observed, opening up new perspectives for better understanding the behaviour of turbulent plasma.

The Earth's magnetic field is our first line of defence against the incessant flow of solar particles, deflecting most of this material around the Earth's magnetosphere, whose outer boundary is a layer called the magnetopause. As at any other planet with a planetary magnetic field (for example Jupiter and Saturn), the solar wind is decelerated from supersonic to subsonic speeds by a shock wave (the 'bow shock') located in front of the magnetopause. The region between the bow shock and the magnetopause is called the magnetosheath.

One of the most turbulent environments in near-Earth space, the terrestrial magnetosheath is an accessible laboratory for studying turbulence in situ, unlike the solar atmosphere or accretion disks. Characterising the properties of the magnetic turbulence in this region is of prime importance to understand its role in fundamental processes such as energy dissipation and particle acceleration.

Observing reconnection at small-scale boundaries in space requires simultaneous measurements by at least four spacecraft flying in close formation. On 27 March 2002, flying with an inter-spacecraft distance of only 100 kilometres, the four Cluster satellites observed reconnection within a very thin current 'sheet', about 100 kilometres across, embedded in the turbulent plasma.

The observations, a challenge for the instruments on board, show that the turbulent plasma is accelerated and heated during the reconnection process. This newly observed type of small-scale reconnection also seems to be associated with the acceleration of particles to energies much higher than their average, which could explain, in part, the creation of high-energy particles by the Sun.

To quote Alessandro Retinò, lead author of this study and PhD student at the Swedish Institute of Space Physics, Uppsala, Sweden: "We found reconnection in one single current sheet, so in such an environment of irregular magnetic fields one might think that reconnection is sporadic, but this is not the case. For this particular magnetosheath crossing, a very large number of other thin current sheets were found where reconnection is very likely to occur, a subject currently under investigation by our team."

This discovery of reconnection in turbulent plasma has significant implications for the study of laboratory and astrophysical plasmas, where both turbulence and reconnection develop and thus where turbulent reconnection is very likely to occur. Possible applications range from the dissipation of magnetic energy in fusion devices on Earth to the understanding of the acceleration of high energy particles in solar explosions called solar flares.

"Magnetic reconnection, turbulence and shocks are three fundamental ingredients of the plasma Universe," says Philippe Escoubet Cluster and Double Star project scientist at ESA. "The detailed understanding of these key processes and their associated multi-scale physics is a challenge for the future of space physics. One of the lessons learned from Cluster is the need for new space missions equipped with instruments of higher sensitivity and better time resolution together with a larger number of satellites."

Adapted from materials provided by European Space Agency.




Daily Science Journal (Jan. 31, 2008) — The High Resolution Stereo Camera (HRSC) on board ESA’s Mars Express has returned striking scenes of the Terby crater on Mars. The region is of great scientific interest as it holds information on the role of water in the history of the planet.

This false-colour image of Terby crater on Mars was derived from three colour channels and the nadir channel of the High Resolution Stereo Camera (HRSC) on board ESA's Mars Express orbiter. (Credit: ESA/DLR/FU Berlin (G. Neukum))

The image data was obtained on 13 April 2007 during orbit 4199, with a ground resolution of approximately 13 m/pixel. The Sun illuminates the scene from the west (from above in the image).

Terby crater lies at approximately 27° south and 74° east, at the northern edge of the Hellas Planitia impact basin in the southern hemisphere of Mars.

The crater, named after the Belgian astronomer Francois J. Terby (1846 – 1911), has a diameter of approximately 170 km. The scene shows a section of a second impact crater in the north.


Eye-catching finger-shaped plateaux extend in the north-south direction, rising up to 2000 m above the surrounding terrain. The relatively old crater was filled with sediments in the past; erosion of these sediments later formed the plateaux.

The flanks of the plateaux clearly exhibit layering of different-coloured material. Differences in colour usually indicate changes in the composition of the material and such layering is called ‘bedding’. Bedding structures are typical of sedimentary rock, which has been deposited either by wind or water. Different rock layers erode differently, forming terraces.

The valleys exhibit gullies, or channels cut in the ground by running liquid, mainly in the northern part of the image. These gullies and the rock-bedding structure indicate that the region has been affected by water.

The sediments in this region are interesting to study because they contain information on the role of water in the history of the planet. This is one of the reasons why Terby crater was originally shortlisted as one of 33 possible landing sites for NASA’s Mars Science Laboratory mission, planned for launch in 2009.

The colour scenes have been derived from the three HRSC colour channels and the nadir channel. The perspective views have been calculated from the digital terrain model derived from the HRSC stereo channels. The 3D anaglyph image was calculated from the nadir channel and one stereo channel; stereoscopic glasses are required for viewing.

Adapted from materials provided by European Space Agency.

----------------------------------------------------------------------------

Add-On Article:

Europe's Eye On Mars: First Spectacular Results From Mars Express

ESA's Mars Express, successfully inserted into orbit around Mars on 25 December 2003, is about to reach its final operating orbit above the poles of the Red Planet. The scientific investigation has just started and the first results already look very promising, as this first close-up image shows.

Picture taken by the High Resolution Stereo Camera (HRSC) on board ESA’s Mars Express orbiter on 14 January 2004 under the responsibility of the Principal Investigator Prof. Gerhard Neukum. It was processed by the Institute for Planetary Research of the German Aerospace Centre (DLR), also involved in the development of the camera, and by the Institute of Geosciences of the Freie Universität Berlin.

Although the seven scientific instruments on board Mars Express are still undergoing a thorough calibration phase, they have already started collecting amazing results. The first high-resolution images and spectra of Mars have already been acquired.

This first spectacular stereoscopic colour picture was taken on 14 January 2004 by the High Resolution Stereo Camera (HRSC) on board ESA's Mars Express, from 275 km above the surface of Mars. The image is available on the ESA Portal at: http://mars.esa.int

The picture shows a portion of a 1700 km long and 65 km wide swath taken in a south-north direction across the Grand Canyon of Mars (Valles Marineris). It is the first image of this size to show the surface of Mars in high resolution (12 metres per pixel), in colour and in 3D. The total area of the image on the Martian surface (top left corner) corresponds to 120 000 km². The lower part of the picture shows the same region in perspective view, as if seen from a low-flying aircraft; this view was generated on a computer from the original image data. The landscape has been shaped predominantly by the erosional action of water: millions of cubic kilometres of rock have been removed, forming the surface features seen now, such as mountain ranges, valleys and mesas.

The HRSC is just one of the instruments to have collected exciting data. To learn more about the very promising beginning to ESA's scientific exploration of Mars, media representatives are invited to attend a press conference on Friday, 23 January 2004, at 11:00 CET at ESA's Space Operations Centre in Darmstadt, Germany, and in video-conference with the other ESA centres.

There, under the auspices of the ESA Council Chair, Germany's Minister for Education and Research, Mrs Edelgard Bulmahn, ESA's Director of the Scientific Programme, Prof. David Southwood, and the Principal Investigators of all instruments on board Mars Express will present the first data and preliminary results.

A spectacular, three-dimensional video sequence featuring famous landmarks on the surface of Mars 'as seen through European eyes' will also be unveiled for the first time on Friday 23 January.

Adapted from materials provided by European Space Agency.




Daily Science Journal (Jan. 31, 2008) — New research shows that people with blue eyes have a single, common ancestor. A team at the University of Copenhagen has tracked down a genetic mutation that took place 6,000 to 10,000 years ago and is the cause of the eye colour of all blue-eyed humans alive on the planet today.


Variation in eye colour from brown to green can be explained entirely by the amount of melanin in the iris, but blue-eyed individuals show only a small degree of variation in the amount of melanin in their eyes. (Credit: iStockphoto/Cristian Ardelean)

What is the genetic mutation?

“Originally, we all had brown eyes,” said Professor Eiberg from the Department of Cellular and Molecular Medicine. “But a genetic mutation affecting the OCA2 gene in our chromosomes resulted in the creation of a ‘switch’, which literally ‘turned off’ the ability to produce brown eyes.” The OCA2 gene codes for the so-called P protein, which is involved in the production of melanin, the pigment that gives colour to our hair, eyes and skin. The “switch”, which is located in the gene adjacent to OCA2, does not turn off the gene entirely, but rather limits its action to reducing the production of melanin in the iris – effectively “diluting” brown eyes to blue. The switch’s effect on OCA2 is therefore very specific. If the OCA2 gene had been completely destroyed or turned off, human beings would have no melanin in their hair, eyes or skin – a condition known as albinism.

Limited genetic variation

Variation in eye colour from brown to green can be explained entirely by the amount of melanin in the iris, but blue-eyed individuals show only a small degree of variation in the amount of melanin in their eyes. “From this we can conclude that all blue-eyed individuals are linked to the same ancestor,” says Professor Eiberg. “They have all inherited the same switch at exactly the same spot in their DNA.” Brown-eyed individuals, by contrast, have considerable individual variation in the area of their DNA that controls melanin production.

Professor Eiberg and his team examined mitochondrial DNA and compared the eye colour of blue-eyed individuals in countries as diverse as Jordan, Denmark and Turkey. His findings are the latest in a decade of genetic research, which began in 1996, when Professor Eiberg first implicated the OCA2 gene as being responsible for eye colour.

Nature shuffles our genes

The mutation of brown eyes to blue represents neither a positive nor a negative mutation. It is one of several mutations, such as those affecting hair colour, baldness, freckles and beauty spots, that neither increase nor reduce a human’s chance of survival. As Professor Eiberg says, “it simply shows that nature is constantly shuffling the human genome, creating a genetic cocktail of human chromosomes and trying out different changes as it does so.”

Adapted from materials provided by University of Copenhagen.




Mercury's Magnetosphere Fends Off Solar Wind

Daily Science Journal (Jan. 31, 2008) — The planet Mercury's magnetic field appears to be strong enough to fend off the harsh solar wind from most of its surface, according to data gathered in part by a University of Michigan instrument onboard NASA's MESSENGER spacecraft.

Departing shots: The top left image was taken when MESSENGER was about 34,000 kilometers (21,000 miles) from Mercury, and the bottom right image was snapped from a distance of about 400,000 kilometers (250,000 miles). Mercury and Earth are the only two terrestrial planets in the solar system with magnetospheres produced by an intrinsic magnetic field. (Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington)

U-M's Fast Imaging Plasma Spectrometer (FIPS) on Jan. 14 took the first direct measurements of Mercury's magnetosphere to determine how the planet interacts with the space environment and the Sun.


The solar wind, a stream of charged particles, fills the entire solar system. It interacts with all the planets, but it bears down especially hard on Mercury, which orbits roughly two-thirds closer to the Sun than Earth does.

Earth's magnetosphere is strong enough to protect us from the solar wind's radiation, but Mercury's magnetic field is comparatively weak.

"From our magnetic measurements, we can tell that Mercury is managing to stand up to a lot of the solar wind and protect the surface of the planet, at least in some spots. Even though the magnetic field was weak, it was enough," said Thomas Zurbuchen, FIPS instrument project leader and a professor in the U-M Department of Atmospheric, Oceanic and Space Science.

Zurbuchen said scientists can tell Mercury is putting up a good fight because instruments detected a layer of much slower-moving magnetospheric plasma around the planet.

It's possible that the magnetosphere shield has holes. Scientists found ions in the magnetosphere that may have been knocked off the surface by the solar wind at the poles, for example. The source and chemical composition of the ions are still unclear, Zurbuchen said. The particles could also be from the planet's thin atmosphere.

"Mercury's magnetosphere is more similar to Earth's than we might have thought," Zurbuchen said.

The spacecraft did find one major difference. Mercury has no Van Allen belts, the ring-shaped regions of energetic particles trapped by Earth's magnetic field.

"We flew through the region they would be in and they just weren't there," Zurbuchen said. "It could be that they're intermittent, but when we were there, they weren't."

Mercury and Earth are the only two terrestrial planets in the solar system with magnetospheres produced by an intrinsic magnetic field.

This was the first of three planned flybys of Mercury. MESSENGER is scheduled to enter orbit in 2011.

Adapted from materials provided by University of Michigan.




Daily Science Journal (Jan. 31, 2008) — The rushing floodwaters in Evan Almighty, the heaving seas of the latter two Pirates of the Caribbean movies and the dragon's flaming breath in Harry Potter and the Goblet of Fire all featured computer-generated fluids in spectacular action. The science behind those splashy thrills will be recognized Feb. 9 with an Academy Award for Ron Fedkiw, associate professor of computer science at Stanford, and two collaborators at the special effects firm Industrial Light and Magic (ILM).

A computer-generated scene shows off the fluid simulation technology developed by computer science Associate Professor Ron Fedkiw, former students, and collaborators at Industrial Light and Magic. (Credit: Frank Losasso, Jerry Talton, Nipun Kwatra, Ron Fedkiw / courtesy of Stanford University)

"The primary work started a few years ago when we developed a system designed for the female liquid terminator in Terminator 3," Fedkiw said. "Almost immediately after that it was used in the first Pirates of the Caribbean movie to simulate the wine that the pirate skeleton was drinking out of the bottle in the moonlight. Things like the sinking ship in Poseidon and the large water whirlpool in Pirates of the Caribbean 3 are good examples of the system in action."


The system, co-developed with ILM scientists Nick Rasmussen and Frank Losasso Petterson (a former doctoral student of Fedkiw's), uses a method of simulating low-viscosity fluids such as water and fire, as in the explosions in Star Wars: Revenge of the Sith.

Contributing to a Star Wars movie was a particular honor for Fedkiw.

"George Lucas made Star Wars and, well, that changed the world for a lot of us," he said. "It's amazing what a movie can do to a civilization. I can only be grateful that he made three more of them and that I started working with ILM just in time to get a screen credit on the last one."

Lifelike liquids

Computer graphics experts typically have used particles and complex blobs to represent water, but these can give rise to unrealistically lumpy or grainy surfaces. Alternatively, they have used a technique called "the level set method" that gives a smooth surface representation, but some water is "under-resolved" and simply disappears when it breaks down into small volumes, as in a crashing wave.

The key innovation behind Fedkiw and former doctoral student Douglas Enright's novel "particle level set method" was to mix the use of particles and level sets so that studios could maintain smooth surfaces wherever possible and still keep all the fluid via the particle representation.

"As an added bonus, the method automatically generates spray particles and bubbles in under-resolved regions where the level set [method] loses mass and volume," Fedkiw said.

Fedkiw gives a lot of the credit to his colleagues for the system used to make the movies: "Nick made the system and Frank made it rock."

The effect's power is clearly evident in a movie on Fedkiw's website. There, gigantic waves crash against a lighthouse and produce huge sprays. In addition to incorporating the particle level set method, the rendering also uses an additional method to simulate how the spray interacts with itself and the surrounding water.

Such integrations are indicative of a future direction of Fedkiw's computer graphics research.

"This year we built a system that allows two-way coupling between rigid and deformable bodies, so we can fully physically simulate bones moving around under flesh—interacting with the environment," he said. "Another main result is a two-way, solid-fluid coupling method that can be used with it, so the environment can be water; that is, we're going to be simulating people swimming."

Of course the more immediate future calls for a trip to the Beverly Wilshire Hotel in Beverly Hills for the Scientific and Technical Academy Awards presentation Feb. 9. Fedkiw says he'll probably go to pick up his plaque.

"After wearing sandals for the last two years—even in the Lake Tahoe snow— it's going to be tough to go black tie," he said.

Adapted from materials provided by Stanford University.





Daily Science Journal (Jan. 31, 2008) — A new medical imager for detecting and guiding the biopsy of suspicious breast cancer lesions is capable of spotting tumors that are half the size of the smallest ones detected by standard imaging systems, according to a new study.

The results of initial testing of the PEM/PET system, designed and constructed by scientists at the Department of Energy's Thomas Jefferson National Accelerator Facility, West Virginia University School of Medicine and the University of Maryland School of Medicine, will be published in the journal Physics in Medicine and Biology on Feb. 7.

"This is the most-important and most-difficult imager we've developed so far," Stan Majewski, Jefferson Lab Radiation Detector and Medical Imaging Group leader said. "It is another example of nuclear physics detector technology that we have put a lot of time and effort into adapting for the common good."


Testing of the new imager was led by Ray Raylman, a professor of radiology and vice chair of Radiology Research at WVU and lead author on the study. Raylman's team imaged various radioactive sources to test the resolution of the system.

"We had good performance characteristics, with image resolution below two millimeters. In regular PET, the image resolution is over five millimeters, so we're quite a bit better than that," Raylman said. In addition, the initial tests revealed that the PEM/PET system can complete an image and biopsy in about the same amount of time as a traditional biopsy.

"The ability of the device to do biopsy is probably one of its most unique characteristics. There are other breast imagers, but none that are built specifically to do biopsy as well as imaging," Raylman said.

The system features components designed for imaging the unique contours of the breast. Known as positron emission mammography (PEM), this imaging capability enables users to attain high-resolution, three-dimensional PET images of the breast. The PEM/PET system images the breast with a movable array of two pairs of flat detection heads.

If a suspected lesion is found, a single pair of heads is then used to guide a needle biopsy of the lesion; the biopsy is performed with a person-controlled robot arm. Raylman is the author of the concept and has a patent on this idea. The system is especially useful in imaging tumors in women who have indeterminate mammograms because of dense or fibroglandular breasts.

The Jefferson Lab Radiation Detector and Medical Imaging Group, with a group member now affiliated with the University of Maryland School of Medicine, developed the detector heads with the on-board electronics, the data acquisition readout and the image reconstruction software. The imaging device's gantry and the motion-control software were developed by West Virginia University researchers.

The next steps for the team include minor improvements in the detector systems and image reconstruction software and the addition of components for taking x-ray computed tomography (CT) scans. Initial clinical trials are planned after completion of system testing.

Adapted from materials provided by DOE/Thomas Jefferson National Accelerator Facility.




Daily Science Journal (Jan. 30, 2008) — Carbon nanotubes have a sound future in the electronics industry, say researchers who built the world's first all-nanotube transistor radios to prove it.

Schematic exploded view of a radio-frequency transistor that uses parallel, aligned arrays of carbon nanotubes for the semiconductor. (Credit: Images courtesy John Rogers)

The nanotube radios, in which nanotube devices provide all of the active functionality in the devices, represent "important first steps toward the practical implementation of carbon-nanotube materials into high-speed analog electronics and other related applications," said John Rogers, a Founder Professor of Materials Science and Engineering at the University of Illinois.


Rogers is a corresponding author of a paper* that describes the design, fabrication and performance of the nanotube-transistor radios, which were achieved in a close collaboration with radio frequency electronics engineers at Northrop Grumman Electronics Systems in Linthicum, Md.

"These results indicate that nanotubes might have an important role to play in high-speed analog electronics, where benchmarking studies against silicon indicate significant advantages in comparably scaled devices, together with capabilities that might complement compound semiconductors," said Rogers, who also is a researcher at the Beckman Institute and at the university's Frederick Seitz Materials Research Laboratory.

Practical nanotube devices and circuits are now possible, thanks to a novel growth technique developed by Rogers and colleagues at the U. of I., Lehigh and Purdue universities, and described last year in the journal Nature Nanotechnology.

The growth technique produces linear, horizontally aligned arrays of hundreds of thousands of carbon nanotubes that function collectively as a thin-film semiconductor material in which charge moves independently through each of the nanotubes. The arrays can be integrated into electronic devices and circuits by conventional chip-processing techniques.

"The ability to grow these densely packed horizontal arrays of nanotubes to produce high current outputs, and the ability to manufacture the arrays reliably and in large quantities, allows us to build circuits and transistors with high performance and ask the next question," Rogers said. "That question is: 'What type of electronics is the most sensible place to explore applications of nanotubes"' Our results suggest that analog RF (radio frequency) represents one such area."

As a demonstration of the growth technique and today's nanotube analog potential, Rogers and collaborators at the U. of I. and Northrop Grumman fabricated nanotube transistor radios, in which nanotube devices provided all of the key functions.

The radios were based on a heterodyne receiver design consisting of four capacitively coupled stages: an active resonant antenna, two radio-frequency amplifiers, and an audio amplifier, all based on nanotube devices. Headphones plugged directly into the output of a nanotube transistor. In all, seven nanotube transistors were incorporated into the design of each radio.

In one test, the researchers tuned one of the nanotube-transistor radios to WBAL-AM (1090) in Baltimore, to pick up a traffic report.

"We were not trying to make the world's tiniest radios," Rogers said. "The nanotube radios are a demonstration, an important milestone toward building the technology into a form that ultimately would be commercially competitive with entrenched approaches."

*The paper has been accepted for publication in the Proceedings of the National Academy of Sciences and is to be published in PNAS Online Early Edition in the first week of February 2008.

The work was funded by the National Science Foundation and the U.S. Department of Energy.

Adapted from materials provided by University of Illinois at Urbana-Champaign.





Daily Science Journal (Jan. 30, 2008) — The potential of carbon nanotubes to diagnose and treat brain tumors is being explored through a partnership between NASA's Jet Propulsion Laboratory, Pasadena, Calif., and City of Hope, a leading cancer research and treatment center in Duarte, Calif.

Behnam Badie, M.D., director of the Department of Neurosurgery and the Brain Tumor program at City of Hope, performs a minimally invasive procedure to surgically remove a pituitary tumor. Nanotube technology may help in the development of new treatments that would require only minimally invasive procedures no matter the location of the brain tumor. (Credit: City of Hope)

Nanotechnology may help revolutionize medicine in the future with its promise to play a role in selective cancer therapy. City of Hope researchers hope to boost the brain's own immune response against tumors by delivering cancer-fighting agents via nanotubes. A nanotube is about 50,000 times narrower than a human hair, but its length can extend up to several centimeters.


If nanotube technology can be effectively applied to brain tumors, it might also be used to treat stroke, trauma, neurodegenerative disorders and other disease processes in the brain, said Dr. Behnam Badie, City of Hope's director of neurosurgery and of its brain tumor program.

"I'm very optimistic of how this nanotechnology will work out," he said. "We are hoping to begin testing in humans in about five years, and we have ideas about where to go next."

The Nano and Micro Systems Group at JPL, which has been researching nanotubes since about 2000, creates these tiny, cylindrical multi-walled carbon tubes for City of Hope.

City of Hope researchers, who began their quest in 2006, found good results: The nanotubes, which they used on mice, were non-toxic in brain cells, did not change cell reproduction and were capable of carrying DNA and siRNA, two types of molecules that encode genetic information.

JPL's Nano and Micro Systems Group grows the nanotubes on silicon strips a few square millimeters in area. The growth process forms them into hollow tubes as if by rolling sheets of graphite-like carbon.

Carbon nanotubes are extremely strong, flexible, heat-resistant, and have very sharp tips. Consequently, JPL uses nanotubes as field-emission cathodes -- vehicles that help produce electrons -- for various space applications such as x-ray and mass spectroscopy instruments, vacuum microelectronics and high-frequency communications.

"Nanotubes are important for miniaturizing spectroscopic instruments for space applications, developing extreme environment electronics, as well as for remote sensing," said Harish Manohara, the technical group supervisor for JPL's Nano and Micro Systems Group.

Nanotubes are a fairly new innovation, so they are not yet routinely used in current NASA missions, he added. However, they may be used in gas-analysis or mineralogical instruments for future missions to Mars, Venus and the Jupiter system.

JPL's collaboration with City of Hope began last year, after Manohara, Badie and Dr. Babak Kateb, City of Hope's former director of research and development in the brain tumor program, discussed using nanostructures to better diagnose and treat brain cancer. Badie said his team's nanomedical research continues, and the next goal will be to functionalize and attach inhibitory RNA to the nanotubes and deliver it to specific areas of the brain.

The JPL and City of Hope teams published the results of the study earlier this year in the journal NeuroImage.

Badie says that JPL's contribution to City of Hope's nanomedicine research has been invaluable.

"The fact that we can get pristine and really clean nanotubes from Manohara's department is unique," he said. "The fact that we are both collaborating for biological purposes is also really unique."

The collaboration between JPL and City of Hope is conducted under NASA's Innovative Partnership Program, designed to bring benefits of the space program to the public.

Adapted from materials provided by NASA/Jet Propulsion Laboratory.






Daily Science Journal (Jan. 30, 2008) — For thousands of years, human beings have relied on commodity barter as an essential aspect of their lives. It is the behavior that allows specialized professions, as one individual gives up some of what he has reaped to exchange with another for something different. In this way, both individuals end up better off. Despite the importance of this behavior, little is known about how barter evolved and developed.

Researchers examined the circumstances under which chimpanzees, our closest relatives, will exchange one inherently valuable commodity (an apple slice) for another (a grape), which is what early humans must have somehow learned to do. (Credit: iStockphoto/Nicola Stratford)


This study is the first to examine the circumstances under which chimpanzees, our closest relatives, will exchange one inherently valuable commodity (an apple slice) for another (a grape), which is what early humans must have somehow learned to do. Economists believe that commodity barter is one of the most basic precursors to economic specialization, which we observe in humans but not in other primate species. First of all, the researchers found that chimpanzees often did not spontaneously barter food items, but needed to be trained to engage in commodity barter. Moreover, even after the chimpanzees had been trained to barter with reliable human trading partners, they were reluctant to engage in extreme deals in which a very good commodity (apple slices) had to be sacrificed in order to get an even more preferred commodity (grapes).

Prior animal behavior studies have largely examined chimpanzees' willingness to trade tokens for valuable commodities. Tokens do not exist in nature, and lack inherent value, so a chimpanzee's willingness to trade a token for a valuable commodity, such as a grape, may say little about chimpanzee behavior outside the laboratory.

In a series of experiments, chimpanzees at two different facilities were given items of food and then offered the chance to exchange them for other food items. A collaboration of researchers from Georgia State University, the University of California, Los Angeles, and the U.T. M.D. Anderson Cancer Center found that the chimpanzees, once they were trained, were willing to barter food with humans, but only if they could gain something significantly better -- say, giving up carrots for much-preferred grapes. Otherwise, they preferred to keep what they had.

The observed chimpanzee behavior may be reasonable because chimpanzees lack social systems to enforce deals and, as a society, to punish an individual that cheats its trading partner by running off with both commodities. Also, because they lack property ownership norms, chimpanzees in nature do not store property and thus would have little opportunity to trade commodities.

Nevertheless, as prior research has demonstrated, they do possess highly active service economies. In their natural environment, only current possessions are "owned," and the threat of losing what one has is very high, so chimpanzees frequently possess nothing to trade.

"This reluctance to trade appears to be deeply ingrained in the chimpanzee psyche," said one of the lead authors, Sarah Brosnan, an assistant professor of psychology at Georgia State University. "They're perfectly capable of barter, but they don't do so in a way which will maximize their outcomes."

The other lead author, Professor Mark F. Grady, Director of UCLA's Center for Law and Economics, commented: "I believe that chimpanzees are reluctant to barter commodities mainly because they lack effective ownership norms. These norms are especially costly to enforce, and for this species the game has evidently not been worth the candle. Fortunately, services can be protected without ownership norms, so chimpanzees can and do trade services with each other. As chimpanzee societies demonstrate, however, a service economy does not lead to the same degree of economic specialization that we observe among humans."

The research could additionally shed light on the instances in which humans also don't maximize their gains, Brosnan said.

The laboratory experiments for this study were conducted at Georgia State's Language Research Center and the University of Texas M.D. Anderson Cancer Center, and much of the conceptual work was done at UCLA's Center for Law and Economics.

Citation: Brosnan SF, Grady MF, Lambeth SP, Schapiro SJ, Beran MJ (2008) Chimpanzee Autarky. PLoS One 3(1): e1518. doi:10.1371/journal.pone.0001518 http://www.plosone.org/doi/pone.0001518

Adapted from materials provided by Public Library of Science, via EurekAlert!, a service of AAAS.




Daily Science Journal (Jan. 30, 2008) — Using mice as models, researchers at the Max Planck Institute for Evolutionary Anthropology traced some of the differences between humans and chimpanzees to differences in our diet.

Humans consume a distinct diet compared to other apes, like this chimpanzee eating an apple. Not only do we consume much more meat and fat, but we also cook our food. It has been hypothesized that adopting these dietary patterns played a key role during human evolution. (Credit: iStockphoto/Stephanie Swartz)


Humans consume a distinct diet compared to other apes. Not only do we consume much more meat and fat, but we also cook our food. It has been hypothesized that adopting these dietary patterns played a key role during human evolution. However, to date, the influence of diet on the physiological and genetic differences between humans and other apes has not been widely examined.

By feeding laboratory mice different human and chimp diets over a mere two week period, researchers at the Max-Planck-Institute for Evolutionary Anthropology in Leipzig, Germany, were able to reconstruct some of the physiological and genetic differences observed between humans and chimpanzees.

The researchers fed laboratory mice one of three diets: a raw fruit and vegetable diet fed to chimpanzees in zoos, a human diet consisting of food served at the Institute cafeteria, or a pure fast food menu from the local McDonald's™ (the latter caused the mice to gain significant weight). The chimpanzee diet was clearly distinct from the two human diets in its effect on the liver: thousands of differences were observed in the levels at which genes were expressed in the mouse livers. No such differences were observed in the mouse brains. A significant fraction of the genes that changed in the mouse livers had previously been observed to differ between humans and chimpanzees. This indicates that the differences observed in these particular genes might be caused by the difference in human and chimpanzee diets.

Furthermore, the diet-related genes also appear to have evolved faster than other genes - protein and promoter sequences of these genes changed faster than expected, possibly because of adaptation to new diets.

Citation: Somel M, Creely H, Franz H, Mueller U, Lachmann M, et al (2008) Human and Chimpanzee Gene Expression Differences Replicated in Mice Fed Different Diets. PLoS One 3(1): e1504. doi:10.1371/journal.pone.0001504 http://www.plosone.org/doi/pone.0001504

Adapted from materials provided by Public Library of Science, via EurekAlert!, a service of AAAS.




Daily Science Journal (Jan. 29, 2008) — The Fertile Crescent of the Middle East has long been identified as a "cradle of civilization" for humans. In a new genetic study, researchers at the University of California, Davis, have concluded that all ancestral roads for the modern day domestic cat also lead back to the same locale.

Cats, with their penchant for hunting mice, rats and other rodents, became useful companions as people domesticated, grew and stored wild grains and grasses. Eventually, cats also became pets but were never fully domesticated. Even today, most domestic cats remain self-sufficient, if necessary, and continue to be efficient hunters, even when provided with food. (Credit: Michele Hogan)

Findings of the study, involving more than 11,000 cats, are reported in the January issue of the journal Genomics.


"This study confirms earlier research suggesting that the domestication of the cat started in the Fertile Crescent region," said Monika Lipinski, lead researcher on the study and a doctoral candidate in the School of Veterinary Medicine. "It also provides a warning for modern cat fanciers to make sure they maintain a broad genetic base as they further develop their breeds."

Leslie Lyons, an authority on cat genetics and principal investigator on this study, said: "More than 200 genetic disorders have been identified in modern cats, and many are found in pure breeds. We hope that cat breeders will use the genetic information uncovered by this study to develop efficient breed-management plans and avoid introducing genetically linked health problems into their breeds."

History of the modern cat

Earlier archaeological evidence and research on the evolutionary history of cats has suggested that domestication of the cat originated about 5,000 to 8,000 years ago in the Fertile Crescent, a region located today in the Middle East. This is the area around the eastern end of the Mediterranean, stretching from Turkey to northern Africa and eastward to modern day Iraq and Iran. This domestication of the cat occurred as humans transitioned from nomadic herding to raising crops and livestock.

Cats, with their penchant for hunting mice, rats and other rodents, became useful companions as people domesticated, grew and stored wild grains and grasses. Eventually, cats also became pets but were never fully domesticated. Even today, most domestic cats remain self-sufficient, if necessary, and continue to be efficient hunters, even when provided with food.

Cats and their gene pools spread rapidly around the world as ancient civilizations developed trade routes. Unlike other domesticated species, there has been little effort to improve on the cat for functional purposes. Instead, development of cat breeds has been driven more by preferences for certain aesthetic qualities like coat color and color patterns.

Today, there are 50 recognized cat breeds. Of that total, 16 breeds are thought to be "natural breeds" that occurred in specific regions, while the remaining breeds were developed during the past 50 years.

DNA of 11,000 cats

In this study, the UC Davis research team focused on:
  • tracing the movement of the modern cat through the ancient world and to the Americas;
  • measuring changes in genetic diversity as cats dispersed throughout the world; and
  • measuring any loss of genetic diversity that might have occurred in the development of the older or more contemporary breeds.

The researchers collected samples of cheek cells from more than 11,000 cats. These cats represented 17 populations of randomly bred cats from Europe, the Mediterranean, Asia, Africa and the Americas, as well as 22 recognized breeds.

DNA samples of most breeds were obtained at cat shows or were sent in upon the lab's request by cat owners in the United States. The study was assisted by a host of collaborators from throughout the world. DNA, or deoxyribonucleic acid, is the hereditary material in humans, other animals and plants. It carries the instructions or blueprint for making all the structures and materials that the organism needs to function.

Genetic markers called "microsatellite markers," commonly used for DNA profiling, were used to determine the genetic relationships of cat breeds, their geographic origins and the levels of genetic loss that have resulted from inbreeding.
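One standard way to summarize the diversity such markers capture is expected heterozygosity, He = 1 - sum(p_i^2) over the allele frequencies p_i at a locus, averaged across loci; lower values in a breed than in random-bred cats indicate a loss of diversity. The sketch below is a generic illustration of that calculation using made-up genotypes, not the study's own analysis pipeline.

    from collections import Counter

    def expected_heterozygosity(genotypes):
        """Gene diversity He = 1 - sum(p_i^2) at one microsatellite locus.
        genotypes: list of (allele_a, allele_b) pairs, alleles given as repeat counts."""
        alleles = [a for pair in genotypes for a in pair]
        total = len(alleles)
        return 1.0 - sum((n / total) ** 2 for n in Counter(alleles).values())

    # hypothetical genotypes at one locus in two groups of cats
    random_bred = [(10, 12), (10, 14), (12, 16), (14, 16), (11, 13)]
    pure_breed  = [(10, 10), (10, 12), (10, 10), (12, 12), (10, 12)]

    print("random-bred He:", round(expected_heterozygosity(random_bred), 2))  # ~0.82
    print("pure-breed He: ", round(expected_heterozygosity(pure_breed), 2))   # ~0.48
    # the lower value in the pure breed illustrates the kind of diversity loss measured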

Findings

From the DNA analysis, the researchers found that the cats were genetically clustered in four groups that corresponded with the regions of Europe, the Mediterranean basin, east Africa and Asia.

They discovered that randomly bred cats in the Americas were genetically similar to randomly bred cats from Western Europe. They also found that the Maine coon and American shorthair -- two breeds that originated in the United States -- were genetically similar to the seven Western European breeds. This suggests that cats brought to the New World by European settlers have not had sufficient time to develop significant genetic differentiation from their Western European ancestors.

The study yielded many interesting breed-specific findings. For example, the researchers found that the Persian breed, perhaps the oldest recognized pure breed, was not genetically associated with randomly bred cat populations from the Near East, but rather was more closely associated with randomly bred cats of Western Europe.

In addition, the researchers found that, of the Asian cat breeds, only the Japanese bobtail was genetically clustered with Western cats, although it did retain some Asian influence.

Cats from the Mediterranean region were found to be genetically uniform, perhaps a result of the constant movement of ships and caravans during the early era of the cat's domestication, the researchers suggested.

Lesson for cat breeders

The study found that genetic diversity remained surprisingly broad among cats from various parts of the world. However, the data indicated that there was some loss of diversity associated even with the long-term development of foundation cat breeds -- those breeds that provided the genetic basis from which modern pure breeds were developed.

The researchers note that, given the relatively short time span during which modern breeds are emerging, cat breeders should proceed cautiously as they develop their breeds, making sure to maintain a broad genetic base that will minimize introduction of genetically based health problems.

Funding for this study was provided by the National Institutes of Health, the Winn Feline Foundation and the George and Phyllis Miller Feline Health Fund. Also supporting the study were the Center for Companion Animal Health and the Koret Center of Veterinary Genetics, both within the UC Davis School of Veterinary Medicine.

Adapted from materials provided by University of California - Davis.





Daily Science Journal (Jan. 29, 2008) — A wave of new NASA research on tsunamis has yielded an innovative method to improve existing tsunami warning systems, and a potentially groundbreaking new theory on the source of the December 2004 Indian Ocean tsunami.

Using GPS data (purple arrows) to measure ground displacements, scientists replicated the December 2004 Indian Ocean tsunami, whose crests and troughs are shown here in reds and blues, respectively. The research showed GPS data can be used to reliably estimate a tsunami's destructive potential within minutes. (Credit: NASA/JPL)

In one study, published last fall in Geophysical Research Letters, researcher Y. Tony Song of NASA's Jet Propulsion Laboratory, Pasadena, Calif., demonstrated that real-time data from NASA's network of global positioning system (GPS) stations can detect ground motions preceding tsunamis and reliably estimate a tsunami's destructive potential within minutes, well before it reaches coastal areas. The method could lead to development of more reliable global tsunami warning systems, saving lives and reducing false alarms.


Conventional tsunami warning systems rely on estimates of an earthquake's magnitude to determine whether a large tsunami will be generated. Earthquake magnitude is not always a reliable indicator of tsunami potential, however. The 2004 Indian Ocean quake generated a huge tsunami, while the 2005 Nias (Indonesia) quake did not, even though both had almost the same magnitude from initial estimates. Between 2005 and 2007, five false tsunami alarms were issued worldwide. Such alarms have negative societal and economic effects.

Song's method estimates the energy an undersea earthquake transfers to the ocean to generate a tsunami by using data from coastal GPS stations near the epicenter. With these data, ocean floor displacements caused by the earthquake can be inferred. Tsunamis typically originate at undersea boundaries of tectonic plates near the edges of continents.

"Tsunamis can travel as fast as jet planes, so rapid assessment following quakes is vital to mitigate their hazard," said Ichiro Fukumori, a JPL oceanographer not involved in the study. "Song and his colleagues have demonstrated that GPS technology can help improve both the speed and accuracy of such analyses."

Song's method works as follows: an earthquake's epicenter is located using seismometer data. GPS displacement data from stations near the epicenter are then gathered to derive seafloor motions. Based upon these data, local topography data and new theoretical developments, a "tsunami scale" measurement from one to 10 is generated, much like the Richter Scale used for earthquakes. Song proposes using the scale to distinguish earthquakes capable of generating destructive tsunamis from those unlikely to do so.
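A rough sketch of that pipeline's shape is given below, purely for illustration: the station-selection radius, the energy proxy built from the GPS displacements and the logarithmic mapping onto a 1-10 scale are all assumptions of this sketch, not Song's actual model.

    import math

    def distance_km(a, b):
        # rough great-circle distance between (lat, lon) points, fine for a sketch
        lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
        c = (math.sin(lat1) * math.sin(lat2)
             + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
        return 6371.0 * math.acos(max(-1.0, min(1.0, c)))

    def tsunami_scale(stations, epicenter, radius_km=500.0):
        """Map GPS-derived coastal displacements to an illustrative 1-10 number."""
        near = [s for s in stations
                if distance_km((s["lat"], s["lon"]), epicenter) <= radius_km]
        if not near:
            return None
        # crude proxy for energy imparted to the ocean: mean squared displacement,
        # horizontal (east/north) and vertical (up) components, in m^2
        proxy = sum(s["de"]**2 + s["dn"]**2 + s["du"]**2 for s in near) / len(near)
        # log-compress onto a 1-10 scale; the offset and clamp are made up here
        return max(1, min(10, round(5 + math.log10(proxy + 1e-6))))

    # hypothetical coastal GPS readings (displacements in metres) near an epicenter
    stations = [
        {"lat": 2.5, "lon": 96.0, "de": 1.8, "dn": -0.9, "du": 0.2},
        {"lat": 3.1, "lon": 95.5, "de": 2.4, "dn": -1.1, "du": 0.3},
        {"lat": 4.0, "lon": 95.0, "de": 1.2, "dn": -0.6, "du": 0.1},
    ]
    print(tsunami_scale(stations, epicenter=(3.3, 95.8)))   # prints 6 for this input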

To demonstrate his methodology on real earthquake-tsunamis, Song examined three historical tsunamis with well-documented ground motion measurements and tsunami observations: Alaska in 1964; the Indian Ocean in 2004; and Nias Island, Indonesia in 2005. His method successfully replicated all three. The data compared favorably with conventional seismic solutions that usually take hours or days to calculate.

Song said many coastal GPS stations are already in operation, measuring ground motions near earthquake faults in real time once every few seconds. "A coastal GPS network established and combined with the existing International GPS Service global sites could provide a more reliable global tsunami warning system than those available today," he said.

The theory behind the GPS study was published in the December 20 issue of Ocean Modelling. Song and his team from JPL; the California Institute of Technology, Pasadena, Calif.; the University of California, Santa Barbara; and Ohio State University, Columbus, Ohio, theorized that most of the height and energy generated by the 2004 Indian Ocean tsunami resulted from horizontal, not vertical, faulting motions. The study uses a 3-D earthquake-tsunami model based on seismograph and GPS data to explain how the fault's horizontal motions might be the major cause of the tsunami's genesis.

Scientists have long believed that tsunamis form from vertical deformation of the seafloor during undersea earthquakes. However, seismograph and GPS data show such deformation from the 2004 Sumatra earthquake was too small to generate the powerful tsunami that ensued. Song's team found that horizontal forces were responsible for two-thirds of the tsunami's height, as observed by three satellites (NASA's Jason, the U.S. Navy's Geosat Follow-On and the European Space Agency's Environmental Satellite), and generated five times more energy than the earthquake's vertical displacements. The horizontal forces also best explain the way the tsunami spread out across the Indian Ocean, and the same mechanism was found to explain the data observed from the 2005 Nias earthquake and tsunami.

Co-author C.K. Shum of Ohio State University said the study suggests horizontal faulting motions play a much more important role in tsunami generation than previously believed. "If this is found to be true for other tsunamis, we may have to revise some early views on how tsunamis are formed and where mega tsunamis are likely to happen in the future," he said.

Adapted from materials provided by NASA/Jet Propulsion Laboratory.




Daily Science Journal (Jan. 29, 2008) — Researchers at Purdue University are working with the state of Indiana to develop a system that would use a network of cell phones to detect and track radiation to help prevent terrorist attacks with radiological "dirty bombs" and nuclear weapons.

Purdue physics professor Ephraim Fischbach, at right, and nuclear engineer Jere Jenkins review radiation-tracking data as part of research to develop a system that would use a network of cell phones to detect and track radiation. Such a system could help prevent terrorist attacks with radiological "dirty bombs" and nuclear weapons by blanketing the nation with millions of cell phones equipped with radiation sensors able to detect even light residues of radioactive material. Because cell phones already contain global positioning locators, the network of phones would serve as a tracking system. (Credit: Purdue News Service photo/David Umberger)

Such a system could blanket the nation with millions of cell phones equipped with radiation sensors able to detect even light residues of radioactive material. Because cell phones already contain global positioning locators, the network of phones would serve as a tracking system, said physics professor Ephraim Fischbach. Fischbach is working with Jere Jenkins, director of Purdue's radiation laboratories within the School of Nuclear Engineering.


"It's the ubiquitous nature of cell phones and other portable electronic devices that give this system its power," Fischbach said. "It's meant to be small, cheap and eventually built into laptops, personal digital assistants and cell phones."

The system was developed by Andrew Longman, a consulting instrumentation scientist, who wrote the software and then worked with Purdue researchers to integrate it with radiation detectors and cell phones. Cellular data air time was provided by AT&T.

The research has been funded by the Indiana Department of Transportation through the Joint Transportation Research Program and School of Civil Engineering at Purdue.

"The likely targets of a potential terrorist attack would be big cities with concentrated populations, and a system like this would make it very difficult for someone to go undetected with a radiological dirty bomb in such an area," said Longman, who also is Purdue alumnus. "The more people are walking around with cell phones and PDAs, the easier it would be to detect and catch the perpetrator. We are asking the public to push for this."

Tiny solid-state radiation sensors are commercially available. The detection system would require additional circuitry and would not add significant bulk to portable electronic products, Fischbach said.

The technology is unlike any other system, particularly because the software can work with a variety of sensor types, he said.

"Cell phones today also function as Internet computers that can report their locations and data to their towers in real time," Fischbach said. "So this system would use the same process to send an extra signal to a home station. The software can uncover information from this data and evaluate the levels of radiation."

The researchers tested the system in November, demonstrating that it is capable of detecting a weak radiation source 15 feet from the sensors.

"We set up a test source on campus, and people randomly walked around carrying these detectors," Jenkins said. "The test was extremely safe because we used a very weak, sealed radiation source, and we went through all of the necessary approval processes required for radiological safety. This was a source much weaker than you would see with a radiological dirty bomb."

Officials from the Indiana Department of Transportation participated in the test.

"The threat from a radiological dirty bomb is significant, especially in metropolitan areas that have dense populations," said Barry Partridge, director of INDOT's Division of Research and Development.

Long before the sensors would detect significant radiation, the system would send data to a receiving center.

"The sensors don't really perform the detection task individually," Fischbach said. "The collective action of the sensors, combined with the software analysis, detects the source. The system would transmit signals to a data center, and the data center would transmit information to authorities without alerting the person carrying the phone. Say a car is transporting radioactive material for a bomb, and that car is driving down Meridian Street in Indianapolis or Fifth Avenue in New York. As the car passes people, their cell phones individually would send signals to a command center, allowing authorities to track the source."

The signal grows weaker with increasing distance from the source, and the software is able to use the data from many cell phones to pinpoint the location of the radiation source.
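
One simple way to illustrate that idea is to fit an inverse-square falloff to readings from several phones. The grid-search sketch below does exactly that with hypothetical readings; it is not the Purdue analysis software.

    # Illustrative sketch: estimate a source location by assuming the measured
    # count rate falls off roughly as 1/r^2 from the source. Readings and grid
    # bounds are hypothetical; this is not the Purdue analysis software.

    def predicted_rate(strength, sx, sy, x, y):
        r2 = (x - sx) ** 2 + (y - sy) ** 2 + 1.0  # +1 m^2 avoids dividing by zero
        return strength / r2

    def locate_source(readings, grid_min=0.0, grid_max=100.0, step=1.0):
        """Grid-search the (x, y, strength) that best matches the readings in a
        least-squares sense."""
        best = None
        steps = int((grid_max - grid_min) / step) + 1
        grid = [grid_min + i * step for i in range(steps)]
        for sx in grid:
            for sy in grid:
                # For a fixed location, the best-fit strength has a closed form.
                num = sum(r["rate"] * predicted_rate(1.0, sx, sy, r["x"], r["y"])
                          for r in readings)
                den = sum(predicted_rate(1.0, sx, sy, r["x"], r["y"]) ** 2
                          for r in readings)
                strength = num / den if den else 0.0
                err = sum((r["rate"] - predicted_rate(strength, sx, sy,
                                                      r["x"], r["y"])) ** 2
                          for r in readings)
                if best is None or err < best[0]:
                    best = (err, sx, sy, strength)
        return best[1], best[2], best[3]

    # Hypothetical readings (positions in metres, rates in counts per second).
    readings = [
        {"x": 10, "y": 20, "rate": 2.5},
        {"x": 40, "y": 25, "rate": 9.0},
        {"x": 55, "y": 60, "rate": 1.1},
        {"x": 30, "y": 40, "rate": 3.0},
    ]
    print(locate_source(readings))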

"So the system would know that you were getting closer or farther from something hot," Jenkins said. "If I had handled radioactive material and you were sitting near me at a restaurant, this system would be sensitive enough to detect the residue. "

The Purdue Research Foundation owns patents associated with the technology licensed through the Office of Technology Commercialization.

In addition to detecting radiological dirty bombs designed to scatter hazardous radioactive materials over an area, the system also could be used to detect nuclear weapons, which create a nuclear chain reaction that causes a powerful explosion. The system also could be used to detect spills of radioactive materials.

"It's impossible to completely shield a weapon's radioactive material without making the device too heavy to transport," Jenkins said.

The system could be trained to ignore known radiation sources, such as hospitals, and radiation from certain common items, such as bananas, which contain a radioactive isotope of potassium.

"The radiological dirty bomb or a suitcase nuclear weapon is going to give off higher levels of radiation than those background sources," Fischbach said. "The system would be sensitive enough to detect these tiny levels of radiation, but it would be smart enough to discern which sources posed potential threats and which are harmless."

The team is working with Karen White, senior technology manager at the Purdue Research Foundation, to commercialize the system. For more information on licensing the cell phone sensor technology, contact White at (765) 494-2609.

Adapted from materials provided by Purdue University.






Daily Science Journal (Jan. 29, 2008) — A new computer-based text-searching tool developed by UT Southwestern Medical Center researchers automatically -- and quickly -- compares multiple documents in a database for similarities, providing a more efficient method to carry out literature searches, as well as offering scientific journal editors a new tool to thwart questionable publication practices.

Dr. Harold "Skip" Garner. (Credit: UT Southwestern Medical Center)

The eTBLAST computer program is efficient at flagging publications that are highly similar, said Dr. Harold "Skip" Garner, a professor of biochemistry and internal medicine at UT Southwestern who developed the computer code along with his colleagues. Not only does the code identify duplication of key words, but it also compares word proximity and order, among other variables.
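
The snippet below gives a rough feel for that kind of scoring by combining shared-vocabulary overlap with shared word-order (bigram) overlap. The measures and their equal weighting are illustrative assumptions; this is not the eTBLAST algorithm itself.

    # Illustrative similarity score combining shared vocabulary with shared
    # word order (bigrams). The measures and the 50/50 weighting are
    # assumptions for illustration; this is not the eTBLAST algorithm itself.

    def tokens(text):
        return [w for w in text.lower().split() if w.isalnum()]

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def bigrams(words):
        return set(zip(words, words[1:]))

    def similarity(text1, text2):
        w1, w2 = tokens(text1), tokens(text2)
        word_overlap = jaccard(w1, w2)                      # shared key words
        order_overlap = jaccard(bigrams(w1), bigrams(w2))   # shared word order
        return 0.5 * word_overlap + 0.5 * order_overlap

    abstract_a = "gps data can detect ground motions and estimate tsunami potential"
    abstract_b = "ground motions detected with gps data can estimate tsunami potential"
    print(f"similarity: {similarity(abstract_a, abstract_b):.2f}")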


The tool is especially useful for investigators who wish to analyze an unpublished abstract or project idea in order to find previous publications on the topic or identify possible collaborators working in the same field.

Another application of eTBLAST is to aid journal editors in detecting potentially plagiarized or duplicate articles submitted for publication. Dr. Garner and his colleagues explored that application in two recent articles: in a scientific paper in the Jan. 15 issue of Bioinformatics and in a commentary in the Jan. 24 issue of Nature.

In the first phase of the study, published in Bioinformatics, researchers used eTBLAST to analyze more than 62,000 abstracts from the past 12 years, randomly selected from Medline, one of the largest databases of biomedical research articles. They found that 0.04 percent of papers with no shared authors were highly similar, representing potential cases of plagiarism. The small percentage found in the sample may appear insignificant, but when extrapolated to the 17 million scientific papers currently cited in the database, the number of potential plagiarism cases grows to nearly 7,000.

The researchers also found that 1.35 percent of papers with shared authors were sufficiently similar to be considered duplicate publications of the same data, another questionable practice.

In the second phase of the study, outlined in the Nature commentary, Dr. Garner and Dr. Mounir Errami, an instructor in internal medicine, refined their electronic search process so that it was thousands of times faster. An analysis of more than seven million Medline abstracts turned up nearly 70,000 highly similar papers.

Plagiarism may be the most extreme and nefarious form of unethical publication, Dr. Garner said, but simultaneously submitting the same research results to multiple journals or repeated publication of the same data may also be considered unacceptable in many circumstances.

When it comes to duplicate or repeated publications, however, there are some forms that are not only completely ethical, but also valuable to the scientific community. For example, long-term studies such as clinical trial updates and longitudinal surveys require annual or bi-annual publication of progress, and these updates often contain verbatim reproductions of much of the original text.

"We can identify near-duplicate publications using our search engine," said Dr. Garner, who is a faculty member in the Eugene McDermott Center for Human Growth and Development at UT Southwestern. "But neither the computer nor we can make judgment calls as to whether an article is plagiarized or otherwise unethical. That task must be left to human reviewers, such as university ethics committees and journal editors, the groups ultimately responsible for determining legitimacy."

Dr. Garner said eTBLAST not only detects the prevalence of duplicate publications, but also offers a possible solution to help prevent future unethical behavior.

"Our objective in this research is to make a significant impact on how scientific publications may be handled in the future," Dr. Garner said. "As it becomes more widely known that there are tools such as eTBLAST available, and that journal editors and others can use it to look at papers during the submission process, we hope to see the numbers of potentially unethical duplications diminish considerably."

Other UT Southwestern researchers in the McDermott Center who were involved in the research are computer programmer Justin Hicks, postdoctoral researcher Dr. Wayne Fisher, network analyst David Trusty and staff member Tara Long. Dr. Jonathan Wren at the Oklahoma Medical Research Foundation also participated.

The research was funded by the Hudson Foundation and the National Institutes of Health.

Adapted from materials provided by UT Southwestern Medical Center.





Videos Extract Mechanical Properties Of Liquid-gel Interfaces

Daily Science Journal (Jan. 29, 2008) — Blood coursing through vessels, lubricated cartilage sliding against joints, ink jets splashing on paper--living and nonliving things abound with fluids meeting solids. However important these liquid/solid boundaries may be, conventional methods cannot measure basic mechanical properties of these interfaces in their natural environments. Now, researchers at the National Institute of Standards and Technology (NIST) and the University of Minnesota have demonstrated a video method that eventually may be able to make measurements on these types of biological and industrial systems.*

Microscopic beads embedded in a gel surface were used to trace the motion of a gel forming an interface with a liquid. As the gel/liquid interface was stirred, the beads followed a complicated trajectory (patterns above photos), which the researchers broke down into a range of small, fast movements to large, slow movements in order to determine the gel's underlying mechanical properties. As the strength of the flow is increased (from left to right), the scale of the motion increases. (Credit: NIST)


Optical microrheology--an emerging tool for studying flow in small samples--usually relies on heat to stir up motion. Analyzing this heat-induced movement can provide the information needed to determine important mechanical properties of fluids and the interfaces that fluids form with other materials. However, when strong flows overwhelm heat-based motion, this method isn't applicable.

Motivated by this, the researchers developed a video method that can optically extract basic properties of the liquid/solid interface in strong flows. The solid material they chose was a gel, a substance that has both solid-like properties such as elasticity and liquid-like properties such as viscosity (resistance to flow).

In between a pair of centimeter-scale circular plates, the researchers deposited a gel of polydimethylsiloxane (a common material used in contact lenses and microfluidic devices). Pouring a liquid solution of polypropylene glycol on the gel, they then rotated the top plate to create forces at the liquid/gel interface. The response was observed by tracking the motion of styrene beads embedded in the gel.

The researchers discovered that the boundary between the liquid and gel became unstable in response to "mechanical noise" (irregularities in the motion of the plates). Such "noise" occurs in real-world physical systems. Surprisingly, a small amount of this mechanical noise produced a lot of motion at the fluid/gel interface. This motion provided so much useful information that the researchers could determine the gel's mechanical properties--namely its "viscoelasticity"--at the liquid/gel interface.
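
For a sense of what extracting mechanical information from such bead videos involves, the generic particle-tracking sketch below converts tracked bead positions into a mean-squared-displacement curve, a common starting point for estimating viscoelastic properties. The synthetic trajectory and the analysis steps are assumptions for illustration, not the specific NIST/Minnesota procedure.

    import random

    # Generic particle-tracking sketch: turn tracked bead positions into a
    # mean-squared-displacement (MSD) curve, a standard first step toward
    # viscoelastic properties. The trajectory below is synthetic; this is not
    # the NIST/Minnesota analysis itself.

    def mean_squared_displacement(track, max_lag):
        """track: list of (x, y) bead positions at evenly spaced video frames."""
        msd = []
        for lag in range(1, max_lag + 1):
            squares = [
                (track[i + lag][0] - track[i][0]) ** 2 +
                (track[i + lag][1] - track[i][1]) ** 2
                for i in range(len(track) - lag)
            ]
            msd.append(sum(squares) / len(squares))
        return msd  # one value per lag time

    # Synthetic trajectory: slow drift from the imposed flow plus random jitter.
    random.seed(0)
    x = y = 0.0
    track = []
    for frame in range(500):
        x += 0.02 + random.gauss(0, 0.05)   # micrometres per frame
        y += random.gauss(0, 0.05)
        track.append((x, y))

    for lag, value in enumerate(mean_squared_displacement(track, 5), start=1):
        print(f"lag {lag}: MSD = {value:.4f} um^2")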

The encouraging results from this model system show that the new approach could potentially be applied to determining properties of many useful and important liquid/solid interfaces. The NIST/Minnesota approach has possible applications in areas as diverse as speech therapy, where observing the flow of air over vocal cords could enable noninvasive measures of vocal tissue elasticity and help clinicians detect problems at an early stage. The research may also help clarify specific plastics manufacturing problems, such as "shear banding," in which flow can undesirably separate a uniformly blended polymer into different components.

* E.K. Hobbie, S. Lin-Gibson, and S. Kumar, "Non-Brownian microrheology of a fluid-gel interface," to appear in Physical Review Letters.

Adapted from materials provided by National Institute of Standards and Technology.





Daily Science Journal (Nov. 11, 2007) — As this year's holiday season approaches, your credit card transactions may be a little more secure thanks to standards adopted by the payment card industry. The latest incarnation of these standards includes the Common Vulnerability Scoring System (CVSS) Version 2, which was coauthored this year by researchers at the National Institute of Standards and Technology and Carnegie Mellon University in collaboration with 23 other organizations.

When you make an electronic transaction--either swiping a card at a checkout counter or through a commercial Web site--you enter personal payment information into a computer. That information is sent to a payment-card "server," a computer system often run by the bank or merchant that sponsors the particular card. The server processes the payment data, communicates the transaction to the vendor, and authorizes the purchase.


According to NIST's Peter Mell, lead author of CVSS Version 2, a payment-card server is like a house with many doors. Each door represents a potential vulnerability in the operating system or programs. Attackers check to see if any of the "doors" are open, and if they find one, they can often take control of all or part of the server and potentially steal financial information, such as credit card numbers.

For every potential vulnerability, CVSS Version 2 calculates a risk score on a scale from zero to 10, assessing how the vulnerability could compromise confidentiality (exposing private information such as credit card numbers), availability (could it be used to shut down the credit card system?) and integrity (could it be used to change credit card data?). The CVSS scores used by the credit card industry are those for the 28,000 vulnerabilities provided by the NIST National Vulnerability Database (NVD), sponsored by the Department of Homeland Security.
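
As a rough illustration of how such a zero-to-10 score is assembled, the sketch below follows the base-score equations given in the public CVSS Version 2 specification; the example vulnerability at the end is hypothetical, and the code is an illustrative rendering rather than NIST's scoring software.

    # Sketch of the CVSS Version 2 base-score equations as given in the public
    # specification; the example vulnerability at the bottom is hypothetical.

    ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
    ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
    AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
    IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}  # per C, I, A

    def base_score(av, ac, au, conf, integ, avail):
        impact = 10.41 * (1 - (1 - IMPACT[conf]) *
                              (1 - IMPACT[integ]) *
                              (1 - IMPACT[avail]))
        exploitability = (20 * ACCESS_VECTOR[av] *
                          ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au])
        f = 0.0 if impact == 0 else 1.176
        score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f
        return round(score, 1)

    # Hypothetical remotely exploitable flaw that partially exposes, alters and
    # disrupts data: scores 7.5 on the zero-to-10 scale.
    print(base_score("network", "low", "none", "partial", "partial", "partial"))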

To assess the security of their servers, payment card vendors use software that scans their systems for vulnerabilities. To promote uniform standards in this important software, the PCI (Payment Card Industry) Security Standards Council, an industry organization, maintains the Approved Scanning Vendor (ASV) compliance program, which currently covers 135 vendors, including assessors who do onsite audits of PCI information security. By June 2008, all ASV scanners must use the current version of CVSS in order to identify security vulnerabilities and score them.

Requiring ASV software to use CVSS, according to Bob Russo, General Manager of the PCI Security Standards Council, promotes consistency between vendors and ultimately provides good information for protecting electronic transactions. The council also plans to use NIST's upcoming enhancements to CVSS, which will go beyond scoring vulnerabilities to identify secure configurations of operating systems and applications.

Adapted from materials provided by National Institute of Standards and Technology, via EurekAlert!, a service of AAAS.



