Daily Science Journal (Dec. 31, 2007) — Researchers at Chalmers University in Sweden have succeeded in combining a receiver for high frequencies with an antenna on a small chip.

The receiver measures just a few square millimetres and is suitable for new safety systems, image sensors, and radio communication at high bit rates. It is an electronic circuit comprising an antenna, a low-noise amplifier, and a frequency converter monolithically integrated on gallium arsenide.

"This is a breakthrough in our research. Our result opens the possibility to manufacture systems for very high frequencies within the so called 'THZ-electronics' area, to a relatively low cost. In the next phase of this project even more functions can be integrated on the same chip", according to Herbert Zirath, professor at the department of Microwave Electronics.


This circuit can be used, for instance, in radiometers for future safety systems that screen for concealed weapons without intrusive personal searches. Other applications are imaging sensors that can see through darkness, smoke or fog, an important safety function for vehicles such as cars and aircraft.

"Thanks to this technology, we now have the possibility of integrating imaging sensors by using circuits of a few square millimetre which is much smaller that the present technology at a lower cost. For automotive applications such as cars, aircrafts and satellites, the size and weight is of utmost importance. The present systems consist of many pieces and demands several cubic decimetres volume", says Herbert Zirath.

The new circuit is designed to work at a frequency of 220 gigahertz, but this is not an upper limit. According to Professor Zirath, the technology can be used up to and above 300 GHz in the near future.

The technology is also interesting for wireless data communication because, thanks to the very high bandwidth, data rates well above 10 Gbit/s can be realised in future radio links. Together with Omnisys Instruments in Gothenburg, the group is also implementing receivers for future Earth-observation satellites for environmental studies and weather forecasting, operating at 118 and 183 GHz and using the same technology.

This work is the result of a co-operation between Chalmers, Saab Microwave Systems, Omnisys Instruments AB, FOI, the Fraunhofer Institute IAF in Freiburg and FGAN, Germany, within the project "nanoComp".

Adapted from materials provided by Chalmers University.




Daily Science Journal (Dec. 22, 2007) — Homes today are filled with increasing numbers of high-tech gadgets, from smart phones and PCs to state-of-the-art TV and audio systems, many of them with built-in networking capabilities. Combined, these devices could form the building blocks of the smart homes of the future, but only if they can be made to work together intelligently.

Although the idea of creating intelligent networked home environments as a way to make life easier, safer and more enjoyable has been around for some time, the technology has yet to catch up with the vision. Home automation systems have become more commonplace and consumer electronics have more networking capability, but no one has so far managed to get all the high-tech and not-so-high-tech gadgetry cluttering modern homes to work together in an intelligent way. It is not yet common for fridges to warn your TV that the door has been left open, or for heating systems to turn on when you return home, for example.

“People are finding themselves with all these networkable devices and are wondering where the applications are that can use these devices to make life easier and how they could be of more value together than individually,” says Maddy Janse, a researcher for Dutch consumer electronics group Philips.


There are two fundamental obstacles to realising the vision of the intelligent networked home: lack of interoperability between individual devices and the need for context-aware artificial intelligence to manage them. And, to make smart homes a reality, the two issues must be addressed together.

Software wrapper to get gadgets talking

The EU-funded Amigo project, coordinated by Janse, is doing just that, creating a middleware software platform that will get all networkable devices in the home talking to each other and providing an artificial intelligence layer to control them.

“With the Amigo system, you can take any networkable device, create a software wrapper for it and dynamically integrate it into the networked home environment,” Janse explains.
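
The press materials do not describe the wrapper interface itself, so the following Python sketch is purely illustrative: it assumes a hypothetical registry and a generic device class, invented here to show the general idea of wrapping a device behind a uniform interface, and is not the Amigo API.

from abc import ABC, abstractmethod

class DeviceWrapper(ABC):
    """Hypothetical uniform facade that a home middleware could discover and invoke."""

    def __init__(self, name: str, capabilities: list[str]):
        self.name = name
        self.capabilities = capabilities

    @abstractmethod
    def invoke(self, capability: str, **kwargs):
        """Translate a generic request into the device's native protocol."""

class FridgeWrapper(DeviceWrapper):
    def __init__(self):
        super().__init__("kitchen-fridge", ["door_state", "temperature"])

    def invoke(self, capability: str, **kwargs):
        # A real wrapper would call the fridge's native (e.g. UPnP) interface here.
        if capability == "door_state":
            return "open"      # stubbed reading
        if capability == "temperature":
            return 4.0         # degrees Celsius, stubbed
        raise ValueError(f"unsupported capability: {capability}")

# A toy "middleware" registry: devices announce themselves, and a rule layer can
# then combine them, e.g. warning on the TV when the fridge door is left open.
registry: dict[str, DeviceWrapper] = {}

def register(device: DeviceWrapper) -> None:
    registry[device.name] = device

register(FridgeWrapper())
if registry["kitchen-fridge"].invoke("door_state") == "open":
    print("Rule fired: show a 'fridge door open' alert on the living-room TV")

The point of the wrapper pattern is that the rule layer never needs to know each device's native protocol, which is what lets heterogeneous gadgets be composed into one environment.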

The project, which involves several big industrial and research partners, is unique in that it is addressing the issues of interoperability and intelligence together and, most significantly, its software is modular and open source.

By steering away from creating a monolithic system and making the software accessible to all, the partners believe they can overcome the complications that have held back other smart home projects. For consumer electronics companies and telecoms firms, the system has the additional benefit of providing a test bed for new products and services.

“What we are trying to do is so large and so complex that it has to be broken down into smaller parts. By making it open source and letting third-party developers create applications we can ensure the system addresses whatever challenges arise,” Janse says.

The Amigo architecture consists of a base middleware layer, an intelligent user services layer, and a programming and deployment framework that developers can use to create individual applications and services. These individual software modules form the building blocks of the networked home environment, which has the flexibility to grow as and when new devices and applications are added.

Interoperability is ensured through support for, and abstraction of, common interaction and home automation standards and protocols such as UPnP and DLNA, as well as web services, while the definition of appropriate ontologies enables common understanding at a semantic level.

“A lot of applications are already available today and more will be created as more developers start to use the software,” Janse says.

Vision of the future

A video created by the project partners underscores their vision for the future in which homes adapt to the behaviour of occupants, automatically setting ambient lighting for watching a movie, locking the doors when someone leaves or contacting relatives or emergency services if someone is ill or has an accident. In an extended home environment, the homes of friends and relatives are interconnected, allowing information and experiences to be shared more easily and setting the stage for the use of tele-presence applications to communicate and interact socially.

Initially, Janse sees such networked systems being employed in larger scale environments than an individual home or for specific purposes. Some subsets of applications could be rolled out in hotels or hospitals or used to monitor the wellbeing of the elderly or infirm, for example.

“With the exception of people with a lot of money building their homes from scratch, it will be a while before intelligent networked homes become commonplace,” the coordinator notes. “In addition, this isn’t something average consumers can easily set up themselves. Currently, some degree of programming knowledge is needed, and installers need to become familiar with the concepts and their potential.”

Even so, the project is hoping to continue to stimulate the growth of the sector.

In October, it launched the Amigo Challenge, a competition in which third-party programmers have been invited to come up with new applications using the Amigo software. Janse expects the initiative will lead to the software being used in even more innovative and possibly unexpected ways.

Adapted from materials provided by ICT Results.





Daily Science Journal (Dec. 21, 2007) — New generation active computer games stimulate greater energy expenditure than sedentary games, but are no substitute for playing real sports, according to a study in the Christmas issue of the British Medical Journal.

Young people are currently recommended to take an hour of moderate to vigorous physical exercise each day, which should use at least three times as much energy as is used at rest. But many adolescents have mostly sedentary lifestyles.

Time spent in front of television and computer screens has been linked to physical inactivity and obesity.


The new generation of wireless-based computer games is meant to stimulate greater interaction and movement during play, so researchers at Liverpool John Moores University compared the energy expenditure of adolescents when playing sedentary and new generation active computer games.

Six boys and five girls aged 13-15 years were included in the study. All were a healthy weight, competent at sport and regularly played sedentary computer games.

Before the study, each participant practiced playing both the active and inactive games.

On the day of the study, participants played four computer games for 15 minutes each while wearing a monitoring device to record energy expenditure.

The participants first played the inactive Project Gotham Racing 3 game (Xbox 360). After a five minute rest, they then played competitive bowling, tennis and boxing matches (Nintendo Wii Sports) for 15 minutes each, with a five minute rest between sports. Total playing time for each child was 60 minutes.

Energy expenditure was 60 kcal per hour higher during active gaming than during sedentary gaming.

However, energy expenditure during active gaming was much lower than authentic bowling, tennis and boxing, and was not intense enough to contribute towards the recommended amount of daily physical activity for children.

When translated to a typical week of computer play for these participants, active rather than passive gaming would increase total energy expenditure by less than 2%.
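
The less-than-2% figure is easy to sanity-check with rough numbers. In the sketch below the weekly playing time and the total daily energy expenditure are assumed values chosen for illustration, not figures reported in the study.

# Back-of-envelope check of the "<2%" claim. The playing hours and the total daily
# energy expenditure below are assumptions for illustration, not study data.

extra_kcal_per_hour = 60       # reported increase for active versus sedentary gaming
weekly_play_hours = 5          # assumed weekly computer-play time
daily_energy_kcal = 2200       # assumed total daily energy expenditure for a teenager

extra_per_week = extra_kcal_per_hour * weekly_play_hours     # 300 kcal
total_per_week = daily_energy_kcal * 7                       # 15,400 kcal
print(f"Increase: {100 * extra_per_week / total_per_week:.1f}% of weekly expenditure")
# With these assumed values the increase comes out at about 1.9%, in line with the
# study's conclusion that the effect is below 2% of total energy expenditure.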

Adapted from materials provided by BMJ-British Medical Journal, via EurekAlert!, a service of AAAS.





Daily Science Journal (Dec. 21, 2007) — Astronomers funded by NASA are monitoring the trajectory of an asteroid estimated to be 50 meters (164 feet) wide that is expected to cross Mars' orbital path early next year. Observations provided by the astronomers and analyzed by NASA's Near-Earth Object Office at the Jet Propulsion Laboratory in Pasadena, Calif., indicate the object may pass within 30,000 miles of Mars at about 6 a.m. EST (3 a.m. PST) on Jan. 30, 2008.

This artist rendering uses an arrow to show the predicted path of the asteroid on Jan. 30, 2008, and the orange swath indicates the area it is expected to pass through. Mars may or may not be in its path. (Credit: NASA/JPL)

"Right now asteroid 2007 WD5 is about half-way between Earth and Mars and closing the distance at a speed of about 27,900 miles per hour," said Don Yeomans, manager of the Near Earth Object Office at JPL. "Over the next five weeks, we hope to gather more information from observatories so we can further refine the asteroid's trajectory."


NASA detects and tracks asteroids and comets passing close to Earth. The Near Earth Object Observation Program, commonly called "Spaceguard," plots the orbits of these objects to determine if any could be potentially hazardous to our planet.

Asteroid 2007 WD5 was first discovered on Nov. 20, 2007, by the NASA-funded Catalina Sky Survey and put on a "watch list" because its orbit passes near Earth. Further observations from both the NASA-funded Spacewatch at Kitt Peak, Ariz., and the Magdalena Ridge Observatory in New Mexico gave scientists enough data to determine that the asteroid was not a danger to Earth, but could potentially impact Mars. This makes it a member of an interesting class of small objects that are both near Earth objects and "Mars crossers."

Because of current uncertainties about the asteroid's exact orbit, there is a 1-in-75 chance of 2007 WD5 impacting Mars. If this unlikely event were to occur, it would be somewhere within a broad swath across the planet north of where the Opportunity rover is located.

"We estimate such impacts occur on Mars every thousand years or so," said Steve Chesley, a scientist at JPL. "If 2007 WD5 were to thump Mars on Jan. 30, we calculate it would hit at about 30,000 miles per hour and might create a crater more than half-a-mile wide." The Mars Rover Opportunity is exploring a crater approximately this size right now.

Such a collision could release about three megatons of energy. Scientists believe an event of comparable magnitude occurred here on Earth in 1908 in Tunguska, Siberia, but no crater was created. The object was disintegrated by Earth's thicker atmosphere before it hit the ground, although the air blast devastated a large area of unpopulated forest.
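
The three-megaton figure is consistent with a simple kinetic-energy estimate. In the sketch below the asteroid's density is an assumed value for a stony body; the size and speed come from the article.

import math

# Rough kinetic-energy check for a 50 m asteroid hitting at about 30,000 mph.
diameter_m = 50.0
density_kg_m3 = 2600.0                     # assumed density of a stony asteroid
speed_m_s = 30000 * 1609.34 / 3600         # 30,000 mph is roughly 13.4 km/s

volume_m3 = math.pi / 6 * diameter_m ** 3  # volume of a sphere
mass_kg = density_kg_m3 * volume_m3        # about 1.7e8 kg
energy_j = 0.5 * mass_kg * speed_m_s ** 2

megatons = energy_j / 4.184e15             # 1 megaton of TNT = 4.184e15 joules
print(f"Impact energy: about {megatons:.1f} megatons of TNT")
# With these numbers the estimate is a few megatons, consistent with the article.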

NASA and its partners will continue to track asteroid 2007 WD5 and will provide an update in January when further information is available. For more information on the Near Earth Object program, visit: http://neo.jpl.nasa.gov/.

Adapted from materials provided by NASA/Jet Propulsion Laboratory.




Daily Science Journal (Dec. 20, 2007) — Researchers at the University of Warwick's Department of Computer Science have developed a colour-based Sudoku puzzle that not only helps players solve traditional Sudoku puzzles but also demonstrates the potential benefits of a radical new vision for computing.

The colour Sudoku adds another dimension to solving the puzzle by assigning a colour to each digit. Squares containing a digit are coloured according to the digit's colour. Empty squares are coloured according to which digits are possible for that square taking account of all current entries in the square's row, column and region. The empty square's colour is the combination of the colours assigned to each possible digit. This gives players major clues as darker coloured empty squares imply fewer number possibilities.

More usefully, an empty square that has the same colour as a completed square must contain the same digit. If a black square is encountered, a mistake has been made. Players can also gain additional clues by changing the colour assigned to each digit and watching the unfolding changes in the pattern of colours.
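
The press release does not give the exact colour-mixing rule, so the Python sketch below is only one plausible reading of the description above: each digit is assigned an RGB colour (the palette here is invented), and an empty square additively mixes the colours of the digits still possible for it, so fewer possibilities give a darker square, a single possibility gives that digit's exact colour, and no possibilities give black.

# Illustrative sketch of the colouring idea, not the Warwick implementation.
# A 9x9 grid, with 0 meaning an empty square.
Grid = list[list[int]]

PALETTE = {                         # assumed colours for digits 1..9
    1: (127, 0, 0), 2: (0, 127, 0), 3: (0, 0, 127),
    4: (127, 127, 0), 5: (127, 0, 127), 6: (0, 127, 127),
    7: (63, 63, 63), 8: (127, 63, 0), 9: (0, 63, 127),
}

def candidates(grid: Grid, r: int, c: int) -> set[int]:
    """Digits still possible for a square, given its row, column and 3x3 region."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return {d for d in range(1, 10) if d not in used}

def square_colour(grid: Grid, r: int, c: int) -> tuple[int, int, int]:
    if grid[r][c]:                                  # a filled square shows its digit's colour
        return PALETTE[grid[r][c]]
    mix = [0, 0, 0]
    for d in candidates(grid, r, c):                # additive mixing of the possible digits
        for i in range(3):
            mix[i] = min(255, mix[i] + PALETTE[d][i])
    return tuple(mix)                               # (0, 0, 0) means no candidates: a mistake

On this reading, an empty square whose colour exactly matches one digit's palette entry has only that digit left as a candidate, which is the clue described above.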


Sudoku players can test this for themselves at: http://www.warwick.ac.uk/go/sudoku. (NB page requires Flash 9)

However, the colour Sudoku is more than just a game to the University of Warwick computer scientists. For doctoral researcher Antony Harfield, it is a way of exploring how logic and perception interact using a radical approach to computing called Empirical Modelling. The method can be applied to other creative problems, and he is exploring how this experimental modelling technique can be used in educational technology and learning.

The interplay between logic and perception, as it relates to interactions between computers and humans, is viewed as key to building better software. It is of particular relevance for artificial intelligence, computer graphics, and educational technology. The interaction between the shifting colour squares and the logical deductions of the Sudoku puzzle solver is a good illustration of the unusual quality of this "Empirical Modelling" approach.

Previously, the researchers used their principles to analyse a railway accident that occurred in the Clayton Tunnel near Brighton in 1861, when the telegraph was newly introduced. Reports at the time sought to blame various railway personnel, but by applying Empirical Modelling the researchers have created an environment in which experimenters can replay the roles of the drivers, signalmen and other personnel involved. This has shown that there were systemic problems arising from the introduction of the new technology.

Dr Steve Russ of the Empirical Modelling group at the University of Warwick said: "Traditional computer programs are best-suited for tasks that are so well-understood they can, without much loss, be expressed in a closed, mechanical form in which all interactions or changes are 'pre-planned'. Even in something so simple as a Sudoku puzzle humans use a mixture of perception, expectation, experience and logic that is just incompatible with the way a computer program would typically solve the puzzle. For safety-critical systems (such as railway management) it is literally a matter of life and death that we learn to use computers in ways that integrate smoothly with human perception, communication and action. This is our goal with Empirical Modelling."

Adapted from materials provided by University of Warwick, via EurekAlert!, a service of AAAS.






Daily Science Journal (Dec. 19, 2007) — Certain types of tantrums in preschoolers may be a sign of serious emotional or behavioral problems, according to researchers at Washington University School of Medicine in St. Louis. Although temper tantrums are common and normal in young children, the researchers found that long, frequent, violent and/or self-destructive tantrums may indicate the presence of psychiatric illness.

Most children have temper tantrums at some point, but the researchers found healthy children tend to be less aggressive and generally have shorter tantrums than their peers with depression and disruptive disorders. (Credit: Image courtesy of Washington University School of Medicine in St. Louis)

Researchers compared tantrums in healthy children to the tantrums in children diagnosed with depression or disruptive disorders, such as attention-deficit/hyperactivity disorder. Most children have temper tantrums at some point, but the researchers found healthy children tend to be less aggressive and generally have shorter tantrums than their peers with depression and disruptive disorders.


"It's clearly normal for young children to have occasional tantrums," says first author Andrew C. Belden, Ph.D., a National Institute of Mental Health (NIMH) post-doctoral research scholar in child psychiatry. "Healthy children may even display extreme behaviors if they're very tired or sick or hungry. But if a child is regularly engaging in specific types of tantrum behaviors, there may be a problem."

The researchers studied 270 children between 3 and 6 years old. They gathered the information about tantrums from a parent. The children were divided into four groups according to psychiatric symptoms: no psychiatric diagnosis, major depressive disorder, disruptive disorder, or depression and disruptive disorder. All of the children were part of a larger NIMH-funded study of psychiatric illness in preschoolers.

"We've been following these children for several years," says principal investigator Joan L. Luby, M.D., associate professor of child psychiatry and director of the Early Emotional Development Program at the School of Medicine. "It's important to find age-specific ways to diagnose depression and other problems in young children because it can be difficult to get very young children to tell you about their feelings. We've successfully used narrative and observational techniques, but characteristics of tantrums when present might be another helpful tool."

Luby, Belden and colleagues identified five types of tantrum behavior that appeared to be connected with depression or diagnosable disruptive disorders.

The first involves extremely aggressive behavior during a tantrum. When a toddler displays aggression directed at a caregiver or violently destructive behavior toward an object such as a toy during most tantrums, parents should be concerned. The study found that these children tend to have diagnoses of ADHD, oppositional-defiant disorder and other disruptive disorders.

The second worrisome tantrum behavior is when toddlers intentionally injure themselves — actions such as scratching until the skin bleeds, head-banging or biting themselves.

"It doesn't matter how long these types of tantrums last or how often they occur, self-injurious behavior almost always was associated with a psychiatric diagnosis in this study," Belden says. "Children with major depressive disorder tended to hurt themselves. We didn't see that in healthy kids or those with ADHD and other disruptive disorders. It really surprised us that this type of behavior was emerging at such a young age."

Other "red flags" involved children who had more than five tantrums a day for several consecutive days. Very long tantrums also signaled a problem. Healthy children might have a tantrum that lasts 10 or 11 minutes, but several children in the study, especially those with disruptive disorders, averaged more than 25 minutes per tantrum.

Finally, when preschoolers are unable to calm themselves following a tantrum, they appear to be at much greater risk of psychiatric problems.

"If a child is having tantrums and parents always have to bribe the child with cookies or other rewards to calm him or her down, this may be something more serious than normal toddler volatility," Belden says.

It's important, he stresses, to replicate these findings in studies of other children and to more rigorously classify what types of behavior may be problematic. Because this study relied on parents' reports of their children's tantrum behaviors, future studies will involve video analysis of the tantrums themselves.

Belden, who has two young children, became interested in tantrum behavior because of the very different tantrum styles displayed by each of his two children. His advice for parents is not to worry when a child has a tantrum but to pay attention to how the child is behaving during the tantrum.

"The best news from this paper is that it's normal for children to display excessive behavior sometimes," Belden says. "If a child lashes out at you, it doesn't mean, 'Oh my gosh! They're doomed!' But if they lash out and hit you every time, there might be a problem. And if they hurt themselves intentionally, I think it's best to consult a pediatrician or mental health professional."

Journal reference: Belden, AC, Thomson NR, Luby JL. Temper tantrums in healthy versus depressed and disruptive preschoolers: defining tantrum behaviors associated with clinical problems. The Journal of Pediatrics, vol. 151:6, Jan. 2008. doi:10.1016/j.jpeds.2007.06.030

This research was supported by a grant from the National Institute of Mental Health.

Adapted from materials provided by Washington University School of Medicine in St. Louis.




Daily Science Journal (Dec. 19, 2007) — USC professor's sustainable design is the first of its kind: 10-meter span in Hunan province was assembled in days without heavy equipment and easily carries 8-ton vehicles.

The 10 meter long modern bamboo bridge under construction.
(Credit: Yan Xiao, Image courtesy USC Viterbi School of Engineering)

In China bamboo is used for furniture, artwork, building scaffolding, panels for concrete casting and now, truck bridges.


Yan Xiao, a professor at the University of Southern California Viterbi School of Engineering, is the designer of a new span in the village of Leiyang, Hunan Province, which formally opened to traffic on December 12.

Made from pre-fabricated structural elements, the bridge was erected within a week by a team of eight workers without heavy construction equipment. While traffic on the Leiyang bridge will be limited to the 8-ton design capacity, preliminary tests on a duplicate bridge erected on the campus of Hunan University have shown much higher strength -- tests are continuing.

The new bridge is the latest installment in research on structural bamboo carried out by Xiao, who, in addition to his appointment at the USC Sonny Astani Department of Civil and Environmental Engineering, holds an appointment at the College of Civil Engineering of Hunan University, China.

Last year, Xiao demonstrated a high-capacity bamboo footbridge, which was a featured attraction at a recent conference he organized in Changsha, China.

Prof. Xiao expects his modern bamboo bridge technology, as an environmentally friendly and sustainable construction approach, to be widely used for pedestrian crossings and for the large number of bridges needed in rural areas of China. Besides bridges, Xiao's team has also built a mobile house using similar technology it developed.

Meanwhile, they are also constructing a prototype 250 square meter, two-story single-family house, similar to the lightweight wood frame houses widely built in California, where Dr. Xiao lives.

Adapted from materials provided by University of Southern California.




Daily Science Journal (Dec. 17, 2007) — People addicted to cocaine have an impaired ability to perceive rewards and exercise control due to disruptions in the brain's reward and control circuits, according to a series of brain-mapping studies and neuropsychological tests conducted at the U.S. Department of Energy's Brookhaven National Laboratory.

In control subjects (left), brain regions that play a part in experiencing reward were activated in a graded fashion in response to increasing monetary rewards. These regions were not activated in cocaine-addicted subjects offered the same rewards (right). This indicates that cocaine-addicted subjects' ability to respond to non-drug rewards is compromised.
(Image courtesy of Brookhaven National Laboratory)

"Our findings provide the first evidence that the brain's threshold for responding to monetary rewards is modified in drug-addicted people, and is directly linked to changes in the responsiveness of the prefrontal cortex, a part of the brain essential for monitoring and controlling behavior," said Rita Goldstein, a psychologist at Brookhaven Lab. "These results also attest to the benefit of using sophisticated brain-imaging tools combined with sensitive behavioral, cognitive, and emotional probes to optimize the study of drug addiction, a psychopathology that these tools have helped to identify as a disorder of the brain."


Goldstein will present details of these studies at a press conference on neuroscience and addiction at the Society for Neuroscience (SfN) annual meeting in Atlanta, Georgia, on Sunday, October 15, 2006, 2 to 3 p.m., and at an SfN symposium on Wednesday, October 18, 8:30 a.m.

Goldstein's experiments were designed to test a theoretical model, called the Impaired Response Inhibition and Salience Attribution (I-RISA) model, which postulates that drug-addicted individuals disproportionately attribute salience, or value, to their drug of choice at the expense of other stimuli that would ordinarily be rewarding but no longer are -- with a concomitant decrease in the ability to inhibit maladaptive drug use. In the experiments, the scientists subjected cocaine-addicted and non-drug-addicted individuals to a range of tests of behavior, cognition/thought, and emotion, while simultaneously monitoring their brain activity using functional magnetic resonance imaging (fMRI) and/or recordings of event-related potentials (ERP).

In one study, subjects were given a monetary reward for their performance on an attention task. Subjects were given one of three amounts (no money, one cent, or 45 cents) for each correct response, up to a total reward of $50 for their performance. The researchers also asked the subjects how much they valued different amounts of monetary reward, ranging from $10 to $1000.

More than half of the cocaine abusers rated $10 as equally valuable as $1000, "demonstrating a reduced subjective sensitivity to relative monetary reward," Goldstein said.

"Such a 'flattened' sensitivity to gradients in reward may play a role in the inability of drug-addicted individuals to use internal cues and feedback from the environment to inhibit inappropriate behavior, and may also predispose these individuals to disadvantageous decisions -- for example, trading a car for a couple of cocaine hits. Without a relative context, drug use and its intense effects -- craving, anticipation, and high -- could become all the more overpowering," she said.

The behavioral data collected during fMRI further suggested that, in the cocaine abusers, there was a "disconnect" between subjective measures of motivation (how much they said they were engaged in the task) and the objective measures of motivation (how fast and accurately they performed on the task). "These behavioral data implicate a disruption in the ability to perceive inner motivational drives in cocaine addiction," Goldstein said.

The fMRI results also revealed that non-addicted subjects responded to the different monetary amounts in a graded fashion: the higher the potential reward, the greater the response in the prefrontal cortex. In cocaine-addicted subjects, however, this region did not demonstrate a graded pattern of response to the monetary reward offered. Furthermore, within the cocaine-addicted group, the higher the sensitivity to money in the prefrontal cortex, the higher was the motivation and the self-reported ability to control behavior.

The ERP results showed a similarly graded brain response to monetary reward in healthy control subjects, but not in cocaine-addicted individuals.

"The dysfunctional interplay between reward processing and control of behavior observed in these studies could help to explain the chronically relapsing nature of drug addiction," Goldstein said. "Our results also suggest the need for new clinical interventions aimed at helping drug abusers manage these symptoms as part of an effective treatment strategy."

This work was funded by the Office of Biological and Environmental Research within the U.S. Department of Energy's Office of Science; the National Institute on Drug Abuse; the National Alliance for Research on Schizophrenia and Depression (NARSAD) Young Investigator Award; the National Institute on Alcohol Abuse and Alcoholism; and the General Clinical Research Center at University Hospital Stony Brook. DOE has a long-standing interest in brain-imaging studies on addiction. Brain-imaging techniques such as MRI are a direct outgrowth of DOE's support of basic physics and chemistry research.

Coauthors on the study included: Nelly Alia-Klein, Dardo Tomasi, Lisa A. Cottone, Thomas Maloney, Frank Telang, and Elisabeth C. Caparelli of Brookhaven Lab; Lei Zhang, Dimitris Samaras, and Nancy K. Squires of Stony Brook University; Linda Chang and Thomas Ernst of the University of Hawaii; and Nora D. Volkow of the National Institute on Drug Abuse.

Adapted from materials provided by Brookhaven National Laboratory.





Daily Science Journal (Dec. 14, 2007) — It is not science fiction to think that our eyes could very soon be the key to unlocking our homes, accessing our bank accounts and logging on to our computers, according to Queensland University of Technology researcher Sammy Phang.

QUT researcher Sammy Phang. (Credit: Image courtesy of Queensland University of Technology)

Research by Ms Phang, from QUT's Faculty of Built Environment and Engineering, is helping to remove one of the final obstacles to the everyday application of iris scanning technology.


Ms Phang said the pattern of an iris was like a fingerprint in that every iris was unique. "Every individual iris is unique, and even the iris pattern of the left eye is different from the right. The iris pattern is fixed throughout a person's lifetime," she said.

"By using iris recognition it is possible to confirm the identity of a person based on who the person is rather than what the person possesses, such as an ID card or password.

"It is already being used around the world and it is possible that within the next 10 to 20 years it will be part of our everyday lives."

Ms Phang said although iris recognition systems were being used in a number of civilian applications, the system was not perfect. "Changes in lighting conditions change a person's pupil size and distort the iris pattern," she said.

"If the pupil size is very different, the distortion of the iris pattern can be significant, and makes it hard for the iris recognition system to work properly."

To overcome this flaw, Ms Phang has developed the technology to estimate the effect of the change in the iris pattern as a result of changes in surrounding lighting conditions. "It is possible for a pupil to change in size from 0.8mm to 8mm, depending on lighting conditions," she said.
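
The article does not describe Ms Phang's estimation model itself. The sketch below shows only the standard "rubber sheet" normalisation commonly used in iris recognition, which resamples the ring of iris between the pupil boundary and the outer boundary onto fixed coordinates; it is the usual starting point before the kind of nonlinear distortion correction her work addresses, and all parameter values here are assumptions.

import numpy as np

def normalise_iris(image: np.ndarray, centre: tuple[float, float],
                   pupil_radius: float, iris_radius: float,
                   n_theta: int = 256, n_r: int = 32) -> np.ndarray:
    """Resample the iris annulus onto an n_r x n_theta grid (nearest-neighbour sampling),
    so patterns captured at different pupil sizes can be compared on a common grid."""
    cy, cx = centre
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(0.0, 1.0, n_r)          # 0 = pupil edge, 1 = outer iris edge
    out = np.zeros((n_r, n_theta), dtype=image.dtype)
    for i, rho in enumerate(radii):
        r = pupil_radius + rho * (iris_radius - pupil_radius)
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
        out[i] = image[ys, xs]
    return out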

Ms Phang said by using a high-speed camera which could capture up to 1200 images per second it was possible to track the iris surface's movements to study how the iris pattern changed depending on the variation of pupil sizes caused by the light. "The study showed that everyone's iris surface movement is different."

She said results of tests conducted using iris images showed it was possible to estimate the change on the surface of the iris and account for the way the iris features changed due to different lighting conditions.

"Preliminary image similarity comparisons between the actual iris image and the estimated iris image based on this study suggest that this can possibly improve iris verification performance."

Adapted from materials provided by Queensland University of Technology.






Daily Science Journal (Dec. 13, 2007) — Researchers at Rensselaer Polytechnic Institute have developed a new way to seek out specific proteins, including dangerous proteins such as anthrax toxin, and render them harmless using nothing but light. The technique lends itself to the creation of new antibacterial and antimicrobial films to help curb the spread of germs, and also holds promise for new methods of seeking out and killing tumors in the human body.

Transmission electron microscope images show carbon nanotubes conjugated with anthrax toxin before (a, b) and after (c, d) exposure to ultraviolet light. This light caused the adsorbed toxin to deactivate and fall off the nanotube, which is why the structures in pictures c and d are smaller in diameter than those in pictures a and b. (Credit: Rensselaer/Ravi Kane)

Scientists have long been interested in wrapping proteins around carbon nanotubes, and the process is used for various applications in imaging, biosensing, and cellular delivery. But this new study at Rensselaer is the first to remotely control the activity of these conjugated nanotubes.


A team of Rensselaer researchers led by Ravi S. Kane, professor of chemical and biological engineering, has worked for nearly a year to develop a means to remotely deactivate protein-wrapped carbon nanotubes by exposing them to invisible and near-infrared light. The group demonstrated this method by successfully deactivating anthrax toxin and other proteins.

"By attaching peptides to carbon nanotubes, we gave them the ability to selectively recognize a protein of interest -- in this case anthrax toxin -- from a mixture of different proteins," Kane said. "Then, by exposing the mixture to light, we could selectively deactivate this protein without disturbing the other proteins in the mixture."

By conjugating carbon nanotubes with different peptides, the process can be easily tailored to work on other harmful proteins, Kane said. And by employing different wavelengths of light that can pass harmlessly through the human body, the remote-control process could also target and deactivate specific proteins or toxins inside the body. Shining light on the conjugated carbon nanotubes creates free radicals, called reactive oxygen species. It was the presence of these radicals, Kane said, that deactivated the proteins.

Kane's new method for selective nanotube-assisted protein deactivation could be used in defense, homeland security, and laboratory settings to destroy harmful toxins and pathogens. It could also enable the targeted destruction of tumor cells. By conjugating carbon nanotubes with peptides engineered to seek out specific cancer cells, and then releasing those nanotubes into a patient, doctors may be able to use this remote protein deactivation technology as a powerful tool to prevent the spread of cancer.

Kane's team also developed a thin, clear film made of carbon nanotubes that employs this technology. This self-cleaning film may be fashioned into a coating that -- at the flip of a light switch -- could help prevent the spread of harmful bacteria, toxins, and microbes.

"The ability of these coatings to generate reactive oxygen species upon exposure to light might allow these coatings to kill any bacteria that have attached to them," Kane said. "You could use these transparent coatings on countertops, doorknobs, in hospitals or airplanes -- essentially any surface, inside or outside, that might be exposed to harmful contaminants."

Kane said he and his team will continue to hone this new technology and further explore its potential applications.

Details of the project are outlined in the article "Nanotube-Assisted Protein Deactivation" in the December issue of Nature Nanotechnology.

Co-authors of the paper include Department of Chemical and Biological Engineering graduate students Amit Joshi and Shyam Sundhar Bale; postdoctoral researcher Supriya Punyani; Rensselaer Nanotechnology Center Laboratory Manager Hoichang Yang; and professor Theodorian Borca-Tasciuc of the Department of Mechanical, Aerospace, and Nuclear Engineering.

The group has filed a patent disclosure for their new selective nanotube-assisted protein deactivation technology. The research project was funded by the U.S. National Institutes of Health and the National Science Foundation.

Adapted from materials provided by Rensselaer Polytechnic Institute, via EurekAlert!, a service of AAAS.





Daily Science Journal (Dec. 13, 2007) — Natural climate variations, which tend to involve localized changes in sea surface temperature, may have a larger effect on hurricane activity than the more uniform patterns of global warming, a report in Nature suggests.

The multiple effects of warming oceans on hurricane intensity. (Credit: NOAA, GFDL)

In the debate over the effect of global warming on hurricanes, it is generally assumed that warmer oceans provide a more favorable environment for hurricane development and intensification. However, several other factors, such as atmospheric temperature and moisture, also come into play.


Drs. Gabriel A. Vecchi of the NOAA Geophysical Fluid Dynamics Laboratory and Brian J. Soden from the University of Miami Rosenstiel School of Marine & Atmospheric Science analyzed climate model projections and observational reconstructions to explore the relationship between changes in sea surface temperature and tropical cyclone 'potential intensity' - a measure that provides an upper limit on cyclone intensity.

They found that warmer oceans do not alone produce a more favorable environment for storms because the effect of remote warming can counter, and sometimes overwhelm, the effect of local surface warming. "Warming near the storm acts to increase the potential intensity of hurricanes, whereas warming away from the storms acts to decrease their potential intensity," Vecchi said.

Their study found that long-term changes in potential intensity are more closely related to the regional pattern of warming than to local ocean temperature change. Regions that warm more than the tropical average are characterized by increased potential intensity, and vice versa. "A surprising result is that the current potential intensity for Atlantic hurricanes is about average, despite the record high temperatures of the Atlantic Ocean over the past decade." Soden said. "This is due to the compensating warmth in other ocean basins."
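
A toy illustration of the "relative warming" point made above, using made-up numbers rather than data from the study: a basin that warms less than the tropical mean ends up with lower relative sea surface temperature, and hence a lower expected upper limit on hurricane intensity, even though it warmed in absolute terms.

# Made-up illustrative numbers only; not values from the Vecchi and Soden study.
basin_warming_c = {
    "Atlantic": 0.4,
    "West Pacific": 0.7,
    "Indian": 0.6,
}
tropical_mean_c = sum(basin_warming_c.values()) / len(basin_warming_c)

for basin, warming in basin_warming_c.items():
    relative = warming - tropical_mean_c
    trend = "higher" if relative > 0 else "lower"
    print(f"{basin}: +{warming:.1f} C absolute, {relative:+.2f} C relative to the "
          f"tropical mean -> expected potential intensity {trend} than average")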

"As we try to understand the future changes in hurricane intensity, we must look beyond changes in Atlantic Ocean temperature. If the Atlantic warms more slowly than the rest of the tropical oceans, we would expect a decrease in the upper limit on hurricane intensity," Vecchi added. "This is an interesting piece of the puzzle."

"While these results challenge some current notions regarding the link between climate change and hurricane activity, they do not contradict the widespread scientific consensus on the reality of global warming," Soden noted.

The journal article is entitled "Effect of Remote Sea Surface Temperature Change on Tropical Cyclone Potential Intensity."

Adapted from materials provided by University of Miami Rosenstiel School of Marine & Atmospheric Science, via EurekAlert!, a service of AAAS.





Rensselaer Researchers Experiment With Solar Underwater Robots

Daily Science Journal (Dec. 13, 2007) — A collaborative group of researchers is conducting experiments with underwater robots at Rensselaer's Darrin Fresh Water Institute (DFWI) on Lake George, N.Y., as part of the RiverNet project, an NSF-funded initiative. The group is working to develop a network of distributed sensing devices and water-monitoring robots, including solar-powered autonomous underwater vehicles (SAUVs), for detecting chemical and biological trends that may guide the management and improvement of water quality.

(Photo courtesy of Art Sanderson, Rensselaer Polytechnic Institute, and D. Richard Blidberg, Autonomous Undersea Systems Institute.)

SAUVs are a new technology that will allow underwater robots to be deployed long-term by using solar power. Autonomous underwater vehicles (AUVs) equipped with sensors are currently used for water monitoring, but must be taken out of the water frequently to recharge the batteries.


The goal of ongoing experimentation is to develop SAUVs that will communicate and network together, thus allowing a coordinated effort of long-term monitoring, according to Art Sanderson, professor of electrical, computer, and systems engineering at Rensselaer and principal investigator of the RiverNet project. Key technologies used in SAUVs include integrated sensor microsystems, pervasive computing, wireless communications, and sensor mobility with robotics.

During recent tests in Lake George at the DFWI, two SAUVs and one AUV were deployed to test communication, interaction, and maneuvering capabilities. Researchers were encouraged by the success of the networking capabilities.

"The Lake George field tests provided us an excellent opportunity to further our research and technology development of SAUVs," said Sanderson. "Once fully realized, this technology will allow better monitoring of complex environmental systems, including the Hudson River."

Sanderson has been working on SAUV development in collaboration with D. Richard Blidberg of the Autonomous Undersea Systems Institute in Lee, N.H. The collaborative research group working on this project also includes Technology Systems Inc., Falmouth Scientific Inc., Rensselaer's Darrin Fresh Water Institute, and the Naval Undersea Warfare Center.

"This research is a significant step toward obtaining real-time, 3-D sensor monitoring of water quality," said Sandra Nierzwicki-Bauer, chair of the external advisory committee of the Upper Hudson Satellite of the Rivers and Estuaries Center, director of the Darrin Fresh Water Institute, and professor of biology at Rensselaer.

Adapted from materials provided by Rensselaer Polytechnic Institute.





Daily Science Journal (Dec. 12, 2007) — High blood pressure appears to be associated with an increased risk for mild cognitive impairment, a condition that involves difficulties with thinking and learning, according to a report in the December issue of Archives of Neurology, one of the JAMA/Archives journals.

"Mild cognitive impairment has attracted increasing interest during the past years, particularly as a means of identifying the early stages of Alzheimer's disease as a target for treatment and prevention," the authors write as background information in the article. About 9.9 of every 1,000 elderly individuals without dementia develop mild cognitive impairment yearly. Of those, 10 percent to 12 percent progress to Alzheimer's disease each year, compared with 1 percent to 2 percent of the general population.


Christiane Reitz, M.D., Ph.D., and colleagues at the Columbia University Medical Center, New York, followed 918 Medicare recipients age 65 and older (average age 76.3) without mild cognitive impairment, enrolled between 1992 and 1994. All participants underwent an initial interview and physical examination, along with tests of cognitive function, and then were examined again approximately every 18 months for an average of 4.7 years. Individuals with mild cognitive impairment had low cognitive scores and a memory complaint, but could still perform daily activities and did not receive a dementia diagnosis.

Over the follow-up period, 334 individuals developed mild cognitive impairment. This included 160 cases of amnestic mild cognitive impairment, which involves low scores on memory portions of the neuropsychological tests, and 174 cases of non-amnestic mild cognitive impairment. Hypertension (high blood pressure) was associated with an increased risk of all types of mild cognitive impairment that was mostly driven by an increased risk of non-amnestic mild cognitive impairment; hypertension was not associated with amnestic mild cognitive impairment, nor with the change over time in memory and language abilities.

"The mechanisms by which blood pressure affects the risk of cognitive impairment or dementia remain unclear," the authors write. "Hypertension may cause cognitive impairment through cerebrovascular disease. Hypertension is a risk factor for subcortical white matter lesions found commonly in Alzheimer's disease. Hypertension may also contribute to a blood-brain barrier dysfunction, which has been suggested to be involved in the cause of Alzheimer's disease. Other possible explanations for the association are shared risk factors," including the formation of cell-damaging compounds known as free radicals.

"Our findings support the hypothesis that hypertension increases the risk of incident mild cognitive impairment, especially non-amnestic mild cognitive impairment," the authors conclude. "Preventing and treating hypertension may have an important impact in lowering the risk of cognitive impairment."

Adapted from materials provided by JAMA and Archives Journals, via EurekAlert!, a service of AAAS.





Daily Science Journal (Dec. 11, 2007) — The next generation of laptops, desk computers, cell phones and other semiconductor devices may get faster and more cost-effective with research from Clemson University.

Prototype of the semiconductor processing equipment may lead to commercial manufacturing tools for developing future generations of silicon chips. (Credit: Image courtesy of Clemson University)

“We’ve developed a new process and equipment that will lead to a significant reduction in heat generated by silicon chips or microprocessors while speeding up the rate at which information is sent,” says Rajendra Singh, D. Houser Banks Professor and director for the Center for Silicon Nanoelectronics at Clemson University.


The heart of many high-tech devices is the microprocessor that performs the logic functions. These devices produce heat depending on the speed at which the microprocessor operates. Higher speed microprocessors generate more heat than lower speed ones. Presently, dual-core or quad-core microprocessors are packaged as a single product in laptops so that heat is reduced without compromising overall speed of the computing system. The problem, according to Singh, is that writing software for these multicore processors, along with making them profitable, remains a challenge.

“Our new process and equipment improve the performance of the materials produced, resulting in less power lost through leakage. Based on our work, microprocessors can operate faster and cooler. In the future it will be possible to use a smaller number of microprocessors in a single chip since we’ve increased the speed of the individual microprocessors. At the same time, we’ve reduced power loss six-fold to a level never seen before. Heat loss and, therefore, lost power has been a major obstacle in the past,” said Singh.

The researchers say the patented technique has the potential to improve the performance and lower the cost of next-generation computer chips and a number of semiconductor devices, which include green energy conversion devices such as solar cells.

“The potential of this new process and equipment is the low cost of manufacturing, along with better performance, reliability and yield,” Singh said. “The semiconductor industry is currently debating whether to change from smaller (300 mm wafer) manufacturing tools to larger ones that provide more chips (450 mm). Cost is the barrier to change right now. This invention potentially will enable a reduction of many processing steps and will result in a reduction in overall costs.”

Participants in the research included Aarthi Venkateshan, Kelvin F. Poole, James Harriss, Herman Senter and Robert Teague of Clemson and J. Narayan of North Carolina State University at Raleigh. Results were published in Electronics Letters, Oct. 11, 2007, Vol. 43, Issue 21, pages 1130-1131. The work reported here is covered by a broad-based patent by Singh and Poole issued to Clemson University in 2003.

Adapted from materials provided by Clemson University.




Daily Science Journal (Dec. 10, 2007) — The herbal extract of a yellow-flowered mountain plant indigenous to the Arctic regions of Europe and Asia increased the lifespan of fruit fly populations, according to a University of California, Irvine study.

Rhodiola rosea. (Credit: Image courtesy of University of California - Irvine)

Flies that ate a diet rich in Rhodiola rosea, an herbal supplement long used for its purported stress-relief effects, lived on average 10 percent longer than fly groups that didn’t eat the herb.


“Although this study does not present clinical evidence that Rhodiola can extend human life, the finding that it does extend the lifespan of a model organism, combined with its known health benefits in humans, make this herb a promising candidate for further anti-aging research,” said Mahtab Jafari, a professor of pharmaceutical sciences and study leader. “Our results reveal that Rhodiola is worthy of continued study, and we are now investigating why this herb works to increase lifespan.”

In their study, the UC Irvine researchers fed adult fruit fly populations diets supplemented at different dose levels with four herbs known for their anti-aging properties. The herbs were mixed into a yeast paste, which adult flies ate for the duration of their lives. Three of the herbs – known by their Chinese names as Lu Duo Wei, Bu Zhong Yi Qi Tang and San Zhi Pian – had no effect on fruit fly longevity, while Rhodiola was found to significantly reduce mortality. On average, Rhodiola increased survival 3.5 days in males and 3.2 days in females.

Rhodiola rosea, also known as the golden root, grows in cold climates at high altitudes and has been used by Scandinavians and Russians for centuries for its anti-stress qualities. The herb is thought to have anti-oxidative properties and has been widely studied.

Soviet researchers began studying Rhodiola in athletes and cosmonauts in the 1940s, finding that the herb boosts the body’s response to stress. And earlier this year, a Nordic Journal of Psychiatry study of people with mild-to-moderate depression showed that patients taking a Rhodiola extract called SHR-5 reported fewer symptoms of depression than did those who took a placebo.

Jafari said she is evaluating the molecular mechanism of Rhodiola by measuring its impact on energy metabolism, oxidative stress and anti-oxidant defenses in fruit flies. She is also beginning studies in mice and in mouse and human cell cultures. These latter studies should help researchers understand the benefits of Rhodiola seen in human trials.

Study results appear in the online version of Rejuvenation Research. Jeffrey Felgner, Irvin Bussel, Anthony Hutchili, Behnood Khodayari, Michael Rose and Laurence Mueller of UC Irvine participated in the study. Sun Ten Inc. provided the herbs.

Adapted from materials provided by University of California - Irvine.




Daily Science Journal (Dec. 5, 2007) — University of British Columbia astronomer Harvey Richer and UBC graduate student Saul Davis have discovered that white dwarf stars are born with a natal kick, explaining why these smoldering embers of Sun-like stars are found on the edge rather than at the centre of globular star clusters.

These images show young and old white dwarf stars — the burned-out relics of normal stars — in the ancient globular star cluster NGC 6397. The image at left shows the dense swarm of hundreds of thousands of stars that make up the globular cluster. The image at top, right reveals young white dwarfs less than 800 million years old and older white dwarfs between 1.4 and 3.5 billion years old. The blue squares pinpoint the young white dwarfs; the red circles outline the older white dwarfs. (Credit: D. Verschatse (Antilhue Observatory, Chile), NASA, ESA, and H. Richer (University of British Columbia))


White dwarfs represent the third major stage of a star's evolution. Like the Sun, each star begins its life in a long stable state in which nuclear reactions in the core supply its energy. After the core fuel is depleted, the star swells up into a huge red giant. Later, the red giant ejects its outer atmosphere, and its core becomes a white dwarf that slowly cools over time, radiating its stored thermal heat into space.

Using NASA's Hubble telescope, Richer and his team looked at the position of white dwarfs in NGC 6397, one of the globular star clusters closest to Earth. Globular clusters are dense swarms of hundreds of thousands of stars. About 150 of these clusters exist in the Milky Way, each containing between 100,000 and one million stars.

"The distribution of young white dwarfs is the exact opposite of what we expected," says Prof. Richer, whose study will appear in the Monthly Notices of the Royal Astronomical Society Letters in January 2008.

Richer explains that globular clusters sort out stars according to their mass, governed by a gravitational billiard-ball game among stars. Heavier stars slow down and sink to the cluster's core, while lighter stars pick up speed and move across the cluster to its outskirts. The team found that the older white dwarfs were behaving as expected; they were scattered throughout the cluster according to weight.

"Newly-minted white dwarfs should be near the center, but they are not," says Richer. "Our idea is that when these white dwarfs were born, they were given a small kick of 7,000 to 11,000 miles an hour (three to five kilometers a second), which rocketed them to the outer reaches of the cluster."

Using computer simulations, Richer and his team showed that when white dwarfs are born, the mass they shed acts like "rocket fuel", propelling them forward.

"If more of this mass is ejected in one direction, it could propel the emerging white dwarf through space, just as exhaust from a rocket engine thrusts it from the launch pad," says Richer.

The researchers studied 22 young white dwarfs up to about 800 million years old and 62 older white dwarfs between 1.4 and 3.5 billion years old. They distinguished the younger from the older white dwarfs based on their color and brightness. The younger ones are hotter, and therefore bluer and brighter than the older ones.

Study co-authors are: I. King, University of Washington; J. Anderson, Space Telescope Science Institute; J. Coffey, UBC; G. Fahlman, National Research Council of Canada's Herzberg Institute of Astrophysics; J. Hurley, Swinburne University of Technology; and J. Kalirai, University of California, Santa Cruz.

Adapted from materials provided by University of British Columbia, via EurekAlert!, a service of AAAS.




Daily Science Journal (Dec. 3, 2007) — An international team of space scientists led by researchers from the University of New Hampshire has reported the first experimental evidence pointing in a new direction toward the solution of a longstanding, central problem of plasma astrophysics and space physics.

Diagram of the effects of a solar flare. (Credit: NOAA)

The mystery involves electron acceleration during magnetic explosions that occur, for example, in solar flares and "substorms" in the Earth's magnetosphere - the comet-shaped protective sheath that surrounds the planet and where brilliant auroras occur.

During solar flares, accelerated electrons take away up to 50 percent of the total released flare energy. How so many electrons are accelerated to such high energies during these explosive events in our local part of the universe has remained unexplained.


A mainstream theory holds that the mysterious, fast-moving electrons are primarily accelerated at the magnetic explosion site - called the reconnection layer - where the magnetic fields are annihilated and the magnetic energy is rapidly released. However, physicist Li-Jen Chen of the Space Science Center within the UNH Institute for the Study of Earth, Oceans, and Space discovered that the most powerful electron acceleration occurs in the regions between adjacent reconnection layers, in structures called magnetic islands.

When Chen analyzed 2001 data from the four-spacecraft Cluster satellite mission, which has been studying various aspects of Earth's magnetosphere, she found a series of reconnection layers and islands that were formed due to magnetic reconnection.

"Our research demonstrates for the first time that energetic electrons are found most abundantly at sites of compressed density within islands," reports Chen.

Another recent theory, published in the journal Nature, has suggested that "contracting magnetic islands" provide a mechanism for electron acceleration. While the theory appears relevant, it needs to be developed further and tested by computer simulations and experiments, according to the UNH authors.

Until the UNH discovery there had been no evidence showing any association between energetic electrons and magnetic islands. This lack of data is likely because spacecraft encounters with active magnetic explosion sites are rare and, when they do occur, the time resolution of the data is usually insufficient to resolve island structures.

In the Nature Physics paper, entitled "Observation of energetic electrons within magnetic islands," lead author Chen reports the first experimental evidence for the one-to-one correspondence between multiple magnetic islands and energetic electron bursts during reconnection in the Earth's magnetosphere.

"Our study is an important step towards solving the mystery of electron acceleration during magnetic reconnection and points out a clear path for future progress to be made," says Chen. UNH collaborators on the paper include Amitava Bhattacharjee, Pamela Puhl-Quinn, Hong-ang Yang, and Naoki Bessho.

This research was published recently in the journal Nature Physics.

Adapted from materials provided by University of New Hampshire.




Daily Science Journal (Dec. 2, 2007) — Scientists in Portsmouth and Shanghai are working on intelligent software that will take them a step closer to building the perfect robotic hand.

The 'cyberglove' used to capture data about how the human hand moves. (Credit: Image courtesy of University Of Portsmouth)

Using artificial intelligence, they are creating software which will learn and copy human hand movements.


They hope to replicate this in a robotic device that will be able to perform the dexterous actions of which, today, only the human hand is capable.

Dr Honghai Liu, senior lecturer at the University of Portsmouth’s Institute of Industrial Research, and Professor Xiangyang Zhu from the Robotics Institute at Jiao Tong University in Shanghai, were awarded a Royal Society grant to further their research.

The technology has the potential to revolutionise the manufacturing industry and medicine and scientists hope that in the future it could be used to produce the perfect artificial limb.

“A robotic hand which can perform tasks with the dexterity of a human hand is one of the holy grails of science,” said Dr Honghai Liu, who lectures in artificial intelligence at the University’s Institute of Industrial Research. The Institute specialises in artificial intelligence, including intelligent robotics, image processing and intelligent data analysis.

He said: “We are talking about having super high level control of a robotic device. Nothing which exists today even comes close.”

Dr Liu used a cyberglove covered in tiny sensors to capture data about how the human hand moves. The glove was filmed in a motion capture suite by eight high-resolution CCD cameras with infrared illumination, giving measurement accuracy to within a few millimetres.
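As a rough illustration of what "learning and copying" such recordings could involve, here is a minimal Python sketch that fits a simple next-frame predictor to a stand-in joint-angle recording and rolls it forward to imitate the motion. The data format and the model are assumptions made for illustration, not the methods used by the Portsmouth and Shanghai researchers.

    # Minimal sketch: learn a linear next-frame predictor from a stand-in
    # joint-angle recording, then roll it forward to imitate the motion.
    import numpy as np

    def load_recording(n_frames=500, n_joints=22, seed=0):
        # Stand-in for a cyberglove recording: joint angles over time.
        rng = np.random.default_rng(seed)
        t = np.linspace(0, 2 * np.pi, n_frames)[:, None]
        phases = rng.uniform(0, 2 * np.pi, n_joints)
        return np.sin(t + phases)               # shape (frames, joints)

    def fit_next_frame_model(angles):
        # Least-squares linear map predicting each frame from the previous one.
        X, Y = angles[:-1], angles[1:]
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return W

    angles = load_recording()
    W = fit_next_frame_model(angles)

    pose = angles[0]
    generated = [pose]
    for _ in range(100):                        # roll the model forward
        pose = pose @ W
        generated.append(pose)
    print("imitated", len(generated), "poses of", generated[0].size, "joints")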

Professor Xiangyang Zhu from The Robotics Institute at the Jiao Tong University in Shanghai, which is recognised as one of the world-class research institutions on robotics, said that the research partnership would strengthen the interface between artificial intelligence techniques and robotics and pave the way for a new chapter in robotics technology.

“Humans move efficiently and effectively in a continuous flowing motion, something we have perfected over generations of evolution and which we all learn to do as babies. Developments in science mean we will teach robots to move in the same way.”

Adapted from materials provided by University Of Portsmouth.






Daily Science Journal (Nov. 29, 2007) — The marine environment will take at least 5 to 10 years to recover from the oil spill that wreaked havoc in early November in the Kerch Strait, which leads to the Black Sea, says WWF.

According to WWF specialists, the 2000-tonne spill has badly affected the local fishing industry. Fish caught in the Kerch Strait are not safe for consumption.

The spill has also threatened birds. About 11 endangered species inhabit the area around the strait, including the Dalmatian pelican and great black-headed gull, and many more migrating birds will be wintering in this area in the coming months.


Thanks to the efforts of clean-up crews, including WWF staff and members, some birds have been rescued. However, these activities can only help save a very small percentage of the thousands of affected birds. Two dolphins have also been found washed up on shore where clean-up operations are being conducted, but their chances of survival are slim. The Black Sea is home to common and bottlenose dolphins.

“Although it is practically impossible to completely eliminate the damage caused by the large oil spill,” said Igor Chestin, CEO of WWF-Russia, “we believe that to avoid such disasters in the future drastic changes need to be made in the oil transportation system; oil pollution laws need to be enacted.”

To avoid such accidents in the future, WWF and other environmental NGOs are developing recommendations for the Russian government, which include:
  • Local volunteers should be trained to respond to oil spills (WWF has already been training clean-up teams on the Russian coast of the Barents Sea for several years).
  • Oil export via the river-sea corridor should be stopped, and river vessels not suited for marine conditions should be instructed to enter ports.
  • Russia should develop a legislative base for oil spills, similar to the US Oil Pollution Act adopted after the Exxon Valdez oil spill in 1989, and should set up an independent agency responsible for environmental protection.
According to Alexey Knizhnikov, head of WWF-Russia’s oil and gas project, draft laws introducing the “polluter pays” principle and environmental insurance have already been prepared. However, they have not been approved by the State Duma (Russia’s lower house of parliament).

“If these draft laws are approved, many problems will be solved, as companies will feel more responsible for the risks they take,” says Knizhnikov.

“We hope that this accident will spur the process in adopting these laws and creating such an agency.”

Adapted from materials provided by World Wildlife Fund.




Dogs Can Classify Complex Photos In Categories Like Humans Do

Daily Science Journal (Nov. 29, 2007) — Like us, our canine friends are able to form abstract concepts. Friederike Range and colleagues from the University of Vienna in Austria have shown for the first time that dogs can classify complex color photographs and place them into categories in the same way that humans do. And the dogs successfully demonstrate their learning through the use of computer automated touch-screens, eliminating potential human influence.

Researchers have shown for the first time that dogs can classify complex color photographs and place them into categories in the same way that humans do. (Credit: iStockphoto/Rami Ben Ami)

In order to test whether dogs can visually categorize pictures and transfer their knowledge to new situations, four dogs were shown landscape and dog photographs and were expected to make a selection on a computer touch-screen.


In the training phase, the dogs were shown both the landscape and dog photographs simultaneously and were rewarded with a food pellet if they selected the dog picture (positive stimulus). The dogs then took part in two tests.

In the first test, the dogs were shown completely different dog and landscape pictures. They continued to reliably select the dog photographs, demonstrating that they could transfer their knowledge gained in the training phase to a new set of visual stimuli, even though they had never seen those particular pictures before.

In the second test, the dogs were shown new dog pictures pasted onto the landscape pictures used in the training phase, confronting them with contradictory information: on the one hand, a new positive stimulus, since the pictures contained dogs, albeit unfamiliar ones; on the other hand, a familiar negative stimulus in the form of the landscape.

When the dogs were faced with a choice between the new dog on the familiar landscape and a completely new landscape with no dog, they reliably selected the option with the dog. These results show that the dogs were able to form a concept, i.e. ‘dog’, although the experiment cannot tell us whether they recognized the dog pictures as actual dogs.

The authors also draw some conclusions on the strength of their methodology: “Using touch-screen computers with dogs opens up a whole world of possibilities on how to test the cognitive abilities of dogs by basically completely controlling any influence from the owner or experimenter.” They add that the method can also be used to test a range of learning strategies and has the potential to allow researchers to compare the cognitive abilities of different species using a single method.
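To make the trial structure concrete, here is a minimal Python sketch of the two-alternative choice described above, with a reward given whenever the picture containing a dog is selected. The stimulus labels and the "choice policy" are hypothetical placeholders; this is not the software used in the study.

    # Minimal sketch of the two-alternative choice task described above.
    # Stimuli and the choice policy are hypothetical placeholders.
    def run_trial(left, right, choose):
        chosen = choose(left, right)
        rewarded = chosen["contains_dog"]       # food pellet for picking the dog
        return chosen, rewarded

    def trained_policy(left, right):
        # A dog that has learned "pick the picture with a dog in it".
        return left if left["contains_dog"] else right

    # Second test: a new dog pasted onto a familiar landscape versus a
    # completely new landscape with no dog.
    left = {"name": "new dog on familiar landscape", "contains_dog": True}
    right = {"name": "completely new landscape", "contains_dog": False}

    chosen, rewarded = run_trial(left, right, trained_policy)
    print("chose:", chosen["name"], "| food pellet:", rewarded)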

Journal reference: Range F et al (2007). Visual categorization of natural stimuli by domestic dogs (Canis familiaris). Animal Cognition (DOI 10.1007/s10071-007-0123-2).

Adapted from materials provided by Springer.




Daily Science Journal (Nov. 27, 2007) — Using an innovative method called transcranial magnetic stimulation (TMS) to measure brain responsiveness, Yale researchers have found that cocaine addicts’ responses to stimulation are decreased, possible evidence that cocaine causes permanent brain damage.

"Contrary to what we expected, the results showed that cocaine-dependent individuals displayed increased resistance to brain stimulation," said Nashaat Boutros, associate professor of psychiatry at Yale and principal investigator on the study. "We expected them to be jumpy or more responsive because of the sensitizing effects of cocaine, but it took much stronger stimulation to get them to respond."

Boutros and his team examined 10 cocaine-dependent subjects (four men and six women) who had not used cocaine for at least three weeks and were addicted to no other drugs. A magnetic stimulus was delivered by TMS using a hand-held magnetic coil over the motor cortex, the part of the brain that moves the hands and fingers. The amount of magnetic stimulus needed to move the fingers is an indication of sensitivity in that part of the brain.
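For readers unfamiliar with how such a threshold is obtained, the Python sketch below simply steps the stimulator output upward until a finger movement is observed. The step size and the response model are simplifying assumptions for illustration, not the study's protocol.

    # Minimal sketch of estimating a motor threshold by stepping up stimulator
    # output until a finger movement is seen. The response model and step size
    # are simplifying assumptions, not the study's protocol.
    def finger_moves(output_percent, true_threshold):
        # Stand-in for observing whether a given output produces movement.
        return output_percent >= true_threshold

    def estimate_threshold(true_threshold, start=20, step=5):
        output = start
        while output <= 100 and not finger_moves(output, true_threshold):
            output += step
        return output      # lowest tested output (percent) producing movement

    # Illustrative values echoing the article: roughly 35-55 percent for
    # non-addicted subjects, sometimes as high as 80 percent for addicts.
    for label, threshold in [("control", 45), ("cocaine-dependent", 80)]:
        print(label, "-> threshold near", estimate_threshold(threshold), "percent")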


The results, published in the February 15 issue of Biological Psychiatry, showed that those without cocaine addiction need about 35 to 55 percent of the output to move their fingers. Boutros said cocaine-addicted individuals sometimes need as much as 80 percent. "Somehow addicts have increased their resistance to the effect of the stimulus," Boutros said.

Boutros and his team started out with the theory that the longer people use cocaine, the more it causes symptoms like paranoia, panic attacks and seizures. "People have thought that a process called kindling—over time, less stimulus is needed to elicit the same response—would apply to cocaine addicts," Boutros said. "But these results raise other possibilities."

Boutros believes there are two possible theories that could explain the findings. One is that the cocaine caused widespread damage that decreased the brain’s ability to respond to stimulation. "The other possibility," Boutros said, "is that the brain was indeed sensitized via the kindling response, and what we’re seeing is a normal response from the brain, like an overcompensation. The brain feels too sensitive from all the cocaine jitter and cools off the response."

A follow-up study, Boutros said, will confirm this initial finding by examining a different group of cocaine-addicted people using additional TMS techniques. TMS has been used to study neurological disorders for 10 years, but was first used in exploration of cortical function in psychiatric disorders about three years ago.

Adapted from materials provided by Yale University.





Daily Science Journal (Nov. 22, 2007) — Astronomers have discovered white dwarf stars with pure carbon atmospheres. The discovery could offer a unique view into the hearts of dying stars.

Artist's concept of the surface of the white dwarf star H1504+65, believed to have somehow expelled all its hydrogen and all but a very small trace of its helium, leaving an essentially bare stellar nucleus with a surface of 50 percent oxygen and 50 percent carbon. When this star cools, it may have a carbon atmosphere, like the stars newly found by University of Arizona, Canadian and French astronomers. (Credit: Illustration by M.S. Sliwinski and L. I. Slivinska of Lunarismaar)

These stars may have evolved along a sequence astronomers did not know about before. They may have come from stars that are not quite massive enough to explode as supernovae but are just on the borderline. All but the most massive two or three percent of stars eventually die as white dwarfs rather than explode as supernovae.


When a star burns helium, it leaves "ashes" of carbon and oxygen. When its nuclear fuel is exhausted, the star then dies as a white dwarf, which is an extremely dense object that packs the mass of our sun into an object about the size of Earth. Astronomers believe that most white dwarf stars have a core made of carbon and oxygen which is hidden from view by a surrounding atmosphere of hydrogen or helium.

They didn't expect stars with carbon atmospheres.

"We've found stars with no detectable traces of helium and hydrogen in their atmospheres," said University of Arizona Steward Observatory astronomer Patrick Dufour. "We might actually be observing directly a bare stellar core. We possibly have a window on what used to be the star's nuclear furnace and are seeing the ashes of the nuclear reaction that once took place."

Dufour, UA astronomy Professor James Liebert and their colleagues at the Université de Montréal and Paris Observatory published the results in the Nov. 22 issue of Nature.

The stars were discovered among 10,000 new white dwarf stars found in the Sloan Digital Sky Survey. The survey, known as the SDSS, found about four times as many white dwarf stars as were previously known.

Liebert identified a few dozen of the newfound white dwarfs as "DQ" white dwarfs in 2003. When observed in optical light, DQ stars appear to be mostly helium and carbon. Astronomers believe that convection in the helium zone dredges up carbon from the star's carbon-oxygen core.

Dufour developed a model to analyze the atmospheres of DQ stars as part of his doctoral research at the Université de Montréal. His model simulated cool DQ stars, stars at temperatures between 5,000 degrees and 12,000 degrees Kelvin. For reference, our sun's surface temperature is around 5,780 degrees Kelvin.

When Dufour joined Steward Observatory in January, he updated his code to analyze hotter stars, stars as hot as 24,000 degrees Kelvin.

"When I first started modeling the atmospheres of these hotter DQ stars, my first thought was that these are helium-rich stars with traces of carbon, just like the cooler ones," Dufour said. "But as I started analyzing the stars with the higher temperature model, I realized that even if I increased the carbon abundance, the model still didn't agree with the SDSS data," Dufour said.

In May 2007, "out of pure desperation, I decided to try modeling a pure-carbon atmosphere. It worked," Dufour said. "I found that if I calculated a pure carbon atmosphere model, it reproduces the spectra exactly as observed. No one had calculated a pure carbon atmosphere model before. No one believed that it existed. We were surprised and excited."
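The model-versus-data comparison Dufour describes can be thought of as choosing whichever synthetic spectrum minimises a goodness-of-fit statistic such as chi-square. The Python sketch below illustrates that selection step only; the "model spectra" are random placeholders, not real white-dwarf atmosphere calculations.

    # Sketch of choosing the atmosphere model that best matches an observed
    # spectrum via chi-square. The model spectra here are placeholders, not
    # real white-dwarf atmosphere calculations.
    import numpy as np

    def chi_square(observed, model, sigma):
        return float(np.sum(((observed - model) / sigma) ** 2))

    wavelengths = np.linspace(4000, 7000, 300)     # angstroms (illustrative)
    sigma = 0.05                                   # assumed flux uncertainty

    models = {
        "helium-rich with traces of carbon": 1.0 + 0.1 * np.sin(wavelengths / 200),
        "pure carbon": 1.0 + 0.1 * np.cos(wavelengths / 150),
    }

    # Fake "observation" generated near the pure-carbon placeholder.
    rng = np.random.default_rng(1)
    observed = models["pure carbon"] + rng.normal(0, sigma, wavelengths.size)

    best = min(models, key=lambda name: chi_square(observed, models[name], sigma))
    print("best-fitting model:", best)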

Dufour and his colleagues have identified eight carbon-dominated atmosphere white dwarf stars among about 200 DQ stars they've checked in the Sloan data so far.

The great mystery is why these carbon-atmosphere stars are found only between about 18,000 degrees and 23,000 degrees Kelvin. "These stars are too hot to be explained by the standard convective dredge-up scenario, so there must be another explanation," Dufour said.

Dufour and Liebert say these stars might have evolved from a star like the unique, much hotter star called H1504+65 that Pennsylvania State University astronomer John A. Nousek, Liebert and others reported in 1986. If so, carbon-atmosphere stars represent a previously unknown sequence of stellar evolution.

H1504+65 is a very massive star at 200,000 degrees Kelvin.

Astronomers currently believe this star somehow violently expelled all its hydrogen and all but a very small trace of its helium, leaving an essentially bare stellar nucleus with a surface of 50 percent carbon and 50 percent oxygen.

"We think that when a star like H1504+65 cools, it eventually becomes like the pure-carbon stars," Dufour said. As the massive star cools, gravity separates carbon, oxygen and trace helium. Above 25,000 degrees Kelvin, the trace helium rises to the top, forming a thin layer above the much more massive carbon envelope, effectively disguising the star as a helium-atmosphere white dwarf, Dufour and Liebert said.

But between 18,000 and 23,000 degrees Kelvin, convection in the carbon zone probably dilutes the thin helium layer. At these temperatures, oxygen, which is heavier than carbon, has probably sunk too deep to be dredged to the surface.

Dufour and his colleagues say that models of stars of nine to 11 solar masses might explain their peculiar carbon stars.

Astronomers predicted in 1999 that stars nine or 10 times as massive as our sun would become white dwarfs with oxygen-magnesium-neon cores and mostly carbon-oxygen atmospheres. More massive stars explode as supernovae.

But scientists aren't sure where the dividing line is, whether stars eight, nine, 10 or 11 times as massive as our sun are required to create supernovae.

"We don't know if these carbon atmosphere stars are the result of nine-or-10 solar mass star evolution, which is a key question," Liebert said.

The UA astronomers plan to make new observations of the carbon atmosphere stars at the 6.5-meter MMT Observatory on Mount Hopkins, Ariz., in December to better pinpoint their masses. The observations could help define the mass limit between stars that die as white dwarfs and those that die as supernovae, Dufour said.

Adapted from materials provided by University of Arizona.


