Friday, December 31, 2010

Trapped micro-cylinders act a bit like neurons

Both the micro-cylinders and neurons are 'excitable', i.e. they respond to an external disturbance by producing a pulse (e.g. a voltage) of a given, fixed size. The results of this study were published online on the Nature Physics website on December 19th.

Simultaneously, the researchers have shown that the rotating micro-cylinders can detect the presence of microscopic particles in liquid. This is because the presence of such particles in the vicinity of a rotating micro-cylinder produces a clearly measurable disturbance in the torque experienced by the cylinder. This provides a means of detecting, counting, or separating cells (or other microscopic particles) in liquids.
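As a toy illustration of that detection principle, a particle passing the trapped cylinder shows up as a brief excursion of the measured torque away from its baseline, which can be counted with a simple threshold. This is a sketch under our own assumptions, not the authors' actual analysis:

```python
import numpy as np

def count_torque_events(torque, baseline, threshold):
    """Count disturbances in a torque trace: each contiguous excursion
    farther than `threshold` from `baseline` counts as one event."""
    above = np.abs(np.asarray(torque) - baseline) > threshold
    # An event starts wherever `above` flips from False to True.
    starts = np.flatnonzero(above & ~np.roll(above, 1))
    if above[0] and 0 not in starts:
        starts = np.append(starts, 0)
    return len(starts)

# Toy trace: steady torque on the trapped cylinder, with two dips
# caused by hypothetical particles drifting past it.
rng = np.random.default_rng(0)
trace = 10.0 + 0.05 * rng.standard_normal(1000)
trace[200:220] -= 2.0   # first particle
trace[700:740] -= 1.5   # second particle
print(count_torque_events(trace, baseline=10.0, threshold=1.0))  # -> 2
```

Real traces would need a noise-aware threshold, but the same flag-the-excursion logic underlies counting or sorting particles from a torque signal.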

For the purposes of this study, the researchers employed optical torque tweezers. This unique instrument is capable of measuring both the force and angular momentum exerted on microscopic objects, including biological molecules such as DNA.


Source

Thursday, December 30, 2010

Researchers develop first high-temperature spin-field-effect transistor

The team has developed an electrically controllable device whose functionality is based on an electron's spin. Their results, the culmination of a 20-year scientific quest involving many international researchers and groups, are published in the current issue of Science.

The team, which also includes researchers from the Hitachi Cambridge Laboratory and the Universities of Cambridge and Nottingham in the United Kingdom as well as the Academy of Sciences and Charles University in the Czech Republic, is the first to combine the spin-helix state and the anomalous Hall effect to create a realistic spin-field-effect transistor (FET) operable at high temperatures, complete with an AND-gate logic device. It is the first such realization in the type of transistor originally proposed by Purdue University's Supriyo Datta and Biswajit Das in 1989.

"One of the major stumbling blocks was that to manipulate spin, one may also destroy it," Sinova explains. "It has only recently been realized that one could manipulate it without destroying it by choosing a particular set-up for the device and manipulating the material. One also has to detect it without destroying it, which we were able to do by exploiting our findings from our study of the spin Hall effect six years ago. It is the combination of these basic physics research projects that has given rise to the first spin-FET."

Sixty years after the transistor's discovery, its operation is still based on the same physical principles of electrical manipulation and detection of electronic charges in a semiconductor, says Hitachi's Dr. Jorg Wunderlich, senior researcher in the team. He says subsequent technology has focused on down-scaling the device size, succeeding to the point where we are approaching the ultimate limit, and the focus is now shifting to establishing new physical principles of operation to overcome those limits: specifically, using the electron's elementary magnetic moment, or so-called "spin," as the logic variable instead of the charge.

This new approach constitutes the field of "spintronics," which promises potential advances in low-power electronics, hybrid electronic-magnetic systems and completely new functionalities.

Wunderlich says the 20-year-old theory of electrical manipulation and detection of an electron's spin in semiconductors, whose cornerstone is the "holy grail" known as the spin transistor, has proven unexpectedly difficult to realize experimentally.

"We used recently discovered quantum-relativistic phenomena for both spin manipulation and detection to realize and confirm all the principal phenomena of the spin transistor concept," Wunderlich explains.

To observe the electrical manipulation and detection of spins, the team placed a specially designed planar photo-diode (as opposed to the typically used circularly polarized light source) next to the transistor channel. By shining light on the diode, they injected photo-excited electrons, rather than the customary spin-polarized electrons, into the transistor channel. Voltages applied to input-gate electrodes controlled the precession of spins via quantum-relativistic effects. These same effects are also responsible for the onset of transverse electrical voltages in the device, which represent the output signal and depend on the local orientation of the precessing electron spins in the transistor channel.
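In the 1989 Datta-Das proposal mentioned above, the gate voltage controls how far the spins precess on their way down the channel, and the detected signal varies as the cosine of the accumulated precession angle. Here is a minimal sketch of that modulation; the voltage-to-angle constant `k` is an arbitrary illustrative assumption, not a measured device parameter:

```python
import math

def spinfet_output(gate_voltage, k=math.pi):
    """Toy Datta-Das transistor: the gate sets the spin precession
    angle theta = k * V_gate accumulated along the channel, and the
    detected output varies as cos(theta)."""
    theta = k * gate_voltage
    return math.cos(theta)

# Sweeping the gate swings the output between 'on' and 'off'
# without switching any charge current on or off:
for v in (0.0, 0.5, 1.0):
    print(f"V_gate = {v:.1f} -> signal = {spinfet_output(v):+.2f}")
```

The point of the sketch is only that a continuously tunable gate voltage yields a fully modulated output, which is the defining behaviour of a spin-FET.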

The new device can have a broad range of applications in spintronics research as an efficient tool for manipulating and detecting spins in semiconductors without disturbing the spin-polarized current or using magnetic elements.

Wunderlich notes the observed output electrical signals remain large at high temperatures and are linearly dependent on the degree of circular polarization of the incident light. The device therefore represents a realization of an electrically controllable solid-state polarimeter which directly converts polarization of light into electric voltage signals. He says future applications may exploit the device to detect the content of chiral molecules in solutions, for example, to measure the blood-sugar levels of patients or the sugar content of wine.

This work forms part of wider spintronics activity within Hitachi worldwide, which expects to develop new functionalities for use in fields as diverse as energy transfer, high-speed secure communications and various forms of sensor.

While Wunderlich acknowledges it is yet to be determined whether or not spin-based devices will become a viable alternative to, or complement of, their standard electron-charge-based counterparts in current information-processing devices, he says his team's discovery has shifted the focus from theoretical academic speculation to prototype microelectronic device development.

"For spintronics to revolutionize information technology, one needs a further step of creating a spin amplifier," Sinova says. "For now, the device aspect, the ability to inject, manipulate and create a logic step with spin alone, has been achieved, and I am happy that Texas A&M University is a part of that accomplishment."


Source

Wednesday, December 29, 2010

Antarctic IceCube observatory to hunt dark matter

Some 5,160 optical sensors, each about the size of a basketball, were suspended on cables in holes bored into the ice


Building the IceCube Neutrino Observatory, the world's largest neutrino observatory, has taken a gruelling decade of work on the Antarctic ice and will help scientists study space particles in the search for dark matter, the invisible material that makes up most of the Universe's mass.

The observatory, located 1,400 metres underground near the US Amundsen-Scott South Pole Station, cost more than 270 million dollars, according to the US National Science Foundation (NSF).

The cube is a network of 5,160 optical sensors, each about the size of a basketball, which have been suspended on cables in 86 holes bored into the ice with a specially designed hot-water drill.

NSF said the final sensor was installed on December 18 in the cube, which measures one kilometre (0.62 miles) in each direction. Once in place, the sensors will be forever embedded in the ice as the drill holes freeze shut.

The point of the exercise is to study neutrinos, subatomic particles that travel at close to the speed of light but are so small they can pass through solid matter without colliding with any molecules.

Scientists believe neutrinos were first created during the Big Bang and are still generated by nuclear reactions in stars and when a star explodes, creating a supernova.

Scientists have hailed the IceCube as a milestone for international research

A handout picture taken on December 18 and released by the US National Science Foundation (NSF) on December 23 shows the final Digital Optical Module (DOM) being deployed by IceCube project members. An extraordinary underground observatory for subatomic particles has been completed in a huge cube of ice, one kilometre on each side, deep under the South Pole, researchers said.

Trillions of them pass through the entire planet all the time without leaving a trace, but the IceCube seeks to detect the blue light emitted when an occasional neutrino crashes into an atom in the ice.

"Antarctic ice has turned out to be an ideal medium for detecting neutrinos," the NSF said in a statement announcing the project's completion.

"It is exceptionally pure, transparent and free of radioactivity."

Scientists have hailed the IceCube as a milestone for international research and say studying neutrinos will help them understand the origins of the Universe.

"From its vantage point at the end of the world, IceCube provides an innovative means to investigate the properties of fundamental particles that originate in some of the most spectacular phenomena in the Universe," NSF said.

Most of the IceCube's funding came from the NSF, with contributions from Germany, Belgium and Sweden.

Researchers from Canada, Japan, New Zealand, Switzerland, Britain and Barbados also worked on the project.

It is operated by the University of Wisconsin-Madison.


Source

Tuesday, December 28, 2010

Santa: A Claus-et physicist

Santa in Sleigh

Several professors in the school's Department of Mechanical and Aerospace Engineering recently asked their students to explore the aerodynamic and thermodynamic challenges of delivering gifts to millions of children worldwide in a single night from an airborne sleigh.
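The scale of the problem invites some back-of-envelope arithmetic; every number below is our own illustrative assumption, not a figure from the NC State study:

```python
# How fast would Santa have to work without "relativity clouds"?
children = 500e6               # assumed gift-receiving children worldwide
children_per_household = 2.5   # assumed average
night_hours = 31               # time zones stretch one night to ~31 hours

households = children / children_per_household
visits_per_second = households / (night_hours * 3600)
print(f"about {visits_per_second:.0f} rooftop visits per second")
```

Under these assumptions Santa needs well over a thousand rooftop stops per second, which is exactly why the students reach for time-stretching physics.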

The results, posted at web.ncsu.edu/abstract/tag/science-of-santa, posit that Santa is a brilliant engineer and physicist.

One of the professors, Dr. Larry Silverberg, said the students concluded that Santa has expanded Einstein's theory of relativity to take advantage of "relativity clouds" that stretch time and bend the universe. "Relativity clouds are controllable domains - rips in time - that allow him months to deliver presents while only a few minutes pass on Earth," he said.

The site reports that his sleigh must be an advanced aerodynamic design made of honeycombed titanium alloy, capable of changing shape in flight and yet stable enough for landings on steep roofs. Laser sensors would help select the fastest route, and a porous, nano-structured skin outfitted with a low-pressure system would reduce drag by up to 90 percent, Silverberg said.

Silverberg confessed that he really didn't understand all of it, even though he's an expert in unified field theory.

"The man is a genius," Silverberg said of Santa, whom he described as "jolly, but learned."

What about figuring out who is naughty and nice? Theory: A mile-wide antenna of super-thin mesh relying on electromagnetic induction principles picks up brain waves of children around the world. Filter algorithms organize desires and behaviors, and microprocessors feed the data to an onboard sleigh guidance system.

Also, Santa must be checking kids' Facebook and Twitter accounts.

And does Santa carry all those presents in a single sleigh? Not possible, according to Silverberg.

More plausible: He creates them on-site, i.e., on each rooftop, using a reversible thermodynamic processor - a sort of nano-toymaker known as the "magic sack." The carbon from chimney soot would be a common building block.

But the students theorized that he still delivers presents the old-fashioned way, climbing down chimneys, dressed in a fire-resistant halocarbon polymer suit.


Source

Monday, December 27, 2010

Chameleon model tries to explain the origin of dark energy

Chasing chameleons


According to the chameleon model, dark energy stems from particles that change their mass depending upon the local environment.

In the presence of ordinary matter, chameleons are massive particles that mediate a short-range force, too short to have appeared in searches for new forces. But in the vacuum of space, chameleons would have small masses. In principle, chameleons would interact with electromagnetic fields and, under certain conditions, could create photons, and photons could create chameleons.

Scientists of the Chameleon Afterglow Search (CHASE) explore this possibility by shining a laser beam into a vacuum chamber located inside a long, strong magnet. Traversing the magnetic field, the laser light might produce a population of chameleon particles within the chamber. When the laser is turned off, the chameleons would continue to interact with the magnetic field and produce an observable afterglow of photons.
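The logic of such a null search can be sketched with a toy afterglow model: if chameleons were produced and trapped, the photon rate after the laser is switched off should decay exponentially as they convert back, so seeing only background caps how many could have been made. The rate equation and every number below are our illustrative assumptions, not the CHASE analysis:

```python
import math

def afterglow_rate(t, n0, decay_rate):
    """Toy model: n0 trapped chameleons at laser turn-off convert back
    to photons at `decay_rate` (per second), so the detected photon
    rate falls exponentially with time t (seconds)."""
    return n0 * decay_rate * math.exp(-decay_rate * t)

r0 = afterglow_rate(0.0, n0=1e6, decay_rate=0.01)
r_later = afterglow_rate(100.0, n0=1e6, decay_rate=0.01)
# The rate drops by a factor of e over one lifetime, 1/decay_rate = 100 s.
print(f"{r0:.0f} photons/s at turn-off, {r_later:.0f} photons/s after 100 s")
```

Measuring no excess photon rate above background then translates directly into an upper bound on n0, and hence on the photon-chameleon coupling.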

No chameleon afterglow signal was seen in the CHASE data, which allowed the collaboration to place more stringent limits on chameleon models of dark energy. The new limits span a range of nearly four orders of magnitude in chameleon mass (see graphic) and are nearly five orders of magnitude more stringent than previous bounds from particle collider experiments.

The results are available on the arXiv preprint server and will soon appear in a peer-reviewed journal.


Source

Tuesday, December 7, 2010

Researchers continue search for elusive new particles at CERN

Sung-Won Lee, an assistant professor of physics at Texas Tech and a member of the university’s High Energy Physics Group, said researchers have not given up the search for possible hints of new physics, which could add more subatomic particles to the Standard Model of particle physics.

Their findings were published recently in Physical Review Letters. Their results are the first of the “new physics” research papers produced from data collected at the LHC.

“So far, we have not yet found any hint of the new particles with early LHC data, but we set the world’s most stringent limits on the existence of several theorized new types of particles,” said Lee, who co-led the analysis team searching for these new particles.

Currently, the Standard Model of physics only explains about 5 percent of the universe, Lee said.

“The Standard Model of particle physics has been enormously successful, but it leaves many important questions unanswered,” Lee said. “Also, it is widely acknowledged that, from the theoretical standpoint, the Standard Model must be part of a larger theory, known as ‘beyond the Standard Model,’ which is yet to be experimentally confirmed.”

Finding evidence of new particles could open the door to whole new realms of physics that researchers believe could be there, such as string theory, which posits that subatomic particles such as electrons and quarks are not zero-dimensional objects but rather one-dimensional lines, or “strings.” It could also help prove space-time-matter theory, which requires the universe to have several extra spatial dimensions in addition to length, width, height and time.

One of the most popular suggestions for the ‘beyond the Standard Model’ theory is Supersymmetry, which introduces a new symmetry between fundamental particles, he said. Supersymmetry signals are of particular interest, as they provide a natural explanation for the “dark matter” known to pervade our universe and help us to understand the fundamental connection between particle physics and cosmology.

Furthermore, there are a large number of important theoretical models that make strong cases for looking for new physics at the LHC.

“Basically, we’re looking for the door to new theories such as string theory, extra dimensions and black holes,” Lee said. “None of the rich new spectrum of particles predicted by these models has yet been found within the kinematic regime reachable at the present experiments. The LHC will increase this range dramatically after several years of running at the highest energy and luminosity.

“I believe that, with our extensive research experience, Texas Tech’s High Energy Physics Group can contribute to making such discoveries.”


Source

Monday, December 6, 2010

Scientists crash lead nuclei together to create the hottest and densest nuclear material ever

Hottest Show On Earth


On December 2 several scientists at the CERN laboratory in Geneva, Switzerland, reported the first results of an experiment in which the nuclei of lead atoms were shot around the 17-mile racetrack called the Large Hadron Collider (LHC) and then smashed into each other to create, for an instant, a speck of matter at a temperature of trillions of degrees.

Although the miniature fireballs that occur at the lead-lead collision points only last a fleeting moment -- about a trillionth of a trillionth of a second -- the immense detectors poised nearby are designed to act rapidly and sort through the myriad debris particles streaming outwards.

"This is the hottest nuclear matter ever created in a lab," said Bolek Wyslouch of the Ecole Polytechnique near Paris, who spoke at the CERN gathering. He is a representative of the Compact Muon Solenoid collaboration, which uses one of the giant detectors at the LHC to observe the lead-lead collisions.

"I like to call this the Little Bang," said Juergen Schukraft, also speaking at the CERN colloquium, suggesting that the violent collisions of heavy ions at the LHC were smaller cousins of the Big Bang explosion that ushered in the visible universe some 14 billion years ago. Indeed, the conditions in the mini-fireballs at the LHC resemble those of the early universe as it was only microseconds after the Big Bang in terms of energy density and temperature. Schukraft represented a second CERN detector group called Alice.

Never before has so much energy -- in this case hundreds of trillions of electron volts (a trillion electron volts is abbreviated TeV) -- been deliberately deposited in a volume of space only a few times the size of a proton. A proton is one of the constituents of the nucleus inside each atom, and is some 10,000 times smaller than the atom itself. Scientists who work at accelerators often use the electron volt as their unit of energy since it is precisely the energy gained by an electron accelerated through an electric potential difference of one volt.
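A quick conversion puts those accelerator units in macroscopic terms. The 1.4 TeV-per-nucleon figure is the one quoted in this article; the 208 nucleons of lead and the elementary charge are standard values:

```python
# 1 eV is the energy an electron gains crossing a 1-volt potential.
e_charge = 1.602176634e-19   # coulombs (exact SI value)
joules_per_TeV = 1e12 * e_charge

nucleons_in_lead = 208
per_nucleus_TeV = 1.4 * nucleons_in_lead       # ~291 TeV per lead nucleus
per_nucleus_J = per_nucleus_TeV * joules_per_TeV
print(f"one lead nucleus carries about {per_nucleus_J:.1e} J")
```

Tens of microjoules sounds tiny, but squeezed into a proton-sized volume it is an extraordinary energy density.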

What happens when two lead nuclei containing hundreds of protons and neutrons, each of which has an energy of 1.4 TeV, smash into each other in an almost head-on collision? As they meet and interact, the protons and neutrons melt into even more basic constituents, called quarks and gluons. What you get is a seething liquid of hundreds of strongly interacting particles, called by physicists a quark-gluon plasma.

Earlier this year scientists at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in New York reported on RHIC's collision measurements from a quark-gluon plasma made by colliding gold nuclei. They reported the temperature of the plasma to be 4 trillion degrees, the hottest temperature ever carefully measured in an experiment.

The LHC scientists haven't yet directly measured the temperature of their quark-gluon plasma. Schukraft said that since the energy of the collisions is some three times larger at the LHC than at RHIC, the temperatures will be higher as well.
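Such temperatures are easier to compare with accelerator energies via Boltzmann's constant, which converts kelvin into electron volts:

```python
# kT is the characteristic particle energy at temperature T.
k_B = 8.617333262e-5   # Boltzmann constant, in eV per kelvin
T_rhic = 4e12          # the 4 trillion degrees reported at RHIC

kT_MeV = k_B * T_rhic / 1e6
print(f"kT at RHIC is about {kT_MeV:.0f} MeV")
# a ~3x hotter LHC fireball would correspondingly sit near the GeV scale
```

For comparison, that characteristic energy is a few times the mass-energy of a pion, which is why such fireballs boil nucleons apart into quarks and gluons.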

In the following weeks, a series of specific results from the LHC heavy-ion collisions will appear in scientific journals. Scientists from the Atlas collaboration -- which operates a third large detector at the LHC -- report on their observations of huge jets emerging sideways from the collisions. A jet is a powerful cone of energy, in the form of flying particles, that emerges from the fireball shortly after the collision. Scientists expect that if a powerful jet shoots out of the collision on one side, there should be a complementary jet on the other side that balances momentum.

In many collision events, however, only one jet is observed. In an article about to appear in the journal Physical Review Letters, the Atlas scientists report the first such examples of the imbalance between jets in the lead-lead collisions. But what happened to the missing jet?

Brian Cole, speaking at CERN on behalf of the Atlas team, said that the quark-gluon plasma itself is probably absorbing part or all of the jets on their way outwards. This process doesn't have to be symmetric.

"The more central the collision," Cole said, referring to how head-on the collision is, "the more asymmetric the jets are."

Another Atlas scientist, Peter Steinberg, said that scientists expected that some of the jet energy would be absorbed, but were surprised that in some events the jet seemed to be completely absorbed.

The asymmetric appearance of jets, the scientists hope, can be used to understand the unprecedented nature of this densest matter ever observed in a lab.


Source

Sunday, December 5, 2010

Farmers slowed down by hunter-gatherers: Our ancestors' fight for space


Research published today, Friday, 3 December 2010, in New Journal of Physics details a physical model which can potentially explain how the spread of Neolithic farmers was slowed down by the population density of hunter-gatherers.

The researchers from Girona, in Catalonia, Spain, use a reaction-diffusion model, which explains the relation between population growth and available space, taking into account the directional space dependency of the established Mesolithic population density.
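In the simplest reaction-diffusion (Fisher-KPP) picture, a population with growth rate r and mobility D spreads as a travelling front with speed v = 2*sqrt(r*D). The sketch below is our own crude illustration rather than the Girona model: it scales the effective growth rate by the fraction of space not already held by hunter-gatherers, and the numerical values are plausible Neolithic-scale assumptions only:

```python
import math

def front_speed(growth_rate, diffusivity, occupied_fraction):
    """Fisher-KPP front speed v = 2*sqrt(r_eff * D), with the growth
    rate crudely reduced by the share of space already occupied."""
    r_eff = growth_rate * (1.0 - occupied_fraction)
    return 2.0 * math.sqrt(r_eff * diffusivity)

r, D = 0.03, 25.0   # per year; km^2 per year (illustrative values)
v_empty = front_speed(r, D, occupied_fraction=0.0)
v_contested = front_speed(r, D, occupied_fraction=0.5)
print(f"empty land: {v_empty:.2f} km/yr, contested land: {v_contested:.2f} km/yr")
```

Even this toy version reproduces the qualitative finding: the denser the resident hunter-gatherer population, the slower the farming front advances.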

The findings confirm archaeological data showing that the slowdown in the spread of farming communities was not, as often assumed, the result of crops needing to adapt to chillier climates, but indeed a consequence of the struggle for space with prevalent hunter-gatherer communities.

In the future, the researchers' model could be used for further physical modeling of socioeconomic transitions in the history of humanity. As the researchers write, "The model presented in this work could be applied to many examples of invasion fronts in which the indigenous population and the invasive one compete for space in a single biological niche, both in natural habitats and in microbiological assays."


Source

Saturday, December 4, 2010

Researchers create high performance infrared camera based on type-II InAs/GaSb superlattices

Created by Manijeh Razeghi, Walter P. Murphy Professor of Electrical Engineering and Computer Science, and researchers in the Center for Quantum Devices in the McCormick School of Engineering and Applied Science, the long wavelength infrared focal plane array camera provides a 16-fold increase in the number of pixels in the image and can provide imaging in the dark. Their results were recently published in the journal Applied Physics Letters, Volume 97, Issue 19, 193505 (2010).

The goal of the research is to offer a better alternative to existing long wavelength infrared (LWIR) cameras, which, with their thermal imaging capabilities, are used in everything from electrical inspections to security and nighttime surveillance. Current LWIR cameras are based on mercury cadmium telluride (MCT) materials, but the Type-II superlattice is mercury-free, more robust, and can be deposited with better uniformity. This will significantly increase yield and reduce camera cost once the technology goes commercial.

"Not only does it prove Type-II superlattices as a viable alternative to MCT, but also it widens the field of applications for infrared cameras," Razeghi said. "The importance of this work is similar to that of the realization of mega-pixel visible cameras in the last decade, which shaped the world's favor for digital cameras."

Type-II InAs/GaSb superlattices were first proposed by Nobel laureate Leo Esaki in the 1970s, but it has taken time for the material to mature. The LWIR detection mechanism relies on quantum size effects in a completely artificial layer sequence to tune the wavelength sensitivity and achieve high efficiency. Razeghi's group has been instrumental in pioneering the recent development of Type-II superlattices, having demonstrated the world's first Type-II–based 256×256 focal plane array just a few years ago.

"Type-II is a very interesting and promising new material for infrared detection," Razeghi said. "Everything is there to support its future: the beautiful physics, the practicality of experimental realization of the material. It has just taken time to prove itself, but now, the time has come."

Tremendous obstacles, especially in the fabrication process, had to be overcome to ensure that the 1024×1024 Type-II superlattice–based camera would have performance equivalent to that of the previously realized 320×256 cameras. Operating at 81 K, the new camera can collect 78 percent of the light and is capable of showing temperature differences as small as 0.02 °C.


Source

Friday, December 3, 2010

Magnetic switching under pressure


Now scientists at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory, along with a collaborator from Eastern Washington University, have harnessed the power of extreme high pressure and discovered a novel approach to predictably tune the switching of a promising new family of next-generation magnetic materials. Their results were published in the online version of Angewandte Chemie International Edition on September 10, 2010.

The magnetic materials the team studied are made from copper ions and simple building blocks, including water and fluoride. Their simple structures are held together by strong hydrogen bonds, making very robust molecular networks. Each copper ion, which sits at the corner of a molecular cube, contains one unpaired electron. These spins are disordered at normal temperatures, but begin to align in opposite directions at low temperatures, creating the magnetic state called antiferromagnetism.

Using state-of-the-art high-pressure equipment and the high-energy, highly focused x-ray beams from the X-ray Science Division 1-BM beamline at the Argonne Advanced Photon Source, the scientists observed a series of structural transitions as they exerted pressure on the material. These rearrangements abruptly reoriented the magnetic spins of the material, creating a reversible magnetic switch effect.

High-pressure science has traditionally been the domain of Earth scientists and, until now, has not been routinely used to study molecular systems. Such studies promise to greatly improve our understanding of the often complex way materials’ functional properties are related to their molecular structures—a key step in realizing the diverse potential of molecular materials.

“Molecule-based materials are much softer than traditional solid-state materials, like oxides and minerals,” said Argonne scientist Gregory Halder. “As such, we can expect to induce dramatic changes to their structures and functional properties at relatively low, industrially relevant pressures.”

Next up for this team of researchers: studying a targeted series of molecular network materials under pressure to understand the broader implications of this new phenomenon.


Source

Thursday, December 2, 2010

A step toward fusion power: MIT advance helps remove contaminants that slow fusion reactions


The new experiments have revealed a set of operating parameters for the reactor, a so-called “mode” of operation, that may provide a solution to a longstanding operational problem: how to keep heat tightly confined within the hot charged gas (called plasma) inside the reactor, while allowing contaminating particles, which can interfere with the fusion reaction, to escape and be removed from the chamber.

Most of the world’s experimental fusion reactors, like the one at MIT’s Plasma Science and Fusion Center, are of a type called tokamaks, in which powerful magnetic fields are used to trap the hot plasma inside a doughnut-shaped (or toroidal) chamber. Typically, depending on how the strength and shape of the magnetic field are set, both heat and particles can constantly leak out of the plasma (in a setup called L-mode, for low-confinement) or can be held more tightly in place (called H-mode, for high-confinement).

Now, after some 30 years of tests using the Alcator series of reactors (which have evolved over the years), the MIT researchers have found another mode of operation, which they call I-mode (for improved), in which the heat stays tightly confined, but the particles, including contaminants, can leak away. This should prevent these contaminants from “poisoning” the fusion reaction. “This is very exciting,” says Dennis Whyte, professor in the MIT Department of Nuclear Science and Engineering and coauthor of some recent papers that describe more than 100 experiments testing the new mode. Whyte presented the results in October at the International Atomic Energy Agency International Fusion Conference in South Korea. “It really looks distinct” from the previously known modes, he says.

While in previous experiments in tokamaks the degree of confinement of heat and particles always changed in unison, “we’ve at last proved that they don’t have to go together,” says Amanda Hubbard, a principal research scientist at MIT’s Plasma Science and Fusion Center and coauthor of the reports. Hubbard presented the latest results in an invited talk at the November meeting of the American Physical Society’s Division of Plasma Physics, and says the findings “attracted a lot of attention.” But, she added, “we’re still trying to figure out why” the new mode works as it does. The work is funded by the U.S. Department of Energy.


Alcator C-Mod, shown here, is the most powerful university-based fusion device in the world. Recent findings there could help point the way to power-producing fusion reactors. Image: Plasma Science and Fusion Center

The fuel in planned tokamaks, which comprises the hydrogen isotopes deuterium and tritium, is heated to more than 100 million degrees Celsius (although in present reactors like Alcator C-Mod, tritium is not used, and the temperatures are usually somewhat lower). This hot plasma is confined inside a doughnut-shaped magnetic “bottle” that keeps it from touching, and melting, the chamber’s walls. Nevertheless, its proximity to those walls and the occasional leakage of some hot plasma cause a small number of particles from the walls to mix with the plasma, producing one kind of contaminant. The other kind of expected contaminant is a product of the fusion reactions themselves: helium atoms, created by the fusing of hydrogen atoms, which are not capable of further fusion under the same conditions.
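Plasma physicists usually quote such temperatures as energies; converting the figure above with Boltzmann's constant shows why D-T fuel temperatures are often stated as roughly 10 keV:

```python
k_B = 8.617333262e-5   # Boltzmann constant, in eV per kelvin
T = 100e6              # ~100 million kelvin (degrees C and K are
                       # effectively interchangeable at this scale)
kT_keV = k_B * T / 1e3
print(f"kT is about {kT_keV:.1f} keV")
```

That few-keV particle energy is what lets deuterium and tritium nuclei tunnel through their mutual electrical repulsion often enough to fuse.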

When a fusion reactor operates, the impurities accumulate. Whyte says there have been various experimental observations and theoretical proposals for removing them at intervals after they accumulate. Now, he says, “We seem to have discovered a completely different flushing mechanism… so they don’t build up in the first place.”

One of the keys to triggering the new mode was to configure the magnetic fields inside the tokamak in a way that is essentially upside-down from the usual H-mode setup, Hubbard says.

The findings could be significant in enabling the next step forward in fusion energy, where fusion reactions and power are sustained mostly by “self-heating” without requiring a larger constant addition of outside power. Researchers expect to achieve this milestone, referred to as “fusion burn,” in a new international collaboration on a reactor called ITER, currently being built in France. The findings from MIT “almost certainly could be applied” to the very similar design of the ITER reactor, Whyte says.

Patrick Diamond PhD ’79, professor of plasma physics at the University of California at San Diego, says, “The findings are potentially of great importance,” because they could solve a key problem facing the design of next-generation fusion reactors: the occurrence of unpredictable bursts of heat from the edge of the confined plasma, which can “fry” some of the tokamak’s internal parts. “The I-mode eliminates or greatly reduces” these bursts of heat, “because it allows a steep temperature gradient, which is what you want, but does not allow a steep density gradient, which we don’t really need,” he says.

Diamond adds that theorists will have their work cut out to explain this mode. “Why do heat and particle transport behave differently? This is a really fundamental question, since most theories would predict a strong coupling between the two,” he says. “It’s a real challenge to us theorists, and important conceptually as well as practically.”

Rich Hawryluk, a researcher at the Princeton Plasma Physics Laboratory, says this is a "significant advance" that has generated considerable international interest, and that other groups are now planning to follow up on these results. One area of research will be whether it is possible to "reliably operate in the I-mode and not go into the H-mode, which might have these violent edge instabilities. The operating conditions and the control requirements to stay in I-mode need to be better understood."

Hubbard explained that one of the key differences that made it possible to discover this phenomenon in MIT’s Alcator C-Mod was that this relatively small reactor, though large enough to produce results relevant to future reactors such as ITER, has great operational flexibility and can easily follow up on new findings. While larger reactors typically plan all their tests up to two years in advance, she says, “with this smaller machine, we have the ability to try new things when they appear. This ability to explore has been a key.”


This story is republished courtesy of MIT News (http://web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.


Source

Wednesday, December 1, 2010

Dark matter could transfer energy in the Sun


"We assume that theinteract weakly with the Sun's atoms, and what we have done is calculate at what level these interactions can occur, in order to better describe the structure and evolution of the Sun", Marco Taoso, researcher at the IFIC, a combined centre of the Spanish National Research Council and the University of Valencia, explains to SINC.

The astrophysical observations suggest that our galaxy is situated in a halo of dark matter particles. According to the models, some of these particles, the WIMPs (Weakly Interacting Massive Particles), interact weakly with other normal ones, such as atoms, and could be building up inside stars. The study, recently published in the journal Physical Review D, carries out an in-depth analysis of the particular case of the Sun.

"When the WIMPs pass through the Sun they can break up the atoms of our star and lose energy. This prevents them from escaping the gravitational force of the Sun which captures them, and they become trapped, orbiting inside it, with no way of escaping", the researcher points out.

The dark matter cools down the Sun's core

Scientists believe that the majority of the dark matter particles gather in the centre of the Sun, but on their elliptical orbits they also travel to the outer parts, interacting with the solar atoms and exchanging energy with them. In this way, the WIMPs transport energy from the burning central core to the cooler peripheral parts.

"This effect produces a cooling down of the core, the region from where the neutrinos originate due to the nuclear reactions of the Sun", Taoso points out."And this corresponds to a reduction in the flux of solar neutrinos, since these depend greatly on the temperature of the core".

The neutrinos that reach the Earth can be measured by means of different techniques. These data can be used to detect the modifications of the solar temperature caused by the WIMPs. The transport of energy by these particles depends on the likelihood of them interacting with the atoms, and the "size" of these interactions is related to the reduction in the neutrino flux.

"As a result, current data about solar neutrinos can be used to put limits on the extent of the interactions between dark matter and, and using numerical codes we have proved that certain values correspond to a reduction in the flux of solar neutrinos and clash with the measurements", the scientist reveals.

The team has applied their calculations to better understand the effects of low-mass dark matter particles (between 4 and 10 gigaelectronvolts). At this level we find models that attempt to explain the results of experiments such as DAMA (beneath an Italian mountain) or CoGeNT (in a mine in the USA), which look for dark matter using "scintillators" or WIMP detectors.

Debate about WIMP and solar composition

This year another study, by scientists from Oxford University (United Kingdom), also appeared. It states that WIMPs not only reduce the fluxes of solar neutrinos, but also modify the structure of the Sun and can explain its composition.

"Our calculations, however, show that the modifications of the star's structure are too small to support this claim and that thecannot explain the problem of the composition of the", Taoso concludes.


Source

Tuesday, November 30, 2010

Rotating light provides indirect look into the nucleus

Results reported in The Journal of Chemical Physics introduce an alternative path to this information, using light to observe nuclei indirectly via the orbiting electrons.

"We are not looking at a way to replace the conventional technique but there are a number of applications in which optical detection could provide complementary information,"says author Carlos Meriles of the City University of New York.

The new technique is based on Optical Faraday Rotation (OFR), a phenomenon in which the plane of linearly polarized light rotates upon crossing a material immersed in a magnetic field. When nuclei are sufficiently polarized, the extra magnetic field they produce is 'felt' by the electrons in the sample, leading to a Faraday rotation of its own. Because the interaction between electrons and nuclei depends on the local molecular structure, OFR-detected NMR spectroscopy provides complementary information to conventional detection.

Another interesting facet of the technique is that, unlike conventional NMR, the signal response is proportional to the sample length, not its volume. "Although we have not yet demonstrated it, our calculations show that we could magnify the signal by creating a very long optical path in a short, thin tube," Meriles says. This signal magnification would use mirrors at both ends of a channel in a microfluidics device to reflect light repeatedly through the sample, increasing the signal amplitude with each pass.
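The length scaling follows from the linear Faraday rotation law θ = V·B·L: folding the beam N times through the same channel multiplies the rotation angle by N. All numerical values below are illustrative placeholders, not measured parameters from the study.

```python
# Faraday rotation angle grows linearly with optical path length:
#   theta = V * B * L
# so N mirror-folded passes through a channel of length L give N times
# the angle from the same sample volume. Values are purely illustrative.
VERDET = 3.8           # Verdet constant, rad T^-1 m^-1 (illustrative)
B_NUCLEAR = 1e-6       # effective field from polarized nuclei, T (illustrative)
CHANNEL_LENGTH = 0.01  # 1 cm microfluidic channel, m

def rotation_angle(n_passes):
    """Total polarization rotation after n mirror-folded passes."""
    return VERDET * B_NUCLEAR * CHANNEL_LENGTH * n_passes

single = rotation_angle(1)
folded = rotation_angle(100)
print(f"1 pass:     {single:.2e} rad")
print(f"100 passes: {folded:.2e} rad")  # 100x the signal, same volume
```

This is why mirrors at both ends of a thin tube could magnify the signal: the detected angle depends only on how far the light travels through the polarized sample, not on how much sample there is.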


Source

Monday, November 29, 2010

Tempest in a teapot: Scientists describe swirling natural phenomena


The earth's atmosphere and its molten outer core have one thing in common: Both contain powerful, swirling vortices. While in the atmosphere these vortices include cyclones and hurricanes, in the outer core they are essential for the formation of the earth's magnetic field. These phenomena in earth's interior and its atmosphere are both governed by the same natural mechanisms, according to experimental physicists at UC Santa Barbara working with a computation team in the Netherlands.

Using laboratory cylinders from 4 to 40 inches high, the team studied these underlying physical processes. The results are published in a peer-reviewed journal.

"To study the atmosphere would be too complicated for our purposes,"said Guenter Ahlers, senior author and professor of physics at UCSB."Physicists like to take one ingredient of a complicated situation and study it in a quantitative way under ideal conditions."The research team, including first author Stephan Weiss, a postdoctoral fellow at UCSB, filled the laboratory cylinders with water, and heated the water from below and cooled it from above.

Due to that temperature difference, the warm fluid at the bottom plate rose, while the cold fluid at the top sank, a phenomenon known as convection. In addition, the whole cylinder was rotated around its own axis; this had a strong influence on how the water flowed inside the cylinder. Rotation, such as the earth's rotation, is a key factor in the development of vortices. The temperature difference between the top and the bottom of the cylinder is another causal factor, since it drives the flow in the first place. Finally, the ratio of the diameter of the cylinder to its height is also significant.
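These three control factors correspond to the standard dimensionless parameters of rotating Rayleigh-Bénard convection. A minimal sketch, assuming approximate water properties near room temperature and an illustrative geometry (not the experiment's actual values):

```python
import math

# Approximate properties of water near 25 C
g     = 9.81      # gravity, m/s^2
alpha = 2.6e-4    # thermal expansion coefficient, 1/K
nu    = 8.9e-7    # kinematic viscosity, m^2/s
kappa = 1.43e-7   # thermal diffusivity, m^2/s

def rayleigh(delta_T, height):
    """Thermal driving strength: Ra = g*alpha*dT*h^3 / (nu*kappa)."""
    return g * alpha * delta_T * height**3 / (nu * kappa)

def inverse_rossby(omega, delta_T, height):
    """Rotation strength relative to buoyancy (one common definition):
    1/Ro = 2*Omega*h / U_ff, with free-fall speed U_ff = sqrt(g*alpha*dT*h)."""
    u_ff = math.sqrt(g * alpha * delta_T * height)
    return 2 * omega * height / u_ff

# Illustrative cell: 25 cm tall, 25 cm wide, 10 K across, 0.5 rad/s spin
h, dT, omega, diameter = 0.25, 10.0, 0.5, 0.25
print(f"Ra    = {rayleigh(dT, h):.2e}   (turbulent: Ra >> 10^6)")
print(f"1/Ro  = {inverse_rossby(omega, dT, h):.2f}")
print(f"Gamma = {diameter / h:.2f}  (aspect ratio)")
```

The transition the team observed corresponds to crossing a critical value of the rotation parameter at fixed thermal driving and aspect ratio.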

Ahlers and his team discovered a new, unexpected phenomenon in turbulent flows of this kind. When the container was spun slowly enough, no vortices occurred at first. But at a certain critical rotation speed, the flow structure changed: vortices appeared inside the flow, and the warm fluid was transported from bottom to top faster than at lower rotation rates. "It is remarkable that this point exists," Ahlers said. "You must rotate at a certain speed to get to this critical point."

The rotation rate at which the first vortices appeared depended on the ratio of the diameter to the height of the cylinder. For wide cylinders that are not very high, this transition appeared at relatively low rotation rates, while narrow but tall cylinders had to rotate relatively fast in order to produce vortices. Further, it was found that vortices do not exist very close to the sidewall of the cylinder; instead they always stay a certain distance away from it. That characteristic distance is called the "healing length."

"You can't go from nothing to something quickly,"said Ahlers."The change must occur over a characteristic length. We found that when you slow down to a smaller rotation rate, the healing length increases."

The authors showed that their experimental findings are in keeping with a theoretical model similar to the one first developed by Vitaly Lazarevich Ginzburg and Lev Landau in the theory of superconductivity. That same model is also applicable to other areas of physics such as pattern formation and critical phenomena. The model explains that the very existence of the transition from the state without vortices to the one with them is due to the presence of the sidewalls of the container. For a sample so wide (relative to its height) that the walls become unimportant, the vortices would start to form even for very slow rotation. The model makes it possible to describe the experimental discoveries, reported in the article, in precise mathematical language.


Source

Sunday, November 28, 2010

When bird meets machine, bioinspired flight


IOP Publishing's Bioinspiration & Biomimetics publishes a special edition today, Wednesday 24 November 2010, entitled Bioinspired Flight, comprising nine journal papers that display the wealth of knowledge being accrued by researchers in the field.

Nature outclasses man's best efforts at robotic flight: even the geometry and descent dynamics of a simple maple seed led one research team from the University of Maryland, headed by Dr. Evan Ulrich, to show that micro helicopters could be much simplified by imitating the maple seed's wing pitch for controlled hovering and, surprisingly, forward flight.

The issue, starting with two papers on tactics employed for controlled descent by geckos and flying snakes, is accompanied by a selection of films, four of which are available on YouTube.

The first film, from a team led by graduate student Ardian Jusufi from UC Berkeley, shows how researchers have studied the gecko's trick of employing its tail to right and turn itself mid-air, helping it always fall on its feet, and have now made a robot model gecko which can employ the same grace on descent.


Gecko fall: how the mid-air righting gecko inspired a robot gecko that can right itself during free fall. Credits: Ardian Jusufi and co-workers

A second film, from Professor Jake Socha and his team at Virginia Tech, displays the mystifying skills of flying snakes, which direct their flight mid-air by slithering.


Compilation of high-speed videos of the flying snake Chrysopelea paradisi. Credits: Courtesy of National Geographic Television; compiled by Jake Socha

Moving on from tactical descent, the special edition also covers hummingbirds' perfect hover; birds' intuitive exploitation of thermal updrafts; the mechanical motion of insects' wings; and seagulls' magnificent sense of the flight environment, which allows them incredible angles of attack and increased control in crosswinds.


Thorax design of the Harvard robot fly for powering and controlling its wingbeat. Credit: Benjamin Finio and Robert Wood

As the special edition's editor, Professor David Lentink from Wageningen University, writes in an accompanying editorial, "Because biologists and engineers are typically trained quite differently, there is a gap between the biologists' understanding of natural flight and the engineers' expertise in designing vehicles that function well. In the middle, however, are a few pioneering engineers who are able to bridge both fields."


Source

Saturday, November 27, 2010

Extreme lasers at work


The researchers focused on the behavior of argon atoms, which are easy to handle and well-characterized, under illumination by light about one hundred trillion times brighter than the noonday sun, containing about seven times more energy per photon than the bluest light visible to the human eye. Previous work by other researchers showed that such intense, energetic light removes multiple electrons from target atoms, resulting in highly charged ions. While the mechanism of the ionization process was partially understood from observations of the yields and momenta of these ions, important details were missing.

Hikosaka, Nagasono and colleagues chose to observe the electrons emitted during the ionization process (Fig. 1), instead of the ions themselves. Not only do these electrons carry unique information about the ionization process, but they can be measured after each ultra-short laser pulse. Since the laser spectrum and power are constantly fluctuating, the fine details of the ionization process are averaged, or 'smeared', during a continuous measurement. A shot-by-shot measurement, however, can account for laser fluctuations.

The experiment showed that the dominant ionization pathway of the argon atoms has two steps: first, a single laser photon is absorbed to create singly-ionized argon, and then two more photons are absorbed to create doubly-ionized argon. The researchers also found that the intermediate argon ion states had energy levels, or energy resonances, that induced this pathway.
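The energetics of this two-step pathway can be checked with a rough bookkeeping sketch. The argon ionization energies are standard reference values; the photon energy is an assumption inferred from the description above (about seven times the ~3.1 eV of the bluest visible light):

```python
# Energy bookkeeping for the two-step ionization pathway.
PHOTON_EV = 21.7     # assumed FEL photon energy, eV (inferred, not measured)
AR_IP1_EV = 15.76    # Ar  -> Ar+  ionization energy, eV (reference value)
AR_IP2_EV = 27.63    # Ar+ -> Ar2+ ionization energy, eV (reference value)

one_photon_step = PHOTON_EV >= AR_IP1_EV       # step 1: one photon suffices
two_photon_step = 2 * PHOTON_EV >= AR_IP2_EV   # step 2: two photons suffice

print(f"1 photon  ({PHOTON_EV:.1f} eV) can make Ar+ : {one_photon_step}")
print(f"2 photons ({2 * PHOTON_EV:.1f} eV) can make Ar2+: {two_photon_step}")
# A single photon cannot reach Ar2+ from Ar+ (21.7 < 27.63 eV), which is
# why the second step is necessarily a multi-photon process.
```

This simple arithmetic shows why the observed pathway is 1 photon + 2 photons rather than, say, single-photon steps throughout; the resonances the team identified explain why this particular multi-photon route dominates.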

The research leverages the recent development of free electron lasers, which are uniquely capable of producing very bright, energetic and short pulses of radiation. The work also illustrates that energy resonances are key to multi-photon, multiple ionization processes, a finding that is likely to be relevant to a variety of research programs. Hikosaka says that the research team will continue to focus on the basic science, as well as applications: “Our goal is to develop and leverage a deep understanding of the mechanism and dynamics of non-linear processes in order to manipulate or control these processes and their final products.”


Source

Friday, November 26, 2010

Large Hadron Collider experiments bring new insight into primordial universe


This result is reported in a paper from the ATLAS collaboration accepted for publication yesterday in the scientific journal Physical Review Letters. A CMS paper will follow shortly, and results from all of the experiments will be presented at a seminar on Thursday 2 December at CERN. Data taking with lead ions continues to 6 December.

“It is impressive how fast the experiments have arrived at these results, which deal with very complex physics,” said CERN’s Research Director Sergio Bertolucci. “The experiments are competing with each other to publish first, but then working together to assemble the full picture and cross check their results. It’s a beautiful example of how competition and collaboration is a key feature of this field of research.”

One of the primary goals of the lead-ion programme at CERN is to create matter as it would have been at the birth of the Universe. Back then, the ordinary nuclear matter of which we and the visible universe are made could not have existed: conditions would have been too hot and turbulent for quarks to be bound up by gluons into protons and neutrons, the building blocks of the elements. Instead, these elementary particles would have roamed freely in a sort of quark gluon plasma. Showing beyond doubt that we can produce and study quark gluon plasma will bring important insights into the evolution of the early Universe, and the nature of the strong force that binds quarks and gluons together into protons, neutrons and ultimately all the nuclei of the periodic table of the elements.

When lead-ions collide in the LHC, they can concentrate enough energy in a tiny volume to produce tiny droplets of this primordial state of matter, which signal their presence by a wide range of measureable signals. The ALICE papers point to a large increase in the number of particles produced in the collisions compared to previous experiments, and confirm that the much hotter plasma produced at the LHC behaves as a very low viscosity liquid (a perfect fluid), in keeping with earlier observations from Brookhaven’s RHIC collider. Taken together, these results have already ruled out some theories about how the primordial Universe behaved.

“With nuclear collisions, the LHC has become a fantastic 'Big Bang' machine,” said ALICE spokesperson Jürgen Schukraft. “In some respects, the quark-gluon matter looks familiar, still the ideal liquid seen at RHIC, but we’re also starting to see glimpses of something new.”

The ATLAS and CMS experiments play to the strength of their detectors, which both have very powerful and hermetic energy measuring capability. This allows them to measure jets of particles that emerge from collisions. Jets are formed as the basic constituents of nuclear matter, quarks and gluons, fly away from the collision point. In proton collisions, jets usually appear in pairs, emerging back to back. However, in heavy ion collisions the jets interact in the tumultuous conditions of the hot dense medium. This leads to a very characteristic signal, known as jet quenching, in which the energy of the jets can be severely degraded, signalling interactions with the medium more intense than ever seen before. Jet quenching is a powerful tool for studying the behaviour of the plasma in detail.

“ATLAS is the first experiment to report direct observation of jet quenching,” said ATLAS Spokesperson Fabiola Gianotti. “The excellent capabilities of ATLAS to determine jet energies enabled us to observe a striking imbalance in energies of pairs of jets, where one jet is almost completely absorbed by the medium. It’s a very exciting result of which the Collaboration is proud, obtained in a very short time thanks in particular to the dedication and enthusiasm of young scientists.”

“It is truly amazing to be looking, albeit on a microscopic scale, at the conditions and state of matter that existed at the dawn of time,” said CMS Spokesperson Guido Tonelli. “Since the very first days of lead-ion collisions the quenching of jets appeared in our data, while other striking features, like the observation of Z particles, never seen before in heavy-ion collisions, are under investigation. The challenge is now to put together all possible studies that could lead us to a much better understanding of the properties of this new, extraordinary state of matter.”

The ATLAS and CMS measurements herald a new era in the use of jets to probe the quark gluon plasma. Future jet quenching and other measurements from the three LHC experiments will provide powerful insight into the properties of the primordial plasma and the interactions among its quarks and gluons.

With data taking continuing for over one more week, and the LHC already having delivered the programmed amount of data for 2010, the heavy-ion community at the LHC is looking forward to further analysing their data, which will greatly contribute to the emergence of a more complete model of quark gluon plasma, and consequently of the very early Universe.


Source

Thursday, November 25, 2010

Optimizing large wind farms


Charles Meneveau, who studies fluid dynamics at Johns Hopkins University, and his collaborator Johan Meyers from Leuven University in Belgium have developed a model to calculate the optimal spacing of turbines for the very large wind farms of the future. They will present their work today at the American Physical Society Division of Fluid Dynamics (DFD) meeting in Long Beach, CA.

"The optimal spacing between individual wind turbines is actually a little farther apart than what people use these days,"said Meneveau.

The blades of a turbine distort the wind, creating eddies of turbulence that can affect other turbines farther downwind. Most previous studies have used computer models to calculate the wake effect of one individual turbine on another.

Starting with large-scale computer simulations and small-scale experiments in a wind tunnel, Meneveau's model considers the cumulative effects of hundreds or thousands of turbines interacting with the atmosphere.

"There's relatively little knowledge about what happens when you put lots of these together,"said Meneveau.

The energy a large wind farm can produce, he and his coworkers discovered, depends less on horizontal winds and more on entraining strong winds from higher in the atmosphere. A 100-meter turbine in a large wind farm must harness energy drawn from the atmospheric boundary layer thousands of feet up.

In the right configuration, lots of turbines essentially change the roughness of the land -- much in the same way that trees do -- and create turbulence. Turbulence, in this case, isn't a bad thing. It mixes the air and helps to pull down energy from the stronger winds above.

Using 5-megawatt machines as an example, and some reasonable economic figures, Meneveau calculates that the optimal spacing between turbines should be about 15 rotor diameters, instead of the currently prevalent figure of 7 rotor diameters.
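The practical consequence of moving from 7 to 15 rotor diameters can be illustrated with a quick footprint calculation. The 126 m rotor diameter is an assumed figure typical of a 5-megawatt class machine, not from the study:

```python
# Land area implied per turbine on a square grid at a given spacing,
# expressed in rotor diameters.
ROTOR_D = 126.0  # rotor diameter in metres (illustrative 5 MW class)

def area_per_turbine_km2(spacing_diameters, rotor_d=ROTOR_D):
    """Square-grid footprint per turbine, in km^2."""
    side = spacing_diameters * rotor_d
    return (side / 1000.0) ** 2

current = area_per_turbine_km2(7)    # prevalent practice
optimal = area_per_turbine_km2(15)   # Meneveau's proposed spacing

print(f"7D  spacing: {current:.2f} km^2 per turbine")
print(f"15D spacing: {optimal:.2f} km^2 per turbine")
print(f"land ratio:  {optimal / current:.2f}x")  # (15/7)^2, about 4.6x
```

The land requirement scales with the square of the spacing, which is why the "reasonable economic figures" in the study matter: the extra per-turbine output must pay for a roughly 4.6-fold larger footprint.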


Source

When Belgium sneezes, the world catches a cold

Using data from Bureau Van Dijk - the company information and business intelligence provider - to assess the reach and size of different countries' economies, and applying the Susceptible-Infected-Recovered (SIR) model, physicists from universities in Greece, Switzerland and Israel have identified the twelve countries with greatest power to spread a crisis globally.

The research, published today, Thursday 25 November 2010, in a journal co-owned by the Institute of Physics and the German Physical Society, groups Belgium and Luxembourg alongside more obviously impactful economies such as the USA in the top twelve.

Using a statistical physics approach, the researchers from the Universities of Thessaloniki, Lausanne and Bar-Ilan used two different databases to model the effect of hypothetical economic crashes in different countries. The use of two different databases aided the avoidance of bias but threw up very similar results.

The data used allowed the physicists to identify links between the different countries, by mapping the data to a network, and to gauge the likelihood of one failed economy having an effect on another.

One network was created using data on the 4,000 world corporations with the highest turnover, and a second using data on import and export relations between 82 countries.

The SIR model, successfully used previously to model the spreading of disease epidemics, is applied to these two networks taking into consideration the strength of links between countries, the size of the crash, and the economic strength of the country in potential danger.
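A toy sketch of this idea, with invented countries, link weights, and threshold (the actual study uses real corporate and trade data and full SIR dynamics): a crisis cascades along any link strong enough to transmit it, and each economy transmits once before effectively "recovering".

```python
# Symmetric link strengths between hypothetical economies (invented data).
links = {
    ("A", "B"): 0.8, ("B", "C"): 0.6, ("C", "D"): 0.2,
    ("B", "D"): 0.5, ("D", "E"): 0.1,
}
THRESHOLD = 0.4  # minimum link strength needed to transmit the crisis

def neighbors(node):
    """Yield (neighbor, weight) pairs for a node in the undirected graph."""
    for (u, v), w in links.items():
        if u == node:
            yield v, w
        elif v == node:
            yield u, w

def spread_crisis(seed):
    """Return the set of economies reached from a seeding crash."""
    infected, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()  # transmits once, then is 'recovered'
        for nbr, weight in neighbors(node):
            if nbr not in infected and weight >= THRESHOLD:
                infected.add(nbr)
                frontier.append(nbr)
    return infected

print(sorted(spread_crisis("A")))  # ['A', 'B', 'C', 'D']: E's weak link insulates it
print(sorted(spread_crisis("E")))  # ['E']: a peripheral crash stays local
```

Even this caricature shows the study's central point: what matters is not a country's size in isolation but the strength of its connections, which is how small, highly connected economies like Belgium end up in the inner core.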

When put to the test with the corporate data, the USA, the UK, France, Germany, Netherlands, Japan, Sweden, Italy, Switzerland, Spain, Belgium and Luxembourg were part of an inner core of countries that would individually cause the most economic damage globally if their economies were to fail.

Using the import/export data, China, Russia, Japan, Spain, UK, Netherlands, Italy, Germany, Belgium, Luxembourg, USA, and France formed the inner core, with the researchers explaining that the difference– particularly the addition of China to this second list– is due to a large fraction of Chinese trade volume coming from subsidiaries of western corporations based in China.

The researchers write, "Surprisingly, not all 12 countries have the largest total weights or the largest GDP. Nevertheless, our results suggest that they do play an important role in the global economic network. This is explained by the fact that these smaller countries do not support only their local economy, but they are a haven for foreign investments."


Source

Wednesday, November 24, 2010

The physics of coffee rings

You might think coffee ring formation, first described quantitatively by Deegan et al in a heavily cited article, is the most widely and ritualistically performed experiment in the world, given the prevalence of caffeine in cultures. But most of us lack the tools and mathematical models to evaluate our stain data properly, or to reach meaningful conclusions beyond "Use a coaster."

Now Shreyas Mandre of Brown University, Ning Wu from the Colorado School of Mines, and L. Mahadevan and Joanna Aizenberg from Harvard University have devised an approach that combines laboratory studies of microscopic glass particles in solution with mathematical theories to predict the existence, thickness and length of the banded ring patterns that formed.

Their results, presented today at the American Physical Society Division of Fluid Dynamics meeting in Long Beach, CA, suggest the patterned deposition of particles can be controlled by altering physical parameters such as evaporation and particle concentration -- and perhaps one day manipulated to create small-particle tools.

"Controlling the ring deposition process would be useful for creating such things as new microphysics tools operating at a scale where pliers or other traditional tools for moving particles cannot operate,"notes Mandre.

The team found that during ring deposition, a particle layer of uniform thickness is deposited if the concentration is above a certain threshold. Below that threshold the deposits form non-uniform bands. The threshold is formed because evaporation at the solid-liquid interface of the rim occurs faster than a replenishing flow of water from the center of the droplet can replace the evaporating rim fluid. This leaves the particles on the rim high, dry -- and deposited.

Exploiting this competition between evaporation and replenishment is the key to controlling the process as a microtool, says Mandre. Potential applications include printing, making industrial coatings, fabricating electronics, and designing new medicines.


Source

Tuesday, November 23, 2010

Scientists glimpse universe before the Big Bang

Pre Big Bang Circles


The CMB is the radiation that exists everywhere in the universe, thought to be left over from when the universe was only 300,000 years old. In the early 1990s, scientists discovered that the CMB temperature has anisotropies, meaning that the temperature fluctuates at the level of about 1 part in 100,000. These fluctuations provide one of the strongest pieces of observational evidence for the Big Bang theory, since the tiny fluctuations are thought to have grown into the large-scale structures we see today. Importantly, these fluctuations are considered to be random due to the period of inflation that is thought to have occurred in the fraction of a second after the Big Bang, which made the radiation nearly uniform.
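The size of these fluctuations is easy to make concrete: one part in 100,000 of the mean CMB temperature (2.725 K is the standard measured value) amounts to only tens of microkelvin.

```python
# Scale of the CMB anisotropies quoted above.
T_CMB = 2.725    # K, mean CMB temperature (standard measured value)
FRACTION = 1e-5  # typical fractional fluctuation, ~1 part in 100,000

delta_T = T_CMB * FRACTION
print(f"typical fluctuation: {delta_T * 1e6:.1f} microkelvin")
```

Detecting structure, such as concentric low-variance circles, in a signal this faint is why instrumental effects had to be carefully ruled out.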

However, Penrose and Gurzadyan have now discovered concentric circles within the CMB in which the temperature variation is much lower than expected, implying that CMB anisotropies are not completely random. The scientists think that these circles stem from the results of collisions between supermassive black holes that released huge, mostly isotropic bursts of energy. The bursts have much more energy than the normal local variations in temperature. The strange part is that the scientists calculated that some of the larger of these nearly isotropic circles must have occurred before the time of the Big Bang.

The discovery doesn't suggest that there wasn't a Big Bang; rather, it supports the idea that there could have been many of them. The scientists explain that the CMB circles support the possibility that we live in a cyclic universe, in which the end of one “aeon” or universe triggers another Big Bang that starts another aeon, and the process repeats indefinitely. The black hole encounters that caused the circles likely occurred within the later stages of the aeon right before ours, according to the scientists.

In the past, Penrose has investigated cyclic cosmology models because he has noticed another shortcoming of the much more widely accepted inflationary theory: it cannot explain why there was such low entropy at the beginning of the universe. The low entropy state (or high degree of order) was essential for making complex matter possible. The cyclic cosmology idea is that, when a universe expands to its full extent, black holes will evaporate and all the information they contain will somehow vanish, removing entropy from the universe. At this point, a new aeon with a low entropy state will begin.

Because of the great significance of these little circles, the scientists will do further work to confirm their existence and see which models can best explain them. Already, Penrose and Gurzadyan used data from two experiments - WMAP and BOOMERanG98 - to detect the circles and eliminate the possibility of an instrumental cause for the effects. But even if the circles really do stem from sources in a pre-Big Bang era, cyclic cosmology may not offer the best explanation for them. Among its challenges, cyclic cosmology still needs to explain the vast shift of scale between aeons, as well as why it requires all particles to lose their mass at some point in the future.


Source

Monday, November 22, 2010

Enhancing the efficiency of wind turbines

New ideas for enhancing the efficiency of wind turbines are being presented this week at the American Physical Society Division of Fluid Dynamics meeting in Long Beach, CA.

One issue confronting the efficiency of wind energy is the wind itself -- specifically, its changeability. The aerodynamic performance of a wind turbine is best under steady wind flow, and the efficiency of the blades degrades when exposed to conditions such as wind gusts, turbulent flow, upstream turbine wakes, and wind shear.

Now a new type of air-flow technology may soon increase the efficiency of large wind turbines under many different wind conditions.

Syracuse University researchers Guannan Wang, Basman El Hadidi, Jakub Walczak, Mark Glauser and Hiroshi Higuchi are testing new intelligent-systems-based active flow control methods with support from the U.S. Department of Energy through the University of Minnesota Wind Energy Consortium. The approach estimates the flow conditions over the blade surfaces from surface measurements and then feeds this information to an intelligent controller to implement real-time actuation on the blades to control the airflow and increase the overall efficiency of the wind turbine system. The work may also reduce excessive noise and vibration due to flow separation.

Initial simulation results suggest that flow control applied on the outboard side of the blade, beyond the half radius, could significantly enlarge the overall operational range of the wind turbine at the same rated power output, or considerably increase the rated output power for the same operational range. The team is also investigating a characteristic airfoil in a new anechoic wind tunnel facility at Syracuse University to determine the airfoil lift and drag characteristics with appropriate flow control while exposed to large-scale flow unsteadiness. In addition, the effects of flow control on the noise spectrum of the wind turbine will also be assessed and measured in the anechoic chamber.

Another problem with wind energy is drag, the resistance felt by the turbine blades as they beat the air. Scientists at the University of Minnesota have been looking at the drag-reduction effect of placing tiny grooves on turbine blades. The grooves are in the form of triangular riblets scored into a coating on the blade surface. They are so shallow (between 40 and 225 microns) that they can't be seen by the human eye -- leaving the blades looking perfectly smooth.

Using wind-tunnel tests of airfoil surfaces from 2.5-megawatt turbines (a size that is becoming an industry standard) and computer simulations, they are examining the efficacy of various groove geometries and angles of attack (how the blades are positioned relative to the air stream).

Riblets like these have been used before, in the sails of sailboats taking part in the last America's Cup regatta and on the Airbus airliner, where they produced a drag reduction of about 6 percent. The design of wind turbine blades was, at first, closely analogous to that of airplane wings. But owing to different engineering concerns, such as blades having a much thicker cross section close to the hub and wind turbines having to cope with peculiar turbulence near the ground, drag reduction won't be quite the same for wind turbines.

University of Minnesota researchers Roger Arndt, Leonardo P. Chamorro and Fotis Sotiropoulos believe that riblets will increase wind turbine efficiency by about 3 percent.


Source

Sunday, November 21, 2010

Pitt physicist wins 2011 Einstein Prize for lifetime unraveling, reshaping general relativity theory


To recognize Newman's lifetime of work at the forefront of general relativity, the American Physical Society has awarded him the 2011 Einstein Prize for his part in devising the renowned Newman-Penrose formalism—an extension of Einstein's general theory of relativity—as well as for composing a variety of solutions to Einstein's equations, particularly the Kerr-Newman black hole. The prize also commends Newman's ongoing work to explain the significance of far-flung light energy.

Newman joins a select roster of physicists who have received the biennial $10,000 prize since its 2003 inception, including noted Einstein collaborators John A. Wheeler of Princeton University, and Syracuse University (SU) physicist Peter Bergmann, Newman's mentor when he pursued his PhD degree at SU, which he earned in 1956.

In 1962, six years after Newman joined Pitt's Department of Physics and Astronomy, he and University of Oxford professor Roger Penrose developed the Newman-Penrose formalism, one of the most-cited sets of equations in relativity. In short, the formalism is an alternative method for describing Einstein's equations that replaces Einstein's own version, Newman explained.

The significance of the Newman-Penrose formalism is that it allows for special conditions to be imposed before one even attempts to solve an equation—conditions for which Einstein's original theory does not allow. Instead of using the four standard space-time coordinates, the Newman-Penrose equations use four different vectors to describe the geometric constructions of the theory that arise from massive objects in motion.
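To make the last point concrete, one conventional way of writing the Newman-Penrose setup (conventions vary between authors; this is the common signature +--- normalization) replaces the coordinate basis with two real null vectors $l^a$, $n^a$ and a complex-conjugate pair $m^a$, $\bar{m}^a$, from which the metric itself is rebuilt:

```latex
% Null tetrad normalization (one common convention, signature +---):
\[
  l_a l^a = n_a n^a = m_a m^a = 0, \qquad
  l_a n^a = 1, \qquad
  m_a \bar{m}^a = -1 ,
\]
% and the metric is recovered entirely from the tetrad:
\[
  g_{ab} = l_a n_b + n_a l_b - m_a \bar{m}_b - \bar{m}_a m_b .
\]
```

Imposing a geometric condition, such as requiring $l^a$ to be tangent to a shear-free congruence of light rays, then becomes a simple algebraic statement about the tetrad rather than a constraint buried inside the coordinate metric.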

"We knew we had something good,"Newman recalled."We performed the Goldberg-Sachs theorem, which originally required a great deal of effort, at the drop of a hat. We knew it was a powerful technique then. I've used it virtually every day since the original paper, and when I lecture now to a technical audience, I assume that most people are familiar with it."

Three years later, in 1965, Newman inadvertently took part in constructing another important solution, the Kerr-Newman black hole.

As a hotshot young physicist, Newman stated in the Journal of Mathematical Physics that a class of solutions to Einstein's equations did not exist. In all of Newman's mathematics, however, there was one lowly plus sign that should have been a minus. Roy Kerr, then a professor of physics at the University of Texas at Austin, discovered the error and found that the class of solutions did in fact exist. The now-corrected equation then allowed Newman to readily solve the Einstein-Maxwell equations describing rotating, electrically charged black holes and their surrounding region. The Kerr-Newman solution stands as one of four solutions of Einstein's equations describing black holes.
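For reference, the four black-hole solutions are Schwarzschild (mass $M$ only), Reissner-Nordström (mass and charge $Q$), Kerr (mass and spin parameter $a$), and Kerr-Newman (all three). In Boyer-Lindquist coordinates and geometric units ($G = c = 1$) the Kerr-Newman line element can be written:

```latex
\[
  ds^2 = -\frac{\Delta}{\rho^2}\left(dt - a\sin^2\theta\, d\phi\right)^2
         + \frac{\sin^2\theta}{\rho^2}\left((r^2 + a^2)\, d\phi - a\, dt\right)^2
         + \frac{\rho^2}{\Delta}\, dr^2 + \rho^2\, d\theta^2 ,
\]
\[
  \Delta = r^2 - 2Mr + a^2 + Q^2 , \qquad
  \rho^2 = r^2 + a^2 \cos^2\theta .
\]
```

Setting $Q = 0$ recovers Kerr, $a = 0$ recovers Reissner-Nordström, and both to zero recovers Schwarzschild, which is how the Kerr-Newman solution subsumes the other three.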

In his more recent work, Newman investigates null foliation, or the patterns light rays form as they fill space-time. In 1980, Newman first identified a property known as H-space that occurs at the outer reaches of light's range when light rays no longer have physical contact—like the fingertips of a splayed hand. Newman is currently working on possible applications of H-space theory for explaining observable phenomena. {Also known as Heaven theory after a good-natured play on the "H" coined fittingly at a lecture Newman gave in Israel, the work gained notoriety after anti-pork-spending crusader Sen. William Proxmire (D-Wisconsin) took the name seriously and decried Newman's National Science Foundation grant application for a project to find "Heaven." Newman got the grant anyway.}

Newman's outpouring of research and many collaborations characterize the spirit of the golden age of general relativity that fell approximately between 1955 and 1975, he said.

Contemporary audiences may struggle to imagine a time when Einstein's theories were not highly regarded. Yet, when Newman entered Syracuse in 1951 as a graduate student in Bergmann's lab, general relativity was out of fashion, having been superseded since the mid-1920s by quantum theory. There were rumblings, however, partly attributable to Einstein's dismissal of major quantum principles, that quantum theory had serious shortcomings. Bergmann—who collaborated with Einstein on his unified field theory work—and his group began to revisit general relativity along with research groups at Princeton, in the United Kingdom, and in Eastern Europe.

"When I joined Bergmann's group, general relativity was in the doldrums. No one worked on it and Einstein, though honored as a great thinker, was considered to be passé, a fogey,"Newman said.

"But groups in a handful of institutions around the world began accepting Einstein's theory of relativity as relevant to theof the day. There was an open exchange of ideas among the different groups that stimulated a rapid revitalization of relativity,"Newman continued."Soon, a deeper understanding of the Einstein equations was developed and predictions of the existence and properties of gravitational waves were made. The theory of relativity became mainstream.

"Those were wonderful years of friendship and collaboration."


Source

Saturday, November 20, 2010

Physicists study behavior of enzyme linked to Alzheimer's, cancer


Margaret Cheung, assistant professor of physics at UH, and Antonios Samiotakis, a physics Ph.D. student, described their findings in a paper titled "Structure, function, and folding of phosphoglycerate kinase (PGK) are strongly perturbed by macromolecular crowding," published in a recent issue of the journal Proceedings of the National Academy of Sciences, one of the world's most-cited multidisciplinary scientific journals.

"Imagine you're walking down the aisle toward an exit after a movie in a crowded theatre. The pace of your motion would be slowed down by the moving crowd and narrow space between the aisles. However, you can still maneuver your arm, stretch out and pat your friend on the shoulder who slept through the movie,"Cheung said."This can be the same environment inside a crowded cell from the viewpoint of a protein, the workhorse of all living systems. Proteins always 'talk' to each other inside cells, and they pass information about what happens to the cell and how to respond promptly. Failure to do so may cause uncontrollable cell growth that leads to cancer or cause malfunction of a cell that leads to Alzheimer's disease. Understanding a protein inside cells– in terms of structures and enzymatic activity– is important to shed light on preventing, managing or curing these diseases at a molecular level."

Cheung, a theoretical physicist, and Martin Gruebele, her experimental collaborator at the University of Illinois at Urbana-Champaign, led a team that unlocked this mystery. Studying the PGK enzyme, Cheung used computer models that simulate the environment inside a cell. Biochemists typically study proteins in water, but such test tube research is limited because it cannot gauge how a protein actually functions inside a crowded cell, where it can interact with DNA, ribosomes and other molecules.

The PGK enzyme plays a key role in the process of glycolysis, which is the metabolic breakdown of glucose and other sugars that releases energy in the form of ATP. ATP molecules are basically like packets of fuel that power biological molecular motors. This conversion of food to energy is present in every organism, from yeast to humans. Malfunction of the glycolytic pathway has been linked to Alzheimer's disease and cancer. Patients with reduced metabolic rates in the brain have been found to be at risk for Alzheimer's disease, while out-of-control metabolic rates are believed to fuel the growth of malignant tumor cells.

Scientists had previously believed that the Pac-Man-shaped PGK enzyme had to undergo a dynamic hinge motion to perform its metabolic function. However, in the computer models mimicking the cell interior, Cheung found that the enzyme was already functioning in its closed Pac-Man state in the jam-packed surroundings. In fact, the enzyme was 15 times more active in the tight spaces of a crowded cell. This shows that in cell-like conditions the function of a protein is more active and efficient than in a dilute condition, such as a test tube. This finding can drastically transform how scientists view proteins and their behavior when the environment of a cell is taken into account.

"This work deepens researchers' understanding of how proteins function, or don't function, in real cell conditions,"Samiotakis said."By understanding the impact of a crowded cell on the structure, dynamics of proteins can help researchers design efficient therapeutic means that will work better inside cells, with the goal to prevent diseases and improve human health."

Cheung and Samiotakis' simulations, performed using the supercomputers at the Texas Learning and Computation Center (TLC2), were coupled with in vitro experiments by Gruebele and his team. The high-performance computing resources of TLC2 factored significantly in the success of their work.

"Picture having a type of medicine that can precisely recognize and target a key that causes Alzheimer's or cancer inside a crowded cell. Envision, then, the ability to switch a sick cell like this back to its healthy form of interaction at a molecular level,"Cheung said."This may become a reality in the near future. Our lab at UH is working toward that vision."

The research was funded by a nearly $224,000 National Science Foundation grant in support of Samiotakis' dissertation.


Source

Friday, November 19, 2010

New look at relativity: Electrons can't exceed the speed of light -- thanks to light itself, says biologist

Any space with a temperature above absolute zero is filled with photons. As a result of the Doppler effect, the moving electron experiences the photons crashing into the front of it as being blue-shifted, and the photons colliding with the back of it as being red-shifted. Since blue-shifted photons exert more momentum than red-shifted photons, the photons themselves exert a counterforce on the moving electron, just as the cytoplasm in a cell exerts a viscous force on moving organelles. The viscous force that arises from the Doppler-shifted photons prevents electrons from exceeding the speed of light, according to Randy Wayne, associate professor of plant biology.
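The front-back asymmetry itself is standard special relativity, whatever one makes of Wayne's interpretation of it as a viscous force. A photon's momentum $p = h\nu/c$ scales with the relativistic Doppler factor, so a short sketch can show how lopsided the head-on versus tail-on momenta become as the electron approaches the speed of light:

```python
import math

def doppler_momentum_ratio(beta):
    """Ratio of the momentum of a head-on photon to that of a tail-on photon,
    as seen by an electron moving with speed beta = v/c through isotropic
    radiation. Uses the standard relativistic Doppler factors."""
    blue = math.sqrt((1 + beta) / (1 - beta))   # photons hitting the front
    red = math.sqrt((1 - beta) / (1 + beta))    # photons hitting the back
    return blue / red                            # simplifies to (1+beta)/(1-beta)

# The asymmetry diverges as beta approaches 1:
for beta in (0.1, 0.9, 0.999):
    print(beta, round(doppler_momentum_ratio(beta), 3))
```

At 10 percent of light speed the front-to-back ratio is only about 1.2, but at 99.9 percent it is roughly 2000, which is the qualitative behavior Wayne's viscous-force argument relies on.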

Wayne's research, "Charged Particles Are Prevented From Going Faster Than the Speed of Light by Light Itself: A Biophysical Cell Biologist's Contribution to Physics," appears in the November 2010 issue of Acta Physica Polonica B.

On the question of whether electrons can surpass the speed of light, Albert Einstein's special theory of relativity holds that electrons are prevented from exceeding the speed of light as a result of the relativity of time. But Wayne contends that Einstein didn't take into account the environment through which the electrons move.

"Given the prominence of viscous forces within and around cells and the experience of identifying and quantifying such resistive forces, biophysical cell biologists have an unique perspective in discovering the viscous forces that cause moving particles to respond to an applied force in a nonlinear manner,"he explained."Consequently, light itself prevents charged particles from moving faster than the."

Wayne will publish a related paper, "The Relativity of Simultaneity: An Analysis Based on the Properties of Electromagnetic Waves," in a forthcoming volume of the African Physical Review, which is a juried publication.


Source