Monday, May 23, 2011

Large Hadron Collider smashes another record

Scientists look at computer screens to examine activity in the Large Hadron Collider


"Last night, a symbolic frontier was crossed,"said Michel Spiro, president of the board of the European Organisation for(), explaining that the rate of sub-atomic smashups in its vast machine had multiplied 10-fold in the space of a month.

CERN's Large Hadron Collider (LHC) is housed in a 27-kilometre (16.9-mile) ring-shaped tunnel 100 metres (325 feet) below ground, straddling the French-Swiss border.

It is designed to accelerate beams of protons to nearly the speed of light in contra-rotating directions.

Using magnets, the beams are then directed into labs where some of the protons collide while others escape.

Detectors record the seething sub-atomic debris, which physicists comb for traces of particles that can strengthen the fundamental understanding of physics.

A month ago, the LHC set a record of 10 million collisions per second.


The director general of the European Organization for Nuclear Research (CERN), Rolf-Dieter Heuer, speaks to journalists in May 2011. CERN runs the world's biggest particle collider, located on the outskirts of Geneva. The machine has set a new record, a feat that should accelerate the quest to pinpoint the elusive particle known as the Higgs boson, a senior physicist said.

"This is now 100 million collisions per second,"Spiro said at a conference in Paris on the"infinitely small and the infinitely big."

Among the puzzles that physicists are seeking to answer is the existence of the Higgs, which has been dubbed "the God particle" for being mysterious yet ubiquitous.

If found, it would explain the nature of mass, filling a major piece of the theoretical construct of physics known as the Standard Model.

In London last week, CERN physicists said they believed that by the end of 2012 they could determine once and for all whether the Higgs existed or not.

Spiro said that this search would certainly be helped by the stepped-up pace of collisions, which is equivalent to sifting more earth in search of nuggets of gold.

"If we're lucky, and it (the Higgs) is in the right zone for expected mass, we may be able to find it this summer,"he said."On the other hand, ruling it out will take us to the end of next year."

To provide a confirmation would require notching up "at least 15" detections, he said.

The LHC first circulated proton beams on September 10, 2008. The smasher then had to endure a 14-month shutdown to fix technical problems.

It had been due to shut down in early 2012 for work enabling it to crank up to full power. But a decision was made several weeks ago to delay closure for a year to help the Higgs hunt.


Source

Sunday, May 22, 2011

Long-standing question about swimming in elastic liquids, answered

Paulo Arratia, assistant professor of mechanical engineering and applied mechanics, along with student Xiaoning Shen, conducted the experiment. Their findings were published in the journal Physical Review Letters.

Many animals, microorganisms and cells move by undulation, and they often do so through elastic fluids. From worms aerating soil to sperm racing toward an egg, swimming dynamics in elastic fluids is relevant to a number of facets of everyday life; however, decades of research in this area have been almost entirely theoretical or done with computer models. Only a few investigations involved live organisms.

“There have been qualitative observations of sperm cells, for example, where you put sperm in water and watch their tails, then put them in an elastic fluid and see how they swim differently,” Arratia said. “But this difference has never been characterized, never put into numbers to quantify exactly how much elasticity affects the way they swim, whether it is faster or slower and why.”

The main obstacle to quantitatively testing these theories with live organisms is developing an elastic fluid in which they can survive, behave normally and be effectively observed under a microscope.

Arratia and Shen experimented on the nematode C. elegans, building a swimming course for the millimeter-long worms. The researchers filmed them through a microscope while the creatures swam the course in liquids of different elasticity but the same viscosity.
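The kind of quantification Arratia describes (turning video of a swimming worm into a number) is straightforward once the centroid has been tracked frame by frame. Below is a minimal Python sketch under assumed values; the frame rate and the synthetic tracks are illustrative stand-ins, not the authors' data or analysis code.

import numpy as np

# Minimal sketch: mean swimming speed from tracked centroid positions.
# The frame rate and tracks below are assumptions for illustration only.
FPS = 30.0  # video frames per second (assumed)

def mean_speed(xy):
    """xy: (n_frames, 2) array of centroid positions in millimeters."""
    steps = np.diff(xy, axis=0)                 # per-frame displacements
    path = np.linalg.norm(steps, axis=1).sum()  # total distance traveled, mm
    return path * FPS / (len(xy) - 1)           # mm per second

# Synthetic tracks standing in for a worm in a Newtonian vs. an elastic fluid:
t = np.linspace(0.0, 10.0, 301)                      # 10 s filmed at 30 fps
newtonian = np.c_[0.35 * t, 0.05 * np.sin(12 * t)]   # faster forward progress
elastic = np.c_[0.20 * t, 0.05 * np.sin(12 * t)]     # slower in elastic fluid
print(f"Newtonian: {mean_speed(newtonian):.2f} mm/s")
print(f"elastic:   {mean_speed(elastic):.2f} mm/s")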

Though the two liquid traits, elasticity and viscosity, sound like they are two sides of the same coin, they are actually independent of each other. Viscosity is a liquid’s resistance to flowing; elasticity describes its tendency to resume its original shape after it has been deformed. All fluids have some level of viscosity, but certain liquids like saliva or mucus, under certain conditions, can act like a rubber band.

Increased viscosity would slow a swimming organism, but how one would fare with increased elasticity was an open question.

“The theorists had a lot of different predictions,” Arratia said. “Some people said elasticity would make things go faster. Others said it would make things go slower. It was all over the map.

“We were the first ones to show that, with this animal, elasticity actually brings the speed and swimming efficiency down.”

The reason the nematodes swam slower has to do with how viscosity and elasticity can influence each other.

“In order to make our fluids elastic, we put polymers in them,” Arratia said. “DNA, for example, is a polymer. What we use is very similar to DNA, in that if you leave it alone it is coiled. But if you apply a force to it, the DNA, or our polymer, will start to unravel.

“With each swimming stroke, the nematode stretches the polymer. And every time the polymers are stretched, the viscosity goes up. And as the viscosity goes up, there is more resistance to move through.”

Beyond giving theorists and models a real-world benchmark to work from, Arratia and Shen’s experiment opens the door for more live-organism experiments. There are still many unanswered questions relating to swimming dynamics and elasticity.

“We can increase the elasticity and see if there is a mode in which speed goes up again. Once the fluid is strongly elastic, or closer to a solid, we want to see what happens,” Arratia said. “Is there a point where it switches from swimming to crawling?”


Source

Saturday, May 21, 2011

'Kinks' in tiny chains reveal Brownian rotation


Sibani Lisa Biswal, an assistant professor in chemical and biomolecular engineering at Rice University, said it's easy to view microscopic rods as they wiggle and weave under the influence of Brownian forces. But it's never been easy to see one spin along its axis, let alone measure that motion.

The technique created by Biswal and her team involves micron-scale rigid chains in liquid that act like perfect cylinders as they exhibit Brownian motion. But the rodlike chains incorporate the slightest of imperfections. These nearly invisible "kinks" are just big enough to allow the chain's rotation to be measured without influencing it.

Knowing how elongated molecules move in a solution is important to those who study the structure of liquid crystals or processes like the dynamics of lipid bilayers, the gatekeepers in living cells, Biswal said.

The research follows her lab's creation of a technique to build stiff chains of particles that mimic rod-like polymer or biological molecules. Using them like Legos, the lab assembles chains from DNA-grafted paramagnetic particles, which line up when exposed to a magnetic field and link together where the strands of DNA meet.

The result looks like a string of beads. Depending on the length and type of the DNA linkers, the rods can be stiff or flexible. Slight variations in the paramagnetic properties of each particle account for the kinks. "We can make them robust; we can make them stable," Biswal said. "Now we're actually using them as a model for polymers."


See video of the microscopic rods turning

It's long been known that stiff rods in a solution rotate as they dance and are pushed by the molecules around them. Nearly two centuries ago, Robert Brown observed the rotation of flat flakes but had no way to characterize that motion. While Albert Einstein and others have since made progress in applying formulas to Brownian motion, the particulars of rotation have remained a relative mystery.

Dichuan Li, a graduate student in Biswal's lab and lead author of the new paper, was inspired to look at rotation after reading Brown's 1827 report in a classic-paper reading club. "He noticed what he thought must be axial rotation, but he wasn't able to measure how fast it was moving," Li said.

The new method is the first systematic approach to measuring the axial rotation of particles, he said. Once chains are formed, the magnetic field is released and the chains are free to move in a solution between two cover plates. Li isolated and filmed the structures as they twisted, and he later analyzed the kinks to quantify the chains' motion.
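For readers wondering how a filmed kink becomes a number, Brownian rotation obeys a linear law for the mean-squared angular displacement, <Δθ²> = 2 D_r t, so a rotational diffusion coefficient D_r can be fit from the kink angle tracked over time. The Python sketch below is an assumed analysis of that kind, checked against synthetic data; it is not the group's actual code, and the frame interval is an illustrative guess.

import numpy as np

# Hypothetical analysis: estimate the rotational diffusion coefficient D_r
# from a time series of unwrapped kink angles via <dtheta^2> = 2 * D_r * t.
DT = 0.05  # seconds between video frames (assumed)

def rotational_diffusion(theta):
    """theta: 1D array of unwrapped kink angles (radians) over time."""
    lags = np.arange(1, len(theta) // 4)
    msd = np.array([np.mean((theta[l:] - theta[:-l]) ** 2) for l in lags])
    slope = np.polyfit(lags * DT, msd, 1)[0]  # linear fit of MSD vs. lag time
    return slope / 2.0                        # D_r in rad^2 per second

# Check the estimator on synthetic Brownian rotation with a known D_r:
rng = np.random.default_rng(0)
true_dr = 0.3
theta = np.cumsum(rng.normal(0.0, np.sqrt(2 * true_dr * DT), 4000))
print(f"estimated D_r = {rotational_diffusion(theta):.3f} rad^2/s (true: {true_dr})")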

The finding opens a door to further study of longer or more complex polymer or biological chains, Biswal said. She said the paramagnetic beads could be used to model rods of varying stiffness, "even more flexible structures that can actually curve and bend, just like DNA, or branch-like structures. Then we can apply forces to them and see what happens."

Biswal hopes to take a closer look at how polymers entangle in materials of varying density. "How they're stabilized by entanglement is not well understood," she said. "We're moving toward being able to create not just single chains for study, but large collections of these chains to see if they provide good models to look at things like entanglement."


Source

Friday, May 20, 2011

AMS is ready to discover the particle universe


Cosmic rays are high-energy particles that shoot through space close to the speed of light. The newly installed Alpha Magnetic Spectrometer (AMS-02) will detect and catalogue these rays, looking for new clues into the fundamental nature of matter.

One of the biggest mysteries that AMS will attempt to solve is where the cosmic rays originate. The fearsome energies of the particles could be generated in the tangled magnetic fields of exploded stars, or perhaps in the hearts of active galaxies, or maybe in places as yet unseen by astronomers.


AMS-02 on ISS. Credits: NASA

By collecting and measuring vast numbers of cosmic rays and their energies, particle physicists hope to understand more about how and where they are born.

AMS-02 is the culmination of a programme that launched a prototype detector aboard the Space Shuttle Discovery in 1998. Known as AMS-01, the experiment showed the great potential for discovery.

AMS-02 will operate on the International Space Station to 2020 and beyond. Part of its mission is to search for antimatter within the cosmic rays. A European satellite experiment – the Payload for Antimatter Matter Exploration and Light-Nuclei Astrophysics (PAMELA) – has already shown that there is more antimatter in space than conventional astrophysics expects.

One possibility that AMS-02 will investigate is whether it is coming from collisions between particles of ‘dark matter’. This is the mysterious substance that astronomers believe may pervade the Universe, outweighing normal matter by about ten to one.

There is also the remote chance that AMS-02 will detect a particle of anti-helium, left over from the Big Bang itself.

“The most exciting objective of AMS is to probe the unknown; to search for phenomena which exist in nature that we have not yet imagined nor had the tools to discover,” says Nobel Prize Laureate Samuel Ting, who leads the international project.


Assembling the AMS-02 magnet. Credits: CERN / AMS-02 Collaboration

Significant European participation has gone into the AMS collaboration. The instrument is a unique suite of different detectors and was mostly built in Europe by institutes in Italy, France, Germany, Spain, Portugal and Switzerland, with the participation of the US, China, Russia and Taiwan.

In all, the experiment’s team consists of more than 600 scientists from 56 institutes in 16 countries, with the European involvement coordinated by Prof. Roberto Battiston.

The installation of AMS-02 is part of the ‘DAMA’ (dark matter) mission of ESA astronaut Roberto Vittori, one of the six astronauts on Space Shuttle Endeavour.


Source

Thursday, May 19, 2011

'Critical baby step' taken for spying life on a molecular scale


In a study published today, Thursday, 19 May, in the Institute of Physics and German Physical Society's New Journal of Physics, researchers have developed a technique, exploiting a specific defect in the lattice structure of diamond, to externally detect the spins of individual molecules.

Magnetic resonance imaging (MRI) has already taken advantage of a molecule's spin to give clear snapshots of organs and tissue within the human body; however, to get a more detailed insight into the workings of disease, the imaging scale must be brought down to individual molecules, and captured whilst the cells are still alive.

Co-lead author Professor Phillip Hemmer, Electrical & Computer Engineering, Texas A&M University, said, "Many conditions, such as cancer and aging, have their roots at the molecular scale. Therefore if we could somehow develop a tool that would allow us to do magnetic resonance imaging of individual biomolecules in a living cell then we would have a powerful new tool for diagnosing and eventually developing cures for such stubborn diseases."

To do this, the researchers, from Professor Joerg Wrachtrup's group at the University of Stuttgart and Texas A&M University, used a constructed defect in the structure of diamond called a nitrogen vacancy (NV)—a position within the carbon lattice where one of the carbon atoms is replaced with a nitrogen atom.

Instead of bonding to four other carbon atoms, the nitrogen atom only bonds to three carbon atoms leaving a spare pair of electrons, acting as one of the strongest magnets on an atomic scale.

The most important characteristic of a diamond NV is that it has an optical readout—it emits bright red light when excited by a laser, with a brightness that depends on which way the magnet is pointing.

The researchers found that if an external spin is placed close to the NV it will cause the magnet to point in a different direction, therefore changing the amount of light emitted by it.

This change of light can be used to gauge which way the external molecule is spinning and therefore create a one-dimensional image of the external spin. If combined with additional knowledge of the surface, or a second NV nearby, a more detailed image with additional dimensions could be obtained.

To test this theory, nitrogen was implanted into a sample of diamond in order to produce the necessary NVs. External molecules were brought to the surface of the diamond, using several chemical interactions, so that their spins could be analyzed.

Spins that exist within the diamond structure itself have already been modelled, so to test that the spins were indeed external, the researchers chemically cleaned the diamond surface and performed the analysis again to prove that the spins had been washed away.

Professor Hemmer continued, "Currently, biological interactions are deduced mostly by looking at large ensembles. In this case you are looking only at statistical averages and details of the interaction which are not always clear. Often the data is taken after killing the cell and spreading its contents onto a gene chip, so it is like looking at snapshots in time when you really want to see the whole movie."

"Clearly there is much work to be done before we can, if ever, reach our long-term goal of spying on the inner workings of life on the molecular scale. But we have to learn to walk before we can run, and this breakthrough represents one of the first critical baby steps."


Source

Wednesday, May 18, 2011

Some particles are able to flow up small waterfalls, physicists show

Cuban physicists discover some particles are able to flow up small waterfalls


With just a small bit of research, the team was able to show what seems to be counterintuitive: that it truly is possible for some particles (and some of the liquid itself) to travel up a very small waterfall and into the reservoir behind it. While most anyone who has observed flowing water has likely noted the whirls and eddies that form when water tries to flow along or past obstacles, it’s difficult to imagine such counter-flows being created with sufficient force to actually push the fluid uphill. Altshuler et al. show that, in fact, it can.

To see what was going on, they used two lab containers: one to hold the room-temperature water, the other to hold the chalk they used instead of mate leaf bits (figuring it would be much easier to follow with the naked eye). They then placed an open half-cylinder-shaped channel between the two containers that would allow water to flow smoothly from the first container down the channel, where it would then drop into the second container. With this setup, they discovered that as the liquid came rushing down the channel, the main mass of fluid traveled down the center, creating vortices that caused small amounts of fluid to travel along the edges of the channel in the opposite direction, allowing the chalk to work its way up to the higher container. By varying the height of the channel, they also discovered that this only occurred when the dropping distance was very slight; in this case, 1 centimeter or less.

As a result of this study, it’s likely that certain industrial processes might have to be modified to make certain that unintentional contamination, currently being overlooked, doesn’t occur. It’s also likely that future research on this phenomenon will need to be done to determine whether other factors can affect the height of the fall or the amount of particulate that is able to travel uphill to another vessel.


Source

Tuesday, May 17, 2011

Physicists accelerate simulations of thin film growth


Jacques Amar, Ph.D., professor of physics at the University of Toledo (UT), studies the modeling and growth of materials at the atomic level. He uses Ohio Supercomputer Center (OSC) resources and Kinetic Monte Carlo (KMC) methods to simulate the molecular beam epitaxy (MBE) process, where metals are heated until they transition into a gaseous state and then reform as thin films by condensing on a wafer in single-crystal thick layers.

"One of the main advantages of MBE is the ability to control the deposition of thin films and atomic structures on thein order to create nanostructures,"explained Amar.

Thin films are used in industry to create a variety of products, such as semiconductors, optical coatings and pharmaceuticals.

"Ohio's status as a worldwide manufacturing leader has led OSC to focus on the field of advanced materials as one of our areas of primary support,"noted Ashok Krishnamurthy, co-interim co-executive director of the center."As a result, numerous respected physicists, chemists and engineers, such as Dr. Amar, have accessed OSC computation and storage resources to advance their vital materials science research."

Recently, Amar leveraged the center's powerful supercomputers to implement a "first-passage time approach" to speed up KMC simulations of the creation of materials just a few atoms thick.


In tests run on Ohio Supercomputer Systems, Jacques Amar, Ph.D., compared the surface simulations of thin film copper growth using regular (a) Kinetic Monte Carlo methods (KMC) and (b) first-passage-time distribution KMC simulations. Credit: Amar/University of Toledo

"The KMC method has been successfully used to carry out simulations of a wide variety of dynamical processes over experimentally relevant time and length scales,"Amar noted."However, in some cases, much of the simulation time can be 'wasted' on rapid, repetitive, low-barrier events."

While a variety of approaches to dealing with the inefficiencies have been suggested, Amar settled on using a first-passage-time (FPT) approach to improve KMC processing speeds. FPT, sometimes also called first-hitting-time, is a statistical model that sets a certain threshold for a process and then estimates certain quantities, such as the probability that the process reaches that threshold within a certain amount of time, or the mean time until the threshold is reached.

"In this approach, one avoids simulating the numerous diffusive hops of atoms, and instead replaces them with the first-passage time to make a transition from one location to another,"Amar said.

In particular, Amar and colleagues from the UT department of Physics and Astronomy targeted two atomic-level events for testing the FPT approach: edge-diffusion and corner rounding. Edge-diffusion involves the "hopping" movement of surface atoms – called adatoms – along the edges of islands, which are formed as the material is growing. Corner rounding involves the hopping of adatoms around island corners, leading to smoother islands.
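The saving is easy to picture with a toy version of corner rounding: a single adatom hops along an island edge until it reaches either corner. Regular KMC simulates every hop; the first-passage shortcut replaces the whole hop sequence with one sampled exit event. The Python sketch below uses textbook random-walk results and, as a simplification, an exponential stand-in for the exit-time distribution (the actual method uses the exact FPT distribution); the hop rate and edge length are illustrative assumptions.

import random

K_HOP = 1.0e6  # hop rate per direction, arbitrary units (assumed)

def regular_kmc(x, length, rng):
    """Simulate every individual edge hop until a corner is reached."""
    t = 0.0
    while 0 < x < length:
        t += rng.expovariate(2.0 * K_HOP)    # waiting time: two hop channels
        x += 1 if rng.random() < 0.5 else -1
    return x, t

def fpt_kmc(x, length, rng):
    """Replace the hop sequence with a single first-passage event.

    For a symmetric walk started at x with absorbing corners at 0 and length:
    P(exit at far corner) = x / length, and the mean exit time is
    x * (length - x) / (2 * K_HOP). Drawing the time from an exponential with
    that mean is a simplification; the exact FPT distribution is not exponential.
    """
    mean_t = x * (length - x) / (2.0 * K_HOP)
    exit_site = length if rng.random() < x / length else 0
    return exit_site, rng.expovariate(1.0 / mean_t)

rng = random.Random(42)
trials, edge, start = 20000, 20, 7
for name, sim in [("regular KMC", regular_kmc), ("FPT shortcut", fpt_kmc)]:
    mean_t = sum(sim(start, edge, rng)[1] for _ in range(trials)) / trials
    print(f"mean exit time, {name}: {mean_t:.3e}")

The two estimates agree, but the shortcut needs one random draw where the regular loop needs roughly 90 simulated hops, which is the source of the speed-ups reported below.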

Amar compared the KMC-FPT and regular KMC simulation approaches using several different models of thin film growth: Cu/Cu(100), fcc(100) and solid-on-solid (SOS). Additionally, he employed two different methods for calculating the FPT for these events: the mean FPT (MFPT), as well as the full FPT distribution.

"Both methods provided"very good agreement"between the FPT-KMC approach and regular KMC simulations,"Amar concluded."In addition, we find that our FPT approach can lead to a significant speed-up, compared to regular KMC simulations."

Amar's FPT-KMC approach accelerated simulations by a factor of approximately 63 to 100 over the corresponding KMC simulations for the fcc(100) model, and by a factor of 36 to 76 for the SOS model. For the Cu/Cu(100) tests, speed-up factors of 31 to 42 and 22 to 28 were achieved, respectively, for simulations using the full FPT distribution and MFPT calculations.


Source

Monday, May 16, 2011

The next computer: your genes


Shu has been working with his students, Qi-Wen Wang and Kian-Yan Yong, at Nanyang Technological University to propose a way that the manipulation of DNA could be used to solve certain types of problems. Their work has been published in Physical Review Letters: “DNA-Based Computing of Strategic Assignment Problems.”

The computations that the human body performs naturally are much faster than even the fastest silicon-based computer. On top of that, Shu points out, silicon is not very environmentally friendly. “There are also heat problems. DNA-based computing could be faster, friendlier for the environment, and eliminate some of the other problems that come with silicon.”

DNA-based computing could prove especially useful for strategic assignment problems. “Even with developments in silicon-based computing, there are some problems that take even the fastest computers months to solve,” Shu says. With DNA-based computing, massively parallel problems, combinatorial problems and AI problems could all be addressed with the possibility of greater efficiency.

“Silicon-based computing relies on a binary system,” Shu explains. “With DNA-based computing, you can do more than have ones and zeroes. DNA is made up of A, G, C, T, which gives it more range. DNA-based computing has the potential to deal with fuzzy data, going beyond digital data.”
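To picture why four letters give "more range" than two, consider a hypothetical encoding in which A, C, G and T stand for the base-4 digits 0 through 3. The Python sketch below only illustrates the information density of the alphabet; it is not the encoding used in Shu's paper. A strand needs roughly half as many symbols as the equivalent bit string.

BASES = "ACGT"  # hypothetical mapping: A=0, C=1, G=2, T=3

def encode(n: int) -> str:
    """Write a non-negative integer as a base-4 'strand'."""
    digits = []
    while True:
        digits.append(BASES[n % 4])
        n //= 4
        if n == 0:
            break
    return "".join(reversed(digits))

def decode(strand: str) -> int:
    value = 0
    for base in strand:
        value = value * 4 + BASES.index(base)
    return value

n = 2011
strand = encode(n)
print(strand, decode(strand))                           # CTTCGT 2011
print(len(strand), "bases vs", n.bit_length(), "bits")  # 6 bases vs 11 bits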

Shu and his students manipulated strands of DNA at the strand level and at the test tube level. They found that they could fuse strands together, as well as cut them, and perform other operations that would affect the ability of the DNA to compute. In this model, DNA molecules are used to store information that can be used for computational purposes.

“We can join strands together, creating an addition operation, or we can divide by making the DNA smaller by denaturization,” Shu says. “We expect that more complex operations can be done as well.”

However, Shu points out that DNA-based computing is in the most basic of stages right now. “So far, there are a lot of human manipulations that must be done. We’d like to refine it so that there is less human interference. In silicon-based computing, we let the CPU do everything. We need to get to the point where we just need to provide a command and let the DNA do everything.” Cost is also an issue. “DNA right now is very costly and hard to commercialize.”

Another challenge with DNA-based computing is the display. It’s very difficult to display the results from DNA-based computing, since electronics have to be used. “We need to find the missing link between electronic speed, which is slow, and DNA speed, which is fast – more like optical speed.”

Despite the challenges, Shu is optimistic. “We have made some progress, and we expect to continue making more progress. DNA is the future of computing.”


Source

Sunday, May 15, 2011

Venom tears: Snake bites can turn out to be groovy


Surprisingly, only now have scientists carefully measured the flow of snake venom. They say that many snakes deliver their poison not by injecting it under pressure via a syringe-like tube, but rather by the force of surface tension along an open groove.

"Nobody ever bothered about the question of why most snakes have fangs with open grooves rather than a tubular shape,"said Leo van Hemmen, who works at the physics department of the Technical University of Munich in Germany. He and his colleagues teamed up with researchers in the department of physical therapy at the University of Massachusetts Lowell to study exactly how venom gets from the snake to the victim.

The surface tension of a fluid arises from the interaction of molecules in the fluid. This tension is what allows a lightweight mosquito to stand on water. To illustrate how surface tension actually propels liquids, van Hemmen points to something called "tears of wine."

In this common phenomenon, thin drops of wine -- resembling tears -- form on the side of a wineglass above the level of the wine. The reason for this is that the surface tension of alcohol is lower than that of water. Near the surface, some of the liquid, which has lost some of its alcohol to evaporation, is boosted upwards by liquid below with its greater alcohol content. This film of wine, hoisted upwards by the difference in surface tension, collects and forms into tear-shaped drops.

Something like this happens when a snake bites its prey. The deadly fluid from the venom gland moves along the open groove in the snake's fang because of the differences in surface tension between the venom inside and outside of the groove. Van Hemmen and his colleagues are the first to measure this process.

If the venom seeps rather than squirts under pressure, wouldn't this make "envenomation" a slow process? It would, but then most snakes aren't in any hurry.
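How slow is seeping? A back-of-the-envelope Washburn-type estimate, l(t) ≈ sqrt(γ r t / 2μ), gives a feel for it. Every number in the Python sketch below is an illustrative assumption, not a measurement from the study; the point is only that the filling time comes out in seconds, consistent with a snake that isn't in any hurry.

# Order-of-magnitude, Washburn-style estimate of surface-tension transport
# along an open groove. All values are assumed for illustration.
gamma = 0.07   # N/m, surface tension (water-like, assumed)
mu = 0.5       # Pa*s, venom viscosity (far thicker than water, assumed)
r = 1e-4       # m, effective groove half-width (assumed)
length = 5e-3  # m, groove length along the fang (assumed)

# Washburn scaling l(t) ~ sqrt(gamma * r * t / (2 * mu)); solving l = length
# for t gives the time to wet the full groove:
t_fill = 2 * mu * length**2 / (gamma * r)
print(f"time to draw venom along the groove: ~{t_fill:.1f} s")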


Sketch of head of Pacific Rattlesnake, showing a partially elevated fang. Normally it lies close to the roof of the mouth and is covered by a membrane not shown in the drawing. When the fang is erected, preparatory to biting, this membrane is folded down at the base of the fang and directs the poison as it leaves the end of the duct of the gland into the base of the fang. Credit: nps.gov

One collaborator in the new study, Bruce A. Young of UMass Lowell, said that snake venom incapacitates the prey and also helps with digestion.

"From the snake's point of view, there is no need to kill the prey, as long as it cannot escape or injure the snake,"said Young."Most venomoushold their prey in their mouth while the venom works."

Young, who directs the anatomical laboratory in the department of physical therapy at UMass Lowell, said that some herpetologists were so fascinated by the whole venom-delivery process that, after being bitten themselves, they would neglect to get aid in time, with fatal results.

And in case you were wondering how venom works, Young summarizes the effects, which depend on the specific snake: "Some venoms disrupt nerve transmission, some impair muscle contraction, some disrupt blood cells, and others do a more generalized disruption to the cells."

The new results appear in the journal Physical Review Letters.


Source

Saturday, May 14, 2011

Perspective on: The future of fusion


What new initiatives have you been focusing on since assuming the leadership at PPPL?
 
PPPL scientists regularly generate intriguing new ideas. We have made great progress toward a major enhancement of our major experimental facility – the National Spherical Torus Experiment, or NSTX. We have funding for an upgrade that will yield an order-of-magnitude improvement in its physics capabilities – a doubling of the plasma current, doubling of the heating power, and quintupling of the plasma duration. This will expand the physics parameter space, advancing all the NSTX missions. We expect to complete the upgrade in three to four years, so the upgrade will guarantee the scientific vitality of NSTX for at least a decade into the future.

Before the upgrade starts, we will be running experiments on NSTX starting in July and running for eight months, in which researchers will study how heat escapes from hot magnetized plasma, and what materials are best for handling intense plasma power.

We have also moved forward with new studies of a liquid boundary for a fusion plasma, in contrast with the more common solid boundary, with expanded operation of our exploratory Lithium Tokamak Experiment (LTX), and have enjoyed very promising interactions with materials and surface scientists in the engineering school.

PPPL has led a two-year national planning effort to define a program to apply the most powerful computers to model the whole, complex fusion plasma system. We have plans for expanded work in plasma astrophysics, and have led a national study to define opportunities in this field. Looking into the fusion future, we have completed a conceptual study of a fusion pilot plant as a possible next step for the U.S. fusion program, investigating various designs and the strategic implications of such a step. At PPPL we are generating many new ideas and initiatives, even in this difficult budgetary climate.

Fusion scientists, like you, have been working to produce fusion reactions for many decades. Why is it so hard to create fusion energy?

In a nuclear fusion reaction, two atomic nuclei fuse and release energy. In a fusion reactor, the core will contain the plasma producing this energy. It’s a difficult process because it requires making a hot gas that is 10 times hotter than the core of the sun -- 100 million degrees -- and confining it for long periods of time in a controllable way. Plasmas exhibit complex behavior that is difficult to understand. The engineering challenge is also huge, because you have to surround this hundred million degree plasma by a material structure. We often say that fusion is maybe the most, or one of the most, difficult science and engineering challenges ever undertaken.
 
Also, researchers had to create an entirely new area of science to work through this problem. It’s as if you said, “Let’s go cure cancer,” but the field of microbiology did not exist. You’d first have to go and establish this field of science. So that’s what began in the very late 1950s, establishing this field of plasma physics, with the goals of understanding how plasmas behave and learning how to control plasmas.
 
Why should the U.S. maintain its funding of the fusion program?
 
The first reason is U.S. competitiveness, both the specific competitiveness in fusion and the general competitiveness in science and technology. Whoever controls the energy sector, whoever innovates with the science, is going to be economically dominant. Fusion is a perfect case study of where we can be either retaining our competitiveness or we can give it up. If the latter, we will be importing fusion reactors.
 
Second, in fusion, our contributions are needed. The U.S. has a workforce for fusion that is second to none. In other countries, they have outbuilt us and they may have better hardware. But, since the U.S. has been at this for quite a while and has operated world-class facilities, we have a broad and deep workforce of fusion physicists and engineers. That’s a fabulous workforce that takes time to nurture. Also, producing fusion energy is a complex, multi-faceted problem and others are not doing everything. We have ideas for facilities here in the U.S. that are needed in the world fusion program.
 
You can ask the question, if the U.S. just disappears from fusion will the rest of the world get there? I think so, but I don’t think they’ll get there as rapidly as they would if the U.S. contributed. And time is important in this problem.
 
Why is it that fusion is not always mentioned in discussions on alternative energy?
 
Fusion is not going to be affecting the electrical grid in 10 years, and most discussions focus on the very-near term. However, underfunding fusion becomes a self-fulfilling prophecy that keeps it always in the long term. Twenty years ago, we proposed building a small burning plasma experiment. It wasn’t built. If it had been, we could have shown by now how a burning plasma works, and not be waiting for results on ITER in the 2020s. Fifteen years ago we proposed building a long pulse superconducting tokamak, which can be operated for long periods of time to investigate the science of controlling plasmas. We would have had that data by now instead of waiting to see the results on such experiments now starting up in Asia.
 
Have there been practical benefits from fusion research so far?
 
There are huge practical benefits and untapped potential benefits, as well. The plasma science learned from fusion has enormous application. We all know about plasma TVs, but plasmas are used to make computer chips, to develop more efficient lighting, to burn up wastes, to treat medical wounds, and to power rockets through plasma thrusters, to name but a few. Spinoffs from fusion technology include new techniques to detect nuclear materials and electromagnetic launchers for aircraft on carriers. Plasma science learned from fusion is now being used to enhance our understanding of the cosmos. Most of the visible universe, after all, is plasma. Plasma physics underlies some of the major questions in astrophysics.
 
Is commercially viable fusion energy truly achievable?
 
If you look where we are now, the progress is really quite remarkable. If a physicist who started the field went to sleep for 40 years and came back today, he or she would be amazed. We routinely produce plasmas that are hundreds of millions of degrees in temperature. We’ve learned how to control them in very fine ways so we can manipulate them with remarkable finesse. We’re not yet done but we can actually produce and tweak how a hundred million degree plasma behaves. We have come so far that we can approximate conditions of a fusion reactor in a laboratory. We’ve come so far that the world collectively is going ahead and building ITER (an international experiment designed to demonstrate the scientific and technological feasibility of fusion energy planned to go online in 2019), which will produce 500 million watts of fusion power. We’ve come so far that we can see the endpoint.
 
So it’s reasonable to believe that fusion reactors will someday exist?
 
That question has been largely answered with some degree of confidence. It’s really a matter of deciding whether we really want to commit the resources to the remaining development that needs to be done. We have a clear choice before us: The United States can either design and build fusion energy plants or we can buy them from Asia or Europe.
 
In terms of scope and ambition, how does the lab compare to what it was in the 1980s and 1990s when the TFTR experiments were in full swing?

The laboratory today is equally as vital as it has ever been. But it’s smaller. The laboratory’s reduction in size has paralleled the evolution of the fusion program in the United States at large. Fifteen years ago and before, the United States was probably the world leader in fusion research. It was a well-funded program. Princeton headed a truly world-leading experiment in TFTR. It had this peak where it demonstrated the production of ten million watts of power as an experiment. After that, in the mid-90s, the fusion budget in the U.S. fell when Congress was reducing budgets severely. Since then, the U.S. fusion budget has been stable, but about one-third of the size of its former program. At the same time, other countries in the world have had fusion programs surge in size because they have recognized the power of fusion. Princeton once had a fusion facility that was second to none in terms of its capabilities and it produced a huge milestone for fusion of generating ten million watts of power, showing that fusion power is real. But today the facilities we have anywhere in the United States are not as up to date as facilities elsewhere. The U.S. and PPPL are still a leader in fusion but we’re not“the” leader in fusion. At PPPL today, we do world-class research and we are looking at a spectrum of ideas that are key to the future of fusion, no question about it. But other countries are also leading because of their present and planned investments. We greatly need to collaborate with other nations with more powerful experiments. This will allow us to maintain our expertise and knowledge. It also will enable us to preserve the option for this nation to build fusion reactors in the future.
 
What kind of fusion research programs are being pursued in other countries?
 
There has been a surge of interest in Asia. South Korea has blasted onto the fusion scene and recently begun operating a new experiment. This type of new experiment was designed to be built at PPPL, but it was cancelled by the Department of Energy because of lack of funds. Korean researchers picked up the idea and built it. They also are now discussing moving forward to a demonstration fusion power plant.

Exactly the same can be said about China. Chinese researchers also have built a similar kind of new experiment and recently begun operations. The Chinese fusion program is growing in leaps and bounds. The same can be said for the European Union. Parts of it have always been strongly supportive of fusion research. Germany is constructing a new facility. And, of course, the E.U. is hosting ITER (which is being built in France). The Indian government is increasing its fusion program; it is presently constructing a new facility similar in type to the Chinese and Korean facilities but not quite as powerful. The Japanese government also is refurbishing the country’s large tokamak to such an extent that it is also going to be a new, major facility. So those countries are really outbuilding the United States in fusion.
 
All of these new experiments are superconducting, which means they operate with superconducting magnets. Superconducting magnets are probably essential for a fusion reactor. They’re advantageous because they consume no power to run them. Once you turn them on, they run without dissipating any energy. You need to enter the superconducting era if you want to do fusion research. None of the experiments in the U.S. operate with superconducting coils. All of the recently built and major new experiments being constructed abroad do and will operate with superconducting coils. So it’s a kind of an indicator of how they are marching more directly to fusion energy than we are.
 
What’s your explanation for the difference in outlook between the U.S. where investment in fusion has flattened, and countries like China and South Korea where investment in fusion is booming?
 
There are several things happening. Some of those countries feel the energy resource threat much more deeply than we do in the United States. In the U.S., we still have some natural resources -- oil and coal. Other countries import a larger sector of their energy, and to them, producing a clean energy source is felt in a much more jugular way. They take it much more seriously. For them, creating fusion energy is a way out of their energy problem. In Asia, particularly, in China and in South Korea, leaders recognize generally that research in science and energy is key to their economic and national security futures. They are ramping up in their science and energy sectors.
 
Fusion research has been described as a science without borders. Are collaborations with other countries a component of PPPL’s program?
 
We have very strong collaboration programs with other countries. Other nations solicit collaboration with PPPL because we have such deep expertise. And, conversely, we want to make use of the new facilities abroad.
 
We have many research partnerships. PPPL is a partner with Oak Ridge and Savannah River national laboratories in the U.S. collaboration on ITER construction. We have collaborations with essentially all the new facilities I mentioned previously. We have collaborations with the new tokamaks in South Korea and China. We have collaborations with a new fusion experiment in Japan, a superconducting stellarator. We have a collaboration that’s growing with a new experiment that’s being built in Germany. Germany has had a long-standing laboratory, the Institute of Plasma Physics, but they built a new branch of it in the former East Germany in a town called Greifswald. They are constructing a billion dollar class stellarator there right now.
 
What’s the difference between ITER and Princeton’s big current experiment, the National Spherical Torus Experiment or NSTX?
 
ITER will be the biggest fusion experiment ever built and will be the size of a commercial reactor. NSTX is smaller. ITER will operate with fusion fuel and will produce what is called a burning plasma, meaning it will be self-sustaining. NSTX is more compact – smaller and rounder. When you make this variation in the geometry, the opportunities are so rich that there are many reasons to do this. The NSTX design is a leading candidate for the next major step in fusion research in the United States – the establishment of a facility that operates with fusion fuel, producing large fluxes of neutrons (products of the fusion reaction) to develop and test the material components that surround the plasma.
 
ITER and NSTX are in the family of fusion devices called tokamaks. They are doughnut-shaped tori. But NSTX has a small hole in the center of the doughnut and it’s smaller and rounder on the outside. So it has the advantage of compactness. It also turns out that, by getting smaller, the magnetic field in the plasma is shaped in a way where you can get to high values of plasma pressure compared to the magnetic pressure that is confining the plasma. One figure of merit for a fusion system is how high the plasma pressure is -- that is, how hot and dense the plasma is compared with the strength of the magnetic field, or the magnetic pressure, used to confine it. The higher the plasma pressure, the more fusion power you will get. The lower the magnetic pressure, the less expensive the reactor. NSTX operates with a high value of this ratio -- high plasma pressure and low magnetic pressure.
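The figure of merit described here is conventionally called beta: the ratio of plasma pressure to magnetic pressure, beta = n k T / (B^2 / 2 mu0). The short Python sketch below uses assumed ballpark numbers, not NSTX specifications, to show how a plasma held by a modest magnetic field can reach a high beta.

import math

# Plasma pressure vs. magnetic pressure (beta). Density, temperature and
# field values are assumed ballpark figures, not NSTX specifications.
MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
K_EV = 1.602e-19      # joules per electronvolt

def beta(n_m3, t_ev, b_tesla):
    p_plasma = n_m3 * t_ev * K_EV        # ideal-gas plasma pressure, Pa
    p_magnetic = b_tesla**2 / (2 * MU0)  # magnetic pressure, Pa
    return p_plasma / p_magnetic

# A hot, dense plasma held by a modest 0.3-tesla field (electron and ion
# contributions lumped into the single n*k*T term for simplicity):
print(f"beta ~ {beta(n_m3=1e20, t_ev=1000.0, b_tesla=0.3):.2f}")  # ~0.45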
 
Why do experiments on the NSTX?
 
When you make this variation in the geometry, the opportunities are so rich that there are many reasons to do this. But perhaps the most prominent reason at the present time is that the NSTX design is a candidate for the next major step in fusion research in the United States. That next envisioned major step for fusion in the United States is to build a facility that operates with fusion fuel, a deuterium-tritium facility. It will produce fusion power, do it completely steadily and in so doing generate large amounts of neutrons. Designers will be able to test the integrated science and engineering of a fusion reactor. In the U.S., this next-stage project is sometimes called a fusion nuclear science facility because it will begin to address the nuclear science associated with fusion, that is, the interaction of the neutrons produced in the fusion reaction with the surrounding material structure. This may be the penultimate step prior to a full-blown fusion power plant. The NSTX design is a potentially attractive design for this next step because it is compact, generates an intense flux of neutrons (due to its small surface area), and may be less expensive. NSTX is also a wonderful facility to develop the science and solution to the plasma-material interface. The high heat flux emanating from NSTX affords testing of materials that must survive exposure to the hot plasma. NSTX is also developing novel solutions, such as liquid boundaries and new ways to magnetically channel the heat exhaust to the boundary. NSTX researchers are also engaged in the broad range of physics issues essential for ITER and fusion in general, including plasma stability and turbulence.
 
What are some of the other fusion experiments at PPPL?

In fusion, we have several experiments aimed at novel approaches to some of the most thorny problems. Indeed, even NSTX is novel in its geometry in that it is different from the mainline tokamak.
 
One of the main challenges of fusion is finding the best way to surround a hundred million degree plasma with a material structure. So the main line approach to that is to surround it with a solid material, tungsten, which has been quite successful in present fusion experiments. However, there are many questions concerning its survivability in a fusion reactor. At PPPL, we are developing an alternative approach. We surround the plasma not by a solid but actually by a liquid, a liquid “wall.” This is an alternative approach to the plasma materials problem. If a solid gets bombarded by some particles streaming out of a hot plasma, it can break, it can sputter, it can erode. Liquids, however, don’t break. Liquids are automatically self-healing. So if we surround the plasma with a liquid, it could possibly erase a significant amount of the materials problems for fusion research. And if the liquid is flowing, the liquid can take the heat of the plasma. One particular liquid, liquid lithium, has a possibly remarkable effect on the plasma. Particles that hit it get absorbed very well, so when you surround a plasma by liquid lithium, it is like a sponge. Particles don’t come back. They get stuck. Why is that good? If you have a standard material, cold particles from the material get ejected into the plasma due to sputtering. That cools down the plasma edge, can make the plasma more turbulent, the plasma can cool further, and the fusion reaction rate is diminished. A liquid lithium wall doesn’t do that. The plasma stays hot. Plasma physicists predict that with the boundary condition of lithium, the plasma should be less turbulent. So liquid lithium is in the vision of plasma engineers because it is a material that won’t break, and in the vision of theoretical physicists because it improves the properties of the plasma. So this is a major research thrust at PPPL.
 
We also operate an exploratory magnetic configuration in which the plasma is confined by a donut with no hole in the center. Sometimes called a compact torus, it is an elongated ball of plasma confined by a relatively weak magnetic field. In a very early stage of development, this approach is quite different from the tokamak – much more compact, with higher plasma pressure (relative to magnetic pressure).
 
We are working on developing new variations of the magnetic configuration for fusion. We have produced 21st century magnetic field designs that could not have been designed without the use of modern computers. We can now evolve designs for modern fusion reactors that are really remarkable – highly three-dimensional magnetic shapes, almost non-intuitive, that are optimized according to a variety of physics guidelines. These designs produce magnet shapes that are perfected for fusion, if they are buildable. They are candidates for future experiments at PPPL.
 
What is fusion?
 
Fusion is the energy source of the sun and all the stars. In a nuclear fusion reaction, two atomic nuclei fuse and then produce other particles. In so doing, a tiny amount of mass is converted to energy of motion in the products. With billions and billions of such reactions occurring in a gas – a hot plasma – substantial heat can be produced. In a fusion reactor, the core will be a hot plasma that produces heat from fusion. The heat is then converted to electricity by conventional means. The image of producing a star on earth captures the goal well.
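The mass-to-energy arithmetic can be made concrete with standard numbers: a single deuterium-tritium reaction (D + T -> He-4 + n) releases about 17.6 MeV. Combining that with the 500-megawatt ITER figure quoted earlier gives a sense of the "billions and billions"; the Python sketch below is simple bookkeeping, with ITER's design target as the only input.

# How many D-T fusion reactions per second does a 500 MW burn imply?
E_DT_MEV = 17.6       # energy released per D-T reaction, MeV
MEV_TO_J = 1.602e-13  # joules per MeV

energy_per_reaction = E_DT_MEV * MEV_TO_J  # ~2.8e-12 J
fusion_power = 500e6                       # W, ITER's design target
reactions_per_second = fusion_power / energy_per_reaction
print(f"{reactions_per_second:.1e} D-T reactions per second")  # ~1.8e20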
 
What is plasma physics?
 
Plasma physics is the study of how this complex state of matter, plasma, behaves. One can put that in the context of physics, more generally, even science more generally. Historically, the direction of physics has been to study smaller and smaller bits of matter. In the 19th century, it was understood that the air in the room was made up of molecules. Molecules are made of atoms, and atoms are made of protons, electrons and neutrons. And physicists kept trying to understand the smallest bits of matter and the forces between them. And then they understood that nuclei are made up of quarks. So this lineage has proceeded from atomic physics to nuclear physics to particle physics and beyond, in a way. The reductionist approach to nature has been the dominant direction of physics and it makes sense. But it’s also been realized in recent decades that not everything can be understood by looking at the smallest bits of nature. There are properties that emerge as systems become more complex. This is the science of complex systems. It’s the direction opposite to the reductionist approach. They are not in competition. The most complex system that we know, a living organism, might be understood in some way by knowing what nuclei are made of. But practically speaking it cannot. Plasma behavior is determined by billions of particles interacting simultaneously with each other. This produces a fascinating array of phenomena. Our goal is to discover basic principles that describe these phenomena. In the 19th century, the powerful concept of entropy production was discovered to describe the relatively simpler case of a gas of neutral particles in equilibrium. Unraveling the behavior of the plasma state is teaching us how to produce fusion energy, understand the plasma universe, and make computer chips.
 
Congress has been debating budgets for months with much talk of cuts to research, including the budget of the DOE’s Office of Science, which funds the American fusion program. What is your view on proposed reductions to research budgets?
 
Cutting research and development in science, engineering and energy research is counterproductive to our economic health. Yes, one has to control spending. But one has to do it to make us more economically competitive, not less. If times are tight economically and one erodes our science and energy research infrastructure, it will make us more poor, not more prosperous. As is said: to lighten the load in an airplane in flight, you don’t throw the engine overboard.
 
What is your position on cuts specific to the fusion research budget?
 
Fusion is almost a special case. The U.S. investment used to be about three times what it is today. So the fusion program is already pretty lean. We are trying to stay at the world forefront despite our resources. Significant further cuts would knock the U.S. off the world stage in fusion and consign us to third world status in fusion.
 
What are your priorities and vision for PPPL?
 
The vision for the lab is that it be at the world forefront of fusion research, in basic plasma physics, and in many applications of plasma science. We aim to aggressively enhance the knowledge base to deliver fusion to the world as quickly as possible. We also wish to expand our activities across the broad frontier of plasma science and technology.
 
Within fusion, we wish the lab to continue its leading role in planning the next step in fusion research in the U.S. PPPL should play a key scientific role in such a fusion nuclear facility, wherever it is constructed. We wish to develop new solutions for fusion that require major facilities at PPPL. We want to use our talents to fill the need to solve the remaining problems, such as how to control the plasma and how to surround the plasma with the proper material. There is an enormous need for new ideas. That’s where we can thrive.
 
Can you discuss the lab’s working relationship with Princeton University?
 
One of the terrific aspects of PPPL is that it is part of Princeton University. We host the plasma physics graduate program through the Department of Astrophysical Sciences, with 35 graduate students working at the lab. We have scientific links and collaborations with many parts of the university. Scientists at PPPL are working with material scientists on campus to find the best substances to contain plasmas during fusion reactions. PPPL physicists are collaborating with theoretical astrophysicists on campus to solve problems ranging from how solar flares work to why matter accretes so rapidly onto black holes. And PPPL is wonderfully managed by the University.
 
Why did you pick plasma physics as your area of research?
 
Two reasons – fascinating physics and a potentially momentous application. When I was a graduate student in the 1970s, it appeared to me that we were running out of energy. Now we have the added issue of global climate change. As time goes on, the need for fusion only becomes greater. People often think that fusion scientists are frustrated every day because fusion is not yet available on the commercial power grid. But the physics and engineering of this problem are captivating. When we discover something in the lab, it is just a pleasure to learn it. And it’s useful because of all the spinoffs to science and technology. It’s gratifying the way that art is gratifying. The only frustration is that our progress toward fusion could be more rapid, if only that were called for.


Source

Friday, May 13, 2011

Mini black holes that look like atoms could pass through Earth daily


In their paper, which is posted at arXiv.org, Aaron P. VanDevender from Halcyon Molecular in Redwood City, California, and J. Pace VanDevender from Sandia National Laboratories in Albuquerque, New Mexico, wanted to find a way to detect the mini black holes that are thought to exist in nature. Their calculations suggest that mini black holes may be passing through the Earth on a daily basis, and pose a very minimal threat to the planet.

Orbiting matter

Mini black holes are different from ordinary astrophysical black holes in terms of how they’re formed and their size. Whereas astrophysical black holes are formed by the collapse of giant stars, mini black holes are thought to have formed during the Big Bang, which is why they’re also called primordial black holes. And while an astrophysical black hole has a minimum mass of about 10^30 kg, the masses of mini black holes range from the tiny Planck mass to trillions of kilograms or more, still much smaller than astrophysical black holes. (Although physics should allow for black holes of all sizes, scientists don't yet know of any mechanism that could produce objects in the intermediate range.) The expected mass of laboratory-produced mini black holes is on the small side, about 10^-23 kg. Because of their extreme density, even the most massive mini black hole is microscopic in size.

The conventional view of a black hole is one of an object that is so dense that its powerful gravity pulls all nearby matter past a critical point called the event horizon, from which it cannot escape. But the VanDevenders have suggested that something different happens with mini black holes with masses below 10^12 kg. Instead of absorbing matter, these mini black holes may gravitationally bind matter, so that matter orbits the black holes at a certain distance. Because matter atoms orbiting a black hole due to gravity are reminiscent of the way that electrons orbit a nucleus due to electrostatic forces - both without collapsing inward - the physicists call this theoretical system the Gravitational Equivalent of an Atom (GEA).

Although this may seem purely theoretical, the concept could provide a way to test the current theory of how mini black holes age and die, called quantum evaporation. In this process, mini black holes lose mass until they eventually disappear. As they lose mass, they should produce X-rays. However, attempts to observe the X-ray signature of the final stages of evaporation have so far been unsuccessful. This lack of evidence suggests that either mini black holes were not created in large numbers as predicted, or that they do not evaporate.

Assuming the latter explanation, the VanDevenders propose that, instead of searching only for evaporation effects, researchers should search for evidence of the actual existence of the mini black holes, as well. If their theory of mini black holes as GEAs is correct, then the gravitationally bound matter in a GEA should produce emissions that could be detected with current detectors, even though the chance of detecting these emissions would be slim.

“Quantum evaporation has been a major cornerstone of quantum gravity theories for three decades, yet it has never been experimentally confirmed,” Aaron VanDevender told PhysOrg.com. “Our study asks, ‘What if small black holes do not evaporate?’ We have shown that if they do not evaporate, they may interact with matter and be detected. If we are able to observe such objects, it will have a significant impact on our understanding of black hole evaporation, and quantum gravity in general.”

How a GEA works

In their paper, the researchers mathematically describe how a black hole can exist on Earth without consuming all of the surrounding mass. Such a mini black hole has constraints on its Schwarzschild radius, which is the closest an object can get to a black hole before it is absorbed, never to escape. Any object smaller than its own Schwarzschild radius is a black hole. But because mini black holes with masses below 10^12 kg are so small, they can have a Schwarzschild radius that is much smaller than the orbit of the gravitationally bound matter particles. As long as these matter particles stay beyond the mini black hole’s Schwarzschild radius, they will orbit rather than be absorbed. (Black holes with masses of 10^12 kg have a Schwarzschild radius that equals the ground state radius at which the nearest matter particles orbit, so this mass is the upper limit for a GEA.) The researchers compare the GEA’s risk of collapse with that of real atoms.
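
To make these scales concrete, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not the VanDevenders' calculation: it compares the Schwarzschild radius r_s = 2GM/c^2 with a hydrogen-like "gravitational Bohr radius" a = hbar^2/(G*M*m^2) for a bound proton. With these textbook conventions the two radii cross near, though not exactly at, the 10^12 kg limit quoted above.

```python
# Back-of-the-envelope comparison of a mini black hole's Schwarzschild
# radius with the ground-state orbit of a gravitationally bound proton.
# The "gravitational Bohr radius" is the hydrogen-atom analog (an
# assumption for illustration, not the paper's detailed treatment).

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s
m_p  = 1.673e-27   # proton mass, kg

def schwarzschild_radius(M):
    """r_s = 2GM/c^2: matter inside this radius cannot escape (metres)."""
    return 2 * G * M / c**2

def gravitational_bohr_radius(M, m=m_p):
    """Hydrogen-like ground-state radius a = hbar^2/(G M m^2) (metres)."""
    return hbar**2 / (G * M * m**2)

for M in (1.0, 1e5, 1e12):  # black hole masses in kg
    rs = schwarzschild_radius(M)
    a0 = gravitational_bohr_radius(M)
    print(f"M = {M:.0e} kg: r_s = {rs:.1e} m, a0 = {a0:.1e} m, "
          f"orbit survives: {a0 > rs}")
```

With these numbers, a 100,000 kg black hole binds a proton at roughly atomic dimensions (about 10^-10 m), which is what makes the atom analogy apt; toward the quoted 10^12 kg upper limit, the bound orbit approaches the event horizon and the GEA picture breaks down.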

“The concern that a terrestrial GEA might absorb the earth is similar to the early 20th century expectation that electrons orbiting a nucleus should radiate their energy away and fall into the nucleus,” the researchers wrote in their study. “Since the electron energy levels are quantized and the expectation value of the radius of the ground state is much larger than the radius of the nucleus, the probability of an electron being captured by the nucleus is vanishingly small. Similarly, particles of mass m are unlikely to fall into the black hole at the center of a GEA; however, those few that do could, in principle, provide energy for observable emissions.”

Up close

The scientists calculated that mini black holes with a mass of about 100,000 kg may be of particular interest, since they could be candidates for dark matter. They estimated that, if dark matter is composed primarily of mini black holes and is evenly distributed throughout the galaxy, then about 40 million kg of mini black holes should pass through the Earth every year. At roughly 100,000 kg apiece, that works out to about 400 mini black holes per year that could be detectable through the strong electromagnetic emissions from their gravitationally bound matter.

If a particle on Earth approaches a GEA while it’s passing through the planet, the particle could scatter off, be captured into orbit, or strip an already bound particle away. Because the mini black hole’s velocity is high compared to the modest binding energy with which it holds its particles, the researchers predict that it would quickly be stripped of its particles as it passes through the Earth. Therefore, the search for the emissions should be focused on space-based sources.

“It would be difficult, but not impossible [to detect one of the mini black holes passing through the Earth],” Aaron VanDevender said. “The available power of a GEA to emit detectable radiation is small but not negligible. It would likely be substantially easier to observe a GEA in orbit around the Earth, rather than one that is passing through at a tremendous velocity. Also, a larger GEA will likely be much easier to detect, so it is worth focusing our observational efforts on objects in the range of 10^4 to 10^6 kg.”

The researchers also noted that black holes created at the LHC would be too small and not have sufficient binding energy to bind matter into quantum orbitals that might emit detectable radiation.

In any case, according to this theory, mini black holes of any size would not absorb large amounts of matter very quickly. The scientists calculated that a black hole with a mass of 1 kg would take 10^33 years to swallow the Earth. For comparison, the Universe is about 13.7 × 10^9 years old. And for smaller black holes, like those that might be formed at the LHC, the time it would take to absorb the Earth would be even longer.


Source

Tuesday, May 10, 2011

169 years after its discovery, Doppler effect found even at molecular level

But for the first time, scientists have experimentally shown a different version of the Doppler effect at a much, much smaller level: the rotation of an individual molecule. Such an effect had previously only been theorized; it took a complex experiment with a synchrotron to prove it is real.

"Some of us thought of this some time ago, but it's very difficult to show experimentally,"said T. Darrah Thomas, a professor emeritus of chemistry at Oregon State University and part of an international research team that today announced its findings in.

Most illustrations of the Doppler effect are called "translational," meaning the change in frequency of light or sound when one object moves away from the other in a straight line, like a car passing a radar gun. The basic concept has been understood since the Austrian physicist Christian Doppler first proposed it in 1842.

But a similar effect can be observed when something rotates as well, scientists say.

"There is plenty of evidence of the rotational Doppler effect in large bodies, such as a spinning planet or galaxy,"Thomas said."When a planet rotates, the light coming from it shifts to higher frequency on the side spinning toward you and a lower frequency on the side spinning away from you. But this same basic force is at work even on the molecular level."

In astrophysics, this rotational Doppler effect has been used to determine the rotational velocity of things such as planets. But in the new study, scientists from Japan, Sweden, France and the United States provided the first experimental proof that the same thing happens even with molecules.

At this tiny level, the study showed, the rotational Doppler effect can be even more important than the effect of the molecules' linear motion.

The findings are expected to have applications in molecular spectroscopy, in which the radiation emitted from molecules is used to study their makeup and chemical properties. The work is also relevant to the study of high-energy electrons, Thomas said.

"There are some studies where a better understanding of this rotational Doppler effect will be important,"Thomas said."Mostly it's just interesting. We've known about the Doppler effect for a very long time but until now have never been able to see the rotationalin molecules."


Source

Monday, May 9, 2011

Fundamental question on how life started solved?

The researchers published their results in a coming issue of a scientific journal.

"Attempts to calculate the Hoyle state have been unsuccessful since 1954,"said Professor Dr. Ulf-G. Meißner (Helmholtz-Institut für Strahlen- und Kernphysik der Universität Bonn)."But now, we have done it!"The Hoyle state is an energy-rich form of the carbon nucleus. It is the mountain pass over which all roads from one valley to the next lead: From the three nuclei of helium gas to the much larger carbon nucleus. This fusion reaction takes place in the hot interior of heavy stars. If the Hoyle state did not exist, only very little carbon or other higher elements such as oxygen, nitrogen and iron could have formed. Without this type of carbon nucleus, life probably also would not have been possible.

The search for the "slave transmitter"

The Hoyle state had been verified by experiments as early as 1954, but calculating it always failed, for this form of carbon consists of just three very loosely linked helium nuclei - more a cloudy, diffuse carbon nucleus than a compact one. And it does not occur individually, only together with other forms of carbon. "This is as if you wanted to analyze a radio signal whose main transmitter and several slave transmitters are interfering with each other," explained Prof. Dr. Evgeny Epelbaum (Institute of Theoretical Physics II at Ruhr-Universität Bochum). The main transmitter is the stable carbon nucleus from which humans - among others - are made. "But we are interested in one of the unstable, energy-rich carbon nuclei; so we have to separate the weaker radio transmitter somehow from the dominant signal by means of a noise filter."

What made this possible was a new, improved computational approach that allowed the researchers to calculate the forces between several nuclear particles more precisely than ever before. And in JUGENE, the supercomputer at Forschungszentrum Jülich, they found a suitable tool. The calculation took JUGENE almost a week. The results matched the experimental data so well that the researchers can be certain that they have indeed calculated the Hoyle state.

More about how the Universe came into existence

"Now we can analyze this exciting and essential form of the carbon nucleus in every detail,"explained Prof. Meißner."We will determine how big it is, and what its structure is. And it also means that we can now take a very close look at the entire chain of how elements are formed."

In future, this may even allow answering philosophical questions using science. For decades, the Hoyle state was a prime example for the theory that natural constants must have precisely their experimentally determined values, and not any different ones, since otherwise we would not be here to observe the Universe (the anthropic principle). "For the Hoyle state this means that it must have exactly the amount of energy it has, or else, we would not exist," said Prof. Meißner. "Now we can calculate whether - in a changed world with other parameters - the Hoyle state would indeed have a different energy relative to the mass of three helium nuclei." If this is so, this would confirm the anthropic principle.


Source

Sunday, May 8, 2011

Minnesota researcher's findings on dark matter jibe with Italy's DAMA/LIBRA claims

The DAMA/LIBRA team has insisted for 12 years that the data from its detectors backs up the theory that not only does dark matter exist, but that its existence can be shown by seasonal changes in the rate of recoils registered in the detectors. The idea is that because the Earth moves and the supposed clouds of dark matter don't, there should be times of higher activity when the Earth is moving into or through an area of dense dark matter, and lower activity when it's not. This is the argument the DAMA/LIBRA team uses to explain the seasonal changes in the number of hits it sees.
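
The expected signature is commonly modelled as a small cosine modulation on a constant event rate, peaking around early June, when the Earth's orbital velocity adds to the Sun's motion through the galaxy, and dipping in December. Here is a minimal sketch with illustrative parameters, not either collaboration's fitted values:

```python
# Toy annual-modulation model: a small cosine riding on a constant
# background rate, R(t) = R0 + Rm * cos(2*pi*(t - t0)/T), with the
# peak placed near June 2 (day ~152 of the year). Amplitudes here are
# made up for illustration.
import math

def expected_rate(t_days, R0=1.0, Rm=0.02, t0=152.5, T=365.25):
    """Event rate (arbitrary units) on day t_days of the year."""
    return R0 + Rm * math.cos(2 * math.pi * (t_days - t0) / T)

print(f"early June (peak)    : {expected_rate(152.5):.3f}")
print(f"early December (dip) : {expected_rate(335.0):.3f}")
```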

The problem with all this, though, is that other teams have not been able to reproduce the results shown first by the DAMA/LIBRA team, and now by the CoGeNT team (most notably the XENON100 team). Making the whole argument even more sensational is that the CoGeNT team actually set out to prove to the world that the DAMA/LIBRA team was wrong.

All of the teams are working to prove or disprove the notion that theoretical particles of dark matter, called WIMPs, exist, and thus to back up or refute the many other theories that seek to explain unexplained phenomena in the universe, such as what holds galaxies together. Some theories suggest that if dark matter does truly exist, it likely makes up some eighty percent of all the matter there is; if it doesn't, however, a lot of theorists will be going back to the drawing board.

One thing is certain, however: much more research will have to be done, both by those who are seeing results and those who aren't, before anyone can come close to claiming to understand the invisible forces that make the universe what it is.


Source

Saturday, May 7, 2011

New tool for proton spin

New tool for proton spin


Nowadays, the most popular theory for subatomic particles is the Standard Model: a menagerie of fundamental particles including quarks, which come in six different types or flavors, and three of the four fundamental forces. These forces include the ‘weak’ force that is mediated by particles called W bosons, which are created, albeit only briefly, when protons collide. The researchers discovered that these W bosons are a sensitive probe of the quarks that make up a proton.

To investigate proton spin, the PHENIX team fired two beams of high-energy protons at one another using the Relativistic Heavy Ion Collider at BNL. “Most of the interactions that take place when the protons collide are ‘strong’ interactions,” explains Okada. “But our experiment was sensitive enough to detect ‘weak’ interactions too.” The researchers identified two such weak reactions: detection of an electron indicated the decay of a negatively charged W boson (Fig. 1); and detection of a positron - the positively charged antiparticle of the electron - indicated the decay of a positively charged W boson. By counting the resulting electrons and positrons, the researchers could calculate the probability of each type of interaction.

The PHENIX team then performed two experiments simultaneously. In one, they made protons spin parallel to the axis of the beam; and in the other, they made them spin in the opposite direction. The difference in the rate of weak interactions in each experiment provided information about the spin direction of the quarks in the proton.“The asymmetry of the production rates is connected to the probability that the spin of a particular flavor of quark is aligned to the proton spin direction,” says Okada. This approach could soon be extended to identify the spin contribution of all the proton’s quarks.
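
The quantity extracted this way is a longitudinal single-spin asymmetry. The sketch below shows the counting arithmetic with invented yields and a nominal beam polarization; the numbers are not PHENIX's:

```python
# Toy single-spin asymmetry: A_L = (1/P) * (N+ - N-)/(N+ + N-), where
# N+ and N- are event counts with the proton spin aligned and
# anti-aligned with the beam, and P is the beam polarization.
import math

def single_spin_asymmetry(n_plus, n_minus, polarization=0.5):
    """Return (A_L, statistical uncertainty) from two spin-sorted counts."""
    total = n_plus + n_minus
    raw = (n_plus - n_minus) / total          # raw counting asymmetry
    err_raw = math.sqrt((1.0 - raw**2) / total)  # binomial statistics
    return raw / polarization, err_raw / polarization

# Invented yields for illustration only
a_l, err = single_spin_asymmetry(n_plus=1200, n_minus=950)
print(f"A_L = {a_l:.3f} +/- {err:.3f}")
```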

Next the team hopes to improve the sensitivity of the experiment. “This time, we only caught electrons and positrons that emerged at 90 degrees to the beam axis,” explains Okada. “We are preparing new detectors to extend this detection region for a more complete analysis.”


Source

Friday, May 6, 2011

Testing technicolor physics

Testing technicolor physics


The Higgs boson is a central component of the "standard model," a theory that defines the relationships between the forces of the universe. But what if the Higgs boson is not a fundamental particle, but rather a bound state of new particles that have not yet been seen?

"From the beginning of the standard model, people have been unhappy with the idea that the Higgs boson is a fundamental particle,"said Thomas DeGrand, professor of physics at the University of Colorado.

Scientists who advocate alternative models of particle physics are motivated by the theory of superconductivity. The superconducting state, which is very different from ordinary matter, is not characterized by new particles, but by Cooper pairs--bound states of electrons.

The standard model of particle physics suggests that the protons and neutrons we're familiar with, which comprise the nuclei of atoms, are made up of combinations of different types of elementary particles called quarks and gluons. We cannot see these particles in isolation because a fundamental force called "the strong interaction" holds them firmly together, but experiments lead physicists to believe these quarks come in three varieties, each with different properties, or "colors".

No one knows why the particle content of the standard model is what it is. A logical possibility exists that there could be more kinds of quarks and gluons with different numbers of colors, strongly interacting with each other. Collectively, these possibilities are known as technicolor theories.

DeGrand spent many years studying the theory behind the interactions of quarks and gluons, known as quantum chromodynamics (QCD), before he turned his attention to technicolor theories. He is the co-author of one of the standard books in the field.

However, QCD does not describe all aspects of the standard model, especially the nature of "final" undiscovered particles. The construction of the LHC, and the sense that new knowledge was waiting around the corner, drew DeGrand to explorations of alternative particle theories.

"I became a little tired of lattice QCD,"DeGrand said."The technicolor theories held more interesting questions."

Within the last five years, scientists realized that many of the computational techniques that had been invented for QCD could be applied to technicolor theories as well. In 2008, two colleagues from Tel Aviv University, Yigal Shamir and Benjamin Svetitsky, invited DeGrand to join their research team. Applying the same methodology DeGrand helped perfect for QCD, the team began simulating technicolor candidate theories and drawing conclusions from the odd outcomes of these simulated worlds.

Testing technicolor physics

Above is a map of the technicolor particle theories that Thomas DeGrand studies. The axes represent (horizontal) the number of colors and (vertical) the number of flavors of quarks. The different colors describe different kinds of color structure for the quarks. The shaded bands are where theorists (D. Dietrich and F. Sannino) predict that there are "unparticle" theories. Credit: Physical Review D75, 085018 (2007)

To calculate the interactions of new kinds of quarks and gluons in various configurations on giant lattice grids, the team used the National Science Foundation (NSF)-funded Ranger supercomputer--one of the largest in the world--located at the Texas Advanced Computing Center. In the numerical simulations, the grids are housed (virtually) in boxes of various sizes, and the reaction of the particles to the box size provides information about the energy characteristics of the system.

"This is not something an experimentalist can do,"DeGrand said."But as theorists, we can invent these fake worlds where the system sits in a specific sized box, and we can measure the strength of the quark-gluon interactions in large boxes, medium boxes and small boxes to see how it changes. Changing the energy or momentum scales is related to changing the physical size of the system."

Supercomputing plays a pivotal role in this process, which involves solving complex quantum equations for a large number of particles. Over the last two years, DeGrand and his colleagues have used nearly three million processing hours on Ranger (the equivalent of 340 years on a single processor) to simulate new particles comprised of quarks with two colors and three colors, respectively. The simulations help characterize the new particles and determine whether they are candidates for beyond-the-standard-model physics.

The Ranger simulations revealed that the simplest technicolor models, with two colors, have properties that are very different from a conventional particle system. Professor H. Georgi of Harvard University coined the name "unparticle theory" to describe such systems.

The three-color system was a bit more mysterious: the researchers couldn't tell whether it was a particle theory or an unparticle theory. However, the simulations clearly didn't represent a viable real-world scenario. For a technicolor theory to be a feasible candidate for new physics, it must exhibit unusual behavior to avoid conflict with the constraints set by present-day experimental knowledge, and the three-color system didn't satisfy these criteria. The researchers are now extending their map of possible particle models by simulating four-color quark systems.

"The idea that the Higgs mechanism could be caused by the strong interactions of still-to-be-discoveredhas been with us for some time, but until recently, it has been difficult to test this idea for lack of adequate computing resources,"said Carlton DeTar, a longtime collaborator from the University of Utah not involved in the current research."DeGrand and his collaborators are among the foremost groups in the world using powerful numerical simulations to investigate this exciting alternative. The results could have profound implications for the search for the Higgs particle at the Large Hadron Collider in Europe."

Just as Salvador Dali's surreal paintings reveal taken-for-granted aspects of our material world, the alternative technicolor theories have intellectual value. They teach us about particle theories by placing them in a larger context.

For DeGrand, the stretching of the mind is what matters most--that, and the adventure of discovering something truly novel.

"It's high-risk, high-reward research,"DeGrand said.

But if he and his colleagues find a viable alternative to the Higgs boson, it could drive the next theory of everything.

The work was also supported by the U.S. Department of Energy. Additional computations were done on facilities of the USQCD Collaboration at Fermilab, which are funded by the Office of Science of the U.S. Department of Energy.


Source