Saturday, March 24, 2007

Alternative theory of gravity explains large structure formation -- without dark matter


But the feature of Bekenstein’s theory that Dodelson and Liguori focus on most is that the theory—unlike standard general relativity—allows for fast growth of density perturbations arising from small inhomogeneities during recombination. Building on this finding, made by Skordis et al. earlier this year, Dodelson and Liguori have identified which aspect of the theory actually causes the enhanced growth—the part that may solve the cosmological structure problem.

The pair has discovered that, while Bekenstein’s theory has three functions which characterize space-time—a tensor, vector and scalar (TeVeS)—it’s the perturbations in the vector field that are key to the enhanced growth. General relativity describes space-time with only a tensor (the metric), so it does not include these vector perturbations.

“The vector field solves only the enhanced growth problem,” said Dodelson. “It does so by exploiting a little-known fact about gravity. In our solar system or galaxy, when we attack the problem of gravity, we solve the equation for the Newtonian potential. Actually, there are two potentials that characterize gravity: the one usually called the Newtonian potential and the perturbation to the curvature of space. These two potentials are almost always very nearly equal to one another, so it is not usually necessary to distinguish them.

“In the case of TeVeS, the vector field sources the difference between the two,” he continued. “As it begins to grow, the difference between the two potentials grows as well. This is ultimately what drives the overdense regions to accrete more matter than in standard general relativity. The quite remarkable thing about this growth is that Bekenstein introduced the vector field for his own completely independent reasons. As he remarked to me, ‘Sometimes theories are smarter than their creators.’"
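
To get a feel for the effect, consider a toy linear-growth calculation. This is not Bekenstein's TeVeS system, just a sketch, in Python, of the standard matter-era growth equation for the density contrast, with an ad-hoc boost factor standing in for the extra driving that a growing difference between the two potentials would supply; the value 0.5 below is purely illustrative.

    # Toy growth of a density perturbation (illustrative only; NOT the TeVeS
    # equations).  In a matter-dominated universe the linear density contrast
    # obeys  delta'' + 2H delta' = 4*pi*G*rho*delta,  with H = 2/(3t) and
    # 4*pi*G*rho = 2/(3t^2), giving the familiar delta ~ t^(2/3) growing mode.
    # The factor (1 + eps) mimics an extra source term.

    def growth(eps, t0=1.0, t1=100.0, n=100000):
        dt = (t1 - t0) / n
        t, delta, ddelta = t0, 1.0, 2.0 / (3.0 * t0)   # growing-mode initial data
        for _ in range(n):
            acc = (-2.0 * (2.0 / (3.0 * t)) * ddelta
                   + (2.0 / (3.0 * t * t)) * (1.0 + eps) * delta)
            delta += ddelta * dt
            ddelta += acc * dt
            t += dt
        return delta

    print("GR-like growth factor : %.1f" % growth(0.0))   # ~ 100^(2/3) = 21.5
    print("boosted growth factor : %.1f" % growth(0.5))   # ~ 49 over the same span

Even a modest boost to the source term more than doubles the growth over the same interval, which is the flavour of the enhancement Dodelson and Liguori describe.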

Dodelson and Liguori see this solution to large-scale structure formation as an important step for a gravity theory based on baryon-only matter. Other problems that such a theory (or any alternative theory) will have to confront include accounting for the mismatch between mass and light in galaxy clusters. The theory must also conform to at least two observations: the galaxy power spectrum on large scales, and the cosmic microwave background fluctuations, which are the seeds of today's galaxies and galaxy clusters.

“As Scott says, until dark matter is observed, skeptics will be allowed,” said Liguori. “Despite the many and impressive successes of the dark matter paradigm, which make it very likely to be correct, we still don't have any final and definitive answer. In light of this, it is important to keep an eye open for possible alternative explanations. Even when, after the analysis, alternative theories turn out to be wrong, the result is still important, as it strengthens the evidence for dark matter as the only possible explanation of observations.”

Citation: Dodelson, Scott and Liguori, Michele. “Can Cosmic Structure Form without Dark Matter?” Physical Review Letters 97, 231301 (2006).

By Lisa Zyga, Copyright 2006 PhysOrg.com

Wednesday, March 21, 2007

Loop quantum gravity


Loop quantum gravity (LQG), also known as loop gravity and quantum geometry, is a proposed quantum theory of spacetime which attempts to reconcile the seemingly incompatible theories of quantum mechanics and general relativity. This theory is one of a family of theories called canonical quantum gravity. It was developed in parallel with loop quantization, a rigorous framework for nonperturbative quantization of diffeomorphism-invariant gauge theory. In plain English, this is a quantum theory of gravity in which the very space that all other physics occurs in is quantized.

Constructed around the idea of spacetime quantization via the mathematically rigorous theory of loop quantization, LQG preserves many of the important features of general relativity while at the same time employing quantization of both space and time at the Planck scale, in the tradition of quantum mechanics.

LQG is not the only theory of quantum gravity. The critics of this theory say that LQG is a theory of gravity and nothing more, though some LQG theorists have tried to show that the theory can describe matter as well. There are other theories of quantum gravity, and a list of them can be found on the quantum gravity page.

Many string theorists believe that it is impossible to quantize gravity in 3+1 dimensions without creating matter and energy artifacts. This is not proven, and it is also unproven that the matter artifacts, predicted by string theory, are not exactly the same as observed matter. Should LQG succeed as a quantum theory of gravity, the known matter fields would have to be incorporated into the theory a posteriori. Lee Smolin, one of the fathers of LQG, has explored the possibility that string theory and LQG are two different approximations to the same ultimate theory.

The main claimed successes of loop quantum gravity are:

1. It is a nonperturbative quantization of 3-space geometry, with quantized area and volume operators.
2. It includes a calculation of the entropy of black holes.
3. It is a viable gravity-only alternative to string theory.

However, these claims are not universally accepted. While many of the core results are rigorous mathematical physics, their physical interpretations remain speculative, and LQG may yet turn out to be a quantization of geometry alone rather than of full gravity. For example, the entropy calculated in (2) is for a kind of hole which may, or may not, be a black hole.
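
To make claim (1) concrete: the commonly quoted form of the LQG area spectrum is A = 8*pi*gamma*l_P^2 * sum of sqrt(j(j+1)) over the spin-network edges puncturing a surface, where gamma is the Barbero-Immirzi parameter. The value of gamma used below is one choice found in the literature, not a settled constant; the sketch just shows how discrete the spectrum is.

    import math

    PLANCK_LENGTH = 1.616e-35    # metres
    GAMMA = 0.2375               # Barbero-Immirzi parameter (one quoted value)

    def area_eigenvalue(spins):
        """Area eigenvalue for a surface punctured by edges carrying
        half-integer spins j: A = 8*pi*gamma*l_P^2 * sum sqrt(j(j+1))."""
        return 8 * math.pi * GAMMA * PLANCK_LENGTH ** 2 * sum(
            math.sqrt(j * (j + 1)) for j in spins)

    # smallest quantum of area: a single puncture with j = 1/2
    print("area gap       : %.3e m^2" % area_eigenvalue([0.5]))
    # a surface crossed by three edges with spins 1/2, 1 and 3/2
    print("three punctures: %.3e m^2" % area_eigenvalue([0.5, 1.0, 1.5]))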

Some alternative approaches to quantum gravity, such as spin foam models, are closely related to loop quantum gravity.

MORE AT <http://en.wikipedia.org/wiki/Loop_quantum_gravity>

Tuesday, March 20, 2007

Computers That Run On Light?


An £820,000 research project begins soon which could be an important step in bringing the dream of photonic computers – devices run using light rather than electronics – onto the desktop.

Physicists at the University of Bath will be looking at developing attosecond technology – the ability to send out light in a continuous series of pulses that last only an attosecond, one billion-billionth of a second.

The research could not only develop the important technology of photonics, but could give physicists the chance to look at the world of atomic structure very closely for the first time.

In June Dr Fetah Benabid, of the Department of Physics at Bath, will lead a team of researchers in developing a new technique that would enable them to synthesise ‘waveforms’ of light with the same accuracy with which electronic circuits shape waveforms of electric current. Waveform synthesis is the ability to control very precisely the way that an electric field varies in time.

Ordinarily, electric fields rise and fall in energy in a regular pattern similar to the troughs and crests of waves on the ocean, but modern electronics allows a close control over the shape of the ‘wave’ – in effect creating waves that are square or triangular or other shapes rather than curved.

It is this control of the variation of the electric field that allows electronic devices such as computers to function in the precise way needed.
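
The "shaping" being described is, mathematically, Fourier synthesis: adding sine waves at harmonic frequencies in the right proportions. A minimal Python sketch (illustrative of the principle, not of the Bath group's optical method) shows a square wave emerging from pure sine waves:

    import math

    def square_wave(t, harmonics):
        """Partial Fourier sum for a unit square wave:
        (4/pi) * sum over odd k of sin(k*t)/k."""
        return (4 / math.pi) * sum(
            math.sin(k * t) / k for k in range(1, 2 * harmonics, 2))

    # more harmonics -> flatter top, sharper edges (exact value is 1.0)
    for n in (1, 3, 10, 100):
        print("n=%3d  value at t=pi/2: %+.4f" % (n, square_wave(math.pi / 2, n)))

The more frequency components that can be added coherently, the sharper the wave's features, which is why the broad "comb" of optical frequencies described below is the key ingredient.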

But electronics has its limitations, and the development of ever smaller silicon chips which has allowed computers to double in memory size every 18 months or so will come to an end in the next few years because the laws of physics do not permit chips smaller than a certain size.

Instead, engineers are looking to the science of photonics, which uses light to convey information, as a much more powerful alternative. But so far photonics can use light whose waveform is in one shape only – a curve known as a sine wave – and this has limited value for the communications needed to run a computer, for example.

The Bath researchers want to allow photonics to create waveforms in a variety of different patterns. To do this, they are using the new photonic crystal fibres which are a great step forward in photonics because, unlike conventional optical fibres, they can channel light without losing much of its energy.

In the research, light of one wavelength will be passed down a photonic crystal fibre which then branches off into a tree-like arrangement of fibres, each carrying a slightly different wavelength, creating a broad ‘comb-like’ spectrum of light from the ultra-violet to the middle of the infra-red range.

This broad spectrum would allow close control over the electric field, which is the basis of conveying the enormous amounts of information that modern devices like computers need. The project is funded by a grant from the Engineering and Physical Sciences Research Council.

“Harnessing optical waves would represent a huge step, perhaps the definitive one, in establishing the photonics era,” said Dr Benabid.

“Since the development of the laser, a major goal in science and technology has been to emulate the breakthroughs of electronics by using optical waves. We feel this project could be a big step in this.

“If successful, the research will be the basis for a revolution in computer power as dramatic as that over the past 50 years."

Dr Benabid said that the technology that could be built if his research was successful could, for instance, make lasers that operate at wavelengths that current technology cannot now create, which would be important for surgery.

The continual series of short bursts of light will not only dramatically affect technology - it will also advance physics by giving researchers the chance to look inside the atom.

Although atoms can now be “seen” using devices such as electron microscopes, it has not been possible to examine their fast dynamics.

By sending the light in short bursts into an atom, they will be able to work out the movements of electrons, the tiny negatively charged particles that orbit the atom’s nucleus.

This may throw light, literally, upon the strange quantum world of sub-atomic particles, which have no definite position, but are only ‘probably’ in one place until observed.

TAKEN FROM <http://www.sciencedaily.com/releases/2007/03/070319114521.htm>

Monday, March 19, 2007

Project Orion


Project Orion was an advanced rocket design explored in the 1960s. Orion is also the name of NASA's new spacecraft for human space exploration, previously known as the Crew Exploration Vehicle, which is designed to replace the Space Shuttle and eventually return astronauts to the Moon.

The 1960s Project Orion examined the feasibility of building a nuclear-pulse rocket powered by nuclear fission. It was carried out by physicist Theodore Taylor and others over a seven-year period, beginning in 1958, with United States Air Force support. The propulsion system advocated for the Orion spacecraft was based on an idea first put forward by Stanislaw Ulam and Cornelius Everett in a classified paper in 1955. Ulam and Everett suggested releasing atomic bombs behind a spacecraft, followed by disks made of solid propellant. The bombs would explode, vaporizing the material of the disks and converting it into hot plasma. As this plasma rushed out in all directions, some of it would catch up with the spacecraft, impinge upon a pusher plate, and so drive the vehicle forward.
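
A rough sense of why the scheme was attractive comes from the rocket equation. The numbers below are illustrative assumptions only (published nuclear-pulse studies quoted a wide range of effective exhaust velocities), not figures from the Orion design documents:

    import math

    def delta_v(exhaust_velocity, mass_ratio):
        """Tsiolkovsky rocket equation: dv = v_e * ln(m0/m1)."""
        return exhaust_velocity * math.log(mass_ratio)

    MASS_RATIO = 4.0    # assumed ratio of initial to final mass
    for name, ve in [("chemical rocket, ~4.5 km/s", 4.5e3),
                     ("nuclear pulse, assumed ~30 km/s", 30e3)]:
        print("%-33s dv = %5.1f km/s" % (name, delta_v(ve, MASS_RATIO) / 1e3))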

Overshadowed by the Moon race, Orion was forgotten by almost everybody except Freeman Dyson and Taylor. Dyson reflected that "this is the first time in modern history that a major expansion of human technology has been suppressed for political reasons." In 1968 he wrote a paper about nuclear pulse drives and even large starships that might be propelled in this way. But ultimately, the radiation hazard associated with the early ground-launch idea led him to become disillusioned with the concept. Even so, he argued that the most extensive flight program envisaged by Taylor and himself would have added no more than 1% to the atmospheric contamination then (c. 1960) being created by the weapons-testing of the major powers.

Being based on fission fuel, the Orion concept is inherently "dirty" and probably no longer socially acceptable even if used only well away from planetary environments. A much better basis for a nuclear-pulse rocket is nuclear fusion – a possibility first explored in detail by the British Interplanetary Society in the Daedalus project.

TAKEN FROM <http://www.daviddarling.info/encyclopedia/O/OrionProj.html>

Sunday, March 18, 2007

Light goes backwards in time


Physicists today claim to have made light travel at 300 times its usual speed. But don't write off Einstein, and don't hold your breath for a time-travelling Star Trek universe, warns Paul Davies.

On the face of it, today's announcement in Nature that a team of Princeton physicists have broken the light barrier demolishes what is arguably science's most cherished principle.

Ever since Albert Einstein formulated his theory of relativity nearly a century ago, it has been a central tenet of physics that nothing can travel faster than light. Now it is claimed that in certain circumstances, light itself can be accelerated up to 300 times its usual speed. But it's too soon to consign the textbooks to the dustbin. As always, the devil is in the detail.

Moving through a vacuum, light travels at 300,000 km per second. According to the theory of relativity, it is the ultimate speed limit for the propagation of any physical influence. That includes spacecraft, subatomic particles, radio signals, or anything that might convey information or cause an effect.

When light passes through a medium such as air, it is slowed. The effect is best explained by analogy with water waves. Try throwing a stone in a pond to make ripples. Focus on a particular wave crest, and it will appear to move fairly fast, but then take a wider perspective to view the group of waves as a whole, and it travels outwards from the point of disturbance noticeably more slowly. It is almost as if the waves are rushing to get nowhere fast. You can watch as new ripples rise up at the back of the wave group, whiz forwards, and fade away at the front.

The same thing happens to light in a medium. It comes about because atoms in the medium create outgoing ripples of light as the primary light wave sweeps by them. When these ripples overlap and combine with the primary wave, they obliterate the parts racing on ahead, suppressing the fast-moving wave front and serving to slow down the group. So light passing through a medium has two associated velocities: that of the group as a whole, and that of the individual wave crests, known as the phase velocity.
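
The pond analogy can be made quantitative. Deep-water ripples obey the dispersion relation omega = sqrt(g*k), for which the group travels at exactly half the speed of the individual crests; that is why crests appear to whiz forward through the group and fade away at the front. A small Python check:

    import math

    g = 9.81                       # m/s^2
    wavelength = 0.5               # metres, a pond ripple
    k = 2 * math.pi / wavelength
    omega = math.sqrt(g * k)       # deep-water dispersion relation

    v_phase = omega / k            # speed of individual crests
    dk = 1e-6                      # group velocity d(omega)/dk, numerically:
    v_group = (math.sqrt(g * (k + dk)) - math.sqrt(g * (k - dk))) / (2 * dk)

    print("phase velocity: %.3f m/s" % v_phase)
    print("group velocity: %.3f m/s (exactly half)" % v_group)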

A normal medium always reduces the group velocity of light to below its phase velocity, leading to the familiar phenomenon of refraction - the effect that causes a stick to look bent when it is stuck in water. The special feature of the Princeton experiment was the creation of a peculiar state of matter in which this situation is reversed: the secondary ripples of light actually make the wave group travel faster than the phase velocity.

To achieve this odd state of affairs, the scientists used a gas of cold caesium, and then excited the caesium atoms with a laser. So energised, the atoms do more than cause secondary ripples of light, they amplify the light too. It is this amplification that is the key to boosting the speed of the wave group, reportedly to 300 times the speed of light in a vacuum. Bizarrely, the wave distortion achieved is so large, it causes the group velocity to become negative, which means the peak of the wave pulse appears to exit the gas before it enters. In other words, the light waves seem to run backwards.

What makes this result so sensational is the relationship between light speed and causality. The theory of relativity predicts that speed slows time. For example, time passes a bit slower in an aircraft than on the ground, an effect that has been verified using atomic clocks. The time warp is small for everyday motion, but grows without limit as the speed of light is approached. Cosmic rays, for example, travel exceedingly close to the speed of light, and their internal clocks are slowed millions of times.

Relativity theory predicts that if a particle could exceed the speed of light, the time warp would become negative, and the particle could then travel backwards in time.

As Dr Who fans are aware, travel into the past opens up a nest of paradoxes. For example, suppose a faster-than-light particle is used as a signal to explode a bomb in the very lab in which the particle itself is created. If the bomb explodes yesterday, the particle cannot be made today. So the bomb won't explode, and the particle will be made.

Either way, you get contradictory nonsense. At stake, then, is the very rationality and causal order of the universe. Allow faster-than-light travel, and the physical world turns into a madhouse.

Timing the speed of a pulse of light is fraught with complications, not least because the shape of the pulse changes when it passes through a medium. To make a pulse of a short duration, it is necessary to mix together waves of many different frequencies, and in a medium each wave will propagate differently.

As for transmitting information, opinions differ about how to associate it with a pulse that has a complicated, changing shape. The inherent fuzziness in a light pulse made up of many different waves superimposed precludes a clean definition of how fast actual information travels.

The problem is closely related to the quantum nature of light, where each frequency component can be thought of as made up of photons that behave in some ways like particles. But photons are subject to Heisenberg's principle, according to which there is an inherent uncertainty in their whereabouts. In the pulses of light used in the experiment, it isn't possible to pick out a given component photon and observe it travelling at superluminal velocity.

The Princeton physicists believe this fundamental fuzziness associated with a finite pulse of waves prevents information from exceeding the speed of light, so in an operational sense the light barrier remains unbroken and the causal order of the cosmos is still safe. It is intriguing to see how the wave nature of light rescues the theory of relativity from paradox.

• Paul Davies is visiting professor of physics at Imperial College London, and author of About Time.

TAKEN FROM <http://www.guardian.co.uk/science/story/0,3605,345099,00.html>

Saturday, March 17, 2007

Gravitational Time Dilation


Gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle, in the gravitational field of massive objects. In simpler terms: clocks far from massive bodies (at higher gravitational potential) run faster, and clocks close to massive bodies (at lower gravitational potential) run slower.

It can also be manifested by any other kind of accelerated reference frame, such as a dragster or the space shuttle. Spinning objects such as merry-go-rounds and Ferris wheels are subject to gravitational time dilation as an effect of their angular spin.

This is supported by General Relativity through the equivalence principle, which states that all accelerated reference frames possess a gravitational field. According to General Relativity, inertial mass and gravitational mass are the same. Not all gravitational fields are "curved" or "spherical"; some are flat, as in the case of an accelerating dragster or space shuttle. Any kind of g-load contributes to gravitational time dilation.
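
In the weak-field limit, a clock at radius r from a mass M runs slow by the factor sqrt(1 - 2GM/(r c^2)). As a worked example, here is a Python sketch of the rate difference between a clock on Earth's surface and one at GPS altitude. This is the gravitational part only; the satellites' orbital speed works in the opposite direction and takes back several microseconds per day:

    G = 6.674e-11          # m^3 kg^-1 s^-2
    M = 5.972e24           # kg, Earth
    c = 2.998e8            # m/s
    r_surface = 6.371e6    # m
    r_gps = 2.657e7        # m, orbit radius at ~20,200 km altitude

    def rate(r):
        """Clock rate at radius r relative to a distant observer."""
        return (1 - 2 * G * M / (r * c * c)) ** 0.5

    drift = (rate(r_gps) - rate(r_surface)) * 86400    # seconds per day
    print("GPS clock gains %.1f microseconds/day (gravitational part)" % (drift * 1e6))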

TAKEN FROM http://en.wikipedia.org/wiki/Gravitational_time_dilation

Friday, March 16, 2007

Castle Bravo


Castle Bravo was the code name given to the first U.S. test of a so-called dry fuel thermonuclear device, detonated on March 1, 1954 at Bikini Atoll, Marshall Islands, as the first shot of Operation Castle (a longer series of tests of various devices). Unexpected fallout from the detonation - the test was intended to be secret - poisoned the crew of Daigo Fukuryū Maru ("Lucky Dragon No. 5"), a Japanese fishing boat, and created international concern about atmospheric thermonuclear testing.

The bomb used lithium deuteride fuel for the fusion stage, unlike the cryogenic liquid deuterium used as fuel for the fusion stage of the U.S. first-generation Ivy Mike device. It was therefore the basis for the first practical deliverable hydrogen bomb in the U.S. arsenal. The Soviet Union had previously used lithium deuteride in a nuclear bomb, their Sloika (also known as Alarm Clock) design, but since it was a single-stage weapon, its maximum yield was limited. Like Mike, Bravo used the more advanced Teller-Ulam design for creating a multi-stage thermonuclear device.

It was the most powerful nuclear device ever detonated by the United States, with a yield of 15 megatons. That yield, far exceeding the expected yield of 4 to 8 megatons, combined with other factors to produce the worst radiological accident ever caused by the United States.

Though some 1,000 times more powerful than the atomic bombs dropped on Hiroshima and Nagasaki during World War II, it was considerably less powerful than the largest nuclear test ever conducted, the Soviet Union's ~50 Mt Tsar Bomba, detonated several years later.



Thursday, March 15, 2007

Tunguska, killer asteroids, NASA funding cut, and other adventures


The Tunguska event was a massive explosion that occurred near the Podkamennaya (Under Rock) Tunguska River in what is now Krasnoyarsk Krai of Russia, at 7:17 AM on June 30, 1908. The event is sometimes referred to as the Great Siberian Explosion.

The explosion was probably caused by the airburst of an asteroid or comet 5 to 10 kilometers (3–6 mi) above the Earth's surface. The energy of the blast was later estimated to be between 10 and 20 megatons of TNT, which would be equivalent to Castle Bravo, the most powerful nuclear bomb ever detonated by the US. It felled an estimated 80 million trees over 2,150 square kilometers (830 sq mi). An overhead satellite view centered at 60°55′N 101°57′E (near ground zero for this event) shows an area of reduced forest density, with a fully visible irregular clearing of somewhat less than one square kilometer in area.

In recent history, the Tunguska event stands out as one of the rare large-scale demonstrations that a full doomsday event is a real possibility.

AND IN RELATED NEWS

NASA officials say the space agency is capable of finding nearly all the asteroids that might pose a devastating hit to Earth, but there isn't enough money to pay for the task, so it won't get done.

The cost to find at least 90 percent of the 20,000 potentially hazardous asteroids and comets by 2020 would be about $1 billion, according to a report NASA will release later this week. The report was previewed Monday at a Planetary Defense Conference in Washington.

Congress in 2005 asked NASA to come up with a plan to track most killer asteroids and propose how to deflect the potentially catastrophic ones.

"We know what to do, we just don't have the money," said Simon "Pete" Worden, director of NASA's Ames Research Center.

These are asteroids that are bigger than 460 feet in diameter - slightly smaller than the Superdome in New Orleans. They are a threat even if they don't hit Earth, because if they explode while close enough - an event caused by heating in both the rock and the atmosphere - the devastation from the shockwaves is still immense. The explosion alone could have the power of 100 million tons of dynamite, enough to devastate an entire state, such as Maryland, they said.
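
The "100 million tons" figure is a kinetic-energy estimate, easy to reproduce to order of magnitude. The density and speed below are assumptions typical of such estimates, not NASA's input values:

    import math

    diameter = 460 * 0.3048    # 460 ft in metres
    density = 3000.0           # kg/m^3, assumed rocky body
    speed = 17e3               # m/s, assumed typical encounter speed
    TNT_TON = 4.184e9          # joules per ton of TNT

    radius = diameter / 2
    mass = density * (4.0 / 3.0) * math.pi * radius ** 3
    energy = 0.5 * mass * speed ** 2

    print("mass  : %.2e kg" % mass)
    print("energy: %.2e J = %.0f million tons of TNT" % (energy, energy / TNT_TON / 1e6))

With these assumptions the yield comes out around 150 million tons, the same ballpark as the figure quoted above.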

The agency is already tracking bigger objects, at least 3,300 feet in diameter, that could wipe out most life on Earth, much like what is theorized to have happened to dinosaurs 65 million years ago. But even that search, which has spotted 769 asteroids and comets - none of which is on course to hit Earth - is behind schedule. It's supposed to be complete by the end of next year.

NASA needs to do more to locate other smaller, but still potentially dangerous space bodies. While an Italian observatory is doing some work, the United States is the only government with an asteroid-tracking program, NASA said.

One solution would be to build a new ground telescope solely for the asteroid hunt, and piggyback that use with other agencies' telescopes for a total of $800 million. Another would be to launch a space infrared telescope that could do the job faster for $1.1 billion. But NASA program scientist Lindley Johnson said NASA and the White House called both those choices too costly.

A cheaper option would be to simply piggyback on other agencies' telescopes, a cost of about $300 million, also rejected, Johnson said.

"The decision of the agency is we just can't do anything about it right now," he added.

Earth got a scare in 2004, when initial readings suggested an 885-foot asteroid called 99942 Apophis seemed to have a chance of hitting Earth in 2029. But more observations showed that wouldn't happen. Scientists say there is a 1-in-45,000 chance that it could hit in 2036.

They think it would most likely strike the Pacific Ocean, which would cause a tsunami on the U.S. West Coast the size of the devastating 2004 Indian Ocean wave.

John Logsdon, space policy director at George Washington University, said a stepped-up search for such asteroids is needed.

"You can't deflect them if you can't find them," Logsdon said. "And we can't find things that can cause massive damage."


Wednesday, March 14, 2007

Speed of Light


Constant velocity from all inertial reference frames

It is important to realise that the speed of light is not a "speed limit" in the conventional sense. An observer chasing a beam of light will measure it moving away from him at the same speed as will a stationary observer. This leads to some unusual consequences for velocities.

Most individuals are accustomed to the addition rule of velocities: if two cars approach each other from opposite directions, each travelling at a speed of 50 km/h, relative to the road surface, one expects that each car will perceive the other as approaching at a combined speed of 50 + 50 = 100 km/h to a very high degree of accuracy.

At velocities at or approaching the speed of light, however, it becomes clear from experimental results that this rule does not apply. Two spaceships approaching each other, each travelling at 90% the speed of light relative to some third observer between them, do not perceive each other as approaching at 90% + 90% = 180% the speed of light; instead they each perceive the other as approaching at slightly less than 99.5% the speed of light.

This last result is given by the Einstein velocity addition formula:

u = (v + w) / (1 + vw/c²)

where v and w are the speeds of the spaceships as observed by the third observer, and u is the speed of either space ship as observed by the other.
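
Plugging the two-spaceship example into the formula confirms the "slightly less than 99.5%" figure. A quick Python check, working in units where c = 1:

    def relativistic_add(v, w, c=1.0):
        """Einstein velocity addition: u = (v + w) / (1 + v*w/c^2)."""
        return (v + w) / (1 + v * w / c ** 2)

    print(relativistic_add(0.9, 0.9))   # 0.9945..., not 1.8
    print(relativistic_add(0.5, 0.5))   # 0.8
    print(relativistic_add(1.0, 0.9))   # exactly 1.0: light is c in every frame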

Contrary to one's usual intuitions, regardless of the speed at which one observer is moving relative to another observer, both will measure the speed of an incoming light beam as the same constant value, the speed of light.

The above equation was derived by Albert Einstein from his theory of special relativity, which takes the principle of relativity as a main premise. This principle (originally proposed by Galileo Galilei) requires physical laws to act in the same way in all reference frames. Maxwell’s equations predict a definite speed for light, in much the same way that the physics of water predicts a definite speed for sound in water: the speed of sound in water is a function of physical constants proper to water. The speed of light was likewise believed to be set by the characteristics of a medium of transmission, the luminiferous aether, which acted for light as water does for sound. But the Michelson-Morley experiment, arguably the most famous and useful failed experiment in the history of physics, could not find any trace of this luminiferous aether, suggesting that it is impossible to detect one's presumed absolute motion, i.e., motion with respect to the hypothesized aether. It should be noted that the Michelson-Morley experiment said little about the speed of light relative to the velocity of the light's source and of its observer, as both the source and observer in this experiment were travelling at the same velocity together through space.

Technical impossibility of travel faster than the speed of light

To understand why an object cannot travel faster than light, it is useful to understand the concept of spacetime. Spacetime is an extension of the concept of three-dimensional space to a form of four-dimensional space-time. Having the classical concepts of height, width, and depth as the first three dimensions, the new, fourth dimension is that of time. Graphically it can be imagined as a series of static, three-dimensional 'bubbles', positioned along an arbitrarily chosen line, each bubble representing a separate position along one of the four dimensions. That graphical approach is analogous to using a sequence of two-dimensional cross-sections taken at some standard interval along the third dimension to represent a three-dimensional object on a two-dimensional surface. (Imagine a map of a multi-story building that is created by giving the floor plan for each story of the building on a new page.) The mapping of space and time can be rotated so that, e.g., the x dimension is replaced by the t dimension, and each "bubble" represents a cross-section taken along the x dimension. Supposing that travel is occurring along the y and/or the z dimension, what one will observe is that change along the t dimension will decrease from "bubble" to "bubble" as change across the y-z plane increases from "bubble" to "bubble."

With this understood, there is a clear implication that an object has a total velocity through space-time at any instant, and for all particles of matter this velocity is equal to the speed of light. While this result may seem contradictory to the idea of speed-of-light travel being impossible, it in fact proves it, taking into account the fact that faster-than-light travel is a spatial, or three-dimensional, concept, not a four-dimensional one. In four dimensions, all of the total velocity of an object not accounted for in three-dimensional space is in the fourth dimension, or time. To go back to our bubble picture, if an object is remaining at the same x, y, z positions it will make maximum progress in the t dimension. And that is just to say that any clock associated with whatever we are watching at x, y, z is ticking away at its maximum rate according to a static observer in the same frame of reference, e.g., somebody at x+3, y+4, z+5 or any other position that is not changing with respect to x, y, and z. But the greater the changes of x, y, and z according to the clock of the other observer, the smaller will be the changes in t. And using the spacetime version of the Pythagorean theorem to calculate the separation between a point at x, y, z, t and some later point x', y', z', t', that separation always comes out the same.

While this may seem confusing, it shows that as displacement through space increases, measured time will decrease to maintain the overall space-time velocity. If this is the case, it makes speed-of-light travel impossible, since as an object approaches the speed of light spatially, it will have to approach zero velocity temporally. Another implication is that an object might be said to travel through four-dimensional space-time at the speed of light, but only in cases wherein its velocity through space is zero. That statement is just a counter-intuitive way of expressing the idea that when one is motionless (according to another observer) one's clock is ticking away most rapidly, and that as one moves faster and faster (according to the other observer) one's clock is ticking at slower and slower rates that approach zero.
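
The "constant speed through spacetime" statement can be checked directly. The four-velocity of a particle is gamma*(c, v), and its Minkowski norm (the time part squared minus the space part squared, the modified Pythagorean rule mentioned above) is always exactly c, whatever the ordinary speed v. A short numerical sketch:

    c = 2.998e8    # m/s

    def spacetime_speed(v):
        """Norm of the four-velocity gamma*(c, v): always exactly c."""
        gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
        ct_rate = gamma * c    # progress through the time dimension
        x_rate = gamma * v     # progress through space
        return (ct_rate ** 2 - x_rate ** 2) ** 0.5

    for v in (0.0, 0.5 * c, 0.99 * c):
        print("v = %.3e m/s  ->  spacetime speed = %.3e m/s" % (v, spacetime_speed(v)))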

TAKEN FROM

Monday, March 12, 2007

8 Technologies to Save the World

http://money.cnn.com/galleries/2007/biz2/0701/gallery.8greentechs/index.html

Shape of the Universe


Could the Universe be shaped like a medieval horn? It may sound like a surrealist's dream, but according to Frank Steiner at the University of Ulm in Germany, recent observations hint that the cosmos is stretched out into a long funnel, with a narrow tube at one end flaring out into a bell. It would also mean that space is finite.

Adopting such an apparently outlandish model could explain two puzzling observations. The first is the pattern of hot and cold spots in the cosmic microwave background radiation, which shows what the Universe looked like just 380,000 years after the Big Bang.

It was charted in detail in 2003 by NASA's Wilkinson Microwave Anisotropy Probe. WMAP found that the pattern fades on the largest scales: there are no clear hot or cold blobs more than about 60 degrees across.

Steiner and his group claim that a finite, horn-shaped Universe fits this observation. It simply does not have room to hold very big blobs.

The present-day volume of their model universe is nearly 10^32 cubic light years. Back when the Universe was only 380,000 years old it would have been a fraction of that size, too small to allow big fluctuations.

Infinitely long

In the model, technically called a Picard topology, the Universe curves in a strange way. One end is infinitely long, but so narrow that it has a finite volume. At the other end, the horn flares out, but not for ever - if you could fly towards the flared end in a spaceship, at some point you would find yourself flying back in on the other side of the horn (see diagram).

Horn-shaped models were suggested in the 1990s to fit a similar anomaly seen by the COBE satellite, but Steiner's group is the first to show that this idea fits the WMAP data. In 2003, another group claimed that the Universe might be finite (New Scientist, 8 October 2003.)

In this group's model, space had a soccer ball-like shape. But the model has run into trouble. It should have left a clear signature on the microwave sky - a set of circles that mirror each other's spot patterns - but these circles do not seem to be there.

The horn universe is harder to pin down. It would also make matching circles, but the pattern depends on what part of the horn we are in. "Our published search for matching circles probably does not rule out the Picard topology," says Neil Cornish of Montana State University in Bozeman.

Little ellipses

And the idea has another advantage. In the flat space of conventional cosmology, the smallest blobs on microwave sky maps ought to be round. But they are not. "If you look at the small structures, they look like little ellipses," says Steiner. The curve of the horn-shaped universe could be just right to explain this. If you look at any little piece of the horn, it is saddle-shaped like a Pringles potato chip - curving down in one direction and up in the perpendicular one. This "negatively curved" space would act like a warped lens, distorting the image of round primordial blobs in a way that makes them look elliptical to us. Mathematicians can construct an infinite number of different kinds of negatively curved space, most of them with one or more horns, and many of which might fit the data, but the Picard topology is one of the simplest.

This model would force scientists to abandon the "cosmological principle", the idea that all parts of the cosmos are roughly the same. "If one happens to find oneself a long way up the narrow end of the horn, things indeed look very strange, with two very small dimensions," says Holger Then, a member of the team.

Statistical flukes

At an extreme enough point, you would be able to see the back of your own head. It would be an interesting place to explore - but we are probably too far from the narrow end of the horn to examine it with telescopes.

Both of the crucial observations are still ambiguous, however, and may be statistical flukes. Over the next year or so, WMAP and other experiments will test whether large blobs really are lacking and whether small ones really are elliptical.


TAKEN FROM: Big Bang Glow Hints at Funnel-Shaped Universe, Stephan Battersby, New Scientist, <www.newscientist.com/article.ns?id=dn4879>

Sunday, March 11, 2007

What is gravity?



Gravity is the odd force out when it comes to small particles and the energy that holds them together. When Einstein improved on Newton's theory, he extended the concept of gravity by taking into account both extremely large gravitational fields and objects moving at velocities close to the speed of light. These extensions led to the famous concepts of relativity and space-time. But Einstein's theories do not pay any attention to quantum mechanics, the realm of the extremely small, because gravitational forces are negligible at small scales, and discrete packets of gravity, unlike the discrete packets of energy that hold atoms together, have never been experimentally observed.

Nonetheless, there are extreme conditions in nature in which gravity is compelled to get up close and personal with the small stuff. For example, near the heart of a black hole, where huge amounts of matter are squeezed into quantum spaces, gravitational forces become very powerful at tiny distances. The same must have been true in the dense primordial universe around the time of the Big Bang.

Physicist Stephen Hawking identified a specific problem about black holes that requires a bridging of quantum mechanics and gravity before we can have a unified theory of anything. According to Hawking, the assertion that nothing, even light, can escape from a black hole is not strictly true. Weak thermal energy does radiate from around black holes. Hawking theorized that this energy is born when particle-antiparticle pairs materialize from the vacuum in the vicinity of a black hole. Before the matter-antimatter particles can recombine and annihilate each other, one that may be slightly closer to the black hole will be sucked in, while the other that is slightly farther away escapes as heat. This release does not connect in any obvious way to the states of matter and energy that were earlier sucked into that black hole and therefore violates a law of quantum physics stipulating that all events must be traceable to previous events. New theories may be needed to explain this problem.
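
The radiation Hawking predicted has a definite temperature, T = hbar*c^3 / (8*pi*G*M*k_B), which is inversely proportional to the hole's mass and absurdly small for astrophysical black holes. A quick Python evaluation:

    import math

    hbar = 1.055e-34    # J s
    c = 2.998e8         # m/s
    G = 6.674e-11       # m^3 kg^-1 s^-2
    k_B = 1.381e-23     # J/K

    def hawking_temperature(mass_kg):
        """T = hbar*c^3 / (8*pi*G*M*k_B)."""
        return hbar * c ** 3 / (8 * math.pi * G * mass_kg * k_B)

    print("solar-mass hole: %.2e K" % hawking_temperature(1.989e30))
    print("moon-mass hole : %.2f K" % hawking_temperature(7.35e22))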

DISCOVER, <http://discovermagazine.com/2002/feb/cover>

Saturday, March 10, 2007

Precession of the equinoxes


The Earth goes through one complete precession cycle in a period of approximately 25,800 years, during which the positions of stars as measured in the equatorial coordinate system slowly change; the change is due to the motion of the coordinate system itself rather than of the stars. Over this cycle the Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5 degrees (exactly 23 degrees 27 arcminutes). The shift is 1 degree in 180 years, where the angle is taken from the observer, not from the center of the circle.

The precession of the equinoxes was discovered in antiquity by the Greek astronomer Hipparchus, and was later explained by Newtonian physics. The Earth has a nonspherical shape, being an oblate spheroid that bulges outward at the equator. The gravitational tidal forces of the Moon and Sun apply torque as they attempt to pull the equatorial bulge into the plane of the ecliptic. The portion of the precession due to the combined action of the Sun and the Moon is called lunisolar precession.

Currently, this annual motion is about 50.3 seconds of arc per year, or 1 degree every 71.6 years. The process is slow, but cumulative. A complete precession cycle covers a period of approximately 25,765 years, the so-called great Platonic year, during which time the equinox regresses over a full 360°. Precessional movement is also the determining factor in the length of an astrological age. That means you may not be the astrological sign you were believed to be. I'm supposed to be a Taurus, but I'm actually an Aries.
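
The numbers above are mutually consistent, as a quick Python check shows:

    annual_precession = 50.3    # arcseconds per year

    years_per_degree = 3600 / annual_precession
    full_cycle = 360 * 3600 / annual_precession

    print("1 degree every %.1f years" % years_per_degree)    # ~71.6
    print("full cycle in %.0f years" % full_cycle)           # ~25,765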

Does the precession of the equinoxes have a relation to the ice ages?



Friday, March 9, 2007

geomagnetic reversal


Geomagnetic reversal is a change in the orientation of the Earth’s magnetic poles in which south becomes north and north becomes south. The intervals between reversals vary greatly: there have been anywhere from one to five reversals per million years, and a reversal itself takes anywhere from a few hundred to a few thousand years to complete. The last reversal was 780,000 years ago. The present field is showing signs that a new reversal may be starting, and it could temporarily collapse our current field by 3000-4000 C.E.

What causes geomagnetic reversals? Simply stated, the magnetic field lines that generate the geomagnetic field can become tangled and disorganized through the chaotic motions of the Earth’s liquid core. This instability can cause the magnetic field to flip to the opposite direction, a picture supported by observations of the solar magnetic field, which also undergoes spontaneous reversals, but on a much faster timescale (7-15 years).

How would this affect life on Earth?

Oldest Solar Observatory


The oldest solar observatory in the Americas has been found, suggesting the existence of early, sophisticated Sun cults, scientists report.

It comprises a group of 2,300-year-old structures, known as the Thirteen Towers, which are found in the Chankillo archaeological site, Peru.

The towers span the annual rising and setting arcs of the Sun, providing a solar calendar to mark special dates.

The study is published in the journal Science.

Clive Ruggles, professor of archaeoastronomy at Leicester University, UK, said: "These towers have been known to exist for a century or so. It seems extraordinary that nobody really recognised them for what they were for so long.

"I was gobsmacked when I saw them for the first time - the array of towers covers the entire solar arc."

The Thirteen Towers of Chankillo run from north to south along the ridge of a low hill within the site; they are relatively well-preserved and each has a pair of inset staircases leading to the summit.

The rectangular structures, between 75 and 125 square metres (807-1,345 sq ft) in size, are regularly spaced - forming a "toothed" horizon with narrow gaps at regular intervals.

About 230m (750ft) to the east and west are what scientists believe to be two observation points. From these vantages, the 300m- (1,000ft-) long spread of the towers along the horizon corresponds very closely to the rising and setting positions of the Sun over the year.

[Image: The Thirteen Towers seen at winter solstice (Ivan Ghezzi). When viewed from the western observation point, the Sun appears to the left of the left-most tower.]

"For example," said Professor Ruggles, "if you were stood at the western observing point, you would see the Sun coming up in the morning, but where it would appear along the span of towers would depend on the time of the year."

"So, on the summer solstice, which is in December in Peru, you would see the Sun just to the right of the right-most tower; for the winter solstice, in June, you would see the Sun rise to the left of the left-most tower; and in-between, the Sun would move up and down the horizon."

This means the ancient civilisation could have regulated a calendar, he said, by keeping track of the number of days it took for the Sun to move from tower to tower.

Sun cults

The site where the towers are based is about four square kilometres (1.5 square miles) in size, and is believed to be a ceremonial centre that was occupied in the 4th Century BC. It lies on the coast of Peru in the Casma-Sechin River Basin and contains many buildings and plazas, as well as a fortified temple that has attracted much attention.

The authors of the paper, who include Professor Ivan Ghezzi of the National Institute of Culture, Peru, believe the population was an ancient Sun cult and the observatory was used to mark special days in their solar calendar.

Professor Ruggles said: "The western observing point, and to some extent, the eastern one, are very restricted - you couldn't have got more than two or three people watching from them. And all the evidence suggests that there was a formal or ceremonial approach to that point and that there were special rituals going on there.

"This implies that you have someone special - the priests perhaps - who watched the Sun rise or set, while in the plaza next door, the crowds were feasting and could see the Sun rise, but not from that special perspective.

Written records suggest the Incas were making solar observations by 1500 AD, and that their religion centred on Sun worship.

"We know that in Inca times, towers were used to observe the Sun near the solstices, which makes you speculate that there are elements of cult practice that go back a lot further," Professor Ruggles told the BBC News website.

taken from <http://news.bbc.co.uk/2/hi/science/nature/6408231.stm>

SPECIAL THANKS TO SBVE

Wednesday, March 7, 2007

Antikythera mechanism





"Antikythera mechanism is an ancient mechanical analog computer designed to calculate astronomical positions. It was discovered in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to about 150-100 BC.The device is remarkable for the level of miniaturization and complexity of its parts, which is comparable to that of 18th century clocks. It has over 30 gears, with teeth formed through equilateral triangles. When past or future dates were entered via a crank (now lost), the mechanism calculated the position of the Sun, Moon or other astronomical information such as the location of other planets. It is possible that the mechanism is based on heliocentric principles, rather than the then-dominant geocentric view espoused by Aristotle and others.

While a century of research is finally answering the question of what the mechanism did, we are actually no nearer to answering the question of what it was for. There are numerous suggestions, any of which could be right.

Practical uses of this device may have included the following:

  • Astrology was commonly practiced in the ancient world. In order to create an astrological chart, the configuration of the heavens at a particular point of time is needed. It can be very difficult and time-consuming to work this out by hand, and a mechanism such as this would have made an astrologer's work very much easier.
  • Calculating solar and lunar eclipses. However, the device would probably only have indicated days when eclipses might occur, and a more accurate calculation of the time of day would have to be done by hand.
  • Setting the dates of religious festivals connected with astronomical events.
  • Adjusting calendars, which were based on lunar cycles as well as the solar year"
- taken from Wikipedia, <en.wikipedia.org/wiki/Antikythera_mechanism>
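
The best-understood part of the gearwork encodes the Metonic relation: 19 solar years contain almost exactly 235 synodic (new-moon-to-new-moon) months, and therefore 254 sidereal months, a ratio that published reconstructions find built into the lunar gear train. The arithmetic behind that ratio is easy to verify:

    tropical_year = 365.2422    # days
    synodic_month = 29.53059    # days, new moon to new moon

    months = 19 * tropical_year / synodic_month
    print("synodic months in 19 years: %.3f (the Metonic cycle uses 235)" % months)

    # each year the Moon laps the Sun once, so sidereal months = synodic + years
    print("sidereal months in 19 years: %d" % (235 + 19))             # 254
    print("lunar gear ratio: 254/19 = %.5f turns per year" % (254 / 19))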

For more on the Antikythera mechanism check these links out:

http://www.world-mysteries.com/sar_4.htm

www.math.sunysb.edu/~tony/whatsnew/column/antikytheraI-0400/kyth1.html

http://www.antikythera-mechanism.gr/

white holes


White holes are the opposite of black holes. While black holes capture and trap matter, white holes would shoot everything out, possibly into another universe. Quasars were once thought to be white holes in our universe.

"The Schwarzschild metric admits negative square root as well as positive square root solutions for the geometry.

The complete Schwarzschild geometry consists of a black hole, a white hole, and two Universes connected at their horizons by a wormhole.

The negative square root solution inside the horizon represents a white hole. A white hole is a black hole running backwards in time. Just as black holes swallow things irretrievably, so also do white holes spit them out. White holes cannot exist, since they violate the second law of thermodynamics.

General Relativity is time symmetric. It does not know about the second law of thermodynamics, and it does not know about which way cause and effect go. But we do.

The negative square root solution outside the horizon represents another Universe. The wormhole joining the two separate Universes is known as the Einstein-Rosen bridge." http://casa.colorado.edu/~ajsh/schww.html



Do Schwarzschild wormholes really exist?

Schwarzschild wormholes certainly exist as exact solutions of Einstein's equations.

However:

  • When a realistic star collapses to a black hole, it does not produce a wormhole (see Collapse to a Black Hole);
  • The complete Schwarzschild geometry includes a white hole, which violates the second law of thermodynamics (see above);
  • Even if a Schwarzschild wormhole were somehow formed, it would be unstable and fly apart (see Instability of the Schwarzschild wormhole below).
Tuesday, March 6, 2007

Monday, March 5, 2007

Cosmological Constant and Dark Energy

1. What is Einstein’s cosmological constant?

“A constant term (labeled Lambda), which Einstein added to his general theory of relativity in the mistaken belief that the Universe was neither expanding nor contracting.”

The cosmological constant adds a term to the field equations of general relativity so that they permit a static universe, that is, one that is neither expanding nor contracting.

Later, the cosmological constant was found to be unnecessary: the observations of Edwin Hubble showed that the universe was in fact expanding.

However, the discovery of cosmic acceleration in the 1990s, by Saul Perlmutter and others, has renewed interest in the cosmological constant, because a cosmological constant has negative pressure equal in magnitude to its energy density and so causes the expansion of the universe to accelerate.


2. Why did Einstein create his cosmological constant?

“Einstein included the cosmological constant as a term in his field equations for general relativity because he was dissatisfied that otherwise his equations did not allow, apparently, for a static universe. Gravity would cause a universe which was initially at dynamical equilibrium to contract. To counteract this possibility, Einstein added the cosmological constant. However, soon after Einstein developed his static theory, observations by Edwin Hubble indicated that the universe appears to be expanding; this was consistent with a cosmological solution to the original general-relativity equations that had been found by the mathematician Friedmann.”

Einstein’s own belief that the universe was static overrode what his equations were telling him. He was so certain of this that he modified them, introducing a cosmological constant that would support his personal beliefs.

“When astronomer Edwin Hubble discovered that the universe was not static but was expanding, Einstein called his cosmological constant 'the greatest mistake of my life'. Einstein effectively abandoned his cosmological constant when he learned of Hubble’s discoveries. The cosmological constant does, however, have a major connection with dark energy and the expanding universe, and had Einstein stuck to his guns, he could have been given credit for predicting one of the great scientific findings of the last 10 years - the accelerating universe,” states Neatorama.


3. What is dark energy?

According to the Contemporary Physics Education Project, it is “an energy causing the acceleration of the expansion of the universe, detectable through its gravitational effects.” “…dark energy is a hypothetical form of energy that permeates all of space and has strong negative pressure. According to the Theory of Relativity, the effect of such a negative pressure is qualitatively similar to a force acting in opposition to gravity at large scales,” writes Wikipedia.

This energy could make up almost two-thirds of the Universe, and since it is repulsive, it drives galaxies away from each other at ever-increasing speed. Dark energy is thought to be the only thing that could cause the expansion of the universe to accelerate in this way. It may also account for a big portion of the mass still missing in the universe.

4. Why did cosmologists invent dark energy?

Dark energy is an idea used to describe a different form of energy that penetrates all of space. Ordinary gravity is an attractive force, whereas dark energy is repulsive. Dark energy could also account for a large percentage of the mysterious “missing mass” of the universe.

“Cosmologists know something is driving an accelerated and repulsive expansion of the universe - something that is acting like an anti-gravity force. Because this energy has never been directly seen and its identity is as yet unknown, it is called dark energy.”

5. How is dark energy related to the cosmological constant?

“Physicists have tried to explain the acceleration in terms of “dark energy”, which boosts the expansion of the universe by counteracting the effects of gravity. The most popular explanation for dark energy draws on the “cosmological constant” first proposed by Einstein. Observations reveal that dark energy was around nine billion years ago and has been acting in a consistent way ever since. The data suggest that the effect of dark energy was rather weak until about five to six billion years ago, when it defeated gravity in a “cosmic tug of war” and the rate of expansion began to increase.”

The cosmological constant added a repulsive force to counteract the effect gravity has on the universe. The gravitational effect alone would make the universe contract, so Einstein needed a repulsive force to cancel it out, thus making the universe static. That repulsive force could be dark energy.

Dark energy may be the vehicle that realises the concept of the cosmological constant. The idea of a counteractive force balancing gravity is the concept behind the cosmological constant. However, there seems to be more dark energy than gravity: the repulsive force outweighs the attractive one, and thus the expansion of the universe is accelerating.
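
The "cosmic tug of war" has a compact mathematical form. In the Friedmann acceleration equation, a_ddot/a = -(4*pi*G/3)*(rho + 3p/c^2), a cosmological constant behaves as a fluid with pressure p = -rho*c^2, so it enters with the opposite sign to matter and pushes rather than pulls. A toy Python sketch, in units chosen so 4*pi*G/3 = 1 (the 0.3/0.7 split below echoes the measured matter/dark-energy mix; the other numbers are illustrative):

    # a_ddot/a = -(rho_matter) + 2*rho_lambda  in these toy units,
    # since for a cosmological constant (rho + 3p/c^2) = -2*rho_lambda.

    def acceleration(rho_matter, rho_lambda):
        return -rho_matter + 2.0 * rho_lambda

    for rho_m, rho_l, label in [(1.0, 0.0, "matter only (decelerating)"),
                                (1.0, 0.5, "tug of war (static balance)"),
                                (0.3, 0.7, "dark-energy dominated (accelerating)")]:
        print("%-38s a_ddot/a = %+.2f" % (label, acceleration(rho_m, rho_l)))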

NASA
Cosmological Constant, Wikipedia
Dark Energy, Wikipedia
Four Things Einstein Got Wrong, Neatorama, December 19, 2006
Contemporary Physics Education Project
NASA Selects ADEPT Space Mission To Probe Dark Energy
Dark energy dates back nine billion years, Hamish Johnston, PhysicsWeb