Friday, May 13, 2011

Gravity Probe B - Final Paper

EDIT 5/15/11: Fixed some links. If you have any trouble viewing images, please click on them and it should enlarge them to full resolution. The videos will be quite small unless you click on the bottom right corner of them to enlarge to full screen.

A brief video introduction...


After spending some time searching for an appropriate article to match my interest, I eventually settled on one from Space.com (link) reporting that one of NASA's satellites had collected empirical evidence supporting two predictions of Einstein's theory of gravity. One of the interesting things about gravity is the gap between how familiar our everyday experience of it is and how limited our current understanding remains. It is one of many mysterious and active areas of science. In a world where our current level of technology makes it appear that the only science left to do is figuring out how to stream movies to my cell phone, gravity gives us an area in which to experience wonder and beauty.



Of course, I tracked this Space.com article over to NASA's website (link), where the findings were announced on May 4th, 2011. Attempting to find a proper journal article, I was directed to Physical Review Letters (link); unfortunately, the full text is available by subscription only. In the meantime, I located the website of Stanford University (link), which NASA funded throughout most of the project. The basic concept of this project was first proposed in 1959. NASA began reviewing the idea a couple of years later and allocated funds to the project in 1964. More than four decades would pass before the satellite actually launched. Delayed by politics, changing shuttle and rocket designs, and engineering technicalities, the project would eventually cost an estimated $750 million. Stanford University was put in charge of development and operations.



There were two ideas to be tested. The Newtonian theory of gravity, as seen above, is the classical notion of the inverse square law in ordinary three-dimensional space. Einstein proposed more complicated notions about the nature of gravity and its interactions with spacetime. Many of Einstein's ideas have been verified empirically and applied technologically. This specific experiment was designed to test two effects: frame dragging and the geodetic effect. The picture on the right above gives a rough image of both. The curvature of space created by a massive body is the geodetic effect, represented by the apparent depression of the plane. The swirling lines represent frame dragging: a rotating mass exerts a sort of friction that drags spacetime along with it.



A Newtonian gyroscope orbiting Earth would keep its spin axis fixed for all eternity, whereas an Einsteinian one would slowly precess as the massive body of the Earth rotates. Frame dragging was predicted by the general theory of relativity and derived by Josef Lense and Hans Thirring. This added rotation as a variable alongside mass in calculating the gravitational field of an object, an effect that would cause the precession of an orbiting gyroscope due to the distortion of spacetime. The Newtonian idea of rotation seems to imply that an object is moving or not moving with respect to some absolute reference frame. Ernst Mach argued that, without an absolute space to compare against, rotation can only be defined relative to other objects. This implies a relationship between the two bodies, which was postulated by the general theory of relativity. Less noticeably, the geodetic effect also makes predictions that diverge from Newtonian expectations. It too was predicted by Einstein's general relativity and was worked out in detail by Willem de Sitter.




In order to test this hypothesis, the satellite would carry gyroscopes into orbit and observe them for a period of time, carefully measuring their precession about each axis and comparing the results to predictions. Even this seemingly simple task requires some elaborate engineering. Four gyroscopes were used in order to provide redundancy. Each one was reportedly the most perfect sphere ever manufactured. They were created from very pure quartz and coated in a thin, uniform layer of niobium. The engineers claim that the imperfections on the surface of each sphere amount to no more than about 40 atoms. To put this into scale, they said that if you enlarged the sphere to the size of the Earth, the highest mountains and the deepest valleys would be no more than about 8 feet in either direction.
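As a sanity check on that analogy, here is a quick back-of-the-envelope scaling. The rotor radius (~1.9 cm) and atom size (~0.15 nm) are my assumed round numbers, not figures from the article:

```python
# Back-of-the-envelope check of the "sphere the size of the Earth" analogy.
rotor_radius_m = 0.019        # assumed GP-B rotor radius, ~1.9 cm
bump_m = 40 * 0.15e-9         # ~40 atoms at ~0.15 nm per atom (assumption)
earth_radius_m = 6.371e6

scale = earth_radius_m / rotor_radius_m
scaled_bump_m = bump_m * scale
print(f"Scaled imperfection: {scaled_bump_m:.1f} m "
      f"(~{scaled_bump_m * 3.28:.0f} ft)")
```

This comes out around 2 meters, the same order of magnitude as the quoted 8 feet, so the analogy holds up.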



The entire inside of the satellite takes a while to prepare once in orbit because it must reach a temperature very near absolute zero; this is necessary because much of the experiment relies on superconductivity. The gyroscopes were each spun up to about 5,000 RPM using jets of helium gas inside the satellite. Afterwards, the gas was pumped out to create as near-perfect a vacuum as possible so that no frictional forces would interfere with the gyroscopes. The spheres were then held in place without contact using an electrostatic suspension system. It took over four months to achieve the temperature, pressure, and rotation rates necessary to begin scientific data collection.



Lockheed Martin designed the capsule, which spent 17 months and 9 days in orbit, to keep the internal components both cool and shielded. The dewar was filled with superfluid helium to hold the instrument at about -456 degrees Fahrenheit (1.8 kelvin), near absolute zero. The Gravity Probe B team engineered a new porous-plug design to let the liquid and gaseous phases interact, allowing helium to boil off slowly while keeping the supercooled liquid inside. The helium gas that escaped was recycled as a minute corrective thrust to maintain a near drag-free orbit; the discharge was described as about a tenth of a human breath. Even at the satellite's great altitude there is still slight atmospheric friction that perturbs the orbit, but this force can be counteracted. Built around gyroscopes the size of ping pong balls, the finished satellite stood over 24 feet tall.



The primary measurements exploit superconductivity. The spherical gyroscopes were coated with niobium, which was then cooled to the point of superconductivity. When a superconductor rotates, the effectively frictionless electrons in the metal lag the lattice, producing a net motion of the positive ions relative to the free electrons. As an analogy, it is as if the niobium moves underneath the electrons, resulting in a net current relative to the sphere. This flow of electrons produces a magnetic moment that is aligned exactly along the axis of rotation, with a magnitude directly proportional to the spin rate. This property is used to read out the spin axis while minimizing the observer's effect on the gyroscope. A device called a SQUID (superconducting quantum interference device) is placed within the field; its sensitivity is such that it can detect a field as weak as 5×10⁻¹⁸ T. The amount of precession predicted by relativity is quite small, so this degree of precision is essential. A milliarcsecond is roughly the width of a human hair seen from 10 miles away; the angles involved have also been compared to the angle formed between the top of Lincoln's eye and the bottom of a penny in New York as seen from Paris.
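To make those comparisons concrete, here is the small-angle arithmetic for the hair analogy; the 100-micron hair width is an assumed typical value:

```python
import math

hair_width_m = 100e-6            # assumed hair width, ~100 microns
distance_m = 10 * 1609.34        # 10 miles in meters

angle_rad = hair_width_m / distance_m              # small-angle approximation
angle_mas = math.degrees(angle_rad) * 3600 * 1000  # radians -> milliarcseconds
print(f"{angle_mas:.1f} mas")                      # ~1.3 mas
```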



In the above image, the z-axis, which is the axis of rotation, is aligned with a guide star to measure against. It would be better to measure relative to quasars because of their stability, but they are too faint to track directly. The on-board telescope splits the visual image of the guide star into four quadrants and is able to lock onto it very precisely. Instead of a quasar, the star IM Pegasi is tracked; its motion across the sky has been analyzed very thoroughly, and that motion is understood relative to other bodies such as the quasars. Thus, instead of measuring against the quasars directly, the gyroscopes are measured against them indirectly. There are three quasars nearby that can be used. IM Pegasi's apparent position moves due to our annual parallax as well as the star's own motion through the galaxy. It also has an orbital motion because it is part of a binary system, and it even changes shape slightly from flares and tidal forces. The motion of IM Pegasi far outweighs any motion of the gyroscopes, so it must be measured accurately in order to correct the data.




An amalgamation of data from several telescopes is used to measure the movements of IM Pegasi, which has become one of the most studied stars in the sky; data have been collected quarterly for years to maintain a stable and precise measurement. Sixteen telescopes have contributed, including NASA's Deep Space Station 43 in Australia, Natural Resources Canada's Algonquin Radio Observatory in Ontario, the Max Planck Institute's Effelsberg Telescope in Germany, and the National Radio Astronomy Observatory's Very Long Baseline Array and Very Large Array in New Mexico. These telescopes combine with others from across the world to form a virtual telescope as large as the Earth. The data are analyzed by the supercomputer at the National Radio Astronomy Observatory in New Mexico, and the relative motion of IM Pegasi against the three nearby quasars is mapped out in careful detail.



Gravity Probe B was launched in April 2004, and by August 2005 it had finished data collection. The first phase of data analysis was released as early as February 2006, but it was on May 4th, 2011 that the final results were released and the project completed. The measured geodetic precession agreed with prediction to within its quoted uncertainty of 18.3 milliarcseconds per year, and the frame-dragging precession to within 7.2 milliarcseconds per year. Such extensive analysis was required because of equipment calibrations, tracking of the guide star IM Pegasi, and several anomalies that occurred during data collection, such as solar flares.

While the original article suggests that this experiment has proved Einstein correct, the result is not quite so exhaustive: it has provided additional empirical evidence for these two particular effects. The mission has also been valuable because of the many engineering accomplishments made possible by the funding, research, and drive behind it. For example, the porous plug that separated the liquid and gaseous helium was highly specialized and efficient and is already finding other applications. And relativity had support before this mission; we have long been measuring, to some degree of precision, effects such as gravitational lensing.



It has been suggested that, with further analysis and experimentation, the raw data from Gravity Probe B may continue to shed light on other areas. In an opening speech introducing Albert Einstein, George Bernard Shaw announced, "There is an order of men who are makers of Universes. Ptolemy made a Universe which lasted 1,400 years. Newton also made a Universe which has lasted 300 years. Einstein has made a Universe and I can't tell you how long that would last." If the Academy Award and Nobel Prize winner was not eloquent enough, I think Monty Python aptly sums up the bulk of our empirical data. (Warning: if our professor was not indication enough, the British may be a bit odd, and this video may not be safe for work.)




Digital Bibliography, accessed May 2011.
http://xkcd.com/
http://prl.aps.org/accepted/L/ea070Y8dQ491d22a28828c95f660a57ac82e7d8c0
http://www.nasa.gov/mission_pages/gpb/gpb_results.html
http://einstein.stanford.edu/
http://www.space.com/11570-nasa-gravity-probe-einstein-theory-relativity.html

The Case for Modified Newtonian Dynamics

It seems that many researchers are taking up the Modified Newtonian Dynamics (MOND) battle standard against supporters of dark matter halos to explain galactic rotation curves and other astronomical phenomena. I myself am partial to the belief that MOND is a more likely explanation than invisible matter that only interacts via gravitational forces. Stirring up debate between these two groups is not my purpose here, though; it is to present an article that strongly supports MOND as a viable alternative to dark matter. The full article is as follows:


The article itself is extremely technical, with copious amounts of mathematics interjected into its paragraphs, but the introduction and conclusion are worth noting. The authors begin by briefly reviewing current models of galactic properties such as rotation curves and mass-luminosity relations. They draw attention to problems inherent in these models, such as mass-anisotropy degeneracies and systematic biases. They then propose that MOND can fit the models of elliptical galaxies, and even early-type and spiral galaxies. Their conclusion is that, statistically speaking, MOND fits over 9,000 galaxies taken from the NYU Value-Added Galaxy Catalog. They admit that some aspects of their model carry as much as 10% error, but counter that this comes from fitting all 9,000 galaxies statistically at once rather than fitting the model to each galaxy individually.

Radius-Luminosity and Mass-Luminosity Relationships for Active Galactic Nuclei; Koratkar and Gaskell

Radius-Luminosity and Mass-Luminosity Relationships for Active Galactic Nuclei
Koratkar and Gaskell
The Astrophysical Journal, 370:L61-L64, 1991 April 1


In this paper, Koratkar and Gaskell study ten active galactic nuclei in an attempt to find an independent, direct relationship between the mass and bolometric luminosity of the interior object, i.e. the central black hole. Provided any such trend is simple and calculable, one could then discern properties of distant galaxies from a single observable. Though the paper touches lightly on radius-luminosity relationships, I will focus more on the development of a mass-luminosity relationship and the verification of previously derived relations.

This paper centers on data collected by the authors regarding the active galactic nuclei, herein referred to as AGNs, of ten individual galaxies. These AGNs represent the core element of each galaxy and are expected to produce and radiate the majority of the galaxy's energy over most, if not all, wavelengths of our observable range. The accepted model at the time of publication favored the idea that the radiated energy results from accretion from the broad-line region onto the central object, or black hole. If we accept this model as correct, then it makes sense that the bolometric luminosity, a measure of the expected energy output across all measurable wavelengths, would correlate with the total mass of this region, the core AGN.

Producing working data for the mass is familiar and quite simple. Analyzing the Doppler width of the emission lines provides velocity information for the gases of the broad-line region surrounding the core. If we assume that the movement of these gases is dominated by gravitational forces, then we can calculate a value for the mass of the AGN from the velocity and the core radius. The calculation can be done using any number of available methods; for this paper it was done using Monte Carlo methods.
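As a rough illustration of the gravity-dominated estimate (not the paper's actual Monte Carlo analysis), here is the single-line virial-style calculation; the velocity and radius are hypothetical values of about the right order for a broad-line region:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

v = 5000e3           # velocity from Doppler line width, m/s (hypothetical)
r = 0.01 * PC        # assumed broad-line-region radius, m (hypothetical)

# If the gas motion is gravity-dominated, M ~ v^2 r / G,
# up to a geometry factor of order unity.
mass_kg = v**2 * r / G
print(f"M ~ {mass_kg / M_SUN:.1e} solar masses")   # ~6e7 M_sun
```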

This study produced a tested relationship between the bolometric luminosity and a calculated mass for AGNs. Since real astronomical data were used, this paper was a significant step for observational astronomy. In addition, it helped solidify confidence in the theoretical relationships suggested by previous researchers such as Dibai, Joly, and Reshetnikov years earlier.

Simulation of Colliding Galaxies

A couple of years ago, I did a project for a Parallel Processing course that utilized a computer simulation of colliding galaxies. I adapted an existing program to my computer cluster running Pelican HPC. The program utilizes a simulation tool called GADGET-2, which computes gas dynamics with a Tree-SPH (Smoothed Particle Hydrodynamics) scheme and gravity with a hybrid method: a tree is used for short-range gravitational forces, while long-range forces are computed with a Fast Fourier Transform particle-mesh method. The program was written and refined over the years by Volker Springel of the Max Planck Institute for Astrophysics and is freely available on the internet at http://www.mpa-garching.mpg.de/galform/gadget/index.shtml.

This computer simulation allows us to watch collisions that in nature take hundreds of millions of years. The model shows that a collision of two spiral galaxies can create an elliptical galaxy. Although the animation appears rough and crude (a Windows Media file, what else), the result you'll see resembles an elliptical galaxy. Lots of data (energy, etc.) are generated, but I won't include that here. I hope to refine this program over the summer.
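For readers curious what the core of such a simulation looks like, below is a toy direct-summation N-body integrator with leapfrog time stepping. It is only a sketch of the gravity part: GADGET-2 itself uses a far more efficient tree plus particle-mesh solver and SPH for the gas, and all constants here are made up:

```python
import numpy as np

# Toy O(N^2) gravity with kick-drift-kick leapfrog, in natural units.
G = 1.0
N, DT, SOFT = 64, 0.01, 0.05     # particles, time step, force softening

rng = np.random.default_rng(42)
pos = rng.normal(size=(N, 3))
vel = rng.normal(scale=0.1, size=(N, 3))
mass = np.full(N, 1.0 / N)

def accelerations(pos):
    # Pairwise softened Newtonian gravity: a_i = G sum_j m_j d_ij / |d_ij|^3
    d = pos[None, :, :] - pos[:, None, :]          # displacement vectors
    r2 = (d**2).sum(-1) + SOFT**2
    np.fill_diagonal(r2, np.inf)                   # no self-force
    return G * (mass[None, :, None] * d / r2[..., None]**1.5).sum(axis=1)

acc = accelerations(pos)
for _ in range(1000):                              # kick-drift-kick steps
    vel += 0.5 * DT * acc
    pos += DT * vel
    acc = accelerations(pos)
    vel += 0.5 * DT * acc
```

The softening parameter plays the same role as in production codes: it keeps close encounters from blowing up the time step.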


Thursday, May 12, 2011

Sanders Final Post

Final Exam Paper for Astronomy 561

My subject for this paper is the possibility of settlement between Modification of Newtonian Dynamics (MOND) on one side and Dark Matter and Dark Energy on the other. While I wish to produce a scholarly paper, I would also like my high school students to read this article and begin to understand what the nature of science, and of scientists, is all about. Even when experienced scientists and engineers are on the trail of understanding, they sometimes forget that for every question they answer, two new questions seem to appear. It seems the more we learn, the less we know. Also, no student or professional should forget that the pursuit of knowledge can be fun and exciting, not boring.

Before we stage an argument between MOND and Dark Matter plus Dark Energy, maybe we should observe the disagreements among MOND believers themselves. When I read any single paper, I am led by reason to believe a solution is at hand. However, the next paper always points out some problem with the latest adjustment to the modified Newtonian law. In this paper I quote abstracts from the ADS Astronomy Abstract Service and provide the bibliographic code (e.g. 2010MNRAS.407.1128S) for each.

The idea is to allow each reader to make a judgment about several modified theories and the weaknesses of each. My judgment is that each adjustment seems reasonable, but with thirteen adjustments we have one monster formula not ready for use in our own homes.

Title: The Universal Faber-Jackson relation #1
Author: Sanders, R. H.
Bibliographic Code: 2010MNRAS.407.1128S

Abstract
In the context of modified Newtonian dynamics, the Fundamental Plane, as the observational signature of the Newtonian virial theorem, is defined by high-surface-brightness objects that deviate from being purely isothermal: the line-of-sight velocity dispersion should slowly decline with radius as observed in luminous elliptical galaxies. All high-surface-brightness objects (e.g. globular clusters, ultra-compact dwarfs) will lie, more or less, on the Fundamental Plane defined by elliptical galaxies, but low-surface-brightness objects (dwarf spheroidals) would be expected to deviate from this relation. This is borne out by observations. With Milgrom's modified Newtonian dynamics (MOND), the Faber-Jackson relation (L ~ σ4), ranging from globular clusters to clusters of galaxies and including both high- and low-surface-brightness objects, is the more fundamental and universal scaling relation in spite of its larger scatter. The Faber-Jackson relation reflects the presence of an additional dimensional constant (the MOND acceleration a0) in the structure equation.


Title: Exact Solutions and Approximations of Mond Fields of Disk Galaxies #2
Author: Brada, R and Milgrom, M
Bibliographic Code: 1994astro.ph.7071B

Abstract
We consider models of thin disks (with and without bulges) in the Bekenstein-Milgrom formulation of MOND as a modification of Newtonian gravity. Analytic solutions are found for the full gravitational fields of Kuzmin disks, and of disk-plus-bulge generalizations of them. For all these models a simple algebraic relation between the MOND potential field and the Newtonian potential holds everywhere outside the disk. We give exact expressions for the rotation curves for these models. We also find that the algebraic relation is a very good approximation for exponential disks. The algebraic relation outside the disk is then extended into the disk to derive an improved approximation for the MOND rotation curve of disk galaxies that requires only knowledge of the Newtonian curve and the surface density.


Title: MOND in the early universe #3
Author: McGaugh, Stacy
Bibliographic Code: 1999AIPC..470...72M

Abstract
I explore some consequences of Milgrom's modified dynamics for cosmology. There appear to be two promising tests for distinguishing MOND from CDM: (1) the rate of growth of structure and (2) the baryon fraction. These should be testable with observations of clusters at high redshift and the microwave background, respectively.


Title: Modified Newtonian dynamics and its implications #4
Author: Sanders, R. H.
Bibliographic Code: 2001astro.ph..6558S

Abstract
Milgrom has proposed that the appearance of discrepancies between the Newtonian dynamical mass and the directly observable mass in astronomical systems could be due to a breakdown of Newtonian dynamics in the limit of low accelerations rather than the presence of unseen matter. Milgrom's hypothesis, modified Newtonian dynamics or MOND, has been remarkably successful in explaining systematic properties of spiral and elliptical galaxies and predicting in detail the observed rotation curves of spiral galaxies with only one additional parameter: a critical acceleration which is on the order of the cosmologically interesting value of cH_0. Here I review the empirical successes of this idea and discuss its possible extension to cosmology and structure formation.



Title: Modified Newtonian dynamics and its implications #5
Author: Sanders, R. H.; McGaugh, Stacy S.
Bibliographic Code: 2002ARA&A..40..263S

Abstract
Modified Newtonian dynamics (MOND) is an empirically motivated modification of Newtonian gravity or inertia suggested by Milgrom as an alternative to cosmic dark matter. The basic idea is that at accelerations below a_0 ≈ 10⁻⁸ cm/s² ≈ cH_0/6 the effective gravitational attraction approaches √(g_N a_0), where g_N is the usual Newtonian acceleration. This simple algorithm yields flat rotation curves for spiral galaxies and a mass-rotation velocity relation of the form M ∝ V⁴ that forms the basis for the observed luminosity-rotation velocity relation, the Tully-Fisher law. We review the phenomenological success of MOND on scales ranging from dwarf spheroidal galaxies to superclusters and demonstrate that the evidence for dark matter can be equally well interpreted as evidence for MOND. We discuss the possible physical basis for an acceleration-based modification of Newtonian dynamics as well as the extension of MOND to cosmology and structure formation.


Title: General Relativity and Quantum Cosmology, Astrophysics #6
Origin: ARXIV
Bibliographic Code: 2006gr.qc.....4047F

Abstract
Empirical implications of a teleparallel displacement of momentum between initial and final quantum states, using conformally flat quantum coordinates are investigated. An exact formulation is possible in an FRW cosmology in which cosmological redshift is given by 1+z=a_0^2/a^2(t). This is consistent with current observation for a universe expanding at half the rate and twice as old as indicated by a linear law, and, in consequence, requiring a quarter of the critical density for closure. A no CDM teleconnection model resolves inconsistencies between galactic profiles found from lensing data, rotation curves and analytic models of galaxy evolution. The teleconnection model favors a closed no Lambda cosmology. After rescaling Omega so that Omega=1 is critical density, for 225 supernovae, the best fit teleconnection no Lambda model with Omega=1.89 is marginally preferred to the best fit standard flat space Lambda model with Omega=0.284. It will require many observations of supernovae at z>1.5 to eliminate either the standard or teleconnection magnitude-redshift relation. In quantum coordinates the anomalous Pioneer blueshift and the flattening of galaxies' rotation curves appear as optical effects, not as modifications to classical motions. An exact form of Milgrom's phenomenological law (MOND) is shown.


Title: Are Dark Matter and Dark Energy the Residue of the Expansion-Reaction to the Big Bang? #7
Author: Ringermacher, Harry I.; Mead, Lawrence R.
Bibliographic Code: 2006gr.qc....10083R

Abstract
We derive the phenomenological Milgrom square-law acceleration, describing the apparent behavior of dark matter, as the reaction to the Big Bang from a model based on the Lorentz-Dirac equation of motion traditionally describing radiation reaction in electromagnetism but proven applicable to expansion reaction in cosmology. The model is applied within the Robertson-Walker hypersphere, and suggests that the Hubble expansion exactly cancels the classical reaction imparted to matter following the Big Bang, leaving behind a residue proportional to the square of the acceleration. The model further suggests that the energy density associated with the reaction acceleration is precisely the critical density for flattening the universe thus providing a potential explanation of dark energy as well. A test of this model is proposed.


Title: Modified Gravity Without Dark Matter #8
Author: Sanders, Robert
Bibliographic Code: 2007LNP...720..375S

Abstract
On an empirical level, the most successful alternative to dark matter in bound gravitational systems is the modified Newtonian dynamics, or MOND, proposed by Milgrom. Here I discuss the attempts to formulate MOND as a modification of General Relativity. I begin with a summary of the phenomenological successes of MOND and then discuss the various covariant theories that have been proposed as a basis for the idea. I show why these proposals have led inevitably to a multi-field theory. I describe in some detail TeVeS, the tensor-vector-scalar theory proposed by Bekenstein, and discuss its successes and shortcomings. This lecture is primarily pedagogical and directed to those with some, but not a deep, background in General Relativity.


Title: Modified gravity and the phantom of dark matter #9
Author: Brownstein, Joel Richard
Bibliographic Code: 2009PhDT.......172B

Abstract
Astrophysical data analysis of the weak-field predictions support the claim that modified gravity (MOG) theories provide a self-consistent, scale-invariant, universal description of galaxy rotation curves, without the need of non-baryonic dark matter. Comparison to the predictions of Milgrom's modified dynamics (MOND) provides a best-fit and experimentally determined universal value of the MOND acceleration parameter. The predictions of the modified gravity theories are compared to the predictions of cold non-baryonic dark matter (CDM), including a constant density core-modified fitting formula, which produces excellent fits to galaxy rotation curves including the low surface brightness and dwarf galaxies. Upon analyzing the mass profiles of clusters of galaxies inferred from X-ray luminosity measurements, from the smallest nearby clusters to the largest of the clusters of galaxies, it is shown that while MOG provides consistent fits, MOND does not fit the observed shape of cluster mass profiles for any value of the MOND acceleration parameter. Comparison to the predictions of CDM confirm that whereas the Navarro-Frenk-White (NFW) fitting formula does not fit the observed shape of galaxy cluster mass profiles, the core-modified dark matter fitting formula provides excellent best-fits, supporting the hypothesis that baryons are dynamically important in the distribution of dark matter halos.
Origin: WILEY

Title: Quasi-linear formulation of MOND #10
Author: Milgrom, M.
MNRAS Keywords: galaxies: kinematics and dynamics, cosmology: theory, dark matter
DOI: 10.1111/j.1365-2966.2009.16184.x
Bibliographic Code: 2010MNRAS.403..886M
(Full record: http://adsabs.harvard.edu/abs/2010MNRAS.403..886M)
Abstract
A new formulation of modified Newtonian dynamics (MOND) as a modified-potential theory of gravity is propounded. In effect, the theory dictates that the MOND potential φ produced by a mass distribution ρ is a solution of the Poisson equation for the modified source density ρ̂ = ∇·g/4πG, where g = ν(|g_N|/a_0)g_N, and g_N is the Newtonian acceleration field of ρ. This makes φ simply the scalar potential of the algebraic acceleration field g. The theory thus involves solving only linear-differential equations, with one non-linear, algebraic step. It is derivable from an action, satisfies all the usual conservation laws, and gives the correct centre-of-mass acceleration to composite bodies. The theory is akin in some respects to the non-linear Poisson formulation of Bekenstein and Milgrom, but it is different from it, and is obviously easier to apply. The two theories are shown to emerge as natural modifications of a Palatini-type formulation of Newtonian gravity, and are members in a larger class of bi-potential theories.

I hope these pieces of scholarly research give readers some idea of the many views of fellow scientists who agree in their pursuit of MOND theory. As a trained scientist and engineer, I have tried to make astute observations.

I have attempted, with earnest effort, to set aside feelings from my youth, when I lived across the street from a Pentecostal church. This was before TV; on Saturday nights, neighbors from far and near would arrive on my parents' front porch to watch the "Holy Rollers" in action!! At times when I review some of this "science debate," that "old time feeling" returns. I may be watching "Scientist Holy Rollers" in a MOND versus Dark Matter debate.

If You Could See Neutrinos...

...how far back into the universe could you see?

For the case of photons, we look at the cosmic microwave background (CMB) and use it to construct an image of the so-called 'surface of last scattering', which represents the moment when photons decoupled from, and were able to escape, the cosmic plasma. This corresponds to t ≈ 380,000 years (with t = 0 being the very beginning) in the current theory.

For the case of neutrinos, we would look to the cosmic neutrino background (CNB) and try to use it to construct a surface of last scattering for the neutrinos. At what time did neutrinos decouple from the cosmic plasma? Current theories predict that this decoupling occurred when the universe was less than a second old.

It would stand to reason, then, that one should be able to use neutrinos to 'look further back' into the universe. In other words, one would expect the last scattering surface for neutrinos to be further back than the last scattering surface for photons.

However, at least one paper finds the opposite to be true:

http://arxiv.org/abs/0907.2887

Somewhat counter-intuitively, the CNB last scattering surface turns out to be much closer and much thicker than the CMB surface; the paper suggests this has to do with the fact that neutrinos, unlike photons, can have different velocities from one another.

So perhaps the neutrino's LSS is inside the CMB's, but where might the dark matter LSS be?

Tuesday, May 10, 2011

Dark Matter – Cosmic Glue?

Ever since mankind was able to get beyond his basic needs for survival and move to higher levels of thinking, he has questioned the unknown. His basic powers of observation have answered many of the fundamental questions that were once explained away by the presence of a God. The stars are no exception and have not escaped the observations of Man. The question, "Is there more?", is alive and well.

While researching topics for this paper, I came across a series of lectures and articles by Professor Kim Griest of the University of California at San Diego. His presentation style and simple manner of explanation allow the general layman a basic understanding of the complex topic of dark matter. Is dark matter real, and if so, what is its purpose in the universe?

When Man began his observation of the stars, they were only points of light suspended in the night sky. They all moved across the sky through the seasons, all except one: the North Star. People used the positions of the stars to help them navigate across unknown oceans and return home. As time went on, Man became more curious about these points of light. In order for all of this to make sense, the Earth was placed at the center of the Universe.

It was not until men like Copernicus and Galileo argued that the Earth was not the center of the Universe, and that it revolved around the Sun, that new speculations about the night sky prompted much more intense observation of the stars. This, of course, was not without religious persecution of these ideas. With the invention of the telescope, Man was able to get a little closer to the stars, but early telescopes could resolve only the occasional planet and the Sun; other stars remained mere points of light.

As telescopes evolved, other features became apparent. One such feature was the nebulae, or cloud-like smudges. For a while these smudges were thought to be composed of gas, until much bigger telescopes were developed that could give us a much closer look. It turned out that many of these smudges were clusters of stars. If we look at our own Milky Way, we can see such smudges very prominently with the naked eye on clear, dark nights, and upon examination they are indeed clusters of millions of stars. The better the telescopes, the farther we could see, revealing that smudges beyond our own galaxy were galaxies themselves.

Unfortunately, all new discoveries were limited by visual observation: if something is to be seen, there must be light. By measuring the light of stars, we were able to estimate the luminosity of stars and of whole galaxies, which allowed us to determine distances far greater than anyone could imagine. Any discovery of new information was limited only by our technology. Eventually we became able to determine whether stars were traveling toward us or moving farther away by the use of Doppler techniques. The shorter the wavelength, the more compressed the wave of light, shifting it toward the blue end of the spectrum; the longer the wavelength, the more it shifts toward the red, indicating that the star is moving away. Today, we use more advanced techniques that allow us to "see" into parts of the electromagnetic spectrum invisible to our eyes: radio, microwave, infrared, ultraviolet, X-ray, and even gamma rays. We are also able, through spectroscopy, to determine the composition of many stars, with hydrogen being the main fuel of a star. Regardless of the technique, we were still relying on light as our main source of information.

Since stars and star systems could be found in specific regions of the sky, it was thought that the Universe was static. Around 1917, Albert Einstein's equations implied that the Universe must be expanding or contracting. (Ronald W. Clark; Einstein: The Life and Times; 1971) Einstein's General Theory of Relativity was the first theory to be applied to the whole Universe. Yet Einstein himself thought this implication was wrong. His theory depended on two numbers: Hubble's constant, which addresses the expansion rate, and Omega, which involves the density of mass in the Universe. (Kim Griest, "Dark Matter and the Fate of the Universe", Feb. 22, 2000) Whatever he did with these numbers, the Universe was either expanding or contracting. This was not good enough for him, so he added a third number, the cosmological constant. With this number he could balance the others, making the Universe static.

But Hubble showed that the Universe really was expanding, leading Einstein to call his cosmological constant his "greatest blunder". We have found that expansion is going on, yet the galaxies are not flying through space; rather, space itself carries them apart while bound structures hold their relative arrangements. So astronomers began looking not just at the stars and galaxies but at the space in between them: if expansion happens there, then there must be something that makes up this space. The best way to describe this is the raisin bread analogy, in which the raisins do not move through the dough, but as the bread bakes and the dough expands, the raisins are carried away from one another. So space itself was now thought to have substance.

Another observation that supports this idea is the fact that galaxies are rotating at tremendous speeds, and yet even at such speeds the galaxies remain intact instead of being flung apart into space. By measuring the total luminosity of a galaxy, we can estimate somewhat accurately the total number of stars in that galaxy. What we find is that there is not nearly enough mass to keep the galaxy intact: there are not enough stars for gravity to keep them bound in their orbits around the galaxy at the speeds they are observed to rotate. So in order for there to be enough mass, it must be coming from something else, by a factor of 10 or more. (Kim Griest, Feb. 22, 2000) This "something else" forms a "halo" that surrounds the entire galaxy and completely dominates its gravity. It is suspected to be matter that we cannot see, hence the term "dark matter". Illustration 1 is an artist's rendition of what a galaxy might really look like if we could see this dark matter.


Illustration 1
(zebu.uoregon.edu)

So when trying to calculate the total mass, we find that the mass of the disk is not enough, as shown in the curve labeled "disk" below. If we add the mass of the gas in the galaxy, that plus the mass of the stars is still not enough. Therefore, another curve is produced when we calculate the mass actually required to sustain the observed rotation. (See the rotation curve graphs below.) This graph is based on a well-studied galaxy known as NGC 3198. A worked example of the implied mass follows below.
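For a circular orbit, the mass enclosed within radius r follows from M(<r) = v²r/G. The speed and radius below are round numbers of about the right order for a galaxy like NGC 3198, not the measured values:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
KPC = 3.086e19    # kiloparsec, m

v = 150e3         # assumed flat rotation speed, m/s
r = 30 * KPC      # assumed outermost measured radius, m

m_enclosed = v**2 * r / G                                 # M(<r) = v^2 r / G
print(f"M(<r) ~ {m_enclosed / M_SUN:.1e} solar masses")   # ~1.6e11 M_sun
```

If counting the stars and gas accounts for only a tenth or so of this, the remainder is what gets attributed to the halo.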

So the race is on to find out what this "stuff" is, how much there is, and what its basic form might be. Two classes of candidates being researched are MACHOs (Massive Astrophysical Compact Halo Objects) and WIMPs (Weakly Interacting Massive Particles). WIMPs seem to attract the most attention. These particles would be so weakly interacting that they could pass through ordinary matter with almost no interaction, slipping through the space within atoms.

The problem is that with all the research and millions of dollars invested, we have yet to find a particle that fits the picture astronomers and physicists are looking for. The main culprit is the background noise, radiation coming from space itself; we are still looking for a signal that can be attributed to the elusive particle. Great laboratory facilities with massive instruments have been built deep within the Earth to shield detectors from this background. Even so, the particle remains a mystery. So the question remains: is dark matter real? The math says it is. The observation that galaxies would otherwise fling their contents into space says it is. So what is it?

At the risk of sounding foolish, as most novices to this subject are, I venture to use the old saying, "can't see the forest for the trees." I would love to see dark matter discovered and solidly identified. But is it possible that dark matter is not a single particle at all but a "soup of particles," and that what we are doing is like trying to identify tomato soup by isolating a single sugar? It is like trying to identify water in an ocean by attempting to isolate a hydrogen neutron. There isn't one! The blame for this failure falls on the background noise one gets. Is it possible that all of the background noise put together is in fact the dark matter we have been searching for? If so, it becomes the forest that we can't see.

Regardless, the subject of dark matter is a fascinating one, as dark matter seems to be the glue that holds everything together. It has a stretching ability which could account for the fixed relative positions of the observable objects in space. If that is so, then it should reach a point where it cannot stretch any further and begins to contract again toward "the big crunch", and if this is true, then the universe really does expand and contract. Maybe Einstein was not really wrong after all when he first said so.

Bibliography

Clark, Ronald W.; Einstein: The Life and Times; World Publishing Corp.; 1971, 1984; pp. 266-271

Griest, Kim; University of California at San Diego; "Dark Matter and the Fate of the Universe"; February 22, 2000

Griest, Kim; University of California at San Diego; "Mystery of Empty Space"; March 7, 2010

Monday, May 9, 2011

Tully-Fisher Method for Determining Distances to Galaxies

Previous methods for measuring distances to galaxies include parallax angles and standard candles. However, accurate parallax measurements are limited to stars within about 500 pc; beyond that distance the angles become too small to measure. Standard candles are astronomical objects of known luminosity: by measuring the brightness of a standard candle, observers can calculate its distance using the inverse-square law. A problem with standard candles, however, lies in determining the absolute magnitude of the object. The challenge is finding objects that are good candidates, that is, objects that are luminous, easily identifiable, and relatively common. The more confidently we know an object's luminosity, the more certain we can be of its distance.
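The inverse-square step is simple enough to show directly; the luminosity and flux below are made-up values for illustration:

```python
import math

L_SUN = 3.828e26              # solar luminosity, W
L = 1.0e4 * L_SUN             # assumed standard-candle luminosity, W
F = 1.0e-12                   # hypothetical measured flux, W/m^2

# Inverse-square law: F = L / (4 pi d^2)  =>  d = sqrt(L / (4 pi F))
d_m = math.sqrt(L / (4 * math.pi * F))
print(f"d ~ {d_m / 3.086e16:.2e} pc")   # ~1.8e4 pc
```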


An alternative to the aforementioned techniques is the Tully-Fisher relation. The attached paper, published in Astronomy and Astrophysics in 1977, describes a method for determining the distances to galaxies. R. Brent Tully and J. Richard Fisher determined that the spread in wavelength of the light emitted by neutral hydrogen in a galaxy is related to the luminosity of the galaxy: the greater the spread, the more luminous the galaxy. Radiation from the approaching side of a rotating galaxy is shifted to shorter wavelengths, while light from the receding side shifts to longer wavelengths, broadening the line by an amount proportional to the rotation speed of the galaxy.


I found a visual to illustrate this point at http://lifeng.lamost.org/courses/astrotoday/CHAISSON/AT324/HTML/AT32402.HTM

Figure 24.11 A galaxy's rotation causes some of the radiation it emits to be blueshifted and some to be redshifted (relative to what the emission would be from a nonrotating source). From a distance, when the radiation from the galaxy is combined into a single beam and analyzed spectroscopically, the redshifted and blueshifted components combine to produce a broadening of the galaxy's spectral lines. The amount of broadening is a direct measure of the rotation speed of the galaxy.


The more massive the galaxy, the faster stars move in their orbits at the galaxy's edge. The paper summarizes the balance between gravitational and centripetal forces as v² = GM/R, and notes that luminosity is proportional to mass as well as to the area of the disk. Tully and Fisher used this method to calibrate the distances of galaxies out beyond 100 Mpc. A hedged numerical sketch of how the relation turns a line width into a distance follows.
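In the sketch below, the calibration constant A, the rotation speed, and the flux are all invented for illustration; the real calibration is done against galaxies of independently known distance:

```python
import math

A = 1.6e27          # assumed calibration constant, W / (km/s)^4 (invented)
v = 220.0           # rotation speed from 21 cm line width, km/s (invented)
F = 1.0e-14         # measured flux, W/m^2 (invented)

L = A * v**4                              # Tully-Fisher: L proportional to v^4
d_m = math.sqrt(L / (4 * math.pi * F))    # inverse-square law
print(f"d ~ {d_m / 3.086e22:.0f} Mpc")    # ~180 Mpc
```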


Recent studies: Since its introduction, there has been continued research in this area. Another paper, "Evolution of the Tully-Fisher Relation up to z = 1.4", published last year in Astronomy and Astrophysics, studies the Tully-Fisher relation at high redshift in the B, V, R, I, and K bands. I will review this paper and submit a follow-up to this blog.

Mass-Luminosity Relation for Spiral Galaxies

Title:
The Mass-Luminosity Relation for Spiral Galaxies.
Authors:
Genkina, L. M.
Publication:
Soviet Astronomy, Vol. 14, p.732
Publication Date:
02/1971
Origin:
ADS
Bibliographic Code:
1971SvA....14..732G

Around the time this paper was published, a fairly accurate mass-luminosity relation existed for both elliptical and lenticular galaxies. There were a few proposed mass-luminosity relations for spiral galaxies; however, they differed from one another and showed quite large dispersion.

One of the suggestions was that of Holmberg, who proposed that the color index be introduced as a supplementary parameter. The author determined that for statistical mass determinations the "mass-luminosity-color" relation had no essential advantage over the ordinary mass-luminosity relation. He then tried dividing spiral galaxies into subtypes and concluded that the introduction of new parameters did not appear to reduce the dispersion in the mass-luminosity relation.

Genkin and the author of this paper derived a mass-luminosity relation for spiral galaxies based on a catalog of masses compiled from data published prior to 1967. Radio astronomy was becoming a popular way to determine the masses of galaxies at this time (right around the time Jocelyn Bell Burnell made her discovery). He notes that Vorontsov-Vel'yaminov pointed out a systematic disparity between masses determined from radio and from optical observations and advised considering the data separately (the optical observations had two disadvantages).

At this point he goes on to show mass-luminosity relations using first only optical observations and then only radio observations. He concludes that the closeness of the lines in these relations suggests that a mass-luminosity relation for spiral galaxies can be established with reasonable accuracy. Combining Burbidges' and Roberts' galaxies, he found that the regression line is very close to the equation determined by Vorontsov-Vel'yaminov for spiral and irregular galaxies. However, for the 16 Type Ir I galaxies the author worked on, the slope was slightly smaller due to the exclusion of irregular galaxies.

I did not see any posts on this topic and figured I would choose a paper from very early in the investigation of spiral galaxies. It also shows the impact of improved methods, such as radio versus optical observations.

Saturday, May 7, 2011

Tully-Fisher Relation

For our last blog post, given the options of MOND, mass-luminosity, and Tully-Fisher, it seems that MOND has been the most popular. I figured I'd mix things up just a bit. After scouring through at least a solar mass worth of articles on both mass-luminosity and Tully-Fisher, I picked out this one from a few years back.

The full article can be accessed here.

The data they're using here come from AEGIS. The All-wavelength Extended Groth strip International Survey was an international collaboration between scientists and observatories intended to bring together larger data sets than any one group had access to. Contributors included the NRAO's Very Large Array radio telescope in New Mexico, NASA's infrared Spitzer Space Telescope, Caltech's Palomar Observatory telescopes, the three imaging instruments at the aptly named Canada-France-Hawaii Telescope on Mauna Kea, the optical telescopes at Keck Observatory, also on Mauna Kea, NASA's Hubble Space Telescope, NASA's ultraviolet GALEX satellite, and NASA's Chandra X-ray Observatory satellite. Data taken from various spots on Earth as well as from orbit, in a multitude of wavelengths and methodologies, gave AEGIS a remarkably varied data set.

We've learned that there is a good relationship between the rotation velocity and the luminosity of a galaxy. To be more precise, there is a relationship between rotation velocity and mass, and there is also a relationship between mass and luminosity; hence, there is a relationship between rotation velocity and luminosity as well.



This diagram shows us the basic idea behind the relationship, its measurements, and how it has been used to determine the Hubble constant. As we all experienced in fitting our own rotation velocity curves, not all methods are equally successful. This article achieves a closer-fitting line because the authors use measurements of baryonic mass instead of inferring mass from luminosity alone. Furthermore, they consider more than pure rotation velocity by adding in the velocity dispersion, which is what all the σ measurements refer to; it allows a more accurate assessment of the true kinetic energies involved. As they refine their constants and their data, more evidence is added to the relationship and the best-fit line becomes increasingly precise. I am curious how they were able to estimate the baryonic mass; they seem to achieve a greater level of precision given how well they fit their data, but the article claims this is beyond its scope.
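One published way of folding the dispersion in (used in AEGIS-era work) is a quadrature combination of the form S = sqrt(0.5 v² + σ²); take the exact 0.5 weighting here as my assumption, to be checked against the paper itself:

```python
import math

def combined_speed(v_rot_kms, sigma_kms):
    # Quadrature combination of ordered rotation and random motions;
    # the 0.5 weighting is an assumption modeled on the S_0.5 statistic.
    return math.sqrt(0.5 * v_rot_kms**2 + sigma_kms**2)

# A dispersion-dominated galaxy still gets a meaningful velocity scale:
print(combined_speed(v_rot_kms=50.0, sigma_kms=80.0))   # ~87 km/s
```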

This relationship and this data set have also been part of developing the current understanding of galaxy evolution and predicting mergers. Since a lot of this data came from Mauna Kea observatories, the University of Hawaii's observatory also put together some of the information. That website can be found here, but the most interesting video is below.

Thursday, May 5, 2011

Final Paper

(Note: the Dark Matter & WIMPs pictures are missing.)

The XENON100 dark matter detector recently released data from its first one hundred days of operation; it failed to find evidence of dark matter. The detector is a tank of 161 kg of chilled xenon buried far underground in Italy, beneath an average of about 1.4 km of rock and layers of ultra-pure metal, to prevent cosmic rays from interacting with the xenon and giving false readings. The hope was that a change in the spin coupling of a nucleon could be observed, providing evidence that the xenon in the detector was reacting with WIMPs (Weakly Interacting Massive Particles). This null result was a significant setback, although not a total refutation of the theory. Dark matter theorists once again failed to observe any interaction or evidence of "dark matter", although there are multiple possible candidates to look for.

In analyzing galactic rotation curves, it was found that galaxies often fail to match the Keplerian prediction of motion. The visible matter of galaxies tends to be clumped toward the center, as shown in the picture below. According to Newton's Law of Universal Gravitation, there should be a significant reduction in rotation speed at the fringes of each galaxy: in the picture below, the spiral arms should be moving significantly slower than the stars closer to the core.


[4]

Newton’s Law of Universal Gravitation has worked extremely well and has been verified countless times, the exception being extremely strong fields and high velocities, in which case the general relativity correction is required. The expected and observed galactic rotation speeds are far below the threshold that requires relativistic correction, and therefore the orbital velocity should fall off as the inverse square root of the distance from the center.


Setting the gravitational force equal to the centripetal force for a circular orbit, GMm/r² = mv²/r, gives v = √(GM/r). This relationship shows that the velocity of an orbiting object is a function of its distance from the center of mass and of the enclosed mass. A short sketch of the resulting curve follows.
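Here is a minimal sketch of the Keplerian expectation, treating the luminous galaxy as a central point mass (a deliberate oversimplification) with an assumed 10¹¹ solar masses:

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

M = 1.0e11 * M_SUN                          # assumed luminous mass
r = np.array([2.0, 5, 10, 20, 40]) * KPC    # sample radii

v_kms = np.sqrt(G * M / r) / 1e3            # v = sqrt(GM/r), Keplerian falloff
for ri, vi in zip(r / KPC, v_kms):
    print(f"r = {ri:4.0f} kpc  ->  v = {vi:4.0f} km/s")
```

The predicted speed drops from roughly 460 km/s at 2 kpc to about 100 km/s at 40 kpc, while observed curves stay roughly flat.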




The observational data on galactic rotation contradict the theory obtained by combining Newton’s Law of Universal Gravitation with the centripetal force for objects in circular motion. This is demonstrated in the graph below: [9]


[12]

This leaves a problem that has yet to be resolved: either Newton’s Law of Universal Gravitation needs modification for areas of low acceleration, as proposed by MOND, or there is an as-yet-unobserved form of matter, not predicted by the Standard Model, that produces the gravitational effects seen. [6]

In order to obtain accurate inputs for a universal gravitation calculation, a reasonable approximation of the distance must be made. The distances involved in galactic rotation studies are beyond the reach of simple radar, parallax, and even main-sequence stars used to exploit the absolute versus apparent luminosity relationship. The distances are so great that we are required to use a combination of Cepheid variable stars and Type Ia supernovae to obtain a reasonable approximation of the distance between the Earth and the galaxy of interest. [9]

Cepheid variables are a class of stars whose luminosity varies regularly, with a period that is related to the star's absolute luminosity. The picture below shows the variation in luminosity as a time-dependent sequence. [7]


[2]

The variation in the luminosity of a Cepheid variable is very regular; a plot of luminosity versus time shows a periodic, almost harmonic oscillation.


[8]

Cepheid variables within our galaxy were then calibrated by comparing absolute luminosity with apparent luminosity to generate a plot of luminosity versus period, and a simple (log-)linear relationship was found. This allows us to determine the absolute luminosity from the period of oscillation; astronomers can then compare the absolute luminosity to the apparent luminosity and calculate distance. A hedged numerical sketch follows below.

[7]
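A sketch of the arithmetic; the period-luminosity coefficients below are approximate placeholder values, not a calibrated fit, and the example star is invented:

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    # Absolute magnitude from an assumed period-luminosity relation
    # (coefficients are illustrative placeholders).
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 log10(d / 10 pc)
    return 10 ** ((apparent_mag - abs_mag + 5.0) / 5.0)

# Hypothetical Cepheid: 30-day period, apparent magnitude 22.
print(f"{cepheid_distance_pc(30.0, 22.0):.2e} pc")   # ~2.8e6 pc
```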


Distances greater than 10,000,000 parsecs require another method of distance calculation. The Tully-Fisher relation should not be used here, as it involves galactic rotation rates and would be a confounding variable in the analysis: we cannot use galactic rotation rates to determine distances that we then use to measure galactic rotation rates. Instead, bootstrapping from the periodicity of Cepheid variables, Type Ia supernovae can be used: their brightness is measured, the width of the light curve is used to determine the absolute brightness, and the distance follows. [7, 9]


[9]
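Once the light-curve width has standardized the peak absolute magnitude, the distance is again just the distance modulus. The M ≈ -19.3 peak magnitude is a commonly quoted round value, and the example supernova is invented:

```python
import math

def snia_distance_mpc(apparent_peak_mag, abs_mag=-19.3):
    # Distance modulus: m - M = 5 log10(d / 10 pc)
    d_pc = 10 ** ((apparent_peak_mag - abs_mag + 5.0) / 5.0)
    return d_pc / 1.0e6

# Hypothetical supernova peaking at apparent magnitude 18:
print(f"{snia_distance_mpc(18.0):.0f} Mpc")   # ~290 Mpc
```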

This distance, together with the observed angular width of the galaxy, allows for a reasonable approximation of the radius of the galaxy of interest.

Mass Estimate
Mass can be estimated using Newton’s law of universal gravitation. Astronomers may select an easily observable section of a galaxy and observe its movement through time. The observed stars should be fairly close to the core of the galaxy, as that is where Newton’s law of universal gravitation closely matches the observed rotation rates. If the observed stars were beyond a certain threshold, the galaxy would appear to have much more mass than it visibly does; this is the primary problem that must be sorted out using either the dark matter or the MOND theory. [9]

Astronomers must also correct the mass of galaxies using various forms of spectroscopy to account for matter that is not visible. The total amount of interstellar medium can be estimated using 21 cm radiation; this line reveals gas clouds that are difficult to observe directly, since they emit little visible light and absorb very little. In addition, the luminosities of the galaxies, corrected for distance, are used to calculate the mass of the observable galaxy, with estimates added to account for black holes and other unobservable but evidence-based phenomena. [9]

The two primary explanations for the difference between the observed galactic rotation and the rotation calculated from Newton’s law of universal gravitation are cold dark matter (CDM) and Modified Newtonian Dynamics (MOND). Currently, the majority of resources, in time and personnel, are devoted to exploring CDM. [8, 11]

Fritz Zwicky originally postulated dark matter in 1933. He was observing the motion of galaxies in the Coma cluster and noticed a significant mismatch between the motion of the galaxies and their observable mass according to Kepler’s third law. He suggested the presence of "dark matter", a type of matter that exerts the gravitational force associated with mass but does not interact with light. Dark matter is believed to be the most likely source of the difference between observed and theoretical rotation curves of galaxies. Current cosmological models have dark matter accounting for 23% of the universe, while "dark energy" accounts for another 72%; together, our "observable" universe accounts for only 5% of the total "stuff" out there. There are several proposals for what could constitute dark matter, including WIMPs, neutrinos, axions, or some other supersymmetric particle, and several projects are currently searching for these particles, including the CERN Axion Solar Telescope looking for axions, the Antarctic Muon And Neutrino Detector Array looking for neutrinos, the Large Hadron Collider looking for supersymmetric particles, and the XENON100 detector looking for WIMPs. CDM is clearly the odds-on favorite in terms of funding and manpower as a solution to galactic rotation curves. [3, 5, 9, 12, 13]

Current dark matter research focuses on different dark matter candidates, specifically WIMPs (Weakly Interacting Massive Particles). These particles are considered prime candidates for dark matter if they can be detected. The XENON100 project was set up in an underground laboratory in Italy for this purpose. A detector vessel was filled with chilled xenon and isolated behind rock, concrete, iron, etc.; this shielding prevents ordinary cosmic rays from interacting with the xenon. WIMPs were expected to reveal themselves by interacting with the xenon, causing the emission of a photon that the shielded underground detector could register. The XENON100 project released its first 100 days of data, reporting a total of 6 interactions: 2 were shown to be electronic noise, 3 matched the prediction for stray cosmic radiation, and 1 was unaccounted for. This did not match the researchers' expectations, and they have since attributed the one remaining detection to another electronic error. [3, 5, 12, 13]

The Modified Newtonian Dynamics (MOND) theory was originally proposed by Moti Milgrom to explain the galactic rotation curve. He proposed that in very low-acceleration situations Newton's law of gravity must be modified to account for the galactic rotation curve. The gravitational acceleration in "low gravity" regions can be modeled to fit the observational data: Newtonian gravity is modified by the term Mu(a/a0), where a0 is a new parameter and Mu is an asymptotic interpolation function that has yet to be finalized, though there are three main candidate forms (1/x^2, 1/x, e^-x). The asymptotic nature of Mu makes the modification to Newtonian gravity approach zero as the gravitational field grows stronger, and therefore matches the observational evidence for the inner part of galactic rotation curves. MOND even accounts for pressure-supported systems, showing that they are finite, with density falling rapidly as 1/r^4.

The most impressive part of MOND is its ability to predict the galactic rotation curve based solely on observable evidence in the near-infrared region of the spectrum. This is not just a general averaged rotational velocity but very specific values based on the clumping of observable matter. MOND even accounts for variation in the M/L relationship of the Tully-Fisher relation based on the color of galaxies, and a variation in the surface brightness of a galaxy has a corresponding variation in both the MOND curve and the observed rotation curve. MOND is not without its problems: it fails to account for the motion in certain superclusters unless a much smaller amount of "dark matter" is also included. The Bullet Cluster provides an excellent example of the problems with MOND; the "dark matter" effects, such as gravitational lensing, appear to be centered far away from the observable mass, indicating the presence of an undetectable form of matter. [6, 11]
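To make the asymptotic behavior concrete, here is a minimal Python sketch comparing a Newtonian rotation curve with a MOND curve for a point mass. It uses the commonly cited "simple" interpolation function Mu(x) = x/(1+x), one standard choice rather than any finalized form, and the point-mass setup and numbers are illustrative assumptions.

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
A0 = 1.2e-10     # MOND acceleration scale, m/s^2 (commonly quoted value)
M_SUN = 1.989e30
KPC = 3.086e19   # meters per kiloparsec

def v_newton(mass_kg, r_m):
    """Newtonian circular speed: v = sqrt(G*M/r); keeps falling at large r."""
    return math.sqrt(G * mass_kg / r_m)

def v_mond(mass_kg, r_m):
    """Circular speed with the 'simple' interpolation mu(x) = x/(1+x).
    Solving mu(a/a0)*a = g_N for a gives a quadratic whose positive root is
    a = (g_N + sqrt(g_N^2 + 4*g_N*a0)) / 2."""
    g_n = G * mass_kg / r_m ** 2
    a = 0.5 * (g_n + math.sqrt(g_n ** 2 + 4 * g_n * A0))
    return math.sqrt(a * r_m)

# Hypothetical example: a 1e11 solar-mass galaxy treated as a point mass.
m = 1e11 * M_SUN
for r_kpc in (2, 10, 30, 100):
    r = r_kpc * KPC
    print(f"r = {r_kpc:4d} kpc  Newton: {v_newton(m, r) / 1e3:6.1f} km/s"
          f"  MOND: {v_mond(m, r) / 1e3:6.1f} km/s")
```

At large radii the MOND curve flattens near (G*M*a0)^(1/4), about 200 km/s for these inputs, while the Newtonian curve keeps falling; that flattening is the behavior observed rotation curves show.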



(Figure: Spiral Galaxy NGC 1232 [4])

The primary issue that I have with the theory of dark matter is epistemological. The MOND theory merely requires modification of a human-created law for explaining natural phenomena; as new information, such as galactic rotation curves, comes into play, we will most likely modify the theory to incorporate it, which is exactly what MOND does. CDM, by contrast, requires us to postulate the existence of a particle that has never been observed, is not part of the standard model, and continually shows null results from increasingly precise, and expensive, measurements. CDM may end up being the answer to the question of galactic rotation curves, but it is the responsibility of the scientific community to act on falsifiable theories. MOND is falsifiable within our own solar system: there exist points where the gravitational fields of the Sun and some other massive body, such as Jupiter, essentially cancel out. In this very small region the residual acceleration can be measured and determined to follow either Newton's law of gravity or MOND. This could be done for less than the cost of any one of the four experiments currently searching for dark matter particles listed earlier.
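As a rough illustration of where such a low-acceleration region sits, this sketch finds the point on the Sun-Jupiter line where the two bodies' Newtonian pulls cancel. The isolated two-body setup is a simplifying assumption; in practice the fields of the other planets and the galaxy also contribute.

```python
import math

M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
D = 7.785e11       # mean Sun-Jupiter distance, m (~5.2 AU)

# On the line between two bodies the pulls cancel where
# G*M_sun / x^2 = G*M_jup / (D - x)^2  =>  x = D / (1 + sqrt(M_jup / M_sun))
x = D / (1 + math.sqrt(M_JUP / M_SUN))
print(f"Balance point: {x:.3e} m from the Sun "
      f"({(D - x) / 1e9:.1f} million km from Jupiter)")

# Near this point the net Newtonian field drops below the MOND scale
# a0 ~ 1.2e-10 m/s^2, which is what makes it a candidate test region.
```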

The problem with searching for dark matter is that you may be searching for something that isn't there, much as Michelson searched for the ether. Michelson spent years refining the accuracy of his detector and sorting through data in order to measure something that was not there; at the time, all observational evidence suggested that the propagation of a wave required a medium. The question of when to change direction and seek a new explanation for observation remains a key, and as yet unsolved, aspect of the scientific process.

References
[1] Baez, J. (2006, August 16). This Week's Finds in Mathematical Physics [Figure]. Retrieved May 4, 2011, from http://math.ucr.edu/home/baez/week238.html
[2] Cepheid Variable Stars [Figure]. (n.d.). Retrieved May 3, 2011, from http://www.optcorp.com/edu/articleDetailEDU.aspx?aid=1646
[3] Cowen, R. (n.d.). Underground Experiment Fails to Find Dark Matter. Science News. Retrieved April 27, 2011, from http://www.wired.com/wiredscience/2011/04/xenon100-dark-matter/
[4] FORS1. (2004). Spiral Galaxy NGC 1232 [Image]. Retrieved from http://apod.nasa.gov/apod/ap040125.html
[5] How does AMANDA work? (n.d.). Public Information on AMANDA. Retrieved May 4, 2011, from Barwick Group, School of Physical Sciences, University of California Irvine website: http://amanda.uci.edu/public_info.html
[6] McGaugh, S. (n.d.). The MOND Page. Retrieved May 3, 2011, from University of Maryland website: http://www.astro.umd.edu/~ssm/mond/
[7] Nave, C. R. (n.d.). Cepheid Variables. Retrieved May 4, 2011, from Georgia State University website: http://hyperphysics.phy-astr.gsu.edu/hbase/astro/cepheid.html
[8] Newton, W. (2011, Spring). Astronomy 561 Notes. PowerPoint presentation at PHYS 561, Mesquite, TX.
[9] Newman, P. (2010, April 19). Dark Matter. Retrieved May 5, 2011, from NASA website: http://imagine.gsfc.nasa.gov/docs/science/know_l1/dark_matter.html
[10] Sanders, R. H. (2009). Modified Newtonian Dynamics: A Falsification of Cold Dark Matter. Advances in Astronomy, 2009, article ID 752439.
[11] Schilling, G. (2007, April). Battlefield Galactica: Dark Matter vs. MOND. Sky & Telescope. Retrieved from http://www.allesoversterrenkunde.nl/artikelen/617-Battlefield-Galactica-Dark-Matter-vs-MOND.html
[12] Schumann, M. (2011, May 5). XENON100 Releases First Results. Retrieved May 3, 2011, from Rice University website: http://xenon.physics.rice.edu/
[13] Sugarbaker, A. (2007, December 2). Early Evidence for Dark Matter: The Virial Theorem and Rotation Curves [Figure 1]. Retrieved May 4, 2011, from Stanford University website: http://large.stanford.edu/courses/2007/ph210/sugarbaker2/

Tuesday, May 3, 2011

Cosmological observations in a modified theory of gravity (MOG)

J.W. Moffat (a,b) and V.T. Toth (a), April 14, 2011

a Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada
b Department of Physics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada

The paper in question presents the modified gravity theory, or MOG, as a theory devoid of exotic dark matter; the authors have used MOG to model galaxy cluster masses, rotation curves, and lensing. They agree that adiabatic cold dark matter (ACDM) does indeed provide an explanation for cosmological phenomena, but in their opinion it creates a massive discrepancy, or hole, in the universe: roughly 95% of the matter in the universe would be undetectable to us at this time. The authors claim that they can use MOG to answer the problems encountered by the standard gravitational theory.
The modified theory of gravity is based on scalar, tensor, and vector fields, and it has evolved into the Scalar-Tensor-Vector Gravity (STVG) theory. In principle, the dampened structure growth at low values of k, together with a lesser amount of Silk damping (a process that smooths out density variations in the universe and shapes the Cosmic Microwave Background (CMB)), produces results very similar to the calculations of the adiabatic cold dark matter theory, though they do differ.

The MOG calculations are more jagged than the predicted ACDM graphs. This is believed to occur because much more data is required to fill in and smooth the data series; as more and more galaxies are surveyed, the curve is predicted to become a better fit. There is also the problem of the dampened power spectrum at both high and low values of k when subgalactic distance scales are used. Over the MOG and CMB spectrum, a change in the value of Mukhanov's constant produces results similar to the dark matter curves.

The authors are convinced that their Modified theory of Gravity (MOG) can indeed explain the discrepancies of Newtonian mechanics. It seems that simply varying constants in the equations can indeed fit the observable data, as we ourselves have found through our own homework calculations. Only time will tell.