Saturday, August 8, 2009
Red Shift Z In The Wild II
[Chart: Redshift z and distance for 2003]
[Chart: Redshift z and distance for 2006]
Reader jeant8 commented that the unusual redshift values shown in my previous redshift post may be due to changes in the accepted values of certain constants over the years in which the data was collected. It's an interesting idea, and to test it I split out the years that had several observations posted by NASA: 2003 and 2006. The results are shown in the charts above. The data used for these charts is the same data used in the previous post.
The 2003 collection almost looks right. There's only one problem: redshift values of 3.8 and 4.3 are both assigned a distance of 12 billion light-years. That's a change of ~12% in redshift z with no change in distance. There should be at least ~0.25 billion light-years of difference between those two redshift z values.
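As a rough cross-check of that gap, here is a minimal sketch using astropy's cosmology module, assuming a flat ΛCDM cosmology with H0 = 70 km/s/Mpc and Ωm = 0.3 (my own parameter choice, since the press releases don't state which values they used):
=========================================
PYTHON SKETCH
=========================================
# Minimal sketch: lookback-time gap between z = 3.8 and z = 4.3.
# The cosmological parameters are an assumption; the press releases
# do not say which values were used to compute their distances.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

t_low = cosmo.lookback_time(3.8)      # roughly 11.9 Gyr
t_high = cosmo.lookback_time(4.3)     # roughly 12.1 Gyr
print(t_low, t_high, t_high - t_low)  # gap on the order of 0.2 Gyr,
                                      # i.e. ~0.2 billion light-years of light travel
=========================================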
The 2006 data seems more problematic. Not only are the curves not smooth, there's also a case where a lower redshift is assigned to a much greater distance than two higher redshift values.
So it seems the problems with NASA's redshift values cannot be explained by changes over the years in redshift interpretation: values from the same timeframe are inconsistent. It should be pointed out that the values in the two charts presented here not only come from the same timeframe, they come from the same team of astronomers using the same equipment. That would seem to weed out a lot of possible explanations for why the data appears to be wrong.
A friend brought your post to my attention.
You really want to do a scatter plot, not a bar plot for this type of data. In Apple Numbers '08, that's the option that just shows the dots. Then just plot distance vs. z or vice versa. Then it looks a lot better. BTW, thanks for including a table of the data you used.
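For anyone who wants to try it outside Numbers, here is a minimal scatter-plot sketch in Python with matplotlib; the arrays are placeholders to be replaced with the values from the table in the previous post:
=========================================
PYTHON SKETCH
=========================================
# Minimal sketch: scatter plot of distance vs. redshift.
# The z and distance values below are placeholders, not the actual table.
import matplotlib.pyplot as plt

z = [0.02, 0.18, 3.8, 4.3]              # placeholder redshifts
distance_gly = [0.25, 2.2, 12.0, 12.0]  # placeholder distances, billions of light-years

plt.scatter(z, distance_gly)
plt.xlabel("Redshift z")
plt.ylabel("Distance (billions of light-years)")
plt.title("Distance vs. redshift")
plt.show()
=========================================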
I had complained to some NASA outreach people years ago (when H0 was being revised almost monthly) about including a distance number in press releases and how misleading it was. Which distance? Lookback time? How far away the object is *today*, how far away it was when the light left, or how far away it will be if I try to travel to it starting today? The basic reply was that people want a number they can identify with.
An earlier commenter noted typos (Mly vs. Mpc). The other, more annoying issue is that the press office will often ask an individual researcher to provide a distance, and that researcher may use their 'favorite' H0 value, not necessarily the current one.
Publishing distances was such an annoyance in older catalogs that the professional practice today is to publish just the z value since that is strictly based on observation.
Bottom line: don't use press releases as your source for physical science data. Track down the original datasets, which are publicly available for all modern NASA-funded missions and projects since the 1990s, e.g. the Sloan Digital Sky Survey.
You can see the scatter in Hubble's original data at Ned Wright's site.
Thank you for your help with this Mr. Bridgman. If I may, I have a few questions.
Question 1:
Following your link I went to the SDSS DR7 (Data Release 7) page. The "About DR7" page has this to say about the redshift data:
QUOTE
============================================
The following caveats apply unchanged to DR7
Red leak to the u filter and very red objects
The u filter has a natural red leak around 7100 Å which is supposed to be blocked by an interference coating. However, under the vacuum in the camera, the wavelength cutoff of the interference coating has shifted redward (see the discussion in the EDR paper), allowing some of this red leak through. The extent of this contamination is different for each camera column. It is not completely clear if the effect is deterministic; there is some evidence that it is variable from one run to another with very similar conditions in a given camera column. Roughly speaking, however, this is a 0.02 magnitude effect in the u magnitudes for mid-K stars (and galaxies of similar color), increasing to 0.06 magnitude for M0 stars (r-i ~ 0.5), 0.2 magnitude at r-i ~ 1.2, and 0.3 magnitude at r-i = 1.5. There is a large dispersion in the red leak for the redder stars, caused by three effects:
The differences in the detailed red leak response from column to column, beating with the complex red spectra of these objects.
The almost certain time variability of the red leak.
The red-leak images on the u chips are out of focus and are not centered at the same place as the u image because of lateral color in the optics and differential refraction - this means that the fraction of the red-leak flux recovered by the PSF fitting depends on the amount of centroid displacement.
To make matters even more complicated, this is a detector effect. This means that it is not the real i and z which drive the excess, but the instrumental colors (i.e., including the effects of atmospheric extinction), so the leak is worse at high airmass, when the true ultraviolet flux is heavily absorbed but the infrared flux is relatively unaffected. Given these complications, we cannot recommend a specific correction to the u-band magnitudes of red stars, and warn the user of these data about over-interpreting results on colors involving the u band for stars later than K.
==========================================
LINK http://www.sdss.org/dr7/start/aboutdr7.html
==========================================
So I'm wondering how I can get good redshift data from a damaged camera that doesn't record redshift correctly. If I've gone to the wrong area on the SDSS site please let me know.
Question 2:
The interface to the query engine is rather confusing to me. For example, I saw no way to query for "perseus cluster", or the observation IDs NASA provides in their press release. Could you provide the steps I'd need to take to query the redshift data for, say, Stephan's Quintet?
Question 3:
I was eventually able to download a FITS file for some area of the sky. The file is not human-readable. Can you point me to a tool that can read this file, or instructions for downloading a human-readable format rather than FITS? To see what I mean, a sample of the FITS data is provided below.
=========================================
FITS DATA SAMPLE
=========================================
êA5$≈)5›f9]E@π/‚Bπ)À»
You're on a steep learning curve, magicjava!
Not only would you need to come up to speed quickly on various aspects of astronomy (the observational science), but also on SDSS!
I'll have a go at answering your three questions, briefly.
Q1: SDSS can be thought of as a two-stage project: imaging+photometry, followed by spectroscopy.
In the first stage, the sky is imaged through five filters, objects identified, and said objects characterised photometrically. For 'point sources' (stars, most quasars, some galaxies), this means estimating the object's brightness through the five filters and calculating its colours ('colour' in astronomy is just the difference in brightness in a pair of broadband regions). For extended objects (i.e. not point sources), photometric characterisation includes estimating colours and the degree to which the light is centrally concentrated (it's more complicated than this, of course). Getting estimates of the colours is important, because a) there are far more objects that can be detected photometrically than the telescope has time to observe spectroscopically, and b) the main objective of the original SDSS was to survey galaxies and quasars (so the spectroscopy should be reserved for those objects, not the vastly more numerous stars). The estimated colours allowed the SDSS software to select quasar and galaxy candidates with high confidence.
In the second stage, spectroscopy, the filters played no part ... so the worst effect of a leaky filter would be some greater uncertainties about the completeness estimates of the survey (e.g. some 'real' quasars were not observed spectroscopically, because they were incorrectly classified as stars).
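To make the 'colour' idea above concrete, here is a tiny sketch; the magnitudes are made-up numbers, not SDSS measurements (and the "z" key below is the z filter band, not redshift):
=========================================
PYTHON SKETCH
=========================================
# Colour indices are just magnitude differences between filter bands.
# These magnitudes are illustrative placeholders; "z" here is the z band,
# not redshift.
mags = {"u": 19.2, "g": 17.8, "r": 17.1, "i": 16.8, "z": 16.6}

u_minus_g = mags["u"] - mags["g"]
g_minus_r = mags["g"] - mags["r"]
print(f"u-g = {u_minus_g:.2f}, g-r = {g_minus_r:.2f}")
=========================================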
Q2: SDSS observed galaxies, not clusters; the redshift of a cluster is some kind of weighted average of the redshifts of the galaxies in that cluster. In the x-ray region, individual galaxies tend to be rather weak sources, while rich clusters (such as Perseus) are strong ... so Chandra observations can be used to make estimates of the redshift of a whole cluster, in one go.
For Stephan's Quintet, you'd need the IDs of the individual galaxies (such as NGC 7317, NGC 7318, NGC 7319, and so on), or its sky coordinates (RA and Dec, think longitude and latitude).
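To illustrate the 'weighted average' point, here is a minimal sketch; the member redshifts and weights are illustrative only, not the actual SDSS or Chandra procedure:
=========================================
PYTHON SKETCH
=========================================
# Minimal sketch: a cluster redshift as a weighted mean of member-galaxy
# redshifts. The values and weights are illustrative placeholders.
import numpy as np

z_members = np.array([0.0175, 0.0183, 0.0179, 0.0181])  # hypothetical member redshifts
weights = np.array([1.0, 0.8, 1.2, 0.9])                # e.g. inverse-variance weights (hypothetical)

z_cluster = np.average(z_members, weights=weights)
print(f"cluster redshift ~ {z_cluster:.4f}")
=========================================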
Q3: FITS stands for Flexible Image Transport System; here's a webpage that gives a brief overview:
http://fits.gsfc.nasa.gov/fits_intro.html
There are a number of apps which convert FITS data to images, e.g. FITS Liberator:
http://www.spacetelescope.org/projects/fits_liberator/
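If you would rather inspect a FITS file programmatically than convert it to an image, here is a minimal sketch with astropy.io.fits; the filename is a placeholder for whatever you downloaded:
=========================================
PYTHON SKETCH
=========================================
# Minimal sketch: peek inside a FITS file with astropy.
# "example.fits" is a placeholder filename.
from astropy.io import fits

with fits.open("example.fits") as hdul:
    hdul.info()                # list the header/data units (HDUs)
    header = hdul[0].header    # human-readable keyword/value pairs
    print(repr(header))
    data = hdul[0].data        # the binary payload as a numpy array (may be None)
    if data is not None:
        print(data.shape, data.dtype)
=========================================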
I hope this helps.
Well that's a bit different from what I was expecting. I was expecting just to pull pre-calculated redshift data from a database.
But I don't mind giving this a try. Wish me luck. :)
I don't know how familiar you are with the various bits of IT/computing/programming/etc that you'd need to do what you want, but I think this SDSS page would be a good place to start:
http://www.sdss.org/dr7/products/spectra/index.html
In short, submitting queries to the Catalog Archive Server (CAS) and/or the Data Archive Server (DAS) would seem to be the way to go. Of course, you'll need some familiarity with the data structure, and the key caveats on the data itself ...
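As an illustration of the CAS route, here is a hedged sketch that submits a SQL query for spectroscopic redshifts near Stephan's Quintet (RA ≈ 339.0°, Dec ≈ +33.96°). The endpoint URL, table, and column names follow my reading of the DR7 documentation and should be treated as assumptions, not a tested recipe:
=========================================
PYTHON SKETCH
=========================================
# Hedged sketch: query the DR7 SkyServer for redshifts near Stephan's Quintet.
# The endpoint, table, and column names are assumptions based on DR7-era docs.
import requests

sql = """
SELECT TOP 20 ra, dec, z, zErr
FROM SpecObj
WHERE ra BETWEEN 338.9 AND 339.1
  AND dec BETWEEN 33.9 AND 34.1
"""

resp = requests.get(
    "http://cas.sdss.org/dr7/en/tools/search/x_sql.asp",  # assumed DR7 SQL endpoint
    params={"cmd": sql, "format": "csv"},
)
print(resp.text)
=========================================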
It's not the technical stuff so much as that when Dr. Bridgman said "public data", I had in mind data meant for John Q. Public. This clearly isn't that. John Q. Public just gets the press release information, which should be ignored due to its low quality.
It's just not what I expected.
One should not use press releases as a primary source of data (unless you're studying press releases).
I took your data table and used Ned Wright's cosmology calculator
http://www.astro.ucla.edu/~wright/cosmo_02.htm
and most of the disagreements weren't that bad. Some could be chalked up to rounding errors.
Public data doesn't mean it's easy to use. FITS files include a lot of 'metadata' on conditions of the instruments, etc. If an instrument is having a severe problem, sometimes you just drop the data from it (and weight the remaining data for selection effects). Sometimes others find a way to fix the calibration.
For an intro in using the sky data, you might want to try the 'SkyServer'
http://cas.sdss.org/dr7/en/
which has a number of projects. There were also some tutorials on retrieving SDSS data as tables, which I used about a year ago, but I have yet to get back to that project.
Another favorite data source is
http://cdsarc.u-strasbg.fr/cats/cats.html
but these are lists from research papers, and you have to understand the details of how they are constructed to determine anything useful. They used to have a set of catalogs geared towards amateur astronomers, but I can't find it anymore.
With all due respect sir, I think you mean one shouldn't use NASA press releases as a primary source of data. If I read in my local paper that my favorite running back averages 4.3 yards per carry, I'm pretty sure that what that means is my favorite running back averages 4.3 yards per carry. And I'm pretty sure it _doesn't_ mean I should feed 4.3 into Professor Smith's Running Back Calculator to get a "pretty good" idea of what the yards per carry may be.
Similarly, if an accountant made press releases of that quality about the financial health of their company, they would be fired.
I understand that what NASA does is just a tad bit more difficult than calculating yards per carry. But you have to do those calculations anyway, regardless of whether or not there's a press release. You have the correct information. There's no excuse for not providing it to the public.
I'm not trying to give you a hard time and I really do appreciate you taking time to talk with me, but I think in this case you're trying to defend something that shouldn't be defended. It should be corrected.
BTW, I also understand that the data for astronomy changes over time, thereby invalidating past information. Accountants face this same situation where new laws can make accounting information obsolete. When an event like that occurs, they note it in their press release so that interested parties know to make appropriate changes to the older data.
If I can't put credence in what NASA says about something as basic as redshift, how can I put credence in what they say about Quintessence, Neutronium, Phantom Energy, Dark Matter, Quark Matter, 11 Dimensions of Spacetime, Multiple Universes, a Big Bang whose initial configuration has only a 1 in 10^10^123 chance of actually occurring, Branes, Supersymmetrical Particles that have never been observed, and on and on and on.
I know you spend time arguing with Creationists, and I have to tell you, speaking as an Atheist, the things coming out of NASA sound pretty darn speculative. And when the basic information is wrong, believing that all the sensational stuff is correct is pretty difficult.
NASA press releases are written to be understood by a person with an 8th grade education - a rather depressing thought. In the process much gets lost - including numeric precision - often in exchange for hyperbole.
As to your laundry list of doubt, note that some of these are still hypotheses. When someone reports a test of one of these ideas that might be possible with NASA equipment, they can propose for instrument time. That can be worthy of a press release - but it generally doesn't make the theory 'confirmed'. I'll respond to two items in your laundry list, as I have loads of notes for a possible blog post on them.
Neutronium: Do you know what properties neutrons have in common with electrons? Do you know there is an electron state, whose effects we have measured in the laboratory, that is the electron equivalent of neutronium? Neutronium is actually a very basic prediction of quantum mechanics, the same science that makes the semiconductor electronics in your computer possible. Quantum mechanics predicts a number of 'exotic' states of matter, several of which have been found in the lab, though it took decades after they were first predicted.
Dark Matter: a generic term which will change when we discover the real cause. Most likely it is a subatomic particle below our current detection threshold. With the accumulating indirect evidence for its existence, betting against it would be equivalent to betting against the existence of the neutrino in 1940.
I'd certainly love to read your post on Neutronium, so feel free to stop by and drop a line when it's up.
As to dark matter, yes, I saw on your web site that you generally consider it to be normal matter that's not shining brightly enough to see through our telescopes. Sounds sensible to me.
However, most of what you hear in the press releases and even on wikipedia ( http://en.wikipedia.org/wiki/Dark_matter ) is that it's some strange form of nonbaryonic matter that doesn't interact with electromagnetic radiation. So, if you want, I _will_ bet you a dollar that form of dark matter doesn't exist.
P.S.
And just to be clear, I'll even let you count Neutrinos. If it's Neutrinos causing the gravitational anomaly, you win a dollar.
Neutrinos were once *the* popular dark matter candidate but are today just a portion of the 'missing matter' and no longer 'missing'.
For the latest & greatest on particle news and cosmological impact, see summaries at:
http://pdg.lbl.gov/2009/reviews/contents_sports.html
BTW, neutrinos are nonbaryonic and don't interact electromagnetically.
Considering that the classes of particles we know can be assembled in a hierarchy based on the forces they 'see':
1) strong, electromagnetic, weak, gravitational (quarks)
2) electromagnetic, weak, gravitational (electrons, muons, tau)
3) weak, gravitational (neutrinos)
It would be pretty arrogant to assume there is not a
4) gravitational (dark matter)
Historically, Nature has surprised us more than once in having more particles types than we thought we needed.
Could be, but I'll still offer you the bet if you want to take it. *waves dollar*
BTW, I'm trying to find reliable information on the relationship between entropy and plasma. Would you happen to know of any good sources for that?
P.S.
To be a bit more specific, I'm looking for things that answer questions like:
*) How can I look at a plasma system, such as a Birkeland Current, a Double Layer, or a Magnetohydrodynamic system, and tell a low-entropy system from a high-entropy system?
*) How stable are the various types of plasma configurations over time (seconds, years, millions of years, etc.)?
Unfortunately, the methods of examining systems via entropy seem to be poorly documented outside physical chemistry. A number of stellar nucleosynthesis and supernova researchers (such as the late Hans Bethe) used the methods well. I've tried to find good sources on this technique, but even my professors who worked with Bethe didn't have a good resource. It appears that entropy is calculated at each timestep for complex systems.
See Bowers & Wilson,
"Numerical Modeling in Applied Physics and Astrophysics", 1991
I have some extensive resources on plasma physics linked off my blog in the Electric Universe topics. You might find them useful.
Thank you Dr. Bridgman. I'm currently plowing through "Plasma Physics for Astrophysics" from Princeton Press. It has a discussion of entropy in the section on the Braginskii equations. I'll be sure to give the links you mentioned a read as well. :)