How big is the universe? How far away are distant galaxies? What is the shape of our universe, and how is it changing over deep time? These are all key questions in the field of physics known as cosmology, which has fascinated humanity from the first moments we looked up at the stars and wondered what those beautiful little lights are.
Ultimately, cosmology is about understanding our place in the universe.
When we look at our gorgeous night sky, we can count about 9,000 stars (combining north and south hemisphere views) with the naked eye. We now know, however, that there are at least 170 billion galaxies in the universe, with an average of perhaps 300 to 400 billion stars in each galaxy. This gives a massive total of roughly 59.5 × 10^21, or about 60 sextillion, stars in the observable universe. That’s a big number.
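As a sanity check, the arithmetic can be sketched in a few lines of Python, using the rough galaxy and star counts quoted above:

```python
# Back-of-the-envelope star count for the observable universe.
# Both inputs are rough estimates: ~170 billion galaxies, with an
# assumed average of ~350 billion stars per galaxy.
galaxies = 170e9
stars_per_galaxy = 350e9

total_stars = galaxies * stars_per_galaxy
print(f"Total stars: {total_stars:.2e}")  # on the order of 6 x 10**22
```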
With this massive diversity of suns to observe, contemplate and perhaps one day even explore, we can’t help but wonder how far away these distant suns are. And the obvious problem is that we can’t just go to them and measure our distance traveled in doing so. We have to use the evidence available in the light from these distant objects to make our best judgments about distance.
A “standard model” of cosmology has emerged in the past 20 years, based in part on the 1998 discovery by two different teams of American cosmologists that the universe is not only expanding but actually accelerating in its expansion. The standard model uses as its basis Hubble’s Law, which correlates the distance of galaxies with the observed redshift of those galaxies, as well as a number of other categories of data to cross-check distances derived from Hubble redshift.
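Hubble's Law can be made concrete with a toy calculation. The sketch below assumes a round value of H0 ≈ 70 km/s/Mpc and the low-redshift approximation v ≈ cz; it is an illustration, not the full relativistic treatment used in real cosmological fits:

```python
# Toy Hubble's Law: recession velocity v = H0 * d, so d = v / H0.
# For small redshift z, v is approximately c * z.
H0 = 70.0       # Hubble constant in km/s per megaparsec (illustrative value)
c = 299792.458  # speed of light in km/s

def distance_from_redshift(z):
    """Approximate distance in megaparsecs for a small redshift z."""
    return c * z / H0

d = distance_from_redshift(0.01)  # roughly 43 Mpc
```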
As well-established as the standard model is currently, there are still many remaining questions, some of which we discuss below. In conducting my own research, both for a book that touches on these issues and for the interview below, I can’t shake the feeling that our knowledge of the cosmos is still fairly nascent, a feeling I also had a few years ago when I interviewed Harvard astronomer Bob Kirshner here.
Andy Howell is a staff scientist at the Las Cumbres Observatory Global Telescope Network and adjunct faculty in physics at the University of California Santa Barbara. He has focused on the use of supernovae for measuring cosmic distances and is a key contributor to this growing field. He also studies dark energy and its role in explaining the shape of our universe.
Andy helps me pick through the structure of the standard model in our interview below, focusing on perhaps its key component: the “cosmic distance ladder.” How do we know how far other stars and galaxies are from us? The cosmic distance ladder is our effort to answer this question. It relies on parallax (using the motion of the Earth in its orbit to literally triangulate the distance of nearby stars); Cepheid variable stars, whose fairly regular periodicities give us the distance to nearby galaxies; and supernova explosions, along with many other data sets, for determining even longer distances. We focus below on the science of supernovae and how they can help to increase the certainty of our current cosmic distance ladder.
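The parallax rung of the ladder reduces to a one-line formula: a star whose apparent position shifts by a parallax angle of p arcseconds lies at 1/p parsecs (that is the definition of the parsec). A minimal sketch, using Proxima Centauri's measured parallax as an example:

```python
# Parallax distance: d (parsecs) = 1 / p (arcseconds).
def parallax_distance_pc(p_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds."""
    return 1.0 / p_arcsec

# Proxima Centauri's parallax is about 0.768 arcseconds:
d = parallax_distance_pc(0.768)  # about 1.3 parsecs
```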
We also delve a little into the philosophy of science even though this isn’t Andy’s forte, as he acknowledges below. A large part of my interest in studying cosmology and other sciences is to learn more about how science is really done. A recent New York Times piece by philosopher of science James Blachowicz argued that there isn’t much method behind the scientific method, echoing long-standing critiques like Paul Feyerabend’s Against Method.
Most scientists would strongly disagree with this, but Blachowicz’s point is that how science is really done is far more messy than most scientists would acknowledge. He also makes the valid point that very few scientists study how science is done across different fields. So while any scientist can draw conclusions, of course, about how science is done in their own field, the study of how science is done across different fields is in fact the purview of the philosophy of science. My feeling is that scientists in all fields would generally benefit from a better understanding of the philosophy of science because of its broader purview.
I interviewed Andy by email after meeting him at the wonderful and entertaining Astronomy on Tap public educational events held once a month at the Matrix club in downtown Santa Barbara to share the latest astronomical and cosmological knowledge with the public.
Tam Hunt: What is the state of cosmology today? What are the most exciting developments here or on the horizon?
Andy Howell: We now know there is something making the universe accelerate. We don't know what that is, so we call it "Dark Energy" by analogy with "Dark Matter," though the two are unrelated. We're now measuring the properties of Dark Energy, and we can say that the preliminary results look like it matches what Albert Einstein called the Cosmological Constant. That means the Dark Energy is a property of the vacuum of space. And strangely, the Dark Energy doesn't get diluted as the universe expands, like matter does. The next exciting thing is the launch of a satellite to measure the properties of Dark Energy. That will allow us to see whether it has evolved in time or really is a constant.
TH: In terms of understanding the shape of our universe and things like accelerating expansion, supernovae have been key in understanding cosmic distances. What's the basic approach for using supernovae to judge cosmic distances and to figure out the ongoing evolution of our universe?
AH: Supernovae are something like "standard candles" — they have about the same intrinsic luminosity, so you can measure their observed brightness and from that determine a distance to them. It is like measuring the brightness of a 100-watt lightbulb — from that you can work out how far away it is. Supernovae aren't perfectly standard though, so we actually calibrate their brightness based on other properties they have. To do all of this, you measure supernovae very accurately over the course of a couple months using digital cameras, observing the supernovae through filters. You also split the light into a spectrum, like a rainbow, to measure the composition of each supernova.
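The lightbulb reasoning is just the inverse-square law: flux falls off as 1/d², so a known intrinsic luminosity plus a measured flux gives the distance. A minimal sketch with made-up numbers for illustration:

```python
import math

# Standard-candle distance from the inverse-square law:
# F = L / (4 * pi * d**2)  =>  d = sqrt(L / (4 * pi * F))
def candle_distance(luminosity, flux):
    """Distance implied by an intrinsic luminosity and a measured flux."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

# A 100 W bulb whose light is measured at the flux it would have
# at 100 m comes out at 100 m, as expected:
flux_at_100m = 100.0 / (4 * math.pi * 100.0**2)
d = candle_distance(100.0, flux_at_100m)
```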
TH: What are the main theories about how supernovae form? Can we expect other types of supernovae to be added to this list in the coming years?
AH: The kind of supernovae used in cosmology are the explosions of white dwarf stars, known as type Ia supernovae. Those are the burned out cores of stars like our sun. They are as heavy as the sun, but packed into the size of the Earth. We think they are in a binary system with another star, and the white dwarf steals matter from the second star until it gets too massive, and this causes it to explode. There are many other kinds of supernovae, the most famous of which comes from a massive star that collapses into a black hole. Depending on how you count, there are as few as a half-dozen or as many as maybe two dozen types of supernovae. We're finding new kinds all the time.
TH: With an increasing number of theories about how supernovae originate being developed in recent years, how confident should we be in the current accelerating universe cosmology?
AH: As hard as it is to believe, the origins of the supernovae and their use in cosmology are largely disconnected. That's because we just use the fact that they are nearly standard candles empirically. And we actually compute relative distances. It is like if you had two lights in the distance that you knew were the same wattage, but didn't know exactly what wattage. You could tell if, say, one bulb was twice as far away as the other. It doesn't matter if you know how the bulb was manufactured.
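Andy's two-lightbulb point can be made concrete: for two candles of equal (if unknown) luminosity, the inverse-square law gives the distance ratio directly from the flux ratio, with no absolute wattage needed. A toy sketch:

```python
import math

# Relative distances from relative brightness: if two sources have the
# same (unknown) luminosity, then F1/F2 = (d2/d1)**2, so
# d2/d1 = sqrt(F1/F2). The absolute luminosity cancels out.
def distance_ratio(flux_1, flux_2):
    """How many times farther away source 2 is than source 1."""
    return math.sqrt(flux_1 / flux_2)

# A bulb appearing four times fainter is twice as far away:
r = distance_ratio(4.0, 1.0)  # 2.0
```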
We're very confident in the properties of the universe derived from supernovae, because they have been verified by many different groups using different techniques, both with and without supernovae.
TH: In terms of supernovae being standard candles, can you clarify how we know they're standard candles if we can't confirm the particular type of supernovae involved in each case?
AH: You do have to know the type of supernovae involved — they have to be Type Ia supernovae, i.e., the kind without hydrogen, but with strong silicon in their spectrum. Basically, for those types of supernovae, in cases where we know the distance, they have a tight distribution of luminosity, and this luminosity correlates with other variables like the color of the supernova and how long it takes the light to fade. You can use these correlations to make them better standard candles. We are quite sure they are the thermonuclear destruction of white dwarfs, but that's beside the point. Theoretically, it doesn't matter what they are, as it doesn't change the correlations. In principle, knowing that some of them are slightly different could improve their use as standard candles, as it might make the correlation tighter.
TH: Staying for a moment with the standard candle concept, aren’t there different types of Type Ia supernovae now? The Chandrasekhar limit that was thought to be the theoretical basis for the standard nature of Ia has been found to be inaccurate in many cases. For example, the new category of SNe Iax stars have lower luminosity than normal Ia stars. And some recent work has shown that even standard SNe Ia don’t have the same intrinsic light curves, color or luminosity. Given these new data about SNe Ia how can we be confident that our notions of absolute brightness, and thus distance, aren’t wide of the mark?
AH: Actually, even that critique is wrong, though it is a misconception that is common even among professional astronomers. Nobody ever did a calculation regarding the Chandrasekhar limit and then said, “Aha, this means they should all be at this luminosity.” It was just a form of reasoning that some astronomers outside the field used to help themselves feel better after the fact about SNe Ia being used empirically. “Well that makes sense because they are ‘standard bombs’ — they all have the same yield.” In actuality, supernovae are powered by radioactive nickel, and the amount of nickel they produce varies by a factor of 10. Even if they explode at the Chandrasekhar mass (which is a big if), there are other factors that affect their “yield.” You can just see this empirically without understanding why. But it doesn’t matter because you can correct for it. The bright ones take a long time to fade, and are bluer, so you can correct for their different intrinsic brightnesses.
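The width-luminosity correction Andy describes (brighter SNe Ia fade more slowly) is often parameterized as a linear relation between the decline rate over the first 15 days and the peak absolute magnitude. The sketch below uses illustrative placeholder coefficients, not a published calibration:

```python
# Toy light-curve-shape correction for SNe Ia, in the spirit of the
# Phillips relation: the 15-day decline in magnitudes (dm15) predicts
# the peak absolute magnitude. M0, slope and dm15_ref are illustrative
# placeholders, not fitted values.
def corrected_peak_magnitude(dm15, M0=-19.3, slope=0.7, dm15_ref=1.1):
    """Estimated peak absolute magnitude given a measured decline rate."""
    return M0 + slope * (dm15 - dm15_ref)

# A faster-declining supernova is inferred to be intrinsically fainter
# (remember: larger magnitude = fainter):
M_fast = corrected_peak_magnitude(1.5)
M_slow = corrected_peak_magnitude(0.9)
```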
And yes, we think these so-called Iax supernovae are, in fact, duds that don’t completely blow up the white dwarf. But they look very different from normal Type Ia supernovae, so you can always tell them apart.
The new research you allude to (Mandel et al. 2011) was actually saying that when you add infrared data and a fancy theoretical framework, it can give you more information. More information makes them better, not worse, as cosmological tools. Imagine you are using people’s heights as standard rulers. If you measure someone’s angular size, and know how tall they are, you can estimate the distance to them. Let’s say you saw someone standing far away and you couldn’t make out the gender, age or nationality of the person. If you just assume they are an average adult height, you can get an approximate distance. But here’s the real trick: you know how wrong you’ll be, because the standard deviation of people’s heights tells you the error on your distance.
But now let’s assume you can figure out that the person is an adult male Swede. Their average height is on the taller side, so that changes your answer, but it is still within the range of error of your first estimate. Now, armed with this new information, you are more certain about the height of the person, so you have a tighter error on your distance. So the more we find out about supernovae, the better our distance estimates get. They rarely invalidate our original estimates because we correctly estimated our errors in the first place. We used the actual spread in luminosities to estimate the errors, which is equivalent to the standard deviation of heights.
TH: Maybe I’m being obtuse here, but I’m still not seeing how you can know with confidence the intrinsic brightness of these supernovae, and thus their distance, even in a relative sense. You write above that you can correct for intrinsic brightness given different light curves among a heterogeneous population of SNe Ia. But how do you know what corrections to make without knowing the absolute brightness? Your metaphor using humans is helpful, but it works because we know something about the “intrinsic height” of humans, from direct experience that we can measure up close. It seems that you’re agreeing that there’s actually nothing standard about “standard candles,” so if that’s the case, what is the fundamental basis for using them as standard candles even for measuring relative distance?
AH: It isn’t that there’s nothing standard, it is that we use what we know about how their colors and light curve shape relate to their brightness to make them more standard. Some people call them standardizable candles instead of standard candles.
As far as how relative distances can help you, imagine you are going to build a slide for kids in the backyard, and you have some plans, some wood and a ruler. You build the slide and kids use it, but then you find out your ruler was too large by 5 percent. The slide still has the same curve to it — it is just scaled differently than you thought it was. That slide is like the curve of the model we fit to data in the distance vs. redshift that allows us to measure the properties of Dark Energy. If it bends a certain way, it tells us the Dark Energy has certain properties. But it doesn’t matter if the absolute size of it is off.
TH: You mention above that we know the distance to some Type Ia supernovae, and that allows us to calculate the relative distances of more distant supernovae. Can you explain how we know the distance to some Type Ia supernovae? Are you referring to the role of Cepheid variables or additional factors also? If we look at Cepheids, Riess et al. 2016 doubled the number of reliable Cepheid measurements for SNe Ia galaxy hosts to just 18, which still seems a very small number of data points to impart much confidence in the general cosmic distance ladder.
AH: There are some areas where absolute distance does matter, and that is in determining the Hubble constant. To measure things like that, we want to have supernovae in the same galaxy as some other distance indicator, like a Cepheid variable or a maser. Yes, 18 may be a small number, but these distances can be measured very precisely. And there are other ways of determining the Hubble constant, too, and they (mostly) agree.
TH: In terms of verifying cosmological results without using supernovae, can you elaborate on how this is done?
AH: Look at this graph here to see a good visual depiction of three major sources of data that all cross over and at least partially confirm each other. It isn't state of the art anymore, but it makes the point. Constraints on mass are along the horizontal axis, and dark energy on the vertical axis. The blue shows constraints from supernovae, the green from the cosmic microwave background and the orange from baryon acoustic oscillations. The gray shows the overlap of all three. But if you took away the supernova results, the other two would still agree in the gray region. There are other techniques not plotted as well: weak lensing, measuring the Hubble constant with Cepheids or strong lensing, and measuring clusters of galaxies. There are enough independent indicators that the problem is over-constrained.
TH: Some recent results from the Planck Collaboration (a 2015 paper states: "Both tensions drive the Bennett et al. (2014) value of [the Hubble constant] away from the Planck solution”) suggest that the Hubble constant value as measured from the cosmic microwave background is diverging further, as new data comes in, from the value derived from supernova data. Does this increasing divergence suggest that our current cosmological models may need to be revised ("new physics")? Or is it something less serious?
AH: The latest results on the most accurate value of the Hubble constant are from Riess et al.’s major 2016 paper, “A 2.4% Determination of the Local Value of the Hubble Constant.” A good rule of thumb is that if two results by different authors don't exactly agree, but they are really close, it probably comes down to them not estimating their errors correctly. You can see some indication of this: when the Riess paper uses different combinations of experiments, it gets different answers. That's almost certainly what it is; no need to jump to new physics just yet. They do outline possible new physics in the paper though, just to cover their bases. I think it is in the “fun to think about but odds are against it” category.
TH: Some respected astronomers have suggested that we may not have the right theory behind redshift, which is the basis for the accelerating universe cosmological model in terms of viewing high redshift supernovae as being more distant. William Tifft, for example, has for decades maintained that redshift is quantized and thus the normal Hubble constant explanation is incomplete. This is known as the Lehto-Tifft redshift quantization model (Tifft 2003). What is your view on the quantized redshift idea?
AH: That model was developed in the '70s when data was quite limited. It didn’t stand up to further scrutiny — today we have millions of redshifts, and there is no quantization. It has been thoroughly discredited.
TH: More controversially, have you reviewed Halton Arp’s work on redshift anomalies, which he produced over the span of four decades, from the 1970s until the first decade of the 21st century? Was he off-base, or did he indeed find some strange anomalies for the high redshift = farther away assumption of Hubble’s Law?
AH: Arp was wildly wrong. He came up with his ideas in the '60s, before galaxy evolution, active galactic nuclei and the expansion history of the universe were fully understood. Again, his ideas have been thoroughly discredited. He pointed to some galaxies that looked like they were interacting but had different distances. Today we know that these are actually just superposed, unrelated galaxies, and it happens all the time by chance. And every galaxy has undergone tons of mergers, so it isn’t even surprising when two superimposed galaxies look like they have interacted recently. They did, just not with each other. Today, we have exquisite statistics on millions of galaxies and there’s nothing fishy going on. Now that he’s dead, I’m not sure there is a single legitimate active astronomer who believes any of that.
TH: Turning to the effects of dust, which is an acknowledged source of uncertainty with respect to accurate measurements of SNe Ia, have astronomers figured out how to adjust for this effect? The same Mandel 2011 paper mentioned above creates a Bayesian model using near infrared data to reduce uncertainty, and they have followed up with more recent work on this, but do problems remain in terms of adjusting for dust reddening, particularly with respect to pink dust?
AH: Dust is always a problem, but we take several different approaches: (1) Go to the infrared where it doesn’t much affect things. (2) Don’t just assume the dust is like Milky Way dust — measure its properties and correct for the reddening. (3) Eliminate the most reddened supernovae so that if you are wrong you aren’t wrong by much. (4) Estimate your errors correctly. Then you know how much you might be off by. (5) Use different techniques for determining distances and make sure they agree. (6) Use supernovae in galaxies that have little or no dust and make sure they agree with the ones in dusty galaxies.
In the end, after using all these approaches, the results had better hang together, and they pretty much do.
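Approach (2) above, measuring and correcting for reddening, can be sketched with the standard extinction parameterization. The numbers here are illustrative, and R_V = 3.1 is the typical Milky Way value used only as a default, since as Andy notes one should not simply assume host-galaxy dust behaves like Milky Way dust:

```python
# Toy dust correction: a color excess E(B-V), together with a
# total-to-selective extinction ratio R_V, gives the V-band dimming
# A_V = R_V * E(B-V) in magnitudes. R_V ~ 3.1 is the typical Milky Way
# value; for supernova hosts it should be measured, not assumed.
def dedust_magnitude(m_observed, ebv, R_V=3.1):
    """Observed magnitude corrected for dust extinction (smaller = brighter)."""
    A_V = R_V * ebv
    return m_observed - A_V

# A supernova observed at magnitude 18.0 behind E(B-V) = 0.1 of dust
# is intrinsically about 0.31 magnitudes brighter:
m_corrected = dedust_magnitude(18.0, 0.1)
```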
TH: Expanding my focus a bit to look at the philosophy of science, are you more of a Kuhnian or Popperian with respect to how science uncovers truth? Or is the philosophy of science not of concern for you?
AH: I like the Feynman quote: “Philosophy of science is about as useful to scientists as ornithology is to birds.” Maybe it’s a little harsh, but I think that in general, when you say science works like X or science works like Y and you get into camps, you are just creating artificial categories by looking at only pieces of the whole picture. Now, I do care very much about the process of science: finding ways to do more blinded studies, share information and improve diversity in science, for example.
TH: Sean Carroll has a new book out called The Big Picture, and it looks at the worldview that modern cosmology leads us to. He also describes his own philosophy of “poetic naturalism.” Do you agree with Carroll’s approach in terms of attempting to find meaning or spiritual solace in modern science? Or do you follow a different approach?
AH: I haven’t read the book, but I’ve seen Sean talk about the subject recently. I’m careful about the use of the word spiritual, but I’d imagine that the sense of awe and wonder I feel about the universe is the same that a religious person feels — i.e., it is the same brain process, we just attribute it to different things. And of course there’s meaning in science. My dad made a full recovery from a heart attack because they chilled his body for 24 hours and slowly thawed him out. That’s astounding, and it makes a real everyday difference in my life. I just flew to Athens in a giant metal tube and interacted with people there in a way the ancient Greeks could never have imagined. And now we’re figuring out some of the real answers to the questions about the way the universe operates that they posed. That’s pretty profound.
— Tam Hunt is a lawyer and owner of Community Renewable Solutions LLC, a renewable energy project development and policy advocacy firm based in Santa Barbara and in Hilo, Hawaii; co-founder of Solar Trains LLC; and author of the new book, Solar: Why Our Energy Future Is So Bright. The opinions expressed are his own.