Another aspect of Planck’s legacy bears examining.
A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).
Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.
Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)
I also understand that the PIs will use a portion of their award to create a fund that all members of the collaboration can draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work; those of us who receive a financial portion of the award will be encouraged to contribute to that fund as well (after, unfortunately, working out the tax implications of both receiving the prize and donating it back).
This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)
However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.
This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.
This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.
Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.
And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).
(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)
Planck 2018: the science
So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.
The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.
All together, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10²⁰. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)
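If you want a feel for what a flat ΛCDM universe implies, the model is easy to explore with public tools. Here is a minimal sketch using astropy’s cosmology module; the H₀ value is the Planck number quoted below, while the matter density is my own round number for illustration, not an official Planck product:

```python
from astropy.cosmology import FlatLambdaCDM

# A flat Lambda-CDM cosmology with indicative, Planck-like parameters.
# Om0 ~ 0.315 is a round number for illustration, not an official value.
cosmo = FlatLambdaCDM(H0=67.27, Om0=0.315)

print(cosmo.age(0))                  # age of the Universe today, ~13.8 Gyr
print(cosmo.comoving_distance(1.0))  # comoving distance to redshift z = 1
print(cosmo.Om0 + cosmo.Ode0)        # = 1 by construction: spatially flat
```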
Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again:
(The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.
As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
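To give a flavour of what that high-ℓ approximation looks like, here is a toy band-power likelihood in Python. Everything in it is invented (a made-up model curve, diagonal errors), so treat it as a cartoon of the real machinery rather than anything resembling the Planck pipeline:

```python
import numpy as np

# Toy Gaussian (chi-squared) likelihood over CMB band powers.
# The real analysis uses a full covariance matrix, foreground
# parameters, and a non-Gaussian likelihood at ell < 30.
ell = np.arange(30, 2500)
cl_model = 5000.0 / (1.0 + (ell / 220.0) ** 2)  # stand-in "theory" spectrum
sigma = 0.01 * cl_model                         # made-up diagonal errors
rng = np.random.default_rng(42)
cl_obs = cl_model + sigma * rng.standard_normal(ell.size)  # fake "data"

def log_like(cl_theory):
    """Gaussian approximation to the log-likelihood of a theory spectrum."""
    return -0.5 * np.sum(((cl_obs - cl_theory) / sigma) ** 2)

print(log_like(cl_model))
```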
Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units describe how much faster something is moving away from us, in km/s, for every megaparsec (Mpc) of distance), whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
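The usual back-of-envelope for “how significant?” is to difference the two numbers and divide by their errors added in quadrature, naively treating the measurements as independent Gaussians:

```python
import numpy as np

# Naive "number of sigma" between the two H0 values quoted above.
h0_local, sig_local = 73.52, 1.62    # distance-ladder measurement
h0_planck, sig_planck = 67.27, 0.60  # Planck, within LCDM
tension = (h0_local - h0_planck) / np.hypot(sig_local, sig_planck)
print(f"{tension:.1f} sigma")        # ~3.6 sigma
```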
The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.
(If you’re looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)
Planck 2018: lessons learned
So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).
But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.
Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.
That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.
It was announced this morning that the WMAP team has won the $3 million Breakthrough Prize. Unlike the Nobel Prize, which infamously is only awarded to three people each year, the Breakthrough Prize was awarded to the whole 27-member WMAP team, led by Chuck Bennett, Gary Hinshaw, Norm Jarosik, Lyman Page, and David Spergel, but including everyone through postdocs and grad students who worked on the project. This is great, and I am happy to send my hearty congratulations to all of them (many of whom I know well and am lucky to count as friends).
I actually knew about the prize last week as I was interviewed by Nature for an article about it. Luckily I didn’t have to keep the secret for long. Although I admit to a little envy, it’s hard to argue that the prize wasn’t deserved. WMAP was ideally placed to solidify the current standard model of cosmology, a Universe dominated by dark matter and dark energy, with strong indications that there was a period of cosmological inflation at very early times, which had several important observational consequences. First, it made the geometry of the Universe — as described by Einstein’s theory of general relativity, which links the contents of the Universe with its shape — flat. Second, it generated the tiny initial seeds which eventually grew into the galaxies that we observe in the Universe today (and the stars and planets within them, of course).
By the time WMAP released its first results in 2003, a series of earlier experiments (including MAXIMA and BOOMERanG, which I had the privilege of being part of) had gone much of the way toward this standard model. Indeed, about ten years ago one of my Imperial colleagues, Carlo Contaldi, and I wanted to make that comparison explicit, so we used what were then considered fancy Bayesian sampling techniques to combine the data from balloons and ground-based telescopes (which are collectively known as “sub-orbital” experiments) and compare the results to WMAP. We got a plot like the following (which we never published), showing the main quantity that these CMB experiments measure, called the power spectrum (which I’ve discussed in a little more detail here). The horizontal axis corresponds to the size of structures in the map (actually, its inverse, so smaller is to the right) and the vertical axis to how large the signal is on those scales.
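(Our actual analysis sampled the joint likelihood of all the experiments, but the simplest version of such a combination — inverse-variance weighting of band powers that share a multipole bin — is easy to sketch with invented numbers:)

```python
import numpy as np

# Inverse-variance combination of band powers from several experiments
# in the same multipole bin. The values are invented for illustration;
# the real analysis sampled a joint Bayesian likelihood instead.
power = np.array([5500.0, 5300.0, 5800.0])  # band powers from 3 experiments
sigma = np.array([400.0, 250.0, 600.0])     # their 1-sigma errors

w = 1.0 / sigma**2
combined = np.sum(w * power) / np.sum(w)
combined_sigma = np.sqrt(1.0 / np.sum(w))
print(combined, combined_sigma)  # tighter than any single experiment
```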
As you can see, the suborbital experiments, en masse, had data at least as good as WMAP on most scales except the very largest (leftmost; this is because you really do need a satellite to see the entire sky) and indeed were able to probe smaller scales than WMAP (to the right). Since then, I’ve had the further privilege of being part of the Planck Satellite team, whose work has superseded all of these, giving much more precise measurements over all of these scales:
Am I jealous? Ok, a little bit.
But it’s also true, perhaps for entirely sociological reasons, that the community is more apt to trust results from a single, monolithic, very expensive satellite than an ensemble of results from a heterogeneous set of balloons and telescopes, run on (comparative!) shoestrings. On the other hand, the overall agreement amongst those experiments, and between them and WMAP, is remarkable.
And that agreement remains remarkable, even if much of the effort of the cosmology community is devoted to understanding the small but significant differences that remain, especially between one monolithic and expensive satellite (WMAP) and another (Planck). Indeed, those “real and serious” (to quote myself) differences would be hard to see even if I plotted them on the same graph. But since both are ostensibly measuring exactly the same thing (the CMB sky), any differences — even those much smaller than the error bars — must be accounted for, and almost certainly boil down to differences in the analyses or misunderstanding of each team’s own data. Somewhat more interesting are differences between CMB results and measurements of cosmology from other, very different, methods, but that’s a story for another day.
The first direct detection of gravitational waves was announced in February of 2016 by the LIGO team, after decades of planning, building and refining their beautiful experiment. Since that time, the US-based LIGO has been joined by the European Virgo gravitational wave telescope (and more are planned around the globe).
The first four events that the teams announced were from the spiralling in and eventual mergers of pairs of black holes, with masses ranging from about seven to about forty times the mass of the sun. These masses are perhaps a bit higher than we expect to be typical, which might raise intriguing questions about how such black holes were formed and evolved, although even comparing the results to the predictions is a hard problem depending on the details of the statistical properties of the detectors and the astrophysical models for the evolution of black holes and the stars from which (we think) they formed.
Last week, the teams announced the detection of a very different kind of event, the collision of two neutron stars, each about 1.4 times the mass of the sun. Neutron stars are one possible end state of the evolution of a star, when its atoms are no longer able to withstand the pressure of the gravity trying to force them together. This was first understood by S Chandrasekhar in 1930, who realised that there was a limit to the mass of a star held up simply by the quantum-mechanical repulsion of the electrons at the outskirts of the atoms making up the star. When you surpass this mass, known, appropriately enough, as the Chandrasekhar mass, the star will collapse in upon itself, combining the electrons and protons into neutrons and likely releasing a vast amount of energy in the form of a supernova explosion. After the explosion, the remnant is likely to be a dense ball of neutrons, whose properties are actually determined fairly precisely by similar physics to that of the Chandrasekhar limit (discussed for this case by Oppenheimer, Volkoff and Tolman), giving us the magic 1.4 solar mass number.
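For the dimensionally curious, the scale of that limit is set by fundamental constants alone. Up to an order-unity factor that depends on the star’s composition,

$$ M_{\rm Ch} \sim \frac{1}{m_p^2}\left(\frac{\hbar c}{G}\right)^{3/2} \approx 1.4\, M_\odot, $$

where m_p is the proton mass; it is one of the more striking appearances of quantum mechanics (ħ) on astronomical scales.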
(Last week also coincidentally would have seen Chandrasekhar’s 107th birthday, and Google chose to illustrate their home page with an animation in his honour for the occasion. I was a graduate student at the University of Chicago, where Chandra, as he was known, spent most of his career. Most of us students were far too intimidated to interact with him, although it was always seen as an auspicious occasion when you spotted him around the halls of the Astronomy and Astrophysics Center.)
This process can therefore make a single 1.4 solar-mass neutron star, and we can imagine that in some rare cases we can end up with two neutron stars orbiting one another. Indeed, the fact that LIGO saw one, but only one, such event during its year-and-a-half run allows the teams to constrain how often that happens, albeit with very large error bars, between 320 and 4740 events per cubic gigaparsec per year; a cubic gigaparsec is about 3 billion light-years on each side, so these are rare events indeed. These results and many other scientific inferences from this single amazing observation are reported in the teams’ overview paper.
A series of other papers discuss those results in more detail, covering the physics of neutron stars to limits on departures from Einstein’s theory of gravity (for more on some of these other topics, see this blog, or this story from the NY Times). As a cosmologist, the most exciting of the results were the use of the event as a “standard siren”, an object whose gravitational wave properties are well-enough understood that we can deduce the distance to the object from the LIGO results alone. Although the idea came from Bernard Schutz in 1986, the term “standard siren” was coined somewhat later (by Sean Carroll) in analogy to the more common cosmological standard candles and standard rulers: objects whose intrinsic brightness or size is known, and so whose distances can be measured by observations of their apparent brightness or size, just as you can roughly deduce how far away a light bulb is by how bright it appears, or how far away a familiar object or person is by how big it looks.
Gravitational wave events are standard sirens because our understanding of relativity is good enough that an observation of the shape of the gravitational wave pattern as a function of time can tell us the properties of its source. Knowing that, we also then know the amplitude of that pattern when it was released. Over the time since then, as the gravitational waves have travelled across the Universe toward us, the amplitude has gone down (further objects look dimmer and sound quieter); the expansion of the Universe also causes the frequency of the waves to decrease — this is the cosmological redshift that we observe in the spectra of distant objects’ light.
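Schematically, for a source at luminosity distance d_L and redshift z:

$$ h \propto \frac{1}{d_L}, \qquad f_{\rm obs} = \frac{f_{\rm emitted}}{1+z}, $$

where h is the strain amplitude of the wave. Measuring h, and knowing the source properties from the shape of the waveform, gives the distance directly.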
Unlike LIGO’s previous detections of binary-black-hole mergers, this new observation of a binary-neutron-star merger was also seen in photons: first as a gamma-ray burst, and then as a “nova”: a new dot of light in the sky. Indeed, the observation of the afterglow of the merger by teams of literally thousands of astronomers in gamma and x-rays, optical and infrared light, and in the radio, is one of the more amazing pieces of academic teamwork I have seen.
And these observations allowed the teams to identify the host galaxy of the original neutron stars, and to measure the redshift of its light (the lengthening of the light’s wavelength due to the movement of the galaxy away from us). It is most likely a previously unexceptional galaxy called NGC 4993, with a redshift z=0.009, putting it about 40 megaparsecs away, relatively close on cosmological scales.
But this means that we can measure all of the factors in one of the most celebrated equations in cosmology, Hubble’s law: cz=H₀d, where c is the speed of light, z is the redshift just mentioned, and d is the distance measured from the gravitational wave burst itself. This just leaves H₀, the famous Hubble Constant, giving the current rate of expansion of the Universe, usually measured in kilometres per second per megaparsec. The old-fashioned way to measure this quantity is via the so-called cosmic distance ladder, bootstrapping up from nearby objects of known distance to more distant ones whose properties can only be calibrated by comparison with those more nearby. But errors accumulate in this process and we can be susceptible to the weakest rung on the chain (see recent work by some of my colleagues trying to formalise this process). Alternately, we can use data from cosmic microwave background (CMB) experiments like the Planck Satellite (see here for lots of discussion on this blog); the typical size of the CMB pattern on the sky is something very like a standard ruler. Unfortunately, it, too, needs to be calibrated, implicitly by other aspects of the CMB pattern itself, and so ends up being a somewhat indirect measurement. Currently, the best cosmic-distance-ladder measurement gives something like 73.24 ± 1.74 km/sec/Mpc whereas Planck gives 67.81 ± 0.92 km/sec/Mpc; these numbers disagree by “a few sigma”, enough that it is hard to explain as simply a statistical fluctuation.
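With the rounded numbers quoted above you can already do the naive version of this measurement yourself (ignoring, for the moment, the corrections discussed just below):

```python
# Naive Hubble's-law estimate, H0 = c z / d, from the numbers above.
# This ignores the peculiar-velocity and inclination corrections that
# the real analysis marginalises over.
c = 299792.458    # speed of light, km/s
z = 0.009         # redshift of NGC 4993
d = 40.0          # distance in Mpc, from the gravitational waves
print(c * z / d)  # ~67 km/s/Mpc; the full analysis lands nearer 70
```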
Unfortunately, the new LIGO results do not solve the problem. Because we cannot observe the inclination of the neutron-star binary (i.e., the orientation of its orbit), this blows up the error on the distance to the object, due to the Bayesian marginalisation over this unknown parameter (just as the Planck measurement requires marginalization over all of the other cosmological parameters to fully calibrate the results). Because the host galaxy is relatively nearby, the teams must also account for the fact that the redshift includes the effect not only of the cosmological expansion but also the movement of galaxies with respect to one another due to the pull of gravity on relatively large scales; this so-called peculiar velocity has to be modelled which adds further to the errors.
This procedure gives a final measurement of H₀ = 70.0 km/s/Mpc, with error bars +12.0 and −8.0; the full shape of the probability curve is shown in the Figure, taken directly from the paper. Both the Planck and distance-ladder results are consistent with these rather large error bars. But this is calculated from a single object; as more of these events are seen these error bars will go down, typically by something like the square root of the number of events, so it might not be too long before this is the best way to measure the Hubble Constant.
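The expected improvement is easy to sketch, assuming independent events and idealised 1/√N scaling (which is optimistic, but gives the flavour):

```python
import numpy as np

# Rough scaling of the standard-siren error with the number of events,
# using a symmetrised ~10 km/s/Mpc error for the single event above.
sigma_one_event = 10.0
for n in (1, 10, 100):
    print(n, "events ->", sigma_one_event / np.sqrt(n), "km/s/Mpc")
```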
[Apologies: too long, too technical, and written late at night while trying to get my wonderful not-quite-three-week-old daughter to sleep through the night.]
[Uh oh, this is sort of disastrously long, practically unedited, and a mixture of tutorial- and expert-level text. Good luck. Send corrections.]
It’s been almost exactly a year since the release of the first Planck cosmology results (which I discussed in some depth at the time). On this auspicious anniversary, we in the cosmology community found ourselves with yet more tantalising results to ponder, this time from a ground-based telescope called BICEP2. While Planck’s results were measurements of the temperature of the cosmic microwave background (CMB), this year’s concerned its polarisation.
Background
Polarisation is essentially a headless arrow that can come attached to the photons coming from any direction on the sky — if you’ve worn polarised sunglasses, and noticed how what you see changes as you rotate them around, you’ve seen polarisation. The same physics responsible for the temperature also generates polarisation. But more importantly for these new results, polarisation is a sensitive probe of some of the processes that are normally mixed in, and so hard to distinguish, in the temperature.
Technical aside (you can ignore the details of this paragraph). Actually, it’s a bit more complicated than that: we can think of those headless arrows on the sky as the sum of two separate kinds of patterns. We call the first of these the “E-mode”, and it represents patterns consisting of either radial spikes or circles around a point. The other patterns are called the “B-mode” and look like patterns that swirl around, either to the left or the right. The important difference between them is that the E modes don’t change if you reflect them in a mirror, while the B modes do — we say that they have a handedness, or parity, in somewhat more mathematical terms. I’ve discussed the CMB a lot in the past but can’t do the theory of the CMB justice here; my colleague Wayne Hu has an excellent, if somewhat dated, set of web pages explaining the physics (probably at a physics-major level).
The excitement comes because these B-mode patterns can only arise in a few ways. The most exciting is that they can come from gravitational waves (GWs) in the early Universe. Gravitational waves (sometimes incorrectly called “gravity waves”, a term which historically refers to unrelated phenomena!) are propagating ripples in space-time, predicted in Einstein’s general relativistic theory of gravitation. Because the CMB is generated about 400,000 years after the big bang, it’s only sensitive to gravitational radiation from the early Universe, not astrophysical sources like spiralling neutron stars or black holes — from which we have other, circumstantial, evidence for gravitational waves, and which are the sources for which experiments like LIGO and eLISA will be searching. These early Universe gravitational waves move matter around in a specific way, which in turn induces that specific B-mode polarisation pattern.
In the early Universe, there aren’t a lot of ways to generate gravitational waves. The most important one is inflation, an early period of expansion which blows up a subatomically-sized region by something like a billion-billion-billion times in each direction — inflation seems to be the most well thought-out idea for getting a Universe that looks like the one in which we live, flat (in the sense of Einstein’s relativity and the curvature of space-time), more or less uniform, but with small perturbations to the density that have grown to become the galaxies and clusters of galaxies in the Universe today. Those fluctuations arise because the rapid expansion takes minuscule quantum fluctuations and blows them up to finite size. This is essentially the same physics as the famous Hawking radiation from black holes. The fluctuations that eventually create the galaxies are accompanied by a separate set of fluctuations in the gravitational field itself: these are the ones that become gravitational radiation observable in the CMB. We characterise the background of gravitational radiation through the number r, which stands for the ratio of these two kinds of fluctuations — gravitational radiation divided by the density fluctuations.
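In the standard parametrisation, r is defined as a ratio of the two primordial power spectra, evaluated at some agreed-upon pivot scale k₀:

$$ r \equiv \frac{\mathcal{P}_t(k_0)}{\mathcal{P}_s(k_0)}, $$

with P_t the tensor (gravitational-wave) spectrum and P_s the scalar (density) spectrum.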
Important caveat: there are other ways of producing gravitational radiation in the early Universe, although they don’t necessarily make exactly the same predictions; some of these issues have been discussed by my colleagues in various technical papers (Brandenberger 2011; Hindmarsh et al 2008; Lizarraga et al 2014 — the latter paper from just today!).
However, there are other ways to generate B modes. First, lots of astrophysical objects emit polarised light, and they generally don’t preferentially create E or B patterns. In particular, clouds of gas and dust in our galaxy will generally give us polarised light, and as we’re sitting inside our galaxy, it’s hard to avoid these. Luckily, we’re towards the outskirts of the Milky Way, so there are some clean areas of sky, but it’s hard to be sure that we’re not seeing some such light — and there are very few previous experiments to compare with.
We also know that large masses along the line of sight — clusters of galaxies and even bigger — distort the path of the light and can move those polarisation arrows around. This, in turn, can convert what started out as E into B and vice versa. But we know a lot about that intervening matter, and about the E-mode pattern that we started with, so we have a pretty good handle on this. There are some angular scales over which this lensing effect is larger than the gravitational wave signal, and some over which the gravitational wave signal dominates.
So, if we can observe B-modes, and we are convinced that they are primordial, and that they are not due to lensing or astrophysical sources, and they have the properties expected from inflation, then (and only then!) we have direct evidence for inflation!
Data
Here’s a plot, courtesy the BICEP2 team, with the current state of the data targeting these B modes:
The figure shows the so-called power spectrum of the B-mode data — the horizontal “multipole” axis corresponds to angular sizes (θ) on the sky: very roughly, multipole ℓ ~ 180°/θ. The vertical axis gives the amount of “power” at those scales: it is larger if there are more structures of that particular size. The downward pointing arrows are all upper limits; the error bars labeled BICEP2 and Polarbear are actual detections. The solid red curve is the expected signal from the lensing effect discussed above; the long-dashed red curve is the effect of gravitational radiation (with a particular amplitude), and the short-dashed red curve is the total B-mode signal from the two effects.
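(To convert between the two conventions yourself, the rough rule ℓ ~ 180°/θ is all you need:)

```python
def multipole_to_degrees(ell):
    """Very rough angular scale probed by a multipole: theta ~ 180/ell degrees."""
    return 180.0 / ell

print(multipole_to_degrees(80))   # ~2 degrees: the scales BICEP2 targets
print(multipole_to_degrees(200))  # ~1 degree: where lensing starts to dominate
```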
The Polarbear results were announced on 11 March (disclosure: I am a member of the Polarbear team). These give a detection of the gravitational lensing signal. It was expected, and has been observed in other ways both in temperature and polarisation, but this was the first time it’s been seen directly in this sort of B-mode power spectrum, a crucial advance in the field, letting us really see lensing unblurred by the presence of other effects. We looked at very “clean” areas of the sky, in an effort to minimise the possible contamination from those astrophysical foregrounds.
The BICEP2 results were announced with a big press conference on 17 March. There are two papers so far, one giving the scientific results, another discussing the experimental techniques used — more papers discussing the data processing and other aspects of the analysis are forthcoming. But there is no doubt from the results that they have presented so far that this is an amazing, careful, and beautiful experiment.
Taken at face value, the BICEP2 results give a pretty strong detection of gravitational radiation from the early Universe, with the ratio parameter r=0.20, with error bars +0.07 and -0.05 (they are different in the two different directions, so you can’t write it with the usual “±”).
This is why there has been such an amazing amount of interest in both the press and the scientific community about these results — if true, they are a first semi-direct detection of gravitational radiation, strong evidence that inflation happened in the early Universe, and therefore a first look at waves which were created in the first tiny fraction of a second after the big bang, and have been propagating unimpeded in the Universe ever since. If we can measure more of the properties of these waves, we can learn more about the way inflation happened, which may in turn give us a handle on the particle physics of the early Universe and ultimately on a so-called “theory of everything” joining up quantum mechanics and gravity.
Taken at face value, the BICEP2 results imply that the very simplest theories of inflation may be right: the so-called “single-field slow-roll” theories that postulate a very simple addition to the particle physics of the Universe. In the other direction, scientists working on string theory have begun to make predictions about the character of inflation in their models, and many of these models are strongly constrained — perhaps even ruled out — by these data.
Skepticism
This is great. But scientists are skeptical by nature, and many of us have spent the last few days happily trying to poke holes in these results. My colleagues Peter Coles and Ted Bunn have blogged their own worries over the last couple of days, and Antony Lewis has already done some heroic work looking at the data.
The first worry is raised by their headline result: r=0.20. On its face, this conflicts with last year’s Planck result, which says that r<0.11 (of course, both of these numbers really represent probability distributions, so there is no absolute contradiction between them, but they should be seen as a very unlikely combination). How can we ameliorate the “tension” (a word that has come into vogue in cosmology lately: a wimpy way — that I’ve used, too — of talking about apparent contradictions!) between these numbers?
First, how does Planck measure r to begin with? Above, I wrote about how B modes show only gravitational radiation (and lensing, and astrophysical foregrounds). But the same gravitational radiation also contributes to the CMB temperature, albeit at a comparatively low level, and at large angular scales — the very left-most points of the temperature equivalent of a plot like the above — I reproduce one from last year’s Planck release at right. In fact, those left-most data points are a bit low compared to the most favoured theory (the smooth curve), which pushes the Planck limit down a bit.
But Planck and BICEP2 measure r at somewhat different angular scales, and so we can “ameliorate the tension” by making the theory a bit more complicated: the gravitational radiation isn’t described by just one number, but by a curve. If both data are to be believed, the curve slopes up from the Planck regime toward the BICEP2 regime. In fact, such a new parameter is already present in the theory, and goes by the name “tensor tilt”. The problem is that the required amount of tilt is somewhat larger than the simplest ideas — such as the single-field slow-roll theories — prefer.
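In equations, using a conventional power-law form (the pivot scale k₀ and the exact convention vary from paper to paper):

$$ \mathcal{P}_t(k) = A_t \left(\frac{k}{k_0}\right)^{n_t}, $$

where the tensor tilt n_t lets the gravitational-wave amplitude vary with scale. In the simplest single-field slow-roll models one expects the small, negative value n_t = −r/8, which is why a large upward tilt sits awkwardly with them.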
If we want to keep the theories simple, we need to make the data more complicated: bluntly, we need to find mistakes in either Planck or BICEP2. The large-scale CMB temperature sky has been scrutinised for the last 20 years or so, from COBE through WMAP and now Planck. Throughout this time, the community has been building up a catalog of “anomalies” (another term of art we use to describe things we’re uncomfortable with), many of which do seem to affect those large scales. The problem is that no one can quite figure out whether these things are statistically significant: we look at so many possible ways that the sky could be weird, but we only publish the ones that look significant. As my Imperial colleague Professor David Hand would point out, “Coincidences, Miracles, and Rare Events Happen Every Day”. Nonetheless, there seems to be some evidence that something interesting/unusual/anomalous is happening at large scales, and perhaps if we understood this correctly, the Planck limits on r would go up.
But perhaps not: those results have been solid for a long while without an alternative explanation. So maybe the problem is with BICEP2? There are certainly lots of ways they could have made mistakes. Perhaps most importantly, it is very difficult for them to distinguish between primordial perturbations and astrophysical foregrounds, as their main results use only data from a single frequency (like a single colour in the spectrum, but down closer to radio wavelengths). They do compare with some older data at a different frequency, but the comparison does not strongly rule out contamination. They also rely on models for possible contamination, which give a very small contribution, but these models are very poorly constrained by current data.
Another way they could go wrong is that they may misattribute some of their temperature measurement, or their E mode polarisation, to their B mode detection. Because the temperature and E mode are so much larger than the B they are seeing, only a very small amount of such contamination could change their results by a large amount. They do their best to control this “leakage”, and argue that its residual effect is tiny, but it’s very hard to get absolutely right.
And there is some internal evidence within the BICEP2 results that things are not perfect. The most obvious issue comes from the figure above: the points around ℓ=200 — where the lensing contribution begins to dominate — are a bit higher than the model. Is this just a statistical fluctuation, or is it evidence of a broader problem? Their paper shows some somewhat discrepant points in their E polarisation measurements, as well. None of these are very statistically significant, and some may be confirmed by other measurements, but there are enough of these that caution makes sense. From only a few days thinking about the results (and not yet really sitting down and going through the papers in great depth), it’s hard to make detailed judgements. It seems like the team have been careful enough that it’s hard to imagine the results going away completely, but easy to imagine lots of ways in which they could be wrong in detail.
But this skepticism from me and others is a good thing, even for the BICEP2 team: they will want their results scrutinised by the community. And the rest of us in the community will want the opportunity to reproduce the results. First, we’ll try to dig into the BICEP2 results themselves, making sure that they’ve done everything as well as possible. But over the next months and years, we’ll want to reproduce them with other experiments.
First, of course, will be Planck. Since I’m on Planck, there’s not much I can say here, except that we expect to release our own polarisation data and cosmological results later this year. This paper (Efstathiou and Gratton 2009) may be of interest….
Next, there are a bunch of ground- and balloon-based CMB experiments gathering data and/or looking for funding right now. The aforementioned Polarbear will continue, and I’m also involved with the EBEX team which hopes to fly a new balloon to probe the CMB polarisation again in a few years. In the meantime, there’s also ACT, SPIDER, SPT, and indeed the successor to BICEP itself, called the Keck array, and many others besides. Eventually, we may even get a new CMB satellite, but don’t hold your breath…
Rumour-mongering
I first heard about the coming BICEP2 results in the middle of last week, when I was up in Edinburgh and received an email from a colleague just saying “r=0.2?!!?” I quickly called to ask what he meant, and he transmitted the rumour of a coming BICEP detection, perhaps bolstered by some confirmation from their successor experiment, the Keck Array (which does in fact appear in their paper). Indeed, such a rumour had been floating around the community for a year or so, but most of us thought it would turn out to be spurious. But very quickly last week, we realised that this was for real. It became most solid when I had a call from a Guardian journalist, who managed to elicit some inane comments from me, before anything was known for sure.
By the weekend, it became clear that there would be an astronomy-related press conference at Harvard on Monday, and we were all pretty sure that it would be the BICEP2 news. The number r=0.20 was most commonly cited, and we all figured it would have an error bar around 0.06 or so — small enough to be a real detection, but large enough to leave room for error (but I also heard rumours of r=0.075).
By Monday morning, things had reached whatever passes for a fever pitch in the cosmology community: twitter and Facebook conversations, a mention on BBC Radio 4’s Today programme, all before the official title of the press conference was even announced: “First Direct Evidence for Cosmic Inflation”. Apparently, other BBC journalists had already had embargoed confirmation of some of the details from the BICEP2 team, but the embargo meant they couldn’t participate in the rumour-spreading.
I was traveling during most of this time, fielding occasional calls from journalists (there aren’t that many CMB specialists within easy reach of the London-based media), though, unfortunately for my ego, I wasn’t able to make it onto any of Monday night’s choice TV spots.
By the time of the press conference itself, the cosmology community had self-organised: there was a Facebook group organised by Fermilab’s Scott Dodelson, which pretty quickly started dissecting the papers and was able to follow along with the press conference as it happened (despite the fact that most of us couldn’t get onto the website — one of the first times that the popularity of cosmology has brought down a server).
At the time, I was on a series of trains from Loch Lomond to Glasgow, Edinburgh and finally on to London, but the Facebook group kept me plugged in the whole way (from a tech standpoint, it’s surprising that we didn’t do this on the supposedly more capable Google Plus platform, but the sociological fact is that more of us are on, and use, Facebook). It was great to be able to watch, and participate in, the real-time discussion of the papers (which continues on Facebook as of now). Cosmologists have been teasing out possible inconsistencies (some of which I alluded to above), trying to understand the implications of the results if they’re right — and thinking about the next steps. IRL, now that I’m back at Imperial, we’ve been poring over the papers in yet more detail, trying to work out exactly how they’ve gathered and analysed their data, and seeing what parts we want to try to reproduce.
Aftermath
Physics moves fast nowadays: as of this writing, about 72 hours after the announcement, there are 16 papers mentioning the BICEP2 results on the physics ArXiV (it’s a live search, so the number will undoubtedly grow). Most of them attempt to constrain various early-Universe models in the light of the r=0.20 results — some of them with some amount of statistical rigour, others just pointing out various models in which that is more or less easy to get. (I’ve obviously spent too much time on this post and not enough writing papers.)
It’s also worth collecting, if only for my own future reference, some of the media coverage of the results:
- The BBC’s excellent news piece and nice explanatory supplement
- The Wall Street Journal
- The Guardian
- The Telegraph
- The Economist
- IEEE Spectrum (on the more technical side)
For more background, you can check out
- Sean Carroll’s introduction and post-press-conference debrief
- Peter Coles’ liveblog, straw poll, and skeptical summary
Today was the deadline for submitting so-called “White Papers” proposing the next generation of European Space Agency satellite missions. Because of the long lead times for these sorts of complicated technical achievements, this call is for launches in the faraway years of 2028 or 2034. (These dates would be harder to wrap my head around if I weren’t writing this on the same weekend that I’m attending the 25th reunion of my university graduation, an event about which it’s difficult to avoid the clichéd thought that May, 1988 feels like the day before yesterday.)
At least two of the ideas are particularly close to my scientific heart.
The Polarized Radiation Imaging and Spectroscopy Mission (PRISM) is a cosmic microwave background (CMB) telescope, following on from Planck and the current generation of sub-orbital telescopes like EBEX and PolarBear: whereas Planck has 72 detectors observing the sky at nine frequencies, PRISM would have more than 7000 detectors working in a similar way to Planck over 32 frequencies, along with another set observing 300 narrow frequency bands, and another instrument dedicated to measuring the spectrum of the CMB in even more detail. Combined, these instruments allow a wide variety of cosmological and astrophysical goals, concentrating on more direct observations of early Universe physics than is possible with current instruments, in particular the possible background of gravitational waves from inflation, and the small correlations induced by the physics of inflation and other physical processes in the history of the Universe.
The eLISA mission is the latest attempt to build a gravitational radiation observatory in space, observing astrophysical sources rather than the primordial background affecting the CMB, using giant lasers to measure the distance between three separate free-floating satellites a million kilometres apart from one another. As a gravitational wave passes through the triangle, it bends space and effectively changes the distance between them. The trio would thereby be sensitive to the gravitational waves produced by small, dense objects orbiting one another, objects like white dwarfs, neutron stars and, most excitingly, black holes. This would give us a probe of physics in locations we can’t see with ordinary light, and in regimes that we can’t reproduce on earth or anywhere nearby.
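The numbers involved are staggering. A passing wave of strain h changes an arm of length L by ΔL ~ hL; for an illustrative strain of h ~ 10⁻²⁰ and the million-kilometre arms mentioned above,

$$ \Delta L \sim h L \approx 10^{-20} \times 10^{9}\,\mathrm{m} = 10^{-11}\,\mathrm{m}, $$

about ten picometres, far smaller than a single atom.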
In the selection process, ESA is supposed to take into account the interests of the community. Hence both of these missions are soliciting support, of active and interested scientists and also the more general public: check out the sites for PRISM and eLISA. It’s a tough call. Both cases would be more convincing with a detection of gravitational radiation in their respective regimes, but the process requires putting down a marker early on. In the long term, a CMB mission like PRISM seems inevitable — there are unlikely to be any technical showstoppers — it’s just a big telescope in a slightly unusual range of frequencies. eLISA is more technically challenging: the LISA Pathfinder effort has shown just how hard it is to keep and monitor a free-floating mass in space, and the lack of a detection so far from the ground-based LIGO observatory, although completely consistent with expectations, has kept the community’s enthusiasm lower. (This will likely change with Advanced LIGO, expected to see many hundreds of sources as soon as it comes online in 2015 or thereabouts.)
Full disclosure: although I’ve signed up to support both, I’m directly involved in the PRISM white paper.
Yesterday’s release of the Planck papers and data wasn’t just aimed at the scientific community, of course. We wanted to let the rest of the world know about our results. The main press conference was at ESA HQ in Paris, and there was a smaller event here in London run by the UKSA, which I participated in as part of a panel of eight Planck scientists.
The reporters tried to keep us honest, asking us to keep simplifying our explanations so that they — and their readers — could understand them. We struggled with describing how our measurements of the typical size of spots in our map of the CMB eventually led us to a measurement of the age of the Universe (which I tried to do in my previous post). This was hard not only because the reasoning is subtle, but also because, frankly, it’s not something we care that much about: it’s a model-dependent parameter, something we don’t measure directly, and doesn’t have much of a cosmological consequence. (I ended up on the phone with the BBC’s Pallab Ghosh at about 8pm trying to work out whether the age has changed by 50 or 80 million years, a number that means more to him and his viewers than to me and my colleagues.)
There are pieces by the reporters who asked excellent questions at the press conference, at The Guardian, The Economist and The Financial Times, as well as one behind the (London) Times paywall by Hannah Devlin who was probably most rigorous in her requests for us to simplify our explanations. I’ll also point to NPR’s coverage, mostly since it is one of the few outlets to explicitly mention the topology of the Universe which was one of the areas of Planck science I worked on myself.
Aside from the press conference itself, the media were fairly clamouring for the chance to talk about Planck. Most of the major outlets in the UK and around Europe covered the Planck results. Even in the US, we made it onto the front page of the New York Times. Rather than summarise all of the results, I’ll just self-aggrandizingly point to the places where I appeared: a text-based preview from the BBC, and a short quote on video taken after the press conference, as well as one on ITV. I’m most proud of my appearance with Tom Clarke on Channel 4 News — we spent about an hour planning and discussing the results, edited down to a few minutes including my head floating in front of some green-screen astrophysics animations.
Now that the day is over, you can look at the results for yourself at the BBC’s nice interactive version, or at the lovely Planck Chromoscope created by Cardiff University’s Dr Chris North, who donated a huge amount of his time and effort to helping us make yesterday a success. I should also thank our funders over at the UK Space Agency, STFC and (indirectly) ESA — Planck is big science, and these sorts of results don’t come cheap. I hope you agree that they’ve been worth it.
If you’re the kind of person who reads this blog, then you won’t have missed yesterday’s announcement of the first Planck cosmology results.
The most important is our picture of the cosmic microwave background itself:
But it takes a lot of work to go from the data coming off the Planck satellite to this picture. First, we have to make nine different maps, one at each of the frequencies in which Planck observes, from 30 GHz (with a wavelength of 1 cm) up to 857 GHz (0.35 mm) — note that the colour scales here are the same:
At low and high frequencies, these are dominated by the emission of our own galaxy, and there is at least some contamination over the whole range, so it takes hard work to separate the primordial CMB signal from the dirty (but interesting) astrophysics along the way. In fact, it’s sufficiently challenging that the team uses four different methods, each with different assumptions, to do so, and the results agree remarkably well.
In fact, we don’t use the above CMB image directly to do the main cosmological science. Instead, we build a Bayesian model of the data, combining our understanding of the foreground astrophysics and the cosmology, and marginalise over the astrophysical parameters in order to extract as much cosmological information as we can. (The formalism is described in the Planck likelihood paper, and the main results of the analysis are in the Planck cosmological parameters paper.)
The main tool for this is the power spectrum, a plot which shows us how the different hot and cold spots on our CMB map are distributed:
In this plot, the left-hand side (low ℓ) corresponds to large angles on the sky and high ℓ to small angles. Planck’s results are remarkable for covering this whole range from ℓ=2 to ℓ=2500: the previous CMB satellite, WMAP, had a high-quality spectrum out to ℓ=750 or so; ground- and balloon-based experiments like SPT and ACT filled in some of the high-ℓ regime.
It’s worth marvelling at this for a moment, a triumph of modern cosmological theory and observation: our theoretical models fit our data from scales of 180° down to 0.1°, each of those bumps and wiggles a further sign of how well we understand the contents, history and evolution of the Universe. Our high-quality data has refined our knowledge of the cosmological parameters that describe the universe, decreasing the error bars by a factor of several on the six parameters that describe the simplest ΛCDM universe. Moreover, and maybe remarkably, the data don’t seem to require any additional parameters beyond those six: for example, despite previous evidence to the contrary, the Universe doesn’t need any additional neutrinos.
The quantity most well-measured by Planck is related to the typical size of spots in the CMB map; it’s about a degree, with an error of less than one part in 1,000. This quantity has changed a bit (by about the width of the error bar) since the previous WMAP results. This, in turn, causes us to revise our estimates of quantities like the expansion rate of the Universe (the Hubble constant), which has gone down, in fact by enough that it’s interestingly different from its best measurements using local (non-CMB) data, from more or less direct observations of galaxies moving away from us. Both methods have disadvantages: for the CMB, it’s a very indirect measurement, requiring imposing a model upon the directly measured spot size (known more technically as the “acoustic scale” since it comes from sound waves in the early Universe). For observations of local galaxies, it requires building up the famous cosmic distance ladder, calibrating our understanding of the distances to further and further objects, few of which we truly understand from first principles. So perhaps this discrepancy is due to messy and difficult astrophysics, or perhaps to interesting cosmological evolution.
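Schematically, the model dependence enters because that angle is the ratio of two distances, each of which depends on the cosmological parameters:

$$ \theta_\ast = \frac{r_s}{D_A}, $$

where r_s is how far a sound wave could travel before the CMB was released (the sound horizon) and D_A is the distance the CMB light has travelled to reach us. Only by assuming a model for both can we turn an exquisitely measured angle into, say, a Hubble constant.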
This change in the expansion rate is also indirectly responsible for the results that have made the most headlines: it changes our best estimate of the age of the Universe (slower expansion means an older Universe) and of the relative amounts of its constituents (since the expansion rate is related to the geometry of the Universe, which, because of Einstein’s General Relativity, tells us the amount of matter).
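The link between expansion rate and age can be made concrete in a few lines of Python: the age of a flat ΛCDM universe is just an integral over its expansion history. (The parameter values below are round, illustrative numbers, not the published Planck or WMAP fits.)

```python
import numpy as np
from scipy.integrate import quad

def age_gyr(H0, Om):
    """Age of a flat Lambda-CDM universe: t0 = integral of da / (a H(a))."""
    OL = 1.0 - Om                          # flatness: dark energy makes up the rest
    hubble_time = 977.8 / H0               # 1/H0 in Gyr, for H0 in km/s/Mpc
    E = lambda a: np.sqrt(Om / a**3 + OL)  # H(a)/H0
    integral, _ = quad(lambda a: 1.0 / (a * E(a)), 1e-8, 1.0)
    return hubble_time * integral

print(age_gyr(67.0, 0.32))  # slower expansion, more matter: about 13.8 Gyr
print(age_gyr(71.0, 0.27))  # faster expansion: a slightly younger universe
```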
But the cosmological parameters measured in this way are just Planck’s headlines: there is plenty more science. We’ve gone beyond the power spectrum above to put limits upon so-called non-Gaussianities, which are signatures of the detailed way in which the seeds of large-scale structure in the Universe were initially laid down. We’ve observed clusters of galaxies which give us yet more insight into cosmology (and which seem to show an intriguing tension with some of the cosmological parameters). We’ve measured the deflection of light by gravitational lensing. And in work that I helped lead, we’ve used the CMB maps to put limits on some of the ways in which our simplest models of the Universe could be wrong, possibly having an interesting topology or rotation on the largest scales.
But because we’ve scrutinised our data so carefully, we have found some peculiarities which don’t quite fit the models. From the days of COBE and WMAP, there has been evidence that the largest angular scales in the map, a few degrees and larger, have some “anomalies” — some of the patterns show strange alignments, some show unexpected variation between two different hemispheres of the sky, and there are some areas of the sky that are larger and colder than is expected to occur in our theories. Individually, any of these might be a statistical fluke (and collectively they may still be) but perhaps they are giving us evidence of something exciting going on in the early Universe. Or perhaps, to use a bad analogy, the CMB map is like the Zapruder film: if you scrutinise anything carefully enough, you’ll find things that look like a conspiracy, but turn out to have an innocent explanation.
I’ve mentioned eight different Planck papers so far, but in fact we’ve released 28 (and there will be a few more to come over the coming months, and many in the future). There’s an overall introduction to the Planck Mission, and papers on the data processing, observations of relatively nearby galaxies, and plenty more cosmology. The papers have been submitted to the journal A&A, they’re available on the arXiv, and you can find a list of them at the ESA site.
Even more important for my cosmology colleagues, we’ve released the Planck data as well, along with the code and other information necessary to understand it: you can get it from the Planck Legacy Archive. I’m sure we’ve only just begun to get exciting and fun science out of the data from Planck. And this is only the beginning: just the first 15 months of observations, and just the intensity of the CMB. In the coming years we’ll be analysing (and releasing) more than one further year of data, and starting to dig into Planck’s observations of the polarized sky.
My apologies for being far too busy to post. I’ll be much louder in a couple of weeks once we release the Planck data — on March 21. Until then, I have to shut up and follow the Planck rules.
OK, back to editing. (I’ll try to update this post with any advance information as it becomes available.)
Update (on timing, not content): the main Planck press conference will be held on the morning of 21 March at 10am CET at ESA HQ in Paris. There will be a simultaneous UK event (9am GMT) held at the Royal Astronomical Society in London, where the Paris event will be streamed, followed by a local Q&A session. (There will also be a more technical afternoon session in Paris.)
Probably more important for my astrophysics colleagues: the Planck papers will be posted on the ESA website at noon on the 21st, after the press event, and will appear on the arXiv the following day, 22 March. Be sure to set aside some time next weekend!
My colleagues, friends and collaborators on the EBEX project are in Antarctica this (Northern) winter preparing the telescope for launch. And today, they did it:
Just beautiful. (The video is from Asad Aboobaker, whose blog EBEX in Flight is documenting the mission from the field.)
EBEX is a next-generation CMB telescope, with hundreds of detectors measuring temperature and polarisation, which we hope will allow us to see the effects of an early epoch of cosmic inflation through the background of gravitational radiation that it produces. But right now, we are just hoping that EBEX catches some good winds in the Antarctic Polar Vortex and comes back around the continent after about two weeks of observing from 120,000 feet.
Nearly two-and-a-half years after its launch, the end of ESA’s Planck mission has begun. (In fact, the BBC scooped the rest of the Planck collaboration itself with a story last week; you can read the UK take at the excellent Cardiff-led public Planck site.)
Planck’s High-Frequency Instrument (HFI) must be cooled to 0.1 degrees above absolute zero, maintained at this temperature by a series of refrigerators — which had been making Planck the coldest known object in space, colder than the 2.7 degrees to which the cosmic microwave background itself warms even the most remote regions of intergalactic space. The final cooler in the chain relies on a tank of the helium-3 isotope, which has finally run out, within days of its predicted lifetime — having given Planck more than twice as much time observing the Universe as its nominal 14-month mission.
The Low-Frequency Instrument (LFI) doesn’t require such cold temperatures, although it does use one of the earlier stages in the chain, the UK-built 4-degree cooler, as a reference against which to compare its measurements. LFI will, therefore, continue observing for the next half-year or so.
But our work, of course, goes on: we will continue to process and analyse Planck’s data, refining our maps of the sky, and get down to the real work of extracting a full sky’s worth of astrophysics and cosmology from our data. The first, preliminary, release of Planck data happened just one year ago, and yet more new Planck science will be presented at a conference in Bologna in a few months. The most exciting and important work will be getting cosmology from Planck data, which we expect to first present in early 2013, and likely in further iterations beyond that.
It’s been a busy few weeks, and that seems like a good excuse for my lack of posts. Since coming back from Scotland, I’ve been to:
Paris, for our bi-monthly Planck Core Team meetings, discussing the state of the data from the satellite and our ongoing processing of it;
Cambridge, for yet more Planck, this time to discuss the papers that we as a collaboration will be writing over the next couple of years; and
Varenna, on Lake Como in northern Italy, for the Passion for Light meeting, sponsored by SIF (the Italian Physical Society) and EPS (the European Physical Society). The meeting was at least in part to introduce the effort to establish an International Year of Light in 2015, supported by the UN and international scientific organizations. My remit was “Light from the Universe”, which I took as an excuse to talk about (yes) Planck and the Cosmic Microwave Background. That makes sense because of what is revealed in this plot, a version of which I showed:
This figure (made after an excellent one which will be in an upcoming paper by Dole and Bethermin) shows the intensity of the “background light” integrated over all sources in the Universe. The horizontal axis gives the frequency of electromagnetic radiation — from the radio at the far left, to the Cosmic Microwave Background (CMB), the Cosmic Infrared Background (CIB), optical light in the middle, and on to ultraviolet, x-ray and gamma-ray light. The height of each curve is proportional to the intensity of the background, the amount of energy falling on a square meter of area per second coming from a particular direction on the sky (for aficionados of the mathematical details, we actually plot the quantity νIν to take account of the logarithmic axis, so that the area under the curve gives a rough estimate of the total intensity) which is itself also proportional to the total energy density of that background, averaged over the whole Universe.
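The νIν convention deserves a one-line justification: because

$$\int I_\nu \, d\nu \;=\; \int \nu I_\nu \, d(\ln\nu),$$

plotting νIν against the logarithm of the frequency means that equal areas under a curve make equal contributions to the total intensity, so the eye can compare the different backgrounds fairly.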
Here on earth, we are dominated by the sun (or, indoors, by artificial illumination), but a planet is a very unusual place: most of the Universe is empty space, not particularly near a star. What this plot shows is that most of the background — most of the light in the Universe — isn’t from stars or other astronomical objects at all. Rather, it’s the Cosmic Microwave Background, the CMB, light from the early Universe, generated before there were any distinct objects at all, visible today as a so-called black body with a temperature of 2.73 kelvin. It also shows us that there is roughly the same amount of energy in infrared light (the CIB) as in the optical. This light doesn’t come directly from stars, but is re-processed: visible starlight is absorbed by interstellar dust, which heats up and in turn glows in the infrared. That is one of the reasons why Planck’s sister-satellite Herschel, an infrared observatory, is so important: it reveals the fate of roughly half of the starlight ever produced. So we see that outside of the optical and ultraviolet, stars do not dominate the light of the Universe. The x-ray background comes from very hot gas, heated either by falling into clusters of galaxies on large scales or by supernovae within galaxies, along with the very energetic collisions between particles that happen in the environments around black holes as matter falls in. We believe that the gamma-ray background also comes from accretion onto supermassive black holes at the centres of galaxies. But my talk centred on the yellow swathe of the CMB, although the only Planck data released so far are the relatively small contaminants from other sources in the same range of frequencies.
Other speakers in Varenna discussed microscopy, precision clocks, particle physics, the wave-particle duality, and the generation of very high-energy particles of light in the laboratory. But my favourite was a talk by Alessandro Farini, a Florentine “psychophysicist” who studies our perception of art. He showed the detailed (and extremely unphysical) use of light in art by even such supposedly realistic painters as Caravaggio, as well as using a series of optical illusions to show how our perceptions, which we think of as a simple recording of our surroundings, involve a huge amount of processing and interpretation before we are consciously aware of them. (As an aside, I was amused to see his collection of photographs with CMB Nobel Laureate George Smoot.)
And having found myself on the shores of Lake Como I took advantage of my good fortune:
OK, this post has gone on long enough. I’ll have to find another opportunity to discuss speedy neutrinos, crashing satellites (and my latest appearance on the BBC World News to talk about the latter), not to mention our weeklong workshop at Imperial discussing the technical topic of photometric redshifts, and the 13.1 miles I ran last weekend.
Like my friend and colleague Peter Coles, I am just returned from the fine wine-soaked dinner for the workshop “Cosmology and Astroparticle physics from the LHC to PLANCK” held at the Niels Bohr Institute in Copenhagen. It is an honor to visit the place where so many discoveries of 20th Century physics were made, and an even greater honor to be able to speak in the same auditorium as many of the best physicists of the last hundred years.
(You can see the conference photo here; apparently, the trumpet is just the latest in a long series.)
I talked about the most recent results from the Planck Satellite, gave an overview of the state of the art of (pre-Planck) measurements of the Cosmic Microwave Background, and found myself in what feels like the unlikely role of mouthpiece for a large (and therefore conservative) community, basically putting forward the standard model of cosmology: a hot big bang with dark matter, dark energy, and inflation — a model that requires not one, not two, but (at least) three separate additions to the particles and fields that we know from terrestrial observations: one to make up the bulk of the mass of today’s Universe, another to make the Universe accelerate in its expansion today, and another to make it accelerate at early times. It would sound absurd if it weren’t so well supported by observations.
Many of my colleagues in the EBEX experiment have just lit out for the west. Specifically, the team is heading off to Palestine (pronounced “Palesteen”), Texas, to get the telescope and instrument ready for its big Antarctic long-duration balloon flight at the end of the year, when we hope to gather our first real scientific data and observe the temperature and polarization of the cosmic microwave background (CMB) radiation. Unlike the Planck Satellite, which has a few dozen detectors changed little from those that flew on MAXIMA and BOOMERanG in the 1990s, EBEX can use more modern technology, and will fly with thousands of detectors, allowing us to achieve far greater sensitivity to the smallest variations in the CMB.
Asad, one of the EBEX postdocs, who has been involved in the experiment for several years, will be writing on the EBEX in Flight blog about the experiences down in Texas and, we hope, the future path of the team and telescope down to Antarctica. Follow along as the team drives across the country (at least twice), assembles and tests the instrument, breaks and fixes things, sleeps too little, works too hard, and, we hope, builds the most sensitive CMB experiment yet deployed. (And of course, eats cheeseburgers.)
And if you want a change from cosmology, you can instead follow along with another friend, Marc, who is trying to see if he can come to grips with writing on an iPad in the supposedly post-PC world, over at typelesswriter.
One of the perks (perqs?) of academia is that occasionally I get an excuse to escape the damp grey of London winters. The Planck Satellite is an international collaboration and, although largely backed by the European Space Agency, it has a large contribution from US scientists, who built the CMB detectors for Planck’s HFI instrument, as well as being significantly involved in the analysis of Planck data. Much of this work is centred at NASA’s famous Jet Propulsion Lab in Pasadena, and I was happy to rearrange my schedule to allow a February trip to sunny Southern California (I hope my undergraduate students enjoyed the two guest lectures during my absence).
Visiting California, I was compelled to take advantage of the local culture, which mostly seemed to involve meals. I ate as much Mexican food as I could manage, from fantastic $1.25 tacos from the El Taquito Mexicano Truck to somewhat higher-end fare at Tinga in LA proper. And I finally got to taste bánh mì, French-influenced Vietnamese sandwiches (which have arrived in London but I somehow haven’t tried them here yet). And I got to take in the view from the heights of Griffith Park:
as well as down at street level:
And even better, I got to share these meals and views with old and new friends.
Of course I was mainly in LA to do science, but even at JPL we managed to escape our windowless meeting room and check out the clean-room where NASA is assembling the Mars Science Lab:
The white pod-like structure is the spacecraft itself, which will parachute into Mars’ atmosphere in a few years, and from it will descend the circular “sky crane” currently parked behind it which will itself deploy the car-sized Curiosity Rover to do the real work of Martian geology, chemistry, climatology and (who knows?) biology.
But my own work was for the semi-annual meeting of the Planck CTP working group (I’ve never been sure if it was intentional, but the name always seemed to me a sort of science pun, obliquely referring to the famous “CPT” symmetry of fundamental physics). In Planck, “CTP” refers to Cℓ from Temperature and Polarization: the calculation of the famous CMB power spectrum which contains much of the cosmological information in the maps that Planck will produce. The spectrum allows us to compress the millions of pixels in a map of the CMB sky, such as this one from the WMAP experiment (the colors give the temperature or intensity of the radiation, the lines its polarization), into just a few thousand numbers we can plot on a graph.
OK, this is not a publishable figure. Instead, it marks the tenth anniversary of the first CTP working group telecon in February 2001 (somewhat before I was involved in the group, actually). But given that we won’t be publishing Planck cosmology data for another couple of years, sugary spectra will have to do instead of the real ones in the meantime.
The work of the CTP group is exactly concerned with finding the best algorithms for translating CMB maps into these power spectra. They must take into account the complicated noise in the map, coming from our imperfect instruments, which observe the sky with finite resolution — that is, a telescope which smooths the sky at a scale from about half down to one-tenth of a degree — and with limited sensitivity — every measurement has a little bit of unavoidable noise added to it. Moreover, in between the CMB, produced 400,000 years after the Big Bang, and Planck’s instruments, observing today, is the entire rest of the Universe, which contains matter that both absorbs and emits (glows) in the microwaves which Planck observes. So in practice we need to deal with all of these effects simultaneously when reducing our maps down to power spectra. This is a surprisingly difficult problem: the naive, brute-force (Bayesian) solution requires a number of computer operations which scales like the cube of the number of pixels in the CMB map; at Planck’s resolution this is as many as 100 million pixels, and there are still no supercomputers capable of doing the required septillion (10²⁴) operations in a reasonable time. If we smooth the map, we can still solve the full problem, but on small scales we need to come up with useful approximations which exploit what we know about the data, usually the very large number of points that contribute, and the so-called asymptotic theorems which say, roughly, that we can learn about the right answer by doing lots of simulations, which are much less computationally expensive.
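To see where that cubic scaling comes from, here is a schematic of the exact pixel-space Gaussian likelihood. Everything about it is illustrative; in reality the covariance would be built from the trial Cℓ, the beam, and the noise model:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lnlike(sky, cov):
    """Exact Gaussian log-likelihood of a CMB map.

    `sky` is the map as a vector of n_pix pixels; `cov` is the
    n_pix x n_pix signal-plus-noise covariance implied by a trial
    power spectrum.  The Cholesky factorization costs O(n_pix^3)
    operations: fine for a heavily smoothed map of ~10^4 pixels,
    hopeless for Planck's ~10^8."""
    chol = cho_factor(cov)                           # the O(n^3) bottleneck
    logdet = 2.0 * np.sum(np.log(np.diag(chol[0])))  # log determinant of cov
    chi2 = sky @ cho_solve(chol, sky)                # sky^T cov^-1 sky
    return -0.5 * (chi2 + logdet)
```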
At the required levels of both accuracy and precision, the results depend on all of the details of the data processing and the algorithm: How do you account for the telescope’s optics and the pixelization of the sky? How do you model the noise in the map? How do you remove those pixels contaminated by astrophysical emission or absorption? All of this is compounded by the necessary (friendly) scientific competition: it is the responsibility of the CTP group to make recommendations for how Planck will actually produce its power spectra for the community and, naturally, each of us wants our own algorithm or computer program to be used — to win. So these meetings are as much about politics as science, but we can hope that the outcome is that all the codes are raised to an appropriate level, and that we can make the decisions on non-scientific grounds (ease of use, flexibility, speed, etc.) that will produce the high-quality scientific results for which we designed and built Planck — and on which we have worked for the last decade or more.
I’ve been meaning to give a shout-out to my colleagues on the ADAMIS team at the APC (AstroParticule et Cosmologie) Lab at the Université Paris 7 for a while: in addition to doing lots of great work on Planck, EBEX, PolarBear and other important CMB and cosmology experiments, they’ve also been running a group blog since the Autumn, Paper(s) of the Week et les autres choses (scientifique) which dissects some of the more interesting work to come out of the cosmology community. In particular, one of my favorite collaborators has written an extremely astute analysis of what, exactly, we on the Planck team released in our lengthy series of papers last month (which I have already discussed in a somewhat more boosterish fashion).
The Satellite now known as the Planck Surveyor was first conceived in the mid-1990s, in the wake of the results from NASA’s COBE Satellite, the first to detect primordial anisotropies in the Cosmic Microwave Background (CMB), light from about 400,000 years after the big bang. (I am a relative latecomer to the project, having only joined in about 2000.)
After all this time, we on the team are very excited to produce our very first scientific results. These take the form of a catalog of sources detected by Planck, along with 25 papers discussing the catalog as well as the more diffuse pattern of radiation on the sky.
Planck is the very first instrument to observe the whole sky with light in nine bands with wavelengths from about 1/3 of a millimeter up to one centimeter, an unprecedented range. In fact this first release of data and papers discusses Planck as a tool for astrophysics — as a telescope observing distant galaxies and clusters of galaxies as well as our own Galaxy, the Milky Way. All of these glow in Planck’s bands (indeed they dominate over the CMB in most of them), and with our high-sensitivity all-sky maps we have the opportunity to do astronomy with Planck, the best microwave telescope ever made. Indeed, to get to this point, we actually have to separate out the CMB from the other sources of emission and, somewhat perversely, actively remove it from the data we are presenting.
Over the last year, then, we on the Planck team have written about 25 papers to support this science; a few of them are about the mission as a whole, the instruments on board Planck, and the data processing pipelines that we have written to produce our data. Then there are a few papers discussing the data we are making available, the Early Release Compact Source Catalog and its various subsets, separately discussing objects within our own Milky Way Galaxy as well as more distant galaxies and clusters of galaxies. The remaining papers give our first attempts at analyzing the data and extracting the best science possible.
Most of the highlights in the current papers provide confirmation of things that astronomers have suspected, thanks to Planck’s high sensitivity and wide coverage. It has long been surmised that most stars in the Universe are formed in locations shrouded by dust, and hence not visible to optical telescopes. Rather, the birth of stars heats the dust to temperatures much lower than those of stars, but much higher than that of the cold dust far from star-forming regions. This warm dust radiates in Planck’s bands, seen at lower and lower frequencies for more and more distant galaxies (due to the redshift of light from these faraway objects). For the first time, Planck has observed this Cosmic Infrared Background (CIB) at frequencies that may correspond to galaxies forming when the Universe was less than 15% of its current age, less than 2 billion years after the big bang. Here is a picture of the CIB at various places around the sky, specifically chosen to be as free as possible of other sources of emission:
Another exciting result has to do with the properties of that dust in our own Milky Way Galaxy. This so-called cosmic dust is known to be made of very tiny grains, from small agglomerations of a few molecules up to those a few tens of micrometers across. Ever since the mid-1990s, there has been some evidence that this dust emits radiation at millimeter wavelengths that the simplest models could not account for. One idea, actually first proposed in the 1950s, is that some of the dust grains are oblong, and receive enough of a kick from their environment that they spin at very high rates, emitting radiation at a frequency related to that rotation. Planck’s observations seem to confirm this prediction quantitatively, seeing its effects in our galaxy. This image of the Rho Ophiuchi molecular cloud shows that the spinning dust emission at 30 GHz traces the same structures as the thermal emission at 857 GHz:
In addition, Planck has found more than twenty new clusters of galaxies, has mapped the dust and gas in the Milky Way in three dimensions, and has uncovered cold gas in nearby galaxies. And this is just the beginning of what Planck is capable of. We have not yet begun to discuss the cosmological implications, nor Planck’s ability to measure not just the intensity of light, but also its polarization.
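A brief technical aside on the dust result above: the “simplest models” that fail at low frequencies are modified blackbodies, a blackbody multiplied by a power-law emissivity. A rough sketch shows the idea (the temperature and emissivity index here are typical illustrative values, not Planck’s fitted ones):

```python
import numpy as np

H_OVER_K = 4.799e-11   # Planck constant over Boltzmann constant, in kelvin per hertz

def blackbody(nu_hz, T):
    """Blackbody spectral radiance B_nu(T), up to overall constants."""
    return nu_hz**3 / np.expm1(H_OVER_K * nu_hz / T)

def thermal_dust(nu_hz, T=20.0, beta=1.8):
    """Modified blackbody: emissivity rising as nu^beta times B_nu(T).

    This describes the dust well at hundreds of GHz but falls far below
    the observed emission near 30 GHz; spinning grains are one way to
    make up the difference."""
    return (nu_hz / 857e9) ** beta * blackbody(nu_hz, T)

# How faint the thermal component alone would be at 30 GHz relative to 857 GHz:
print(thermal_dust(30e9) / thermal_dust(857e9))
```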
Of course the most important thing we have learned so far is how hard it is to work in a team of 400 or so scientists, who — myself included — like neither managing nor being managed (and are likewise not particularly skilled at either). I’ve been involved in a small way in the editing process, shepherding just a few of those 25 papers to completion, paying attention to the language and presentation as much as the science. Given the difficulties, I am relatively happy with the results — the papers can be downloaded directly from ESA, will be available on the arXiv on 12 January 2011, and will eventually be published in the journal Astronomy and Astrophysics. It will be very interesting to see how we manage this in two years when we may have as many as a hundred or so papers at once. Stay tuned.
One of my old friends from graduate school, and a colleague to the present day, Lloyd Knox — whom you may remember from such cosmology hits as the Dark Energy Song — has started an initiative to create “short documentary videos to demonstrate the explanatory power of simple physical models and to help us understand and aesthetically appreciate the natural world”. It’s called The Spherical Cow company — the name comes from the traditional physicists’ trick of idealizing and simplifying any problem he or she gets, sometimes out of all recognition — but usually, when done well, keeping enough of the salient features.
The first video does just that, giving a simple description of the formation of the Cosmic Microwave Background, in the form of a conversation between Lloyd and his son, Teddy — with interpolations for animations and narration. Even with those occasional animations, the whole thing is pleasingly low-fi, but well-explained and charming (especially so for me, as I know the protagonists). I look forward to the next videos in the series, and I’ll certainly be recommending them to students of all ages.
I spent part of this week in Paris (apparently at the same time as a large number of other London-based scientists who were here for other things) discussing whether the European CMB community should rally and respond to ESA’s latest call for proposals for a mission to be launched in the next open slot—which isn’t until around 2022.
As successful as Planck seems to be, and as fun as it is working with the data, I suspect that no one on the Planck team thinks that a 400-scientist, dispersed, international team coming from a dozen countries each with its own politics and funding priorities, is the most efficient way to run such a project. But we’re stuck with it—no single European country can afford the better part of a billion Euros it will cost. Particle physics has been in this mode for the better part of fifty years, and arguably since the Manhattan Project, but it’s a new way of doing things — involving new career structures, new ways of evaluating research, new ways of planning, and a new concentration upon management — that we astrophysicists have to develop to answer our particular kinds of scientific questions.
But a longer discussion of “big science” is for another time. The next CMB satellite will probably be big, but the coming ESA call is officially for an “M-class” (for “medium”) mission, with a meagre (sic) 600 million euro cap. What will the astrophysical and cosmological community get for all this cash? How will it improve upon Planck?
Well, Planck has been designed to mine the cosmic microwave background for all of the temperature information available, the brightness of the microwave sky in all directions, down to around a few arcminutes, the scale at which it becomes smooth. But light from the CMB also carries information about the polarisation of light, essentially two more numbers we can measure at every point. Planck will measure some of this polarisation data, but we know that there will be much more to learn. We expect that this as-yet unmeasured polarisation can answer questions about fundamental physics that affects the early universe and describes its content and evolution. What are the details of the early period of inflation that gave the observable Universe its large-scale properties and seeded the formation of structures in it—and did it happen at all? What are the properties of the ubiquitous and light neutrino particles whose presence would have had a small but crucial effect on the evolution of structure?
The importance of these questions is driving us toward a fairly ambitious proposal for the next CMB mission. It will have a resolution comparable to that of Planck, but with many hundreds of individual detectors, compared to Planck’s many dozens—giving us over an order of magnitude increase in sensitivity to polarisation on the sky. Actually, even getting to this point took a good day or two of discussion. Should we instead make a cheaper, more focused proposal that would concentrate only on the question of inflation, and in particular upon the background of gravitational radiation — observable as so-called “B-modes” in polarisation — that some theories predict? The problem with this proposal is that it is possible, or even likely, that it will produce what is known as a “null result”—that is, it won’t see anything at all. Moreover, a current generation of ground- and balloon-based CMB experiments, including EBEX and Polarbear, which I am lucky enough to be part of, are in progress, and should have results within the next few years, possibly scooping any too-narrowly designed future satellite.
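A word on what these “B-modes” are. The “two more numbers” mentioned above are the Stokes parameters Q and U, and the polarisation field they describe can be separated into a curl-free “E-mode” pattern and a divergence-free “B-mode” pattern. In one common convention,

$$(Q \pm iU)(\hat n) = \sum_{\ell m} a_{\pm 2,\ell m}\, {}_{\pm 2}Y_{\ell m}(\hat n), \qquad a^E_{\ell m} = -\tfrac{1}{2}\left(a_{2,\ell m} + a_{-2,\ell m}\right), \qquad a^B_{\ell m} = \tfrac{i}{2}\left(a_{2,\ell m} - a_{-2,\ell m}\right),$$

and the point is that ordinary density perturbations cannot generate primordial B-modes, while gravitational waves can, which is what makes them such a clean signature of inflation.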
So we will be broadening our case beyond these B-modes, and therefore making our design more ambitious, in order to make these further fundamental measurements. And, like Planck, we will be opening a new window on the sky for astrophysicists of all stripes, giving measurements of magnetic fields, the shapes of dust grains, and likely many more things we haven’t yet thought of.
One minor upshot of all this is that our original name, the rather dull “B-Pol”, is no longer appropriate. Any ideas?
The Planck Satellite was launched in May 2009, and started regular operations late last summer. This spring, we achieved an important milestone: the satellite has observed the whole sky.
To celebrate, the Planck team have released an image of the full sky. The telescope has detectors which can see the sky in nine bands at wavelengths ranging from 0.3 millimeters up to nearly a centimeter, out of which we have made this false-color image. The center of the picture is toward the center of the Galaxy, with the rest of the sphere unwrapped into an ellipse so that we can put it onto a computer screen (so the left and right edges are really both the same points).
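That unwrapping of the sphere into an ellipse is the Mollweide projection, the standard way of displaying HEALPix sky maps. With the healpy library it is essentially a one-liner (the filename is hypothetical):

```python
import healpy as hp
import matplotlib.pyplot as plt

sky = hp.read_map("planck_allsky.fits")  # hypothetical HEALPix file
hp.mollview(sky, title="Planck all-sky survey", norm="hist")  # histogram-equalized colors
plt.show()
```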
At the longest and shortest wavelengths, our view is dominated by matter in our own Milky Way galaxy — this is the purple-blue cloud, mostly so-called galactic “cirrus” gas and dust, largely concentrated in a thin band running through the center which is the disk of our galaxy viewed from within.
In addition to this so-called diffuse emission, we can also see individual, bright blue-white objects. Some of these are within our galaxy, but many are themselves whole distant galaxies, viewed from millions or even billions of light years away. Here’s a version of the picture with some objects highlighted:
Even though Planck is largely a cosmology mission, we expect these galactic and extragalactic data to be invaluable to astrophysicists of all stripes. Buried in these pictures we hope to find information on the structure and formation of galaxies, on the evolution of very faint magnetic fields, and on the evolution of the most massive objects in the Universe, clusters of galaxies.
But there is plenty of cosmology to be done: we see the Cosmic Microwave Background (CMB) in the red and yellow splotches at the top and bottom — out of the galactic plane. We on the Planck team will be spending much of the next two years separating the galactic and extragalactic “foreground” emission from the CMB, and characterizing its properties in as much detail as we can. Stay tuned.
I admit that I was somewhat taken aback by the level of interest in these pictures: we haven’t released any data to the community, or written any papers. Indeed, we’ve really said nothing at all about science. Yet we’ve made it onto the front page of the Independent and even the Financial Times, and yours truly was quoted on the BBC’s website. I hope this is just a precursor to the excitement we’ll generate when we can actually talk about science, first early next year when we release a catalog of sources on the sky for the community to observe with other telescopes, and then in a couple of years’ time when we will finally drop the real CMB cosmology results.