Another aspect of Planck’s legacy bears examining.
A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite. (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012.)
Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.
Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)
I also understand that the PIs will use a portion of their award to create a fund that all members of the collaboration can draw on for Planck-related travel over the coming years, now that little or no governmental funding remains for Planck work. Those of us who receive a financial portion of the award will be encouraged to contribute to that fund as well (after, unfortunately, working out the tax implications of both receiving the prize and donating it back).
This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)
However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.
This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.
This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.
Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.
And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).
Planck 2018: the science
So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, which is probably Planck’s most significant limitation.
The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.
Altogether, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10²⁰. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)
Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.
As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to ℓ>30, since ℓ is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (ℓ<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units tell us how much faster, in km/s, something is moving away from us for every megaparsec (Mpc) of distance); whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
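To get a feel for how big that discrepancy is, here is a back-of-the-envelope calculation (my own arithmetic, not an official Planck-team statistic): treating the two measurements as independent Gaussians, the difference in units of the combined error is

```python
import math

# Hubble-constant values quoted above, in km/s/Mpc
H0_local, sigma_local = 73.52, 1.62    # local distance-ladder measurement
H0_planck, sigma_planck = 67.27, 0.60  # Planck, inferred within LCDM

# Combine the errors in quadrature and express the gap in "sigmas".
combined_sigma = math.hypot(sigma_local, sigma_planck)
tension = (H0_local - H0_planck) / combined_sigma
print(f"tension = {tension:.1f} sigma")  # prints "tension = 3.6 sigma"
```

A roughly 3.6-sigma gap, which is why this particular “tension” gets so much attention.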
The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.
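A sketch of why “three sigma” is weaker than it sounds: the two-sided Gaussian tail probability at n sigma looks tiny, but it applies to a single pre-specified test, with no adjustment for the many quantities a large survey examines (the “look-elsewhere” effect) or for underestimated errors.

```python
import math

def two_sided_p(n_sigma: float) -> float:
    """Two-sided tail probability of a Gaussian at n_sigma standard deviations."""
    return math.erfc(n_sigma / math.sqrt(2))

# Rare for one test, but routine when hundreds of quantities are inspected.
print(f"p(3 sigma) = {two_sided_p(3):.4f}")  # prints "p(3 sigma) = 0.0027"
```

With a few hundred independent checks, a handful of “three-sigma” flukes is exactly what one should expect.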
(If you’re looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)
Planck 2018: lessons learned
So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).
But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.
Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.
That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.
It was announced this morning that the WMAP team has won the $3 million Breakthrough Prize. Unlike the Nobel Prize, which infamously is only awarded to three people each year, the Breakthrough Prize was awarded to the whole 27-member WMAP team, led by Chuck Bennett, Gary Hinshaw, Norm Jarosik, Lyman Page, and David Spergel, but including everyone through postdocs and grad students who worked on the project. This is great, and I am happy to send my hearty congratulations to all of them (many of whom I know well and am lucky to count as friends).
I actually knew about the prize last week as I was interviewed by Nature for an article about it. Luckily I didn’t have to keep the secret for long. Although I admit to a little envy, it’s hard to argue that the prize wasn’t deserved. WMAP was ideally placed to solidify the current standard model of cosmology, a Universe dominated by dark matter and dark energy, with strong indications that there was a period of cosmological inflation at very early times, which had several important observational consequences. First, it made the geometry of the Universe — as described by Einstein’s theory of general relativity, which links the contents of the Universe with its shape — flat. Second, it generated the tiny initial seeds which eventually grew into the galaxies that we observe in the Universe today (and the stars and planets within them, of course).
By the time WMAP released its first results in 2003, a series of earlier experiments (including MAXIMA and BOOMERanG, which I had the privilege of being part of) had gone much of the way toward this standard model. Indeed, about ten years ago one of my Imperial colleagues, Carlo Contaldi, and I wanted to make that comparison explicit, so we used what were then considered fancy Bayesian sampling techniques to combine the data from balloons and ground-based telescopes (which are collectively known as “sub-orbital” experiments) and compare the results to WMAP. We got a plot like the following (which we never published), showing the main quantity that these CMB experiments measure, called the power spectrum (which I’ve discussed in a little more detail here). The horizontal axis corresponds to the size of structures in the map (actually, its inverse, so smaller is to the right) and the vertical axis to how large the signal is on those scales.
As you can see, the suborbital experiments, en masse, had data at least as good as WMAP on most scales except the very largest (leftmost; this is because you really do need a satellite to see the entire sky) and indeed were able to probe smaller scales than WMAP (to the right). Since then, I’ve had the further privilege of being part of the Planck Satellite team, whose work has superseded all of these, giving much more precise measurements over all of these scales:
Am I jealous? Ok, a little bit.
But it’s also true, perhaps for entirely sociological reasons, that the community is more apt to trust results from a single, monolithic, very expensive satellite than an ensemble of results from a heterogeneous set of balloons and telescopes, run on (comparative!) shoestrings. On the other hand, the overall agreement amongst those experiments, and between them and WMAP, is remarkable.
And that agreement remains remarkable, even if much of the effort of the cosmology community is devoted to understanding the small but significant differences that remain, especially between one monolithic and expensive satellite (WMAP) and another (Planck). Indeed, those “real and serious” (to quote myself) differences would be hard to see even if I plotted them on the same graph. But since both are ostensibly measuring exactly the same thing (the CMB sky), any differences — even those much smaller than the error bars — must be accounted for, and almost certainly boil down to differences in the analyses or a misunderstanding of each team’s own data. Somewhat more interesting are differences between CMB results and measurements of cosmology from other, very different, methods, but that’s a story for another day.
[Uh oh, this is sort of disastrously long, practically unedited, and a mixture of tutorial- and expert-level text. Good luck. Send corrections.]
It’s been almost exactly a year since the release of the first Planck cosmology results (which I discussed in some depth at the time). On this auspicious anniversary, we in the cosmology community found ourselves with yet more tantalising results to ponder, this time from a ground-based telescope called BICEP2. While Planck’s results were measurements of the temperature of the cosmic microwave background (CMB), this year’s concerned its polarisation.
Polarisation is essentially a headless arrow that can come attached to the photons coming from any direction on the sky — if you’ve worn polarised sunglasses, and noticed how what you see changes as you rotate them around, you’ve seen polarisation. The same physics responsible for the temperature also generates polarisation. But more importantly for these new results, polarisation is a sensitive probe of some of the processes that are normally mixed in, and so hard to distinguish, in the temperature.
Technical aside (you can ignore the details of this paragraph). Actually, it’s a bit more complicated than that: we can think of those headless arrows on the sky as the sum of two separate kinds of patterns. We call the first of these the “E-mode”, and it represents patterns consisting of either radial spikes or circles around a point. The other patterns are called the “B-mode” and look like patterns that swirl around, either to the left or the right. The important difference between them is that the E modes don’t change if you reflect them in a mirror, while the B modes do — we say that they have a handedness, or parity, in somewhat more mathematical terms. I’ve discussed the CMB a lot in the past but can’t do the theory justice here; my colleague Wayne Hu has an excellent, if somewhat dated, set of web pages explaining the physics (probably at a physics-major level).
The excitement comes because these B-mode patterns can only arise in a few ways. The most exciting is that they can come from gravitational waves (GWs) in the early Universe. Gravitational waves (sometimes incorrectly called “gravity waves”, a term which historically refers to unrelated phenomena!) are propagating ripples in space-time, predicted in Einstein’s general relativistic theory of gravitation. Because the CMB is generated about 400,000 years after the big bang, it’s only sensitive to gravitational radiation from the early Universe, not to astrophysical sources like spiralling neutron stars — from which we have other, circumstantial, evidence for gravitational waves, and which are the sources for which experiments like LIGO and eLISA will be searching. These early-Universe gravitational waves move matter around in a specific way, which in turn induces those specific B-mode polarisation patterns.
In the early Universe, there aren’t a lot of ways to generate gravitational waves. The most important one is inflation, an early period of expansion which blows up a subatomically-sized region by something like a billion-billion-billion times in each direction — inflation seems to be the most well thought-out idea for getting a Universe that looks like the one in which we live, flat (in the sense of Einstein’s relativity and the curvature of space-time), more or less uniform, but with small perturbations to the density that have grown to become the galaxies and clusters of galaxies in the Universe today. Those fluctuations arise because the rapid expansion takes minuscule quantum fluctuations and blows them up to finite size. This is essentially the same physics as the famous Hawking radiation from black holes. The fluctuations that eventually create the galaxies are accompanied by a separate set of fluctuations in the gravitational field itself: these are the ones that become gravitational radiation observable in the CMB. We characterise the background of gravitational radiation through the number r, which stands for the ratio of these two kinds of fluctuations — gravitational radiation divided by the density fluctuations.
Important caveat: there are other ways of producing gravitational radiation in the early Universe, although they don’t necessarily make exactly the same predictions; some of these issues have been discussed by my colleagues in various technical papers (Brandenberger 2011; Hindmarsh et al 2008; Lizarraga et al 2014 — the latter paper from just today!).
However, there are other ways to generate B modes. First, lots of astrophysical objects emit polarised light, and they generally don’t preferentially create E or B patterns. In particular, clouds of gas and dust in our galaxy will generally give us polarised light, and as we’re sitting inside our galaxy, it’s hard to avoid these. Luckily, we’re towards the outskirts of the Milky Way, so there are some clean areas of sky, but it’s hard to be sure that we’re not seeing some such light — and there are very few previous experiments to compare with.
We also know that large masses along the line of sight — clusters of galaxies and even bigger — distort the path of the light and can move those polarisation arrows around. This, in turn, can convert what started out as E into B and vice versa. But we know a lot about that intervening matter, and about the E-mode pattern that we started with, so we have a pretty good handle on this. There are some angular scales over which this lensing signal is larger than the gravitational wave signal, and some over which the gravitational wave signal dominates.
So, if we can observe B-modes, and we are convinced that they are primordial, and that they are not due to lensing or astrophysical sources, and they have the properties expected from inflation, then (and only then!) we have direct evidence for inflation!
Here’s a plot, courtesy the BICEP2 team, with the current state of the data targeting these B modes:
The figure shows the so-called power spectrum of the B-mode data — the horizontal “multipole” axis corresponds to angular sizes (θ) on the sky: very roughly, multipole ℓ ~ 180°/θ. The vertical axis gives the amount of “power” at those scales: it is larger if there are more structures of that particular size. The downward pointing arrows are all upper limits; the error bars labeled BICEP2 and Polarbear are actual detections. The solid red curve is the expected signal from the lensing effect discussed above; the long-dashed red curve is the effect of gravitational radiation (with a particular amplitude), and the short-dashed red curve is the total B-mode signal from the two effects.
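The rough rule of thumb quoted above, θ ≈ 180°/ℓ, is easy to turn into numbers (this is only the approximate relation between multipole and angular scale, not the exact spherical-harmonic correspondence):

```python
# Rough translation from multipole ell to angular scale, using theta ~ 180/ell.
def multipole_to_degrees(ell: int) -> float:
    return 180.0 / ell

# A few scales relevant to the plots discussed here: for example,
# the lensing contribution becomes important around ell ~ 200.
for ell in (80, 200, 1000):
    print(f"ell = {ell:4d}  ->  theta ~ {multipole_to_degrees(ell):.2f} deg")
```

So the BICEP2 detection around ℓ of a few hundred corresponds to structures of roughly a degree on the sky.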
The Polarbear results were announced on 11 March (disclosure: I am a member of the Polarbear team). These give a detection of the gravitational lensing signal. It was expected, and has been observed in other ways both in temperature and polarisation, but this was the first time it’s been seen directly in this sort of B-mode power spectrum, a crucial advance in the field, letting us really see lensing unblurred by the presence of other effects. We looked at very “clean” areas of the sky, in an effort to minimise the possible contamination from those astrophysical foregrounds.
The BICEP2 results were announced with a big press conference on 17 March. There are two papers so far, one giving the scientific results, another discussing the experimental techniques used — more papers discussing the data processing and other aspects of the analysis are forthcoming. But there is no doubt from the results that they have presented so far that this is an amazing, careful, and beautiful experiment.
Taken at face value, the BICEP2 results give a pretty strong detection of gravitational radiation from the early Universe, with the ratio parameter r=0.20, with error bars +0.07 and -0.05 (they are different in the two different directions, so you can’t write it with the usual “±”).
This is why there has been such an amazing amount of interest in both the press and the scientific community about these results — if true, they are a first semi-direct detection of gravitational radiation, strong evidence that inflation happened in the early Universe, and therefore a first look at waves which were created in the first tiny fraction of a second after the big bang, and have been propagating unimpeded in the Universe ever since. If we can measure more of the properties of these waves, we can learn more about the way inflation happened, which may in turn give us a handle on the particle physics of the early Universe and ultimately on a so-called “theory of everything” joining up quantum mechanics and gravity.
Taken at face value, the BICEP2 results imply that the very simplest theories of inflation may be right: the so-called “single-field slow-roll” theories that postulate a very simple addition to the particle physics of the Universe. In the other direction, scientists working on string theory have begun to make predictions about the character of inflation in their models, and many of these models are strongly constrained — perhaps even ruled out — by these data.
This is great. But scientists are skeptical by nature, and many of us have spent the last few days happily trying to poke holes in these results. My colleagues Peter Coles and Ted Bunn have blogged their own worries over the last couple of days, and Antony Lewis has already done some heroic work looking at the data.
The first worry is raised by their headline result: r=0.20. On its face, this conflicts with last year’s Planck result, which says that r<0.11 (of course, both of these numbers really represent probability distributions, so there is no absolute contradiction between them, but they should be seen as a very unlikely combination). How can we ameliorate the “tension” (a word that has come into vogue in cosmology lately: a wimpy way — that I’ve used, too — of talking about apparent contradictions!) between these numbers?
First, how does Planck measure r to begin with? Above, I wrote about how B modes show only gravitational radiation (and lensing, and astrophysical foregrounds). But the same gravitational radiation also contributes to the CMB temperature, albeit at a comparatively low level, and at large angular scales — the very left-most points of the temperature equivalent of a plot like the above — I reproduce one from last year’s Planck release at right. In fact, those left-most data points are a bit low compared to the most favoured theory (the smooth curve), which pushes the Planck limit down a bit.
But Planck and BICEP2 measure r at somewhat different angular scales, and so we can “ameliorate the tension” by making the theory a bit more complicated: the gravitational radiation isn’t described by just one number, but by a curve. If both data are to be believed, the curve slopes up from the Planck regime toward the BICEP2 regime. In fact, such a new parameter is already present in the theory, and goes by the name “tensor tilt”. The problem is that the required amount of tilt is somewhat larger than the simplest ideas — such as the single-field slow-roll theories — prefer.
If we want to keep the theories simple, we need to make the data more complicated: bluntly, we need to find mistakes in either Planck or BICEP2. The large-scale CMB temperature sky has been scrutinised for the last 20 years or so, from COBE through WMAP and now Planck. Throughout this time, the community has been building up a catalog of “anomalies” (another term of art we use to describe things we’re uncomfortable with), many of which do seem to affect those large scales. The problem is that no one can quite figure out if these things are statistically significant: we look at so many possible ways that the sky could be weird, but we only publish the ones that look significant. As my Imperial colleague Professor David Hand would point out, “Coincidences, Miracles, and Rare Events Happen Every Day”. Nonetheless, there seems to be some evidence that something interesting/unusual/anomalous is happening at large scales, and perhaps if we understood this correctly, the Planck limits on r would go up.
But perhaps not: those results have been solid for a long while without an alternative explanation. So maybe the problem is with BICEP2? There are certainly lots of ways they could have made mistakes. Perhaps most importantly, it is very difficult for them to distinguish between primordial perturbations and astrophysical foregrounds, as their main results use only data from a single frequency (like a single colour in the spectrum, but down closer to radio wavelengths). They do compare with some older data at a different frequency, but the comparison does not strongly rule out contamination. They also rely on models for possible contamination, which give a very small contribution, but these models are very poorly constrained by current data.
Another way they could go wrong is that they may misattribute some of their temperature measurement, or their E mode polarisation, to their B mode detection. Because the temperature and E mode are so much larger than the B they are seeing, only a very small amount of such contamination could change their results by a large amount. They do their best to control this “leakage”, and argue that its residual effect is tiny, but it’s very hard to get absolutely right.
And there is some internal evidence within the BICEP2 results that things are not perfect. The most obvious one comes from the figure above: the points around ℓ=200 — where the lensing contribution begins to dominate — are a bit higher than the model. Is this just a statistical fluctuation, or is it evidence of a broader problem? Their paper shows some somewhat discrepant points in their E-mode polarisation measurements, as well. None of these is very statistically significant, and some may be confirmed by other measurements, but there are enough of them that caution makes sense. From only a few days thinking about the results (and not yet really sitting down and going through the papers in great depth), it’s hard to make detailed judgements. It seems like the team have been careful enough that it’s hard to imagine the results going away completely, but easy to imagine lots of ways in which they could be wrong in detail.
But this skepticism from me and others is a good thing, even for the BICEP2 team: they will want their results scrutinised by the community. And the rest of us in the community will want the opportunity to reproduce the results. First, we’ll try to dig into the BICEP2 results themselves, making sure that they’ve done everything as well as possible. But over the next months and years, we’ll want to reproduce them with other experiments.
First, of course, will be Planck. Since I’m on Planck, there’s not much I can say here, except that we expect to release our own polarisation data and cosmological results later this year. This paper (Efstathiou and Gratton 2009) may be of interest….
Next, there are a bunch of ground- and balloon-based CMB experiments gathering data and/or looking for funding right now. The aforementioned Polarbear will continue, and I’m also involved with the EBEX team which hopes to fly a new balloon to probe the CMB polarisation again in a few years. In the meantime, there’s also ACT, SPIDER, SPT, and indeed the successor to BICEP itself, called the Keck array, and many others besides. Eventually, we may even get a new CMB satellite, but don’t hold your breath…
I first heard about the coming BICEP2 results in the middle of last week, when I was up in Edinburgh and received an email from a colleague just saying “r=0.2?!!?” I quickly called to ask what he meant, and he transmitted the rumour of a coming BICEP detection, perhaps bolstered by some confirmation from their successor experiment, the Keck Array (which does in fact appear in their paper). Indeed, such a rumour had been floating around the community for a year or so, but most of us thought it would turn out to be spurious. But very quickly last week, we realised that this was for real. It became most solid when I had a call from a Guardian journalist, who managed to elicit some inane comments from me, before anything was known for sure.
By the weekend, it became clear that there would be an astronomy-related press conference at Harvard on Monday, and we were all pretty sure that it would be the BICEP2 news. The number r=0.20 was most commonly cited, and we all figured it would have an error bar around 0.06 or so — small enough to be a real detection, but large enough to leave room for error (but I also heard rumours of r=0.075).
By Monday morning, things had reached whatever passes for a fever pitch in the cosmology community: twitter and Facebook conversations, a mention on BBC Radio 4’s Today programme, all before the official title of the press conference was even announced: “First Direct Evidence for Cosmic Inflation”. Apparently, other BBC journalists had already had embargoed confirmation of some of the details from the BICEP2 team, but the embargo meant they couldn’t participate in the rumour-spreading.
I was travelling during most of this time, fielding occasional calls from journalists (there aren’t that many CMB specialists within easy reach of the London-based media), though, unfortunately for my ego, I wasn’t able to make it onto any of Monday night’s choice TV spots.
By the time of the press conference itself, the cosmology community had self-organised: there was a Facebook group organised by Fermilab’s Scott Dodelson, which pretty quickly started dissecting the papers and was able to follow along with the press conference as it happened (despite the fact that most of us couldn’t get onto the website — one of the first times that the popularity of cosmology has brought down a server).
At the time, I was on a series of trains from Loch Lomond to Glasgow, Edinburgh and finally on to London, but the Facebook group made it easy to follow along (from a tech standpoint, it’s surprising that we didn’t do this on the supposedly more capable Google Plus platform, but the sociological fact is that more of us are on, and use, Facebook). It was great to be able to watch, and participate in, the real-time discussion of the papers (which continues on Facebook as of now). Cosmologists have been teasing out possible inconsistencies (some of which I alluded to above), trying to understand the implications of the results if they’re right — and thinking about the next steps. IRL, now that I’m back at Imperial, we’ve been poring over the papers in yet more detail, trying to work out exactly how they’ve gathered and analysed their data, and seeing what parts we want to try to reproduce.
Physics moves fast nowadays: as of this writing, about 72 hours after the announcement, there are 16 papers mentioning the BICEP2 results on the physics ArXiV (it’s a live search, so the number will undoubtedly grow). Most of them attempt to constrain various early-Universe models in the light of the r=0.20 results — some of them with some amount of statistical rigour, others just pointing out various models in which that is more or less easy to get. (I’ve obviously spent too much time on this post and not enough writing papers.)
It’s also worth collecting, if only for my own future reference, some of the media coverage of the results:
- The BBC’s excellent news piece and nice explanatory supplement
- The Wall Street Journal
- The Guardian
- The Telegraph
- The Economist
- IEEE Spectrum (on the more technical side)
For more background, you can check out
- Sean Carroll’s introduction and post-press-conference debrief
- Peter Coles’ liveblog, straw poll, and skeptical summary
Today was the deadline for submitting so-called “White Papers” proposing the next generation of the European Space Agency satellite missions. Because of the long lead times for these sorts of complicated technical achievements, this call is for launches in the faraway years of 2028 or 2034. (These dates would be harder to wrap my head around if I weren’t writing this on the same weekend that I’m attending the 25th reunion of my university graduation, an event about which it’s difficult to avoid the clichéd thought that May, 1988 feels like the day before yesterday.)
At least two of the ideas are particularly close to my scientific heart.
The Polarized Radiation Imaging and Spectroscopy Mission (PRISM) is a cosmic microwave background (CMB) telescope, following on from Planck and the current generation of sub-orbital telescopes like EBEX and PolarBear: whereas Planck has 72 detectors observing the sky at nine frequencies, PRISM would have more than 7000 detectors working in a similar way to Planck over 32 frequencies, along with another set observing 300 narrow frequency bands, and another instrument dedicated to measuring the spectrum of the CMB in even more detail. Combined, these instruments would allow a wide variety of cosmological and astrophysical goals to be pursued, concentrating on more direct observations of early-Universe physics than is possible with current instruments, in particular the possible background of gravitational waves from inflation, and the small correlations induced by the physics of inflation and other physical processes in the history of the Universe.
The eLISA mission is the latest attempt to build a gravitational radiation observatory in space, observing astrophysical sources rather than the primordial background affecting the CMB, using giant lasers to measure the distance between three separate free-floating satellites a million kilometres apart from one another. As a gravitational wave passes through the triangle, it bends space and effectively changes the distance between them. The trio would thereby be sensitive to the gravitational waves produced by small, dense objects orbiting one another, objects like white dwarfs, neutron stars and, most excitingly, black holes. This would give us a probe of physics in locations we can’t see with ordinary light, and in regimes that we can’t reproduce on earth or anywhere nearby.
In the selection process, ESA is supposed to take into account the interests of the community. Hence both of these missions are soliciting support from active and interested scientists as well as the more general public: check out the sites for PRISM and eLISA. It’s a tough call. Both cases would be more convincing with a detection of gravitational radiation in their respective regimes, but the process requires putting down a marker early on. In the long term, a CMB mission like PRISM seems inevitable — there are unlikely to be any technical showstoppers — it’s just a big telescope in a slightly unusual range of frequencies. eLISA is more technically challenging: the LISA Pathfinder effort has shown just how hard it is to keep and monitor a free-floating mass in space, and the lack of a detection so far from the ground-based LIGO observatory, although completely consistent with expectations, has kept the community’s enthusiasm lower. (This will likely change with Advanced LIGO, expected to see many hundreds of sources as soon as it comes online in 2015 or thereabouts.)
Full disclosure: although I’ve signed up to support both, I’m directly involved in the PRISM white paper.
Yesterday’s release of the Planck papers and data wasn’t just aimed at the scientific community, of course. We wanted to let the rest of the world know about our results. The main press conference was at ESA HQ in Paris, and there was a smaller event here in London run by the UKSA, which I participated in as part of a panel of eight Planck scientists.
The reporters tried to keep us honest, asking us to keep simplifying our explanations so that they — and their readers — could understand them. We struggled with describing how our measurements of the typical size of spots in our map of the CMB eventually led us to a measurement of the age of the Universe (which I tried to do in my previous post). This was hard not only because the reasoning is subtle, but also because, frankly, it’s not something we care that much about: it’s a model-dependent parameter, something we don’t measure directly, and it doesn’t have much cosmological consequence. (I ended up on the phone with the BBC’s Pallab Ghosh at about 8pm trying to work out whether the age has changed by 50 or 80 million years, a number that means more to him and his viewers than to me and my colleagues.)
There are pieces by the reporters who asked excellent questions at the press conference, at The Guardian, The Economist and The Financial Times, as well as one behind the (London) Times paywall by Hannah Devlin who was probably most rigorous in her requests for us to simplify our explanations. I’ll also point to NPR’s coverage, mostly since it is one of the few outlets to explicitly mention the topology of the Universe which was one of the areas of Planck science I worked on myself.
Aside from the press conference itself, the media were fairly clamouring for the chance to talk about Planck. Most of the major outlets in the UK and around Europe covered the Planck results. Even in the US, we made it onto the front page of the New York Times. Rather than summarise all of the results, I’ll just self-aggrandizingly point to the places where I appeared: a text-based preview from the BBC, and a short quote on video taken after the press conference, as well as one on ITV. I’m most proud of my appearance with Tom Clarke on Channel 4 News — we spent about an hour planning and discussing the results, edited down to a few minutes including my head floating in front of some green-screen astrophysics animations.
Now that the day is over, you can look at the results for yourself at the BBC’s nice interactive version, or at the lovely Planck Chromoscope created by Cardiff University’s Dr Chris North, who donated a huge amount of his time and effort to helping us make yesterday a success. I should also thank our funders over at the UK Space Agency, STFC and (indirectly) ESA — Planck is big science, and these sorts of results don’t come cheap. I hope you agree that they’ve been worth it.
If you’re the kind of person who reads this blog, then you won’t have missed yesterday’s announcement of the first Planck cosmology results.
The most important is our picture of the cosmic microwave background itself:
But it takes a lot of work to go from the data coming off the Planck satellite to this picture. First, we have to make nine different maps, one at each of the frequencies in which Planck observes, from 30 GHz (with a wavelength of 1 cm) up to 857 GHz (0.35 mm) — note that the colour scales here are the same:
At low and high frequencies, these are dominated by the emission of our own galaxy, and there is at least some contamination over the whole range, so it takes hard work to separate the primordial CMB signal from the dirty (but interesting) astrophysics along the way. In fact, it’s sufficiently challenging that the team uses four different methods, each with different assumptions, to do so, and the results agree remarkably well.
In fact, we don’t use the above CMB image directly to do the main cosmological science. Instead, we build a Bayesian model of the data, combining our understanding of the foreground astrophysics and the cosmology, and marginalise over the astrophysical parameters in order to extract as much cosmological information as we can. (The formalism is described in the Planck likelihood paper, and the main results of the analysis are in the Planck cosmological parameters paper.)
The main tool for this is the power spectrum, a plot which shows us how the different hot and cold spots on our CMB map are distributed: In this plot, the left-hand side (low ℓ) corresponds to large angles on the sky and high ℓ to small angles. Planck’s results are remarkable for covering this whole range from ℓ=2 to ℓ=2500: the previous CMB satellite, WMAP, had a high-quality spectrum out to ℓ=750 or so; ground- and balloon-based experiments like SPT and ACT filled in some of the high-ℓ regime.
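The correspondence between multipole ℓ and angular scale can be sketched with the usual back-of-the-envelope rule θ ≈ 180°/ℓ (my own illustration, not Planck code; the low-ℓ end is only approximate):

```python
# Rough rule of thumb: multipole ell probes angular scales of about
# 180/ell degrees on the sky.
def ell_to_degrees(ell):
    """Approximate angular scale (in degrees) probed by multipole ell."""
    return 180.0 / ell

# The range covered by Planck, ell = 2 to 2500, plus a couple of landmarks:
for ell in (2, 200, 750, 2500):
    print(f"ell = {ell:4d}  ->  roughly {ell_to_degrees(ell):g} degrees")
```

By this rule, WMAP’s high-quality spectrum out to ℓ≈750 reached scales of about a quarter of a degree, while Planck’s ℓ=2500 corresponds to well under a tenth of a degree.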
It’s worth marvelling at this for a moment, a triumph of modern cosmological theory and observation: our theoretical models fit our data from scales of 180° down to 0.1°, each of those bumps and wiggles a further sign of how well we understand the contents, history and evolution of the Universe. Our high-quality data have refined our knowledge of the cosmological parameters that describe the universe, decreasing the error bars by a factor of several on the six parameters that describe the simplest ΛCDM universe. Moreover, and perhaps most remarkably, the data don’t seem to require any additional parameters beyond those six: for example, despite previous evidence to the contrary, the Universe doesn’t need any additional neutrinos.
The quantity best measured by Planck is related to the typical size of spots in the CMB map; it’s about a degree, with an error of less than one part in 1,000. This quantity has changed a bit (by about the width of the error bar) since the previous WMAP results. This, in turn, causes us to revise our estimates of quantities like the expansion rate of the Universe (the Hubble constant), which has gone down, in fact by enough that it’s interestingly different from its best measurements using local (non-CMB) data, from more or less direct observations of galaxies moving away from us. Both methods have disadvantages: for the CMB, it’s a very indirect measurement, requiring us to impose a model upon the directly measured spot size (known more technically as the “acoustic scale” since it comes from sound waves in the early Universe). For observations of local galaxies, it requires building up the famous cosmic distance ladder, calibrating our understanding of the distances to further and further objects, few of which we truly understand from first principles. So perhaps this discrepancy is due to messy and difficult astrophysics, or perhaps to interesting cosmological evolution.
This change in the expansion rate is also indirectly responsible for the results that have made the most headlines: it changes our best estimate of the age of the Universe (slower expansion means an older Universe) and of the relative amounts of its constituents (since the expansion rate is related to the geometry of the Universe, which, because of Einstein’s General Relativity, tells us the amount of matter).
But the cosmological parameters measured in this way are just Planck’s headlines: there is plenty more science. We’ve gone beyond the power spectrum above to put limits upon so-called non-Gaussianities which are signatures of the detailed way in which the seeds of large-scale structure in the Universe were initially laid down. We’ve observed clusters of galaxies which give us yet more insight into cosmology (and which seem to show an intriguing tension with some of the cosmological parameters). We’ve measured the deflection of light by gravitational lensing. And in work that I helped lead, we’ve used the CMB maps to put limits on some of the ways in which our simplest models of the Universe could be wrong, possibly having an interesting topology or rotation on the largest scales.
But because we’ve scrutinised our data so carefully, we have found some peculiarities which don’t quite fit the models. From the days of COBE and WMAP, there has been evidence that the largest angular scales in the map, a few degrees and larger, have some “anomalies” — some of the patterns show strange alignments, some show unexpected variation between two different hemispheres of the sky, and there are some areas of the sky that are larger and colder than is expected to occur in our theories. Individually, any of these might be a statistical fluke (and collectively they may still be) but perhaps they are giving us evidence of something exciting going on in the early Universe. Or perhaps, to use a bad analogy, the CMB map is like the Zapruder film: if you scrutinise anything carefully enough, you’ll find things that look like a conspiracy, but turn out to have an innocent explanation.
I’ve mentioned eight different Planck papers so far, but in fact we’ve released 28 (and there will be a few more to come over the coming months, and many in the future). There’s an overall introduction to the Planck Mission, and papers on the data processing, observations of relatively nearby galaxies, and plenty more cosmology. The papers have been submitted to the journal A&A, they’re available on the ArXiV, and you can find a list of them at the ESA site.
Even more important for my cosmology colleagues, we’ve released the Planck data as well, along with the code and other information necessary to understand it: you can get it from the Planck Legacy Archive. I’m sure we’ve only just begun to get exciting and fun science out of the data from Planck. And this is only the beginning of Planck’s data: just the first 15 months of observations, and just the intensity of the CMB: in the coming years we’ll be analysing (and releasing) more than one more year of data, and starting to dig into Planck’s observations of the polarized sky.
OK, back to editing. (I’ll try to update this post with any advance information as it becomes available.)
Update (on timing, not content): the main Planck press conference will be held on the morning of 21 March at 10am CET at ESA HQ in Paris. There will be a simultaneous UK event (9am GMT) held at the Royal Astronomical Society in London, where the Paris event will be streamed, followed by a local Q&A session. (There will also be a more technical afternoon session in Paris.)
Probably more important for my astrophysics colleagues: the Planck papers will be posted on the ESA website at noon on the 21st, after the press event, and will appear on the ArXiV the following day, 22 March. Be sure to set aside some time next weekend!
Until now, I have been forced to resist the clamour brewing among both members of my extensive readership (hi, dad!) to post a bit more often: my excuse is that, in the little over a month between early September and mid-October, I have travelled back and forth from Paris to London five times, spent a weekend in the USA, started teaching a new course, and ran a half marathon.
Ten one-way trips in six weeks is too many; the Eurostar makes it about as pleasant as it could possibly be: 2 1/4 hours from central London to central Paris by train (a flight from Heathrow to de Gaulle is faster, but the airports are less convenient and much more stressful). Most of my time in Paris was for Planck Satellite meetings, mostly devoted to the first major release of Planck data and papers next year — of course, by the Planck rules, I can’t talk about what happened. At least I have no more trips to Paris until early December (and only four or so hours a week of Planck telecons).
But in addition to three Planck meetings, I also helped out in my minor role as a member of the Scientific Organizing Committee of the Big Bang, Big Data, Big Computing meeting at the APC, which was an excellent gathering of cosmologists with computer scientists and statisticians, all doing our best to talk over the fences of jargon and habit that often keep the different fields from having productive conversations. One of my favourite talks was the technical but entertaining From mean Euler characteristics to the Gaussian kinematic formula by Robert Adler, whose work in statistics more than thirty years ago taught many in cosmology how to treat the functions that we use to describe the distribution of density and temperature in the Universe as random fields; he discussed more recent updates to that early work for much more general circumstances, the cosmological repercussions of which have yet to be digested. Another highlight was from Imperial’s own Professor David Hand, Opportunities and Challenges in Modelling and Anomaly Detection, discussing how to pull small and possibly weird (“anomalous”) signals from large amounts of data — he didn’t highlight many specific instances in cosmology, but rather gave examples with other sorts of big data, such as the distribution of prices of credit card purchases (with some particularly good anecdotes culled from gas/petrol station data).
Finally, in addition to those many days of meetings — and yes, the occasional good Parisian meal — there were a couple of instances of the most satisfying of my professional duties: two examinations for newly-minted PhDs from the Institut d’Astrophysiques de Paris and the Laboratoire Astroparticule et Cosmologie — félicitations aux Docteurs Errard et Ducout.
Nearly two-and-a-half years after its launch, the end of ESA’s Planck mission has begun. (In fact, the BBC scooped the rest of the Planck collaboration itself with a story last week; you can read the UK take at the excellent Cardiff-led public Planck site.)
Planck’s High-Frequency Instrument (HFI) must be cooled to 0.1 degrees above absolute zero, maintained at this temperature by a series of refrigerators — which had been making Planck the coldest known object in space, colder than the 2.7 degrees to which the cosmic microwave background itself warms even the most remote regions of intergalactic space. The final cooler in the chain relies on a tank of the Helium-3 isotope, which has finally run out, within days of its predicted lifetime — giving Planck more than twice as much time observing the Universe as its nominal 14-month mission.
The Low-Frequency Instrument (LFI) doesn’t require such cold temperatures, although in fact it does use one of the earlier stages in the chain, the UK-built 4-degree cooler, as a reference against which it compares its measurements. LFI will, therefore, continue its measurements for the next half-year or so.
But our work, of course, goes on: we will continue to process and analyse Planck’s data, refining our maps of the sky, and get down to the real work of extracting a full sky’s worth of astrophysics and cosmology from our data. The first, preliminary, release of Planck data happened just one year ago, and yet more new Planck science will be presented at a conference in Bologna in a few months. The most exciting and important work will be getting cosmology from Planck data, which we expect to first present in early 2013, and likely in further iterations beyond that.
Funding for space missions in the UK was split from the Science and Technology Facilities Council to the UK Space Agency earlier this year. Very roughly, UKSA will fund the missions themselves all the way through to the processing of data, while STFC will fund the science that comes from analysing the data.
To try to be a little more specific, the agencies have put out a press release on this so-called “dual key” approach: “Who does what? — Arrangements for sharing responsibility for the science programme between the STFC and the UK Space Agency.” The executive summary is:
This still leaves many of the details of the split unanswered, or at least fuzzy: How do we ensure that government supports the two agencies adequately and jointly? How do we ensure that STFC supports science exploitation from missions that UKSA funds, so that the UK gets the full return on its investment? How do we define the split between “data analysis” and “science exploitation”?
Here at Imperial, we work on both sides of that divide for both Planck and Herschel: we are the home to data analysis centres for both missions, and want to take advantage of the resulting science opportunities. Indeed, as we take the Planck mission ahead towards its first cosmology results at the end of next year, we are already seeing some of these tensions played out, in both the decision-making process of each agency separately as well as in the overall level of funding available in these austere times.
What are blogs for, if not self-publicity? In that vein, I’ll be appearing at the Spacetacular! night on April 12, in honour of Yuri’s Night: the 50th anniversary of Yuri Gagarin’s first-ever manned space flight.
The evening is organized by Londonist editor Matt Brown along with comedian and presenter Helen Keen, hosting a line-up of comedians and scientists. I promise not to be funny so you can tell which I am — I’ll be talking for ten minutes or so about my adventures in space (well, working on a big space-based project, the Planck Surveyor Satellite).
We scientists often, and correctly, make the point that manned space flight has almost nothing to do with science. But I certainly wouldn’t be the scientist I am if it weren’t a morning long ago in Hooks Lane Nursery School watching one of those early moon launches, thinking I wanted to have something, anything, to do with that. So let us know if you want to come celebrate [the Facebook event link is currently broken, but this one is still up.] this amazing human achievement with comedy and science (and spacey costumes) at the Camden Head Pub in London next week.
Many of my colleagues in the EBEX experiment have just lit out for the west. Specifically, the team is heading off to Palestine (pronounced “Palesteen”), Texas, to get the telescope and instrument ready for its big Antarctic long-duration balloon flight at the end of the year, when we hope to gather our first real scientific data and observe the temperature and polarization of the cosmic microwave background (CMB) radiation. Unlike the Planck Satellite, which has a few dozen detectors changed little from those that flew on MAXIMA and BOOMERanG in the 1990s, EBEX can use more modern technology, and will fly with thousands of detectors, allowing us to achieve far greater sensitivity to the smallest variations in the CMB.
Asad, one of the EBEX postdocs, who has been involved in the experiment for several years, will be writing on the EBEX in Flight blog about the experiences down in Texas and, we hope, the future path of the team and telescope down to Antarctica. Follow along as the team drives across the country (at least twice), assembles and tests the instrument, breaks and fixes things, sleeps too little, works too hard, and, we hope, builds the most sensitive CMB experiment yet deployed. (And of course, eats cheeseburgers.)
And if you want a change from cosmology, you can instead follow along with another friend, Marc, who is trying to see if he can come to grips with writing on an iPad in the supposedly post-PC world, over at typelesswriter.
One of the perks (perqs?) of academia is that occasionally I get an excuse to escape the damp grey of London winters. The Planck Satellite is an international collaboration and, although largely backed by the European Space Agency, it has a large contribution from US scientists, who built the CMB detectors for Planck’s HFI instrument, as well as being significantly involved in the analysis of Planck data. Much of this work is centred at NASA’s famous Jet Propulsion Lab in Pasadena, and I was happy to rearrange my schedule to allow a February trip to sunny Southern California (I hope my undergraduate students enjoyed the two guest lectures during my absence).
Visiting California, I was compelled to take advantage of the local culture, which mostly seemed to involve meals. I ate as much Mexican food as I could manage, from fantastic $1.25 tacos from the El Taquito Mexicano Truck to somewhat higher-end fare at Tinga in LA proper. And I finally got to taste bánh mì, French-influenced Vietnamese sandwiches (which have arrived in London but I somehow haven’t tried them here yet). And I got to take in the view from the heights of Griffith Park:
as well as down at street level:
And even better, I got to share these meals and views with old and new friends.
Of course I was mainly in LA to do science, but even at JPL we managed to escape our windowless meeting room and check out the clean-room where NASA is assembling the Mars Science Lab:
The white pod-like structure is the spacecraft itself, which will parachute into Mars’ atmosphere in a few years, and from it will descend the circular “sky crane” currently parked behind it which will itself deploy the car-sized Curiosity Rover to do the real work of Martian geology, chemistry, climatology and (who knows?) biology.
But my own work was for the semi-annual meeting of the Planck CTP working group (I’ve never been sure if it was intentional, but the name always seemed to me a sort of science pun, obliquely referring to the famous “CPT” symmetry of fundamental physics). In Planck, “CTP” refers to Cℓ from Temperature and Polarization: the calculation of the famous CMB power spectrum which contains much of the cosmological information in the maps that Planck will produce. The spectrum allows us to compress the millions of pixels in a map of the CMB sky, such as this one from the WMAP experiment (the colors give the temperature or intensity of the radiation, the lines its polarization), into just a few thousand numbers we can plot on a graph.
OK, this is not a publishable figure. Instead, it marks the tenth anniversary of the first CTP working group telecon in February 2001 (somewhat before I was involved in the group, actually). But given that we won’t be publishing Planck cosmology data for another couple of years, sugary spectra will have to do instead of the real ones in the meantime.
The work of the CTP group is exactly concerned with finding the best algorithms for translating CMB maps into these power spectra. They must take into account the complicated noise in the map, coming from our imperfect instruments which observe the sky with finite resolution — that is, a telescope which smooths the sky at a scale from about half a degree down to one-tenth of a degree — and with limited sensitivity — every measurement has a little bit of unavoidable noise added to it. Moreover, in between the CMB, produced 400,000 years after the Big Bang, and Planck’s instruments, observing today, is the entire rest of the Universe, which contains matter that both absorbs and emits (glows) in the microwaves which Planck observes. So in practice we need to deal with all of these effects simultaneously when reducing our maps down to power spectra. This is a surprisingly difficult problem: the naive, brute-force (Bayesian) solution requires a number of computer operations which scales like the cube of the number of pixels in the CMB map; at Planck’s resolution this is as many as 100 million pixels, and there are still no supercomputers capable of doing the septillion (10²⁴) operations required in a reasonable time. If we smooth the map we can still solve the full problem, but on small scales we need to come up with useful approximations which exploit what we know about the data: usually the very large number of points that contribute, and the so-called asymptotic theorems which say, roughly, that we can learn about the right answer by doing lots of simulations, which are much less computationally expensive.
At the required levels of both accuracy and precision, the results depend on all of the details of the data processing and the algorithm: How do you account for the telescope’s optics and the pixelization of the sky? How do you model the noise in the map? How do you remove those pixels contaminated by astrophysical emission or absorption? All of this is compounded by the necessary (friendly) scientific competition: it is the responsibility of the CTP group to make recommendations for how Planck will actually produce its power spectra for the community and, naturally, each of us wants our own algorithm or computer program to be used — to win. So these meetings are as much about politics as science, but we can hope that the outcome is that all the codes are raised to an appropriate level and we can make the decisions on non-scientific grounds (ease of use, flexibility, speed, etc.) that will produce the high-quality scientific results for which we designed and built Planck — and have worked on it for the last decade or more.
I’ve been meaning to give a shout-out to my colleagues on the ADAMIS team at the APC (AstroParticule et Cosmologie) Lab at the Université Paris 7 for a while: in addition to doing lots of great work on Planck, EBEX, PolarBear and other important CMB and cosmology experiments, they’ve also been running a group blog since the Autumn, Paper(s) of the Week et les autres choses (scientifique) which dissects some of the more interesting work to come out of the cosmology community. In particular, one of my favorite collaborators has written an extremely astute analysis of what, exactly, we on the Planck team released in our lengthy series of papers last month (which I have already discussed in a somewhat more boosterish fashion).