
Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite. (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012.)

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and that those of us who receive a financial portion of the award will be encouraged to contribute to it as well (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic matter of our day-to-day lives, and which make up only a few percent of the Universe.
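The rough budget (approximate values from memory of the Planck fits, not quotations from the papers):

```latex
\Omega_\Lambda \approx 0.69, \qquad \Omega_c \approx 0.26, \qquad
\Omega_b \approx 0.05, \qquad \Omega_\Lambda + \Omega_c + \Omega_b \approx 1 .
```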

Altogether, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10²⁰. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again:

[Figure: the Planck 2018 power spectra, from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.]

The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the bottom-left panel the polarization-only spectrum, and the bottom-right the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points have “one sigma” error bars, and the blue curve gives the best-fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
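To get a feel for why the low-ℓ regime needs special treatment, here is a toy numerical sketch (my own illustration, emphatically not the actual Planck likelihood), assuming the simplest possible case of a single full-sky, noise-free spectrum, for which the exact per-multipole likelihood is known in closed form:

```python
import numpy as np

# Toy model (NOT the real Planck code): for a single full-sky, noise-free
# spectrum, the exact per-multipole likelihood of a model C given a
# measured Chat is
#     -2 ln L(C) = (2l+1) [ Chat/C + ln C ] + const.
# Below we evaluate -2 Δln L for a model one cosmic-variance "sigma" above
# a measurement Chat = 1. The Gaussian approximation gives exactly 1 by
# construction; the exact answer only approaches that as 2l+1 grows.
for ell in (2, 30, 1000):
    sigma = np.sqrt(2.0 / (2 * ell + 1))   # cosmic-variance fractional width
    C = 1.0 + sigma                        # model one "sigma" above Chat = 1
    exact = (2 * ell + 1) * (1.0 / C + np.log(C) - 1.0)
    gauss = (C - 1.0) ** 2 / sigma ** 2    # Gaussian approximation: always 1
    print(f"l = {ell:4d}: exact = {exact:.3f}, Gaussian = {gauss:.3f}")
# l =    2: exact = 0.513, Gaussian = 1.000
# l =   30: exact = 0.800, Gaussian = 1.000
# l = 1000: exact = 0.960, Gaussian = 1.000
```

At ℓ=2 the Gaussian approximation misestimates the log-likelihood by almost a factor of two; by ℓ of a few hundred the difference is at the few-percent level, which is the basic reason the analysis switches methods around ℓ~30.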

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster an object is moving away from us, in km/s, for every megaparsec (Mpc) of distance), whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
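How big is “pretty significant”? A back-of-the-envelope estimate (mine, treating the two error bars as independent and Gaussian, which glosses over the details of both analyses):

```latex
\Delta H_0 = 73.52 - 67.27 = 6.25~\mathrm{km/s/Mpc}, \qquad
\sigma = \sqrt{1.62^2 + 0.60^2} \approx 1.73~\mathrm{km/s/Mpc},
```

so the two numbers differ by ΔH₀/σ ≈ 3.6 standard deviations.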

The term of art for these discrepancies is “tension”, and indeed there are a few other tensions between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data, and within Planck we find mild discrepancies when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or, to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time, and should really learn to expect this sort of thing. Some of these tensions may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to get better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication run between different labs or Universities, very often in different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

WMAP Breaks Through

It was announced this morning that the WMAP team has won the $3 million Breakthrough Prize. Unlike the Nobel Prize, which infamously is only awarded to three people each year, the Breakthrough Prize was awarded to the whole 27-member WMAP team, led by Chuck Bennett, Gary Hinshaw, Norm Jarosik, Lyman Page, and David Spergel, but including everyone through postdocs and grad students who worked on the project. This is great, and I am happy to send my hearty congratulations to all of them (many of whom I know well and am lucky to count as friends).

I actually knew about the prize last week as I was interviewed by Nature for an article about it. Luckily I didn’t have to keep the secret for long. Although I admit to a little envy, it’s hard to argue that the prize wasn’t deserved. WMAP was ideally placed to solidify the current standard model of cosmology, a Universe dominated by dark matter and dark energy, with strong indications that there was a period of cosmological inflation at very early times, which had several important observational consequences. First, it made the geometry of the Universe — as described by Einstein’s theory of general relativity, which links the contents of the Universe with its shape — flat. Second, it generated the tiny initial seeds which eventually grew into the galaxies that we observe in the Universe today (and the stars and planets within them, of course).

By the time WMAP released its first results in 2003, a series of earlier experiments (including MAXIMA and BOOMERanG, which I had the privilege of being part of) had gone much of the way toward this standard model. Indeed, about ten years ago one of my Imperial colleagues, Carlo Contaldi, and I wanted to make that comparison explicit, so we used what were then considered fancy Bayesian sampling techniques to combine the data from balloons and ground-based telescopes (which are collectively known as “sub-orbital” experiments) and compare the results to WMAP. We got a plot like the following (which we never published), showing the main quantity that these CMB experiments measure, called the power spectrum (which I’ve discussed in a little more detail here). The horizontal axis corresponds to the size of structures in the map (actually, its inverse, so smaller is to the right) and the vertical axis to how large the signal is on those scales.

[Figure: “Grand unified spectrum” — the combined sub-orbital CMB power spectrum compared with WMAP.]

As you can see, the suborbital experiments, en masse, had data at least as good as WMAP on most scales except the very largest (leftmost; this is because you really do need a satellite to see the entire sky) and indeed were able to probe smaller scales than WMAP (to the right). Since then, I’ve had the further privilege of being part of the Planck Satellite team, whose work has superseded all of these, giving much more precise measurements over the same range of scales:

[Figure: the Planck CMB power spectrum.]

Am I jealous? Ok, a little bit.

But it’s also true, perhaps for entirely sociological reasons, that the community is more apt to trust results from a single, monolithic, very expensive satellite than an ensemble of results from a heterogeneous set of balloons and telescopes, run on (comparative!) shoestrings. On the other hand, the overall agreement amongst those experiments, and between them and WMAP, is remarkable.

And that agreement remains remarkable, even if much of the effort of the cosmology community is devoted to understanding the small but significant differences that remain, especially between one monolithic and expensive satellite (WMAP) and another (Planck). Indeed, those “real and serious” (to quote myself) differences would be hard to see even if I plotted them on the same graph. But since both are ostensibly measuring exactly the same thing (the CMB sky), any differences — even those much smaller than the error bars — must be accounted for, and almost certainly boil down to differences in the analyses or misunderstandings of each team’s own data. Somewhat more interesting are differences between CMB results and measurements of cosmology from other, very different, methods, but that’s a story for another day.

The first direct detection of gravitational waves was announced in February of 2016 by the LIGO team, after decades of planning, building and refining their beautiful experiment. Since that time, the US-based LIGO has been joined by the European Virgo gravitational wave telescope (and more are planned around the globe).

The first four events that the teams announced were from the spiralling in and eventual mergers of pairs of black holes, with masses ranging from about seven to about forty times the mass of the sun. These masses are perhaps a bit higher than we expect to be typical, which might raise intriguing questions about how such black holes were formed and evolved, although even comparing the results to the predictions is a hard problem, depending on the details of the statistical properties of the detectors and the astrophysical models for the evolution of black holes and the stars from which (we think) they formed.

Last week, the teams announced the detection of a very different kind of event, the collision of two neutron stars, each about 1.4 times the mass of the sun. Neutron stars are one possible end state of the evolution of a star, when its atoms are no longer able to withstand the pressure of the gravity trying to force them together. This was first understood by S Chandrasekhar in the early years of the 20th Century, who realised that there was a limit to the mass of a star held up simply by the quantum-mechanical repulsion of the electrons at the outskirts of the atoms making up the star. When you surpass this mass, known, appropriately enough, as the Chandrasekhar mass, the star will collapse in upon itself, combining the electrons and protons into neutrons and likely releasing a vast amount of energy in the form of a supernova explosion. After the explosion, the remnant is likely to be a dense ball of neutrons, whose properties are actually determined fairly precisely by similar physics to that of the Chandrasekhar limit (discussed for this case by Oppenheimer, Volkoff and Tolman), giving us the magic 1.4 solar mass number.

(Last week also coincidentally would have seen Chandrasekhar’s 107th birthday, and Google chose to illustrate their home page with an animation in his honour for the occasion. I was a graduate student at the University of Chicago, where Chandra, as he was known, spent most of his career. Most of us students were far too intimidated to interact with him, although it was always seen as an auspicious occasion when you spotted him around the halls of the Astronomy and Astrophysics Center.)

This process can therefore make a single 1.4 solar-mass neutron star, and we can imagine that in some rare cases we can end up with two neutron stars orbiting one another. Indeed, the fact that LIGO saw one, but only one, such event during its year-and-a-half run allows the teams to constrain how often that happens, albeit with very large error bars, between 320 and 4740 events per cubic gigaparsec per year; a cubic gigaparsec is about 3 billion light-years on each side, so these are rare events indeed. These results and many other scientific inferences from this single amazing observation are reported in the teams’ overview paper.

A series of other papers discuss those results in more detail, covering everything from the physics of neutron stars to limits on departures from Einstein’s theory of gravity (for more on some of these other topics, see this blog, or this story from the NY Times). As a cosmologist, the most exciting of the results was the use of the event as a “standard siren”, an object whose gravitational wave properties are well-enough understood that we can deduce the distance to the object from the LIGO results alone. Although the idea came from Bernard Schutz in 1986, the term “standard siren” was coined somewhat later (by Sean Carroll) in analogy to the (heretofore?) more common cosmological standard candles and standard rulers: objects whose intrinsic brightnesses and sizes are known, and so whose distances can be measured by observations of their apparent brightness or size, just as you can roughly deduce how far away a light bulb is by how bright it appears, or how far away a familiar object or person is by how big it looks.

Gravitational wave events are standard sirens because our understanding of relativity is good enough that an observation of the shape of the gravitational wave pattern as a function of time can tell us the properties of its source. Knowing that, we also then know the amplitude of that pattern when it was released. Over the time since then, as the gravitational waves have travelled across the Universe toward us, the amplitude has gone down (further objects “sound” quieter, just as they look dimmer); the expansion of the Universe also causes the frequency of the waves to decrease — this is the cosmological redshift that we observe in the spectra of distant objects’ light.
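Schematically (this is my sketch of the textbook chirp relation, with numerical factors suppressed), the observed strain amplitude h depends on the “chirp mass” ℳ of the binary, the wave frequency f, and the luminosity distance d_L:

```latex
h \sim \frac{(G\mathcal{M})^{5/3}\,(\pi f)^{2/3}}{c^4\, d_L}
```

Because the frequency and its rate of change during the chirp pin down ℳ, the measured amplitude then yields d_L directly; that is exactly what makes the siren “standard”.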

Unlike LIGO’s previous detections of binary-black-hole mergers, this new observation of a binary-neutron-star merger was also seen in photons: first as a gamma-ray burst, and then as a “nova”: a new dot of light in the sky. Indeed, the observation of the afterglow of the merger by teams of literally thousands of astronomers in gamma and x-rays, optical and infrared light, and in the radio, is one of the more amazing pieces of academic teamwork I have seen.

And these observations allowed the teams to identify the host galaxy of the original neutron stars, and to measure the redshift of its light (the lengthening of the light’s wavelength due to the movement of the galaxy away from us). It is most likely a previously unexceptional galaxy called NGC 4993, with a redshift z=0.009, putting it about 40 megaparsecs away, relatively close on cosmological scales.

But this means that we can measure all of the factors in one of the most celebrated equations in cosmology, Hubble’s law: cz=H₀d, where c is the speed of light, z is the redshift just mentioned, and d is the distance measured from the gravitational wave burst itself. This lets us solve for H₀, the famous Hubble Constant, giving the current rate of expansion of the Universe, usually measured in kilometres per second per megaparsec. The old-fashioned way to measure this quantity is via the so-called cosmic distance ladder, bootstrapping up from nearby objects of known distance to more distant ones whose properties can only be calibrated by comparison with those more nearby. But errors accumulate in this process and we can be susceptible to the weakest rung on the ladder (see recent work by some of my colleagues trying to formalise this process). Alternately, we can use data from cosmic microwave background (CMB) experiments like the Planck Satellite (see here for lots of discussion on this blog); the typical size of the CMB pattern on the sky is something very like a standard ruler. Unfortunately, it, too, needs to be calibrated, implicitly by other aspects of the CMB pattern itself, and so ends up being a somewhat indirect measurement. Currently, the best cosmic-distance-ladder measurement gives something like 73.24 ± 1.74 km/s/Mpc whereas Planck gives 67.81 ± 0.92 km/s/Mpc; these numbers disagree by “a few sigma”, enough that it is hard to explain as simply a statistical fluctuation.
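As a sanity check on those numbers (using the round redshift and distance quoted above for NGC 4993, and ignoring the peculiar-velocity correction that the real analysis must include):

```latex
H_0 = \frac{cz}{d} \approx \frac{299{,}792~\mathrm{km/s} \times 0.009}{40~\mathrm{Mpc}}
\approx 67~\mathrm{km/s/Mpc},
```

in the right ballpark, though the published standard-siren number below differs because the distance comes from the gravitational waveform itself rather than this round-number estimate.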

Unfortunately, the new LIGO results do not solve the problem. Because we cannot observe the inclination of the neutron-star binary (i.e., the orientation of its orbit), this blows up the error on the distance to the object, due to the Bayesian marginalisation over this unknown parameter (just as the Planck measurement requires marginalisation over all of the other cosmological parameters to fully calibrate the results). Because the host galaxy is relatively nearby, the teams must also account for the fact that the redshift includes the effect not only of the cosmological expansion but also the movement of galaxies with respect to one another due to the pull of gravity on relatively large scales; this so-called peculiar velocity has to be modelled, which adds further to the errors.

This procedure gives a final measurement of 70.0 (+12.0, −8.0) km/s/Mpc, with the full shape of the probability curve shown in the Figure, taken directly from the paper. Both the Planck and distance-ladder results are consistent with these rather large error bars. But this is calculated from a single object; as more of these events are seen, these error bars will go down, typically by something like the square root of the number of events, so it might not be too long before this is the best way to measure the Hubble Constant.

[Figure: the probability curve for H₀ from the gravitational-wave standard siren, from the LIGO/Virgo paper.]
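To put rough numbers on that square-root scaling (my arithmetic, assuming each event contributes comparable, independent information):

```latex
\sigma_N \approx \frac{\sigma_1}{\sqrt{N}}, \qquad
N = 100: \quad {\sim}10~\mathrm{km/s/Mpc} \;\longrightarrow\; {\sim}1~\mathrm{km/s/Mpc},
```

which would be competitive with, and completely independent of, both the distance ladder and the CMB.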

[Apologies: too long, too technical, and written late at night while trying to get my wonderful not-quite-three-week-old daughter to sleep through the night.]

Wussy (Best Band in America?)

It’s been a year since the last entry here. So I could blog about the end of Planck, the first observation of gravitational waves, fatherhood, or the horror (comedy?) of the US Presidential election. Instead, it’s going to be rock ’n’ roll, though I don’t know if that’s because it’s too important, or not important enough.

It started last year when I came across Christgau’s A+ review of Wussy’s Attica and the mentions of Sonic Youth, Nirvana and Television seemed compelling enough to make it worth a try (paid for before listening even in the streaming age). He was right. I was a few years late (they’ve been around since 2005), but the songs and the sound hit me immediately. Attica was the best new record I’d heard in a long time, grabbing me from the first moment, “when the kick of the drum lined up with the beat of [my] heart”, in the words of their own description of the feeling of first listening to The Who’s “Baba O’Riley”. Three guitars, bass, and a drum, over beautiful screams from co-songwriters Lisa Walker and Chuck Cleaver.


And they just released a new record, Forever Sounds, reviewed in Spin Magazine just before its release:

To certain fans of Lucinda Williams, Crazy Horse, Mekons and R.E.M., Wussy became the best band in America almost instantaneously…

Indeed, that list nailed my musical obsessions with an almost google-like creepiness. Guitars, soul, maybe even some politics. Wussy makes me feel almost like the Replacements did in 1985.


So I was ecstatic when I found out that Wussy was touring the UK, and their London date was at the great but tiny Windmill in Brixton, one of the two or three venues within walking distance of my flat (where I had once seen one of the other obsessions from that list, The Mekons). I only learned about the gig a couple of days before, but tickets were not hard to get: the place only holds about 150 people, but there were far fewer on hand that night — perhaps because Wussy also played the night before as part of the Walpurgis Nacht festival. But I wanted to see a full set, and this night they were scheduled to play the entire new Forever Sounds record. I admit I was slightly apprehensive — it’s only a few weeks old and I’d only listened a few times.

But from the first note (and after a good set from the third opener, Slowgun) I realised that the new record had already wormed its way into my mind — a bit more atmospheric, less song-oriented, than Attica, but now, obviously, as good or nearly so. After the 40 or so minutes of songs from the album, they played a few more from the back catalog, and that was it (this being London, even after the age of “closing time”, most clubs in residential neighbourhoods have to stop the music pretty early). Though I admit I was hoping for, say, a cover of “I Could Never Take the Place of Your Man”, it was still a great, sloppy, loud show, with enough of us in the audience to shout and cheer (but probably not enough to make very much cash for the band, so I was happy to buy my first band t-shirt since, yes, a Mekons shirt from one of their tours about 20 years ago…). I did get a chance to thank a couple of the band members for indeed being the “best band in America” (albeit in London). I also asked whether they could come back for an acoustic show some time soon, so I wouldn’t have to tear myself away from my family and instead could bring my (currently) seven-month-old baby to see them some day.

They did say UK tours might be a more regular occurrence, and you can follow their progress on the Wussy Road Blog. You should just buy their records, support great music.

Spring & Summer Science

As the academic year winds to a close, scientists’ thoughts turn towards all of the warm-weather travel ahead (in order to avoid thinking about exam marking). Mostly, that means attending scientific conferences, like the upcoming IAU Symposium, Statistical Challenges in 21st Century Cosmology in Lisbon next month, and (for me and my collaborators) the usual series of meetings to prepare for the 2014 release of Planck data. But there are also opportunities for us to interact with people outside of our technical fields: public lectures and festivals.

Next month, parallel to the famous Hay Festival of Literature & the Arts, the town of Hay-on-Wye also hosts How The Light Gets In, concentrating on the also-important disciplines of philosophy and music, with a strong strand of science thrown in. This year, along with comic book writer Warren Ellis, cringe-inducing politicians like Michael Howard and George Galloway, and ubiquitous semi-intellectuals like Joan Bakewell, there will be quite a few scientists, with a skew towards the crowd-friendly and controversial. I’m not sure that I want to hear Rupert Sheldrake talk about the efficacy of science and the scientific method, although it might be interesting to hear Julian Barbour, Huw Price, and Lee Smolin talk about the arrow of time. Some of the descriptions are inscrutable enough to pique my interest: Nancy Cartwright and George Ellis will discuss “Ultimate Proof” — I can’t quite figure out if that means physics or epistemology. Perhaps similarly, chemist Peter Atkins will ask “Can science explain all of existence?” (and apparently answer in the affirmative). Closer to my own wheelhouse, Roger Penrose, Laura Mersini-Houghton, and John Ellis will discuss whether it is “just possible the Big Bang will turn out to be a mistake”. Penrose was and is one of the smartest people to work out the consequences of Einstein’s general theory of relativity, though in the last few years his cosmological musings have proven to be, well, just plain wrong — but, as I said, controversial and crowd-pleasing… (Disclosure: someone from the festival called me up and asked me to write about it here.)

Alas, I’ll likely be in Lisbon, instead of Hay. But if you want to hear me speak, you can make your way up North to Grantham, where Isaac Newton was educated, for this year’s Gravity Fields festival in late September. The line-up isn’t set yet, but I’ll be there, as will my fellow astronomers Chris Lintott and Catherine Heymans and particle physicist Val Gibson, alongside musicians, dancers, and lots of opportunities to explore the wilds of Lincolnshire. Or if you want to see me before then (and prefer to stay in London), you can come to Imperial for my much-delayed Inaugural Professorial Lecture on May 21, details TBC…

Gravitational Waves?

[Uh oh, this is sort of disastrously long, practically unedited, and a mixture of tutorial- and expert-level text. Good luck. Send corrections.]

It’s been almost exactly a year since the release of the first Planck cosmology results (which I discussed in some depth at the time). On this auspicious anniversary, we in the cosmology community found ourselves with yet more tantalising results to ponder, this time from a ground-based telescope called BICEP2. While Planck’s results were measurements of the temperature of the cosmic microwave background (CMB), this year’s concerned its polarisation.

Background

Polarisation is essentially a headless arrow that can come attached to the photons coming from any direction on the sky — if you’ve worn polarised sunglasses, and noticed how what you see changes as you rotate them around, you’ve seen polarisation. The same physics responsible for the temperature also generates polarisation. But more importantly for these new results, polarisation is a sensitive probe of some of the processes that are normally mixed in, and so hard to distinguish, in the temperature.

Technical aside (you can ignore the details of this paragraph). Actually, it’s a bit more complicated than that: we can think of those headless arrows on the sky as the sum of two separate kinds of patterns. We call the first of these the “E-mode”, and it represents patterns consisting of either radial spikes or circles around a point. The other patterns are called the “B-mode” and look like patterns that swirl around, either to the left or the right. The important difference between them is that the E modes don’t change if you reflect them in a mirror, while the B modes do — we say that they have a handedness, or parity, in somewhat more mathematical terms. I’ve discussed the CMB a lot in the past but can’t do the theory justice here; my colleague Wayne Hu has an excellent, if somewhat dated, set of web pages explaining the physics (probably at a physics-major level).

[Figure: example E-mode (radial/circular) and B-mode (swirling) polarization patterns.]
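For the numerically inclined, here is a minimal sketch of that decomposition (my toy flat-sky version, assuming a small periodic patch of sky; real CMB analyses use spin-2 spherical harmonics and must deal carefully with noise and map edges). In Fourier space, rotating the Stokes Q/U modes by twice the angle of the wavevector separates the mirror-symmetric E pattern from the handed B pattern:

```python
import numpy as np

def qu_to_eb(Q, U, pixsize=1.0):
    """Toy flat-sky E/B decomposition of Stokes Q/U maps
    (periodic patch, no noise, no masking)."""
    ny, nx = Q.shape
    lx = 2 * np.pi * np.fft.fftfreq(nx, d=pixsize)
    ly = 2 * np.pi * np.fft.fftfreq(ny, d=pixsize)
    LX, LY = np.meshgrid(lx, ly)
    phi = np.arctan2(LY, LX)              # angle of each Fourier wavevector
    Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
    Ek = Qk * np.cos(2 * phi) + Uk * np.sin(2 * phi)
    Bk = -Qk * np.sin(2 * phi) + Uk * np.cos(2 * phi)
    return np.fft.ifft2(Ek).real, np.fft.ifft2(Bk).real

# Round trip: build maps containing a pure E pattern and no B at all,
# then check that the decomposition finds no spurious B.
n = 64
rng = np.random.default_rng(0)
e_true = rng.standard_normal((n, n))
k = 2 * np.pi * np.fft.fftfreq(n)
KX, KY = np.meshgrid(k, k)
phi = np.arctan2(KY, KX)
Ek = np.fft.fft2(e_true)
Q = np.fft.ifft2(Ek * np.cos(2 * phi)).real
U = np.fft.ifft2(Ek * np.sin(2 * phi)).real
E, B = qu_to_eb(Q, U)
print(np.max(np.abs(E - e_true)), np.max(np.abs(B)))  # both ~1e-13 or smaller
```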

The excitement comes because these B-mode patterns can only arise in a few ways. The most exciting is that they can come from gravitational waves (GWs) in the early Universe. Gravitational waves (sometimes incorrectly called “gravity waves”, a term which historically refers to unrelated phenomena!) are propagating ripples in space-time, predicted in Einstein’s general relativistic theory of gravitation. Because the CMB is generated about 400,000 years after the big bang, it’s only sensitive to gravitational radiation from the early Universe, not astrophysical sources like spiralling neutron stars or black holes — for which we have other, circumstantial, evidence of gravitational waves, and which are the sources for which experiments like LIGO and eLISA will be searching. These early Universe gravitational waves move matter around in a specific way, which in turn induces those specific B-mode polarization patterns.

In the early Universe, there aren’t a lot of ways to generate gravitational waves. The most important one is inflation, an early period of expansion which blows up a subatomically-sized region by something like a billion-billion-billion times in each direction — inflation seems to be the most well thought-out idea for getting a Universe that looks like the one in which we live, flat (in the sense of Einstein’s relativity and the curvature of space-time), more or less uniform, but with small perturbations to the density that have grown to become the galaxies and clusters of galaxies in the Universe today. Those fluctuations arise because the rapid expansion takes minuscule quantum fluctuations and blows them up to finite size. This is essentially the same physics as the famous Hawking radiation from black holes. The fluctuations that eventually create the galaxies are accompanied by a separate set of fluctuations in the gravitational field itself: these are the ones that become gravitational radiation observable in the CMB. We characterise the background of gravitational radiation through the number r, which stands for the ratio of these two kinds of fluctuations — gravitational radiation divided by the density fluctuations.
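In the usual notation (with the caveat that the pivot scale k_* at which the ratio is quoted is a convention that differs between papers, worth remembering when comparing limits):

```latex
r \equiv \frac{\mathcal{P}_t(k_*)}{\mathcal{P}_s(k_*)},
```

the power in tensor (gravitational-wave) fluctuations divided by the power in scalar (density) fluctuations at the scale k_*.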

Important caveat: there are other ways of producing gravitational radiation in the early Universe, although they don’t necessarily make exactly the same predictions; some of these issues have been discussed by my colleagues in various technical papers (Brandenberger 2011; Hindmarsh et al 2008; Lizarraga et al 2014 — the latter paper from just today!).

However, there are other ways to generate B modes. First, lots of astrophysical objects emit polarised light, and they generally don’t preferentially create E or B patterns. In particular, clouds of gas and dust in our galaxy will generally give us polarised light, and as we’re sitting inside our galaxy, it’s hard to avoid these. Luckily, we’re towards the outskirts of the Milky Way, so there are some clean areas of sky, but it’s hard to be sure that we’re not seeing some such light — and there are very few previous experiments to compare with.

We also know that large masses along the line of sight — clusters of galaxies and even bigger — distort the path of the light and can move those polarisation arrows around. This, in turn, can convert what started out as E into B and vice versa. But we know a lot about that intervening matter, and about the E-mode pattern that we started with, so we have a pretty good handle on this. There are some angular scales over which this lensing effect is larger than the gravitational wave signal, and some over which the gravitational wave signal is dominant.

So, if we can observe B-modes, and we are convinced that they are primordial, and that they are not due to lensing or astrophysical sources, and they have the properties expected from inflation, then (and only then!) we have direct evidence for inflation!

Data

Here’s a plot, courtesy of the BICEP2 team, with the current state of the data targeting these B modes:

[Figure: “Almost all BB limits” — a compilation of B-mode power spectrum detections and upper limits.]

The figure shows the so-called power spectrum of the B-mode data — the horizontal “multipole” axis corresponds to angular sizes (θ) on the sky: very roughly, multipole ℓ ~ 180°/θ, so ℓ ≈ 180 corresponds to structures about a degree across. The vertical axis gives the amount of “power” at those scales: it is larger if there are more structures of that particular size. The downward-pointing arrows are all upper limits; the error bars labeled BICEP2 and Polarbear are actual detections. The solid red curve is the expected signal from the lensing effect discussed above; the long-dashed red curve is the effect of gravitational radiation (with a particular amplitude), and the short-dashed red curve is the total B-mode signal from the two effects.

The Polarbear results were announced on 11 March (disclosure: I am a member of the Polarbear team). These give a detection of the gravitational lensing signal. It was expected, and has been observed in other ways both in temperature and polarisation, but this was the first time it’s been seen directly in this sort of B-mode power spectrum, a crucial advance in the field, letting us really see lensing unblurred by the presence of other effects. We looked at very “clean” areas of the sky, in an effort to minimise the possible contamination from those astrophysical foregrounds.

The BICEP2 results were announced with a big press conference on 17 March. There are two papers so far, one giving the scientific results, another discussing the experimental techniques used — more papers discussing the data processing and other aspects of the analysis are forthcoming. But there is no doubt from the results that they have presented so far that this is an amazing, careful, and beautiful experiment.

Taken at face value, the BICEP2 results give a pretty strong detection of gravitational radiation from the early Universe, with the ratio parameter r=0.20, with error bars +0.07 and -0.05 (they are different in the two different directions, so you can’t write it with the usual “±”).

This is why there has been such an amazing amount of interest in both the press and the scientific community about these results — if true, they are a first semi-direct detection of gravitational radiation, strong evidence that inflation happened in the early Universe, and therefore a first look at waves which were created in the first tiny fraction of a second after the big bang, and have been propagating unimpeded in the Universe ever since. If we can measure more of the properties of these waves, we can learn more about the way inflation happened, which may in turn give us a handle on the particle physics of the early Universe and ultimately on a so-called “theory of everything” joining up quantum mechanics and gravity.

Taken at face value, the BICEP2 results imply that the very simplest theories of inflation may be right: the so-called “single-field slow-roll” theories that postulate a very simple addition to the particle physics of the Universe. In the other direction, scientists working on string theory have begun to make predictions about the character of inflation in their models, and many of these models are strongly constrained — perhaps even ruled out — by these data.

Skepticism

This is great. But scientists are skeptical by nature, and many of us have spent the last few days happily trying to poke holes in these results. My colleagues Peter Coles and Ted Bunn have blogged their own worries over the last couple of days, and Antony Lewis has already done some heroic work looking at the data.

The first worry is raised by their headline result: r=0.20. On its face, this conflicts with last year’s Planck result, which says that r<0.11 (of course, both of these numbers really represent probability distributions, so there is no absolute contradiction between them, but they should be seen as a very unlikely combination). How can we ameliorate the “tension” (a word that has come into vogue in cosmology lately: a wimpy way — that I’ve used, too — of talking about apparent contradictions!) between these numbers?

[Figure: the low-ℓ Planck temperature power spectrum, from last year’s Planck release.]

First, how does Planck measure r to begin with? Above, I wrote about how B modes show only gravitational radiation (and lensing, and astrophysical foregrounds). But the same gravitational radiation also contributes to the CMB temperature, albeit at a comparatively low level, and at large angular scales — the very left-most points of the temperature equivalent of a plot like the above (I reproduce Planck’s version in the figure). In fact, those left-most data points are a bit low compared to the most favoured theory (the smooth curve), which pushes the Planck limit down a bit.

But Planck and BICEP2 measure r at somewhat different angular scales, and so we can “ameliorate the tension” by making the theory a bit more complicated: the gravitational radiation isn’t described by just one number, but by a curve. If both datasets are to be believed, the curve slopes up from the Planck regime toward the BICEP2 regime. In fact, such a new parameter is already present in the theory, and goes by the name “tensor tilt”. The problem is that the required amount of tilt is somewhat larger than the simplest ideas — such as the single-field slow-roll theories — prefer.

If we want to keep the theories simple, we need to make the data more complicated: bluntly, we need to find mistakes in either Planck or BICEP2. The large-scale CMB temperature sky has been scrutinised for the last 20 years or so, from COBE through WMAP and now Planck. Throughout this time, the community has been building up a catalog of “anomalies” (another term of art we use to describe things we’re uncomfortable with), many of which do seem to affect those large scales. The problem is that no one can quite figure out if these things are statistically significant: we look at so many possible ways that the sky could be weird, but we only publish the ones that look significant. As my Imperial colleague Professor David Hand would point out, “Coincidences, Miracles, and Rare Events Happen Every Day”. Nonetheless, there seems to be some evidence that something interesting/unusual/anomalous is happening at large scales, and perhaps if we understood this correctly, the Planck limits on r would go up.

But perhaps not: those results have been solid for a long while without an alternative explanation. So maybe the problem is with BICEP2? There are certainly lots of ways they could have made mistakes. Perhaps most importantly, it is very difficult for them to distinguish between primordial perturbations and astrophysical foregrounds, as their main results use only data from a single frequency (like a single colour in the spectrum, but down closer to radio wavelengths). They do compare with some older data at a different frequency, but the comparison does not strongly rule out contamination. They also rely on models for possible contamination, which give a very small contribution, but these models are very poorly constrained by current data.

Another way they could go wrong is that they may misattribute some of their temperature measurement, or their E mode polarisation, to their B mode detection. Because the temperature and E mode are so much larger than the B they are seeing, only a very small amount of such contamination could change their results by a large amount. They do their best to control this “leakage”, and argue that its residual effect is tiny, but it’s very hard to get absolutely right.

And there is some internal evidence within the BICEP2 results that things are not perfect. The most obvious example comes from the figure above: the points around ℓ=200 — where the lensing contribution begins to dominate — are a bit higher than the model. Is this just a statistical fluctuation, or is it evidence of a broader problem? Their paper shows some somewhat discrepant points in their E polarisation measurements, as well. None of these are very statistically significant, and some may be confirmed by other measurements, but there are enough of them that caution makes sense. From only a few days thinking about the results (and not yet really sitting down and going through the papers in great depth), it’s hard to make detailed judgements. It seems like the team have been careful enough that it’s hard to imagine the results going away completely, but easy to imagine lots of ways in which they could be wrong in detail.

But this skepticism from me and others is a good thing, even for the BICEP2 team: they will want their results scrutinised by the community. And the rest of us in the community will want the opportunity to reproduce the results. First, we’ll try to dig into the BICEP2 results themselves, making sure that they’ve done everything as well as possible. But over the next months and years, we’ll want to reproduce them with other experiments.

First, of course, will be Planck. Since I’m on Planck, there’s not much I can say here, except that we expect to release our own polarisation data and cosmological results later this year. This paper (Efstathiou and Gratton 2009) may be of interest….

Next, there are a bunch of ground- and balloon-based CMB experiments gathering data and/or looking for funding right now. The aforementioned Polarbear will continue, and I’m also involved with the EBEX team which hopes to fly a new balloon to probe the CMB polarisation again in a few years. In the meantime, there’s also ACT, SPIDER, SPT, and indeed the successor to BICEP itself, called the Keck array, and many others besides. Eventually, we may even get a new CMB satellite, but don’t hold your breath…

Rumour-mongering

I first heard about the coming BICEP2 results in the middle of last week, when I was up in Edinburgh and received an email from a colleague just saying “r=0.2?!!?” I quickly called to ask what he meant, and he transmitted the rumour of a coming BICEP detection, perhaps bolstered by some confirmation from their successor experiment, the Keck Array (which does in fact appear in their paper). Indeed, such a rumour had been floating around the community for a year or so, but most of us thought it would turn out to be spurious. But very quickly last week, we realised that this was for real. It became most solid when I had a call from a Guardian journalist, who managed to elicit some inane comments from me, before anything was known for sure.

By the weekend, it became clear that there would be an astronomy-related press conference at Harvard on Monday, and we were all pretty sure that it would be the BICEP2 news. The number r=0.20 was most commonly cited, and we all figured it would have an error bar around 0.06 or so — small enough to be a real detection, but large enough to leave room for error (but I also heard rumours of r=0.075).

By Monday morning, things had reached whatever passes for a fever pitch in the cosmology community: twitter and Facebook conversations, a mention on BBC Radio 4’s Today programme, all before the official title of the press conference was even announced: “First Direct Evidence for Cosmic Inflation”. Apparently, other BBC journalists had already had embargoed confirmation of some of the details from the BICEP2 team, but the embargo meant they couldn’t participate in the rumour-spreading.

I was traveling during most of this time, fielding occasional calls from journalists (there aren’t that many CMB specialists within easy reach of the London-based media), though, unfortunately for my ego, I wasn’t able to make it onto any of Monday night’s choice TV spots.

By the time of the press conference itself, the cosmology community had self-organised: there was a Facebook group organised by Fermilab’s Scott Dodelson, which pretty quickly started dissecting the papers and was able to follow along with the press conference as it happened (despite the fact that most of us couldn’t get onto the website — one of the first times that the popularity of cosmology has brought down a server).

At the time, I was on a series of trains from Loch Lomond to Glasgow, Edinburgh and finally on to London, but the Facebook group made it possible to keep up along the way (from a tech standpoint, it’s surprising that we didn’t do this on the supposedly more capable Google Plus platform, but the sociological fact is that more of us are on, and use, Facebook). It was great to be able to watch, and participate in, the real-time discussion of the papers (which continues on Facebook as of now). Cosmologists have been teasing out possible inconsistencies (some of which I alluded to above), trying to understand the implications of the results if they’re right — and thinking about the next steps. Now that I’m back at Imperial IRL, we’ve been poring over the papers in yet more detail, trying to work out exactly how they’ve gathered and analysed their data, and seeing what parts we want to try to reproduce.

Aftermath

Physics moves fast nowadays: as of this writing, about 72 hours after the announcement, there are 16 papers mentioning the BICEP2 results on the physics arXiv (it’s a live search, so the number will undoubtedly grow). Most of them attempt to constrain various early-Universe models in the light of the r=0.20 results — some of them with some amount of statistical rigour, others just pointing out various models in which that is more or less easy to get. (I’ve obviously spent too much time on this post and not enough writing papers.)


Observing, days 1-2

I am sitting in the control room of the James Clerk Maxwell Telescope (JCMT), 14,000 feet up Mauna Kea, on Hawaii’s Big Island. I’m here to do observations for the SCUBA-2 Cosmology Legacy Survey (CLS).

I’m not really an observer — this is my first time at a full-sized, modern telescope. But much of JCMT’s observing time is taken up with a series of so-called Legacy Surveys (JLS) — large projects, observing large amounts of sky or large numbers of stars or galaxies.

JCMT is a submillimeter telescope: it detects light with wavelength at or just below one millimeter. This is a difficult regime for astronomy: the atmosphere itself glows very strongly in the infrared, mostly because of water vapour. That’s why I’m sitting at the cold and dry top of an active volcano (albeit one that hasn’t erupted in thousands of years).

Unfortunately, “cold and dry” doesn’t mean there is no precipitation. Here is yesterday’s view, from JCMT over to the CSO telescope:

[Photo: snowy view of the CSO from JCMT.]

This is Hawaii, not Hoth, or even Antarctica.

Tonight seems more promising: we measure the overall quality of the sky as an optical depth, denoted by the symbol τ, essentially the probability that a photon you care about will get scattered by the atmosphere before it reaches your telescope. The JLS survey overall requires τ<0.2, and the CLS that I’m actually here for needs even better conditions, τ<0.10. So far we’re just above 0.20 — good enough for some projects, but not the JLS. I’m up here with a JCMT Telescope System Specialist — who actually knows how to run the telescope — and he’s been calibrating the instrument, observing a few sources, and we’re waiting for the optical depth to dip into the JLS band. If that happens, we can fire up SCUBA-2, the instrument (camera) that records the light from the sky. SCUBA-2 uses bolometers (like HFI on Planck), very sensitive thermometers cooled down to superconducting temperatures.
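Slightly more precisely, τ enters through the standard radiative-transfer relation: the fraction of photons transmitted straight through the atmosphere is

```latex
T = e^{-\tau}, \qquad \tau = 0.2 \;\Rightarrow\; T = e^{-0.2} \approx 0.82,
```

so in tonight’s conditions roughly 18% of the photons are scattered or absorbed on the way down (the “probability of scattering” reading above is the small-τ limit).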

(You can keep track of the conditions here, and specifically monitor the optical depth here. News flash: as I type this, τ=0.199, less than 0.2!)

Later this week, I’ll try to talk about why these are called “Legacy” surveys — and why that’s bad news.

 

Today was the deadline for submitting so-called “White Papers” proposing the next generation of the European Space Agency satellite missions. Because of the long lead times for these sorts of complicated technical achievements, this call is for launches in the faraway years of 2028 or 2034. (These dates would be harder to wrap my head around if I weren’t writing this on the same weekend that I’m attending the 25th reunion of my university graduation, an event about which it’s difficult to avoid the clichéd thought that May, 1988 feels like the day before yesterday.)

At least two of the ideas are particularly close to my scientific heart.

The Polarized Radiation Imaging and Spectroscopy Mission (PRISM) is a cosmic microwave background (CMB) telescope, following on from Planck and the current generation of sub-orbital telescopes like EBEX and PolarBear: whereas Planck has 72 detectors observing the sky over nine frequencies, PRISM would have more than 7000 detectors working in a similar way to Planck over 32 frequencies, along with another set observing 300 narrow frequency bands, and another instrument dedicated to measuring the spectrum of the CMB in even more detail. Combined, these instruments allow a wide variety of cosmological and astrophysical goals, concentrating on more direct observations of early Universe physics than is possible with current instruments, in particular the possible background of gravitational waves from inflation, and the small correlations induced by the physics of inflation and other physical processes in the history of the Universe.

The eLISA mission is the latest attempt to build a gravitational radiation observatory in space, observing astrophysical sources rather than the primordial background affecting the CMB, using giant lasers to measure the distance between three separate free-floating satellites a million kilometres apart from one another. As a gravitational wave passes through the triangle, it bends space and effectively changes the distance between them. The trio would thereby be sensitive to the gravitational waves produced by small, dense objects orbiting one another, objects like white dwarfs, neutron stars and, most excitingly, black holes. This would give us a probe of physics in locations we can’t see with ordinary light, and in regimes that we can’t reproduce on earth or anywhere nearby.

In the selection process, ESA is supposed to take into account the interests of the community. Hence both of these missions are soliciting support, from active and interested scientists and also from the more general public: check out the sites for PRISM and eLISA. It’s a tough call. Both cases would be more convincing with a detection of gravitational radiation in their respective regimes, but the process requires putting down a marker early on. In the long term, a CMB mission like PRISM seems inevitable — there are unlikely to be any technical showstoppers — it’s just a big telescope in a slightly unusual range of frequencies. eLISA is more technically challenging: the LISA Pathfinder effort has shown just how hard it is to keep and monitor a free-floating mass in space, and the lack of a detection so far from the ground-based LIGO observatory, although completely consistent with expectations, has kept the community’s enthusiasm lower. (This will likely change with Advanced LIGO, expected to see many hundreds of sources as soon as it comes online in 2015 or thereabouts.)

Full disclosure: although I’ve signed up to support both, I’m directly involved in the PRISM white paper.

About a year ago, I wrote about TimeWave, a festival of art, science and technology coming this May to London, with tendrils snaking out to New York and LA.

As part of the festival, we’re organising Quest for the Grail: An International Adventure Game, later this month: from noon to 5pm in London and right afterwards, noon to 5pm in Manhattan, New York.

The London teams will “hunt for objects in Clerkenwell hotspots…from the Order of St. John to Blackfriars Bridge to the International Magic Shop. You may be looking for a charm against the Plague, a tombstone or a silver goblet. Your team may be asked to invent something - the holiest of drinks.” The game continues with New York teams searching in “Manhattan hotspots…from Clinton Castle to the tombstones of Trinity Church to the Grand Lodge of the Masons. You may be looking for a marker of a headless ghost who haunts Wall Street, a symbol of George Washington or a troll in the East Village”, aided by London players and puppetmasters overseeing the games.

Unfortunately, I’m in sunny California recovering from my winter (and many years) of Planck work, but if you’re in either city and would like to play, you can join as an individual, a half-team of five, or a full team of ten players. There’s more information on the site, or you can contact the organisers directly at grail@timewavefestival.com.

Planck 2013: the PR

Yesterday’s release of the Planck papers and data wasn’t just aimed at the scientific community, of course. We wanted to let the rest of the world know about our results. The main press conference was at ESA HQ in Paris, and there was a smaller event here in London run by the UKSA, which I participated in as part of a panel of eight Planck scientists.

The reporters tried to keep us honest, asking us to keep simplifying our explanations so that they — and their readers — could understand them. We struggled with describing how our measurements of the typical size of spots in our map of the CMB eventually led us to a measurement of the age of the Universe (which I tried to do in my previous post). This was hard not only because the reasoning is subtle, but also because, frankly, it’s not something we care that much about: it’s a model-dependent parameter, something we don’t measure directly, and it doesn’t have much cosmological consequence. (I ended up on the phone with the BBC’s Pallab Ghosh at about 8pm trying to work out whether the age has changed by 50 or 80 million years, a number that means more to him and his viewers than to me and my colleagues.)

There are pieces by the reporters who asked excellent questions at the press conference, at The Guardian, The Economist and The Financial Times, as well as one behind the (London) Times paywall by Hannah Devlin, who was probably the most rigorous in her requests for us to simplify our explanations. I’ll also point to NPR’s coverage, mostly since it is one of the few outlets to explicitly mention the topology of the Universe, which was one of the areas of Planck science I worked on myself.

Aside from the press conference itself, the media were fairly clamouring for the chance to talk about Planck. Most of the major outlets in the UK and around Europe covered the Planck results. Even in the US, we made it onto the front page of the New York Times. Rather than summarise all of the results, I’ll just self-aggrandizingly point to the places where I appeared: a text-based preview from the BBC, and a short quote on video taken after the press conference, as well as one on ITV. I’m most proud of my appearance with Tom Clarke on Channel 4 News — we spent about an hour planning and discussing the results, edited down to a few minutes including my head floating in front of some green-screen astrophysics animations.

Now that the day is over, you can look at the results for yourself at the BBC’s nice interactive version, or at the lovely Planck Chromoscope created by Cardiff University’s Dr Chris North, who donated a huge amount of his time and effort to helping us make yesterday a success. I should also thank our funders over at the UK Space Agency, STFC and (indirectly) ESA — Planck is big science, and these sorts of results don’t come cheap. I hope you agree that they’ve been worth it.

Planck 2013: the science

If you’re the kind of person who reads this blog, then you won’t have missed yesterday’s announcement of the first Planck cosmology results.

The most important is our picture of the cosmic microwave background itself:

The Planck CMB map

But it takes a lot of work to go from the data coming off the Planck satellite to this picture. First, we have to make nine different maps, one at each of the frequencies in which Planck observes, from 30 GHz (with a wavelength of 1 cm) up to 857 GHz (0.35 mm) — note that the colour scales here are the same:

The maps at 30 GHz, 143 GHz and 857 GHz
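(As a quick sanity check of the wavelengths quoted above, here is a trivial Python sketch of λ = c/ν; nothing here is Planck-specific beyond the band centres.)

```python
c = 2.998e8                      # speed of light in m/s
for nu_GHz in (30, 143, 857):    # three of Planck's nine frequency bands
    lam_mm = c / (nu_GHz * 1e9) * 1e3
    print(f"{nu_GHz} GHz -> {lam_mm:.2f} mm")  # 30 GHz ~ 10 mm (1 cm); 857 GHz ~ 0.35 mm
```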

At low and high frequencies, these are dominated by the emission of our own galaxy, and there is at least some contamination over the whole range, so it takes hard work to separate the primordial CMB signal from the dirty (but interesting) astrophysics along the way. In fact, it’s sufficiently challenging that the team uses four different methods, each with different assumptions, to do so, and the results agree remarkably well.

In fact, we don’t use the above CMB image directly to do the main cosmological science. Instead, we build a Bayesian model of the data, combining our understanding of the foreground astrophysics and the cosmology, and marginalise over the astrophysical parameters in order to extract as much cosmological information as we can. (The formalism is described in the Planck likelihood paper, and the main results of the analysis are in the Planck cosmological parameters paper.)
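To give a flavour of what marginalising over the astrophysical parameters means, here is a toy Python sketch (nothing like the real Planck likelihood): we fit a signal amplitude p while integrating out a made-up nuisance foreground amplitude A on a grid.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
# Fake data: a "cosmological" signal plus a "foreground" template plus noise
data = 1.0 * np.sin(2 * np.pi * x) + 0.3 * x + rng.normal(0, 0.1, x.size)

def loglike(p, A):
    model = p * np.sin(2 * np.pi * x) + A * x
    return -0.5 * np.sum((data - model) ** 2 / 0.1 ** 2)

p_grid = np.linspace(0.5, 1.5, 200)
A_grid = np.linspace(0.0, 1.0, 200)
logL = np.array([[loglike(p, A) for A in A_grid] for p in p_grid])
L = np.exp(logL - logL.max())       # subtract the maximum to avoid underflow

# Marginalise: sum the likelihood over the nuisance parameter A (flat prior)
posterior_p = L.sum(axis=1)
print("best-fit p:", p_grid[np.argmax(posterior_p)])
```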

The main tool for this is the power spectrum, a plot which shows us how the different hot and cold spots on our CMB map are distributed:

The Planck CMB power spectrum

In this plot, the left-hand side (low ℓ) corresponds to large angles on the sky and high ℓ to small angles. Planck’s results are remarkable for covering this whole range from ℓ=2 to ℓ=2500: the previous CMB satellite, WMAP, had a high-quality spectrum out to ℓ=750 or so; ground- and balloon-based experiments like SPT and ACT filled in some of the high-ℓ regime.
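(If you do not have a feel for multipoles, the rough dictionary between ℓ and angle, θ ≈ 180°/ℓ, is easy to tabulate; a throwaway sketch:)

```python
# Rough conversion between multipole l and angular scale: theta ~ 180 deg / l
for ell in (2, 220, 750, 2500):
    print(f"l = {ell:4d}  ->  ~{180.0 / ell:.2f} degrees")
# l = 2 spans the whole sky; l = 2500 corresponds to well under a tenth of a degree
```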

It’s worth marvelling at this for a moment, a triumph of modern cosmological theory and observation: our theoretical models fit our data from scales of 180° down to 0.1°, each of those bumps and wiggles a further sign of how well we understand the contents, history and evolution of the Universe. Our high-quality data have refined our knowledge of the cosmological parameters that describe the universe, decreasing the error bars by a factor of several on the six parameters that describe the simplest ΛCDM universe. Moreover, and maybe remarkably, the data don’t seem to require any additional parameters beyond those six: for example, despite previous evidence to the contrary, the Universe doesn’t need any additional neutrinos.

The quantity best measured by Planck is related to the typical size of spots in the CMB map; it’s about a degree, with an error of less than one part in 1,000. This quantity has changed a bit (by about the width of the error bar) since the previous WMAP results. This, in turn, causes us to revise our estimates of quantities like the expansion rate of the Universe (the Hubble constant), which has gone down, in fact by enough that it’s interestingly different from its best measurements using local (non-CMB) data, from more or less direct observations of galaxies moving away from us. Both methods have disadvantages: for the CMB, it’s a very indirect measurement, requiring us to impose a model on the directly measured spot size (known more technically as the “acoustic scale” since it comes from sound waves in the early Universe). For observations of local galaxies, it requires building up the famous cosmic distance ladder, calibrating our understanding of the distances to further and further objects, few of which we truly understand from first principles. So perhaps this discrepancy is due to messy and difficult astrophysics, or perhaps to interesting cosmological evolution.
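(A back-of-the-envelope version of that spot-size measurement, with deliberately round numbers of my own choosing rather than the Planck values: the acoustic angle is the comoving sound horizon divided by the comoving distance to last scattering.)

```python
import numpy as np

r_s = 147.0     # comoving sound horizon in Mpc (round number, assumed)
D = 14000.0     # comoving distance to last scattering in Mpc (round number, assumed)

theta = r_s / D                                    # acoustic scale in radians
print(f"theta ~ {np.degrees(theta):.2f} degrees")  # ~0.6 deg: "about a degree"
```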

This change in the expansion rate is also indirectly responsible for the results that have made the most headlines: it changes our best estimate of the age of the Universe (slower expansion means an older Universe) and of the relative amounts of its constituents (since the expansion rate is related to the geometry of the Universe, which, because of Einstein’s General Relativity, tells us the amount of matter).
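(The connection between the expansion rate and the age can be made concrete in a few lines of Python. This is a sketch with round-number parameters, not the actual Planck values: the age is the integral of dz/[(1+z)H(z)], and you can check that lowering H0 does indeed make the Universe older.)

```python
import numpy as np
from scipy.integrate import quad

def age_gyr(H0_kms_Mpc, Om=0.315, OL=0.685):
    """Age of a flat LambdaCDM universe: integral of dz / ((1+z) H(z))."""
    H0 = H0_kms_Mpc * 1000 / 3.086e22            # convert km/s/Mpc to 1/s
    H = lambda z: H0 * np.sqrt(Om * (1 + z) ** 3 + OL)
    t, _ = quad(lambda z: 1.0 / ((1 + z) * H(z)), 0, np.inf)
    return t / 3.156e16                          # seconds to Gyr

print(age_gyr(67.3), age_gyr(70.0))              # slower expansion -> older Universe
```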

But the cosmological parameters measured in this way are just Planck’s headlines: there is plenty more science. We’ve gone beyond the power spectrum above to put limits upon so-called non-Gaussianities, which are signatures of the detailed way in which the seeds of large-scale structure in the Universe were initially laid down. We’ve observed clusters of galaxies which give us yet more insight into cosmology (and which seem to show an intriguing tension with some of the cosmological parameters). We’ve measured the deflection of light by gravitational lensing. And in work that I helped lead, we’ve used the CMB maps to put limits on some of the ways in which our simplest models of the Universe could be wrong, possibly having an interesting topology or rotation on the largest scales.

But because we’ve scrutinised our data so carefully, we have found some peculiarities which don’t quite fit the models. From the days of COBE and WMAP, there has been evidence that the largest angular scales in the map, a few degrees and larger, have some “anomalies” — some of the patterns show strange alignments, some show unexpected variation between two different hemispheres of the sky, and there are some areas of the sky that are larger and colder than is expected to occur in our theories. Individually, any of these might be a statistical fluke (and collectively they may still be) but perhaps they are giving us evidence of something exciting going on in the early Universe. Or perhaps, to use a bad analogy, the CMB map is like the Zapruder film: if you scrutinise anything carefully enough, you’ll find things that look like a conspiracy, but turn out to have an innocent explanation.

I’ve mentioned eight different Planck papers so far, but in fact we’ve released 28 (and there will be a few more to come over the coming months, and many in the future). There’s an overall introduction to the Planck Mission, and papers on the data processing, observations of relatively nearby galaxies, and plenty more cosmology. The papers have been submitted to the journal A&A, they’re available on the ArXiV, and you can find a list of them at the ESA site.

Even more important for my cosmology colleagues, we’ve released the Planck data as well, along with the code and other information necessary to understand it: you can get it from the Planck Legacy Archive. I’m sure we’ve only just begun to get exciting and fun science out of the data from Planck. And this is only the beginning of Planck’s data: just the first 15 months of observations, and just the intensity of the CMB. In the coming years we’ll be analysing (and releasing) more than one more year of data, and starting to dig into Planck’s observations of the polarized sky.
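(If you want to play with the maps yourself, the healpy package will read them. A minimal sketch, with a placeholder filename standing in for whatever you download from the archive:)

```python
import healpy as hp
import matplotlib.pyplot as plt

# "planck_cmb_map.fits" is a placeholder for a map file from the Planck Legacy Archive
cmb = hp.read_map("planck_cmb_map.fits")
hp.mollview(cmb * 1e6, title="Planck CMB map", unit="uK")  # assuming the map is in K
plt.show()
```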

Breaking the silence (updated)

My apologies for being far too busy to post. I’ll be much louder in a couple of weeks once we release the Planck data — on March 21. Until then, I have to shut up and follow the Planck rules.

OK, back to editing. (I’ll try to update this post with any advance information as it becomes available.)

Update (on timing, not content): the main Planck press conference will be held on the morning of 21 March at 10am CET at ESA HQ in Paris. There will be a simultaneous UK event (9am GMT) held at the Royal Astronomical Society in London, where the Paris event will be streamed, followed by a local Q&A session. (There will also be a more technical afternoon session in Paris.)

Probably more important for my astrophysics colleagues: the Planck papers will be posted on the ESA website at noon on the 21st, after the press event, and will appear on the ArXiV the following day, 22 March. Be sure to set aside some time next weekend!

Crossing the Channel

Until now, I have been forced to resist the clamour brewing among both members of my extensive readership (hi, dad!) to post a bit more often: my excuse is that, in the little over a month between early September and mid-October, I have travelled back and forth from Paris to London five times, spent a weekend in the USA, started teaching a new course, and ran a half marathon.

Ten one-way trips in six weeks is too many; the Eurostar makes it about as pleasant as it could possibly be: 2 1/4 hours from central London to central Paris by train (a flight from Heathrow to de Gaulle is faster, but the airports are less convenient and much more stressful). Most of my time in Paris was for Planck Satellite meetings, mostly devoted to the first major release of Planck data and papers next year — of course, by the Planck rules, I can’t talk about what happened. At least I have no more trips to Paris until early December (and only four or so hours a week of Planck telecons).

But in addition to three Planck meetings, I also helped out in my minor role as a member of the Scientific Organizing Committee of the Big Bang, Big Data, Big Computing meeting at the APC, which was an excellent gathering of cosmologists with computer scientists and statisticians, all doing our best to talk over the fences of jargon and habit that often keep the different fields from having productive conversations. One of my favourite talks was the technical but entertaining From mean Euler characteristics to the Gaussian kinematic formula by Robert Adler, whose work in statistics more than thirty years ago taught many in cosmology how to treat the functions that we use to describe the distribution of density and temperature in the Universe as random fields; he discussed more recent updates to that early work for much more general circumstances, the cosmological repercussions of which have yet to be digested. Another highlight was from Imperial’s own Professor David Hand, Opportunities and Challenges in Modelling and Anomaly Detection, discussing how to pull small and possibly weird (“anomalous”) signals from large amounts of data — he didn’t highlight many specific instances in cosmology, but rather gave examples with other sorts of big data, such as the distribution of prices of credit card purchases (with some particularly good anecdotes culled from gas/petrol station data).

Finally, in addition to those many days of meetings — and yes, the occasional good Parisian meal — there were a couple of instances of the most satisfying of my professional duties: two examinations for newly-minted PhDs from the Institut d’Astrophysique de Paris and the Laboratoire Astroparticule et Cosmologie — congratulations to Doctors Errard and Ducout.

The Higgs day continues (and I’m not even a particle physicist).

At about 5pm, just as I was dialling into one of my several-times-a-week Planck teleconferences, I had an email from Tim at the BBC, who works with the World Service’s “World Have Your Say” show, coming on at 6pm. Would I be able to come up with a one-minute analogy for the Higgs Boson? I came up with two (neither original). The first is that the Higgs field acts like treacle or molasses, inhibiting the motion of particles — and it’s exactly a resistance to motion that is the manifestation of inertia, and hence mass. The more fanciful analogy, due to UCL’s David Miller, is that it behaves like a roomful of partygoers when someone famous walks into the room. We peons throng to the celebrity, slowing her down (it was Margaret Thatcher in the original version); a less famous celebrity is impeded somewhat less, and even the partygoers themselves — analogous to the Higgs particles — can’t move freely. Hence, all particles have a mass.

Unfortunately, both of these had been discussed by UCL’s Professor John Butterworth (who blogs for the Guardian and whose excellent explanations made him ubiquitous in today’s media blitz), the IOP’s Caitlin Watson, and journalist (and cosmology consultant?!) Marcus Chown even before I came on. I was prepared to give up the chance for media glory, or possibly talk about the related concept of spontaneous symmetry breaking and the infamous “Mexican Hat potential”. But at just after 6, they rang back and asked if I could join the programme in progress. What I hadn’t quite realised was that the title for the show was “Is there room for Higgs Boson & Religion?”.

I suppose this stems from Leon Lederman’s book, The God Particle. The story that has been told in recent years is that Lederman wanted to call it “The Goddamn Particle” and that his American publishers wouldn’t let that pass — but I always thought that version a little too pat, both getting Lederman off the hook for an ill-conceived name, and tweaking American religious sensitivities.

But whatever the source, the host wanted to use today’s news to try to pound on the usual science-vs-religion drum, inviting listener comments on the topic along with a discussion between the scientists and a series of religious figures. Luckily, none of us really wanted to use this occasion to disagree: the spiritual types wanted to see science as a celebration of the (god-created, god-given) natural world, and we scientists didn’t want to claim more for science than its ability to answer the practical questions about the real world that it has the tools to address. Of course, I felt the need to say, many religious people and perhaps entire religions make supernatural claims about the world. And those, so far, have turned out to be false.

I would have preferred to talk about spontaneous symmetry breaking, but if you want to hear about science and religion, download the podcast.
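(For the record, the potential I would have talked about can be written, in one minimal form, with my choice of notation, v setting the scale of the symmetry breaking and λ the self-coupling, as

$$V(\phi) = \lambda\left(|\phi|^2 - v^2\right)^2 ,$$

whose minima form a circle at |φ| = v, the brim of the hat: the laws are symmetric, but the vacuum has to pick a single point on that brim, breaking the symmetry.)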

Spring Break?

Somehow I’ve managed to forget my usual end-of-term post-mortem of the year’s lecturing. I think perhaps I’m only now recovering from 11 weeks of lectures, lab supervision and tutoring, alongside a very busy time analysing Planck satellite data.

But a few weeks ago term ended, and I finished teaching my undergraduate cosmology course at Imperial, 27 lectures covering 14 billion years of physics. It was my fourth time teaching the class (I’ve talked about my experiences in previous years here, here, and here), but this will be the last time during this run. Our department doesn’t let us teach a course more than three or four years in a row, and I think that’s a wise policy. I think I’ve arrived at some very good ways of explaining concepts such as the curvature of space-time itself, and difficulties with our models like the 122-or-so-order-of-magnitude cosmological constant problem, but I also noticed that I wasn’t quite as excited as in previous years, when I worked up from the experimentation of my first time through in 2009, put it all on a firmer foundation — and wrote up the lecture notes — in 2010, and refined things over the last two years. This year’s teaching evaluations should come through soon, so I’ll have some feedback, and there are still about six weeks until the students’ understanding — and my explanations — are tested in the exam.

Next year, I’ve got the frankly daunting responsibility of teaching second-year quantum mechanics: 30 lectures, lots of problem sheets, in-class problems to work through, and of course the mindbending weirdness of the subject itself. I’d love to teach them Dirac’s very useful notation, which unifies the physical concept of quantum states with the mathematical ideas of vectors, matrices and operators — and which is used by all actual practitioners, from advanced undergraduates to working physicists. But I’m told that students find this an extra challenge rather than a simplification. Comments from teachers and students of quantum mechanics are welcome.
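In the meantime, here is the kind of thing I mean, as a toy numerical sketch (mine, not course material): kets are column vectors, operators are matrices, and ⟨ψ|φ⟩ is just a conjugated dot product.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)          # |0>
ket1 = np.array([0, 1], dtype=complex)          # |1>
psi = (ket0 + ket1) / np.sqrt(2)                # |psi> = (|0> + |1>)/sqrt(2)

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)   # an operator as a matrix

print(np.vdot(ket0, sigma_x @ psi))   # the amplitude <0|sigma_x|psi> = 1/sqrt(2)
print(abs(np.vdot(ket0, psi)) ** 2)   # the Born-rule probability |<0|psi>|^2 = 0.5
```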

On 'Jaffe'

Despite the last decade and a half or more of the internet, I’ve never bothered to actually work out the meaning and history of my surname, “Jaffe”. Somehow I always thought it was connected to the town of Jaffa, near Tel Aviv. But in fact, 30 seconds of searching turns up information that the name comes from the Hebrew yafeh (יפה, meaning “beautiful”) and dates at least from Rabbi Mordecai Jaffe in 16th-century Prague. I was also happy to discover that, with at least half a millennium behind us, there are plenty of interesting Jaffes in history and today (although I had to be careful not to be waylaid by the possibility that we’re actually Irish…).

Of course there are a lot of physicists: Arthur and Robert, as well as several astronomers who don’t quite rate a wikipedia page (yet!): Walter, Daniel and Tess (who is one of my collaborators on Planck). But also actors — Sam, Nicole and Marielle, composers — David and Stephen, and even athletes — Peter and Scott. And there really is an Irish connection: Sir Otto, once the Lord Mayor of Belfast, born in Hamburg, lived and worked in New York as well as Ireland.

Planck Warms Up

Nearly two-and-a-half years after its launch, the end of ESA’s Planck mission has begun. (In fact, the BBC scooped the Planck collaboration itself with a story last week; you can read the UK take at the excellent Cardiff-led public Planck site.)

Planck’s High-Frequency Instrument (HFI) must be cooled to 0.1 degrees above absolute zero, maintained at this temperature by a series of refrigerators — which had been making Planck the coldest known object in space, colder than the 2.7 degrees to which the cosmic microwave background itself warms even the most remote regions of intergalactic space. The final cooler in the chain relies on a tank of the helium-3 isotope, which has finally run out, within days of its predicted lifetime — after giving Planck more than twice as much time observing the Universe as its nominal 14-month mission.

The Low-Frequency Instrument (LFI) doesn’t require such cold temperatures, although it does use one of the earlier stages in the chain, the UK-built 4-degree cooler, as a reference against which it compares its measurements. LFI will, therefore, continue its measurements for the next half-year or so.

But our work, of course, goes on: we will continue to process and analyse Planck’s data, refining our maps of the sky, and get down to the real work of extracting a full sky’s worth of astrophysics and cosmology from our data. The first, preliminary, release of Planck data happened just one year ago, and yet more new Planck science will be presented at a conference in Bologna in a few months. The most exciting and important work will be getting cosmology from Planck data, which we expect to first present in early 2013, and likely in further iterations beyond that.

Passion for Light

It’s been a busy few weeks, and that seems like a good excuse for my lack of posts. Since coming back from Scotland, I’ve been to:

  • Paris, for our bi-monthly Planck Core Team meetings, discussing the state of the data from the satellite and our ongoing processing of it;

  • Cambridge, for yet more Planck, this time to discuss the papers that we as a collaboration will be writing over the next couple of years; and

  • Varenna, on Lake Como in northern Italy, for the Passion for Light meeting, sponsored by SIF (the Italian Physical Society) and EPS (the European Physical Society). The meeting was at least in part to introduce the effort to sponsor an International Year of Light in 2015, supported by the UN and international scientific organizations. My remit was “Light from the Universe”, which I took as an excuse to talk about (yes) Planck and the Cosmic Microwave Background. That makes sense because of what is revealed in this plot, a version of which I showed:

Extragalactic Backgrounds (after Dole and Bethermin)

This figure (made after an excellent one which will be in an upcoming paper by Dole and Bethermin) shows the intensity of the “background light” integrated over all sources in the Universe. The horizontal axis gives the frequency of electromagnetic radiation — from the radio at the far left, to the Cosmic Microwave Background (CMB), the Cosmic Infrared Background (CIB), optical light in the middle, and on to ultraviolet, x-ray and gamma-ray light. The height of each curve is proportional to the intensity of the background, the amount of energy falling on a square meter of area per second from a particular direction on the sky (for aficionados of the mathematical details, we actually plot the quantity νIν to take account of the logarithmic axis, so that the area under the curve gives a rough estimate of the total intensity), which is itself also proportional to the total energy density of that background, averaged over the whole Universe.
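(For those aficionados, the reason for plotting νIν is the one-line identity

$$\int I_\nu \, d\nu = \int \nu I_\nu \, d(\ln\nu) ,$$

so on a logarithmic frequency axis, equal areas under a νIν curve contribute equally to the total intensity.)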

Here on earth, we are dominated by the sun (or, indoors, by artificial illumination), but a planet is a very unusual place: most of the Universe is empty space, not particularly near a star. What this plot shows is that most of the background — most of the light in the Universe — isn’t from stars or other astronomical objects at all. Rather, it’s the Cosmic Microwave Background, the CMB, light from the early Universe, generated before there were any distinct objects at all, visible today as a so-called black body with temperature 2.73 degrees Kelvin. It also shows us that there is roughly the same amount of energy in infrared light (the CIB) as in the optical. This light doesn’t come directly from stars, but is re-processed as visible starlight is absorbed by interstellar dust, which heats up and in turn glows in the infrared. That is one of the reasons why Planck’s sister-satellite Herschel, an infrared observatory, is so important: it reveals the fate of roughly half of the starlight ever produced. So we see that outside of the optical and ultraviolet, stars do not dominate the light of the Universe. The x-ray background comes from very hot gas (heated by falling into clusters of galaxies on large scales, or by supernovae within galaxies), along with the very energetic collisions between particles that happen in the environments around black holes as matter falls in. We believe that the gamma-ray background also comes from accretion onto supermassive black holes at the centres of galaxies. But my talk centred on the yellow swathe of the CMB, although the only Planck data released so far are the relatively small contaminants from other sources in the same range of frequencies.
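(A quick numerical check, my own sketch using the standard black-body formula rather than anything from the talk, that a 2.73-kelvin black body does indeed peak in the microwave band:)

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck and Boltzmann constants, speed of light (SI)
T = 2.73                                   # CMB temperature in kelvin

nu = np.logspace(9, 13, 2000)              # frequencies from 1 GHz to 10 THz
B = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))   # black-body intensity B_nu(T)

peak = nu[np.argmax(nu * B)]               # peak of nu * B_nu, the quantity plotted above
print(f"nu * B_nu peaks near {peak / 1e9:.0f} GHz")       # ~220 GHz, in the microwave
```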

Other speakers in Varenna discussed microscopy, precision clocks, particle physics, the wave-particle duality, and the generation of very high-energy particles of light in the laboratory. But my favourite was a talk by Alessandro Farini, a Florentine “psychophysicist” who studies our perception of art. He showed the detailed (and extremely unphysical) use of light in art by even such supposedly realistic painters as Caravaggio, as well as using a series of optical illusions to show how our perceptions, which we think of as a simple recording of our surroundings, involve a huge amount of processing and interpretation before we are consciously aware of them. (As an aside, I was amused to see his collection of photographs with CMB Nobel Laureate George Smoot.)

And having found myself on the shores of Lake Como I took advantage of my good fortune:

Villa Monastero 5
(Many more pictures here.)

OK, this post has gone on long enough. I’ll have to find another opportunity to discuss speedy neutrinos, crashing satellites (and my latest appearance on the BBC World News to talk about the latter), not to mention our weeklong workshop at Imperial discussing the technical topic of photometric redshifts, and the 13.1 miles I ran last weekend.

STFC and UKSA

Funding for space missions in the UK was split from the Science and Technology Facilities Council to the UK Space Agency earlier this year. Very roughly, UKSA will fund the missions themselves all the way through to the processing of data, while STFC will fund the science that comes from analysing the data.

To try to be a little more specific, the agencies have put out a press release on this so-called “dual key” approach: “Who does what? — Arrangements for sharing responsibility for the science programme between the STFC and the UK Space Agency.” The executive summary is:

  • UKSA

    • ESA subscriptions
    • Mission-specific instruments
    • Operation of UK instruments (Post-launch support)
    • Aurora integrated national programme
  • STFC

    • Early R&D for space science (non-mission specific)
    • Studentships/fellowships
    • Scientific exploitation of missions

This still leaves many of the details of the split unanswered, or at least fuzzy: How do we ensure that government supports the two agencies adequately and jointly? How do we ensure that STFC supports science exploitation from missions that UKSA funds, so that the UK gets the full return on its investment? How do we define the split between “data analysis” and “science exploitation”?

Here at Imperial, we work on both sides of that divide for both Planck and Herschel: we are the home to data analysis centres for both missions, and want to take advantage of the resulting science opportunities. Indeed, as we take the Planck mission ahead towards its first cosmology results at the end of next year, we are already seeing some of these tensions played out, in both the decision-making process of each agency separately as well as in the overall level of funding available in these austere times.