Results matching “CMB”

The first direct detection of gravitational waves was announced in February 2016 by the LIGO team, after decades of planning, building and refining their beautiful experiment. Since that time, the US-based LIGO has been joined by the European Virgo gravitational wave telescope (and more are planned around the globe).

The first four events that the teams announced were from the spiralling in and eventual mergers of pairs of black holes, with masses ranging from about seven to about forty times the mass of the sun. These masses are perhaps a bit higher than we expect to be typical, which might raise intriguing questions about how such black holes were formed and evolved, although even comparing the results to the predictions is a hard problem, depending on the details of the statistical properties of the detectors and on the astrophysical models for the evolution of black holes and the stars from which (we think) they formed.

Last week, the teams announced the detection of a very different kind of event, the collision of two neutron stars, each about 1.4 times the mass of the sun. Neutron stars are one possible end state of the evolution of a star, once its atoms are no longer able to withstand the pressure of the gravity trying to force them together. This was first understood by S. Chandrasekhar in 1930, who realised that there is a limit to the mass of a star held up simply by the quantum-mechanical repulsion of the electrons at the outskirts of the atoms making up the star. When a star surpasses this mass, known, appropriately enough, as the Chandrasekhar mass, it will collapse in upon itself, combining the electrons and protons into neutrons and likely releasing a vast amount of energy in the form of a supernova explosion. After the explosion, the remnant is likely to be a dense ball of neutrons, whose properties are determined fairly precisely by physics similar to that of the Chandrasekhar limit (worked out for this case by Oppenheimer, Volkoff and Tolman), giving us the magic 1.4-solar-mass number.

(Last week also coincidentally would have seen Chandrasekhar’s 107th birthday, and Google chose to illustrate their home page with an animation in his honour for the occasion. I was a graduate student at the University of Chicago, where Chandra, as he was known, spent most of his career. Most of us students were far too intimidated to interact with him, although it was always seen as an auspicious occasion when you spotted him around the halls of the Astronomy and Astrophysics Center.)

This process can therefore make a single 1.4-solar-mass neutron star, and we can imagine that in some rare cases we can end up with two neutron stars orbiting one another. Indeed, the fact that LIGO saw one, but only one, such event during its year-and-a-half run allows the teams to constrain how often that happens, albeit with very large error bars: between 320 and 4740 events per cubic gigaparsec per year. A gigaparsec is about 3.26 billion light-years, so these are rare events indeed. These results and many other scientific inferences from this single amazing observation are reported in the teams’ overview paper.

A series of other papers discuss those results in more detail, covering everything from the physics of neutron stars to limits on departures from Einstein’s theory of gravity (for more on some of these other topics, see this blog, or this story from the NY Times). As a cosmologist, the most exciting of the results for me was the use of the event as a “standard siren”, an object whose gravitational wave properties are well-enough understood that we can deduce the distance to the object from the LIGO results alone. Although the idea came from Bernard Schutz in 1986, the term “standard siren” was coined somewhat later (by Sean Carroll) in analogy to the (heretofore?) more common cosmological standard candles and standard rulers: objects whose intrinsic brightness or size is known, and whose distances can therefore be deduced from their apparent brightness or size, just as you can roughly deduce how far away a light bulb is by how bright it appears, or how far away a familiar object or person is by how big it looks.

Gravitational wave events are standard sirens because our understanding of relativity is good enough that an observation of the shape of the gravitational wave pattern as a function of time can tell us the properties of its source. Knowing that, we also know the amplitude of that pattern when it was released. In the time since then, as the gravitational waves have travelled across the Universe toward us, the amplitude has gone down (just as further objects look dimmer, further sirens sound quieter); the expansion of the Universe also causes the frequency of the waves to decrease — this is the cosmological redshift that we observe in the spectra of distant objects’ light.
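In symbols (the standard textbook scalings, not anything specific to the discovery papers): the strain amplitude falls off with the luminosity distance, while the observed frequency is redshifted,

$$ h \propto \frac{1}{d_L}, \qquad f_{\mathrm{obs}} = \frac{f_{\mathrm{emit}}}{1+z} , $$

which is why a source whose intrinsic amplitude we can calculate acts as a distance marker.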

Unlike LIGO’s previous detections of binary-black-hole mergers, this new observation of a binary-neutron-star merger was also seen in photons: first as a gamma-ray burst, and then as a “nova”: a new dot of light in the sky. Indeed, the observation of the afterglow of the merger by teams of literally thousands of astronomers in gamma and x-rays, optical and infrared light, and in the radio, is one of the more amazing pieces of academic teamwork I have seen.

And these observations allowed the teams to identify the host galaxy of the original neutron stars, and to measure the redshift of its light (the lengthening of the light’s wavelength due to the movement of the galaxy away from us). It is most likely a previously unexceptional galaxy called NGC 4993, with a redshift z=0.009, putting it about 40 megaparsecs away, relatively close on cosmological scales.

But this means that we can measure all of the factors in one of the most celebrated equations in cosmology, Hubble’s law: cz = H₀d, where c is the speed of light, z is the redshift just mentioned, and d is the distance measured from the gravitational wave burst itself. The remaining factor is H₀, the famous Hubble Constant, giving the current rate of expansion of the Universe, usually measured in kilometres per second per megaparsec. The old-fashioned way to measure this quantity is via the so-called cosmic distance ladder, bootstrapping up from nearby objects of known distance to more distant ones whose properties can only be calibrated by comparison with those more nearby. But errors accumulate in this process, and we are susceptible to the weakest rung on the ladder (see recent work by some of my colleagues trying to formalise this process). Alternately, we can use data from cosmic microwave background (CMB) experiments like the Planck Satellite (see here for lots of discussion on this blog); the typical size of the CMB pattern on the sky is something very like a standard ruler. Unfortunately, it, too, needs to be calibrated, implicitly by other aspects of the CMB pattern itself, and so ends up being a somewhat indirect measurement. Currently, the best cosmic-distance-ladder measurement gives something like 73.24 ± 1.74 km/sec/Mpc whereas Planck gives 67.81 ± 0.92 km/sec/Mpc; these numbers disagree by “a few sigma”, enough that it is hard to explain as simply a statistical fluctuation.
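As a toy illustration (my own back-of-the-envelope check, not the teams’ analysis, which handles the inclination and peculiar-velocity issues discussed below), we can plug the numbers above into Hubble’s law and quantify the distance-ladder/Planck disagreement:

```python
# Toy check of Hubble's law, cz = H0 * d, using the numbers quoted above.
C_KM_S = 299_792.458  # speed of light in km/s

def hubble_constant(z, d_mpc):
    """H0 in km/s/Mpc from a redshift and a distance in megaparsecs."""
    return C_KM_S * z / d_mpc

# NGC 4993: z = 0.009 at roughly 40 Mpc
print(round(hubble_constant(0.009, 40.0), 1))  # ~67.5 km/s/Mpc, uncorrected

# The "few sigma" disagreement between the two published values:
h_ladder, err_ladder = 73.24, 1.74   # cosmic distance ladder
h_planck, err_planck = 67.81, 0.92   # Planck CMB
sigma = abs(h_ladder - h_planck) / (err_ladder**2 + err_planck**2) ** 0.5
print(f"{sigma:.1f} sigma apart")    # ~2.8 sigma
```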

Unfortunately, the new LIGO results do not solve the problem. Because we cannot observe the inclination of the neutron-star binary (i.e., the orientation of its orbit), the error on the distance to the object is inflated by the Bayesian marginalisation over this unknown parameter (just as the Planck measurement requires marginalisation over all of the other cosmological parameters to fully calibrate the results). Because the host galaxy is relatively nearby, the teams must also account for the fact that the redshift includes not only the effect of the cosmological expansion but also the movement of galaxies with respect to one another due to the pull of gravity on relatively large scales; this so-called peculiar velocity has to be modelled, which adds further to the errors.

This procedure gives a final measurement of H₀ = 70.0 +12.0/−8.0 km/sec/Mpc, with the full shape of the probability curve shown in the Figure, taken directly from the paper. Both the Planck and distance-ladder results are consistent with these rather large error bars. But this is calculated from a single object; as more of these events are seen, the error bars will go down, typically by something like the square root of the number of events, so it might not be too long before this is the best way to measure the Hubble Constant.

GW H0
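To get a feel for that square-root-of-N scaling, here is a toy version (my own illustration; the single-event error of ±10 km/sec/Mpc is a made-up stand-in for the +12/−8 spread, and real events will of course vary in quality):

```python
# How the standard-siren H0 error might shrink with the number of events,
# assuming N independent events each comparable to this first one.
single_event_error = 10.0  # km/s/Mpc, rough stand-in for the +12/-8 spread
for n in (1, 10, 50, 100):
    print(f"{n:3d} events -> ~{single_event_error / n**0.5:4.1f} km/s/Mpc")
```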

[Apologies: too long, too technical, and written late at night while trying to get my wonderful not-quite-three-week-old daughter to sleep through the night.]

Gravitational Waves?

[Uh oh, this is sort of disastrously long, practically unedited, and a mixture of tutorial- and expert-level text. Good luck. Send corrections.]

It’s been almost exactly a year since the release of the first Planck cosmology results (which I discussed in some depth at the time). On this auspicious anniversary, we in the cosmology community found ourselves with yet more tantalising results to ponder, this time from a ground-based telescope called BICEP2. While Planck’s results were measurements of the temperature of the cosmic microwave background (CMB), this year’s concerned its polarisation.

Background

Polarisation is essentially a headless arrow that comes attached to the photons arriving from any direction on the sky — if you’ve worn polarised sunglasses, and noticed how what you see changes as you rotate them, you’ve seen polarisation. The same physics responsible for the temperature pattern also generates polarisation. But more importantly for these new results, polarisation is a sensitive probe of some of the processes that are normally mixed together, and so hard to distinguish, in the temperature.

Technical aside (you can ignore the details of this paragraph). Actually, it’s a bit more complicated than that: we can think of those headless arrows on the sky as the sum of two separate kinds of patterns. We call the first of these the “E-mode”, and it represents patterns consisting of either radial spikes or circles around a point. The other patterns are called the “B-mode” and look like patterns that swirl around, either to the left or the right. The important difference between them is that the E modes don’t change if you reflect them in a mirror, while the B modes do — we say that they have a handedness, or parity, in somewhat more mathematical terms. I’ve discussed the CMB a lot in the past but can’t do the theory of the CMB justice here, but my colleague Wayne Hu has an excellent, if somewhat dated, set of web pages explaining the physics (probably at a physics-major level).

EBfig
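To make the aside concrete, here is a minimal sketch of the decomposition in the flat-sky approximation, where Fourier modes stand in for spherical harmonics (my own illustrative code with made-up inputs, not any experiment’s pipeline):

```python
import numpy as np

def qu_to_eb(Q, U):
    """Rotate Stokes Q/U maps into E/B maps via the Fourier-mode angle phi."""
    ny, nx = Q.shape
    lx = np.fft.fftfreq(nx)[None, :]
    ly = np.fft.fftfreq(ny)[:, None]
    phi = np.arctan2(ly, lx)               # angle of each Fourier mode
    Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
    Ek = Qk * np.cos(2 * phi) + Uk * np.sin(2 * phi)
    Bk = -Qk * np.sin(2 * phi) + Uk * np.cos(2 * phi)
    return np.fft.ifft2(Ek).real, np.fft.ifft2(Bk).real

# Check: a pure-E pattern should put (almost) nothing into B.
rng = np.random.default_rng(0)
Ek = np.fft.fft2(rng.standard_normal((64, 64)))      # random E-only modes
phi = np.arctan2(np.fft.fftfreq(64)[:, None], np.fft.fftfreq(64)[None, :])
Q = np.fft.ifft2(Ek * np.cos(2 * phi)).real          # rotate E into Q/U
U = np.fft.ifft2(Ek * np.sin(2 * phi)).real
E_out, B_out = qu_to_eb(Q, U)
print(np.abs(B_out).max() / np.abs(E_out).max())     # ~1e-15: no B leakage
```

The check at the end constructs a pure-E pattern and confirms that essentially nothing leaks into B; on the real, masked, noisy sky this separation is much harder.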

The excitement comes because these B-mode patterns can only arise in a few ways. The most exciting is that they can come from gravitational waves (GWs) in the early Universe. Gravitational waves (sometimes incorrectly called “gravity waves”, which historically refers to unrelated phenomena!) are propagating ripples in space-time, predicted in Einstein’s general relativistic theory of gravitation. Because the CMB is generated about 400,000 years after the big bang, it’s only sensitive to gravitational radiation from the early Universe, not astrophysical sources like spiralling neutron stars or black holes — from which we have other, circumstantial, evidence for gravitational waves, and which are the sources for which experiments like LIGO and eLISA will be searching. These early-Universe gravitational waves move matter around in a specific way, which in turn induces that specific B-mode polarisation pattern.

In the early Universe, there aren’t a lot of ways to generate gravitational waves. The most important one is inflation, an early period of expansion which blows up a subatomically-sized region by something like a billion-billion-billion times in each direction — inflation seems to be the most well thought-out idea for getting a Universe that looks like the one in which we live, flat (in the sense of Einstein’s relativity and the curvature of space-time), more or less uniform, but with small perturbations to the density that have grown to become the galaxies and clusters of galaxies in the Universe today. Those fluctuations arise because the rapid expansion takes minuscule quantum fluctuations and blows them up to finite size. This is essentially the same physics as the famous Hawking radiation from black holes. The fluctuations that eventually create the galaxies are accompanied by a separate set of fluctuations in the gravitational field itself: these are the ones that become gravitational radiation observable in the CMB. We characterise the background of gravitational radiation through the number r, which stands for the ratio of these two kinds of fluctuations — gravitational radiation divided by the density fluctuations.
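For the record, the usual definition is, schematically (the precise version compares the two primordial power spectra at a fixed “pivot” scale k₀):

$$ r \;\equiv\; \frac{\Delta^2_t(k_0)}{\Delta^2_s(k_0)} \;=\; \frac{\text{gravitational-wave (tensor) power}}{\text{density (scalar) power}} . $$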

Important caveat: there are other ways of producing gravitational radiation in the early Universe, although they don’t necessarily make exactly the same predictions; some of these issues have been discussed by my colleagues in various technical papers (Brandenberger 2011; Hindmarsh et al 2008; Lizarraga et al 2014 — the latter paper from just today!).

However, there are other ways to generate B modes. First, lots of astrophysical objects emit polarised light, and they generally don’t preferentially create E or B patterns. In particular, clouds of gas and dust in our galaxy will generally give us polarised light, and as we’re sitting inside our galaxy, it’s hard to avoid these. Luckily, we’re towards the outskirts of the Milky Way, so there are some clean areas of sky, but it’s hard to be sure that we’re not seeing some such light — and there are very few previous experiments to compare with.

We also know that large masses along the line of sight — clusters of galaxies and even bigger — distort the path of the light and can move those polarisation arrows around. This, in turn, can convert what started out as E into B and vice versa. But we know a lot about that intervening matter, and about the E-mode pattern that we started with, so we have a pretty good handle on this lensing effect. There are some angular scales over which it is larger than the gravitational wave signal, and some over which the gravitational wave signal dominates.

So, if we can observe B-modes, and we are convinced that they are primordial, and that they are not due to lensing or astrophysical sources, and they have the properties expected from inflation, then (and only then!) we have direct evidence for inflation!

Data

Here’s a plot, courtesy the BICEP2 team, with the current state of the data targeting these B modes:

Almost all BB limits

The figure shows the so-called power spectrum of the B-mode data — the horizontal “multipole” axis corresponds to angular sizes (θ) on the sky: very roughly, multipole ℓ ~ 180°/θ. The vertical axis gives the amount of “power” at those scales: it is larger if there are more structures of that particular size. The downward pointing arrows are all upper limits; the error bars labeled BICEP2 and Polarbear are actual detections. The solid red curve is the expected signal from the lensing effect discussed above; the long-dashed red curve is the effect of gravitational radiation (with a particular amplitude), and the short-dashed red curve is the total B-mode signal from the two effects.
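A quick converter for the rule of thumb just mentioned (approximate only; the exact correspondence between multipole and angle depends on the shape of the pattern):

```python
# Rough translation between multipole ell and angular scale theta (degrees),
# using ell ~ 180 / theta from the text.
def ell_from_theta(theta_deg: float) -> float:
    return 180.0 / theta_deg

def theta_from_ell(ell: float) -> float:
    return 180.0 / ell

print(ell_from_theta(1.0))  # degree scales correspond to ell ~ 180
print(theta_from_ell(200))  # ell = 200 corresponds to ~0.9 degrees
```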

The Polarbear results were announced on 11 March (disclosure: I am a member of the Polarbear team). These give a detection of the gravitational lensing signal. It was expected, and has been observed in other ways both in temperature and polarisation, but this was the first time it has been seen directly in this sort of B-mode power spectrum, a crucial advance in the field, letting us really see lensing unblurred by the presence of other effects. We looked at very “clean” areas of the sky, in an effort to minimise the possible contamination from those astrophysical foregrounds.

The BICEP2 results were announced with a big press conference on 17 March. There are two papers so far, one giving the scientific results, another discussing the experimental techniques used — more papers discussing the data processing and other aspects of the analysis are forthcoming. But there is no doubt from the results that they have presented so far that this is an amazing, careful, and beautiful experiment.

Taken at face value, the BICEP2 results give a pretty strong detection of gravitational radiation from the early Universe, with the ratio parameter r=0.20, with error bars +0.07 and -0.05 (they are different in the two different directions, so you can’t write it with the usual “±”).

This is why there has been such an amazing amount of interest in both the press and the scientific community about these results — if true, they are a first semi-direct detection of gravitational radiation, strong evidence that inflation happened in the early Universe, and therefore a first look at waves which were created in the first tiny fraction of a second after the big bang, and have been propagating unimpeded in the Universe ever since. If we can measure more of the properties of these waves, we can learn more about the way inflation happened, which may in turn give us a handle on the particle physics of the early Universe and ultimately on a so-called “theory of everything” joining up quantum mechanics and gravity.

Taken at face value, the BICEP2 results imply that the very simplest theories of inflation may be right: the so-called “single-field slow-roll” theories that postulate a very simple addition to the particle physics of the Universe. In the other direction, scientists working on string theory have begun to make predictions about the character of inflation in their models, and many of these models are strongly constrained — perhaps even ruled out — by these data.

Skepticism

This is great. But scientists are skeptical by nature, and many of us have spent the last few days happily trying to poke holes in these results. My colleagues Peter Coles and Ted Bunn have blogged their own worries over the last couple of days, and Antony Lewis has already done some heroic work looking at the data.

The first worry is raised by their headline result: r=0.20. On its face, this conflicts with last year’s Planck result, which says that r<0.11 (of course, both of these numbers really represent probability distributions, so there is no absolute contradiction between them, but rather they should be seen as a very unlikely combination). How can we ameliorate the “tension” (a word that has come into vogue in cosmology lately: a wimpy way — that I’ve used, too — of talking about apparent contradictions!) between these numbers?

First, how does Planck measure r to begin with? Above, I wrote about how B modes show only gravitational radiation (and lensing, and astrophysical foregrounds). But the same gravitational radiation also contributes to the CMB temperature, albeit at a comparatively low level, and at large angular scales — the very left-most points of the temperature equivalent of a plot like the one above. I reproduce one from last year’s Planck release here:

PlanckCl low

In fact, those left-most data points are a bit low compared to the most favoured theory (the smooth curve), which pushes the Planck limit down a bit.

But Planck and BICEP2 measure r at somewhat different angular scales, and so we can “ameliorate the tension” by making the theory a bit more complicated: the gravitational radiation isn’t described by just one number, but by a curve. If both datasets are to be believed, the curve slopes up from the Planck regime toward the BICEP2 regime. In fact, such a new parameter is already present in the theory, and goes by the name “tensor tilt”. The problem is that the required amount of tilt is somewhat larger than the simplest ideas — such as the single-field slow-roll theories — prefer.

If we want to keep the theories simple, we need to make the data more complicated: bluntly, we need to find mistakes in either Planck or BICEP2. The large-scale CMB temperature sky has been scrutinised for the last 20 years or so, from COBE through WMAP and now Planck. Throughout this time, the community has been building up a catalogue of “anomalies” (another term of art we use to describe things we’re uncomfortable with), many of which do seem to affect those large scales. The problem is that no one can quite figure out whether these things are statistically significant: we look at so many possible ways that the sky could be weird, but we only publish the ones that look significant. As my Imperial colleague Professor David Hand would point out, “Coincidences, Miracles, and Rare Events Happen Every Day”. Nonetheless, there seems to be some evidence that something interesting/unusual/anomalous is happening at large scales, and perhaps if we understood this correctly, the Planck limits on r would go up.

But perhaps not: those results have been solid for a long while without an alternative explanation. So maybe the problem is with BICEP2? There are certainly lots of ways they could have made mistakes. Perhaps most importantly, it is very difficult for them to distinguish between primordial perturbations and astrophysical foregrounds, as their main results use only data from a single frequency (like a single colour in the spectrum, but down closer to radio wavelengths). They do compare with some older data at a different frequency, but the comparison does not strongly rule out contamination. They also rely on models for possible contamination, which give a very small contribution, but these models are very poorly constrained by current data.

Another way they could go wrong is that they may misattribute some of their temperature measurement, or their E mode polarisation, to their B mode detection. Because the temperature and E mode are so much larger than the B they are seeing, only a very small amount of such contamination could change their results by a large amount. They do their best to control this “leakage”, and argue that its residual effect is tiny, but it’s very hard to get absolutely right.

And there is some internal evidence within the BICEP2 results that things are not perfect. The most obvious comes from the figure above: the points around ℓ=200 — where the lensing contribution begins to dominate — are a bit higher than the model. Is this just a statistical fluctuation, or is it evidence of a broader problem? Their paper shows some somewhat discrepant points in the E polarisation measurements, as well. None of these are very statistically significant, and some may be confirmed by other measurements, but there are enough of them that caution makes sense. From only a few days thinking about the results (and not yet really sitting down and going through the papers in great depth), it’s hard to make detailed judgements. It seems like the team have been careful enough that it’s hard to imagine the results going away completely, but easy to imagine lots of ways in which they could be wrong in detail.

But this skepticism from me and others is a good thing, even for the BICEP2 team: they will want their results scrutinised by the community. And the rest of us in the community will want the opportunity to reproduce the results. First, we’ll try to dig into the BICEP2 results themselves, making sure that they’ve done everything as well as possible. But over the next months and years, we’ll want to reproduce them with other experiments.

First, of course, will be Planck. Since I’m on Planck, there’s not much I can say here, except that we expect to release our own polarisation data and cosmological results later this year. This paper (Efstathiou and Gratton 2009) may be of interest….

Next, there are a bunch of ground- and balloon-based CMB experiments gathering data and/or looking for funding right now. The aforementioned Polarbear will continue, and I’m also involved with the EBEX team which hopes to fly a new balloon to probe the CMB polarisation again in a few years. In the meantime, there’s also ACT, SPIDER, SPT, and indeed the successor to BICEP itself, called the Keck array, and many others besides. Eventually, we may even get a new CMB satellite, but don’t hold your breath…

Rumour-mongering

I first heard about the coming BICEP2 results in the middle of last week, when I was up in Edinburgh and received an email from a colleague just saying “r=0.2?!!?” I quickly called to ask what he meant, and he transmitted the rumour of a coming BICEP detection, perhaps bolstered by some confirmation from their successor experiment, the Keck Array (which does in fact appear in their paper). Indeed, such a rumour had been floating around the community for a year or so, but most of us thought it would turn out to be spurious. But very quickly last week, we realised that this was for real. It became most solid when I had a call from a Guardian journalist, who managed to elicit some inane comments from me, before anything was known for sure.

By the weekend, it became clear that there would be an astronomy-related press conference at Harvard on Monday, and we were all pretty sure that it would be the BICEP2 news. The number r=0.20 was most commonly cited, and we all figured it would have an error bar around 0.06 or so — small enough to be a real detection, but large enough to leave room for error (but I also heard rumours of r=0.075).

By Monday morning, things had reached whatever passes for a fever pitch in the cosmology community: twitter and Facebook conversations, a mention on BBC Radio 4’s Today programme, all before the official title of the press conference was even announced: “First Direct Evidence for Cosmic Inflation”. Apparently, other BBC journalists had already had embargoed confirmation of some of the details from the BICEP2 team, but the embargo meant they couldn’t participate in the rumour-spreading.

I was traveling during most of this time, fielding occasional calls from journalists (there aren’t that many CMB specialists within easy reach of the London-based media), though, unfortunately for my ego, I wasn’t able to make it onto any of Monday night’s choice TV spots.

By the time of the press conference itself, the cosmology community had self-organised: there was a Facebook group organised by Fermilab’s Scott Dodelson, which pretty quickly started dissecting the papers and was able to follow along with the press conference as it happened (despite the fact that most of us couldn’t get onto the website — one of the first times that the popularity of cosmology has brought down a server).

At the time, I was on a series of trains from Loch Lomond to Glasgow, Edinburgh and finally on to London, but the Facebook group made it possible to keep up (from a tech standpoint, it’s surprising that we didn’t do this on the supposedly more capable Google Plus platform, but the sociological fact is that more of us are on, and use, Facebook). It was great to be able to watch, and participate in, the real-time discussion of the papers (which continues on Facebook as of now). Cosmologists have been teasing out possible inconsistencies (some of which I alluded to above), trying to understand the implications of the results if they’re right — and thinking about the next steps. IRL, now that I’m back at Imperial, we’ve been poring over the papers in yet more detail, trying to work out exactly how they’ve gathered and analysed their data, and seeing what parts we want to try to reproduce.

Aftermath

Physics moves fast nowadays: as of this writing, about 72 hours after the announcement, there are 16 papers mentioning the BICEP2 results on the physics arXiv (it’s a live search, so the number will undoubtedly grow). Most of them attempt to constrain various early-Universe models in the light of the r=0.20 result — some with some amount of statistical rigour, others just pointing out various models in which that value is more or less easy to get. (I’ve obviously spent too much time on this post and not enough writing papers.)

It’s also worth collecting, if only for my own future reference, some of the media coverage of the results:

For more background, you can check out

Today was the deadline for submitting so-called “White Papers” proposing the next generation of the European Space Agency satellite missions. Because of the long lead times for these sorts of complicated technical achievements, this call is for launches in the faraway years of 2028 or 2034. (These dates would be harder to wrap my head around if I weren’t writing this on the same weekend that I’m attending the 25th reunion of my university graduation, an event about which it’s difficult to avoid the clichéd thought that May, 1988 feels like the day before yesterday.)

At least two of the ideas are particularly close to my scientific heart.

The Polarized Radiation Imaging and Spectroscopy Mission (PRISM) is a cosmic microwave background (CMB) telescope, following on from Planck and the current generation of sub-orbital telescopes like EBEX and PolarBear: whereas Planck has 72 detectors observing the sky at nine frequencies, PRISM would have more than 7000 detectors working in a similar way to Planck over 32 frequencies, along with another set observing 300 narrow frequency bands, and another instrument dedicated to measuring the spectrum of the CMB in even more detail. Combined, these instruments would allow a wide variety of cosmological and astrophysical goals to be pursued, concentrating on more direct observations of early-Universe physics than is possible with current instruments, in particular the possible background of gravitational waves from inflation, and the small correlations induced by the physics of inflation and other physical processes in the history of the Universe.

The eLISA mission is the latest attempt to build a gravitational radiation observatory in space, observing astrophysical sources rather than the primordial background affecting the CMB, using giant lasers to measure the distance between three separate free-floating satellites a million kilometres apart from one another. As a gravitational wave passes through the triangle, it bends space and effectively changes the distance between them. The trio would thereby be sensitive to the gravitational waves produced by small, dense objects orbiting one another, objects like white dwarfs, neutron stars and, most excitingly, black holes. This would give us a probe of physics in locations we can’t see with ordinary light, and in regimes that we can’t reproduce on earth or anywhere nearby.

In the selection process, ESA is supposed to take into account the interests of the community. Hence both of these missions are soliciting support, of active and interested scientists and also the more general public: check out the sites for PRISM and eLISA. It’s a tough call. Both cases would be more convincing with a detection of gravitational radiation in their respective regimes, but the process requires putting down a marker early on. In the long term, a CMB mission like PRISM seems inevitable — there are unlikely to be any technical showstoppers — it’s just a big telescope in a slightly unusual range of frequencies. eLISA is more technically challenging: the LISA Pathfinder effort has shown just how hard it is to keep and monitor a free-floating mass in space, and the lack of a detection so far from the ground-based LIGO observatory, although completely consistent with expectations, has kept the community’s enthusiasm lower. (This will likely change with Advanced LIGO, expected to see many hundreds of sources as soon as it comes online in 2015 or thereabouts.)

Full disclosure: although I’ve signed up to support both, I’m directly involved in the PRISM white paper.

Planck 2013: the PR

Yesterday’s release of the Planck papers and data wasn’t just aimed at the scientific community, of course. We wanted to let the rest of the world know about our results. The main press conference was at ESA HQ in Paris, and there was a smaller event here in London run by the UKSA, which I participated in as part of a panel of eight Planck scientists.

The reporters tried to keep us honest, asking us to keep simplifying our explanations so that they — and their readers — could understand them. We struggled with describing how our measurements of the typical size of spots in our map of the CMB eventually led us to a measurement of the age of the Universe (which I tried to do in my previous post). This was hard not only because the reasoning is subtle, but also because, frankly, it’s not something we care that much about: it’s a model-dependent parameter, something we don’t measure directly, and doesn’t have much of a cosmological consequence. (I ended up on the phone with the BBC’s Pallab Ghosh at about 8pm trying to work out whether the age has changed by 50 or 80 million years, a number that means more to him and his viewers than to me and my colleagues.)

There are pieces by the reporters who asked excellent questions at the press conference, at The Guardian, The Economist and The Financial Times, as well as one behind the (London) Times paywall by Hannah Devlin who was probably most rigorous in her requests for us to simplify our explanations. I’ll also point to NPR’s coverage, mostly since it is one of the few outlets to explicitly mention the topology of the Universe which was one of the areas of Planck science I worked on myself.

Aside from the press conference itself, the media were fairly clamouring for the chance to talk about Planck. Most of the major outlets in the UK and around Europe covered the Planck results. Even in the US, we made it onto the front page of the New York Times. Rather than summarise all of the results, I’ll just self-aggrandizingly point to the places where I appeared: a text-based preview from the BBC, and a short quote on video taken after the press conference, as well as one on ITV. I’m most proud of my appearance with Tom Clarke on Channel 4 News — we spent about an hour planning and discussing the results, edited down to a few minutes including my head floating in front of some green-screen astrophysics animations.

Now that the day is over, you can look at the results for yourself at the BBC’s nice interactive version, or at the lovely Planck Chromoscope created by Cardiff University’s Dr Chris North, who donated a huge amount of his time and effort to helping us make yesterday a success. I should also thank our funders over at the UK Space Agency, STFC and (indirectly) ESA — Planck is big science, and these sorts of results don’t come cheap. I hope you agree that they’ve been worth it.

Planck 2013: the science

If you’re the kind of person who reads this blog, then you won’t have missed yesterday’s announcement of the first Planck cosmology results.

The most important is our picture of the cosmic microwave background itself:

Planck CMB node full image

But it takes a lot of work to go from the data coming off the Planck satellite to this picture. First, we have to make nine different maps, one at each of the frequencies at which Planck observes, from 30 GHz (with a wavelength of 1 cm) up to 857 GHz (0.35 mm) — note that the colour scales here are the same:

30GHz · 143GHz · 857GHz

At low and high frequencies, these are dominated by the emission of our own galaxy, and there is at least some contamination over the whole range, so it takes hard work to separate the primordial CMB signal from the dirty (but interesting) astrophysics along the way. In fact, it’s sufficiently challenging that the team uses four different methods, each with different assumptions, to do so, and the results agree remarkably well.

In fact, we don’t use the above CMB image directly to do the main cosmological science. Instead, we build a Bayesian model of the data, combining our understanding of the foreground astrophysics and the cosmology, and marginalise over the astrophysical parameters in order to extract as much cosmological information as we can. (The formalism is described in the Planck likelihood paper, and the main results of the analysis are in the Planck cosmological parameters paper.)

The main tool for this is the power spectrum, a plot which shows us how the different hot and cold spots on our CMB map are distributed:

PlanckCl

In this plot, the left-hand side (low ℓ) corresponds to large angles on the sky and high ℓ to small angles. Planck’s results are remarkable for covering this whole range from ℓ=2 to ℓ=2500: the previous CMB satellite, WMAP, had a high-quality spectrum out to ℓ=750 or so; ground- and balloon-based experiments like SPT and ACT filled in some of the high-ℓ regime.
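As a toy illustration of that millions-of-pixels-to-a-few-thousand-numbers compression, here is a sketch using the community healpy package with a made-up input spectrum (nothing like the real analysis, which must also handle noise, beams, masks and foregrounds):

```python
import numpy as np
import healpy as hp

nside, lmax = 512, 1500
ells = np.arange(lmax + 1)
cl_in = 1.0 / (ells + 10.0) ** 2               # a made-up input power spectrum
cmb_map = hp.synfast(cl_in, nside, lmax=lmax)  # simulate a Gaussian CMB sky
cl_out = hp.anafast(cmb_map, lmax=lmax)        # estimate C_ell from the map
print(f"{cmb_map.size} pixels -> {cl_out.size} numbers")  # 3145728 -> 1501
```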

It’s worth marvelling at this for a moment, a triumph of modern cosmological theory and observation: our theoretical models fit our data from scales of 180° down to 0.1°, each of those bumps and wiggles a further sign of how well we understand the contents, history and evolution of the Universe. Our high-quality data have refined our knowledge of the cosmological parameters that describe the universe, decreasing the error bars by a factor of several on the six parameters that describe the simplest ΛCDM universe. Moreover, and maybe remarkably, the data don’t seem to require any additional parameters beyond those six: for example, despite previous evidence to the contrary, the Universe doesn’t need any additional neutrinos.

The quantity most well-measured by Planck is related to the typical size of spots in the CMB map; it’s about a degree, with an error of less than one part in 1,000. This quantity has changed a bit (by about the width of the error bar) since the previous WMAP results. This, in turn, causes us to revise our estimates of quantities like the expansion rate of the Universe (the Hubble constant), which has gone down, in fact by enough that it’s interestingly different from its best measurements using local (non-CMB) data, from more or less direct observations of galaxies moving away from us. Both methods have disadvantages: for the CMB, it’s a very indirect measurement, requiring imposing a model upon the directly measured spot size (known more technically as the “acoustic scale” since it comes from sound waves in the early Universe). For observations of local galaxies, it requires building up the famous cosmic distance ladder, calibrating our understanding of the distances to further and further objects, few of which we truly understand from first principles. So perhaps this discrepancy is due to messy and difficult astrophysics, or perhaps to interesting cosmological evolution.

This change in the expansion rate is also indirectly responsible for the results that have made the most headlines: it changes our best estimate of the age of the Universe (slower expansion means an older Universe) and of the relative amounts of its constituents (since the expansion rate is related to the geometry of the Universe, which, because of Einstein’s General Relativity, tells us the amount of matter).

But the cosmological parameters measured in this way are just Planck’s headlines: there is plenty more science. We’ve gone beyond the power spectrum above to put limits upon so-called non-Gaussianities, which are signatures of the detailed way in which the seeds of large-scale structure in the Universe were initially laid down. We’ve observed clusters of galaxies, which give us yet more insight into cosmology (and which seem to show an intriguing tension with some of the cosmological parameters). We’ve measured the deflection of light by gravitational lensing. And in work that I helped lead, we’ve used the CMB maps to put limits on some of the ways in which our simplest models of the Universe could be wrong, possibly having an interesting topology or rotation on the largest scales.

But because we’ve scrutinised our data so carefully, we have found some peculiarities which don’t quite fit the models. From the days of COBE and WMAP, there has been evidence that the largest angular scales in the map, a few degrees and larger, have some “anomalies” — some of the patterns show strange alignments, some show unexpected variation between two different hemispheres of the sky, and there are some areas of the sky that are larger and colder than is expected to occur in our theories. Individually, any of these might be a statistical fluke (and collectively they may still be), but perhaps they are giving us evidence of something exciting going on in the early Universe. Or perhaps, to use a bad analogy, the CMB map is like the Zapruder film: if you scrutinise anything carefully enough, you’ll find things that look like a conspiracy, but turn out to have an innocent explanation.

I’ve mentioned eight different Planck papers so far, but in fact we’ve released 28 (and there will be a few more to come over the coming months, and many in the future). There’s an overall introduction to the Planck Mission, and papers on the data processing, observations of relatively nearby galaxies, and plenty more cosmology. The papers have been submitted to the journal A&A, they’re available on the arXiv, and you can find a list of them at the ESA site.

Even more important for my cosmology colleagues, we’ve released the Planck data as well, along with the code and other information necessary to understand it: you can get it from the Planck Legacy Archive. I’m sure we’ve only just begun to get exciting and fun science out of the data from Planck. And this is only the beginning of Planck’s data: just the first 15 months of observations, and just the intensity of the CMB. In the coming years we’ll be analysing (and releasing) more than one more year of data, and starting to dig into Planck’s observations of the polarised sky.

EBEX launched!

My colleagues, friends and collaborators on the EBEX project are in Antarctica this (Northern) winter preparing the telescope for launch. And today, they did it:

Just beautiful. (The video is from Asad Aboobaker, whose blog EBEX in Flight is documenting the mission from the field.)

EBEX is a next-generation CMB telescope, with hundreds of detectors measuring temperature and polarisation, which we hope will allow us to see the effects of an early epoch of cosmic inflation through the background of gravitational radiation that it produces. But right now, we are just hoping that EBEX catches some good winds in the Antarctic Polar Vortex and comes back around the continent after about two weeks of observing from 120,000 feet.

ICIC

Among the many other things I haven’t had time to blog about, this term we opened the new Imperial Centre for Inference and Cosmology, the culmination of several years of expansion in the Imperial Astrophysics group. In mid-March we had our in-house grand opening, with a ribbon-cutting by the group’s most famous alumnus.

Statistics and astronomy have a long history together, largely growing from the desire to predict the locations of planets and other heavenly bodies based on inexact measurements. In relatively modern times, that goes back at least to Legendre and Gauss who more or less independently came up with the least-squares method of combining observations, which can be thought of as based on the latter’s eponymous Gaussian distribution.

Our group already had a much shorter but still significant history in what has come to be called “astrostatistics”, having been involved with large astronomical surveys such as UKIDSS and IPHAS, and the many enabled by the infrared satellite telescope Herschel (and its predecessors ISO, IRAS and Spitzer). Along with my own work on the CMB and other applications of statistics to cosmology, the other “founding members” of ICIC include: my colleague Roberto Trotta, who has made important forays into the rigorous application of principled Bayesian statistics to problems in cosmology and particle physics; Jonathan Pritchard, who studies the distribution of matter in the evolving Universe and what that can teach us about its constituents and that evolution; and Daniel Mortlock, who has written about some of his work looking for rare and unusual objects elsewhere on this blog. We are lucky to have the initial membership of the group supplemented by Alan Heavens, who will be joining us over the summer and has a long history of working to understand the distribution of matter in the Universe throughout its history. This group will be joined by several members of the Statistics section of the Mathematics Department, in particular David van Dyk, David Hand and Axel Gandy.

One of the fun parts of starting up the new centre has been the opportunity to design our new suite of glass-walled offices. Once we made sure that there would be room for a couple of sofas and a coffee machine for the Astrophysics group to share, we needed something to allow a little privacy. For the main corridor, we settled on this:
IMG 2932
The left side is from the Hubble Ultra-Deep field (in negative), a picture about 3 arc minutes on a side (about the size of a dime or 5p coin held at arm’s length), the deepest — most distant — optical image of the Universe yet taken. The right side is our Milky Way galaxy as reconstructed by the 2MASS survey.

The final wall is a bit different:
IMG 2926
The middle panels show part of papers by each of those founding members of the group, flanked on the left and right side with the posthumously published paper by the Rev. Thomas Bayes who gave his name to the field of Bayesian Probability.

Of course, there has been some controversy about how we should actually refer to the place. Reading out the letters gives the amusing “I see, I see”, and IC2 (“I-C-squared”) has a nice feel and a bit of built-in mathematics, although it does sound a bit like the outcome of a late-90s corporate branding exercise (and the pedants in the group noted that technically it would then be the incorrect I×C×C unless we cluttered it with parentheses).

We’re hoping that the group will keep growing, and we look forward to applying our tools and ideas to more and more astronomical data over the coming years. One of the most important ways to do that, of course, will be through collaboration: if you’re an astronomer with lots of data, or a statistician with lots of ideas, or, like many of us, somewhere in between, please get in touch and come for a visit.


Unfortunately we don’t yet have a webpage for the Centre…

Planck Warms Up

Nearly two-and-a-half years after its launch, the end of ESA’s Planck mission has begun. (In fact, the BBC scooped the rest of the Planck collaboration itself with a story last week; you can read the UK take at the excellent Cardiff-led public Planck site.)

Planck’s High-Frequency Instrument (HFI) must be cooled to 0.1 degrees above absolute zero, maintained at this temperature by a series of refrigerators — which had been making Planck the coldest known object in space, colder than the 2.7 degrees to which the cosmic microwave background itself warms even the most remote regions of intergalactic space. The final cooler in the chain relies on a tank of the helium-3 isotope, which has finally run out, within days of its predicted lifetime — having given Planck more than twice as much time observing the Universe as its nominal 14-month mission.

The Low-Frequency Instrument (LFI) doesn’t require such cold temperatures, although it does use one of the earlier stages in the chain, the UK-built 4-degree cooler, as a reference against which to compare its measurements. LFI will, therefore, continue its measurements for the next half-year or so.

But our work, of course, goes on: we will continue to process and analyse Planck’s data, refining our maps of the sky, and get down to the real work of extracting a full sky’s worth of astrophysics and cosmology from our data. The first, preliminary, release of Planck data happened just one year ago, and yet more new Planck science will be presented at a conference in Bologna in a few months. The most exciting and important work will be getting cosmology from Planck data, which we expect to first present in early 2013, and likely in further iterations beyond that.

Bluffing about Mars

Saturday afternoon I received a call from a news producer at the BBC — could I come talk about the Mars Science Laboratory, launched earlier that day?

This was a tough question, publicity-monger though I am: I don’t actually know anything about Mars. I suppose to people outside of the very broad field of “astronomy”, studying the planets in the solar system is not very different from studying the Cosmic Microwave Background. After all, in both cases we use telescopes and satellites. But actually, the study of planets is much closer to geology (and, with increasing interest in the possibilities of life on those planets, to biology) than astronomy per se.

JPL Clean Room: Mars Science Lab II

Nonetheless, I did actually know a little about the Mars Science Laboratory and its Curiosity rover: when I was visiting the Jet Propulsion Laboratory earlier this year (to work on the CMB), our group took a quick field trip across the lab to the shop where the satellite was being assembled. Everything else I admit that I learned from wikipedia and NASA PR materials in the two hours before the interview.

One of the difficulties in getting to the surface of Mars arises from its tenuous atmosphere: parachutes aren’t a very efficient braking system. Instead, the igloo-shaped structure above and below is part of the “Sky Crane”, an amazing contraption that will hover and lower the rover gently down to the Martian surface. The Curiosity rover itself is possibly the most sophisticated robot we’ve ever put on another planet: about the size of a Mini, it will scoot around the neighbourhood of the landing site, performing experiments and sending the results back to the human race. My favourite instrument is the ChemCam, which will use “laser induced breakdown spectroscopy” to analyse rocks on the Martian surface. This is the very science-fictiony idea of shooting a high-energy laser beam at a rock, high enough to vaporise it, and then take a careful spectroscopic picture of that vapour, which scientists will decode and use to figure out the rock’s constituent elements. (Of course if there were any real Martians, they might not take kindly to our shooting laser beams at their rocks, in which case we may need to figure out a defense against the Illudium Q-36 Explosive Space Modulator.)

JPL Clean Room: Mars Science Lab I

Martians are, of course, one of the most important parts of MSL’s mission and the broader international program of exploring Mars. NASA is very careful to point out, however, that the point of the current mission is not to find life per se, but to help determine Mars’ habitability: could Mars now support life, or could it have in the past? The actual hunt for life will have to wait for a future mission.

Passion for Light

It’s been a busy few weeks, and that seems like a good excuse for my lack of posts. Since coming back from Scotland, I’ve been to:

  • Paris, for our bi-monthly Planck Core Team meetings, discussing the state of the data from the satellite, and our ongoing processing of it;

  • Cambridge, for yet more Planck, this time to discuss the papers that we as a collaboration will be writing over the next couple of years; and

  • Varenna, on Lake Como in northern Italy, for the Passion for Light meeting, sponsored by SIF (the Italian Physical Society) and EPS (the European Physical Society). The meeting was at least in part to introduce the effort to sponsor an International Year of Light in 2015, supported by the UN and international scientific organizations. My remit was “Light from the Universe”, which I took as an excuse to talk about (yes), Planck and the Cosmic Microwave Background. That makes sense because of what is revealed in this plot, a version of which I showed:

Extragalactic Backgrounds (after Dole and Bethermin)

This figure (made after an excellent one which will be in an upcoming paper by Dole and Bethermin) shows the intensity of the “background light” integrated over all sources in the Universe. The horizontal axis gives the frequency of electromagnetic radiation — from the radio at the far left, to the Cosmic Microwave Background (CMB), the Cosmic Infrared Background (CIB), optical light in the middle, and on to ultraviolet, x-ray and gamma-ray light. The height of each curve is proportional to the intensity of the background, the amount of energy falling on a square meter of area per second coming from a particular direction on the sky (for aficionados of the mathematical details, we actually plot the quantity νIν to take account of the logarithmic axis, so that the area under the curve gives a rough estimate of the total intensity) which is itself also proportional to the total energy density of that background, averaged over the whole Universe.
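In other words (a standard identity, independent of this particular figure),

$$ \int I_\nu \, \mathrm{d}\nu \;=\; \int \nu I_\nu \, \mathrm{d}(\ln \nu) , $$

so equal areas under a νIν curve plotted against a logarithmic frequency axis make equal contributions to the total intensity.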

Here on earth, we are dominated by the sun (or, indoors, by artificial illumination), but a planet is a very unusual place: most of the Universe is empty space, not particularly near a star. What this plot shows is that most of the background — most of the light in the Universe — isn’t from stars or other astronomical objects at all. Rather, it’s the Cosmic Microwave Background, the CMB, light from the early Universe, generated before there were any distinct objects at all, visible today as a so-called black body with temperature 2.73 degrees Kelvin. It also shows us that there is roughly the same amount of energy in infrared light (the CIB) as in the optical. This light doesn’t come directly from stars, but is re-processed as visible starlight is absorbed by interstellar dust, which heats up and in turn glows in the infrared. That is one of the reasons why Planck’s sister satellite Herschel, an infrared observatory, is so important: it reveals the fate of roughly half of the starlight ever produced. So we see that outside of the optical and ultraviolet, stars do not dominate the light of the Universe. The x-ray background comes from very hot gas, heated by falling into clusters of galaxies on large scales or by supernovae within galaxies, along with the very energetic collisions between particles that happen in the environments around black holes as matter falls in. We believe that the gamma-ray background also comes from accretion onto supermassive black holes at the centres of galaxies. But my talk centred on the yellow swathe of the CMB, although the only Planck data released so far are the relatively small contaminants from other sources in the same range of frequencies.

Other speakers in Varenna discussed microscopy, precision clocks, particle physics, the wave-particle duality, and the generation of very high-energy particles of light in the laboratory. But my favourite was a talk by Alessandro Farini, a Florentine “psychophysicist” who studies our perception of art. He showed the detailed (and extremely unphysical) use of light in art by even such supposedly realistic painters as Caravaggio, as well as using a series of optical illusions to show how our perceptions, which we think of as a simple recording of our surroundings, involve a huge amount of processing and interpretation before we are consciously aware of it. (As an aside, I was amused to see his collection of photographs with CMB Nobel Laureate George Smoot.)

And having found myself on the shores of Lake Como I took advantage of my good fortune:

Villa Monastero 5
(Many more pictures here.)

OK, this post has gone on long enough. I’ll have to find another opportunity to discuss speedy neutrinos, crashing satellites (and my latest appearance on the BBC World News to talk about the latter), not to mention our weeklong workshop at Imperial discussing the technical topic of photometric redshifts, and the 13.1 miles I ran last weekend.

EBEX in Flight

Many of my colleagues in the EBEX experiment have just lit out for the west. Specifically, the team is heading off to Palestine (pronounced “Palesteen”), Texas, to get the telescope and instrument ready for its big Antarctic long-duration balloon flight at the end of the year, when we hope to gather our first real scientific data and observe the temperature and polarisation of the cosmic microwave background (CMB) radiation. Unlike the Planck Satellite, which has a few dozen detectors changed little from those that flew on MAXIMA and BOOMERanG in the 1990s, EBEX can use more modern technology, and will fly with thousands of detectors, allowing us to achieve far greater sensitivity to the smallest variations in the CMB.

Asad, one of the EBEX postdocs, who has been involved in the experiment for several years, will be writing on the EBEX in Flight blog about the experiences down in Texas and, we hope, the future path of the team and telescope down to Antarctica. Follow along as the team drives across the country (at least twice), assembles and tests the instrument, breaks and fixes things, sleeps too little, works too hard, and, we hope, builds the most sensitive CMB experiment yet deployed. (And of course, eats cheeseburgers.)

And if you want a change from cosmology, you can instead follow along with another friend, Marc, who is trying to see if he can come to grips with writing on an iPad in the supposedly post-PC world, over at typelesswriter.

Tacos and Power Spectra in LA

One of the perks (perqs?) of academia is that occasionally I get an excuse to escape the damp grey of London winters. The Planck Satellite is an international collaboration and, although largely backed by the European Space Agency, it has a large contribution from US scientists, who built the CMB detectors for Planck’s HFI instrument and are significantly involved in the analysis of Planck data. Much of this work is centred at NASA’s famous Jet Propulsion Lab in Pasadena, and I was happy to rearrange my schedule to allow a February trip to sunny Southern California (I hope my undergraduate students enjoyed the two guest lectures during my absence).

Visiting California, I was compelled to take advantage of the local culture, which mostly seemed to involve meals. I ate as much Mexican food as I could manage, from fantastic $1.25 tacos from the El Taquito Mexicano Truck to somewhat higher-end fare at Tinga in LA proper. And I finally got to taste bánh mì, French-influenced Vietnamese sandwiches (which have arrived in London but I somehow haven’t tried them here yet). And I got to take in the view from the heights of Griffith Park:
Griffith Park view II
as well as down at street level:
Signs: S La Brea and 1st St, LA
And even better, I got to share these meals and views with old and new friends.

Of course I was mainly in LA to do science, but even at JPL we managed to escape our windowless meeting room and check out the clean-room where NASA is assembling the Mars Science Lab:

JPL Clean Room: Mars Science Lab I

The white pod-like structure is the spacecraft itself, which will parachute into Mars’ atmosphere in a few years, and from it will descend the circular “sky crane” currently parked behind it which will itself deploy the car-sized Curiosity Rover to do the real work of Martian geology, chemistry, climatology and (who knows?) biology.

CMB_ILC_PolMap.png

But my own work was for the semi-annual meeting of the Planck CTP working group (I’ve never been sure if it was intentional, but the name always seemed to me a sort of science pun, obliquely referring to the famous “CPT” symmetry of fundamental physics). In Planck, “CTP” refers to Cℓ from Temperature and Polarization: the calculation of the famous CMB power spectrum, which contains much of the cosmological information in the maps that Planck will produce. The spectrum allows us to compress the millions of pixels in a map of the CMB sky, such as this one from the WMAP experiment (the colors give the temperature or intensity of the radiation, the lines its polarization), into just a few thousand numbers we can plot on a graph.
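
To make that compression concrete, here is a toy sketch using the healpy package (my choice for illustration; the real Planck pipelines are vastly more sophisticated). We generate a Gaussian sky from a made-up spectrum, then recover the spectrum from the map:

```python
import numpy as np
import healpy as hp

nside = 256                       # 12 * nside^2 = 786,432 pixels
lmax = 3 * nside - 1

# A toy "theory" spectrum, flat in l(l+1)C_l/2pi, purely for illustration:
ell = np.arange(lmax + 1)
cl_theory = np.zeros(lmax + 1)
cl_theory[2:] = 2 * np.pi / (ell[2:] * (ell[2:] + 1))

cmb_map = hp.synfast(cl_theory, nside, lmax=lmax)   # Gaussian random sky
cl_hat = hp.anafast(cmb_map, lmax=lmax)             # spectrum back from the map

print(cmb_map.size, "pixels ->", cl_hat.size, "C_l values")
```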

Planck CTP cake

OK, this is not a publishable figure. Instead, it marks the tenth anniversary of the first CTP working group telecon in February 2001 (somewhat before I was involved in the group, actually). But given that we won’t be publishing Planck cosmology data for another couple of years, sugary spectra will have to do instead of the real ones in the meantime.

The work of the CTP group is exactly concerned with finding the best algorithms for translating CMB maps into these power spectra. They must take into account the complicated noise in the map, coming from our imperfect instruments, which observe the sky with finite resolution — that is, a telescope which smooths the sky on scales from about half a degree down to one-tenth of a degree — and with limited sensitivity — every measurement has a little bit of unavoidable noise added to it. Moreover, in between the CMB, produced 400,000 years after the Big Bang, and Planck’s instruments, observing today, is the entire rest of the Universe, which contains matter that both absorbs and emits (glows) in the microwaves which Planck observes. So in practice we need to deal with all of these effects simultaneously when reducing our maps down to power spectra. This is a surprisingly difficult problem: the naive, brute-force (Bayesian) solution requires a number of computer operations which scales like the cube of the number of pixels in the CMB map; at Planck’s resolution this is as many as 100 million pixels, and there are still no supercomputers capable of doing the resulting septillion (10^24) operations in a reasonable time. If we smooth the map, we can still solve the full problem; on small scales, we instead need useful approximations which exploit what we know about the data: chiefly the very large number of points that contribute, and the so-called asymptotic theorems which say, roughly, that we can learn about the right answer by doing lots of simulations, which are much less computationally expensive.
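
For the curious, here is the brute-force approach in miniature (a toy of my own, not any actual Planck code): the exact Gaussian likelihood of a map needs a determinant and a linear solve of the full pixel-pixel covariance matrix, and that is where the cube comes from:

```python
import numpy as np

def lnlike(data, signal_cov, noise_cov):
    """Exact Gaussian log-likelihood of a map with npix pixels.
    The determinant and the solve each scale as O(npix^3): fine for
    the toy below, hopeless at Planck's ~1e8 pixels, since
    (1e8)^3 = 1e24 operations."""
    cov = signal_cov + noise_cov                  # npix x npix matrix
    _, logdet = np.linalg.slogdet(cov)
    chi2 = data @ np.linalg.solve(cov, data)
    return -0.5 * (logdet + chi2 + data.size * np.log(2 * np.pi))

# Toy map: 1000 pixels of pure white noise, no signal.
npix = 1000
rng = np.random.default_rng(0)
print(lnlike(rng.standard_normal(npix), np.zeros((npix, npix)), np.eye(npix)))
```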

At the required levels of both accuracy and precision, the results depend on all of the details of the data processing and the algorithm: How do you account for the telescope’s optics and the pixelization of the sky? How do you model the noise in the map? How do you remove those pixels contaminated by astrophysical emission or absorption? All of this is compounded by the necessary (friendly) scientific competition: it is the responsibility of the CTP group to make recommendations for how Planck will actually produce its power spectra for the community and, naturally, each of us wants our own algorithm or computer program to be used — to win. So these meetings are as much about politics as science, but we can hope that the outcome is that all the codes are raised to an appropriate level, and that we can then make the decisions on non-scientific grounds (ease of use, flexibility, speed, etc.) that will produce the high-quality scientific results for which we designed and built Planck, and on which we have worked for the last decade or more.

Les autres choses (scientifique)

I’ve been meaning to give a shout-out to my colleagues on the ADAMIS team at the APC (AstroParticule et Cosmologie) Lab at the Université Paris 7 for a while: in addition to doing lots of great work on Planck, EBEX, PolarBear and other important CMB and cosmology experiments, they’ve also been running a group blog since the Autumn, Paper(s) of the Week et les autres choses (scientifique) which dissects some of the more interesting work to come out of the cosmology community. In particular, one of my favorite collaborators has written an extremely astute analysis of what, exactly, we on the Planck team released in our lengthy series of papers last month (which I have already discussed in a somewhat more boosterish fashion).

Planck: First results

The Satellite now known as the Planck Surveyor was first conceived in the mid-1990s, in the wake of the results from NASA’s COBE Satellite, the first to detect primordial anisotropies in the Cosmic Microwave Background (CMB), light from about 400,000 years after the big bang. (I am a relative latecomer to the project, having only joined in about 2000.)

After all this time, we on the team are very excited to release our first scientific results. These take the form of a catalog of sources detected by Planck, along with 25 papers discussing the catalog as well as the more diffuse pattern of radiation on the sky.

Planck is the very first instrument to observe the whole sky in nine bands of light with wavelengths from about 1/3 of a millimeter up to one centimeter, an unprecedented range. In fact, this first release of data and papers discusses Planck as a tool for astrophysics — as a telescope observing distant galaxies and clusters of galaxies as well as our own Galaxy, the Milky Way. All of these glow in Planck’s bands (indeed they dominate over the CMB in most of them), and with our high-sensitivity all-sky maps we have the opportunity to do astronomy with Planck, the best microwave telescope ever made. Indeed, to get to this point, we actually have to separate out the CMB from the other sources of emission and, somewhat perversely, actively remove it from the data we are presenting.
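
In the more common frequency units, those nine bands run from 30 GHz up to 857 GHz; the conversion is just ν = c/λ, as a quick check confirms (0.35 mm and 1 cm are my rounded endpoints for “about 1/3 of a millimeter” and “one centimeter”):

```python
from scipy.constants import c  # speed of light in m/s

for wavelength_m in (0.35e-3, 1.0e-2):   # ~1/3 mm and 1 cm
    print(f"{wavelength_m * 1e3:g} mm -> {c / wavelength_m / 1e9:.0f} GHz")
# 0.35 mm -> 857 GHz;  10 mm -> 30 GHz
```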

Over the last year, then, we on the Planck team have written about 25 papers to support this science; a few of them are about the mission as a whole, the instruments on board Planck, and the data processing pipelines that we have written to produce our data. Then there are a few papers discussing the data we are making available, the Early Release Compact Source Catalog and the various subsets discussing separately objects within our own Milky Way Galaxy as well as more distant galaxies and clusters of galaxies. The remaining papers give our first attempts at analyzing the data and extracting the best science possible.

Most of the highlights in the current papers provide confirmation of things that astronomers have suspected, thanks to Planck’s high sensitivity and wide coverage. It has long been surmised that most stars in the Universe are formed in locations shrouded by dust, and hence not visible to optical telescopes. Rather, the birth of stars heats the dust to temperatures much lower than that of stars, but much higher than that of the cold dust far from star-forming regions. This warm dust radiates in Planck’s bands, seen at lower and lower frequencies for more and more distant galaxies (due to the redshift of light from these faraway objects). For the first time, Planck has observed this Cosmic Infrared Background (CIB) at frequencies that may correspond to galaxies forming when the Universe was less than 15% of its current age, less than 2 billion years after the big bang. Here is a picture of the CIB at various places around the sky, specifically chosen to be as free as possible of other sources of emission:
Cosmic Infrared Background
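
Incidentally, you can put a rough number on “less than 15% of its current age”: with the astropy package (my choice of tool, using its built-in present-day cosmological parameters, so purely illustrative) you can ask what redshift corresponds to an age of 2 billion years:

```python
import astropy.units as u
from astropy.cosmology import Planck18, z_at_value

# At what redshift was the Universe only 2 billion years old?
z = z_at_value(Planck18.age, 2 * u.Gyr)
print(f"z ~ {float(z):.1f}")   # roughly z ~ 3.3, so wavelengths are stretched ~4x
```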

Another exciting result has to do with the properties of dust in our own Milky Way Galaxy. This so-called cosmic dust is known to be made of very tiny grains, from small agglomerations of a few molecules up to those a few tens of micrometers across. Ever since the mid-1990s, there has been some evidence that this dust emits radiation at millimeter wavelengths that the simplest models could not account for. One idea, actually first proposed in the 1950s, is that some of the dust grains are oblong, and receive enough of a kick from their environment that they spin at very high rates, emitting radiation at a frequency related to that rotation. Planck’s observations seem to confirm this prediction quantitatively, seeing its effects in our galaxy. This image of the Rho Ophiuchi molecular cloud shows that the spinning dust emission at 30 GHz traces the same structures as the thermal emission at 857 GHz:
Spinning Dust

In addition, Planck has found more than twenty new clusters of galaxies, has mapped the dust and gas in the Milky Way in three dimensions, and uncovered cold gas in nearby galaxies. And this is just the beginning of what Planck is capable of. We have not yet begun to discuss the cosmological implications, nor Planck’s ability to measure not just the intensity of light, but also its polarization.

Of course the most important thing we have learned so far is how hard it is to work in a team of 400 or so scientists, who — myself included — like neither managing nor being managed (and are likewise not particularly skilled at either). I’ve been involved in a small way in the editing process, shepherding just a few of those 25 papers to completion, paying attention to the language and presentation as much as the science. Given the difficulties, I am relatively happy with the results — the papers can be downloaded directly from ESA, will be available on the arXiv on 12 January 2011, and will eventually be published in the journal Astronomy and Astrophysics. It will be very interesting to see how we manage this in two years, when we may have as many as a hundred or so papers at once. Stay tuned.

Beautiful Evidence

One excuse for not blogging over the last month was a couple of weeks spent in North America, first in and around New York and New Jersey, visiting my family, and then a stop in Montreal for the annual collaboration meeting for the EBEX CMB balloon project, which we expect to launch on its science mission from Antarctica in about a year (alas I will be most likely minding the fort back here in Britain rather than joining my adventurous colleagues in the frozen South).

But while in New York I got to attend my first proper art auction, one with a very scientific bent — Beautiful Evidence: The Library of Edward Tufte. Tufte is something of an “info-guru”; in a series of gorgeously produced books, he has talked about techniques for translating numbers and words into graphics. Although he’s got an over-strong aversion to computer graphics (and especially to powerpoint), much of his advice is right-on (and rarely heeded).

In the course of selling his books and giving regular, well-attended courses (and, latterly, working for the President), I expect that Tufte (who started out as a Professor of Statistics at Yale) must have amassed a reasonable nest egg, ploughed back into books, pamphlets, artwork and posters. The 127 or so lots cover everything from science and mathematics to dance and fine art.

I was most interested in the scientific books and manuscripts, and the wonderful thing about auctions is that you can play with — sorry, I mean inspect — the items on offer. I couldn’t resist:

Huygens' Cosmotheoros

That’s me, holding Christiaan Huygens’ Cosmotheoros from 1698. Amazingly, it was one of the few items not to make its reserve price, under $1000 — I could have afforded it with only a little credit. But the most expensive item was an original of Galileo’s Sidereal Messenger, from which has sprung all of astronomy, most of physics, much of science, and indeed a lot of the society in which we live. Given that, $662,000 doesn’t seem unreasonable.

In between those two extremes was another item I was lucky enough to hold:

Newton's Principia

A third edition of Isaac Newton’s Philosophiae Naturalis Principia Mathematica, which went for $16,250. This is the final edition printed during Newton’s lifetime, albeit with edits by one Henry Pemberton, which became “the basis for all subsequent editions” (and was notable for having lost all references to Newton’s rival in the creation of the calculus, Leibniz). Like Galileo’s, it is one of the founding texts of modern science. But scientific progress, all that “standing on the shoulders of giants”, has the slightly strange effect that such books are often mentioned, but rarely read. It is easier to learn Newton’s laws from a twenty-first century textbook (not to mention wikipedia) than from the original sources. Unlike many other such books, the Principia remains almost entirely mathematically and factually correct, but it is written in such a style — using geometry and pictures instead of equations, not to mention being in Latin — that even modern physicists find it hard to follow. Partially to ameliorate this (and partially to prove that he was one of the few people who could manage the task), the great astrophysicist S. Chandrasekhar decided, in the early 1990s, to produce an edition of “Newton’s Principia for the Common Reader”, translating Newton’s geometry into modern equations. (Needless to say, the book makes impressive demands upon the supposed “common reader”.) We could all do worse than to spend some time trying to get into Newton’s head (or Chandra’s).

Swedish Statistics

[Apologies to those of you who may have seen an inadvertently-published unfinished version of this post]

I’ve just returned from a week at the annual meeting of the Institute of Mathematical Statistics in Gothenburg, Sweden. It’s always instructive to go to meetings outside of one’s specialty, outside of the proverbial comfort zone. I’ve been in my own field long enough that I’m used to feeling like one of the popular kids, knowing and being known by most of my fellow cosmologists — it’s a good corrective to an overinflated sense of self-worth to be somewhere where nobody knows your name. Having said that, I was a bit disappointed in the turnout for our session, “Statistics, Physics and Astrophysics”. Mathematical statistics is a highly specialized field, but with five or more parallel sessions going on at once, most attendees could find something interesting; even so, cross-cutting sessions of supposedly general interest — our talks were by physicists, not statisticians — never had the chance to draw a wide audience.

The meeting itself, outside of that session, was very much about mathematical statistics, more about lemmas and proofs than practical data analysis. Of course these theoretical underpinnings are crucial to the eventual practical work, although it’s always disheartening to see the mathematicians idealise a problem all out of recognition. For example, the mathematicians routinely assume that the errors on a measurement are independent and identically distributed (“iid” for short), but in practice this is rarely true in the data that we gather. (I should use this as an opportunity to mention my favourite statistics terms of art: homoscedastic and heteroscedastic, describing, respectively, identical and varying distributions.)
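
A tiny illustration of why the distinction matters (invented numbers, nothing to do with the conference talks): if each data point carries its own error bar, the heteroscedastic case, then the naive average throws away information that inverse-variance weighting recovers:

```python
import numpy as np

rng = np.random.default_rng(42)
n, truth = 1000, 1.0
sigma = rng.uniform(0.1, 2.0, size=n)            # heteroscedastic error bars
data = truth + sigma * rng.standard_normal(n)

naive = data.mean()                              # pretends the errors are iid
w = 1.0 / sigma**2                               # inverse-variance weights
weighted = np.sum(w * data) / np.sum(w)

print(f"naive:    {naive:.3f} +/- {np.sqrt(np.mean(sigma**2) / n):.3f}")
print(f"weighted: {weighted:.3f} +/- {np.sqrt(1.0 / np.sum(w)):.3f}")
```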

But there were more than a couple of interesting talks and sessions, mostly concentrating upon two of the most exciting — and newsworthy — intersections between statistical problems and the real world: finance and climate. How do we compare complicated, badly-sampled, real-world economic or climate data to complicated models which don’t pretend to capture the full range of phenomena? In what sense are the predictions inherently statistical, and in what sense are they deterministic? “Probability”, said de Finetti, the famous Bayesian statistician, “does not exist”, by which he meant that probabilities are statements about our knowledge of the world, not statements about the world. The world does, however, give sequences of values (stock prices, temperatures, etc.) which we can test our judgements against. This, in the financial realm, was the subject of Hans Föllmer’s Medallion Prize Lecture, which veered into the more abstract realm of stochastic integration, Martingales and Itō calculus along the way.

Another pleasure was the session chaired by Robert Adler. Adler is the author of a book called The Geometry of Random Fields, a book which has had a significant effect upon cosmology from the 1980s through today. A “random field” is something that you could measure over some regime of space and time, but whose actual value your theory doesn’t determine, only its statistical properties, such as its average and the way the values at different points are related to one another. The best example in cosmology is the CMB itself — none of our theories predict the temperature at any particular place, but the theories that have survived our tests make predictions about the mean value and about the product of temperatures at any two points — this is called the correlation function, and a random field in which only the mean and correlation function can be specified is called a Gaussian random field, after the Gaussian distribution that is the mathematical version of this description. Indeed, Adler uses the CMB as one of the examples on his academic home page. But there are many more applications besides: the session featured talks on brain imaging and on Google’s use of random fields to analyze data about the way people look at their web pages.
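
Generating a realisation of a Gaussian random field is surprisingly easy, which is part of why they are so useful: draw white noise in Fourier space and scale it by the square root of the power spectrum you want. A sketch of my own (the power-law spectrum is an arbitrary choice):

```python
import numpy as np

def gaussian_random_field(n=256, alpha=-2.5, seed=1):
    """2-D Gaussian random field with power spectrum P(k) ~ k^alpha.
    The theory fixes only the spectrum; every seed gives a different sky."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                      # dodge the division by zero at k=0
    amp = k ** (alpha / 2.0)           # square root of the power spectrum
    amp[0, 0] = 0.0                    # enforce zero mean
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.fft.ifft2(noise * amp).real

field = gaussian_random_field()        # e.g. plt.imshow(field) to look at it
```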

Gothenburg itself was nice in that Scandinavian way: nice, but not terribly exciting, full of healthy, attractive people who seem pleased with their lot in life. The week of our meeting overlapped with two other big events in town. One was the World Library and Information Congress — you can only imagine the party atmosphere in a city filled with both statisticians and librarians! And adding to that, Gothenburg was hosting its summer kulturkalas festival of culture — the streets were filled with musicians and other performers to distract us from the mathematics.

BPol++

I spent part of this week in Paris (apparently at the same time as a large number of other London-based scientists who were here for other things) discussing whether the European CMB community should rally and respond to ESA’s latest call for proposals for a mission to be launched in the next open slot—which isn’t until around 2022.

As successful as Planck seems to be, and as fun as it is working with the data, I suspect that no one on the Planck team thinks that a 400-scientist, dispersed, international team coming from a dozen countries, each with its own politics and funding priorities, is the most efficient way to run such a project. But we’re stuck with it—no single European country can afford the better part of the billion euros that such a mission costs. Particle physics has been in this mode for the better part of fifty years, and arguably since the Manhattan Project, but it’s a new way of doing things — involving new career structures, new ways of evaluating research, new ways of planning, and a new concentration upon management — that we astrophysicists have to develop to answer our particular kinds of scientific questions.

But a longer discussion of “big science” is for another time. The next CMB satellite will probably be big, but the coming ESA call is officially for an “M-class” (for “medium”) mission, with a meagre (sic) 600 million euro cap. What will the astrophysical and cosmological community get for all this cash? How will it improve upon Planck?

Well, Planck has been designed to mine the cosmic microwave background for all of the temperature information available, the brightness of the microwave sky in all directions, down to scales of a few arcminutes, below which it becomes smooth. But light from the CMB also carries information about polarisation, essentially two more numbers we can measure at every point. Planck will measure some of this polarisation data, but we know that there will be much more to learn. We expect that this as-yet unmeasured polarisation can answer questions about the fundamental physics that affects the early universe and describes its content and evolution. What are the details of the early period of inflation that gave the observable Universe its large-scale properties and seeded the formation of structures in it—and did it happen at all? What are the properties of the ubiquitous and light neutrino particles, whose presence would have had a small but crucial effect on the evolution of structure?
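
Those two extra numbers are the Stokes parameters Q and U, which cosmologists usually repackage into so-called E and B modes on the sky. A toy of the bookkeeping in healpy (the input spectra here are invented; real analyses are far more careful):

```python
import numpy as np
import healpy as hp

nside, lmax = 128, 256
ell = np.arange(lmax + 1)

cl = np.zeros((4, lmax + 1))                      # TT, EE, BB, TE ("new" ordering)
cl[0, 2:] = 1.0 / (ell[2:] * (ell[2:] + 1.0))     # toy temperature spectrum
cl[1] = 0.01 * cl[0]                              # a little E-mode power, no B

i_map, q_map, u_map = hp.synfast(cl, nside, lmax=lmax, pol=True, new=True)
tt, ee, bb, te, eb, tb = hp.anafast([i_map, q_map, u_map], lmax=lmax)
# bb is consistent with zero here; detecting real B-modes is the hard part.
```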

The importance of these questions is driving us toward a fairly ambitious proposal for the next CMB mission. It will have a resolution comparable to that of Planck, but with many hundreds of individual detectors, compared to Planck’s many dozens—giving us over an order of magnitude increase in sensitivity to polarisation on the sky. Actually, even getting to this point took a good day or two of discussion. Should we instead make a cheaper, more focused proposal that would concentrate only on the question of inflation and in particular upon the background of gravitational radiation — observable as so-called “B-modes” in polarisation — that some theories predict? The problem with this proposal is that it is possible, or even likely, that it would produce what is known as a “null result”—that is, it wouldn’t see anything at all. Moreover, the current generation of ground- and balloon-based CMB experiments, including EBEX and PolarBear, which I am lucky enough to be part of, are in progress and should have results within the next few years, possibly scooping any too-narrowly designed future satellite.

So we will be broadening our case beyond these B-modes, and therefore making our design more ambitious, in order to make these further fundamental measurements. And, like Planck, we will be opening a new window on the sky for astrophysicists of all stripes, giving measurements of magnetic fields, the shapes of dust grains, and likely many more things we haven’t yet thought of.

One minor upshot of all this is that our original name, the rather dull “B-Pol”, is no longer appropriate. Any ideas?

The Planck Sky Previewed

The Planck Satellite was launched in May 2009, and started regular operations late last summer. This spring, we achieved an important milestone: the satellite has observed the whole sky.

To celebrate, the Planck team have released an image of the full sky. The telescope has detectors which can see the sky in nine bands at wavelengths ranging from 0.3 millimeters up to nearly a centimeter, out of which we have made this false-color image. The center of the picture is toward the center of the Galaxy, with the rest of the sphere unwrapped into an ellipse so that we can put it onto a computer screen (so the left and right edges are really the same points).

The microwave sky
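
The unwrapping into an ellipse, incidentally, is a Mollweide projection of the sphere; with the healpy package you can do the same to any all-sky map (here just smoothed noise as a stand-in, since the real maps aren’t yet public):

```python
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt

nside = 64
toy_sky = hp.smoothing(np.random.standard_normal(hp.nside2npix(nside)),
                       fwhm=np.radians(5.0))   # 5-degree smoothing
hp.mollview(toy_sky, title="The sphere unwrapped into an ellipse")
plt.show()
```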

At the longest and shortest wavelengths, our view is dominated by matter in our own Milky Way galaxy — this is the purple-blue cloud, mostly so-called galactic “cirrus” gas and dust, largely concentrated in a thin band running through the center of the image, which is the disk of our galaxy viewed from within.

In addition to this so-called diffuse emission, we can also see individual, bright blue-white objects. Some of these are within our galaxy, but many are themselves whole distant galaxies, viewed from distances of millions or even billions of light years. Here’s a version of the picture with some objects highlighted:

PLANCK_FSM_03_Black_Regions_v02_B.jpg

Even though Planck is largely a cosmology mission, we expect these galactic and extragalactic data to be invaluable to astrophysicists of all stripes. Buried in these pictures we hope to find information on the structure and formation of galaxies, on the evolution of very faint magnetic fields, and on the evolution of the most massive objects in the Universe, clusters of galaxies.

But there is plenty of cosmology to be done: we see the Cosmic Microwave Background (CMB) in the red and yellow splotches at the top and bottom — out of the galactic plane. We on the Planck team will be spending much of the next two years separating the galactic and extragalactic “foreground” emission from the CMB, and characterizing its properties in as much detail as we can. Stay tuned.

I admit that I was somewhat taken aback by the level of interest in these pictures: we haven’t released any data to the community, or written any papers. Indeed, we’ve really said nothing at all about science. Yet we’ve made it onto the front page of the Independent and even the Financial Times, and yours truly was quoted on the BBC’s website. I hope this is just a precursor to the excitement we’ll generate when we can actually talk about science, first early next year when we release a catalog of sources on the sky for the community to observe with other telescopes, and then in a couple of years time when we will finally drop the real CMB cosmology results.

Andrew Lange, Huan Tran

The cosmology community has had a terrible few months.

I am saddened to report the passing of Andrew Lange, a physicist from Caltech and one of the world’s preeminent experimental cosmologists. Among many other accomplishments, Andrew was one of the leaders of the BOOMERanG experiment, which made the first large-scale map of the Cosmic Microwave Background radiation with a resolution of less than one degree, sufficient to see the opposing action of gravity and pressure in the gas of the early Universe, and to use that to measure the overall density of matter, among many other cosmological properties. He has since been an important leader in a number of other experiments, notably the Planck Surveyor satellite and the Spider balloon-borne telescope, currently being developed to become one of the most sensitive CMB experiments ever built.

I learned about this tragedy on the same day that people are gathering in Berkeley, California, to mourn the passing of another experimental cosmologist, Huan Tran of Berkeley. Huan was an excellent young scientist, most recently deeply involved in the development of PolarBear, another of the current generation of ultra-sensitive CMB experiments. Huan led the development of the PolarBear telescope itself, currently being tested in the mountains of California, but soon to be deployed for real science on the Atacama plain in Chile. We on the PolarBear team are proud to name the PolarBear telescope after Huan Tran, a token of our esteem for him, and a small tribute to his memory.

My thoughts go out to the friends and family of both Huan and Andrew. I, and many others, will miss them both.

Planck's First Light

I’m happy to be able to point to ESA’s first post-launch press release from the Planck Surveyor Satellite.

Here is a picture of the area of sky that Planck has observed during its “First Light Survey”, superposed on an optical image of the Milky Way galaxy:

FIRST_LIGHT_SURVEY_skystrip_boxes_L.jpg

(Image credit: ESA, LFI and HFI Consortia (Planck); Background image: Axel Mellinger. More pictures are available on the UK Planck Site as well as in French.)

The last few months since the launch have been a lot of fun, getting to play with Planck data ourselves. Here at Imperial, our data-processing remit is fairly narrow: we compute and check how well the satellite is pointing where it is supposed to, and calculate the shape of its beam on the sky (i.e., how blurry its vision is). Nonetheless, just being able to work at all with this incredibly high-quality data is satisfying.

Planck scans the sky in individual rings, slowly stepping around the sky over the course of about seven months, and the nominal mission comprises two such full observations of the sky. Because of this, even the two weeks of “First Light Survey” data are remarkably powerful: we have seen a bit more than 5% of the sky with about half of the sensitivity that Planck is eventually meant to achieve (in fact, we hope to extend the mission beyond the initial 14 months). This is already comparable to the most powerful sub-orbital (i.e., ground- and balloon-based) CMB experiments to date.
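
The arithmetic behind “half the sensitivity” is the usual radiometer rule of thumb (an idealised scaling, not Planck’s actual noise budget): white detector noise averages down as one over the square root of the integration time, so reaching half the final sensitivity on a patch of sky takes only a quarter of the time that patch will eventually receive:

```python
import numpy as np

# Idealised white-noise scaling: noise ~ 1/sqrt(integration time),
# so sensitivity (1/noise) grows as sqrt(t).
time_fraction = 0.25           # a quarter of the eventual integration time...
print(np.sqrt(time_fraction))  # ...already gives 0.5 of the final sensitivity
```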

But a full scientific analysis will have to wait a while: after the 14 month nominal mission, we will have one year to analyze the data, and another year to get science out of it before we need to release the data alongside, we hope, a whole raft of papers. So stay tuned until roughly Autumn of 2012 for the next big Planck splash.