This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.
Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.
And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).
(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)
Planck 2018: the science
So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, which is probably Planck’s most significant limitation.
The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we encounter in our day-to-day lives, which make up only a few percent of the Universe.
Altogether, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10²⁰. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)
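To put that statement about flatness in symbols (a standard textbook relation, nothing specific to the Planck analysis): the Friedmann equation ties the expansion rate H to the total density ρ and the spatial curvature k, so a flat Universe is one whose density takes exactly the “critical” value:

```latex
H^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2}
\qquad\Longrightarrow\qquad
k = 0 \;\Longleftrightarrow\; \rho = \rho_{\rm crit} \equiv \frac{3 H^2}{8\pi G}
\;\Longleftrightarrow\; \Omega \equiv \frac{\rho}{\rho_{\rm crit}} = 1 .
```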
Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again:
(The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are shown with “one sigma” error bars, and the blue curve gives the best-fit model.
As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30, since the multipole 𝓁 is roughly the inverse of an angular scale on the sky), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
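As an illustration only (a toy, emphatically not the Planck likelihood code), here is a minimal sketch of the idealised full-sky, noise-free temperature likelihood that the real spectrum- and pixel-based likelihoods approximate; the function name and inputs are my own:

```python
import numpy as np

def minus_two_ln_like(cl_theory, cl_hat, lmin=2):
    """Toy full-sky, noise-free CMB temperature likelihood (up to a constant):
    -2 ln L = sum_l (2l+1) * [ Chat_l / C_l + ln C_l ],
    where Chat_l is the observed power spectrum and C_l is the theory being tested."""
    ells = lmin + np.arange(len(cl_theory))
    return np.sum((2 * ells + 1) * (cl_hat / cl_theory + np.log(cl_theory)))
```

The real analysis has to contend with partial sky coverage, anisotropic noise, and foregrounds, which is why the low-𝓁 and high-𝓁 likelihoods are so much more complicated than this.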
Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H₀ = (73.52 ± 1.62) km/s/Mpc (the units say how much faster something is receding from us, in km/s, for each megaparsec of distance), whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for the difference. Rather, we are forced to suspect that it is due to one or more of the experiments having some unaccounted-for source of error.
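To see why people call this significant, here is a quick back-of-envelope calculation, crudely assuming the two errors are independent and Gaussian:

```python
import numpy as np

H0_local, sig_local = 73.52, 1.62    # km/s/Mpc, distance-ladder value quoted above
H0_planck, sig_planck = 67.27, 0.60  # km/s/Mpc, Planck value quoted above

tension = abs(H0_local - H0_planck) / np.sqrt(sig_local**2 + sig_planck**2)
print(f"{tension:.1f} sigma")        # roughly 3.6 sigma
```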
The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.
(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)
Planck 2018: lessons learned
So, Planck has more or less lived up to its advance billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).
But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point of blinding is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case where insisting on the perfect would have been the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.
Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.
That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to get better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.
The first direct detection of gravitational waves was announced in February 2016 by the LIGO team, after decades of planning, building and refining their beautiful experiment. Since that time, the US-based LIGO has been joined by the European Virgo gravitational wave telescope (and more are planned around the globe).
The first four events that the teams announced were from the spiralling in and eventual mergers of pairs of black holes, with masses ranging from about seven to about forty times the mass of the sun. These masses are perhaps a bit higher than we expect to be typical, which might raise intriguing questions about how such black holes were formed and evolved, although even comparing the results to the predictions is a hard problem depending on the details of the statistical properties of the detectors and the astrophysical models for the evolution of black holes and the stars from which (we think) they formed.
Last week, the teams announced the detection of a very different kind of event, the collision of two neutron stars, each about 1.4 times the mass of the sun. Neutron stars are one possible end state of the evolution of a star, when its atoms are no longer able to withstand the pressure of the gravity trying to force them together. This was first understood by S Chandrasekhar in 1930, who realised that there was a limit to the mass of a star held up simply by the quantum-mechanical repulsion of the electrons at the outskirts of the atoms making up the star. When you surpass this mass, known, appropriately enough, as the Chandrasekhar mass, the star will collapse in upon itself, combining the electrons and protons into neutrons and likely releasing a vast amount of energy in the form of a supernova explosion. After the explosion, the remnant is likely to be a dense ball of neutrons, whose properties are actually determined fairly precisely by similar physics to that of the Chandrasekhar limit (discussed for this case by Oppenheimer, Volkoff and Tolman), giving us the magic 1.4 solar mass number.
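That magic number really does come out of a back-of-envelope calculation. Here is a rough sketch using the standard textbook expression (not a full stellar-structure computation); the constants and the Lane-Emden factor are conventional values, not anything from this post:

```python
import numpy as np

hbar, c, G = 1.0546e-34, 2.9979e8, 6.674e-11   # SI units
m_H = 1.6726e-27     # mass of a hydrogen nucleus, kg
M_sun = 1.989e30     # solar mass, kg
mu_e = 2.0           # nucleons per electron, appropriate for helium/carbon/oxygen matter
omega = 2.018        # Lane-Emden constant for an n = 3 polytrope

# Chandrasekhar mass: M_Ch = omega * sqrt(3*pi)/2 * (hbar*c/G)^(3/2) / (mu_e * m_H)^2
M_ch = omega * np.sqrt(3 * np.pi) / 2 * (hbar * c / G)**1.5 / (mu_e * m_H)**2
print(M_ch / M_sun)  # about 1.4 solar masses
```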
(Last week also coincidentally would have seen Chandrasekhar’s 107th birthday, and Google chose to illustrate their home page with an animation in his honour for the occasion. I was a graduate student at the University of Chicago, where Chandra, as he was known, spent most of his career. Most of us students were far too intimidated to interact with him, although it was always seen as an auspicious occasion when you spotted him around the halls of the Astronomy and Astrophysics Center.)
This process can therefore make a single 1.4 solar-mass neutron star, and we can imagine that in some rare cases we can end up with two neutron stars orbiting one another. Indeed, the fact that LIGO saw one, but only one, such event during its year-and-a-half run allows the teams to constrain how often that happens, albeit with very large error bars, between 320 and 4740 events per cubic gigaparsec per year; a cubic gigaparsec is about 3 billion light-years on each side, so these are rare events indeed. These results and many other scientific inferences from this single amazing observation are reported in the teams’ overview paper.
A series of other papers discuss those results in more detail, covering everything from the physics of neutron stars to limits on departures from Einstein’s theory of gravity (for more on some of these other topics, see this blog, or this story from the NY Times). As a cosmologist, the most exciting of the results was the use of the event as a “standard siren”, an object whose gravitational wave properties are well-enough understood that we can deduce the distance to the object from the LIGO results alone. Although the idea came from Bernard Schutz in 1986, the term “standard siren” was coined somewhat later (by Sean Carroll) in analogy to the (heretofore?) more common cosmological standard candles and standard rulers: objects whose intrinsic brightness or size is known, and so whose distances can be measured from their apparent brightness or size, just as you can roughly deduce how far away a light bulb is by how bright it appears, or how far away a familiar object or person is by how big it looks.
Gravitational wave events are standard sirens because our understanding of relativity is good enough that an observation of the shape of the gravitational wave pattern as a function of time can tell us the properties of its source. Knowing that, we also then know the amplitude of that pattern when it was released. Over the time since then, as the gravitational waves have travelled across the Universe toward us, the amplitude has gone down (just as further objects look dimmer, further sources “sound” quieter); the expansion of the Universe also causes the frequency of the waves to decrease — this is the cosmological redshift that we observe in the spectra of distant objects’ light.
Unlike LIGO’s previous detections of binary-black-hole mergers, this new observation of a binary-neutron-star merger was also seen in photons: first as a gamma-ray burst, and then as a “nova”: a new dot of light in the sky. Indeed, the observation of the afterglow of the merger by teams of literally thousands of astronomers in gamma and x-rays, optical and infrared light, and in the radio, is one of the more amazing pieces of academic teamwork I have seen.
And these observations allowed the teams to identify the host galaxy of the original neutron stars, and to measure the redshift of its light (the lengthening of the light’s wavelength due to the movement of the galaxy away from us). It is most likely a previously unexceptional galaxy called NGC 4993, with a redshift z=0.009, putting it about 40 megaparsecs away, relatively close on cosmological scales.
But this means that we can measure all of the factors in one of the most celebrated equations in cosmology, Hubble’s law: cz=H₀ d, where c is the speed of light, z is the redshift just mentioned, and d is the distance measured from the gravitational wave burst itself. This just leaves H₀, the famous Hubble Constant, giving the current rate of expansion of the Universe, usually measured in kilometres per second per megaparsec. The old-fashioned way to measure this quantity is via the so-called cosmic distance ladder, bootstrapping up from nearby objects of known distance to more distant ones whose properties can only be calibrated by comparison with those more nearby. But errors accumulate in this process and we can be susceptible to the weakest rung on the ladder (see recent work by some of my colleagues trying to formalise this process). Alternately, we can use data from cosmic microwave background (CMB) experiments like the Planck Satellite (see here for lots of discussion on this blog); the typical size of the CMB pattern on the sky is something very like a standard ruler. Unfortunately, it, too, needs to be calibrated, implicitly by other aspects of the CMB pattern itself, and so ends up being a somewhat indirect measurement. Currently, the best cosmic-distance-ladder measurement gives something like 73.24 ± 1.74 km/sec/Mpc, whereas Planck gives 67.81 ± 0.92 km/sec/Mpc; these numbers disagree by “a few sigma”, enough that it is hard to explain as simply a statistical fluctuation.
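Just to see the idea at its crudest, ignoring the peculiar-velocity and inclination complications discussed below (which the real analysis marginalises over), you can plug the numbers quoted above straight into Hubble’s law:

```python
c = 299792.458    # speed of light, km/s
z = 0.009         # redshift of NGC 4993, quoted above
d = 40.0          # gravitational-wave distance in Mpc, quoted above

print(c * z / d)  # roughly 67 km/s/Mpc, in the right ballpark but with large hidden errors
```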
Unfortunately, the new LIGO results do not solve the problem. Because we cannot observe the inclination of the neutron-star binary (i.e., the orientation of its orbit), this blows up the error on the distance to the object, due to the Bayesian marginalisation over this unknown parameter (just as the Planck measurement requires marginalization over all of the other cosmological parameters to fully calibrate the results). Because the host galaxy is relatively nearby, the teams must also account for the fact that the redshift includes the effect not only of the cosmological expansion but also the movement of galaxies with respect to one another due to the pull of gravity on relatively large scales; this so-called peculiar velocity has to be modelled which adds further to the errors.
This procedure gives a final measurement of H₀ = 70.0 +12.0/−8.0 km/s/Mpc, with the full shape of the probability curve shown in the Figure, taken directly from the paper. Both the Planck and distance-ladder results are consistent with these rather large error bars. But this is calculated from a single object; as more of these events are seen these error bars will go down, typically by something like the square root of the number of events, so it might not be too long before this is the best way to measure the Hubble Constant.
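The scaling is easy to estimate: if a single event gives an error of very roughly 10 km/s/Mpc, and the errors really do shrink like one over the square root of the number of events, then matching the roughly 1 km/s/Mpc precision of the other methods needs on the order of a hundred events:

```python
sigma_single, sigma_target = 10.0, 1.0     # km/s/Mpc, rough illustrative figures only
print((sigma_single / sigma_target) ** 2)  # ~100 standard-siren events
```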
[Apologies: too long, too technical, and written late at night while trying to get my wonderful not-quite-three-week-old daughter to sleep through the night.]
A quick alert to any friends or followers in or near Rotterdam this weekend: I’ll be participating in a series of dialogs supporting The Humans, a piece of theatrical performance art sponsored by and shown at the Witte de With Center for Contemporary Art. The production will encompass a series of symposia alongside the process of writing and design, culminating in the play itself, to be staged next Spring. It has arisen from the correspondence between artist Alexandre Singh and Defne Ayas, the new director of the Witte de With:
Set before the Earth’s beginning in a proto-world populated by spirits, gods, artisans and men of clay and plaster, The Humans — with ‘creation’ as its central theme — is modelled after the ancient Greek plays of Aristophanes. Whilst the theatrical references are ancient, the satire is utterly modern: religion, morality and human hubris are all mocked with an irreverent and biting tone.
Beyond this rather ambitious setup, I don’t know much about it. But this weekend I hope to learn more: Saturday, I’ll be on stage, talking with Singh about cosmology, in the first of a series of “Causeries”, this one with the daunting title “The Creation: On Cosmogony and Cosmology” — the other participants are philosophers and historians, and I’m looking forward to seeing whether there is a common thread through the different discussions, and how (or if) they reflect back on the eventual play itself.
This week I received the results of the “Student On-Line Evaluations” for my cosmology course. As I wrote a few weeks ago, I thought that this, my fourth and final year teaching the course, had gone pretty well, and I was happy to see that the evaluations bore this out: 80% of the responses were “good” or “very good”, the remainder “satisfactory” (and no “poor” or “very poor”, I’m happy to say). I was disappointed that only 23 students (fewer than half of the total) registered their opinion on subjects like “The structure and delivery of the lectures” and “the interest and enthusiasm generated by the lecturer”.
The weakest spot was “The explanation of concepts given by the lecturer” with 5 for satisfactory, 11 for good and 7 for very good — I suppose this reflects the actual difficulty of some of the material. In the second half of the course I need to draw more heavily on concepts from particle physics and thermodynamics that undergraduate students may not have encountered before, concepts that are necessary in order to understand how the Universe evolved from its hot, dense and simple early state to today’s wonderfully complex mix of radiation, gas, galaxies, dark matter and dark energy. Without several days to devote to the nuclear physics of big-bang nucleosynthesis, or the even longer time needed to really explain the quantum field theory in curved space-time that would be necessary to get a quantitative understanding of the density perturbations produced by an early epoch of cosmic inflation, the best I can do is give a taste of these ideas.
And I really appreciated comments such as “Work with other lecturers to show them how it’s done”. So thanks to all of my students — and good luck on the exam in early June.
Somehow I’ve managed to forget my usual end-of-term post-mortem of the year’s lecturing. I think perhaps I’m only now recovering from 11 weeks of lectures, lab supervision, and tutoring, alongside a very busy time analysing Planck satellite data.
But a few weeks ago term ended, and I finished teaching my undergraduate cosmology course at Imperial, 27 lectures covering 14 billion years of physics. It was my fourth time teaching the class (I’ve talked about my experiences in previous years here, here, and here), but this will be the last time during this run. Our department doesn’t let us teach a course more than three or four years in a row, and I think that’s a wise policy. I think I’ve arrived at some very good ways of explaining concepts such as the curvature of space-time itself, and difficulties with our models like the 122-or-so-order-of-magnitude cosmological constant problem, but I also noticed that I wasn’t quite as excited as in previous years, working up from the experimentation of my first time through in 2009, putting it all on a firmer foundation — and writing up the lecture notes — in 2010, and refining it over the last two years. This year’s teaching evaluations should come through soon, so I’ll have some feedback, and there are still about six weeks until the students’ understanding — and my explanations — are tested in the exam.
Next year, I’ve got the frankly daunting responsibility of teaching second-year quantum mechanics: 30 lectures, lots of problem sheets, in-class problems to work through, and of course the mindbending weirdness of the subject itself. I’d love to teach them Dirac’s very useful notation which unifies the physical concept of quantum states with the mathematical ideas of vectors, matrices and operators — and which is used by all actual practitioners from advanced undergraduates through working physicists. But I’m told that students find this an extra challenge rather than a simplification. Comments from teachers and students of quantum mechanics are welcome.
Urban Sputnik, our collaboration with Vanessa Harden and Dominic Southgate of Gammaroot Design, is currently on display at Imperial College in the main entrance of the Norman Foster-designed business school, located on Exhibition Road in London, just up the street from the Science Museum, the V&A Museum and the Natural History Museum. I’ve discussed the pieces that will be on display before, and if you’re anywhere near South Kensington in London over the next few days, please come and see them.
If that piques your interest, you can hear more from us directly: on Tuesday evening, November 8, we’ll be hosting a short presentation — with drinks and snacks — talking about the creation of the pieces and the science behind them.
For the second time this decade, the Nobel Prize in Physics has been awarded for cosmology, to Saul Perlmutter, Adam Riess and Brian Schmidt. They are among the leaders of the teams that used the properties of supernovae — exploding stars — to measure the rate of expansion of the Universe over time. In so doing, they found that the expansion has been speeding up for the last few billion years. This is difficult to accommodate in a Universe with matter that experiences gravity in the attractive way to which we are accustomed; instead it seems to require that the Universe today be dominated by an exotic form of matter given the purposely uninformative name “Dark Energy”. This is exemplified by the Cosmological Constant, a term Einstein originally included in his equations of General Relativity but abandoned when it did not fit the available data — Einstein’s motivation was not to have an accelerating Universe, but a static one, with the gravitational attraction exactly balanced by the repulsion of the constant. In the late 1990s, those two groups began to see evidence of acceleration on larger scales than Einstein envisaged, evidence that has only got better over time (especially, I should say, when combined with evidence from the Cosmic Microwave Background on the flat overall geometry of the Universe).
I was impressed to see the Guardian liveblogging the announcement of the Nobel Prize in Physics, something that usually happens for Apple product announcements and high-profile sporting events. In the blog, Martin Rees makes the excellent point that, like much physics nowadays, these discoveries were made by teams of people, with excellent leadership by the prizewinners, absolutely, but that there should be a mechanism to recognise the full scope of highly expert scientists involved. (Indeed, the Gruber Cosmology prize, which was awarded for the same research in 2007, officially recognises “Saul Perlmutter & the Supernova Cosmology Project” and “Brian Schmidt & the High-z Supernova Search Team”.)
The big problem with Dark Energy isn’t the observations, however, but the underlying theory — there is no good particle physics model which allows a cosmological constant anything like we see today. The simplest ideas say that it is just zero, and the next simplest give something that is about 10 to the power 122 or so too large.
Luckily, cosmologists and astrophysicists have ideas to solidify the supernova results and hopefully get a handle on the underlying nature of whatever is causing the acceleration, by mapping the expansion of the Universe in space and time in even more detail. There are a plethora of ground-based telescopes making observations already, but the next step will be to go to space. And it turns out that there is another reason why this is a great day for scientists studying Dark Energy: we have just had word that ESA has decided that one of its next M-class (“M” for “medium”) missions will be Euclid, a satellite explicitly designed to measure the properties of the accelerating Universe.
It’s been a busy few weeks, and that seems like a good excuse for my lack of posts. Since coming back from Scotland, I’ve been to:
Paris, for our bi-monthly Planck Core Team meetings, discussing the state of the data from the satellite, and our ongoing processing of it;
Cambridge, for yet more Planck, this time to discuss the papers that we as collaboration will be writing over the next couple of years; and
Varenna, on Lake Como in northern Italy, for the Passion for Light meeting, sponsored by SIF (the Italian Physical Society) and EPS (the European Physical Society). The meeting was at least in part to introduce the effort to sponsor an International Year of Light in 2015, supported by the UN and international scientific organizations. My remit was “Light from the Universe”, which I took as an excuse to talk about (yes), Planck and the Cosmic Microwave Background. That makes sense because of what is revealed in this plot, a version of which I showed:
This figure (made after an excellent one which will be in an upcoming paper by Dole and Bethermin) shows the intensity of the “background light” integrated over all sources in the Universe. The horizontal axis gives the frequency of electromagnetic radiation — from the radio at the far left, to the Cosmic Microwave Background (CMB), the Cosmic Infrared Background (CIB), optical light in the middle, and on to ultraviolet, x-ray and gamma-ray light. The height of each curve is proportional to the intensity of the background, the amount of energy falling on a square meter of area per second coming from a particular direction on the sky (for aficionados of the mathematical details, we actually plot the quantity νIν to take account of the logarithmic axis, so that the area under the curve gives a rough estimate of the total intensity) which is itself also proportional to the total energy density of that background, averaged over the whole Universe.
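For the mathematically inclined, the reason that νIν is the right thing to plot on a logarithmic frequency axis is just a change of variables:

```latex
\int I_\nu \, d\nu \;=\; \int \nu I_\nu \, \frac{d\nu}{\nu} \;=\; \int \nu I_\nu \, d(\ln\nu),
```

so equal areas under the curve correspond to equal contributions to the total intensity.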
Here on earth, we are dominated by the sun (or, indoors, by artificial illumination), but a planet is a very unusual place: most of the Universe is empty space, not particularly near a star. What this plot shows is that most of the background — most of the light in the Universe — isn’t from stars or other astronomical objects at all. Rather, it’s the Cosmic Microwave Background, the CMB, light from the early Universe, generated before there were any distinct objects at all, visible today as a so-called black body with a temperature of 2.73 kelvin. It also shows us that there is roughly the same amount of energy in infrared light (the CIB) as in the optical. This light doesn’t come directly from stars, but is re-processed as visible starlight is absorbed by interstellar dust, which heats up and in turn glows in the infrared. That is one of the reasons why Planck’s sister-satellite Herschel, an infrared observatory, is so important: it reveals the fate of roughly half of the starlight ever produced. So we see that outside of the optical and ultraviolet, stars do not dominate the light of the Universe. The x-ray background comes from very hot gas, heated either by falling into clusters of galaxies on large scales or by supernovae within galaxies, along with the very energetic collisions between particles that happen in the environments around black holes as matter falls in. We believe that the gamma-ray background also comes from accretion onto supermassive black holes at the centres of galaxies. But my talk centred on the yellow swathe of the CMB, although the only Planck data released so far are the relatively small contaminants from other sources in the same range of frequencies.
Other speakers in Varenna discussed microscopy, precision clocks, particle physics, the wave-particle duality, and the generation of very high-energy particles of light in the laboratory. But my favourite was a talk by Alessandro Farini, a Florentine “psychophysicist” who studies our perception of art. He showed the detailed (and extremely unphysical) use of light in art by even such supposedly realistic painters as Caravaggio, as well as using a series of optical illusions to show how our perceptions, which we think of as a simple recording of our surroundings, involve a huge amount of processing and interpretation before we are consciously aware of it. (As an aside, I was amused to see his collection of photographs with CMB Nobel Laureate George Smoot.)
And having found myself on the shores of Lake Como I took advantage of my good fortune:
OK, this post has gone on long enough. I’ll have to find another opportunity to discuss speedy neutrinos, crashing satellites (and my latest appearance on the BBC World News to talk about the latter), not to mention our weeklong workshop at Imperial discussing the technical topic of photometric redshifts, and the 13.1 miles I ran last weekend.
Urban Sputnik is a new interactive cosmology exhibit currently showing at the Royal Institution. It was created by Vanessa Harden and Dominic Southgate of Gammaroot Design collaborating with some Imperial Astrophysicists: me, Dave Clements and Roberto Trotta. Unlike my other recent foray into the science/art overlap, this one is a bit more didactic (that is, scientifically accurate!)
We’ve designed five different exhibits, and, supported by an award from the STFC, Vanessa and Dominic have made two of them. One tries to show how the shape of the Universe can seem flat up close, but curved due to the mass and energy on the largest scales. Another shows how the expansion of the Universe results in a redshift — light from further away loses energy, changing its wavelength and frequency.
The pieces are on display at the Royal Institution through 29 July, and there’s a closing event on Thursday 28 July where we’ll talk about the exhibit and both the science and design principles behind it (the event is free, but tickets are recommended to be sure you get a seat!).
(Apologies to the non-cosmologists and non-Brits who won’t find much of interest in the following.)
This year’s UK National Astronomy Meeting will be held 17-21 April, in Llandudno, North Wales. In the usual way of British “seaside resorts” (scare quotes are certainly appropriate for that phrase) Llandudno sticks frighteningly out into the Irish Sea but, you never know, it might actually seem like Springtime when we gather there.
In any event, there is a lot of astronomy going on in the UK, so NAM is a pretty big meeting, with significant communities working on everything from our solar system out to intergalactic space and the Universe as a whole. To help cover those large scales, I’m organizing a series of sessions on Cosmology, and there are still some open slots for any UK scientists. I would especially love to have lots of input from students and young postdocs looking to show off their work.
The deadline for the submission of abstracts is this Friday — please join us in Wales next month!
This week I’m co-organizing a meeting at the Royal Astronomical Society in London, “Novel methods for the exploitation of large astronomical and cosmological data sets”. It’s an unwieldy title, but we’ll be discussing the implications of the huge flood of astronomical data for cosmology and astrophysics. How do we deal with the sheer volume — terabytes and petabytes of data for coming experiments (for example, the Large Synoptic Survey Telescope, one of the most important ground-based telescopes slated for the coming decade, will produce 20TB per night)? Human beings can only ever look at minuscule fractions of that, so we need computers to do much of the heavy lifting.
So we’ll hear from both astronomers and statisticians about the science and algorithms we’ll need to cope. Paolo Padovani will discuss the worldwide effort to create a virtual observatory — a set of standards (and actual code running on actual servers) which allows uniform access to a wide variety of astronomical data. We’ll also hear from Imperial’s Professor David Hand, President of the Royal Statistical Society, and Ben Wandelt of the Institut d’Astrophysique de Paris about new methods for distilling signals from noisy data, and David van Dyk from UC Irvine about comparing complicated computer models with data. Finally, Alberto Vecchio from Birmingham will discuss one of the next frontiers, the challenges of doing astronomy not with light, but with gravitational radiation, which will allow us to peer closer than ever at neutron stars, black holes, supernovae, and others of the most exotic and exciting objects in the history of the Universe. The meeting is intended for professional astronomers, but open to all (there is a fee if you’re not a Fellow of the RAS).
One of my old friends from graduate school, and a colleague to the present day, Lloyd Knox — whom you may remember from such cosmology hits as the Dark Energy Song — has started an initiative to create “short documentary videos to demonstrate the explanatory power of simple physical models and to help us understand and aesthetically appreciate the natural world”. It’s called The Spherical Cow company — the name comes from the traditional physicists’ trick of idealizing and simplifying any problem he or she gets, sometimes out of all recognition — but usually, when done well, keeping enough of the salient features.
The first video does just that, giving a simple description of the formation of the Cosmic Microwave Background, in the form of a conversation between Lloyd and his son, Teddy — with interpolations for animations and narration. Even with those occasional animations, the whole thing is pleasingly low-fi, but well-explained and charming (especially so for me, as I know the protagonists). I look forward to the next videos in the series, and I’ll certainly be recommending them to students of all ages.
I spent part of this week in Paris (apparently at the same time as a large number of other London-based scientists who were here for other things) discussing whether the European CMB community should rally and respond to ESA’s latest call for proposals for a mission to be launched in the next open slot—which isn’t until around 2022.
As successful as Planck seems to be, and as fun as it is working with the data, I suspect that no one on the Planck team thinks that a 400-scientist, dispersed, international team coming from a dozen countries each with its own politics and funding priorities, is the most efficient way to run such a project. But we’re stuck with it—no single European country can afford the better part of a billion Euros it will cost. Particle physics has been in this mode for the better part of fifty years, and arguably since the Manhattan Project, but it’s a new way of doing things — involving new career structures, new ways of evaluating research, new ways of planning, and a new concentration upon management — that we astrophysicists have to develop to answer our particular kinds of scientific questions.
But a longer discussion of “big science” is for another time. The next CMB satellite will probably be big, but the coming ESA call is officially for an “M-class” (for “medium”) mission, with a meagre (sic) 600 million euro cap. What will the astrophysical and cosmological community get for all this cash? How will it improve upon Planck?
Well, Planck has been designed to mine the cosmic microwave background for all of the temperature information available, the brightness of the microwave sky in all directions, down to around a few arcminutes at which scale it becomes smooth. But light from the CMB also carries information about the polarisation of light, essentially two more numbers we can measure at every point. Planck will measure some of this polarisation data, but we know that there will be much more to learn. We expect that this as-yet unmeasured polarisation can answer questions about fundamental physics that affects the early universe and describes its content and evolution. What are the details of the early period of inflation that gave the observable Universe its large-scale properties and seeded the formation of structures in it—and did it happen at all? What are the properties of the ubiquitous and light neutrino particles whose presence would have had a small but crucial effect on the evolution of structure?
The importance of these questions is driving us toward a fairly ambitious proposal for the next CMB mission. It will have a resolution comparable to that of Planck, but with many hundreds of individual detectors, compared to Planck’s many dozens—giving us over an order of magnitude increase in sensitivity to polarisation on the sky. Actually, even getting to this point took a good day or two of discussion. Should we instead make a cheaper, more focused proposal that would concentrate only on the question of inflation and in particular upon the background of gravitational radiation — observable as so-called “B-modes” in polarisation — that some theories predict? The problem with this proposal is that it is possible, or even likely, that it will produce what is known as a “null result”—that is, it won’t see anything at all. Moreover, a current generation of ground- and balloon-based CMB experiments, including EBEX and Polarbear, which I am lucky enough to be part of, are in progress, and should have results within the next few years, possibly scooping any too-narrowly designed future satellite.
So we will be broadening our case beyond these B-modes, and therefore making our design more ambitious, in order to make these further fundamental measurements. And, like Planck, we will be opening a new window on the sky for astrophysicists of all stripes, giving measurements of magnetic fields, the shapes of dust grains, and likely many more things we haven’t yet thought of.
One minor upshot of all this is that our original name, the rather dull “B-Pol”, is no longer appropriate. Any ideas?
We get most of the official feedback on our teaching through a mechanism called SOLE — Student On-Line Evaluations — which asks a bunch of questions on the typical “Very Poor” … “Very Good” scale. I’ve written about my results before — they are useful, and there is even some space for ad-hoc comments, but the questionnaire format is a bit antiseptic.
On some occasions, however, students make an extra effort to let you know how they feel. Last year, I received an anonymous paper letter in the old-fashioned snail-mail post from a student in my cosmology course which said, among other statements, that I should “show appropriate humility and shame by not teaching any undergraduate courses at all this coming year.” Well, that year has come and gone, and I was not absolved of teaching responsibilities, so I soldiered on.
Today, I received another anonymous letter, from a most assuredly different student, who said that this year’s cosmology course “is without a doubt the most interesting undergraduate course I have taken at Imperial.” This would have left me ecstatic, except that this otherwise well-intentioned and obviously smart student managed to put the envelope in the mailbox with insufficient postage, which meant that I had to trudge across to the local mail facility and pay the missing 10p, along with a full £1 fee/fine! (If the author of the letter happens to read this, please consider a donation of £1.10 plus appropriate interest to the charity of your choice!).
It would be self-serving of me to make too much of this, beyond noting that, although I did make some significant changes in this year’s course, these letters more likely indicate the very different reactions that a given course can engender, rather than a vast improvement in my teaching.
My apologies to both students if they would have preferred I not quote them on-line, but such is the price of anonymity.
I just received the SOLE (Student On-Line Evaluation) results for my cosmology course. Overall, I was pleased: averaging between “good” and “very good” for “the structure and organisation of the lectures”, “the approachability of” and “the interest and enthusiasm generated by” the lecturer, as well as for “the support materials” (my lecture notes), although only “good” for “the explanation of concepts given by the lecturer”, with an evenly-dispersed smattering of “poor” and “very good” — you can’t please all of the people all of the time. That last, of course, is the crux of any course, and especially one with as many seemingly weird concepts as cosmology (the big bang itself, inflation, baryogenesis, …). So perhaps a bit of confusion is to be expected. Still, must try harder.
The specific written comments were mostly positive (it’s clear the students really liked those typed-up lecture notes), but I remain puzzled by comments like this: “Sometimes 2-3 mins of explanation (which is generally good) is reduced to one or two words on the board which are difficult to understand when going over notes later.” Indeed — I expect the student to take his or her own notes on those “2-3 mins of explanation”, if they were useful and interesting. But many of the comments were quite helpful, about the pace of the lectures, the prerequisites for the course, and, especially, the order in which I use the six sliding blackboards in the classroom.
So, thanks to the students for the feedback (and good luck on the exam…).
I’ve just finished teaching my eleven-week winter-term Cosmology course at Imperial. Like all lecturing, it was exhilarating, and exhausting. And as usual, I am somewhat embarrassed to say that I think I understand the subject better than when I started out. (I hope that the students can say some of the same things. Comments from them welcome, either way.)
It’s my second year, and I think I am slowly getting the hang of it. It’s hard to fit all of the interesting and up-to-date research in cosmology into 26 lectures, starting from scratch. This time I spent a little more time in the early lectures trying to give a heuristic explanation of some of the more advanced background topics, like the interpretation of the metric in Einstein’s General Relativity, and the physics behind the transition of the Universe from an ionized plasma to a neutral gas.
In a way, much of this was prelude to some of the most exciting research in modern cosmology, the growth of large-scale structure from its first seeds into the pattern of galaxies we observe in the Universe today. Explaining this requires a lot of background: early-Universe thermodynamics and why the Universe started out hot, dense, and dominated by radiation; enough relativity to motivate how structure grows differently on large and small scales; and the generation of the initial conditions for structure, or at least our best current idea, inflation, which takes initial quantum randomness and blows it up to the size of the observable Universe (and solves quite a few other problems besides). All of this, and the background required even to get to these topics, barely fit into those 26 lectures (and I admit I was a little rushed toward the end…). And it was even harder to compress them down into four hours of postgraduate lectures.
Alongside this, I decided that none of the available textbooks had quite the right point of view for my discussion, at least not at the undergraduate level I was aiming for (and there are some very good textbooks out there, including Andrew Liddle, An Introduction to Modern Cosmology; Michael Rowan-Robinson, Cosmology; and Peter Schneider, Extragalactic Astronomy and Cosmology: An Introduction). So I also wrote a hundred or so pages of notes (which are available from my Imperial website, if you’re interested in a crash course).
I’m often puzzled by exactly what students want from the 26 hours of lectures themselves. Many, it seems to me, would prefer to merely transcribe my board notes without having to pay close attention to what I am actually saying; perhaps note-taking is not a skill that students perfect at school nowadays. I hope at least that those written notes make it a bit easier to both listen and think during the lectures. (Again, constructive criticism is more than welcome.)
This week I’ll be giving a review (just half an hour!) of cosmology at the IOP’s High-Energy and Astroparticle Physics 2010 meeting. And then I get to indulge in some of my hobbies, like doing scientific research.
The cosmology community has had a terrible few months.
I am saddened to report the passing of Andrew Lange, a physicist from Caltech and one of the world’s preeminent experimental cosmologists. Among many other accomplishments, Andrew was one of the leaders of the Boomerang experiment, which made the first large-scale map of the Cosmic Microwave Background radiation with a resolution of less than one degree, sufficient to see the opposing action of gravity and pressure in the gas of the early Universe, and to use that to measure the overall density of matter, among many other cosmological properties. He went on to be an important leader in a number of other experiments, notably the Planck Surveyor satellite and the Spider balloon-borne telescope, currently being developed to become one of the most sensitive CMB experiments ever built.
I learned about this tragedy on the same day that people are gathering in Berkeley, California, to mourn the passing of another experimental cosmologist, Huan Tran of Berkeley. Huan was an excellent young scientist, most recently deeply involved in the development of PolarBear, another one of the current generation of ultra-sensitive CMB experiments. Huan led the development of the PolarBear telescope itself, currently being tested in the mountains of California, but to be deployed for real science on the Atacama plateau in Chile. We on the PolarBear team are proud to name the PolarBear telescope after Huan Tran, a token of our esteem for him, and a small tribute to his memory.
My thoughts go out to the friends and family of both Huan and Andrew. I, and many others, will miss them both.
The perfect stocking-stuffer for that would-be Bayesian cosmologist you’ve been shopping for:
As readers here will know, the Bayesian view of probability is just that probabilities are statements about our knowledge of the world, and thus eminently suited to use in scientific inquiry (indeed, this is really the only consistent way to make probabilistic statements of any sort!). Over the last couple of decades, cosmologists have turned to Bayesian ideas and methods as tools to understand our data. This book is a collection of specially-commissioned articles, intended as both a primer for astrophysicists new to this sort of data analysis and as a resource for advanced topics throughout the field.
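(For the uninitiated, the foundation is nothing more exotic than Bayes’ theorem applied to model parameters θ given data d,

```latex
P(\theta \mid d) \;=\; \frac{P(d \mid \theta)\, P(\theta)}{P(d)},
```

where the likelihood P(d|θ) is exactly the sort of function experiments like Planck provide, and the prior P(θ) encodes what we knew beforehand.)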
Our back-cover blurb:
In recent years cosmologists have advanced from largely qualitative models of the Universe to precision modelling using Bayesian methods, in order to determine the properties of the Universe to high accuracy. This timely book is the only comprehensive introduction to the use of Bayesian methods in cosmological studies, and is an essential reference for graduate students and researchers in cosmology, astrophysics and applied statistics.
The first part of the book focuses on methodology, setting the basic foundations and giving a detailed description of techniques. It covers topics including the estimation of parameters, Bayesian model comparison, and separation of signals. The second part explores a diverse range of applications, from the detection of astronomical sources (including through gravitational waves), to cosmic microwave background analysis and the quantification and classification of galaxy properties. Contributions from 24 highly regarded cosmologists and statisticians make this an authoritative guide to the subject.
You can order it now from Amazon UK or Amazon USA.
The students in my cosmology course had their exam last week.
There’s no doubt that they found the course tough this year — it was my first time teaching it, and I departed pretty significantly from the previous syllabus. Classically, cosmology was the study of the overall “world model” — the few parameters that describe the overall contents and geometry of the Universe, and courses have usually just concentrated upon the enumeration of these different models. But over the last decade or two we’ve narrowed down to what is becoming a standard model, and we cosmologists have begun to concentrate upon the growth of structure: the galaxies and clusters of galaxies that make the Universe interesting, not least because we need them for our own existence. Moreover, that structure directly teaches us about those contents which make them up and the geometry in which they are embedded. I wanted to give the students a chance to learn about the physics behind this large-scale structure, not traditionally at the heart of undergraduate cosmology courses.
Unfortunately, this also meant that the traditional undergraduate textbooks didn’t cover this material at the depth I needed, and so the students were forced to rely on my lectures and the notes they took there (and eventually a scanned and difficult-to-read copy of my written notes).
I sensed a bit of worry in the increasing number of questions from students in the weeks before the exam, and heard rumors of wider concern. But the day of the exam rolled around, and indeed when I re-read the questions they didn’t seem too bad, although there were some grumbles evident in the examination room.
Later I learned that there was a “record-breaking” number of complaints about the exam. I gather it was perceived to be difficult and unfamiliar.
So marking the exams in the past week, I was happy to find that the students performed just fine: the right “bell-shaped curve”, the correct mean, etc. (Of course I should point out that all results are subject to final approval by the Physics Department Examiners Committee.) I admit some puzzlement, therefore, about the reaction to the exam. Were they worried because the questions were different from those they had seen before? That, I admit, was the point of the exam — to test if they have actually learned something. Which, I am happy to point out, it seems that they had!
There was one question that almost all students got wrong, however. I asked about the “Cosmological Constant Problem” and whether it could be solved by the theory of cosmic inflation. The Cosmological Constant is a number that appears in General Relativity, and, although we can’t predict it for certain, we are pretty sure that if it’s not strictly zero, in most theories we would estimate that it ought to have a value something like 10¹²⁰ (that is, 1 followed by 120 zeros!) times greater than that observed in the Universe today. I suppose I didn’t write on the board the words “Cosmological Constant Problem” next to that extraordinarily large number. (In the end, I reapportioned the small number of marks associated with that problem.) Inflation involves something very much like the cosmological constant, but occurring in the very early Universe — so inflation can’t help us with the 120 zeroes, alas.
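If you want to see where the 120 or so zeros come from, here is a very rough order-of-magnitude sketch; the choice of the reduced Planck mass as the cutoff and of a couple of milli-electron-volts as the observed dark-energy scale are conventional illustrative values, not a derivation:

```python
M_planck_reduced = 2.4e18 * 1e9  # reduced Planck mass in eV (about 2.4e18 GeV)
rho_de_scale = 2.3e-3            # observed dark-energy density scale, eV (rho_Lambda^(1/4))

ratio = (M_planck_reduced / rho_de_scale)**4
print(f"{ratio:.1e}")            # about 1e120: the naive estimate over the observed value
```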
Next year, I’ll be sure to spell all of this out, but I’ll also show this movie of my old grad-school friend, collaborator, and colleague Lloyd Knox, now a professor at the University of California, Davis, singing this song about Dark Energy (of which the cosmological constant is a particular manifestation):
The scientifically-accurate lyrics are sung to the tune of Neutral Milk Hotel’s “In the Aeroplane over the Sea”.
Finally, I’d welcome comments on the course or the exam, anonymous or otherwise, from any students who may come across this post.
Today is Ada Lovelace Day, “an international day of blogging to draw attention to women excelling in technology.” I — along with more than a thousand other people — have pledged to write about a female role model in technology.
Ada Lovelace was Byron’s daughter and worked with computer pioneer Charles Babbage on his “Computing Engines” — and is widely thought of as the first computer programmer. A reconstruction of the “Difference Engine” is on view at the Science Museum around the corner from here, and if you’re reading this on 24 March, you can go and talk to Ada herself!
But I want to talk not about a programmer, but a computer. That is, a computer named Henrietta Swan Leavitt. In the early 20th Century, some (always male) astronomers had batteries of (almost always female) “computers” working for them, doing their calculations and other supposedly menial scientific work.
Leavitt — who had graduated from Radcliffe College — was employed by Harvard astronomer Edward Pickering to analyze photographic plates: she counted stars and measured their brightness. Pickering was particularly interested in “variable stars”, which changed their brightness over time. The most interesting variable stars changed in a regular pattern and Leavitt noticed that, for a certain class of these stars known as Cepheids, the brighter ones had longer periods. Eventually, in 1912, she made this more precise, and to this day the “Cepheid Period-Luminosity Relationship” remains one of the most important tools in the astronomer’s toolbox.
It’s easy enough to measure the period of a Cepheid variable star: just keep taking data, make a graph, and see how long it takes to repeat itself. Then, from the Period-Luminosity relationship, we can determine its intrinsic luminosity. But we can also easily measure how bright it appears to us, and use this, along with the inverse-square relationship between intrinsic luminosity and apparent brightness, to get the distance to the star. That is, if we put the same star twice as far away, it’s four times dimmer; three times as far is nine times dimmer, etc.
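Here is a toy sketch of the whole procedure in code; the period-luminosity calibration numbers are modern, purely illustrative values, not anything from Leavitt’s original work:

```python
import numpy as np

def cepheid_distance_mpc(period_days, apparent_mag):
    """Toy Cepheid distance: period -> absolute magnitude -> distance modulus -> distance."""
    M = -2.43 * (np.log10(period_days) - 1.0) - 4.05  # illustrative P-L calibration
    mu = apparent_mag - M                             # distance modulus, m - M = 5 log10(d / 10 pc)
    return 10**(mu / 5.0 + 1.0) / 1e6                 # distance in megaparsecs

# A hypothetical 30-day Cepheid observed at apparent magnitude 19:
print(cepheid_distance_mpc(30.0, 19.0))  # about 0.7 Mpc, comparable to the distance to Andromeda
```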
This was just the technique that astronomy needed, and within a couple of decades it had led to a revolution in our understanding of the scale of the cosmos. First, it enabled astronomers to map out the Milky Way. But at this time, it wasn’t even clear whether the Milky Way was the only agglomeration of stars in the Universe, or one amongst many. Indeed, this was the subject of the so-called “great debate” in 1920 between American astronomers Harlow Shapley and Heber Curtis. Shapley argued that all of the nebulae (fuzzy patches) on the sky were just local collections of stars, or extended clouds of gas, while Curtis argued that some of them (in particular, Andromeda) were galaxies — “Island Universes” as they were called — like our own. By at least some accounts, Shapley won the debate at the time.
But very soon after, due to Leavitt’s work, Edwin Hubble determined that Curtis was correct: he saw the signature of Cepheid stars in (what turned out to be) the Andromeda galaxy and used them to measure the distance, which turned out to be much further away than the stars in our own galaxy. A few years later, Hubble used Leavitt’s Period-Luminosity relationship to make an even more startling discovery: more distant galaxies were receding from us at a speed (measured using the galaxy’s redshift) proportional to their distance from us. This is the observational basis for the Big Bang theory of the Universe, tested and proven time and again in the eighty or so years since then.
Leavitt’s relationship remains crucial to astronomy and cosmology. The Hubble Space Telescope’s “Key Project” was to measure the brightness and period of Cepheid stars in galaxies as far away as possible, determining Hubble’s proportionality constant and setting an overall scale for distances in the Universe.
The social situation of academic astronomy of her day strongly limited Leavitt’s options — women weren’t allowed to operate telescopes, and it was yet more difficult for her since she was also deaf. Although Leavitt was “only” employed as a computer, she was eventually nominated for a Nobel prize for her work — but she had already died. We can only hope that the continued use of her results and insight to this day is a small recompense and recognition of her life and work.