Another technical note: I’ve just reformatted the whole blog. Let me know if there are any problems (or if you just think it’s ugly).
Just a quick note that the blog has been having some issues with its infrastructure: pointers to individual entries seem to be broken.
I’m on the case — apologies if you can’t get to anything you’re looking for.
Update: fixed, I think. Let me know if there are any further problems. (The blog should be a bit faster, too, as I’ve moved over to statically publishing all the pages. Don’t worry if you don’t know what that means.)
As part of the festival, we’re organising Quest for the Grail: An International Adventure Game, later this month: from noon to 5pm in London and right afterwards, noon to 5pm in Manhattan, New York.
The London teams will “hunt for objects in Clerkenwell hotspots…from the Order of St. John to Blackfriars Bridge to the International Magic Shop. You may be looking for a charm against the Plague, a tombstone or a silver goblet. Your team may be asked to invent something - the holiest of drinks.” The game continues with New York teams searching in “Manhattan hotspots…from Clinton Castle to the tombstones of Trinity Church to the Grand Lodge of the Masons. You may be looking for a marker of a headless ghost who haunts Wall Street, a symbol of George Washington or a troll in the East Village”, aided by London players and puppetmasters overseeing the games.
Unfortunately, I’m in sunny California recovering from my winter (and many years) of Planck work, but if you’re in either city and would like to play, you can join as an individual, a half-team of five, or a full team of ten players. There’s more information on the site, or you can contact the organisers directly at email@example.com.
Yesterday’s release of the Planck papers and data wasn’t just aimed at the scientific community, of course. We wanted to let the rest of the world know about our results. The main press conference was at ESA HQ in Paris, and there was a smaller event here in London run by the UKSA, which I participated in as part of a panel of eight Planck scientists.
The reporters tried to keep us honest, asking us to keep simplifying our explanations so that they — and their readers — could understand them. We struggled with describing how our measurements of the typical size of spots in our map of the CMB eventually led us to a measurement of the age of the Universe (which I tried to do in my previous post). This was hard not only because the reasoning is subtle, but also because, frankly, it’s not something we care that much about: it’s a model-dependent parameter, something we don’t measure directly, and doesn’t have much of a cosmological consequence. (I ended up on the phone with the BBC’s Pallab Ghosh at about 8pm trying to work out whether the age has changed by 50 or 80 million years, a number that means more to him and his viewers than to me and my colleagues.)
There are pieces by the reporters who asked excellent questions at the press conference, at The Guardian, The Economist and The Financial Times, as well as one behind the (London) Times paywall by Hannah Devlin, who was probably the most rigorous in her requests for us to simplify our explanations. I’ll also point to NPR’s coverage, mostly since it is one of the few outlets to explicitly mention the topology of the Universe, which was one of the areas of Planck science I worked on myself.
Aside from the press conference itself, the media were fairly clamouring for the chance to talk about Planck. Most of the major outlets in the UK and around Europe covered the Planck results. Even in the US, we made it onto the front page of the New York Times. Rather than summarise all of the results, I’ll just self-aggrandisingly point to the places where I appeared: a text-based preview from the BBC, and a short quote on video taken after the press conference, as well as one on ITV. I’m most proud of my appearance with Tom Clarke on Channel 4 News — we spent about an hour planning and discussing the results, edited down to a few minutes including my head floating in front of some green-screen astrophysics animations.
Now that the day is over, you can look at the results for yourself at the BBC’s nice interactive version, or at the lovely Planck Chromoscope created by Cardiff University’s Dr Chris North, who donated a huge amount of his time and effort to helping us make yesterday a success. I should also thank our funders over at the UK Space Agency, STFC and (indirectly) ESA — Planck is big science, and these sorts of results don’t come cheap. I hope you agree that they’ve been worth it.
If you’re the kind of person who reads this blog, then you won’t have missed yesterday’s announcement of the first Planck cosmology results.
The most important is our picture of the cosmic microwave background itself:
But it takes a lot of work to go from the data coming off the Planck satellite to this picture. First, we have to make nine different maps, one at each of the frequencies at which Planck observes, from 30 GHz (with a wavelength of 1 cm) up to 857 GHz (0.35 mm) — note that the colour scales here are the same:
At low and high frequencies, these are dominated by the emission of our own galaxy, and there is at least some contamination over the whole range, so it takes hard work to separate the primordial CMB signal from the dirty (but interesting) astrophysics along the way. In fact, it’s sufficiently challenging that the team uses four different methods, each with different assumptions, to do so, and the results agree remarkably well.
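(For the curious, the frequency-to-wavelength conversion is just λ = c/ν. Here’s a quick Python sketch; the band centres listed are the nominal Planck channels and should be treated as approximate:)

```python
# lambda = c / nu: convert band centres in GHz to wavelengths in mm.
C = 299_792_458.0  # speed of light, m/s

def wavelength_mm(nu_ghz):
    """Wavelength in millimetres for a frequency given in GHz."""
    return C / (nu_ghz * 1e9) * 1e3

# Nominal Planck channels (approximate band centres):
for nu in [30, 44, 70, 100, 143, 217, 353, 545, 857]:
    print(f"{nu:4d} GHz -> {wavelength_mm(nu):6.2f} mm")
```

The lowest channel comes out at very nearly 1 cm, and the highest at about 0.35 mm.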
In fact, we don’t use the above CMB image directly to do the main cosmological science. Instead, we build a Bayesian model of the data, combining our understanding of the foreground astrophysics and the cosmology, and marginalise over the astrophysical parameters in order to extract as much cosmological information as we can. (The formalism is described in the Planck likelihood paper, and the main results of the analysis are in the Planck cosmological parameters paper.)
The main tool for this is the power spectrum, a plot which shows us how the different hot and cold spots on our CMB map are distributed. In this plot, the left-hand side (low ℓ) corresponds to large angles on the sky and high ℓ to small angles. Planck’s results are remarkable for covering this whole range from ℓ=2 to ℓ=2500: the previous CMB satellite, WMAP, had a high-quality spectrum out to ℓ=750 or so; ground- and balloon-based experiments like SPT and ACT filled in some of the high-ℓ regime.
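The correspondence between multipole and angle is just the usual rule of thumb θ ≈ 180°/ℓ; a minimal sketch:

```python
# Rule of thumb: multipole ell corresponds to an angular scale of ~180/ell degrees.
def angular_scale_deg(ell):
    """Approximate angular scale on the sky, in degrees, for multipole ell."""
    return 180.0 / ell

for ell in [2, 750, 2500]:
    print(f"ell = {ell:4d} -> about {angular_scale_deg(ell):.3f} degrees")
```

So ℓ=750, roughly WMAP’s limit, corresponds to about a quarter of a degree, and ℓ=2500 to well under a tenth of a degree.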
It’s worth marvelling at this for a moment, a triumph of modern cosmological theory and observation: our theoretical models fit our data from scales of 180° down to 0.1°, each of those bumps and wiggles a further sign of how well we understand the contents, history and evolution of the Universe. Our high-quality data have refined our knowledge of the cosmological parameters that describe the Universe, decreasing the error bars by a factor of several on the six parameters that describe the simplest ΛCDM universe. Moreover, and maybe remarkably, the data don’t seem to require any additional parameters beyond those six: for example, despite previous evidence to the contrary, the Universe doesn’t need any additional neutrinos.
The quantity best measured by Planck is related to the typical size of spots in the CMB map; it’s about a degree, with an error of less than one part in 1,000. This quantity has changed a bit (by about the width of the error bar) since the previous WMAP results. This, in turn, causes us to revise our estimates of quantities like the expansion rate of the Universe (the Hubble constant), which has gone down, in fact by enough that it’s interestingly different from its best measurements using local (non-CMB) data, from more or less direct observations of galaxies moving away from us. Both methods have disadvantages: for the CMB, it’s a very indirect measurement, requiring imposing a model upon the directly measured spot size (known more technically as the “acoustic scale” since it comes from sound waves in the early Universe). For observations of local galaxies, we need to build up the famous cosmic distance ladder, calibrating our understanding of the distances to further and further objects, few of which we truly understand from first principles. So perhaps this discrepancy is due to messy and difficult astrophysics, or perhaps to interesting cosmological evolution.
This change in the expansion rate is also indirectly responsible for the results that have made the most headlines: it changes our best estimate of the age of the Universe (slower expansion means an older Universe) and of the relative amounts of its constituents (since the expansion rate is related to the geometry of the Universe, which, because of Einstein’s General Relativity, tells us the amount of matter).
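To see the “slower expansion means an older Universe” logic in numbers: the inverse of the Hubble constant sets the rough timescale. (This is only an order-of-magnitude sketch; the true age also depends on the Universe’s contents, and the H₀ values below are illustrative, not Planck’s measurements.)

```python
# Order-of-magnitude age estimate: t ~ 1/H0, ignoring the detailed expansion history.
KM_PER_MPC = 3.0857e19   # kilometres in a megaparsec
SEC_PER_GYR = 3.156e16   # seconds in a billion years

def hubble_time_gyr(h0):
    """Hubble time 1/H0 in Gyr, for H0 given in km/s/Mpc."""
    return KM_PER_MPC / h0 / SEC_PER_GYR

# A smaller H0 (slower expansion) gives a longer timescale:
print(hubble_time_gyr(70.0))  # roughly 14 Gyr
print(hubble_time_gyr(67.0))  # several hundred million years longer
```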
But the cosmological parameters measured in this way are just Planck’s headlines: there is plenty more science. We’ve gone beyond the power spectrum above to put limits upon so-called non-Gaussianities, which are signatures of the detailed way in which the seeds of large-scale structure in the Universe were initially laid down. We’ve observed clusters of galaxies which give us yet more insight into cosmology (and which seem to show an intriguing tension with some of the cosmological parameters). We’ve measured the deflection of light by gravitational lensing. And in work that I helped lead, we’ve used the CMB maps to put limits on some of the ways in which our simplest models of the Universe could be wrong, possibly having an interesting topology or rotation on the largest scales.
But because we’ve scrutinised our data so carefully, we have found some peculiarities which don’t quite fit the models. From the days of COBE and WMAP, there has been evidence that the largest angular scales in the map, a few degrees and larger, have some “anomalies” — some of the patterns show strange alignments, some show unexpected variation between two different hemispheres of the sky, and there are some areas of the sky that are larger and colder than our theories expect. Individually, any of these might be a statistical fluke (and collectively they may still be), but perhaps they are giving us evidence of something exciting going on in the early Universe. Or perhaps, to use a bad analogy, the CMB map is like the Zapruder film: if you scrutinise anything carefully enough, you’ll find things that look like a conspiracy but turn out to have an innocent explanation.
I’ve mentioned eight different Planck papers so far, but in fact we’ve released 28 (and there will be a few more to come over the coming months, and many in the future). There’s an overall introduction to the Planck Mission, and papers on the data processing, observations of relatively nearby galaxies, and plenty more cosmology. The papers have been submitted to the journal A&A, they’re available on the arXiv, and you can find a list of them at the ESA site.
Even more important for my cosmology colleagues, we’ve released the Planck data as well, along with the code and other information necessary to understand it: you can get it from the Planck Legacy Archive. I’m sure we’ve only just begun to get exciting and fun science out of the data from Planck. And this is only the beginning of Planck’s data: just the first 15 months of observations, and just the intensity of the CMB. In the coming years we’ll be analysing (and releasing) more than another year of data, and starting to dig into Planck’s observations of the polarised sky.
OK, back to editing. (I’ll try to update this post with any advance information as it becomes available.)
Update (on timing, not content): the main Planck press conference will be held on the morning of 21 March at 10am CET at ESA HQ in Paris. There will be a simultaneous UK event (9am GMT) held at the Royal Astronomical Society in London, where the Paris event will be streamed, followed by a local Q&A session. (There will also be a more technical afternoon session in Paris.)
Probably more important for my astrophysics colleagues: the Planck papers will be posted on the ESA website at noon on the 21st, after the press event, and will appear on the arXiv the following day, 22 March. Be sure to set aside some time next weekend!
I am not quite happy to join their ranks: for the last few months, the traffic on this blog has been vastly dominated by attempts to get into the various back-end scripts that run this site, either by direct password hacks or just denial-of-service attacks. In fact, I only noticed it because the hackers exceeded my bandwidth allowance by a factor of a few (costing me a few hundred bucks in over-usage charges from my host in the process, unfortunately).
I’ve since attempted to block the attacks by denying access to the IP addresses which have been the most active (mostly from domains that look like 163data.com.cn, for what it’s worth). So, my apologies if any of this results in any problems for anyone else trying to access the blog.
There were only two specific questions, rating each of the following from “Very Good” through “Poor” (there’s a “no response” off to the right, as well):
- The structure and delivery of the teaching sessions
- The content of this module
The numerical results (at right) were pretty good. Note that 114 students — about half — responded.
The rest of the results are free-form comments. With such a big class it’s very difficult to find a style of teaching that suits everyone. Hence, the comments showed a split between the students who enjoyed the very mathematical approach of the course and those who wanted more physical examples from the beginning (not that easy in the context of an axiomatic approach to quantum mechanics — but there are a few simple systems, like quantum dots, that exhibit some of the properties of the simplest systems we study in class; it’s clear that these should be highlighted more than I have). Similarly, some students wanted a more step-by-step approach to the mathematics, whereas others would prefer just a sketch of the proofs on the board (“put the algebra in the notes and let students work through it”).
But one set of comments especially hit home. Here’s a good example:
Frankly, I think that Prof. Jaffe has the potential to be an outstanding lecturer, one which he wastes by not being properly prepared. Just showing up to lectures and writing down on the board what was in (the previous lecturer’s) notes without thinking much about it in advance results in time spent staring at notes and board which could otherwise have been used to face the audience and explain what it is we’re doing. Maybe that sounded harsh, but he really is very good and could be outstanding if he put a little more into preparing for lectures and didn’t stick to his notes quite so much… If you actually perform the calculations, and think about the various steps yourself, then it all happens in a way, and at a pace, which suits us as students and allows us to follow.
(OK, I picked one that made an effort to heap on some praise along with the criticism.) I have to admit that this point, repeated by several students, seemed right on, for at least some of the lectures. I did always make an effort to go over the notes in detail beforehand. But these were indeed notes written by the previous lecturer, and this presents a few problems. Yes, I probably wasn’t always careful enough to go over the details of the mathematics beforehand. So sometimes I did spend too much effort trying to puzzle out exactly what I wanted to say (some students also complained about the occasional mistakes I made on the board, perhaps related to this). But sometimes the problem is more subtle: I might not always want to explain the concepts in the same way as the previous lecturer — and sometimes I might only realise this when actually doing the explaining! Either of these can happen in any lecture, but the combination of teaching this course for the first time, and doing so from someone else’s notes, certainly made it worse.
Next year, things will at least be different: I’ll be teaching for the second time, and so have some idea of the pitfalls from this past year. Moreover, our department is making some significant changes to the overall structure of the curriculum, phasing out our system of tutorials and so-called classworks for a series of three medium-sized (20-student) group sessions each week. This is happening alongside some specific changes to the quantum mechanics curriculum, with more material in the first year (happening already). My course will be shortened by a full five lectures, but I suspect that this combination of changes will give me a bit more breathing room, as well as a few different ways to present the material, appropriate for different students.
Further criticism, comments, ideas, etc., are always welcome.
My colleagues, friends and collaborators on the EBEX project are in Antarctica this (Northern) winter preparing the telescope for launch. And today, they did it:
EBEX is a next-generation CMB telescope, with hundreds of detectors measuring temperature and polarisation, which we hope will allow us to see the effects of an early epoch of cosmic inflation through the background of gravitational radiation that it produces. But right now, we are just hoping that EBEX catches some good winds in the Antarctic Polar Vortex and comes back around the continent after about two weeks of observing from 120,000 feet.
A week ago, I finished my first time teaching our second-year course in quantum mechanics. After a bit of a taster in the first year, the class concentrates on the famous Schrödinger equation, which describes the properties of a particle under the influence of an external force. The simplest version of the equation is just iħ ∂ψ/∂t = Ĥψ. This relates the so-called wave function, ψ, to what we know about the external forces governing its motion, encoded in the Hamiltonian operator, Ĥ. The wave function gives the probability (technically, the probability amplitude) for getting a particular result for any measurement: its position, its velocity, its energy, etc. (See also this excellent public work by our department’s artist-in-residence.)
Over the course of the term, the class builds up the machinery to predict the properties of the hydrogen atom, which is the canonical real-world system for which we need quantum mechanics to make predictions. This is certainly a sensible endpoint for the 30 lectures.
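The punchline of all that machinery is the familiar hydrogen energy-level formula, Eₙ = −13.6 eV/n²; a minimal numerical sketch:

```python
# Hydrogen energy levels: E_n = -13.6 eV / n^2.
RYDBERG_EV = 13.6057  # hydrogen ionisation energy, in electron-volts

def energy_ev(n):
    """Energy of the n-th bound state of hydrogen, in eV."""
    return -RYDBERG_EV / n ** 2

print(energy_ev(1))                 # ground state: -13.6 eV
print(energy_ev(2) - energy_ev(1))  # Lyman-alpha transition: ~10.2 eV
```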
But it did somehow seem like a very old-fashioned way to teach the course. Even back in the 1980s when I first took a university quantum mechanics class, we learned things in a way more closely related to the way quantum mechanics is used by practising physicists: the mathematical details of Hilbert spaces, path integrals, and Dirac notation.
Today, an up-to-date quantum course would likely start from the perspective of quantum information, distilling quantum mechanics down to its simplest constituents: qubits, systems with just two possible states (instead of the infinite possibilities usually described by the wave function). The interactions become less important, superseded by the information carried by those states.
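For concreteness (my own toy illustration, not anything from the course): a qubit is just a normalised two-component complex vector, and the squared amplitudes give the measurement probabilities.

```python
import math

# A qubit state a|0> + b|1>: P(0) = |a|^2, P(1) = |b|^2 (after normalising).
def measurement_probs(a, b):
    """Probabilities of measuring 0 and 1 for the state a|0> + b|1>."""
    norm = abs(a) ** 2 + abs(b) ** 2
    return abs(a) ** 2 / norm, abs(b) ** 2 / norm

# The equal superposition (|0> + |1>)/sqrt(2) behaves like a fair coin:
p0, p1 = measurement_probs(1 / math.sqrt(2), 1 / math.sqrt(2))
print(p0, p1)
```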
Really, it should be thought of as a full year-long course, and indeed much of the good stuff comes in the second term when the students take “Applications of Quantum Mechanics” in which they study those atoms in greater depth, learn about fermions and bosons and ultimately understand the structure of the periodic table of elements. Later on, they can take courses in the mathematical foundations of quantum mechanics, and, yes, on quantum information, quantum field theory and on the application of quantum physics to much bigger objects in “solid-state physics”.
Despite these structural questions, I was pretty pleased with the course overall: all two-hundred-plus students take it at the beginning of their second year, thirty lectures, ten ungraded problem sheets and seven in-class problems called “classworks”. Still to come: a short test right after New Year’s and the final exam in June. Because it was my first time giving these lectures, and because it’s such an integral part of our teaching, I stuck to the same notes and problems as my recent predecessors (so many, many thanks to my colleagues Paul Dauncey and Danny Segal).
Once the students got over my funny foreign accent, bad board handwriting, and worse jokes, I think I was able to get across the mathematics, the physical principles and, eventually, the underlying weirdness of quantum physics. I kept to the standard Copenhagen Interpretation of quantum physics, in which we think of the aforementioned wave function as a real, physical thing, which evolves under that Schrödinger equation — except when we decide to make a measurement, at which point it undergoes what we call collapse, randomly and seemingly against causality: this was Einstein’s “spooky action at a distance”, which seemed to indicate nature playing dice with our Universe, in contrast to the purely deterministic physics of Newton and Einstein’s own relativity. No one is satisfied with Copenhagen, although a more coherent replacement has yet to be found (I won’t enumerate the possibilities here, except to say that I find the proliferating multiverse of Everett’s Many-Worlds interpretation ontologically extravagant, and Chris Fuchs’ Quantum Bayesianism compelling but incomplete).
I am looking forward to getting this year’s SOLE results to find out for sure, but I think the students learned something, or at least enjoyed trying to, although the applause at the end of each lecture seemed somewhat tinged with British irony.