I’ve been meaning to give a shout-out to my colleagues on the ADAMIS team at the APC (AstroParticule et Cosmologie) Lab at the Université Paris 7 for a while: in addition to doing lots of great work on Planck, EBEX, PolarBear and other important CMB and cosmology experiments, they’ve also been running a group blog since the Autumn, Paper(s) of the Week et les autres choses (scientifique) which dissects some of the more interesting work to come out of the cosmology community. In particular, one of my favorite collaborators has written an extremely astute analysis of what, exactly, we on the Planck team released in our lengthy series of papers last month (which I have already discussed in a somewhat more boosterish fashion).
The Satellite now known as the Planck Surveyor was first conceived in the mid-1990s, in the wake of the results from NASA’s COBE Satellite, the first to detect primordial anisotropies in the Cosmic Microwave Background (CMB), light from about 400,000 years after the big bang. (I am a relative latecomer to the project, having only joined in about 2000.)
After all this time, we on the team are very excited to produce our first scientific results. These take the form of a catalog of sources detected by Planck, along with 25 papers discussing the catalog as well as the more diffuse pattern of radiation on the sky.
Planck is the very first instrument to observe the whole sky with light in nine bands with wavelengths from about 1/3 of a millimeter up to one centimeter, an unprecedented range. In fact this first release of data and papers discusses Planck as a tool for astrophysics — as a telescope observing distant galaxies and clusters of galaxies as well as our own Galaxy, the Milky Way. All of these glow in Planck’s bands (indeed they dominate over the CMB in most of them), and with our high-sensitivity all-sky maps we have the opportunity to do astronomy with Planck, the best microwave telescope ever made. Indeed, to get to this point, we actually have to separate out the CMB from the other sources of emission and, somewhat perversely, actively remove that from the data we are presenting.
Over the last year, then, we on the Planck team have written about 25 papers to support this science; a few of them are about the mission as a whole, the instruments on board Planck, and the data processing pipelines that we have written to produce our data. Then there are a few papers discussing the data we are making available, the Early Release Compact Source Catalog, and its various subsets, covering separately objects within our own Milky Way Galaxy and more distant galaxies and clusters of galaxies. The remaining papers give our first attempts at analyzing the data and extracting the best science possible.
Most of the highlights in the current papers provide confirmation of things that astronomers have suspected, thanks to Planck’s high sensitivity and wide coverage. It has long been surmised that most stars in the Universe are formed in locations shrouded by dust, and hence not visible to optical telescopes. Instead, the birth of stars heats the dust to temperatures much lower than those of stars, but much higher than that of the cold dust far from star-forming regions. This warm dust radiates in Planck’s bands, seen at lower and lower frequencies for more and more distant galaxies (due to the redshift of light from these faraway objects). For the first time, Planck has observed this Cosmic Infrared Background (CIB) at frequencies that may correspond to galaxies forming when the Universe was less than 15% of its current age, less than 2 billion years after the big bang. Here is a picture of the CIB at various places around the sky, specifically chosen to be as free as possible of other sources of emission:
Another exciting result has to do with the properties of that dust in our own Milky Way Galaxy. This so-called cosmic dust is known to be made of very tiny grains, from small agglomerations of a few molecules up to those a few tens of micrometers across. Ever since the mid-1990s, there has been some evidence that this dust emits radiation at millimeter wavelengths that the simplest models could not account for. One idea, actually first proposed in the 1950s, is that some of the dust grains are oblong, and receive enough of a kick from their environment that they spin at very high rates, emitting radiation at a frequency related to that rotation. Planck’s observations seem to confirm this prediction quantitatively, seeing its effects in our galaxy. This image of the Rho Ophiuchi molecular cloud shows that the spinning dust emission at 30 GHz traces the same structures as the thermal emission at 857 GHz:
In addition, Planck has found more than twenty new clusters of galaxies, has mapped the dust and gas in the Milky Way in three dimensions, and uncovered cold gas in nearby galaxies. And this is just the beginning of what Planck is capable of. We have not yet begun to discuss the cosmological implications, nor Planck’s abilities to measure not just the intensity of light, but also its polarization.
Of course the most important thing we have learned so far is how hard it is to work in a team of 400 or so scientists, who — myself included — like neither managing nor being managed (and are likewise not particularly skilled at either). I’ve been involved in a small way in the editing process, shepherding just a few of those 25 papers to completion, paying attention to the language and presentation as much as the science. Given the difficulties, I am relatively happy with the results — the papers can be downloaded directly from ESA, will be available on the arXiv on 12 January 2011, and will eventually be published in the journal Astronomy and Astrophysics. It will be very interesting to see how we manage this in two years when we may have as many as a hundred or so papers at once. Stay tuned.
One excuse for not blogging over the last month was a couple of weeks spent in North America, first in and around New York and New Jersey, visiting my family, and then a stop in Montreal for the annual collaboration meeting for the EBEX CMB balloon project, which we expect to launch on its science mission from Antarctica in about a year (alas I will be most likely minding the fort back here in Britain rather than joining my adventurous colleagues in the frozen South).
But while in New York I got to attend my first proper art auction, one with a very scientific bent — Beautiful Evidence: The Library of Edward Tufte. Tufte is something of an “info-guru”; in a series of gorgeously produced books, he has talked about techniques for translating numbers and words into graphics. Although he’s got an over-strong aversion to computer graphics (and especially to powerpoint), much of his advice is right-on (and rarely heeded).
In the course of selling his books and giving regular, well-attended courses (and, latterly, working for the President), I expect that Tufte (who started out as a Professor of Statistics at Yale) must have amassed a reasonable nest egg, ploughed back into books, pamphlets, artwork and posters. The 127 or so lots cover everything from science and mathematics to dance and fine art.
I was most interested in the scientific books and manuscripts, and the wonderful thing about auctions is that you can play with — sorry, I mean inspect — the items on offer. I couldn’t resist:
That’s me, holding Christiaan Huygens’ Cosmotheoros from 1698. Amazingly, it was one of the few items not to make its reserve price, under $1000 — I could have afforded it with only a little credit. But the most expensive item was an original of Galileo’s Sidereal Messenger, from which has sprung all of astronomy, most of physics, much of science, and indeed a lot of the society in which we live. Given that, $662,000 doesn’t seem unreasonable.
In between those two extremes was another item I was lucky enough to hold: a third edition of Isaac Newton’s Philosophiae Naturalis Principia Mathematica, which went for $16,250. This is the final edition printed during Newton’s lifetime, albeit with edits by one Henry Pemberton, which became “the basis for all subsequent editions” (and was notable for having lost all references to Newton’s rival in the creation of the calculus, Leibniz). Like Galileo’s, it is one of the founding texts of modern science. But scientific progress, all that “standing on the shoulders of giants”, has the slightly strange effect that such books are often mentioned, but rarely read. It is easier to learn Newton’s laws from a twenty-first century textbook (not to mention wikipedia) than from the original sources. Unlike many other such books, the Principia remains almost entirely mathematically and factually correct, but written in such a style — using geometry and pictures instead of equations, not to mention being in Latin — that even modern physicists find it hard to follow. Partially to ameliorate this (and partially to prove that he was one of the few people who could manage the task), the great astrophysicist S. Chandrasekhar decided, in the early 1990s, to produce an edition of “Newton’s Principia for the Common Reader”, translating Newton’s geometry into modern equations. (Needless to say, the book makes impressive demands upon the supposed “common reader”.) We could all do worse than to spend some time trying to get into Newton’s head (or Chandra’s).
[Apologies to those of you who may have seen an inadvertently-published unfinished version of this post]
I’ve just returned from a week at the Annual meeting of the Institute for Mathematical Statistics in Gothenburg, Sweden. It’s always instructive to go to meetings outside of one’s specialty, outside of the proverbial comfort zone. I’ve been in my own field long enough that I’m used to feeling like one of the popular kids, knowing and being known by most of my fellow cosmologists — it’s a good corrective to an overinflated sense of self-worth to be somewhere where nobody knows your name. Having said that, I was a bit disappointed in the turnout for our session, “Statistics, Physics and Astrophysics”. Mathematical statistics is a highly specialized field, but with five or more parallel sessions going on at once, most attendees could find something interesting. However, even cross-cutting sessions of supposedly general interest — our talks were by physicists, not statisticians — didn’t manage to draw a wide audience.
The meeting itself, outside of that session, was very much about mathematical statistics, more about lemmas and proofs than practical data analysis. Of course these theoretical underpinnings are crucial to the eventual practical work, although it’s always disheartening to see the mathematicians idealise a problem all out of recognition. For example, the mathematicians routinely assume that the errors on a measurement are independent and identically distributed (“iid” for short), but in practice this is rarely true in the data that we gather. (I should use this as an opportunity to mention my favourite statistics terms of art: homoscedastic and heteroscedastic, describing errors with, respectively, identical and varying variances.)
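To make the distinction concrete, here is a small sketch in Python (the noise levels are invented purely for illustration): if the errors really are heteroscedastic, treating them as iid throws away the extra information carried by the most precise measurements, and inverse-variance weighting recovers it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Homoscedastic case: every measurement has the same error distribution.
homo = rng.normal(0.0, 1.0, size=n)

# Heteroscedastic case: the error bar varies from measurement to
# measurement, as it usually does in real astronomical data.
sigmas = rng.uniform(0.5, 3.0, size=n)
hetero = rng.normal(0.0, sigmas)

# Estimate the (true, zero) signal two ways from the heteroscedastic data.
naive_mean = hetero.mean()                     # pretends the data are iid
weights = 1.0 / sigmas**2
weighted_mean = np.average(hetero, weights=weights)

# The corresponding uncertainties: the inverse-variance-weighted mean
# is always at least as precise as the naive one.
naive_err = hetero.std(ddof=1) / np.sqrt(n)
weighted_err = 1.0 / np.sqrt(weights.sum())
```

With these made-up numbers the weighted uncertainty comes out roughly two-thirds of the naive one, and the gap grows as the spread among the error bars does.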
But there were more than a couple of interesting talks and sessions, mostly concentrating upon two of the most exciting — and newsworthy — intersections between statistical problems and the real world: finance and climate. How do we compare complicated, badly-sampled, real-world economic or climate data to complicated models which don’t pretend to capture the full range of phenomena? In what sense are the predictions inherently statistical and in what sense are they deterministic? “Probability”, said de Finetti, the famous Bayesian statistician, “does not exist”, by which he meant that probabilities are statements about our knowledge of the world, not statements about the world. The world does, however, give sequences of values (stock prices, temperatures, etc.) which we can test our judgements against. This, in the financial realm, was the discussion of Hans Föllmer’s Medallion Prize Lecture, which veered into the more abstract realm of stochastic integration, martingales and Itō calculus along the way.
Another pleasure was the session chaired by Robert Adler. Adler is the author of The Geometry of Random Fields, a book which has had a significant effect upon cosmology from the 1980s through today. A “random field” is something that you could measure over some regime of space and time, but for which your theory doesn’t determine its actual value, only its statistical properties, such as its average and the way the values at different points are related to one another. The best example in cosmology is the CMB itself — none of our theories predict the temperature at any particular place, but the theories that have survived our tests make predictions about the mean value and about the product of temperatures at any two points — this is called the correlation function, and a random field in which only the mean and correlation function can be specified is called a Gaussian random field, after the Gaussian distribution that is the mathematical version of this description. Indeed, Adler uses the CMB as one of the examples on his academic home page. But there are many more applications besides: the session featured talks on brain imaging and on Google’s use of random fields to analyze data about the way people look at their web pages.
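One reason the Gaussian random field is such a useful model is that it is easy to simulate: you specify only the power spectrum (the Fourier transform of the correlation function) and let a random number generator supply everything else. A toy sketch in Python, with an arbitrary power-law spectrum P(k) ∝ k^-3 chosen just to give the field some large-scale coherence:

```python
import numpy as np

def gaussian_random_field(n=128, alpha=3.0, seed=0):
    """An n x n Gaussian random field with power spectrum P(k) ~ k**(-alpha).
    The theory fixes only the statistics, never the value at any one point."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                      # dodge the division by zero below
    amplitude = k ** (-alpha / 2.0)    # square root of the power spectrum
    amplitude[0, 0] = 0.0              # zero the mean (the k = 0 mode)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    # Taking the real part is a shortcut; a careful version would impose
    # Hermitian symmetry on the Fourier modes instead.
    return np.fft.ifft2(noise * amplitude).real

field = gaussian_random_field()
```

Two calls with different seeds give maps that look nothing alike point by point, yet share identical statistical properties — exactly the situation cosmologists are in with the one CMB sky we can observe.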
Gothenburg itself was pleasant in that Scandinavian way: nice, but not terribly exciting, full of healthy, attractive people who seem pleased with their lot in life. The week of our meeting overlapped with two other important events. The other big meeting in town was the World Library and Information Congress — you can only imagine the party atmosphere in a town filled with both statisticians and librarians! But adding to that, Gothenburg was hosting its summer kulturkalas festival of culture — the streets were filled with musicians and other performers to distract us from the mathematics.
I spent part of this week in Paris (apparently at the same time as a large number of other London-based scientists who were here for other things) discussing whether the European CMB community should rally and respond to ESA’s latest call for proposals for a mission to be launched in the next open slot—which isn’t until around 2022.
As successful as Planck seems to be, and as fun as it is working with the data, I suspect that no one on the Planck team thinks that a 400-scientist, dispersed, international team coming from a dozen countries each with its own politics and funding priorities, is the most efficient way to run such a project. But we’re stuck with it—no single European country can afford the better part of a billion Euros it will cost. Particle physics has been in this mode for the better part of fifty years, and arguably since the Manhattan Project, but it’s a new way of doing things — involving new career structures, new ways of evaluating research, new ways of planning, and a new concentration upon management — that we astrophysicists have to develop to answer our particular kinds of scientific questions.
But a longer discussion of “big science” is for another time. The next CMB satellite will probably be big, but the coming ESA call is officially for an “M-class” (for “medium”) mission, with a meagre (sic) 600 million euro cap. What will the astrophysical and cosmological community get for all this cash? How will it improve upon Planck?
Well, Planck has been designed to mine the cosmic microwave background for all of the temperature information available, the brightness of the microwave sky in all directions, down to the scale of a few arcminutes, below which it becomes smooth. But the CMB also carries information in its polarisation, essentially two more numbers we can measure at every point. Planck will measure some of this polarisation data, but we know that there will be much more to learn. We expect that this as-yet unmeasured polarisation can answer questions about fundamental physics that affects the early universe and describes its content and evolution. What are the details of the early period of inflation that gave the observable Universe its large-scale properties and seeded the formation of structures in it—and did it happen at all? What are the properties of the ubiquitous and light neutrino particles whose presence would have had a small but crucial effect on the evolution of structure?
The importance of these questions is driving us toward a fairly ambitious proposal for the next CMB mission. It will have a resolution comparable to that of Planck, but with many hundreds of individual detectors, compared to Planck’s many dozens—giving us over an order of magnitude increase in sensitivity to polarisation on the sky. Actually, even getting to this point took a good day or two of discussion. Should we instead make a cheaper, more focused proposal that would concentrate only on the question of inflation and in particular upon the background of gravitational radiation — observable as so-called “B-modes” in polarisation — that some theories predict? The problem with this proposal is that it is possible, or even likely, that it will produce what is known as a “null result”—that is, it won’t see anything at all. Moreover, a current generation of ground- and balloon-based CMB experiments, including EBEX and Polarbear, which I am lucky enough to be part of, are in progress, and should have results within the next few years, possibly scooping any too-narrowly designed future satellite.
So we will be broadening our case beyond these B-modes, and therefore making our design more ambitious, in order to make these further fundamental measurements. And, like Planck, we will be opening a new window on the sky for astrophysicists of all stripes, giving measurements of magnetic fields, the shapes of dust grains, and likely many more things we haven’t yet thought of.
One minor upshot of all this is that our original name, the rather dull “B-Pol”, is no longer appropriate. Any ideas?
To celebrate, the Planck team have released an image of the full sky. The telescope has detectors which can see the sky in nine bands at wavelengths ranging from 0.3 millimeters up to nearly a centimeter, out of which we have made this false-color image. The center of the picture is toward the center of the Galaxy, with the rest of the sphere unwrapped into an ellipse so that we can put it onto a computer screen (the left and right edges are really both the same points).
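The “unwrapping” is a Mollweide projection, which maps the whole sphere into an ellipse twice as wide as it is tall while preserving areas. For the curious, here is a rough sketch of the maths in Python (the Newton iteration and its tolerances are just convenient choices):

```python
import math

def mollweide(lon, lat):
    """Project longitude lon in [-pi, pi] and latitude lat in [-pi/2, pi/2]
    (radians) onto Mollweide (x, y), by solving the projection's defining
    equation 2t + sin(2t) = pi * sin(lat) with Newton's method."""
    t = lat
    if abs(lat) < math.pi / 2 - 1e-9:  # the poles are already exact solutions
        for _ in range(50):
            step = (2 * t + math.sin(2 * t) - math.pi * math.sin(lat)) / (
                2 + 2 * math.cos(2 * t)
            )
            t -= step
            if abs(step) < 1e-12:
                break
    # The sky lands inside an ellipse with semi-axes 2*sqrt(2) and sqrt(2);
    # the left and right edges (lon = -pi and +pi) meet around the back.
    x = (2 * math.sqrt(2) / math.pi) * lon * math.cos(t)
    y = math.sqrt(2) * math.sin(t)
    return x, y
```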
At the longest and shortest wavelengths, our view is dominated by matter in our own Milky Way galaxy — this is the purple-blue cloud, mostly so-called galactic “cirrus” gas and dust, largely concentrated in a thin band running through the center which is the disk of our galaxy viewed from within.
In addition to this so-called diffuse emission, we can also see individual, bright blue-white objects. Some of these are within our galaxy, but many are themselves whole distant galaxies viewed from many thousands or millions of light years distance. Here’s a version of the picture with some objects highlighted:
Even though Planck is largely a cosmology mission, we expect these galactic and extragalactic data to be invaluable to astrophysicists of all stripes. Buried in these pictures we hope to find information on the structure and formation of galaxies, on the evolution of very faint magnetic fields, and on the evolution of the most massive objects in the Universe, clusters of galaxies.
But there is plenty of cosmology to be done: we see the Cosmic Microwave Background (CMB) in the red and yellow splotches at the top and bottom — out of the galactic plane. We on the Planck team will be spending much of the next two years separating the galactic and extragalactic “foreground” emission from the CMB, and characterizing its properties in as much detail as we can. Stay tuned.
I admit that I was somewhat taken aback by the level of interest in these pictures: we haven’t released any data to the community, or written any papers. Indeed, we’ve really said nothing at all about science. Yet we’ve made it onto the front page of the Independent and even the Financial Times, and yours truly was quoted on the BBC’s website. I hope this is just a precursor to the excitement we’ll generate when we can actually talk about science, first early next year when we release a catalog of sources on the sky for the community to observe with other telescopes, and then in a couple of years’ time when we will finally drop the real CMB cosmology results.
The cosmology community has had a terrible few months.
I am saddened to report the passing of Andrew Lange, a physicist from Caltech and one of the world’s preeminent experimental cosmologists. Among many other accomplishments, Andrew was one of the leaders of the Boomerang experiment, which made the first large-scale map of the Cosmic Microwave Background radiation with a resolution of less than one degree, sufficient to see the opposing action of gravity and pressure in the gas of the early Universe, and to use that to measure the overall density of matter, among many other cosmological properties. He has since been an important leader in a number of other experiments, notably the Planck Surveyor satellite and the Spider balloon-borne telescope, currently being developed to become one of the most sensitive CMB experiments ever built.
I learned about this tragedy on the same day that people are gathering in Berkeley, California, to mourn the passing of another experimental cosmologist, Huan Tran of Berkeley. Huan was an excellent young scientist, most recently deeply involved in the development of PolarBear, another one of the current generation of ultra-sensitive CMB experiments. Huan led the development of the PolarBear telescope itself, currently being tested in the mountains of California, but to be deployed for real science on the Atacama plateau in Chile. We on the PolarBear team are proud to name the PolarBear telescope after Huan Tran, a token of our esteem for him, and a small tribute to his memory.
My thoughts go out to the friends and family of both Huan and Andrew. I, and many others, will miss them both.
I’m happy to be able to point to ESA’s first post-launch press release from the Planck Surveyor Satellite.
Here is a picture of the area of sky that Planck has observed during its “First Light Survey”, superposed on an optical image of the Milky Way galaxy:
(Image credit: ESA, LFI and HFI Consortia (Planck); Background image: Axel Mellinger. More pictures are available on the UK Planck Site as well as in French.)
The last few months since the launch have been a lot of fun, getting to play with Planck data ourselves. Here at Imperial, our data-processing remit is fairly narrow: we compute and check how well the satellite is pointing where it is supposed to, and calculate the shape of its beam on the sky (i.e., how blurry its vision is). Nonetheless, just being able to work at all with this incredibly high-quality data is satisfying.
Because of the way Planck scans the sky (in individual rings that slowly step around the sky over the course of about seven months, with a nominal mission of two full observations of the sky), even the two weeks of “First Light Survey” data are remarkably powerful: we have seen a bit more than 5% of the sky with about half of the sensitivity that Planck is meant to eventually have (in fact, we hope to extend the mission beyond the initial 14 months). This is already comparable to the most powerful sub-orbital (i.e., ground- and balloon-based) CMB experiments to date.
But a full scientific analysis will have to wait a while: after the 14 month nominal mission, we will have one year to analyze the data, and another year to get science out of it before we need to release the data alongside, we hope, a whole raft of papers. So stay tuned until roughly Autumn of 2012 for the next big Planck splash.
[Warning: this post will be fairly technical and political and may only be of interest to those in the field.]
I spent the first couple of days this week stuck in a room in Cambridge with about 40 of my colleagues pondering a very important question: what is the future of the study of the Cosmic Microwave Background in the UK?
Organized by Keith Grainge of Cambridge’s MRAO, and held at Cambridge’s new Kavli Institute for Cosmology, the workshop brought together a significant fraction of the UK CMB community, from Cambridge itself, Cardiff, Imperial, Manchester, Oxford and elsewhere.
With the recent cancellation of the Clover experiment by STFC, there is no major UK-led CMB experiment (I am making a distinction between CMB experiments per se and those with other primary purposes, such as observing the Sunyaev-Zel’dovich effect with AMI, or astrophysical foregrounds with QUIJOTE). However, there is a huge amount of CMB expertise in the UK, from the design of detectors and telescopes through to the analysis of CMB data.
In the short term, it seems there is some appetite for attempting to revive the Clover effort at some level, perhaps in collaboration with other experimental teams outside of the UK. The major driver — and the only way it makes any sense at all — is to get this done quickly, before the other experiments pursuing the same goals begin to gather data (in the interests of full disclosure, I should point out that I am involved in a couple of those other experiments: EBEX and PolarBear). This decision, I imagine, will be dominated by the politics and economics of the current STFC funding debacle (sorry: debate), as well as what I understand are the internal relationships of the Clover team.
So of more scientific interest is the question of what to do next. Right now, the UK astronomy and particle physics community is undertaking a series of consultations to figure out what it thinks are the most important topics, instruments and experiments to concentrate upon over the next few years. One very real possibility is that we could decide not to lead any new CMB experiments, but just to continue to lend our expertise to other efforts. This is cost-effective but unsatisfying, especially to experimentalists who want to take the lead in the design of new efforts. The only viable alternative, I think, is for the community to come together and, with apologies for the cliche, speak with a unified voice in support of a coherent plan. There is enough expertise in the UK to produce great CMB science over the next decade, but it is thinly spread. The basic design of any such experiment is clear: thousands of detectors observing the sky over as many frequencies as possible. But the details — exactly what sorts of detectors, flown from a balloon or stationary on the ground, or to wait for a future satellite — will be crucial to the success or otherwise of the experiment. Unfortunately, these decisions can often degenerate into “not-invented-here” syndrome and personality clashes between strong scientific egos. But as Ben Franklin said on signing the Declaration of Independence, “we must, indeed, all hang together, or assuredly we shall all hang separately.”
Right now, the UK’s astronomy and nuclear/particle physics research council, STFC, is supposedly undergoing a series of “consultations” with the community to try to figure out exactly which of the many possible big-ticket items (telescopes, satellites, particle detectors, etc.) the community wants to pursue.
In the meantime, however, things are proceeding in their usual autocratic way, as our financial overlords attempt to deal with the financial shortfall that a combination of bad luck, the global financial crisis, their own mismanagement, and government policy (in no particular order), has bequeathed the council.
Following on the cancellation of the Clover CMB experiment, this week we heard that the number of Advanced Fellowships per year will be cut in half, from twelve to six for all of astronomy and particle physics, and that the outreach budget will be cut by even more.
I came to the UK on an AF, and so have a soft spot for the program: the five-year fellowships have a very high profile worldwide and are indeed open to applicants from all over the world. They have traditionally been one of the best ways to attract and retain young scientists. In many institutions, coming with the imprimatur of a pretty rigorous peer-review process, they lead directly to a truly permanent academic position.
As my fellow AF-alumnus, Peter Coles (from whom I got most of this information and the inspiration for writing it here), puts it: “Who needs half a dozen top class scientists when you can have Moonlite instead?”
Update: There was a package on BBC news today, lamenting the state of UK “space policy” — even the representative from EADS Astrium (“industry”) was complaining. Meanwhile, Lord Drayson, the “space minister”, was on the Politics Show, at least admitting this sort of thing “is going to cost money” — especially the twenty-year plan he wants.
Not all CMB (Cosmic Microwave Background) experiments get launched on a rocket.
There’s a long history of telescopes flown from balloons — huge mylar balloons floating over 100,000 feet in the air. MAXIMA and BOOMERaNG were the first experiments to map out the microwave sky on the sub-degree scales containing information about the detailed physical conditions in the Universe over the first few hundred thousand years after the Big Bang. The Planck Satellite will close out that era of CMB experiments, by giving us a complete picture of the microwave sky down to less than a tenth of a degree.
But there is still more to be done, even beyond what Planck is capable of. By measuring the polarization of the microwave background at even higher sensitivities than Planck, we hope to observe the effects of gravitational radiation in the early Universe.
Last week, EBEX, one of a new generation of balloon-borne experiments designed specifically with this goal in mind, had its maiden flight from Fort Sumner, New Mexico.
EBEX Launch, 6/11/09 from asad137 on Vimeo.
It’s worth remembering, of course, that even with a parachute, these telescopes hit the ground pretty hard. But these things are amazingly well-built, and the EBEX crew have managed to recover most of the hardware and all of the data. So now the team have some time to get the hardware and software ready to fly for a couple of weeks over Antarctica next year.
And let’s not forget that New Mexico is also the home of Roswell, where conspiracy theorists and other wackjobs have long been trying to uncover a government cover-up of UFO sightings. Indeed, the EBEX balloon was spotted, but at least in neighbouring Arizona, they can tell the difference.
Meanwhile, another CMB experiment, PolarBear, is about to start its first set of important tests. PolarBear is a ground-based telescope, which means it can watch the sky for far longer than a balloon, at the cost of being at the bottom of the atmosphere and all of the extra noise that adds to the signal. So despite some hard times (especially here in the UK), the next generation of CMB experiments are on the way, hoping to probe all the way back to the epoch of inflation.
In today’s Sunday NY Times Magazine, there’s a long article by psychologist Steven Pinker, on “Personal Genomics”, the growing ability for individuals to get information about their genetic inheritance. He discusses the evolution of psychological traits versus intelligence, and highlights the complicated interaction amongst genes, and between genes and society.
But what caught my eye was this paragraph:
What should I make of the nonsensical news that I… have a “twofold risk of baldness”? … 40 percent of men with the C version of the rs2180439 SNP are bald, compared with 80 percent of men with the T version, and I have the T. But something strange happens when you take a number representing the proportion of people in a sample and apply it to a single individual…. Anyone who knows me can confirm that I’m not 80 percent bald, or even 80 percent likely to be bald; I’m 100 percent likely not to be bald. The most charitable interpretation of the number when applied to me is, “If you knew nothing else about me, your subjective confidence that I am bald, on a scale of 0 to 10, should be 8.” But that is a statement about your mental state, not my physical one. If you learned more clues about me (like seeing photographs of my father and grandfathers), that number would change, while not a hair on my head would be different. [Emphasis mine].
That “charitable interpretation” of the 80% likelihood to be bald is exactly Bayesian statistics (which I’ve talked about, possibly ad nauseam, before): it’s the translation from some objective data about the world — the frequency of baldness in carriers of this gene — into a subjective statement about the top of Pinker’s head, in the absence of any other information. And that’s the point of probability: given enough of that objective data, scientists will come to agreement. But even in the state of uncertainty that most scientists find themselves, Bayesian probability forces us to enumerate the assumptions (usually called “prior probabilities”) that enter into our reasoning along with the data. Hence, if you knew Pinker, your prior would be that he’s fully hirsute (perhaps not 100% if you allow for the possibility of hair extensions and toupees); but if you didn’t then you’d probably be willing to take 4:1 odds on a bet about his baldness — and you would lose to someone with more information.
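The mechanics of that update are just Bayes’ rule. Here’s a minimal sketch: the 80% figure for T-carriers comes straight from the quoted passage, but the “photo evidence” likelihoods below are invented purely for illustration.

```python
def posterior(prior, like_if_bald, like_if_not):
    """Bayes' rule for a binary hypothesis (bald / not bald)."""
    num = prior * like_if_bald
    return num / (num + (1 - prior) * like_if_not)

# Knowing only the SNP: 80% of men with the T version are bald,
# so your subjective odds should be 4:1.
p_bald = 0.80
print(f"odds given the SNP alone: {p_bald / (1 - p_bald):.0f}:1")  # 4:1

# Now fold in (hypothetical) extra clues -- say, photographs showing
# a full head of hair, which we assume are 50 times more likely if
# he is in fact not bald than if he is.
p_bald = posterior(p_bald, like_if_bald=0.01, like_if_not=0.50)
print(f"posterior after seeing the photos: {p_bald:.3f}")
```

The data about the world haven’t changed, and neither has a hair on his head; only your state of knowledge — and hence the probability — has.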
In science, of course, it usually isn’t about wagering, but just about coming to agreement about the state of the world: do the predictions of a theory fit the data, given the inevitable noise in our measurements, and the difficulty of working out the predictions of interesting theoretical ideas? In cosmology, this is particularly difficult: we can’t go out and do the equivalent of surveying a cross section of the population for their genes: we’ve got only one universe, and can only observe a small patch of it. So probabilities become even more subjective and difficult to tie uniquely to the data. Hence the information available to us on the very largest observable scales is scarce, and unlikely to improve much, despite tantalizing hints of data discrepant with our theories, such as the possibly mysterious alignment of patterns in the Cosmic Microwave Background on very large angles of the sky (discussed recently by Peter Coles here). Indeed, much of the data pointing to a possible problem was actually available from the COBE Satellite; results from the more recent and much more sensitive WMAP Satellite have only reinforced the original problems — we hope that the Planck Surveyor — to be launched in April! — will actually be able to shed light on the problem by providing genuinely new information about the polarization of the CMB on large scales to complement the temperature maps from COBE and WMAP.
Although the big satellites get most of the press, a lot of astronomy is done from balloons, huge Mylar bubbles that can carry a gondola up to about 120,000 feet over the earth — more than 22 miles or 36 km. That’s high enough that much of the atmospheric contamination is gone, but a lot cheaper and easier to reach than orbit. I’ve been involved in the BOOMERaNG and MAXIMA balloon experiments, to measure the Cosmic Microwave Background, and currently with EBEX. Some experiments, BOOMERaNG among them, take advantage of the conditions at the South Pole and launch from Antarctica, using the “polar vortex” in the atmosphere to keep the balloon aloft for as much as a couple of weeks. (I should point out that for me, “involved with” means that I stay home where it’s warm and comfortable, but get to play with the data once my hardier colleagues return from the field.)
If you want to get a feel for ballooning, check out BLAST!, a film made of the campaign to fly the eponymous experiment (the acronym stands for Balloon-borne Large-Aperture Sub-millimeter Telescope), made by Paul Devlin, the film-maker brother of one of the experiment’s Principal Investigators. It follows the team from their university labs to the Northern launch site in Scandinavia, and finally to Antarctica. I haven’t seen the whole thing yet, but I’m told it does a good job of giving the impression of the alternating excitement and boredom — and lofty goals — of these experiments.
A quick pointer to Initiative for Cosmology (iCosmo). The website brings together a bunch of useful calculations for physical cosmology — relatively simple quantities like the relationship between redshift and distance, and also more complicated ones like the power spectrum of density perturbations (which tells us the distribution of galaxies on the largest scales in the Universe) and quantities derived from that, like the distortions in the shapes of galaxies due to gravitational lensing, when the path of light from galaxies is perturbed by intervening mass in the Universe. Combined with good documentation and tutorials (and downloadable source), it makes a good companion to sites such as LAMBDA’s CMB toolbox, which provides similar services targeted specifically at Cosmic Microwave Background science. iCosmo looks like it will be useful for researchers in the field as well as students, so thanks and congratulations to its creators. (I’d like to point directly at the page listing them, but that doesn’t seem to be possible… instead, there’s a discussion forum at CosmoCoffee.)
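To give a flavour of the “relatively simple” end of what such calculators do, here’s a sketch of the redshift–distance relation in a flat ΛCDM universe: the comoving distance is (c/H₀)∫₀ᶻ dz′/E(z′), with E(z) = √(Ω<sub>m</sub>(1+z)³ + Ω<sub>Λ</sub>). The parameter values below are illustrative round numbers, not tied to any particular dataset or to iCosmo’s defaults.

```python
from math import sqrt

C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance(z, h0=70.0, omega_m=0.3, n=10000):
    """Comoving distance in Mpc for a flat LambdaCDM universe,
    via a trapezoidal-rule integral of c/H(z') from 0 to z."""
    omega_l = 1.0 - omega_m  # flatness fixes the dark-energy fraction
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zp = i * dz
        e = sqrt(omega_m * (1 + zp) ** 3 + omega_l)  # H(z)/H0
        weight = 0.5 if i in (0, n) else 1.0         # trapezoid endpoints
        total += weight / e
    return (C_KM_S / h0) * total * dz

print(f"comoving distance to z=1: {comoving_distance(1.0):.0f} Mpc")
```

The more complicated quantities the site offers — power spectra, lensing distortions — are built on integrals like this one, just with more physics inside the integrand.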
PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics) is a Russian-Italian satellite measuring the composition of cosmic rays. One of the motivations for the measurements is the indirect detection of dark matter — the very-weakly-interacting particles that make up about 25% of the matter in the Universe (with, as I’m sure you all know by now, normal matter making up about 5% and the so-called Dark Energy the remaining 70%). By observing the decay products of the dark matter — with more decay occurring in the densest locations — we can probe the properties of the dark particles. So far, these decays haven’t yet been unequivocally observed. Recently, however, members of the PAMELA collaboration have been out giving talks, carefully labelled “preliminary”, showing the kind of excess cosmic ray flux that dark matter might be expected to produce.
But preliminary data is just that, and there’s a (usually) unwritten rule that the audience certainly shouldn’t rely on the numerical details in talks like these. Cirelli & Strumia have written a paper based on those numbers, “Minimal Dark Matter predictions and the PAMELA positron excess” (arXiv:0808.3867), arguing that the data fits their pet dark-matter model, so-called minimal dark matter (MDM). MDM adds just a single type of particle to those we know about, compared to the generally-favored supersymmetric (SUSY) dark matter model which doubles the number of particle types in the Universe (but has other motivations as well). What do the authors base their results on? As they say in a footnote, “the preliminary data points for positron and antiproton fluxes plotted in our figures have been extracted from a photo of the slides taken during the talk, and can thereby slightly differ from the data that the PAMELA collaboration will officially publish” (originally pointed out to me in the physics arXiv blog).
This makes me very uncomfortable. It would be one thing to write a paper saying that recent presentations from the PAMELA team have hinted at an excess — that’s public knowledge. But a photograph of the slides sounds more like amateur spycraft than legitimate scientific data-sharing.
Indeed, it’s to avoid such inadvertent data-sharing (which has happened in the CMB community in the past) that the Planck Satellite team has come up with its rather draconian communication policy (which is itself located in a password-protected site): essentially, the first rule of Planck is you do not talk about Planck. The second rule of Planck is you do not talk about Planck. And you don’t leave paper in the printer, or plots on your screen. Not always easy in our hot-house academic environments.
Update: Bergstrom, Bringmann, & Edsjo, “New Positron Spectral Features from Supersymmetric Dark Matter - a Way to Explain the PAMELA Data?” (arXiv: 0808.3725) also refers to the unpublished data, but presents a blue swathe in a plot rather than individual points. This seems a slightly more legitimate way to discuss unpublished data. Or am I just quibbling?
Update 2: One of the authors of the MDM paper comments below. He makes one very important point, which I didn’t know about: “Before doing anything with those points we asked the spokeperson of the collaboration at the Conference, who agreed and said that there was no problem”. Essentially, I think that absolves them of any “wrongdoing” — if the owners of the data don’t have a problem with it, then we shouldn’t, either (although absent that I think the situation would still be dicey, despite the arguments below and elsewhere). And so now we should get onto the really interesting question: is this evidence for dark matter, and, if so, for this particular model? (An opportunity for Bayesian model comparison!?)
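For readers unfamiliar with the idea, Bayesian model comparison ranks models by their evidence — the probability of the data under each model — and the ratio of evidences is the Bayes factor. Here’s the idea in miniature, for two models with no free parameters (so the evidence is just the likelihood); the numbers are entirely made up and have nothing to do with the actual PAMELA measurements.

```python
from math import exp, sqrt, pi

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

observed_excess = 1.2  # a toy measurement, in arbitrary units

# Model A ("dark-matter signal") predicts an excess near 1.0;
# Model B ("background only") predicts an excess near 0.
# With no free parameters, each model's evidence is its likelihood.
evidence_a = gauss(observed_excess, mu=1.0, sigma=0.3)
evidence_b = gauss(observed_excess, mu=0.0, sigma=0.3)

bayes_factor = evidence_a / evidence_b
print(f"Bayes factor (signal vs background): {bayes_factor:.1f}")
```

In a real comparison each model’s free parameters (the MDM particle mass, the SUSY parameter space) would be integrated over, weighted by their priors — which is exactly where the Bayesian bookkeeping of assumptions earns its keep.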