JSONfeed

More technical stuff, but I’m trying to re-train myself to actually write on this blog, so here goes…

For no good reason other than it was easy, I have added a JSONfeed to this blog. It can be found at http://andrewjaffe.net/blog/feed.json, and accessed from the bottom of the right-hand sidebar if you’re actually reading this at andrewjaffe.net.

What does this mean? JSONfeed is an idea for a sort-of successor to something called RSS, which may stand for Really Simple Syndication, a format for encapsulating the contents of a blog like this one so it can be indexed, consumed, and read in a variety of ways without explicitly going to my web page. RSS was created by developer, writer, and all-around web-and-software guru Dave Winer, who also arguably invented — and was certainly part of the creation of — blogs and podcasting. Five or ten years ago, so-called RSS readers were starting to become a common way to consume news online. NetNewsWire was my old favourite on the Mac, although its original versions by Brent Simmons were much better than the current incarnation by a different software company; I now use something called Reeder. But the most famous one was Google Reader, which Google discontinued in 2013, thereby killing off most of the RSS-reader ecosystem.

But RSS is not dead: RSS readers still exist, and it is still used to store and transfer information between web pages. Perhaps most importantly, it is the format behind subscriptions to podcasts, whether you get them through Apple or Android or almost anyone else.

But RSS is kind of clunky, because it’s built on something called XML, an ugly but readable format for structuring information in files (HTML, used for the web, with all of its < and > “tags”, is a close cousin). Nowadays, people use a simpler format called JSON for many of the same purposes as XML; it is quite a bit easier for humans to read and write, and (not coincidentally) quite a bit easier to write computer programs that read and write it.

So, finally, two more web-and-software developers/gurus, Brent Simmons and Manton Reece, realised they could use JSON for the same purposes as RSS. Simmons is behind NetNewsWire, and Reece’s most recent project is an “indie microblogging” platform (think Twitter without the giant company behind it), so they both have an interest in these things. And because JSON is so comparatively easy to use, there is already code that I could easily add to this blog so it would have its own JSONfeed. So I did it.
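
In case you’re curious what the feed actually looks like, here is a minimal sketch of generating one with Python’s standard library. The field names follow the JSON Feed spec at jsonfeed.org, but the title, item, and date below are made-up placeholders, not my real entries:

    import json

    # Top-level fields from the JSON Feed spec; the values here are
    # hypothetical placeholders standing in for real blog data.
    feed = {
        "version": "https://jsonfeed.org/version/1",
        "title": "My Blog",
        "home_page_url": "http://andrewjaffe.net/blog/",
        "feed_url": "http://andrewjaffe.net/blog/feed.json",
        "items": [
            {
                "id": "http://andrewjaffe.net/blog/example-post.html",
                "url": "http://andrewjaffe.net/blog/example-post.html",
                "title": "An example post",
                "content_text": "The text of the post goes here.",
                "date_published": "2017-05-20T12:00:00+01:00",
            }
        ],
    }

    # Write it out where the web server can find it.
    with open("feed.json", "w") as f:
        json.dump(feed, f, indent=2)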

So it’s easy to create a JSONfeed. What there aren’t — so far — are newsreaders like NetNewsWire or Reeder that can ingest them. (In fact, Maxime Vaillancourt apparently wrote a web-based reader in about an hour, but it may already be overloaded…). Still, I’m looking forward to seeing what happens.

Python Bug Hunting

This is a technical, nerdy post, mostly so I can find the information if I need it later, but possibly of interest to others using a Mac with the Python programming language, and also since I am looking for excuses to write more here.

It seems that there is a bug in the latest (mid-May 2017) release of Apple’s macOS Sierra 10.12.5 (ok, there are plenty of bugs, as there are in any sufficiently complex piece of software).

It first manifested itself (to me) as an error when I tried to load the jupyter notebook, a web-based graphical front end to Python (and other languages). When the command is run, it opens up a browser window. However, after updating macOS from 10.12.4 to 10.12.5, the browser didn’t open. Instead, I saw an error message:

    0:97: execution error: "http://localhost:8888/tree?token=<removed>" doesn't understand the "open location" message. (-1708)

A little googling found that other people had seen this error, too. I was able to figure out a workaround pretty quickly: this behaviour only happens when I want to use the “default” browser, which is set in the “General” tab of the “System Preferences” app on the Mac (I have it set to Apple’s own “Safari” browser, but you can use Firefox or Chrome or something else). Instead, you can explicitly set the browser that you want jupyter to use by editing the text file at ~/.jupyter/jupyter_notebook_config.py and including the line

c.NotebookApp.browser = u'Safari'

(although an unrelated bug in Python means that you can’t currently use “Chrome” in this slot).

But it turns out this isn’t the real problem. I went and looked at the code in jupyter that is run here, and it uses a Python module called webbrowser. Even outside of jupyter, trying to use this module to open the default browser fails, with exactly the same error message (though I’m picking a simpler URL at http://python.org instead of the jupyter-related one above):

>>> import webbrowser
>>> br = webbrowser.get()
>>> br.open("http://python.org")
0:33: execution error: "http://python.org" doesn't understand the "open location" message. (-1708)
False

So I reported this as an error in the Python bug-reporting system, and hoped that someone with more experience would look at it.

But it nagged at me, so I went and looked at the source code for the webbrowser module. There, it turns out that the programmers use a macOS command called “osascript” (which is a command-line interface to Apple’s macOS automation language “AppleScript”) to launch the browser, with a slightly different syntax for the default browser compared to explicitly picking one. Basically, the command is osascript -e 'open location "http://www.python.org/"'. And this fails with exactly the same error message. (The similar code osascript -e 'tell application "Safari" to open location "http://www.python.org/"' which picks a specific browser runs just fine, which is why explicitly setting “Safari” back in the jupyter file works.)
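
If you want to reproduce this without going through the webbrowser module at all, here is a minimal sketch that just shells out to the same two osascript commands quoted above; on an affected 10.12.5 machine, the first fails and the second succeeds:

    import subprocess

    # The "default browser" form -- this is the one that fails on
    # macOS 10.12.5 with the "open location" error:
    subprocess.run(
        ["osascript", "-e", 'open location "http://www.python.org/"'])

    # The explicit-browser form -- this one works, which is why setting
    # "Safari" in the jupyter config file is an effective workaround:
    subprocess.run(
        ["osascript", "-e",
         'tell application "Safari" to open location "http://www.python.org/"'])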

But there is another way to run the exact same AppleScript command. Open the Mac app called “Script Editor”, type open location "http://python.org" into the window, and press the “run” button. From the experience with “osascript”, I expected it to fail, but it didn’t: it ran just fine.

So the bug is very specific, and very obscure: it depends on exactly how the offending command is run, so it appears to be a proper bug, and not some sort of security patch from Apple (and it certainly doesn’t appear in the 10.12.5 release notes). I have filed a bug report with Apple, but these reports are not publicly accessible, and are purported to be something of a black hole, with little feedback from the still-secretive Apple development team.

Knightian Uncertainty

[Update: I have fixed some broken links, and modified the discussion of QBism and the recent paper by Chris Fuchs — thanks to Chris himself for taking the time to read and find my mistakes!]

For some reason, I’ve come across an idea called “Knightian Uncertainty” quite a bit lately. Frank Knight was an economist of the free-market conservative “Chicago School”, who considered various concepts related to probability in a book called Risk, Uncertainty, and Profit. He distinguished between “risk”, which he defined as applying to events to which we can assign a numerical probability, and “uncertainty”, which applies to events about which we know so little that we don’t even have a probability to assign, or indeed events whose possibility we didn’t even contemplate until they occurred. In Rumsfeldian language, “risk” applies to “known unknowns”, and “uncertainty” to “unknown unknowns”. Or, as Nassim Nicholas Taleb put it, “risk” is about “white swans”, while “uncertainty” is about those unexpected “black swans”.

(As a linguistic aside, to me, “uncertainty” seems a milder term than “risk”, and so the naming of the concepts is backwards.)

Actually, there are a couple of slightly different concepts at play here. The black swans or unknown-unknowns are events that one wouldn’t have known enough about to even include in the probabilities being assigned. This is much more severe than those events that one knows about, but for which one doesn’t have a good probability to assign.

And the important word here is “assign”. Probabilities are not something out there in nature, but in our heads. So what should a Bayesian make of these sorts of uncertainty? By definition, they can’t be used in Bayes’ theorem, which requires specifying a probability distribution. Bayesian theory is all about making models of the world: we posit a mechanism and possible outcomes, and assign probabilities to the parts of the model that we don’t know about.
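
Just as a reminder, the theorem itself says how data d should update our belief in the parameters θ of a model:

    P(θ | d) = P(d | θ) P(θ) / P(d)

To use it at all, you have to be able to write down the likelihood P(d | θ) and the prior P(θ); Knightian uncertainty is precisely the situation where you can’t.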

So I think the two different types of Knightian uncertainty have quite a different role here. In the case where we know that some event is possible, but we don’t really know what probabilities to assign to it, we at least have a starting point. If our model is broad enough, then enough data will allow us to measure the parameters that describe it. For example, in recent years people have started to realise that the frequencies of rare, catastrophic events (financial crashes, earthquakes, etc.) are very often well described by so-called power-law distributions. These assign much greater probabilities to such events than more typical Gaussian (bell-shaped curve) distributions; the shorthand for this is that power-law distributions have much heavier tails than Gaussians. As long as our model includes the possibility of these heavy tails, we should be able to make predictions based on data, although very often those predictions won’t be very precise.
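
To put rough numbers on those “heavier tails”, here is a quick illustration of my own (the two distributions aren’t matched in scale, so this is purely qualitative), using scipy:

    from scipy import stats

    gauss = stats.norm()        # standard Gaussian (bell curve)
    power = stats.pareto(b=2)   # a power-law (Pareto) distribution, index 2

    # Probability of an event at least this far out in the tail:
    for x in (2, 5, 10):
        print(x, gauss.sf(x), power.sf(x))

    # The Gaussian survival probability falls off like exp(-x**2/2), the
    # power law only like 1/x**2, so at x = 10 the power law gives odds
    # of 1 in 100 for an event the Gaussian deems essentially impossible.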

But the “black swan” problem is much worse: these are possibilities that we don’t even know enough about to consider in our model. Almost by definition, one can’t say anything at all about this sort of uncertainty. But what one must do is be open-minded enough to adjust our models in the face of new data: we can’t predict the black swan, but we should expand the model after we’ve seen the first one (and perhaps revise our model for other waterfowl to allow more varieties!). In more traditional scientific settings, involving measurements with errors, this is even more difficult: a seemingly anomalous result, not allowed in the model, may be due to some mistake in the experimental setup or in our characterisation of the probabilities of those inevitable errors (perhaps they should be described by heavy-tailed power laws, rather than Gaussian distributions as above).

I first came across the concept as an oblique reference in a recent paper by Chris Fuchs, writing about his idea of QBism (or see here for a more philosophically-oriented discussion), an interpretation of quantum mechanics that takes seriously the Bayesian principle that all probabilities are about our knowledge of the world, rather than the world itself (which is a discussion for another day). He tentatively opined that the probabilities in quantum mechanics are themselves “Knightian”, referring not to a reading of Knight himself but to some recent, and to me frankly bizarre, ideas from Scott Aaronson, discussed in his paper, The Ghost in the Quantum Turing Machine, and an accompanying blog post. In those, Aaronson tries to base something like “free will” (a term he explicitly does not apply to this idea, however) on the possibility of our brains having so-called “freebits”: quantum states whose probabilities are essentially uncorrelated with anything else in the Universe. This arises from what is to me a mistaken desire to equate “freedom” with complete unpredictability. My take on free will is instead aligned with that of Daniel Dennett, at least the version from his Consciousness Explained from the early 1990s (I haven’t yet had the chance to read his recent From Bacteria to Bach and Back): a perfectly deterministic (or quantum-mechanically random, even allowing for the statistical correlations that Aaronson wants to be rid of) version of free will is completely sensible, and indeed may be the only kind of free will worth having.

Fuchs himself tentatively uses Aaronson’s “Knightian Freedom” to refer to his own idea

that nature does what it wants, without a mechanism underneath, and without any “hidden hand” of the likes of Richard von Mises’s Kollective or Karl Popper’s propensities or David Lewis’s objective chances, or indeed any conception that would diminish the autonomy of nature’s events,

which I think is an attempt (and which I admit I don’t completely understand) to remove the probabilities of quantum mechanics entirely from any mechanistic account of physical systems, despite the incredible success of those probabilities in predicting the outcomes of experiments and other observations of quantum mechanical systems. I’m not quite sure this is what either Knight or Aaronson had in mind with their use of “uncertainty” (or “freedom”), since at least in quantum mechanics, we do know what probabilities to assign, given certain other personal (as Fuchs would have it) information about the system. My Bayesian predilections make me sympathetic with this idea, but then I struggle to understand what, exactly, quantum mechanics has taught us about the world: why do the predictions of quantum mechanics work?

When I’m not thinking about physics, for the last year or so my mind has been occupied with politics, so I was amused to see Knightian Uncertainty crop up in a New Yorker article about Trump’s effect on the stock market:

Still, in economics there’s a famous distinction, developed by the great Chicago economist Frank Knight, between risk and uncertainty. Risk is when you don’t know exactly what will happen but nonetheless have a sense of the possibilities and their relative likelihood. Uncertainty is when you’re so unsure about the future that you have no way of calculating how likely various outcomes are. Business is betting that Trump is risky but not uncertain—he may shake things up, but he isn’t going to blow them up. What they’re not taking seriously is the possibility that Trump may be willing to do things—like start a trade war with China or a real war with Iran—whose outcomes would be truly uncertain.

It’s a pretty low bar, but we can only hope.

SOLE Survivor

I recently finished my last term lecturing our second-year Quantum Mechanics course, which I taught for five years. It’s a required class, a mathematical introduction to one of the most important sets of ideas in all of physics, and really the basis for much of what we do, whether that’s astrophysics or particle physics or almost anything else. It’s a slightly “old-fashioned” course, although it covers the important basic ideas: the Schrödinger Equation, the postulates of quantum mechanics, angular momentum, and spin, leading almost up to what is needed to understand the crowning achievement of early quantum theory: the structure of the hydrogen atom (and other atoms).

A more modern approach might start with qubits: the simplest systems that show quantum mechanical behaviour, and the study of which has led to the revolution in quantum information and quantum computing.

Moreover, the lectures rely on the so-called Copenhagen interpretation, which is the confusing and sometimes contradictory way that most physicists are taught to think about the basic ontology of quantum mechanics: what it says about what the world is “made of” and what happens when you make a quantum-mechanical measurement of that world. Indeed, it’s so confusing and contradictory that you really need another rule so that you don’t complain when you start to think too deeply about it: “shut up and calculate”. A more modern approach might also discuss the many-worlds approach, and — my current favourite — the (of course) Bayesian ideas of QBism.

The students seemed pleased with the course as it is — at the end of the term, they have the chance to give us some feedback through our “Student On-Line Evaluation” system, and my marks have been pretty consistent. Of the 200 or so students in the class, only about 90 bothered to give their evaluations, which is disappointingly few. But it’s enough (I hope) to get a feeling for what they thought.

[Chart: SOLE 2016 evaluation results]

So, most students Definitely/Mostly Agree with the good things, although it’s clear that our students are most disappointed in the feedback that they receive from us (this is a general issue for us in Physics at Imperial, and indeed beyond, and may partially explain why most of them are unwilling to feed back to us through this form).

But much more fun and occasionally revealing are the “free-text comments”. Given the numerical scores, it’s not too surprising that there were plenty of positive ones:

  • Excellent lecturer - was enthusiastic and made you want to listen and learn well. Explained theory very well and clearly and showed he responded to suggestions on how to improve.

  • Possibly the best lecturer of this term.

  • Thanks for providing me with the knowledge and top level banter.

  • One of my favourite lecturers so far, Jaffe was entertaining and cleary very knowledgeable. He was always open to answering questions, no matter how simple they may be, and gave plenty of opportunity for students to ask them during lectures. I found this highly beneficial. His lecturing style incorporates well the blackboards, projectors and speach and he finds a nice balance between them. He can be a little erratic sometimes, which can cause confusion (e.g. suddenly remembering that he forgot to write something on the board while talking about something else completely and not really explaining what he wrote to correct it), but this is only a minor fix. Overall VERY HAPPY with this lecturer!

But some were more mixed:

  • One of the best, and funniest, lecturers I’ve had. However, there are some important conclusions which are non-intuitively derived from the mathematics, which would be made clearer if they were stated explicitly, e.g. by writing them on the board.

  • I felt this was the first time I really got a strong qualitative grasp of quantum mechanics, which I certainly owe to Prof Jaffe’s awesome lectures. Sadly I can’t quite say the same about my theoretical grasp; I felt the final third of the course less accessible, particularly when tackling angular momentum. At times, I struggled to contextualise the maths on the board, especially when using new techniques or notation. I mostly managed to follow Prof Jaffe’s derivations and explanations, but struggled to understand the greater meaning. This could be improved on next year. Apart from that, I really enjoyed going to the lectures and thought Prof Jaffe did a great job!

  • The course was inevitably very difficult to follow.

And several students explicitly commented on my attempts to get students to ask questions in as public a way as possible, so that everyone can benefit from the answers and — this really is true! — because there really are no embarrassing questions!

  • Really good at explaining and very engaging. Can seem a little abrasive at times. People don’t like asking questions in lectures, and not really liking people to ask questions in private afterwards, it ultimately means that no questions really get answered. Also, not answering questions by email makes sense, but no one really uses the blackboard form, so again no one really gets any questions answered. Though the rationale behind not answering email questions makes sense, it does seem a little unnecessarily difficult.

  • We are told not to ask questions privately so that everyone can learn from our doubts/misunderstandings, but I, amongst many people, don’t have the confidence to ask a question in front of 250 people during a lecture.

  • Forcing people to ask questions in lectures or publically on a message board is inappropriate. I understand it makes less work for you, but many students do not have the confidence to ask so openly, you are discouraging them from clarifying their understanding.

Inevitably, some of the comments were contradictory:

  • Would have been helpful to go through examples in lectures rather than going over the long-winded maths to derive equations/relationships that are already in the notes.

  • Professor Jaffe is very good at explaining the material. I really enjoyed his lectures. It was good that the important mathematics was covered in the lectures, with the bulk of the algebra that did not contribute to understanding being left to the handouts. This ensured we did not get bogged down in unnecessary mathematics and that there was more emphasis on the physics. I liked how Professor Jaffe would sometimes guide us through the important physics behind the mathematics. That made sure I did not get lost in the maths. A great lecture course!

And also inevitably, some students wanted to know more about the exam:

  • It is a difficult module, however well covered. The large amount of content (between lecture notes and handouts) is useful. Could you please identify what is examinable though as it is currently unclear and I would like to focus my time appropriately?

And one comment was particularly worrying (along with my seeming “a little abrasive at times”, above):

  • The lecturer was really good in lectures. however, during office hours he was a bit arrogant and did not approach the student nicely, in contrast to the behaviour of all the other professors I have spoken to

If any of the students are reading this, and are willing to comment further on this, I’d love to know more — I definitely don’t want to seem (or be!) arrogant or abrasive.

But I’m happy to see that most students don’t seem to think so, and even happier to have learned that I’ve been nominated “multiple times” for Imperial’s Student Academic Choice Awards!

Finally, best of luck to my colleague Jonathan Pritchard, who will be taking over teaching the course next year.

Electoral woes and votes

Like everyone else in my bubble, I’ve been angrily obsessing about the outcome of the US Presidential election for the last two weeks. I’d like to say that I’ve been channelling that obsession into action, but so far I’ve mostly been reading and hoping (and being disappointed). And trying to parse all the “explanations” for Trump’s election.

Mostly, it’s been about what the Democrats did wrong (imperfect Hillary, ignoring the white working class, not visiting Wisconsin, too much identity politics), and what the Republicans did right (imperfect Trump, dog whistles, focusing on economics and security).

But there has been an ongoing strain of purely procedural complaint: that the system is rigged, but (ironically?) in favour of Republicans. In fact, this is manifestly true: liberals (Democrats) are more concentrated — mostly in cities — than conservatives (Republicans), who are spread more evenly and dominate in rural areas. And the asymmetry is truer for the sticky ideologies than for the fungible party affiliations, especially when “liberal” encompasses a whole raft of social issues rather than just left-wing economics. This has been exacerbated by a few decades of gerrymandering. So the House of Representatives, in particular, tilts Republican most of the time. And the Senate, with its non-proportional representation of two per state, regardless of size, favours those spread-out Republicans, too (although party dominance of the Senate is less of a stranglehold for the Republicans than that of the House).

But one further complaint that I’ve heard several times is that the Electoral College is rigged, above and beyond those reasons for Republican dominance of the House and Senate: as we know, Clinton has won the popular vote, by more than 1.5 million as of this writing — in fact, my own California absentee ballot has yet to be counted. The usual argument goes like this: the number of electoral votes allocated to a state is the sum of the number of members of Congress (proportional to the population) and the number of senators (two), giving a total of five hundred and thirty-eight. For the most populous states, the addition of two electoral votes doesn’t make much of a difference. New Jersey, for example, has 12 representatives and 14 electoral votes, about a 15% difference; for California it’s only about 4%. But the least populous states (North and South Dakota, Montana, Wyoming, Alaska) have only one congressperson each, but three electoral votes, tripling their share relative to population. In a Presidential election, the power of a Wyoming voter is more than three times that of a Californian.
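
That last factor is easy to check. A quick sketch, using 2010 census populations (quoted from memory, so treat them as approximate):

    # Electoral votes per person, Wyoming vs California (2010 census).
    wyoming_pop, wyoming_ev = 563626, 3
    california_pop, california_ev = 37253956, 55

    wyoming_power = wyoming_ev / wyoming_pop
    california_power = california_ev / california_pop

    print(wyoming_power / california_power)   # about 3.6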

This is all true, too. But it isn’t why Trump won the election. If you changed the Electoral College to allocate votes equal to the number of congressional representatives alone (i.e., subtract two from each state), Trump would have won 245 to 191 (compared to the real result of 306 to 232).¹ As a further check, since even the representative count is slightly skewed in favour of small states (since even the least populous state has at least one), I did another version where the electoral vote allocation is exactly proportional to the 2010 census numbers, but it gives the same result. (Contact me if you would like to see the numbers I use.)
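
The recount itself is only a few lines. A sketch of the calculation, with only a handful of real 2016 entries shown rather than the full table of fifty states plus DC:

    # Re-tally the Electoral College with the two "Senatorial" votes
    # removed from each state. Only a few (real) entries are shown;
    # fill in the rest to reproduce the 245-191 figure.
    results = {
        # state: (electoral votes, 2016 winner)
        "California": (55, "Clinton"),
        "Texas": (38, "Trump"),
        "New York": (29, "Clinton"),
        "Wyoming": (3, "Trump"),
        # ... all the other states (and DC) go here
    }

    totals = {}
    for state, (ev, winner) in results.items():
        totals[winner] = totals.get(winner, 0) + (ev - 2)

    print(totals)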

Is the problem (I admit I am very narrowly defining “problem” as “responsible for Trump’s election”, not the more general one of fairness!), therefore, not the skew in vote allocation, but instead the winner-take-all results in each state? Maine and Nebraska already allocate their two “Senatorial” electoral votes to the statewide winner, and one vote for the winner of each congressional district, and there have been proposals to expand this nationally. Again, this wouldn’t solve the “problem”. Although I haven’t crunched the numbers myself, it appears that ticket-splitting (voting different parties for President and Congress) is relatively low. Since the Republicans retained control of Congress, their electoral votes under this system would be similar to their congressional majority of 239 to 194 (there are a few results outstanding), and would only get worse if we retain the two Senatorial votes per state. Indeed, with this system, Romney would have won in 2012.

So the “problem” really does go back to the very different geographical distribution of Democrats and Republicans. Almost any system which segregates electoral votes by location (especially if subjected to gerrymandering) will favour the more widely dispersed party. So perhaps the solution is just to use nationwide popular voting for Presidential elections. This would also eliminate the importance of a small number of swing states and therefore require more national campaigning. (It could be enacted by a Constitutional amendment, or a scheme like the National Popular Vote Interstate Compact.) Alas, it ain’t gonna happen.


  1. I have assumed Trump wins Michigan, and I have allocated all of Maine to Clinton and all of Nebraska to Trump; see below.

The Sick Rose


[Image: “The Sick Rose”, Songs of Innocence and of Experience, page 39, Fitzwilliam copy]

O Rose thou art sick.
The invisible worm,
That flies in the night
In the howling storm:

Has found out thy bed
Of crimson joy:
And his dark secret love
Does thy life destroy.

—William Blake, Songs of Experience

Wussy (Best Band in America?)

It’s been a year since the last entry here. So I could blog about the end of Planck, the first observation of gravitational waves, fatherhood, or the horror (comedy?) of the US Presidential election. Instead, it’s going to be rock ’n’ roll, though I don’t know if that’s because it’s too important, or not important enough.

It started last year when I came across Christgau’s A+ review of Wussy’s Attica and the mentions of Sonic Youth, Nirvana and Television seemed compelling enough to make it worth a try (paid for before listening even in the streaming age). He was right. I was a few years late (they’ve been around since 2005), but the songs and the sound hit me immediately. Attica was the best new record I’d heard in a long time, grabbing me from the first moment, “when the kick of the drum lined up with the beat of [my] heart”, in the words of their own description of the feeling of first listening to The Who’s “Baba O’Riley”. Three guitars, bass, and a drum, over beautiful screams from co-songwriters Lisa Walker and Chuck Cleaver.

[Image: Wussy]

And they just released a new record, Forever Sounds, reviewed in Spin Magazine just before its release:

To certain fans of Lucinda Williams, Crazy Horse, Mekons and R.E.M., Wussy became the best band in America almost instantaneously…

Indeed, that list nailed my musical obsessions with an almost google-like creepiness. Guitars, soul, maybe even some politics. Wussy makes me feel almost like the Replacements did in 1985.


So I was ecstatic when I found out that Wussy was touring the UK, and their London date was at the great but tiny Windmill in Brixton, one of the two or three venues within walking distance of my flat (where I had once seen one of the other obsessions from that list, The Mekons). I only learned about the gig a couple of days before, but tickets were not hard to get: the place only holds about 150 people, but there were far fewer on hand that night — perhaps because Wussy had also played the night before as part of the Walpurgis Nacht festival. But I wanted to see a full set, and this night they were scheduled to play the entire new Forever Sounds record. I admit I was slightly apprehensive — it’s only a few weeks old and I’d only listened a few times.

But from the first note (and after a good set from the third opener, Slowgun) I realised that the new record had already wormed its way into my mind — a bit more atmospheric, less song-oriented, than Attica, but now, obviously, as good or nearly so. After the 40 or so minutes of songs from the album, they played a few more from the back catalog, and that was it (this being London, even after the age of “closing time”, most clubs in residential neighbourhoods have to stop the music pretty early). Though I admit I was hoping for, say, a cover of “I Could Never Take the Place of Your Man”, it was still a great, sloppy, loud show, with enough of us in the audience to shout and cheer (but probably not enough to make very much cash for the band, so I was happy to buy my first band t-shirt since, yes, a Mekons shirt from one of their tours about 20 years ago…). I did get a chance to thank a couple of the band members for indeed being the “best band in America” (albeit in London). I also asked whether they could come back for an acoustic show some time soon, so I wouldn’t have to tear myself away from my family and instead could bring my (currently) seven-month-old baby to see them some day soon.

They did say UK tours might be a more regular occurrence, and you can follow their progress on the Wussy Road Blog. You should just buy their records, support great music.

In an occasionally thoughtful but mostly silly attempted takedown of the so-called New Atheists (Dawkins, Dennett, Harris and such), philosopher John Gray writes that

there is an irresolvable contradiction between viewing religion naturalistically — as a human adaptation to living in the world — and condemning it as a tissue of error and illusion.

-John Gray, What Scares the New Atheists

No, there’s not.

There are lots of human adaptations that are useless or outmoded. Racism, sexism, and other forms of bigotry have at least some naturalistic explanation in terms of evolution, but we certainly ought to condemn them despite this history. This is of a piece with what I understand to be Gray’s general opposition to a sort of Whiggish belief in progress and humanism. But Gray’s argument seems to be another, somewhat disguised and inverted, attempt to derive “ought” from “is”: we are certainly the product of biological and cultural evolution but that doesn’t give us any insight into how we should run the society in which we find ourselves (even though our society is the product of that evolution).

Oscillators, Integrals, and Bugs

[Update: The bug seems fixed in the latest version, 10.0.2.]

I am in my third year teaching a course in Quantum Mechanics, and we spend a lot of time working with a very simple system known as the harmonic oscillator — the physics of a pendulum, or a spring. In fact, the simple harmonic oscillator (SHO) is ubiquitous in almost all of physics, because we can often represent the behaviour of some system as approximately the motion of an SHO, with some corrections that we can calculate using a technique called perturbation theory.

It turns out that in order to describe the state of a quantum SHO, we need to work with the Gaussian function, essentially the combination exp(-y²/2), multiplied by another set of functions called Hermite polynomials. These latter functions are just, as the name says, polynomials, which means that they are sums of terms like ayⁿ, where a is some constant and n is 0, 1, 2, 3, … Now, one of the properties of the Gaussian function is that it dives to zero really fast as y gets far from zero, so fast that multiplying by any polynomial still goes to zero quickly. This, in turn, means that we can integrate polynomials, or products of polynomials (which are just other, more complicated polynomials), multiplied by our Gaussian, and get nice (not infinite) answers.

Unfortunately, Wolfram Research’s Mathematica (the most recent version, 10.0.1) disagrees:

[Image: Mathematica notebook showing the incorrect result for a Gauss-Hermite integral]

The details depend on exactly which Hermite polynomials I pick — 7 and 16 fail, as shown, but some combinations give the correct answer, which is in fact zero unless the two numbers differ by just one. In fact, if you force Mathematica to split the calculation into separate integrals for each term, and add them up at the end, you get the right answer.
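
I can’t rerun the Mathematica session here, but a numerical version in Python of what I take to be the analogous integral behaves as expected; the integrand below includes a factor of y, which is my reading of why the answer should vanish unless the indices differ by one:

    import numpy as np
    from numpy.polynomial.hermite import Hermite
    from scipy.integrate import quad

    def matrix_element(m, n):
        """Integrate H_m(y) * y * H_n(y) * exp(-y**2) over the real line."""
        Hm, Hn = Hermite.basis(m), Hermite.basis(n)
        integrand = lambda y: Hm(y) * y * Hn(y) * np.exp(-y**2)
        return quad(integrand, -np.inf, np.inf)[0]

    print(matrix_element(7, 16))   # ~0, up to numerical noise
    print(matrix_element(7, 8))    # nonzero, since the indices differ by 1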

I’ve tried to report this to Wolfram, but haven’t heard back yet. Has anyone else experienced this?

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it”, but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…), so we’ll see what a proper physicist can add to the discussion.

At 8pm, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.