For someone whose job title is "professor of philosophy," Carnegie Mellon University's Clark Glymour deals heavily in the nuts and bolts of the material world. Glymour specializes in the philosophy of science, a field that he says asks "the big questions about how scientific inquiry works, why it should succeed, what makes it reliable [and] the best way to carry it out." He's spent decades evaluating procedures for everything from wildfire prediction to the space program.
In his new book, Galileo in Pittsburgh (Harvard University Press), Glymour offers a wide-ranging collection of opinionated personal essays on the foibles of America's educational system, technological hubris, climate change, lawns and more. Meanwhile, the title essay revisits opposition to Herbert Needleman, the pioneering University of Pittsburgh researcher who in the 1980s argued that environmental lead was a greater health risk than previously thought. Glymour uses the episode to illustrate the vagaries of statistical methodology.
Glymour spoke with CP from his home in O'Hara Township, with his and his wife's pet parrot and cockatiel squawking in the background.
People criticized Needleman, but you contend that his findings proved lead more dangerous than even he believed. How so?
He was more right than the [statistical] methodology of the time. He couldn't make his case without in some sense violating that methodology. He got in trouble because people were applying very routinized assessments to his work, and some of those routinized assessments were correct, but the major ones were in fact quite controversial, and [critics] simply hadn't thought them through.
It's surprising to hear that statistical methods are still evolving.
Statistics underwent a huge change between the 1960s and the 1970s because of the introduction of the digital computer. There are still "statistical wars." Some things are fixed in science, but not very much.
You often emphasize the limits of real-world research into determining what causes problems outside labs, in fields from health care to economics.
The problems of cause and effect were really the source of the development of experimental design: How do you arrange and assess an experiment so you could figure out how big an effect treatment has or doesn't have?
But harder problems arise when you can't do full experimental controls. Most of our social and economic problems ... are not amenable to experimental control.
How does that affect what research gets done?
You look for problems where you can [find] support [for causality]. And the problems where you can't just tend to get ignored.
Yet you argue that public "trust" in science is important. Why?
Climate change is a really good example. You've got thousands and thousands of scientists with very different specialties, all connected. One guy's a specialist in aerosols. Another guy is a computer scientist who knows how to program finite element models. You've got all these specialists, including guys who live on an ice floe and watch the ice retreat. They're all writing reports. Undoubtedly a good segment of what's written by these thousands of people is in error.
On the other hand, one thing you can be sure about is they're not in some kind of conspiracy. Because let's face it: Nobody can keep a conspiracy that involves three people, let alone thousands of people distributed all over the world.
Your attitude about climate change fundamentally has to be: Do you trust the [scientific] community or do you not? You don't trust the community to be perfect, you don't trust it to be error-free. You don't even trust it to be right. You trust it to be more likely to be right than alternative voices on the same topic.