The Quantum Strawman

After presenting an argument, I sometimes get the response: “You claim X, but X is absurd for the following reasons. Therefore you are an ignoramus.”

The form of this counter-argument is valid, but it usually misfires because I never claimed anything remotely similar to X. For example, I have been accused of rejecting quantum mechanics, and therefore of being entirely ignorant of modern physics. But here is an excerpt from what I actually wrote:

“Quantum mechanics has its origins in a series of discoveries made during the late 19th and 20th centuries . . . A close look at this early history reveals that the mathematics of quantum theory was developed in an admirably logical way; it was guided by experiment, by the conservation-of-energy principle, and by the requirement that the theory reduce to Newtonian mechanics in the macroscopic limit.

“As a mathematical formalism, quantum theory has been enormously successful. It makes quantitative predictions of impressive accuracy for a vast range of phenomena, providing the basis for modern chemistry, condensed matter physics, nuclear physics, and optics. It also made possible some of the greatest technological innovations of the 20th century, including computers and lasers.”

Is that a rejection? A woman who rejected a man in that manner would probably wake up with him in the morning.

Are my critics delusional? Well, not exactly. They know that I have rejected something; they just can’t be bothered to correctly identify it. In fact, I reject Niels Bohr’s interpretation of quantum theory. Bohr said “there is no quantum world,” whereas I say there is one. So which of us is the real champion of quantum physics?

Of course, it’s much easier to dismiss me as a crackpot than to defend Bohr’s avant-garde subjectivism. It’s easier — but it’s also cowardly and evasive.

 

Last Thoughts of an Old One

A teacher gave the following assignment to his students: Imagine, he said, that you are an old Native American who is dying, and you want to give your tribe some final advice about how to deal with the evil white man. Here is the best response:

“There is much to learn from the Pale Face.

“I have heard of one called ‘Shakes Spear,’ who, despite his frightening name, has told great tales of love, courage, triumph, and tragedy. There is another with an even more ferocious name–‘I Sack New Town’–who understood everything from the rise and fall of the sea, to the colors of the rainbow, to the movements of the moon and wandering stars. And there are many more, including one who learned to tame the power of lightning, and another who harnessed the magic of the strange attracting rocks.

“We are afraid that it may be a bad omen when the moon covers the sun, but the Pale Face knows beforehand exactly when such events will happen. We ride horses, but he makes the great Iron Horses that ride on rails with the power of a thousand horses. We are confined to this land, but the Pale Face builds great ships that sail around the whole world.

“We should live in peace with these admirable people and try to learn their wisdom. And we should hope that we have something of value to offer in return.”

Carolina in My Mind

Prof. Mark Wilson teaches a course on scientific method at Western Carolina University. The Logical Leap is required reading for the course, and he invited me to visit and give a couple of guest lectures. It was a thoroughly enjoyable experience.

The students were very interested in the subject and eager to hear me explain the differences between my views and those of other philosophers they had read. In the second lecture, we were joined by future high school science teachers who were taking a class on science education. This gave me the opportunity to explain why I think science students must learn the material in essentially the same way that scientists learn it — that is, they need to learn it inductively, grasping the steps of the discovery process.

One student raised a question that made me feel empathy for these future teachers. In effect, he asked: “Teaching the discovery process sounds like a fun way for students to really learn the material rather than simply memorize it. But is this approach consistent with the national standardized tests the students must pass?” I had to admit that a proper inductive curriculum includes a significant amount of material that is not covered on the state tests, and it omits some material that is covered on those tests. And today there is a great deal of pressure on teachers to “teach to the test” — even if they think the test is deeply flawed.

Imagine a high school science teacher who has a genuine passion for opening young minds to the wonders of the natural world. In today’s education system, this teacher is told not to think about the best way of accomplishing this goal; that decision is made by committees of government-appointed “experts.” The teacher is supposed to teach by rote, while the students learn by rote. The independent thought of both teachers and students is extinguished in the classroom by state mandate. Is it a mystery why so many teachers burn out and so many students drop out?

Fortunately, the internet can provide alternatives to the standard curriculum designed by government committees. In the future, we can hope that the popularity of an inductive curriculum creates pressure to change the standardized tests. We can also hope that the mind-numbing power of such tests is diminished by growth of the private education sector.

 

When and How Do People Learn to Think?

Ignatius Loyola, founder of the Jesuit order in the sixteenth century, once said: “Give me a child until the age of seven and I will give you the man.”

He exaggerated, but he wasn’t wrong. If the age is changed to seventeen, then–in a certain sense–I would agree. By that age, an individual has automatized a particular method of thinking (or of not thinking), a way that his mind habitually deals with its content. Ayn Rand called it “psycho-epistemology,” and it is a crucial aspect of who we are. It isn’t set in stone by high school graduation, but it’s difficult and rare for adults to make major improvements in their psycho-epistemology.

Now, how do people learn their basic method of thinking? Typically, they do so by generalizing from countless arguments they heard during their formative years, and not by explicit study of epistemology. If the arguments they accepted in their youth were based on vague ideas and frequent appeals to emotion, then that becomes part of their implicit method. If they accepted many rationalistic arguments that merely deduced connections between floating abstractions, then that becomes part of their method. In any case, most people cannot identify the essentials of their method–it is automatic and implicit. And despite the fact that there are often inconsistencies, there is usually a dominant approach that guides an individual’s thinking.

Finally, what subjects are best suited for developing proper thinking methods in young people? My answer is: Subjects that deal with the external and non-human world, rather than with human consciousness. It is difficult to learn how to think by studying human beings, who are the most complex entities on the planet. The actions of balls rolling down inclined planes, or of colored light refracting through prisms, or of magnets pushing and pulling on other magnets, are much simpler than the actions of people. These topics are complex enough to offer challenges, but straightforward enough that the correct conclusions are uncontroversial. Natural science provides excellent content for developing proper thinking methods in young minds.

This is why Tom VanDamme and I started Falling Apple Science Institute. We want to develop real thinkers by presenting middle and high school students with an inductive science curriculum.

Induction Hanging by a Thread

Here’s another excerpt from my lecture to the scientists at Johns Hopkins Applied Physics Lab:

Let’s start with the philosophers’ description of induction as a giant, illogical leap from a few observed cases to a universal generalization. That does sound like a dubious procedure. How can we possibly justify accepting a conclusion that transcends the evidence in this way? And yet we accept such conclusions all the time; we couldn’t survive if we didn’t. But we don’t want to say that induction is illogical and we do it anyway because it works. That leaves us unable to distinguish science from pseudo-science, or rationality from irrationality. If we say that even the best thought processes are illogical, that’s a disaster.

So how do we respond to the charge that induction, by its nature, is an illogical leap? The best place to start is with actual examples of scientific induction. Let’s consider Newton’s experiment with the thread that was painted half blue and half red. Recall that when he viewed it through a prism, the two halves appeared discontinuous–and then analysis showed that the prism shifted the blue part more than the red part. On this basis, Newton reached a generalization: blue light is refracted by glass at a greater angle than red light.
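Newton’s generalization can be illustrated quantitatively with Snell’s law, since glass refracts blue light more strongly than red because its refractive index is higher at shorter wavelengths. The sketch below uses typical crown-glass indices as illustrative values; they are not Newton’s own measurements, and the 30-degree incidence angle is an arbitrary choice.

```python
import math

def refraction_angle(theta_incident_deg, n_glass, n_air=1.0):
    """Snell's law: n_air * sin(theta1) = n_glass * sin(theta2).
    Returns the refraction angle inside the glass, in degrees."""
    theta1 = math.radians(theta_incident_deg)
    theta2 = math.asin(n_air * math.sin(theta1) / n_glass)
    return math.degrees(theta2)

# Illustrative crown-glass indices: higher for blue than for red (normal dispersion)
n_red, n_blue = 1.513, 1.532
theta_in = 30.0

red = refraction_angle(theta_in, n_red)
blue = refraction_angle(theta_in, n_blue)

# Blue's higher index bends it closer to the normal, i.e. it deviates more
assert blue < red
print(f"red bends by {theta_in - red:.2f} deg, blue by {theta_in - blue:.2f} deg")
```

The difference is small (a fraction of a degree for these values), which is why Newton needed the painted thread and a careful look through the prism to detect it, but it holds for every incidence angle.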

Where is the illogical leap here? Oddly enough, there doesn’t seem to be one. The generalization seems to follow with perfect logical necessity from the observations. In fact, under the circumstances, we would be shocked and disappointed in Newton if he didn’t reach the generalization. And yet the generalization goes well beyond the particular observations–it refers to any and all instances of red light and blue light refracting through glass.

The key here is the concepts themselves. When we form a concept, we do it on the basis of essential similarities–and we make a commitment to treat the particular instances of the concept as interchangeable members of a group. If we don’t make that commitment, then we don’t have the concept. Someone who uses the word “red” but treats every instance of red light as different from every other instance does not have the concept “red.” So, in this sense, inductive generalization is actually inherent in conceptualization.

In this case, the only way to deny the validity of Newton’s generalization is to deny the validity of his concepts. We would have to argue that one or more of his concepts are invalid: the concepts “red,” “blue,” “glass,” and “refraction.” And by invalid I mean that the concept is not a proper integration of similar particulars, but instead a juxtaposition of essentially different particulars.

Probably the best that a skeptic could do is point out that there are different types of glass, so perhaps the generalization is true only for the type of glass that Newton used. But notice that it is still a generalization, and notice that it’s easy to get a prism made out of a different type of glass and see similar results. So the skeptic would strike out with this argument.

Scientific Disagreements–and Philosophic Causes

Philosophy is primarily about method; it’s about the principles that tell us how to discover knowledge. And even a quick look at the history of science shows us that these principles are not obvious. In astronomy, for instance, Ptolemy and Copernicus did not simply disagree in their scientific conclusions about the solar system; they also disagreed in their underlying philosophic ideas about how to develop a theory of the solar system. In essence, Ptolemy thought it was best to settle for a mathematical description of the appearances, whereas Copernicus began the transition to focusing on causal explanations. So what is the goal of science–to describe appearances, or to identify causes? The answer depends on the philosophy you accept.

Similarly, in 17th century physics, Descartes and Newton did not simply disagree in their scientific theories; they strongly disagreed about the basic method of developing such theories. Descartes wanted to deduce physics from axioms, whereas Newton induced his laws from observational evidence. So what is the essential nature of scientific method–is it primarily deductive, or primarily inductive? And what is the role of experiment in science? The answers depend on your theory of knowledge.

Here’s another example: Consider the contrast between Lavoisier, the father of modern chemistry, and the alchemists of the previous era. Lavoisier did not merely reject the scientific conclusions of the alchemists; he rejected their method of concept-formation and he originated a new chemical language, and then he used a quantitative method for establishing causal relationships among his concepts. So how do we form valid concepts, and what is the proper role of mathematics in physical science? Again, your answers to such fundamental questions will depend on the philosophy you accept.

Finally, consider the battle between two late 19th century physicists, Boltzmann and Mach. Boltzmann was the leading advocate of the atomic theory and he used that theory to develop the field of statistical thermodynamics. Mach, on the other hand, was a leading advocate of positivism; he thought that physicists should stick to what they can see, and that the atomic theory was nothing more than speculative metaphysics. So what is the relationship between observation and theory, and how is a theory proven, and are there limits to scientific knowledge? Once again, these are philosophic questions.

Such issues have not gone away with time. There is a great deal of controversy in theoretical physics today, and these basic issues of method are at the heart of the controversy. Some physicists say that string theory is a major triumph that has unified quantum mechanics and relativity theory for the first time. Other physicists argue that string theory is just a mathematical game detached from reality–that it isn’t a theory of everything, but instead a theory of anything. And we’re starting to hear similar criticisms of Big Bang cosmology; if the theory is so flexible that it can explain anything, the critics say, perhaps it actually explains nothing.

How do we decide these issues? How do we know the right method of doing science, and what standards should we use to evaluate scientific ideas? These are some of the questions that I try to answer in my book. . . .

Philosophy is BS (Part 2)

Here’s the second argument against philosophy, as presented in my lecture to the scientists at Johns Hopkins Applied Physics Lab:

Now let’s turn to history and the second argument against philosophy. When scientists have taken philosophy seriously, it is said, the results have not been encouraging. Let’s consider some examples. Kepler, for instance, was influenced by Platonism, which led him to several ideas that turned out to be wrong. Newton was influenced by theological speculations about the alleged connection between God and space, and his idea of “absolute space” turned out to be wrong. In the early 19th century, there was a generation of German scientists who were strongly influenced by Kant, Schelling, and Hegel — and nearly all their ideas were embarrassingly wrong. In the late 19th century, many physicists and chemists were strongly influenced by the philosophy of positivism, which led them to reject the atomic theory of matter. And so on. It’s not difficult to come up with a dozen more examples like these.

You get the point. Scientists are better off ignoring the philosophers; when they take philosophy seriously, it leads them astray.

Since you’re not running for the exits, I assume that you’re either a polite audience or you sense there may be a weakness in these arguments against philosophy. I think there is such a weakness; the arguments fall short of establishing that philosophy is bad for science. Instead, they establish only that bad philosophy is bad for science. In other words, if a scientist accepts a false philosophy and it guides his thinking about science, then he will go off-track. This isn’t a surprising conclusion. My response is: “Of course.”

But what if there is a different kind of philosophy, one based on observation and logic — in other words, one that is arrived at by essentially the same method that it prescribes for the special sciences? Could such a philosophy ask important questions and provide answers that are useful and even indispensable to scientists? Let’s sweep aside Plato’s supernatural world of ideas, and let’s sweep aside the modern skeptics who can’t write the words “reality” and “truth” without putting them in scare quotes. Let’s consider what philosophy really is. . . .

Philosophy is BS (Part 1)

Here’s an argument for the worthlessness of philosophy, excerpted from the colloquium lecture I gave at Johns Hopkins Applied Physics Lab (APL):

Scientists often claim that philosophy is detached from reality by its very nature. Philosophers sit in their ivory towers and concoct theories of knowledge, but they are not in the trenches dealing with the real world and actually discovering knowledge. So taking advice from a philosopher is like having a tennis coach who has never played tennis. You’re out there battling on the court, trying to win a match, and this guy is sitting in his office sending you text messages about his theory of tennis. Under those circumstances, it’s understandable if you feel like suggesting–perhaps in some anatomical detail–exactly where he might put his theories.

Unfortunately, there is some supporting evidence for this view of philosophy. Contemporary philosophers provide Exhibit A. In many cases, their claims about knowledge are so bizarre that nobody with any common sense could take them seriously. For example, the dominant viewpoint in philosophy of science during the past fifty years is called the “sociology of knowledge” school. According to these philosophers, scientific truth is determined by authority and consensus within a social context. Now, APL is a prestigious science institute with recognized authority in our society, so this view does have its advantages. You don’t have to do all those complicated experiments in order to prove your ideas; instead, you can establish whether an idea is true or false simply by getting together and voting. Imagine how easy it would be to achieve your milestones. Of course, if you vote in favor of an idea, it becomes true only for our society; it’s not true for people in different cultures such as Madagascar or Malibu.

In my judgment, most contemporary philosophy is detached from the real world and the real issues people face in their work and personal lives. But this detachment is not unique to our era; it can be traced back to the beginning of philosophy in ancient Greece. Plato, the father of Western philosophy, split reality into two realms: a higher world of abstract ideas, and a lower world of physical appearances. His view gave rise to many false dichotomies that are still around today; in science, you can see the influence of Plato in the often uneasy relationship between theorists and experimentalists. Some theorists seem to think that they have the high ground, and they look down on the lab workers who design experiments and take measurements; on the other hand, some experimentalists tend to think that theorists merely play around with abstractions that have little connection with the perceivable world. So Platonism introduces an element of distrust and hostility into a relationship that should be mutually respectful and beneficial.

Of course, philosophers are the ultimate theorists. They have their heads in the upper stratosphere of abstraction. So they can’t help scientists deal with the real-world problems involved in conducting research. The claim here is that philosophers deal with floating abstractions, which by their nature have little to do with the actual practice of science.

Do Scientists Need Philosophy?

Last year, I was invited to give the Friday afternoon colloquium talk at Johns Hopkins Applied Physics Lab. Given the prestigious history of this colloquium, it was a great honor (past speakers include Nobel laureates in physics). And, as far as I know, it was the first time APL has ever invited someone to speak about philosophy of science.

My talk was titled “Do Scientists Need Philosophy?” It was attended by more than two hundred scientists, and I received a very positive response. I thoroughly enjoyed the entire experience.

Of course, most scientists are skeptical about the value of philosophy. I began by agreeing with that skepticism, rather than fighting it. I offered two arguments for the view that philosophy is just a lot of hot air that scientists should ignore. Then I pointed out the weaknesses in those arguments, and discussed what philosophy really is and why it is indispensable to scientists.

I’ll be posting a few excerpts from the lecture.

Feynman on Scientific Integrity

I’m a fan of Richard Feynman, who was a brilliant physicist and a very entertaining guy. If you haven’t already read Surely You’re Joking, Mr. Feynman, then get it–you’re in for a treat.

The last chapter of the book deals with scientific method. Parts of the chapter are hilarious, despite the fact that Feynman intends his points to be taken very seriously. He emphasizes one aspect of proper method that he calls “scientific integrity.” Here’s his description: “It’s . . . a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid. . . . Details that could throw doubt on your interpretation must be given, if you know them.”

Feynman is calling our attention to a widespread and dangerous fallacy. If one is trying to prove some idea, it’s easy to “cherry-pick” the facts that support the idea. But that is a technique for misleading the unwary, not a method of proof. Instead, we need to make an effort to survey all the relevant facts–especially those that may seem to conflict with the idea.

Feynman also argues that proper scientific method is self-correcting. In other words, if you have the right method and a wrong idea, you will continue to gather evidence and eventually discover the mistake. I make this point in The Logical Leap and give many examples from the history of science.

Unfortunately, Feynman omits the fundamental point: The method must be primarily inductive, not deductive, in order to be self-correcting and compatible with “scientific integrity.” As I explain in my book, the standard deductive method–in other words, the process of making wild guesses and then working backwards from the supposed answer to the observed facts–is invulnerable to counter-evidence and is not self-correcting. Only the inductive method that goes from observations to generalizations has the integrity and efficacy that Feynman claims for science.

Feynman dismissed philosophy of science as a lot of hot air. He famously stated: “Philosophy of science is about as useful to scientists as ornithology is to birds.” Nevertheless, I suspect he would have liked the no-nonsense, fact-based philosophy in The Logical Leap.