
The Logical Leap: Induction In Physics

FrederickSeiler writes "When David Harriman, this book's author, was studying physics at Berkeley, he noticed an interesting contrast: 'In my physics lab course, I learned how to determine the atomic structure of crystals by means of x-ray diffraction and how to identify subatomic particles by analyzing bubble-chamber photographs. In my philosophy of science course, on the other hand, I was taught by a world-renowned professor (Paul Feyerabend) that there is no such thing as scientific method and that physicists have no better claim to knowledge than voodoo priests. I knew little about epistemology [the philosophy of knowledge] at the time, but I could not help noticing that it was the physicists, not the voodoo priests, who had made possible the life-promoting technology we enjoy today.' Harriman noticed the enormous gulf between science as it is successfully practiced and science as it is described by post-Kantian philosophers such as Feyerabend, who are totally unable to explain the spectacular achievements of modern science." Read on for the rest of Frederick's review.
The Logical Leap: Induction In Physics
author: David Harriman
pages: 272
publisher: NAL Trade
rating: 9/10
reviewer: Frederick Seiler
ISBN: 0451230051
summary: Explains how scientists discover the laws of nature
In The Logical Leap: Induction in Physics, Harriman attempts to bridge this gap between philosophy and science by providing a philosophical explanation of how scientists actually discover things. A physicist and physics teacher by trade, Harriman worked with philosopher Leonard Peikoff to understand the process of induction in physics, and this book is the result of their collaboration.

Induction is one of the two types of logical argument; the other type is deduction. First described by Aristotle, deduction covers arguments like the following: (1) All men are mortal. (2) Socrates is a man. (3) Therefore, Socrates is mortal. Deductive arguments start with generalizations ("All men are mortal.") and apply them to specific instances ("Socrates"). Deductive logic is well understood, but it relies on the truth of the generalizations in order to yield true conclusions.
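In standard first-order notation (my rendering, not the book's), the syllogism is:

$$\forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)),\quad \mathrm{Man}(\mathrm{Socrates})\ \vdash\ \mathrm{Mortal}(\mathrm{Socrates})$$

The conclusion is only as sound as the universal premise, which is where induction comes in.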

So how do we make the correct generalizations? This is the subject of the other branch of logic, induction, and it is obviously much more difficult than deduction. How can we ever be justified in reasoning from a limited number of observations to a sweeping statement that refers to an unlimited number of objects? In answering this question, Harriman presents an original theory of induction, and he shows how it is supported by key developments in the history of physics.

The first chapter presents the philosophical foundations of the theory, which builds directly on the theory of concepts developed by Ayn Rand. Unfortunately for the general reader, Harriman assumes familiarity with Rand's theory of knowledge, including her views of concepts as open-ended, knowledge as hierarchical, certainty as contextual, perceptions as self-evident, and arbitrary ideas as invalid. Those unfamiliar with these ideas may find this section confusing. The good news is that such readers can proceed to the following chapters, which flesh out the theory and show how it applies to key developments in the history of physics (and the related fields of astronomy and chemistry). These chapters do a wonderful job of bringing together the physics and the philosophy, clarifying both in the process.

Harriman argues that as concepts form a hierarchy, generalizations form a hierarchy as well; more abstract generalizations rest on simpler, more direct ones, relying ultimately on a rock-solid base of "first-level" generalizations which are directly, perceptually obvious, such as the toddler's grasp of the fact that "pushed balls roll." First-level generalizations are formed from our direct experiences, in which the open-ended nature of concepts leads to generalizations. Higher-level generalizations are formed based on lower-level ones, using Mill's Methods of Agreement and Difference to identify causal connections, while taking into account the entirety of one's context of knowledge.
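As a toy illustration of Mill's Method of Difference (the scenario and code below are mine, not Harriman's): if two cases agree in every circumstance but one and differ in outcome, that lone differing circumstance is the causal candidate.

    # Toy version of Mill's Method of Difference (illustrative only):
    # two cases that agree in every circumstance but one, yet differ in
    # outcome, point to that one differing circumstance as the cause.
    def method_of_difference(case_a, case_b):
        """Each case is a (set_of_circumstances, outcome) pair."""
        circs_a, outcome_a = case_a
        circs_b, outcome_b = case_b
        if outcome_a == outcome_b:
            return None                # no difference in effect to explain
        diff = circs_a ^ circs_b       # circumstances not shared by both cases
        return diff.pop() if len(diff) == 1 else None

    cause = method_of_difference(
        ({"spark", "dry_tinder"}, "fire"),
        ({"dry_tinder"}, "no fire"),
    )
    print(cause)  # -> spark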

Ayn Rand held that because of the hierarchical nature of our knowledge, it is possible to take any valid idea (no matter how advanced), and identify its hierarchical roots, i.e. the more primitive, lower-level ideas on which it rests, tracing these ideas all the way back to directly observable phenomena. Rand used the word "reduction" to refer to this process. In a particularly interesting discussion, Harriman shows how the process of reduction can be applied to the idea that "light travels in straight lines," identifying such earlier ideas as the concept "shadow" and finally the first-level generalization "walls resist hammering hands."

Harriman's discussion of the experimental method starts with a description of Galileo's experiments with pendulums. Galileo initially noticed that the period of a pendulum's swing seems to be the same for different swing amplitudes, so he decided to measure this period accurately to see whether it really is constant. Concluding that it is, he then did further experiments, selectively varying the weight and material of the pendulum's bob and the length of the pendulum. This led him to the discovery that a pendulum's length is proportional to the square of its period. Harriman notes the experiments that Galileo did not perform: 'He saw no need to vary every known property of the pendulum and look for a possible effect on the period. For example, he did not systematically vary the color, temperature, or smell of the pendulum bob; he did not investigate whether it made a difference if the pendulum arm is made of cotton twine or silk thread. Based on everyday observation, he had a vast pre-scientific context of knowledge that was sufficient to eliminate such factors as irrelevant. To call such knowledge "pre-scientific" is not to cast doubt on its objectivity; such lower-level generalizations are acquired by the implicit use of the same methods that the scientist uses deliberately and systematically, and they are equally valid.'

One powerful tool for avoiding nonproductive speculations in science is Ayn Rand's concept of the arbitrary, and Harriman brilliantly clarifies this idea in the section on Newton's optical experiments. An arbitrary idea is one for which there is no evidence; it is an idea put forth based solely on whim or faith. Rand held that an arbitrary idea cannot be valid even as a possibility; in order to say "it is possible," one needs to have evidence (which can consist of either direct observations or reasoning based on observations).
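Returning to the pendulum: the relation Galileo established is, in modern notation (this is the standard small-amplitude formula, not a quotation from the book),

$$T = 2\pi\sqrt{\frac{L}{g}} \quad\Longrightarrow\quad L = \frac{g}{4\pi^{2}}\,T^{2}$$

that is, length proportional to the square of the period, with g the local gravitational acceleration.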

Newton began his research on colors with a wide range of observations, which led him to his famous and brilliant experiments with prisms. Harriman presents the chain of reasoning and experimentation which led Newton to conclude that white light consists of a mixture of all of the colors, which are separated by refraction.
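The modern summary of why refraction separates the colors (standard optics, not the book's wording) is that the refractive index of glass depends on wavelength, so Snell's law bends each color through a different angle:

$$n_{\mathrm{air}}\sin\theta_i = n_{\mathrm{glass}}(\lambda)\,\sin\theta_r$$

Since $n_{\mathrm{glass}}(\lambda)$ is slightly larger for violet light than for red, violet is bent the most, fanning white light out into the spectrum.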

Isaac Newton said that he "framed no hypotheses," and here he was referring to his rejection of the arbitrary. Descartes claimed, without any evidence, that light consists of rotating particles whose speed of rotation determines the color; Robert Hooke claimed, equally without evidence, that white light consists of a symmetrical wave pulse which produces colors when the wave becomes distorted. Both ideas were totally arbitrary, and they deserved to be thrown out without further consideration: "Newton understood that to accept an arbitrary idea even as a mere possibility that merits consideration undercuts all of one's knowledge. It is impossible to establish any truth if one regards as valid the procedure of manufacturing contrary 'possibilities' out of thin air." This rejection of the arbitrary may be expressed in a positive form: scientists should be focused on reality, and only on reality.

After discussing the rise of experimentation in physics, Harriman turns to the Copernican revolution, the astronomical discoveries of Galileo and Kepler, and the grand synthesis of Newton's laws of motion and universal gravitation. But this reviewer found the most historically interesting chapter to be the one on the atomic theory of matter, a cautionary tale about the lack of objective standards for evaluating theories. This story leads Harriman to propose a set of specific criteria of proof for scientific theories.

The final chapter addresses several broader issues, including why mathematics is fundamental to the science of physics, how the science of philosophy differs from physics, and finally, how modern physics has gone down the wrong path due to the lack of a proper theory of induction.

So, with the publication of The Logical Leap, has the age-old "problem of induction" now been solved? On this issue, the reader must judge for himself. What is clear to this reviewer is that Harriman has presented an insightful, thought-provoking, and powerful new theory of how scientists discover the laws of nature.

You can purchase The Logical Leap: Induction In Physics from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.




Comments Filter:
  • Objectivism? (Score:5, Informative)

    by pugugly ( 152978 ) on Monday January 10, 2011 @03:57PM (#34827166)

    I have an inherent distrust of anyone who bases inductive logic on the underpinnings of Ayn Rand's Objectivism, for the simple reason that I've never . . . *ever* . . . heard of Objectivism being contributory to *any* philosophy of logic.

    Quite the opposite in fact, I've seen logicians use her as examples of how people can be fooled by pseudo-logic which hides implicit assumptions under carefully concealed vagueness and frame shifting.

    This smells more like an attempt to rehabilitate Ayn Rand as a genuine philosophical contributor than a book on logic.

    Pug

  • Re:Philosophy... (Score:5, Informative)

    by xednieht ( 1117791 ) on Monday January 10, 2011 @04:04PM (#34827254) Homepage
    "I - personally - find it frustrating that we listen to the naval-staring philosopher, and forget what wisdom is in the same moment."

    I find it frustrating that people can't spell "NAVEL". I have stood next to friends of wisdom....

    Naval - "of or pertaining to warships"
    Navel - "umbilicus"
  • What about Jaynes... (Score:5, Informative)

    by rgbatduke ( 1231380 ) on Monday January 10, 2011 @04:16PM (#34827422) Homepage
    The problem with books like this -- even by physicists -- is that they all too rarely study the right things physicists have done. Induction/inference in epistemology is put on a mathematically sound, axiom-based foundation by Richard Cox and E. T. Jaynes. The former wrote a truly marvellous monograph entitled "The Algebra of Probable Inference" (readily available on Amazon). E. T. Jaynes arrived at a very similar result, following instead from Shannon's Information Theory (which is a consequence of Cox's prior work, although this is not generally recognized), and later enthusiastically adopted Cox's axioms as the basis for his own opus major, "Probability Theory: The Logic of Science". Both are available as a twofer on Amazon (or even as part of a threefer with Sivia's work on Bayesian Analysis).

    They have one enormous redeeming value -- they don't refer to any work on philosophy including any by Ayn Rand. These are serious works on mathematics, logic, probability theory, and science, and they contain algebra, not handwaving. Absolutely amazing algebra, by the way. The sum total of philosophy in Cox is his highly restrained observation that his work seems to have solved Hume's basic problem -- deriving the theory of inference so it is on a sound mathematical footing.

    Two other places where this general topic is reviewed: David MacKay's superb "Information Theory, Pattern Recognition and Neural Networks", where he explores the consequences of Shannon's Theorem in cryptography, data compression, and reliable storage, then moves on to argue quite persuasively that the human brain and neural networks in general function as a Bayesian inference engine; and my own book-in-writing, "Axioms".
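    As a minimal sketch of the Bayesian updating these works axiomatize (my toy example, not from any of the books; the numbers are illustrative):

        # Bayes' theorem read as "probability is extended logic": update a
        # prior belief in hypothesis H after observing evidence E.
        def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
            p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
            return p_e_given_h * prior_h / p_e

        # Illustrative numbers: 1% base rate, 99% sensitivity, 5% false positives.
        posterior = bayes_update(0.01, 0.99, 0.05)
        print(round(posterior, 3))  # 0.167: one positive test is weak evidence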

    rgb
  • Re:Philosophy... (Score:5, Informative)

    by zolltron ( 863074 ) on Monday January 10, 2011 @04:47PM (#34827908)

    While the greek word philosophia literally means "friend of wisdom", the common-day philosopher tends to stare at their naval and wonder if they even exist

    Which "common day" philosophers are you referring to? How much common day philosophy have you read? I think it's fair to say that this problem is near death and has been for a long time. The problem was made famous by Descartes of course, but he's hardly "common day."

    I - personally - find it frustrating that we listen to the naval-staring philosopher, and forget what wisdom is in the same moment.

    I'm happy to hear that you think people listen to philosophers. How many people do you know who spend their time worrying about the problem of existence instead of something else?

    Your attitude about philosophers is common: people take an intro to philosophy course that focuses on rationalist thought of the 17th century and assume they now know the state of the art of philosophy. Somehow people don't realize how stupid this is, even though they wouldn't dare assume they understood contemporary physics after taking physics 101. Philosophy has a very long history of contributing to major scientific breakthroughs. Here are a few examples:

    1. Einstein, throughout his life, credited many philosophers including Hume and Kant with inspiring him to come up with special and general relativity.

    2. Niels Bohr invented his preferred interpretation of quantum mechanics because he was inspired by Kant.

    3. Adam Smith was a "moral philosopher." Before him economics didn't exist.

    4. Psychology wasn't its own discipline until very recently. Before that, it was part of philosophy.

  • Re:Oh my (Score:2, Informative)

    by delphi125 ( 544730 ) on Monday January 10, 2011 @04:57PM (#34828052)

    2+2=4 is indeed a theorem of arithmetic, but it does not preclude it from being an axiom or the only member of a theory.

    Ah, a little knowledge is a dangerous thing. What are these "+", "2", "=" and "4" things?

    Over ℤ₃ (the integers modulo 3), 2+2=1.

    When you learned to count (pre-school), you were actually learning what mathematicians call the successor function. Although the concept of zero was hard to understand, not only in Roman times but even in the early Renaissance, currently the symbol "2" is defined to be the successor of the successor of "0", and "+" is defined as moving an s() from one side to the other until a "0" has been reached on one side, at which point it can be dropped. So "2+2" = s(s(0)) + s(s(0)) = s(s(s(0))) + s(0) = s(s(s(s(0)))) + 0 = "4". IIRC, 0 can be defined as {} (the empty set) and s(x) as x ∪ {x}, or summat like that (not being rigorous, just lazy).
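    A minimal executable rendering of that definition-unwinding (my sketch, not the commenter's; tuples tagged "s" stand in for the successor function):

        # Peano-style naturals: ZERO is (), and s(x) wraps x in a successor tag.
        ZERO = ()

        def s(x):
            return ("s", x)

        def add(a, b):
            # Move one s() from b to a until b is ZERO, exactly as described.
            while b != ZERO:
                a, b = s(a), b[1]
            return a

        TWO = s(s(ZERO))
        FOUR = s(s(s(s(ZERO))))
        assert add(TWO, TWO) == FOUR  # "2+2=4" by unwinding the definitions
        assert (2 + 2) % 3 == 1       # while over Z3, 2+2 is indeed 1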

    Anyway, a theorem of set theory may turn out to be used as an axiom for arithmetic, and that in turn used as an axiom (or given) for, say, calculus. But that doesn't make "2+2=4" a theorem at any sensible level, not even a lemma; rather, it is the definition of the symbols being used.

    It turns out that many of the axioms used in mathematics correspond to our natural understanding at an early level, and that in physics somewhat weird axioms can predict actual results, as in relativity and QM. When counting sheep jumping fences, integer arithmetic is enough. When counting cats in boxes, it isn't.

  • Re:Philosophy... (Score:3, Informative)

    by Somewhat Delirious ( 938752 ) on Monday January 10, 2011 @06:20PM (#34829142)

    You simply take the claims about what defines "scientific method" and the examples used to illustrate those claims, and show that reality did not in fact conform to those definitions and that the claimed successes of that method were in fact made possible only by violating the terms of that definition.
    You are, I think, confusing the scientific method as used by what we tend to call scientists with the definitions of the "scientific method" and the idealized examples used by philosophers of science.

  • Well, actually... (Score:4, Informative)

    by Estanislao Martínez ( 203477 ) on Monday January 10, 2011 @06:50PM (#34829572) Homepage
    Well, actually, Feyerabend does at various points equate science with voodoo and other systems of myth. The thing is, however, that Feyerabend is not doing this to denigrate science, as the comparison to "voodoo" would normally be read. He in fact explicitly condemns the common practice of using "voodoo" as a stand-in for obscurantism and ignorance that can be dismissed out of hand:

    Besides, ancient doctrines and "primitive" myths appear strange and nonsensical only because the information they contain is not known, or is distorted by philologists or anthropologists unfamiliar with the simplest physical, medical or astronomical knowledge. Voodoo, Dr Hesse's pièce de résistance, is a case in point. Nobody knows it, everybody uses it as a paradigm of backwardness and confusion. And yet Voodoo has a firm though still not sufficiently understood material basis, and a study of its manifestations can be used to enrich, and perhaps even to revise, our knowledge of physiology. [Against Method, pp. 35-36]

    Feyerabend thinks that science and myth are very similar and are of comparable worth. (And note I said comparable, not "equal"; the point is that there are arguments about values that can be had in this regard.)

  • Re:oy (Score:5, Informative)

    by fishexe ( 168879 ) on Monday January 10, 2011 @08:46PM (#34830842) Homepage

    Says the guy who has gotten absolutely everything wrong about the economy.

    Yeah, everything except predicting that complex derivatives markets would lead to a subprime collapse and that allowing banks to use our deposits to fund speculation would cause a housing crisis to take down the rest of the economy, which is exactly what happened in 2007-8. Aside from that, he got everything wrong. Also, explaining why countries export the same commodities to one another rather than fully specializing. Man, that dumbass was so wrong, because, of course, Japanese never buy Fords and Americans never buy Toyotas, and Europeans never buy Dodges, and Americans never buy Volkswagens.

    Seriously, his only solution is "spend more", like a bloodletter of old claiming that he could have healed his patient if only the family had let him drain just one more drop of "bad humor" from his system.

    I might buy that if you could show one shred of empirical evidence that spending in time of recession hurts the economy.

