Tuesday, December 26, 2006

The problem of induction

[Even more fun than the problem of evil.]
The problem of induction is the philosophical question of what place induction has in determining empirical truth: in short, whether, and why, inductive reasoning works. That is, what is the justification for either:
  1. generalizing about the properties of a class of objects based on some number of observations of particular instances of that class (for example, "All swans we have seen are white, and therefore all swans are white", Hume's 18th-century example, framed before the black swan, Cygnus atratus, was discovered in Australia); or
  2. presupposing that a sequence of events in the future will occur as it always has in the past (for example, the attractive force described by Isaac Newton's law of universal gravitation, or Albert Einstein's revision in general relativity).

Francis Bacon, Isaac Newton, and numerous others up until at least the late 19th century considered inductive reasoning the basis of scientific method, and indeed inductive reasoning is used today, though in a more balanced interaction with deductive and abductive reasoning. On the inductive approach to scientific method, one makes a series of observations and forms a universal generalization. If correct, and stated in a sufficiently precise way, such a generalization relieves others of the need to repeat the observations and lets them instead use it to predict what will happen in specific circumstances. So, for instance, from any series of observations that water freezes at 0°C at sea level, it is valid to infer that the next sample of water will do the same -- but only if induction works. That such a prediction comes true when tried merely adds to the series; it does not establish the reliability of induction, except inductively. The problem, then, is what justification there can be for making such an inference.

David Hume framed the problem in An Enquiry Concerning Human Understanding, §4.1.20-27, §4.2.28-33.[1] Among his arguments, Hume asserted that there is no logical necessity that the future will resemble the past. Justifying induction on the grounds that it has worked in the past therefore begs the question: it uses inductive reasoning to justify induction, and as such is a circular argument. This formulation of the problem would prove a tenacious counterargument to the use of inductive propositions. Further, even the largest series of observations consistent with a universal generalization can be logically negated by a single observation in which it is false. By Hume's arguments, there is likewise no strictly logical basis for belief in the Principle of the Uniformity of Nature.

Notably, rather than an unproductive radical skepticism about everything, Hume advocated a practical skepticism based on common sense, in which the inevitability of induction is accepted (but not explained). He noted that someone who insisted on sound deductive justification for everything would starve to death: such a person could not assume, for example, that past observations of when to plant seeds, of who has bread for sale, or even of bread having nourished them before, would continue to hold true. Hume nonetheless left a lasting legacy by showing that no induction carries absolute certainty, even those for which a contrary instance has never been observed. Bertrand Russell elaborated and confirmed Hume's analysis in his 1912 work The Problems of Philosophy, chapter 6.[2] (see also: logical positivism)

Karl Popper, an influential philosopher of science, sought to resolve the problem in the context of the scientific method, in part by arguing that science does not rely primarily on induction but on deduction, in effect making modus tollens the centerpiece of his theory. On this account, when assessing a theory, one should pay greater heed to data that disagree with the theory than to data that agree with it. Popper went further and held that a hypothesis which does not allow for experimental tests of falsity lies outside the bounds of science. However, critics of Popper's approach, such as the utilitarian and animal rights advocate Peter Singer, argue that Popper merely obscures the role induction plays in science by concealing it in the step of falsification. That is, the claim that something has been falsified is itself a scientific proposition that can only be taken as definitive through induction: no matter how many times a proposition is demonstrated to be accurate, strict logic does not guarantee that it will remain accurate under the same circumstances. For this reason, among others, contemporary scientific research tends to treat hypotheses and theories as tentative, assessed in terms of degrees of confidence rather than as true/false propositions.

Nelson Goodman presented a different description of the problem of induction in "The New Riddle of Induction" (in Fact, Fiction, and Forecast). Goodman proposed a new predicate, "grue": an object is grue if it is observed before a given time t and is green, or is not so observed and is blue. The "new" problem of induction is this: since all emeralds we have ever seen are both green and grue, why do we suppose that after time t we will find green but not grue emeralds? The standard scientific response is to invoke Occam's razor.
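The point of the riddle is that the same body of evidence confirms both hypotheses equally. A minimal sketch in Python, with a hypothetical cutoff time t and a toy observation log, makes the symmetry explicit:

```python
from datetime import datetime

# Hypothetical cutoff time t for the "grue" predicate.
T = datetime(2030, 1, 1)

def is_grue(color: str, observed_at: datetime) -> bool:
    """Goodman's 'grue': observed before t and green, or not so observed and blue."""
    if observed_at < T:
        return color == "green"
    return color == "blue"

# Every emerald observed so far (all before t, all green) ...
observations = [("green", datetime(2006, 12, 26))] * 5

# ... is consistent with BOTH inductive hypotheses at once.
all_green = all(color == "green" for color, _ in observations)
all_grue = all(is_grue(color, when) for color, when in observations)
```

Here `all_green` and `all_grue` both come out true: no observation made before t can distinguish "all emeralds are green" from "all emeralds are grue", which is exactly Goodman's puzzle.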


Sunday, December 17, 2006

William Rawls

[Where has this site been? Down in the hole -- lost to the latest season of "The Wire." And now that we're all caught up, a week-long wiki-review of the show's most interesting characters seems in order. So here goes, fictional characters. Let's start with the one with the biggest secret: (WARNING: SPOILERS FOLLOW.)]
William Rawls is a fictional police officer in the Baltimore Police Department, played by John Doman on the HBO drama The Wire. Over the course of the series he has risen to the rank of Deputy Commissioner of Operations. Only brief glimpses of his personal life have been seen, but it has been strongly implied that he is a closeted homosexual....

Nothing has been shown of Rawls's personal life, with one exception: he appeared, out of uniform, in the background in a scene which took place in a gay bar.[2]

....Rawls's lack of a sense of humour and his distinctive technique for intimidating others are based on the real Baltimore CID commander Joe Cooke, although Rawls is far more banal. Simon has also commented that Rawls's attitude toward the murder rate and his unit's clearance record is a product of the extreme pressure he is under.[4]


Wednesday, November 29, 2006

Gödel's incompleteness theorems

[Spun off from Monday's toughie, first-order logic. This one's just on the border of comprehensibility, but kind of in the way that Reykjavik borders New York.]
In mathematical logic, Gödel's incompleteness theorems are two celebrated theorems about the limitations of formal systems, proved by Kurt Gödel in 1931. These theorems show that there is no complete, consistent formal system that correctly describes the natural numbers, and that no sufficiently strong system describing the natural numbers can prove its own consistency....

Gödel's theorems are theorems in first-order logic, and must ultimately be understood in that context. In formal logic, both mathematical statements and proofs are written in a symbolic language, one where we can mechanically check the validity of proofs so that there can be no doubt that a theorem follows from our starting list of axioms. In theory, such a proof can be checked by a computer, and in fact there are computer programs that will check the validity of proofs. (Automatic proof verification is closely related to automated theorem proving, though proving and checking the proof are usually different tasks.)

To be able to perform this process, we need to know what our axioms are. We could start with a finite set of axioms, such as in Euclidean geometry, or more generally we could allow an infinite list of axioms, with the requirement that we can mechanically check, for any given statement, whether or not it is an axiom from that set (an axiom schema). In computer science, this is known as having a recursive set of axioms. While an infinite list of axioms may sound strange, this is exactly what is used in the usual axioms for the natural numbers, the Peano axioms: the induction axiom is in fact an axiom schema. It states that if zero has a given property, and whenever any natural number has that property its successor also has it, then all natural numbers have that property. It does not specify which property, and the only way to say in first-order logic that this holds for all properties is to have infinitely many statements, one per property.
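That an axiom schema yields a mechanically recognizable infinite list can be sketched concretely. The function below (a toy illustration; the string template convention is my own) generates the induction axiom instance for any given property φ(x):

```python
def induction_instance(phi: str) -> str:
    """Return the first-order induction axiom for a property phi(x),
    given as a string template with '{x}' marking the free variable."""
    base = phi.format(x="0")                                      # phi(0)
    step = f"∀n ({phi.format(x='n')} → {phi.format(x='S(n)')})"   # phi(n) implies phi(S(n))
    concl = f"∀n {phi.format(x='n')}"                             # all n have the property
    return f"({base} ∧ {step}) → {concl}"

# One instance of the schema, for the sample property "x + 0 = x":
ax = induction_instance("{x} + 0 = {x}")
```

Each choice of property produces one axiom, so the schema stands for infinitely many axioms; yet a computer can check membership in the list simply by pattern-matching against this shape, which is what "recursive set of axioms" means here.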

Gödel's first incompleteness theorem shows that any such system that allows you to define the natural numbers is necessarily incomplete: it contains statements that can neither be proved nor disproved within the system.

The existence of an incomplete system is in itself not particularly surprising. For example, if you take Euclidean geometry and you drop the parallel postulate, you get an incomplete system (in the sense that the system does not contain all the true statements about Euclidean space). A system can be incomplete simply because you haven't discovered all the necessary axioms.

What Gödel showed is that in most cases, such as in number theory or real analysis, you can never create a complete and consistent finite list of axioms, or even an infinite list that can be produced by a computer program. Each time you add a statement as an axiom, there will always be other true statements that still cannot be proved, even with the new axiom. Furthermore, by the second theorem, if such a system can prove its own consistency, then it is inconsistent.

It is possible to have a complete and consistent list of axioms that cannot be produced by a computer program (that is, the list is not computably enumerable). For example, one might take all true statements about the natural numbers to be axioms (and no false statements). But then there is no mechanical way to decide, given a statement about the natural numbers, whether it is an axiom or not.

Gödel's theorem has another interpretation in the language of computer science. In first-order logic, theorems are computably enumerable: you can write a computer program that will eventually generate any valid proof. You can ask if they have the stronger property of being recursive: can you write a computer program to definitively determine if a statement is true or false? Gödel's theorem says that in general you cannot.
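The distinction between "computably enumerable" and "recursive" can be sketched with a toy formal system. In the sketch below (the axioms are hypothetical, chosen only for illustration), theorems are generated by repeatedly applying modus ponens; a theorem eventually shows up in the enumeration, but a non-theorem is never positively rejected, and Gödel's result is that this asymmetry cannot in general be eliminated:

```python
# A toy formal system: three hypothetical axioms plus one inference rule
# (from "A" and "A→B", infer "B").
AXIOMS = {"p", "p→q", "q→r"}

def derive(known: set[str]) -> set[str]:
    """Apply modus ponens once to everything currently derived."""
    new = set(known)
    for f in known:
        if "→" in f:
            left, right = f.split("→", 1)
            if left in known:
                new.add(right)
    return new

def enumerate_theorems(rounds: int) -> set[str]:
    """Semi-decision procedure: keep generating consequences.
    A theorem eventually appears; a non-theorem never triggers a 'no'."""
    known = set(AXIOMS)
    for _ in range(rounds):
        known = derive(known)
    return known

theorems = enumerate_theorems(5)
# "r" is derivable: p, p→q yield q; then q, q→r yield r.
```

The enumeration side is easy, as the code shows; what cannot exist in general is the stronger procedure that, for an arbitrary statement, halts with a definitive yes-or-no answer.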

Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's program toward a universal mathematical formalism, which was based on Principia Mathematica. The generally agreed-upon stance is that the second theorem is what specifically dealt this blow, though some believe it was the first, and others believe that neither did....


Tuesday, November 28, 2006

Lift (force)


[This is either a bad article, or a hard subject. You decide.]
The lift force, lifting force or simply lift is the sum of all the fluid dynamic forces on a body perpendicular to the direction of the external flow approaching that body.

Sometimes the term dynamic lift (dynamic lifting force) is used for the vertical force resulting from the relative motion of the body and the fluid, as opposed to the static lifting force resulting from buoyancy.

The most straightforward and frequently mentioned application of lift is the wing of an aircraft. However, there are many other common, if less obvious, uses, such as propellers on both aircraft and boats, rotors on helicopters, fan blades, sails on sailboats and even some kinds of wind turbines.

While the common meaning of the term "lift" suggests an "upwards" action, the direction of lift (and its definition) does not actually depend on the notions of "up" and "down", e.g., as defined with respect to the direction of gravity. Specifically, the term negative lift refers to a lift force directed "down".
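The excerpt stays qualitative, but the standard lift equation, L = ½ ρ v² S C_L, relates the quantities involved. A minimal sketch (the sample numbers are hypothetical, roughly a small aircraft wing at sea level):

```python
def lift_force(rho: float, v: float, area: float, cl: float) -> float:
    """Standard lift equation: L = 0.5 * rho * v**2 * S * C_L.
    rho:  fluid density (kg/m^3)
    v:    flow speed relative to the body (m/s)
    area: reference (wing) area (m^2)
    cl:   dimensionless lift coefficient of the body"""
    return 0.5 * rho * v**2 * area * cl

# Illustrative values: sea-level air density, 60 m/s, 16 m^2 wing, C_L = 0.5.
L = lift_force(rho=1.225, v=60.0, area=16.0, cl=0.5)
```

Note that a negative lift coefficient simply produces a negative L, matching the remark above that "negative lift" is lift directed "down", not a different kind of force.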

There are a number of ways of explaining the production of lift, all of which are equivalent. That is, they are different expressions of the same underlying physical principles....


Monday, November 27, 2006

First-order logic

[A theme week? Well, why not. Let's start with extremely difficult subjects, things that make your head spin. So here's #1:]
First-order logic (FOL), also known as first-order predicate calculus (FOPC), is a system of deduction extending propositional logic (equivalently, sentential logic). It is in turn extended by second-order logic.

The atomic sentences of first-order logic have the form P(t1, ..., tn) (a predicate applied to one or more "arguments") rather than being propositional letters as in propositional logic; in practice this is often written without the parentheses or commas.

The new ingredient of first-order logic not found in propositional logic is quantification: where φ is any sentence, the new constructions ∀x φ and ∃x φ -- read "for all x, φ" and "for some x, φ" -- are introduced. To explain our intentions, write φ as φ(x) and let φ(a) denote the result of replacing all (free) occurrences of x in φ(x) with a. Then ∀x φ(x) means that φ(a) is true for every value of a, and ∃x φ(x) means that there is some a for which φ(a) is true. Values of the variables are taken from an understood universe of discourse; a refinement of first-order logic allows variables ranging over different kinds of objects.
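Over a finite universe of discourse, this semantics can be sketched directly, since ∀ and ∃ reduce to finite checks (a toy illustration only; the universe and predicate below are made up, and in general the universe may be infinite, where quantification is not a finite computation):

```python
# Evaluating ∀x φ(x) and ∃x φ(x) over a finite universe of discourse.
universe = range(10)

def phi(x: int) -> bool:
    """A sample predicate φ(x): 'x is even'."""
    return x % 2 == 0

forall_phi = all(phi(a) for a in universe)   # ∀x φ(x): every a satisfies φ
exists_phi = any(phi(a) for a in universe)   # ∃x φ(x): some a satisfies φ
```

With this universe, ∃x φ(x) holds (0 is even) while ∀x φ(x) fails (1 is not), illustrating that the two quantifiers make genuinely different claims about the same predicate.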

First-order logic has sufficient expressive power for the formalization of virtually all of mathematics. A first-order theory consists of a set of axioms (usually finite or recursively enumerable) and the statements deducible from them. The usual set theory ZFC is an example of a first-order theory, and it is generally accepted that all of classical mathematics can be formalized in ZFC. There are other theories that are commonly formalized independently in first-order logic (though they do admit implementation in set theory) such as Peano arithmetic....


Friday, November 24, 2006

The tryptophan turkey

[From non-Wik haunts: Feeling sleepy? Don't look to the bird.]
"Turkey does contain tryptophan, an amino acid which is a natural sedative. But tryptophan doesn't act on the brain unless it is taken on an empty stomach with no protein present, and the amount gobbled even during a holiday feast is generally too small to have an appreciable effect. That lazy, lethargic feeling so many are overcome by at the conclusion of a festive season meal is most likely due to the combination of drinking alcohol and overeating a carbohydrate-rich repast, as well as some other factors...."