
February 28, 2007

Comments

Thomas

FYI, I studied Quantum Mechanics and Chaos Theory extensively in college, and I'm fairly certain what the implications of the two theories are. The only scientists who believe that these two theories posit a random component of the universe usually have some other ideological agenda/baggage (usually deist). Even Einstein is famously credited with the quote "God does not play dice", and he pioneered Quantum Mechanics. He may never actually have said it, but he was certainly a determinist through and through.

Nato

It should be pointed out that according to the equations, it is impossible *in principle* to measure a combination of momentum and position past a certain point, and it is the hardness of this finding that makes the situation so interesting to a scientist. On the philosophical side there are a number of possible interpretations of this (many either erroneous or largely irrelevant), but it would seem premature to make final verdicts based on mathematical models that don't even pretend to be complete.

Thomas

The main problem with the definition of 'randomness' in the SA article is that it's contingent on the minimal program only using the basic instruction set of the arithmetic logic unit. If, for instance, we create a different ALU, meaning we create a different function-space, then a sequence that was previously 'random' may cease to be designated so. The implication is that there are no strings of digits that are essentially random, because one could always envision a (perhaps unique) function-space that would map a pattern onto any string of digits. For instance, the addition function operates like 2+2=4, meaning it takes two operands, 2 and 2, and produces 4 as the output, but we could envision a function that behaves like so f(2,2) = 1,000,007. Thus any string of digits could have a compressed logical representation depending on the function-space. So to sum up, I don't really like that definition of 'randomness', not mathematically and certainly not philosophically, but I do like a lot of the discussion of randomness and abstraction in the article. If pressed, I would define the 'random' as that which is unpredictable, or at least difficult to predict. This is a pretty common-sense definition and it flows colloquially, such as "random acts of violence were committed".
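Thomas's point that incompressibility is relative to a chosen instruction set can be illustrated with an ordinary compressor standing in for the "minimal program" of the SA article (a Python sketch; `zlib` is just one arbitrary choice of function-space, and a different compressor could rank the same strings differently):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # zlib stands in for the "minimal program" length; swapping in a
    # different compressor amounts to choosing a different function-space.
    return len(zlib.compress(data, level=9))

patterned = b"01" * 500    # a highly regular 1000-byte string
noisy = os.urandom(1000)   # 1000 bytes from the OS entropy source

# The regular string compresses to a fraction of its length, while the
# noisy string stays near (or above) its original size -- "random" here
# just means "incompressible by this particular scheme".
assert compressed_size(patterned) < 50
assert compressed_size(noisy) > 900
```

Under a compressor built around the repeating pattern, of course, the verdicts could be reshuffled, which is exactly the relativity Thomas is objecting to.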

Thomas

The uncertainty principle doesn't claim that "if you measure momentum precisely, then position is random, and if you measure position precisely, then momentum is random". That is a most erroneous conclusion. It even defies common-sense logic. How can a thing have a random position at a given time, especially when you consider that you can measure it precisely at a different time (as long as you aren't measuring momentum as well)? I understand why there's so much confusion on this issue. If you're measuring one thing precisely, then the other thing seems to follow a probabilistic curve, which implies that it is random (since probability is normally used to describe random processes such as coin flips). But it is a fallacy of reason to say that since random events are described with probability and this particle's position is described with probability (when measuring for momentum), then this particle's position must be random. For one thing, the position is clearly not random since you can measure it precisely; you just can't measure both position and momentum precisely at the same time.

Thomas

One other caveat. If you know a particle's position but not its momentum at a particular time, then you can't predict what its future position will be. If you know what a particle's momentum is but not its position, then you can't predict what its future momentum will be. That's why the universe is random at the quantum level for practical purposes, but it does not follow that the universe is necessarily veridically random at the quantum level. Nathanael would probably say that we must be agnostic on this point, but I don't really think that's necessary since logical arguments can be made against the veridical existence of randomness.

Nathan Smith

Hmm. I don't have a strong stake in this argument, but I'm fairly sure that Tom's determinism is not the conventional view. Here's Wikipedia on randomness in the physical sciences:

"According to some standard interpretations of quantum mechanics, microscopic phenomena are objectively random.[citation needed]. That is, in an experiment where all causally relevant parameters are controlled, there will still be some aspects of the outcome which vary randomly. An example of such an experiment is placing a single unstable atom in a controlled environment; it cannot be predicted how long it will take for the atom to decay; only the probability of decay within a given time can be calculated. Thus quantum mechanics does not specify the outcome of individual experiments but only the probabilities. Hidden variable theories attempt to escape the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, unobservable (hidden) properties with a certain statistical distribution are somehow at work, behind the scenes, determining the outcome in each case."

http://en.wikipedia.org/wiki/Randomness#In_the_physical_sciences
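The decay example in the quoted passage can be made concrete: quantum mechanics supplies only a probability law for the atom, never the individual outcome (a Python sketch using the standard exponential-decay formula):

```python
import math

# For an unstable atom with half-life t_half, QM gives only the
# probability of decay within time t, not when this atom will decay:
#   P(decay by t) = 1 - exp(-lam * t),  lam = ln(2) / t_half
def decay_probability(t: float, t_half: float) -> float:
    lam = math.log(2) / t_half
    return 1 - math.exp(-lam * t)

# After exactly one half-life, the decay probability is 1/2.
assert abs(decay_probability(1.0, 1.0) - 0.5) < 1e-12
```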

The Wikipedia article leaves the question undecided, but calls the randomness interpretations "standard," and says that the hidden variable theories "attempt" to explain... Certainly in my Quantum Mechanics class in college, randomness was asserted. I'm pretty sure the professor-- who was a rare non-Mormon at ultra-Mormon BYU, though I can't remember whether he was some other religion, tolerated because in a subject as dry and objective as physics his personal views didn't matter-- wasn't motivated by any "baggage"; he was just teaching the standard line.

It's true that Einstein disagreed, but in this case he was a dissenter rather than a trailblazer. I've read histories of science that even treat the "God does not play dice with the universe" episode as a sort of minor tragedy: the pioneer of the last revolution proves to be a reactionary during the next, that kind of thing.

Tom's right that I would say we have to be agnostic on this point. I'd be sort of interested in the "logical arguments that can be made against the veridical existence of randomness," though I think I have some idea what they would sound like and certainly what their flaws will be: certain premises will be mistaken for *a priori* truths when in reality they are only arbitrary assumptions, and the conclusions will flow from those premises. It's the same error that is made in the "logical" arguments against choice.

Nato

"The main problem with the definition of 'randomness' in the SA article is that it's contingent on the minimal program only using the basic instruction set of the arithmetic logic unit. If, for instance, we create a different ALU, meaning we create a different function-space, then a sequence that was previously 'random' may cease to be designated so. The implication is that there are no strings of digits that are essentially random, because one could always envision a (perhaps unique) function-space that would map a pattern onto any string of digits. For instance, the addition function operates like 2+2=4, meaning it takes two operands, 2 and 2, and produces 4 as the output, but we could envision a function that behaves like so f(2,2) = 1,000,007"

One could envision the function space, but is the choice between f(2,2) that returns 4 and the f(2,2) that returns 1 000 007 arbitrary? I think there are reasons to select the former as basic and reject the latter as rootless. I *think* there are, but I'm not familiar enough with that field to defend the assertion intelligently.

In any case, I take it as given that incompressibility is philosophically defensible.

Thomas

The plain truth is that very few people understand Quantum Mechanics, mostly because it is still very much an incomplete theory. The cutting-edge of theoretical physics is presently occupied with reconciling the incompatibility between Quantum Mechanics and General Relativity. It is a field that I hope to one day contribute to, though my life has taken and may still take unexpected turns away from that ambition.

My argument against randomness is very simple: we can't logically produce a random outcome. In fact, the algorithms we use in computers to simulate randomness must use a physical source to 'seed' the algorithm so that it produces a 'random' outcome. One common way to seed a random number generator is with the current time as kept on a computer's internal clock (the internal clock is usually some sort of crystal that oscillates electrical signals at its terminals). Obviously, if the universe is truly deterministic, then not even using the current time as a seed will produce a truly random value, though it produces a value that is random for most intents and purposes.

That aside is tangential, though, to my main point that it is impossible to logically produce a random outcome. What I mean is that propositional logic is fairly rigid and is in the business of outlining concrete relations and making predictions. Propositional logic does not have the capacity to produce true randomness. The main discrepancy between the metaphysical realm and the physical realm in this regard revolves around the idea of 'causation'. The physical realm appears to travel in one causal time direction, whereas the metaphysical realm is timeless. Without the ideas of time and causation, the idea of randomness is mostly void of content, because at base randomness is used to describe possible future outcomes. We don't say that the past or present is random, because they are both already set in stone. The future is not set in stone, however, and thus it seems to follow a convergence of all possible outcomes describable by probabilistic curves.
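The seeding point is easy to demonstrate: a pseudo-random generator is a pure function of its seed, so all the apparent randomness comes from the physical source, not the algorithm (a Python sketch using the standard library's `random` module):

```python
import random
import time

# Deterministic "randomness": the generator is a pure function of its seed.
seed = int(time.time())      # the physical source: the system clock
rng_a = random.Random(seed)
rng_b = random.Random(seed)

# Two generators given the same seed produce identical "random" streams,
# showing that the algorithm itself contributes no randomness at all.
stream_a = [rng_a.random() for _ in range(5)]
stream_b = [rng_b.random() for _ in range(5)]
assert stream_a == stream_b
```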

To me it's not really conceivable how logic could be used to produce true randomness. Perhaps our understanding of logic is inadequate, but in my opinion that sort of claim is a cop-out and avoids the issue. There is a very good reason why the idea of 'destiny' and 'fate' appeared at such an early stage in human intellectual development, and it's because the alternative is not only counter-intuitive, but it's also beyond the grasp of logic's ability to describe it.

Nathan Smith

"My argument against randomness is very simple: we can't logically produce a random outcome."

Assumed here is a proposition that the set of things that exists is a subset of the things that "we can logically produce." But I don't understand this proposition. Can we "logically produce" the subjective experience of red? What would that mean? Yet surely the subjective experience of red exists. It seems to me that most of the things we experience and believe in are that way: to "logically produce" them seems impossible or meaningless.

Interestingly, although we can't generate true randomness in a computer program, say, we can describe the properties of random variables: the probability of a discrete random variable f taking a value x, the probability of a continuous random variable g taking a value between x and y, the "probability density" of a continuous random variable g at z. This is what statisticians do.
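What such descriptions look like in practice can be sketched briefly (Python; the binomial mass function and normal density below are the standard textbook formulas):

```python
import math

def binomial_pmf(k: int, n: int, p: float) -> float:
    # P(X = k) for a discrete random variable X ~ Binomial(n, p)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(z: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    # Probability *density* at z for a continuous random variable --
    # not itself a probability, but what you integrate to get one.
    return math.exp(-((z - mu) ** 2) / (2 * sigma**2)) / (
        sigma * math.sqrt(2 * math.pi)
    )

# P(exactly 5 heads in 10 fair coin flips) = 252/1024
p5 = binomial_pmf(5, 10, 0.5)
assert abs(p5 - 252 / 1024) < 1e-12
```

Nothing random is ever produced here; the functions only describe what a random process would yield, which is exactly the distinction Nathan is drawing.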

Thomas

To Nato:
I'm merely saying that the definition of 'randomness' in the SA article seems somewhat arbitrary, because a string of digits could be random in one context and not random in another. The theory of randomness in the article posits that randomness is an essential quality of certain strings. Why? Because they're not reducible to an algorithm smaller in size? Frankly, that begs a lot of questions. It's not even a useful distinction as it has no practical application (at least none that I can see).

Nathan Smith

About "destiny" and "fate": it's not clear to me that destiny/fate, as classically conceived, are the same as determinism.

Consider, for example, the famous Greek play *Oedipus Rex*. Oedipus hears a prophecy that he will kill his father and marry his mother. Horrified, he flees his hometown and goes to the far end of Greece. It turns out that he was mistaken about the identity of his own parents, and his actions lead to a series of adventures that culminate in the very outcome that was foretold.

The moral of the story is that it is futile to resist the will of the gods (though one feels that Oedipus still did the right thing by trying). But for the story, or its moral, to make any sense, one has to assume-- as of course the Greek audience would have done, since all prephilosophical people believe in free will, and I suspect the belief is indelible no matter how strongly one is indoctrinated against it-- that Oedipus did have some choice in his actions along the way. Otherwise it becomes absurd to say that it is futile to resist the will of the gods, since to resist resisting it would be just as futile!

Also, since Oedipus's actions were clearly affected by the words of the oracle, and since the chain of events that led to the fulfillment of the oracle were highly contingent and coincidental, it seems clear that either (a) the oracle itself *caused* its prediction to become true, by setting the chain of events in motion, or (b) if Oedipus hadn't spoken to the oracle, the horrible event would have been brought about in some completely different way. In case (a), one is inclined to be very hard on the oracle, which becomes, not merely the herald, but the culprit of the horrors. This seems not to be the message of the play. In case (b), we are obviously not talking about ordinary determinism. Oedipus has free will, but *certain things* are decreed beforehand by some sort of fate, in this case an exceptionally malicious one.

Fate or destiny in this sense is an interesting idea because it offers a fourth potential relational mode between Situation A and subsequent Event B, alongside determinism, randomness, and choice. Under determinism, Situation A determines Event B and everything in between; under randomness, there is a probability distribution of Events B1, B2, B3, etc.; under choice, there are many possible events B1, B2, B3, etc., and a will which chooses among them. Under fate, Event B is inevitable, and Situation A will lead to it, but it could lead to it through many different paths of proximate causation.

Now, I'm not sure that I believe that such a situation-event relationship as fate/destiny actually exists in the world. Probably not. But the notion of it is a useful reminder that neither the three-way set of situation-event relationships we have been discussing-- choice, determinism, and randomness-- nor any narrower set is *logically* exhaustive of the possibilities. Choice, causation, and probably randomness happen to exist; fate probably does not. There could be alternate worlds, or alternate (I think mistaken) interpretations of our own world, in which only causation existed, or only choice and randomness, or only causation, randomness, and fate.

Thomas

"Assumed here is a proposition that the set of things that exists is a subset of the things that "we can logically produce." But I don't understand this proposition. Can we "logically produce" the subjective experience of red? What would that mean? Yet surely the subjective experience of red exists. It seems to me that most of the things we experience and believe in are that way: to "logically produce" them seems impossible or meaningless."

Assumed here is a proposition that the subjective experience of red can't be logically produced. But I don't understand how the subjective experience of red could be illogical or beyond logic. What would that mean? It seems to me that most of the things we experience and believe in are subservient to logic, and to be beyond logic would seem impossible or meaningless.

"Interestingly, although we can't generate true randomness in a computer program, say, we can describe the properties of random variables: the probability of a discrete random variable f taking a value x, the probability of a continuous random variable g taking a value between x and y, the "probability density" of a continuous random variable g at z. This is what statisticians do."

Yes, there seems to be a sort of "wave-particle" duality in the causation of events. If we know all of the particulars, we can logically determine an outcome. If we don't know all of the particulars, we can guess what the likely outcome(s) will be using probability and statistics. The uncertainty principle says we can't really know all of the particulars at the quantum level, and thus it implies that at base we must always guess. Of course, all of our guesses are educated guesses, and the more educated a guess is, the more the probability wave collapses to a particle. So yes, while we can't logically produce random outcomes, we can still describe what could be produced by a seemingly random process; in other words, we can logically delineate and weigh the possible outcomes of a theoretically random process, but we can't logically determine what the outcome will be.

Thomas

Regarding Nathanael's take on destiny and fate:
It sort of seems to me that your stance is more pragmatic than anything, though I'm sure you would claim that it's not only pragmatic but also true. Here is a list of the 'causes' and at which level of inquiry they are the most apparent.

1. Quantum particles obey randomness [sub-atomic level].
2. Larger physical non-agent constructs obey determinism [atomic/molecular level].
3. Agents obey choice [macro-biological/agent level].
4. The universe obeys destiny/fate [cosmological/god level].
5. ??????????? [unknown].

This seems to me to be a heterophenomenological view of existence; it implies we should use whichever model of causation is most appropriate for a given realm of inquiry. As a pragmatic viewpoint, it is most certainly the best way to look at existence. However, it seems to me your viewpoint would only be veridical if the different modes of causation were translatable to each other. What I mean is that it must (I believe necessarily) be possible to describe one mode of causation with another. It is easy to see how determinism, choice, and fate could be described as random. Conversely, it must be possible for each mode to describe every other mode in order for there to be causal closure. Without causal closure, you would run into logical paradoxes such as the possibility of perpetual motion machines and whatnot.

So why do scientists insist on working within the determinist mode as opposed to the others? Why should everything be considered deterministic as opposed to random, or chosen, or fated? Because determinism is easily describable by logic and mathematics, it's easier to formulate predictive theories with it, it's easier to do empirical analysis with it; it is, in short, the path of least resistance for inquiry. Personally, I'll allow the idea of randomness, choice, and fate being viable causes as long as there is causal closure among them all.

Nato

"The theory of randomness in the article posits that randomness is an essential quality of certain strings. Why? Because they're not reducible to an algorithm smaller in size? Frankly, that begs a lot of questions. It's not even a useful distinction as it has no practical application (at least none that I can see)."

I think the problem is one of equivocation between two senses of 'random.' "Random" in the information theory sense of "incompressible" seems to me to be critically useful, applicable to both practical and philosophical questions. "Random" in the causation sense of "unordered contingency" is, in its hard sense, much harder to advance as a critical distinction, for reasons on which Tom's last post touches.

Nathan Smith

"1. Quantum particles obey randomness [sub-atomic level].
"2. Larger physical non-agent constructs obey determinism [atomic/molecular level].
"3. Agents obey choice [macro-biological/agent level].
"4. The universe obeys destiny/fate [cosmological/god level]."

This is an interesting schema, though as I said, I don't necessarily believe in destiny/fate. But Tom loses me here:

"[I]t must be possible for each mode to describe every other mode in order for there to be causal closure. Without causal closure, you would run into logical paradoxes such as the possibility of perpetual motion machines and whatnot."

I'm not sure I understand what is meant by causal closure. I understand the idea of "the causal closure of the physical," and I know that this is a standard assumption of the natural sciences, and is taken to be veridical by many. But even of those who take it to be true, I doubt many would claim it is *logically* necessary. I, of course, am agnostic at best about the causal closure of the physical. If the idea of causal closure is being detached from physicalism, then I can't imagine what it means.

Also, is the possibility of perpetual motion machines a *logical* paradox? Certainly it violates the Second Law of Thermodynamics, and since I believe the Second Law of Thermodynamics does happen to hold in our world, I do not believe that perpetual motion machines are possible. But there seems to be nothing logically inconsistent in the idea of a world where certain means existed to create perpetual motion machines.

Thomas

"I think the problem is one of equivocation between two senses of 'random.' "Random" in the information theory sense of "incompressible" seems to me to be critically useful, applicable to both practical and philosophical questions. "Random" in the causation sense of "unordered contingency" is, in its hard sense, much harder to advance as a critical distinction, for reasons on which Tom's last post touches."

I guess my point is that there's no reason to call the 'incompressible' random. I don't really think it's a useful overloading of the term 'random'. I suppose you're right, though; it is just a quibble over semantics.

Thomas

To Nathan:
I'm a physicalist. That fact has deeper implications for my epistemology than for others'. Basically, I believe that the metaphysical realm and the physical realm are perfectly isomorphic. The distinctions we make between the two are like the distinctions we make between different species of animals, or different kinds of substances, and like animals and substances, they are reducible/translatable to each other, i.e., they are of the same 'stuff', which I call existence. With that aside, when I/we talk about causes, we naturally are referring to the physical world. In the list that I presented, it's assumed that randomness, determinism, choice, and destiny/fate are all causes of physical phenomena. Regarding paradoxes, I suppose they are necessarily metaphysical, since there's no possibility they could ever be instantiated in the physical realm. According to my epistemology, since the metaphysical and physical realms are isomorphic, a paradoxical proposition is contentless/meaningless.

Why is causal closure necessary to prevent paradoxes? Because if there was no causal closure, you could have a case where choices/etc were causing deterministic effects without there being an equal reciprocal effect caused on the chooser. Heck, it's difficult to even talk about causes without assuming reciprocity. The idea of causality has other conceptual problems, which I've written about on my blog, and my solution was to say that causation is an artifact of perception and not something fundamental in itself. Of course, my ideas in that regard are highly speculative, and I wouldn't expect anyone to take them seriously at the moment.

Nathan Smith

Once again, I find myself stopping short and scratching my head...

"The distinctions we make between the two are like the distinctions we make between different species of animals, or different kinds of substances, and like animals and substances, they are reducible/translatable to each other..."

Animals are reducible/translatable to each other? A cat can be translated into a dog, or a giraffe? I don't get it.

Of course, dogs, cats, and giraffes may all be made of protons, neutrons, and electrons, but believing that is obviously not necessary to the idea of dogs, cats, and giraffes.

Then there's this:

"Why is causal closure necessary to prevent paradoxes? Because if there was no causal closure, you could have a case where choices/etc were causing deterministic effects without there being an equal reciprocal effect caused on the chooser. Heck, it's difficult to even talk about causes without assuming reciprocity."

It seems to me that causation is *not* reciprocal under any account of causation that I can think of. Why? Because causation is one-directional in the time dimension. Event A can be a cause of Events B1, B2, B3, etc. that occur *after* Event A. Events B1, B2, and B3 cannot cause Event A because they are subsequent to it in time. Causation is characteristically one-directional and not reciprocal.

Of course, because of the principle of conservation of momentum, if I push you, I push myself in the opposite direction at the same time. That's one of Newton's laws. But I am still the one who did the pushing, both of you and, because of conservation of momentum, of myself. Conservation of momentum is a restriction on the set of events which it is possible to cause. It does not imply that causation is reciprocal.

Nato

A dog could be translated into its elements (whichever way we want to define that), those elements transposed and translated again up to a cat. The elements of meteorology can be translated to thermodynamics and back up to hydrography or down further to chemistry and up to... etc. Since there's nothing lost at each stage* the translation succeeds. What this really means is that there are no special essences that are specific to one type of thing, because all types are different arrangements of a single essence. Since Tom includes the logical world in this monism, he is asserting that, if we were smart enough, we would find that the instantiated world and the logical world are branches of the same ur-essence. I can't say I agree, though neither do I disagree - I cannot find a satisfactory way of approaching the issue.

*One could say that certain logical truths are lost - the dog's intentions are not translated into the cat's because they are logical relations, not elements. However, if the instantiated/logical system is closed, then the intent *is* translatable, and indeed everything is *always already translated*, though we would only apprehend the translation in vanishingly rare cases.

Nathan Smith

I may as well take this opportunity to make yet another plug for, and attempt to elucidate, my peculiar epistemology of "faith." Nato writes:

"One could envision the function space, but is the choice between f(2,2) that returns 4 and the f(2,2) that returns 1 000 007 arbitrary? I think there are reasons to select the former as basic and reject the latter as rootless. I *think* there are, but I'm not familiar enough with that field to defend the assertion intelligently."

For induction to work, one must have a way to recognize certain patterns/functions/etc. as somehow more basic than others. If I observe the pattern 0, 1, 2, 3, ... and am asked to guess the next step, the pattern could be (a) "Each number is the previous number plus 1," or (b) "Each number is the previous number plus 3, divided by 2, rounded down to the nearest integer," or (c) "Each number is the square of the previous number, minus the number before that," or (d) "Each number in the sequence is the previous number plus 1 if the previous number is less than 3, or 1 million if the previous number is 3 or more." The respective rules underlying the series imply different numbers for the fifth position in the sequence:

(a) 0, 1, 2, 3, 4
(b) 0, 1, 2, 3, 3
(c) 0, 1, 2, 3, 7
(d) 0, 1, 2, 3, 1 million
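A quick sketch makes the divergence explicit (Python; the rule encodings are my own illustrative versions, each applied to the observed prefix):

```python
# Four candidate rules, each consistent with the observed prefix 0, 1, 2, 3
# but implying a different fifth term.
def rule_a(seq):  # previous number plus 1
    return seq[-1] + 1

def rule_b(seq):  # previous plus 3, divided by 2, rounded down
    return (seq[-1] + 3) // 2

def rule_c(seq):  # square of the previous number, minus the one before that
    return seq[-1] ** 2 - seq[-2]

def rule_d(seq):  # plus 1 below 3, otherwise one million
    return seq[-1] + 1 if seq[-1] < 3 else 1_000_000

prefix = [0, 1, 2, 3]
fifths = [rule(prefix) for rule in (rule_a, rule_b, rule_c, rule_d)]
assert fifths == [4, 3, 7, 1_000_000]
```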

Now, since one can invent an unlimited number of rules for the sequence, for induction to work we have to be able to rank-order the simplicity of different functions in order to prefer some over others. We do this without difficulty, and even if it is possible to "prove" (as opposed to merely *defining* simplicity in such a way that someone else could show the definition was arbitrary) that some functions are mathematically simpler, we certainly don't apply-- not consciously, anyway-- such complex mathematical reasoning in our discernment of which patterns are more basic. We discern that by some sort of *intuition*. It's an intuition that we can't exactly justify, yet which we cannot do without (without falling into total skepticism).

I call that the first step of "faith." What I can't figure out is whether my usage is a complete innovation or whether it has some connection to the traditional use of the term. But Thomas Aquinas describes faith as "between opinion and knowledge." That sounds like we might be talking about the same sort of thing.

Nato

This last post of Nathan's is, in my mind, very much on target. This definition of "faith" doesn't seem question-begging or problematic in any way. I would perhaps offer different terms and a different phrasing, but I see nothing incompatible here. Tom might, of course, have some grounds to claim the term lamentably overloaded in a manner similar to "randomness," but that's a finer (though not unimportant) matter.

One thing I would emphasize is the careful mining of intuition to arrive at positions that can be confirmed deductively. The arrival at deductive (or other) proof can be important because our intuitions are sometimes at war with one another, and the only way to decide the issue non-arbitrarily is to show that one side discommodes more, and more basic, intuitions than the other. That is to say, the more veridical we take some intuitions to be, the more skeptical we must be regarding at least the distal aspects of opposing intuitions. At the end of the day, though, we do have to trust that the preponderance of our intuitions is correct.

Thomas

Regarding Nathanael's post on faith:
I agree with some of your reasoning, though I have many caveats to add. For one thing, the reason why the sequence 0, 1, 2, 3, etc seems intuitive is because that's how we're taught the number symbols, as a well-ordered list. The reason why the decimal number 255 is more intuitive and easier for us to read and understand than the binary number 11111111 is because we're taught base-10 arithmetic as opposed to base-2. I would certainly claim that base-10 is arbitrary, for if Humanity had 11 fingers and toes instead of 10, I imagine we probably would have developed base-11 arithmetic first. So simplicity is as much a function of the language you choose to use as the actual patterns themselves. When we do multiplication, we have to memorize multiplication tables. Why? Couldn't we just use some sort of deduction? The answer is yes and no. We can break multiplication down into the recursive addition of two operands. But then how do we *know* that 1+2=3 and all the other facts of addition? Because we've defined addition to be that way. The patterns in mathematics are largely a function of how we've defined our functions and terms. The relations between any two numbers can be defined by functional mappings. Whole branches of mathematics deal with the idea of functional mappings, such as Set Theory, Category Theory, Topology, Functional Analysis, etc. That's why mathematics constantly undergoes revolutions in definitions of terms and axioms. A prominent example is non-euclidean geometry which shouldn't even be possible according to the axioms of euclidean geometry. Now, that's not to say that there couldn't be some things more inherently simple than others. The universe is apparently more complex than any isolated part of it, and it's the same with all things that could be said to be composited of something else. And yet we have a word to describe the universe as an atom (in the Greek sense), the word of course being 'universe'. 
We do not need to invoke complex reasoning or modes of thought to talk about 'universe' as an atom/element. So where is the complexity? The set that contains 'universe' as an element and nothing else is extremely simple, and yet we know the universe itself to be complex. What does that mean? At this point I would normally make some brash conclusive statements and then have to fend off attacks, but to be honest, I'm not exactly sure what would make something simple or complex beyond an arbitrary definition. Simplicity/complexity seems to me not to be an essential quality of things. Are axioms simpler than deductions? Couldn't you hold something derived as an axiom, and then derive the axiom you started with? Isn't that how we even came up with the idea of axioms? Isn't the idea of an axiom itself derived? To me, this whole concept is a sort of "chicken and egg" scenario.

Regarding induction, I noticed a discrepancy between Nathan's and my reasoning on this issue a while ago, and I just haven't brought it up until now. Knowledge can be broken down into parts, and now I'm going to break it in twain: absolute knowledge and categorical knowledge. Absolute knowledge is generally the God's-eye view, which philosophers mostly try to wrangle with. Categorical knowledge is not knowledge of particulars but of ranges of particulars, which is what science mostly deals with. A good example is velocity. When I'm driving in my car, I don't know *exactly* what speed I'm driving, but I do know I'm going faster than zero and slower than the speed of light. If I'm an experienced driver familiar with relative velocities as we measure them, then I can know that I'm driving within a 10 mph range, say between 50 and 60. These are very much knowledge claims, as valid as absolute knowledge claims. Just because I don't know exactly what my velocity is doesn't mean I don't have knowledge regarding my velocity. Induction works not because it gives us absolute knowledge but because it gives us categorical knowledge. When data points show up outside of our inductive theories, we must either broaden the range of our knowledge claim or develop a new theory. If I have the numbers 1-10 in a hat and I draw one, I can make the knowledge claim that I'm 90% certain I don't have the number 7 in my hand. Through the convergence of categorical knowledge, we can asymptotically approach absolute knowledge, again collapsing probabilities down to a point.
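The hat example can be spelled out as a small computation. A minimal Python sketch (illustrative only, not from the comment):

```python
# Categorical knowledge as a probability claim: with the numbers 1-10
# in a hat, a single draw fails to be 7 in 9 of the 10 equally likely
# cases, so "I don't have the 7" holds with probability 9/10.
from fractions import Fraction

hat = range(1, 11)  # the numbers 1 through 10
p_not_seven = Fraction(sum(1 for n in hat if n != 7), len(hat))
print(p_not_seven)  # 9/10
```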

On to Nathan's specific point about induction in his last post: the "unlimited number of rules for the sequence" are all equally valid. It is only as we get more and more data points that the rules get pared off. Simplicity with regard to rules is important only when you have multiple *synonymous* rules, i.e., addition by 1, or addition by 2 followed by subtraction by 1, etc. The best rule in that case is the one that keeps the same meaning but takes the fewest bits, or the least work or logic, to implement. In computer science, sometimes we care more about the speed of an algorithm than about how much space it takes up. There may be a near-infinite number of synonymous rules or algorithms, but each one may have different practical applications: some are better for speed, others for space, others for parallelism, etc. It is not intuitive to me to say that one is necessarily simpler than another. With your example of the series 0, 1, 2, 3: if I were a man who had not been taught numbers, and instead I saw these numbers of sheep in separate sheep pens and was told to predict what was in the next, I wouldn't automatically intuit the number 4. If I looked at the first, then the second, then the third, then the fourth, I might expect the next one to contain 6 (1+2+3, though that doesn't even follow a consistent pattern, since you'd need two 1's at the beginning to get to 2). But expecting 4 does not seem intuitive to me unless you've been taught number symbols.
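The point that finitely many data points never single out one rule can be made concrete. A short Python sketch (the second rule is a hypothetical alternative invented for illustration, not anything from the post):

```python
# Two rules that agree on the observed data 0, 1, 2, 3 but diverge at
# the very next term: the data alone cannot pare them apart.
def rule_a(n):
    return n  # "each pen holds one more sheep than the last"

def rule_b(n):
    # agrees with rule_a for n = 0..3, because the extra polynomial
    # term has roots at exactly those points
    return n + n * (n - 1) * (n - 2) * (n - 3)

assert all(rule_a(n) == rule_b(n) for n in range(4))
print(rule_a(4), rule_b(4))  # 4 28
```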

There's a lot more I could say on this issue, but I'll leave it at this for now so you guys can respond.

Nathan Smith

"For one thing, the reason why the sequence 0, 1, 2, 3, etc seems intuitive is because that's how we're taught the number symbols, as a well-ordered list. The reason why the decimal number 255 is more intuitive and easier for us to read and understand than the binary number 11111111 is because we're taught base-10 arithmetic as opposed to base-2. I would certainly claim that base-10 is arbitrary, for if Humanity had 11 fingers and toes instead of 10, I imagine we probably would have developed base-11 arithmetic first."

Certainly a base-10 number system is arbitrary, but I don't think the integer number line is arbitrary in the same way. The series 1, 11, 111, 1111, 11111 seems simple visually, and that, to be sure, is an illusion due to our use of the base-10 numbering system. But 0, 1, 2, 3, 4 can easily be represented in a numbering system of any base. In base 2, it is 0, 1, 10, 11, 100. In base 3: 0, 1, 2, 10, 11.
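Those base-2 and base-3 renderings can be verified with a small converter. A Python sketch (illustrative only):

```python
# Render a non-negative integer in an arbitrary base (digits 0-9 only),
# by repeatedly taking the remainder and quotient.
def to_base(n, base):
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

print([to_base(n, 2) for n in range(5)])  # ['0', '1', '10', '11', '100']
print([to_base(n, 3) for n in range(5)])  # ['0', '1', '2', '10', '11']
```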
