
November 23, 2009



As always I'm glad Nathan has decided to address Dennett, though of course I continue to disagree with his appraisal.

"He is saying that there are some arithmetical propositions that the mathematician would always get wrong, by the nature of his mental algorithm."

Yes, mathematicians - even the best mathematicians - frequently get things wrong. Something seems like it has to be right, but turns out to be wrong later. The algorithms of our minds change a little all the time, of course, so something that fools a mathematician once may not fool him again later. Further, teams of mathematicians would be quite a bit harder to fool than a single one, though of course it's rather difficult to imagine that even teams of mathematicians are infallible. It may even be that there are some problems so hard that no one will ever get them right. What about this surprises anyone? It requires only normal, not radical, skepticism. For example, Fermat was almost certainly wrong to think he could prove his conjecture that "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers". We have proofs now, but they use math utterly unavailable to Fermat. In fact, mathematicians regularly present proofs that turn out to have errors. It doesn't throw all of mathematics into doubt.

His general response to questions like "if our brains have only limited/evolutionarily-selected access to Truth, why do we think we know anything?" can be found in The Intentional Stance, Elbow Room, or (I think, I'd have to look) Brainchildren. You might be able to catch echoes of it in some of his papers.

Anyway, I can offer a simple answer as to why we can repose any confidence that our construction of the world maps well enough onto the world itself to allow us to continue refining toward truth.

First: if our logic is invalid, everything is chaos. This would seem to be an insurmountable, yet unimportant possibility. That is to say that its truth would imply absolutely nothing about anything, because only order produces implications. It literally makes no sense to even contemplate it as a possibility.

Carrying forward the same line of thought in a post from several years ago:
Axioms and the rules of logic are our intuitions codified. If base intuition is absolutely unreliable then reason has no traction, and Descartes's demon has won: we will never believe anything about the world that could be counted as true knowledge.

The real virtue of reason and the logical system it implies is that it directs us to pare our intuitions hierarchically. If logic brings it to our attention that some later-acquired belief or intuition conflicts with a primal one like "A thing cannot be 'A' and 'not A' at the same time and in the same respect," then our deeper intuitive belief directs us to discard or correct the shallower one.

Even with this, one might ask something along the line of "why would evolution select for a brain that can comprehend generally?" The quickest answer is to note that we have been a technological and cultural species since before we were a species. Because culture and technology moved so much faster than genes ever could, specialized genes could never evolve. Instead, genes that prepared their hosts for whatever cognitive challenges they might face in a cognitively-determined fitness landscape would succeed. Thus the grand expansion of the general-purpose modeling 'front brain' that is so hypertrophied in humans. There's a tremendous amount of hand-waving here, but there does seem to be plenty of prima-facie justification for humans having the basket of competencies we do.

As a side note, both Chomsky and Penrose make almost precisely the same mistakes about what kind of thing consciousness must be that characterize basically all of the wasted 1970s. Though popular outside cognitive science and cognitive philosophy, they are perhaps the two thinkers who have fallen the most in terms of attention given to their ideas. Chomsky's pathbreaking work in linguistics and Penrose's work in physics both remain in good standing, but they have fallen off the cognitive charts. Chalmers would be the best philosopher still active that opposes the modern consensus that Dennett, Hofstadter, the Churchlands and others have created. The late 70s and early 80s saw hard realists like Block, Searle and Fodor, though they too, I think, have been on the wane. These days the fight seems sharpest between hard eliminativists (broadly in the Churchland camp) and the soft realists (broadly in the Dennett camp). I think the eliminativists aren't getting a lot of traction outside UCSD, but they get cited a lot by working scientists so it may just be that I don't agree with them and so I minimize their presence. For other 'soft realists' carrying forward now that Dennett is wasting his time talking about things he doesn't understand particularly well (i.e. religion), see Peter Carruthers (my favorite), the infamous Steven Pinker, Antonio Damasio and so on.

Nathan Smith

re: "Something seems like it has to be right, but turns out to be wrong later."

But if our brains were algorithmic, it would never "turn out to be wrong later." We would just be permanently programmed to mistake falsehood for truth. Maybe all the time. And I agree that this is an "unimportant possibility" in the sense that it's false, but it can only be false because our brains are not algorithmic.

re: "Chomsky's pathbreaking work in linguistics and Penrose's work in physics both remain in good standing, but they have fallen off the cognitive charts. Chalmers would be the best philosopher still active that opposes the modern consensus that Dennett, Hofstadter, the Churchlands and others have created."

'Consensus' within the self-enforcing clique of academia of course. Cognitive studies in general contributes nothing to the practical knowledge that ordinary people use, because it's vitiated by materialist dogma and consequently lacks truth-value.


"But if our brains were algorithmic, it would never "turn out to be wrong later." We would just be permanently programmed to mistake falsehood for truth."

That was not what Dennett was saying at all. No one thinks our brains are simple algorithms that calculate our way to the same answer every time. Nathan may believe that this is Dennett's position, but it is such a caricature as to be almost the opposite of what he believes. Rather, he's saying that for any finite amount of time it's almost certain that one with enough information about our brain can come up with a problem for which we will think we have the answer but be mistaken about it. As Dennett has pointed out, however, we have developed ways of changing ourselves, of modifying what we can and can't understand, such that what is fixed for us in the short term isn't in the long term. We can and do create choices, competencies, understandings and so on.

"'Consensus' within the self-enforcing clique of academia of course. Cognitive studies in general contributes nothing to the practical knowledge that ordinary people use, because it's vitiated by materialist dogma and consequently lacks truth-value."

Actually, I am a bit biased in my description precisely because I pay most attention to the philosophers that most influence working scientists who are doing current research. Little of this is quite what one would call 'practical knowledge that ordinary people use,' but that includes almost all fields of scientific inquiry. There are some applications in terms of facial recognition scanners and other such "expert" systems, but most of that is using decades-old research. The neurologist Ramachandran is actually getting some decent results in healing phantom limbs and whatnot using a better understanding of how the brain works, but that is a bit at the gross level. As we start building more sophisticated neural interfaces for amputees and other more direct applications of perceptual research we'll be moving into stuff that came out of the 90s. Some other applications coming out of Rodney Brooks' shop are turning into military applications. The stuff they're working on today (e.g. Blue Brain) probably won't start really coming into regular people's lives for another couple decades, but the research is producing and testing hypotheses today.

That doesn't even address all of the heterophenomenological hypotheses tested right from the beginning. Dennett's hypothesis about "filling in" vs inattention was tested and the maximal version was found to be incorrect and is now discarded.

Nathan is, of course, free to dismiss it all as "vitiated by materialist dogma and consequently lack[ing] truth-value," but it would seem the Department of Defense, health providers and numerous private startups disagree.

Bill Parks

Consciousness is a property of a living/thinking biological system. The only way to produce consciousness in an artificially intelligent system is to add biological components to create a "hybrid" system consisting of biological life integrated with artificially intelligent components. The biological component would account for any consciousness in the hybrid system - not the artificial component.


"Consciousness is a property of a living/thinking biological system."

If that's axiomatic, I suppose there's nothing more to argue.

Nathan Smith

I wouldn't say that consciousness is only a property of biological organisms. As a Christian, I should at least be open to the possibility of *angels*, which are not biological but are conscious. And God is conscious but not biological.

It seems, on the contrary, that there's something rather mysterious about the fusion of a conscious and rational soul with organic flesh in man. It's a phenomenon we would not know how to create. But the Being that created conscious, rational men could, presumably, create conscious non-biological beings as well. There doesn't seem to be any contradiction.

To say consciousness is a property of "living/thinking" systems seems redundant. Something that thinks is conscious; that is what it is to be conscious. Life in the biological sense is not a sufficient condition for consciousness-- plants are alive but not conscious, and the same goes for sleeping people-- nor is it, I think, a necessary one. But we speak of "the living God" even though God (the Father) doesn't have a physical body. The natural answer to "Are angels alive?" seems to be "Yes." So it may be that if something is conscious, our concept of "life" naturally extends to it. In that case life would be a necessary condition of consciousness.


Biology describes a class of chemical machines. It's an interesting class, but it would seem somewhat orthogonal to consciousness. Until fairly recently most chairs were made of wood and our archetypal chairs - those we summon for conversational examples of 'class chair' - are invariably wood. Yet of course there are many materials that fill the role quite well, and membership in the class is neither aided nor impeded by wood content. Whether something is conscious would seem similarly related to chemical machine content.

Nathan Smith

The word "machines" would seem to be inapt. I would define a machine as an object of non-trivial mechanico-physical complexity created by a conscious being to serve a function or satisfy a need. Biological phenomena do not satisfy the criterion.


Nathan, does this mean you are coming out against intelligent design?

Nathan Smith

Well, I guess you might be able literally to cross-apply my definition to life from an Intelligent Design perspective, but I'm trying to capture the distinction that we ordinarily make between "life" and "machines," which is quite a sharp and, so to speak, weighty one; it matters, it has a moral character. To say, "A horse is nothing but a machine," seems, as it were, to do violence to the concept of a horse, to be almost a sort of betrayal. I think ordinary people would revolt against the proposition. Why?

In the case of ourselves, we know that there exist features which go beyond any mere mechanico-physical complexity, features such as thought, free will, conscience. A person who doesn't understand machines might sometimes mistake them for possessing thought and will, and this misunderstanding is, I think, where the intuitive force of the idea of "artificial intelligence" comes from. An abacus is really as much a form of artificial intelligence as a computer: it can aid the mind in calculation, it can represent numbers. If you use an abacus to solve a sum that is too difficult for you to do in your head, it might be perfectly sensible, though a bit metaphorical, to answer the question, "How do you know that 4,365+2,599=6,964?" by saying, "The abacus *says* so." And if someone calculates a long sum incorrectly and you want to explain to them why, you might say, "At Step 7 you forgot to add three to the second string. From then on, the abacus *thinks* that..." Since the abacus is serving some (minor) functions usually served by a human mind, we can speak metaphorically as if it were speaking and thinking. But it doesn't-- or at least, we haven't the slightest reason to believe that it does-- have the *experience* of thinking. Someone who doesn't understand the abacus at all might imagine that it was an intelligent being, with which the human user's motions represent a complicated language of communication, and there were real thoughts inside the abacus. In the same way, only rather more easily, one can be confused into thinking that the computer, with its far greater and more diverse powers, is really thinking and believing things. This is a sort of modern variation of the old "God of the gaps" argument. When you don't understand the physical causes of natural phenomena, you might attribute, say, thunder and lightning, to the special intervention of a divinity.
But once you understand the physical principles behind lightning and thunder, they cease to be mysterious, and you no longer feel the need to postulate the (direct, special) involvement of a conscious being with a will. When you understand how a computer works, its mysteries are dispelled, and you can see that it's just another machine. But for the common man, to whom the computer is a mystery, it is natural to ascribe to it "artificial intelligence." And some who *can* see how a computer is nothing but algorithms, but who are not very attentive to the nature of human beings, then go on to postulate that humans, too, are "nothing but algorithms," without claiming, except by way of empty bluster when they think they're in the company of people unable to ask penetrating questions, to have any remotely adequate account of how this could be so. This is why the critiques of a guy like Penrose are so useful. Dennett's answer to Penrose is reminiscent of the Russians' military response to Napoleon or Hitler: it consists of retreating into the wildernesses of skepticism, to the point where he has waived the right even to regard the logical and mathematical faculties which are necessarily antecedent to all natural science as presumptively valid; and certainly he is utterly without legitimate resources for rehabilitating them once they have been thus discredited. The best that can be said is that Dennett succeeds in arguing that Penrose can't *prove* that the brain is nothing but algorithm; and this is gained only by sacrificing the validity of logical and mathematical reasoning in general. Now, there is nothing wrong with being a talented writer. Sometimes writing talent is used to good purpose, to illuminate the mind with valid insights in a few words that a less brilliant exposition would take tomes to convey.
But in this case Dennett's being a talented writer enables him to seem to be in a familiar realm of common sense which he has no right to, to mask the radicalness of his suggestion that our logical and mathematical faculties might, in themselves and not in their application, be fallible, to suggest that the mistakes made by a tired or careless or hasty mathematician, which his faculties could quickly recognize if awakened to the error then or later, might have something in common with the fundamental and permanent mistakes or limitations built into a faulty or incomplete algorithm, and to imply that if the concession of fallibility of human logical and mathematical faculties is made, we have some reason to believe that it is of limited scope.

Nathan Smith

But we can cut through all this obfuscation by pointing out that if the brain is just a bunch of algorithms we know it could not apprehend the truth even of all propositions of arithmetic, and that if it can't apprehend or misapprehends some of them there's no reason to think the number of such propositions that it misses or errs about is particularly small, therefore we cannot on these suppositions trust even our logical and mathematical faculties very far at all. But we have been building on these faculties from the very beginning. We must, then, become modern uber-Descarteses, demolishing not only the knowledge we have derived from the senses but also what we have derived from logic, and rebuild... except that there is nothing to rebuild on.

The grounds for rejecting this course, everyone knows in a way, yet it is hard to find the words to call them to mind properly. "Common sense" and "sanity" do not wholly fail to capture them; "intuition" might do for some purposes; Penrose's "mathematical insight" is an effort to express them. There are times when truth is handed to us on a platter; we can call it inspiration or insight; and just as a hungry man ought not to spit out wholesome food, we ought not to reject truth when it comes to us. As I said before, we know by experience that we have features that go beyond mechanico-physical complexity, features like pain and pleasure and emotion and free will and joy and conscience and love and aesthetic sensibility. In ourselves these things are fused with a biological organism. We cannot understand why, and indeed the fact that we are both soul and body has been a source of wonder and perplexity to man throughout all the ages, but the fact that soul and body are thus fused in us gives us some reason to think that at least some of these characteristics might extend to other biological organisms, that there is something special about life generally. I think we have pretty strong intuitions (though they didn't persuade Descartes) that a horse is not a mere collection of chemical reactions; a virus, by contrast, might be. Of animals I think we don't really know. Of course even in the case of humans we don't know in one strong sense: solipsism is not, I think, strictly refutable; and the existence of other people is one of those special epistemic leaps that I call "faith." But the epistemic question is separate from the conceptual question. If one were to accept the solipsist hypothesis, that would be to assert that one is the only conscious, and also the only living, being in the world.

Now, if Intelligent Design is true, does that mean that living things *are* machines after all, by my definition of "an object of non-trivial mechanico-physical complexity created by a conscious being to serve a function or satisfy a need?"-- the conscious being being, in this case, God? Perhaps a clause needs to be added: "... but not itself conscious..." That might seem to leave open the possibility that there are machines made by God, complex entities lacking consciousness used to serve His ends. But I think that is false. The reason is that God is omnipotent and does not need to operate by means of machines as we do. If God wants to lift something or divide something or reproduce something, He can simply do it. He could have no motive to make a machine to lift or divide or reproduce things for Him. Anything not conscious which God creates, He creates for its own sake, because it is beautiful or otherwise good in itself. God the Creator of a nature is an *artist*, not a gadgeteer, however ingenious the workings of nature may be. And I think we see this in nature all around us: though nature is a vast interlocking system, one can focus on any part of it and find in it a beauty that justifies its existence.

Yet it is said that the end of man is to praise God. In that sense man does have a function or a purpose; his mere existence is not enough. In that special sense man is to God as a machine is to us. But here the free, conscious nature of man is his essential feature, for love and praise and gratitude freely given by a free being are different and better things than any praises that could be uttered by a chorus of automata; yet, by definition, they cannot be, so to speak, *guaranteed* to "work" ex ante, the ideal of the machinist. We have an end, but whether we fulfill it or not depends on us and our choices.

To sum up, life is never a machine. It might, from the divine point of view, be an art form; or, if it is a thing meant to serve an end, as man is, that end can only be served precisely by virtue of a quality antithetical to the nature of a machine, namely, free will. To call life a machine is to misunderstand its essential nature. To call non-human biological nature "machinery" is like going to an art museum solely for the purpose of chemical analysis of the pigments in the paint. To call humans machines is to imply that they ought to be as predictable and reliable as a machinist seeks to make his devices, in short to make them slaves, and it also implies that they can be destroyed without qualm whenever the expected net present value of their maintenance costs more than the expected net present value of their output.


Before I get to the main body of my response, I should perhaps note that "To sum up, life is never a machine," takes a position that's unclear to me. Is Nathan saying that a mechanistic explanation of the operation of bacteria cannot be complete? That's a rather strong and extremely dubious claim. If he is instead defending consciousness from being mechanistic, then he is apparently missing the thrust of my argument, which was that something biological might be conscious, but consciousness isn't biology. This would generally support rather than contradict Nathan's position, so I'm not sure exactly what was going on there. Perhaps it is all confusion due to equivocation between different meanings of the word "life."

And onto the main event:

"if the brain is just a bunch of algorithms we know it could not apprehend the truth even of all propositions of arithmetic."

Actually, we know it could not *prove* the truth of all true propositions, and Godel's incompleteness theorem didn't have an exception for human mathematicians to somehow prove them extra-mathematically. Meanwhile, modern connectionist algorithms might be said to be "almost entirely sure" about something, or might even be measured as being 98% positive (to use a semi-arbitrary simplification of something far more complex). These are far more complicated, messy and (dare I say it) mysterious algorithms than anything with which Penrose was familiar. In fact, even their human "designers" can't really keep track of all the dynamics that emerge when these self-modifying systems of equations meet the world. Of course, they are still far too simplistic and functionally impoverished to account for what cats and dogs do, much less humans, but they have gotten quite sophisticated enough to challenge the blithe assertions of what algorithms can and can't be like. Thus Nathan inverts the truth when he says things like:

"A person who doesn't understand machines might sometimes mistake them for possessing thought and will, and this misunderstanding is, I think, where the intuitive force of the idea of "artificial intelligence" comes from."

Basically, Nathan doesn't really know anything about what modern artificial intelligence researchers are really doing or what cognitive scientists have actually discovered, leaving him free to use his intuition to demolish the entire enterprise in a little quick a-priori, plus an extended psychological explanation of why those AI fellows don't realize they're off in the wilderness. I've tried to show that algorithms aren't just what Penrose thought they were to try and get away from talking about Godel's incompleteness theorem as if it somehow dealt a coup de grâce to logico-mathematical certainty for algorithms but not for human cogitators. It's a total non-sequitur. Nathan didn't invent it, but he should stop following Penrose in it.

I do think Dennett and I are committed to the proposition that nothing can be proven "all the way down," but I don't think this is radical skepticism; it's just understanding what the context of the word "proof" is. I suspect that Nathan has something of the same commitment except that the terms in which he defends holding to certain axioms are theologically inflected where mine are essentially appeals to parsimony.

We can also both agree that conscious life isn't meaningfully "mechanistic", though here the outlines of our defense diverge wildly, starting with the old fight about the meaning of "free will". The substantial disagreement in this thread is all in that argument about what would count as choice, as will, and so on.
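The graded-confidence point above (a connectionist system being "98% positive" rather than delivering a theorem-prover's yes/no) can be sketched with a deliberately tiny toy. Everything here is invented for illustration -- the weights, features, and function names stand in for what a real trained network would contain -- but the shape of the output is the point: a degree of belief, not a proof.

```python
import math

def sigmoid(z):
    """Squash an unbounded score into a (0, 1) confidence."""
    return 1.0 / (1.0 + math.exp(-z))

def confidence(weights, features):
    """A toy 'connectionist' judgment: a weighted sum of evidence
    passed through a sigmoid. The result is a graded, revisable
    degree of belief rather than a binary verdict."""
    score = sum(w * x for w, x in zip(weights, features))
    return sigmoid(score)

# Hypothetical weights, standing in for what training might produce.
weights = [2.1, -0.7, 0.4]
features = [1.5, 0.3, 1.0]

p = confidence(weights, features)
print(f"The model is {p:.0%} positive.")  # prints: The model is 97% positive.
```

A real network is many such units stacked and nonlinearly composed, with the weights themselves adjusted by experience, which is what makes the emergent dynamics hard even for the designers to track.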

Nathan Smith

One of the defense mechanisms of an academic field is to insist that no one is qualified to express disagreement with its premises and claims unless they "know" enough about it, that is, unless they are initiates. Marxists do this. Nato is doing something like this when he writes: "Basically, Nathan doesn't really know anything about what modern artificial intelligence researchers are really doing or what cognitive scientists have actually discovered, leaving him free to use his intuition to demolish the entire enterprise in a little quick a-priori..." No comment is permitted unless you are an initiate. And you are an initiate only if you accept the premises. Occasionally someone who can't be dismissed for inadequate credentials, a Penrose or a Chomsky, comes out against the orthodox position, and the initiates have to engage in a combination of argument and reformulation of doctrine and polemic/sneering/social isolation to marginalize the heretic. It is important, too, not to let non-initiates think the disagreement among experts gives them the right to comment on which experts seem to be right, thus: "Nathan didn't invent it, but he should stop following Penrose in it."

I could defend my credentials a bit-- I have worked with evolutionary algorithms in my simulations, read about neural networks, and run lots of regressions-- but I'll choose instead to be classified as an outsider, for I believe it is sometimes appropriate for outsiders to rebel against self-reinforcing expertise-cliques. In the present case, what I am saying is:

1. Ordinary people trust their logical and mathematical faculties as sources of truth. They are in a sense infallible, for when we do make mistakes, we can recognize the same mistakes via the logical and mathematical faculties, which shows that it was not those faculties that made the mistakes-- or they would make them again-- but other features of the operation of the mind. Let's call it Thesis A.

2. Dennett and other advocates of materialism are compelled to abandon Thesis A, at least temporarily, because they are committed to reducing the mind to a materialist basis. They presumably would like to get back to Thesis A, since the negation of Thesis A is that the mind is unreliable, and they must have an uncomfortable awareness that this dissolves all knowledge, although Dennett, a master of neglecting inconvenient ramifications, does not follow that argument to its end. What Godel's Theorem shows is that they cannot get back to Thesis A. No algorithm can know, i.e., prove, even all the propositions of arithmetic. So if we are to accept materialism, we must abandon Thesis A. This strikes me as being an updated version of C.S. Lewis's Argument from Reason, which I read at the age of 17 or so and which all the debates I've had on the subject since confirm me in thinking is an ironclad refutation of materialism.

So, aside from questioning my credentials, what does Nato have to argue? Nato writes that:

"modern connectionist algorithms might be said to be 'almost entirely sure' about something, or might even be measured as being 98% positive (to use a semi-arbitrary simplification of something far more complex)"

Since this certainly doesn't amount to an algorithmic vindication of Thesis A, I don't see why Penrose or I should be the slightest bit affected by the claim, even if the claim itself is sound. I suspect it isn't, and that a closer examination would reveal that these algorithms are parasitic on human mathematical intuition... Again, Nato writes:

"In fact, even their human 'designers' can't really keep track of all the dynamics that emerge when these self-modifying systems of equations meet the world."

What's the point here? This is true even when you write down an arithmetic problem on paper because you can't solve it in your head. Computers, like abacuses and paper, certainly have more memory than we do, but they lack mathematical intuition. Nato writes:

"Actually, we know [the brain as a bunch of algorithms] could not *prove* [as opposed to know] the truth of all true propositions, and Godel's incompleteness theorem didn't have an exception for human mathematicians to somehow prove them extra-mathematically."

Nato's prove/know distinction here is odd. I would perhaps not want to commit to a general epistemological principle that one can only know what one can prove. We do, after all, sometimes use the word "know" in a looser sense. But mathematics is one area where there seems to be little reason to accept less than the gold standard of proof in defining knowledge. Does Nato want to claim that computational algorithms could not prove all true propositions of arithmetic, but, somehow, could "know" them?

And *of course* Godel's incompleteness theorem didn't "have an exception for human mathematicians"-- it is not a theory of what minds can do, but of what algorithms can do. If minds can do something that algorithms cannot-- and it seems pretty clear that minds can prove all propositions of arithmetic, as asserted in Thesis A-- then minds are something more than algorithms.
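For reference, the theorem both sides keep invoking can be stated in its standard textbook form; note that it is a claim about formal systems, and by itself says nothing about which kind of thing (brain, program, or otherwise) realizes such a system -- that is precisely what the parties here dispute:

```latex
\textbf{G\"odel's first incompleteness theorem.}
If $F$ is a consistent, effectively axiomatizable formal system
strong enough to interpret elementary arithmetic, then there is an
arithmetical sentence $G_F$ such that
\[
  F \nvdash G_F
  \qquad\text{and}\qquad
  F \nvdash \neg G_F .
\]
If, moreover, $F$ is sound, then $G_F$ is true in the standard
model of the natural numbers, so $F$ is incomplete with respect
to arithmetical truth.
```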

If nothing can be proven all the way down, that is radical skepticism unless there are some things that are "foundational," that can be known directly, without any deeper or prior proof. I would say that there are such things, including logical and mathematical truths, and the experience of free will, among other things. These are NOT enough to avoid being a skeptic: one needs what I call "faith" to escape Hume's disproof of inductive reasoning or to believe in other minds, and someone who denies order in the world is certainly a skeptic. Dennett and Nato start their epistemic journey, it seems, even more empty-handed than I do. To say it's "just [a matter of] understanding what the context of the word 'proof' is" is just an evasion, a way for Dennett to say that he chooses to claim to know lots of things that he doesn't have grounds for claiming to know, and he prefers not to be explicit about this. In particular, Dennett has a purely fideist belief in materialism that is motivated not by any evidence but by his desire to have an impressively complete theory of things that is socially acceptable among an intellectual elite where the prestige of the natural sciences is the most salient feature of the times.


Nathan did not attempt to address any extant responses to Penrose, including those offered by Dennett, yet he wrote "Dennett is strangely oblivious to the radical skepticism inherent in his argument; strangely, because he is a philosopher of sorts and one might think he would be sensitive to it." There is an extensive corpus of discussion of the issue, of which Nathan is either unaware or refuses to acknowledge. I had hoped my statement that Nathan "doesn't really know anything about what modern artificial intelligence researchers are really doing or what cognitive scientists have actually discovered" would result in Nathan responding to the actual positions of Dennett et al, but I suppose Nathan views this as a dialog-squelching demand for Nathan to become an initiate.

As Dennett points out, Penrose is making a fairly simple mistake. Penrose equates 'knowing' with a process of formal logic, despite the fact that our experience of certain knowledge is very rarely anything like the conclusion of a proof. Thus we have no real reason to think his sidetrack into 'corrected quantum gravity' in order to create a globally-capable proof-generator has anything to do with consciousness. That identification is a relic of AI from the 50s and 60s, when behaviorism was in vogue and researchers wanted thought to look like well-ordered propositional logic*. Anyone reading Dennett's "Content and Consciousness" from that era would wince at his computer-style box diagrams, typical of the field at the time. I don't even understand why Nathan would go along with this mistake. Is it just because it purports to give Dennett et al trouble? I would expect him to find more affinity for Block's anti-functionalism or Searle's realism, for example.

Picking Chomsky and Penrose as Nathan's critics of choice seems totally nonsensical to me, and I can only presume it's a product of ignorance.

"It seems pretty clear that minds can prove all propositions of arithmetic"

This is deeply confusing. How is it clear? What the heck can Nathan mean by "prove" in this context?

*Indeed, one of Chomsky's big breakthroughs was to show that there are important ways in which the linguistic system really is mechanistic, though of course further research has tempered the finding.

Nathan Smith

I did respond to Dennett's argument against Penrose in the initial post. My response to it is that it leads to radical skepticism.

Chomsky and Penrose are not exactly my "critics of choice," and I didn't endorse Chomsky's argument because I'm not sure I've understood it; I'd have to read more of him. Penrose's argument does strike me as successful. That doesn't mean I endorse his views more broadly, of course. Penrose is a guy who sees that the materialist account of mind is doomed, but who's still wedded ideologically to materialism, which leads him to his weird quantum-gravity ideas. I don't endorse those, and see Penrose's argument as an updated version of the Argument from Reason.

Thought does not always look like well-ordered propositional logic. That's just one of the things it can do.

By the way, Nato wrote above that there's no evidence humans can solve problems "extra-mathematically." That's not a good word choice. Humans solve problems by insight, by our privileged access to the realm of ideas. It would be more in keeping with the historical use of the word, I think, to use the word "mathematically" to describe this human faculty, and to describe computers' method of solving via algorithms, we might coin the word "sub-mathematically."


Nathan responded to Dennett's offering of a "perfect understander" analogue to Penrose, but that response was actually peripheral to the main criticism of the whole equation of human consciousness with perfect understanders. Penrose's entire line of criticism starts with a bad assumption -- that is the simple mistake. Along the way, he raises the general issue of humans' singular breadth of ability to understand, and this is a question that needs answering, but no one *actually* thinks any *actual* human is a perfect understander. That's an idealization taken as a theoretical given.

"By the way, Nato wrote above that there's no evidence humans can solve problems "extra-mathematically.""

This mis-paraphrase is useful in elucidating part of the disconnect here. We can absolutely solve problems without "proving" the solution in a logico-mathematical manner. Proving that something is true a priori is a very special case of solving. Even a mathematician's solutions in her own life could hardly be more than half mathematical; the rest are responses to the need to sequence trips to the store, or to figure out what to do with the dogs on vacation. Persistent equivocation between the special case ("prove") and the general case ("solve") is causing the trouble here.


Maybe it would be useful to lay out Penrose's reasoning.

1) Humans are perfect understanders.
2) A perfect understander would be able to prove any true proposition.
3) Godel's incompleteness theorem shows that within any consistent axiomatic system rich enough for arithmetic, there are true propositions that cannot be proven.

Therefore humans cannot be arithmetically described within a consistent axiomatic system.
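For reference, and in standard notation rather than anything drawn from this thread, the incompleteness result that premise 3 gestures at is usually stated along these lines:

```latex
\textbf{Theorem (G\"odel's first incompleteness theorem).}
Let $T$ be a consistent, effectively axiomatizable theory that
interprets elementary arithmetic. Then there is a sentence $G_T$
in the language of $T$ such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T,
\]
although $G_T$ is true in the standard model of arithmetic $\mathbb{N}$.
```

Note that the theorem is about formal systems meeting those hypotheses; whether a human mind is such a system is precisely what is in dispute.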

Penrose's 'solution' to the conundrum seems to be something like an attempt to describe humans within an inconsistent axiomatic system. Quantum indeterminacy gives us access to multiple systems at once or whatever. Libertarians like it for their own reasons, of course, but that's neither here nor there except as a psychological explanation for why folks still bring up CQG two decades later.


One more note. It is rather a leap to move directly from "there are (relatively precise) circumstances in which a person will be mistaken every time" to "the person can have no beliefs that qualify as knowledge." There are (for the sake of argument) tricks that will fool 100% of humans when they first encounter them, yet that doesn't tempt us to radical skepticism, and neither should it be a primary criterion for (radical) skepticism regarding algorithmic knowledge.

Nathan Smith

I still don't understand what the "bad assumption that was a simple mistake" by Penrose is. I know Dennett *said* that, but I think he's wrong. Nato doesn't seem quite to have said what Penrose's mistake was. Is it a mistake to think that humans are "perfect understanders," because "no one *actually* thinks that any *actual* human is a perfect understander"? But that's a bone-headed dodge. Humans err because of limited time, limited memory capacity, and the fact that thought usually moves by free association rather than by conscious direction and is difficult to focus on a concrete problem: that's why we use paper and abacuses and computers. Obviously Penrose knows that perfectly well, and if he claims human beings are perfect understanders, he means something different. He means that we have an ability, call it "mathematical insight" or whatever, to apprehend truth, which is reliable -- not reliable just in a casual or general or probabilistic sense, but reliable with absolute certainty when the faculty is applied with sufficient diligence and without interference from laziness or distraction. And this is a necessary premise of mathematics, one regularly revealed in all the ways we talk about mathematics. Thus, if we try to explain a difficult problem to someone and our efforts meet no success, we may give up on the grounds that the effort is not worth it or that the learner lacks the will, but we will not typically -- and I would claim we will never ultimately -- conclude, "Hmm... he just doesn't seem to possess the faculty of mathematical insight that would enable him to understand this." Or again, if someone errs in solving a math problem, we consider it a *lapse*, not merely a bit of bad luck. We know the human faculty of mathematical insight is not fundamentally limited or prone to error.
A belief of this kind-- a belief that humans are "perfect understanders" in this sense (and obviously not in the sense that we never make mistakes due to laziness or distraction)-- is implicit in the way mathematicians do business. In wrongly diagnosing Penrose's "simple mistake," Dennett is himself making the simple mistake of ignoring all this. He takes Penrose to mean something he clearly couldn't have meant, and ignores the far more interesting and cogent argument that Penrose did seem to have in mind.

It might be useful, but I fear a bit beyond my capacity, to offer some formal, satisfactory definition of "proof." I have some *experience* with proof: I wrote down an economic model just the other day which involved a certain amount of deductive proof. But I'm not sure I could define the term. One thing that does emerge from my encounters with proofs by mathematicians and economists, and my occasional attempts to frame my own, is precisely what Penrose asserts: they often appeal to "mathematical insight," to simple intuitions which, once the problem has been manipulated in such a way as to isolate them, one *just sees.* Algorithms play an important supporting role, and human-computer collaboration seems invariably to involve the creation of an algorithm by a human, endowed with mathematical insight, to be executed by a computer. In any case, I don't think that "proving something is true *a priori* is a very special case of solving"; I think we basically do it all the time in math, even if we don't quite formalize it to the degree one does in grad school.

Nato's last note is yet another evasion. He says "there are tricks... that will fool 100% of humans when they first encounter them." *When they first encounter them,* maybe so. People do not usually apply their logico-mathematical faculty, and there are all sorts of regularities in the associations and shortcuts and assumptions that characterize the everyday operations of our minds that a trickster can exploit. But we *can* see through them if given the chance, and that is sufficient to make humans "perfect understanders" in the sense with which we are here concerned.


That Nathan thinks my last note is an evasion is symptomatic of our ongoing failure to communicate. I was attempting to draw attention to the implications of the cognitive scientist's background assumption that we are all always changing. The algorithm that defines us is never quite the same from moment to moment, and will never be the same again. This is considered so obvious as to go without saying. With sufficient knowledge about our current state, a trickster can trick us now and perhaps even for the foreseeable future, but Dennett would agree without qualm that only someone omniscient could hope to trick us forever. Only a static or trivial algorithm would necessarily fail once and forever. So we 'amount to' perfect understanders in the familiar sense, but *not* for the purposes of Penrose's proof. That is to say, the imperfection of algorithms in some formal sense does not imply anything very interesting about what the capabilities of algorithms would really be.

Nathan nicely sums up why Penrose gets confused on this:
"a belief that humans are "perfect understanders" in this sense [of being not fundamentally limited or prone to error] (and obviously not in the sense that we never make mistakes due to laziness or distraction)-- is implicit in the way mathematicians do business."

Mathematics relentlessly idealizes our intuitions and formalizes our interactions with them -- indeed I would argue that is more or less its raison d'être. As any modern cognitive researcher, or indeed philosopher of mind, would agree, however, the idealizations don't seem to be very much like what's actually going on introspectively. Nathan seems to take that as supporting his position that we can't be algorithms, but I don't see how. As mentioned earlier, algorithms can also have judgments that look a lot like intuitions, and they can be similarly reliable. There's no obvious reason why an algorithm can't introspectively "see" that something must be true and even use a formalization of that introspection. To a trivial extent that's already true of expert systems that arrive at a reliable judgment but have no access to just how the judgment came about. Once again, Nathan could follow some very respected philosophers in disputing whether these intuitions and judgments are veridical, but Penrose was offering a functionalist objection that only makes sense if one frames the whole question incorrectly. Penrose superposes the idealization of mathematics back over us* and of course comes up with a weird answer. Nathan has things backwards if he thinks that's what Dennett is trying to do.

*Our mistakes are, in this view, special cases in which we don't fully engage our perfect understanding. A better view is where we construct from fairly basic (reliable) intuitions an arbitrarily-perfect understanding.
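A toy illustration of the point about judgments without introspective access (my own sketch in Python, not anything from the thread or from an actual expert system): a trivial nearest-centroid classifier delivers a confident verdict, yet nothing in its output exposes the internal comparisons that produced it.

```python
# Illustrative sketch: an algorithm whose output resembles an "intuition" --
# a reliable judgment with no introspective access to how it was reached.
import math

def train(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for feats, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def judge(centroids, feats):
    """Return (label, confidence); the caller sees only the verdict."""
    dists = {lab: math.dist(feats, c) for lab, c in centroids.items()}
    best = min(dists, key=dists.get)
    worst = max(dists.values())
    confidence = 1.0 - dists[best] / worst if worst else 1.0
    return best, confidence

model = train([((0, 0), "safe"), ((1, 1), "safe"),
               ((8, 9), "risky"), ((9, 8), "risky")])
label, conf = judge(model, (8.5, 8.5))
# `label` and `conf` carry the judgment, but not the reasons behind it.
```

The names and the "safe"/"risky" labels are invented for the example; the point is only that a verdict can be reliable while remaining opaque to the system that issues it.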

Nathan Smith

re: "I was attempting to draw attention to the implications of the cognitive scientist's background assumption that we are all always changing. The algorithm that defines us is never quite the same from moment to moment, and will never be the same again. This is considered so obvious as to go without saying."

I wonder whether a computer scientist would even think the term "algorithm" could be an appropriate term for any entity that is "always changing." I am inclined to think that the claim that the brain is a bundle of algorithms, yet is always changing-- not that the data the algorithm works with are changing, but that the algorithm itself is changing-- is an illegitimate move. Of course, it would be one thing if there were a sort of "genetic drift" in the algorithm, little random changes that might help, and presumably would usually hurt. But that's not what we're talking about. We're talking about changes in the mind with a very strong tendency to move it in the direction of truth. An evolutionist could (try to) account for that in the very long run by saying that bad mutations in the algorithm are killed off by natural selection, but that won't do to explain learning by individuals. To account for learning by individuals some meta-algorithm would be needed that can manipulate the regular algorithms. But then that meta-algorithm would have to do the tasks that a human mind can do, and Godel's Theorem shows that an algorithm can't do what mathematics assumes the human mind can do.

If we're to think of the brain as a bundle of algorithms, it seems to me we really have to think of it as a bundle of genetically predetermined algorithms, though of course they can have all sorts of on-off switches and if-then clauses and whatnot that enable it to accomplish a good deal of practical learning. If cognitive scientists "consider it so obvious [that it can] go without saying" that the algorithm changes, they are very much mistaken. It would need to be brought out into the open, explained, and defended, and it seems clear beforehand that this could not be done successfully.


Nathan has identified an important equivocation in the ordinary materialist view of the mind. The whole thing is describable in a single equation because it's matter, and matter is exhaustively describable that way*. This isn't a very interesting equation, however, since it's all physics and doesn't really get at what's special about *this* lump of matter. For that, one must look to the algorithmically-describable information-processing topology of the brain. This is the ordinary "bottom" level of mental description. But a neuron obviously doesn't have much in the way of consciousness about it, so the really interesting parts are, indeed, in bundles of algorithms, just as Nathan says. Where he goes off the rails is when he starts talking about biological natural selection in regard to modifying these algorithmic bundles. Presumably he knows how one neural net can train another, and how synaptic connections change in response to activity. The main search in cognitive science is to discover how this self-editing system of algorithms is put together. When Nathan says things like "it seems clear beforehand that this could not be done successfully," he certainly expresses a conviction, but nothing going before it indicates that this is an *informed* conviction. It is so "brought out into the open," so "explained and defended," that I can only believe Nathan has not read any actual cognitive science. The alternative is to think he is willfully misreading it for his intellectual convenience.

*Though I should note for completeness that even this changes as the matter of the brain exchanges with the matter of the Universe at large
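The idea that activity itself modifies the connections can be sketched in a few lines (a minimal Hebbian-style update of my own devising, in Python; real models are far richer): weights grow where pre- and post-synaptic activity coincide, so the processing structure is reshaped by the data flowing through it.

```python
# Minimal Hebbian-style sketch (illustrative only): connection weights
# change as a function of correlated activity, so the information-processing
# "topology" is itself modified by use.
def step(weights, pre, post, rate=0.1):
    """Strengthen each connection in proportion to co-activity."""
    return [[w + rate * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

w0 = [[0.0, 0.0],
      [0.0, 0.0]]                       # two inputs, two outputs, no links yet
w1 = step(w0, pre=[1.0, 0.0], post=[1.0, 0.0])
# Only the co-active pair's connection strengthens; the rest stay at zero.
```

One net "training" another is, on this picture, just one set of units driving the activity that reshapes another set's weights.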

Nathan Smith

If "one neural net can train another," aren't the two of them just one big neural net? Well, maybe not: it depends on how the neural nets are defined. But "algorithm" is a big and vague enough word that I think if a bundle of algorithms is self-training, i.e., if some parts of an algorithm train others, it's best to just define the algorithm to include all these sub-algorithms and to deny that the algorithm as a whole changes. It learns-- it takes in new data-- but it doesn't change.

Of course, the algorithms in our brains-- some of the brain's functions can be described algorithmically-- do change, but that's because they don't comprise the mind but are constructed by it as tools.


I think Nathan is imagining the human algorithm proposed by cognitive scientists to be an elaborate state machine. In some trivial sense this must be true: any closed computational system must have a finite number of possible states into which it can be driven. But this isn't a very worrying limitation. We could, after all, say the same of the Universe: there's a finite number of possible arrangements for all the particles in finite space, so that each electron (for example) has only ~10^129 possible positions per cubic meter, and there are only so many cubic meters in the Universe*. A kilobit of memory can have ~10^300 possible configurations, so a small artificial intelligence project like IBM's Blue Brain could be regarded as a state machine with ~10^12,000,000 possible states. Scaled to that of a human brain, the number of possible states could easily be on the order of 10^12,000,000,000,000.
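The arithmetic behind figures like these is simple exponentiation: n two-state elements admit 2^n configurations, which is about 10^(0.301 n). A quick check (in Python, my own sketch):

```python
import math

def log10_states(n_bits):
    """2**n_bits distinct states, expressed as a power of ten."""
    return n_bits * math.log10(2)

# A kilobit of memory: about 10^301 possible configurations.
kilobit = round(log10_states(1_000))

# A billion bits: about 10^301,000,000 configurations.
gigabit = round(log10_states(1_000_000_000))
```

The exponents grow linearly in the number of bits, which is why the state counts become astronomical so quickly.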

Seriously, though, everything from what is connected to what to how excitable a particular neuron is can be changed by events within the wider network. Not only does the data change, the information processing topology itself changes. This isn't an incidental feature, either: the genes that lay out the gross topology of the infant brain don't have nearly enough information in them to encode anything very sophisticated. Instead, we begin with a rough template that goes nowhere until it encounters the world, at which time it starts refining itself rapidly into functional units. Animals with narrower architectures resolve quickly into the ordinary animal instincts we all know, while humans' very open plan takes a very long time to collapse into the full enduring personality**.

An algorithm it may be, but one so fantastically complex and capable of taking on so many different forms that intuitions regarding the limitations of familiar algorithms may be misleading. There may be a trivial sense in which each of us is an algorithm with a static set of available states, but this is the same sense in which a painting never changes because the same colored molecules merely change their arrangements. It's not so much incorrect as a profoundly misleading way of looking at it.

*Assuming the theories currently dominant in physics are correct.

**Though elements of "proto-personality" may show up in only a few weeks.

Nathan Smith

Nato actually concedes my point, but dismisses it as "trivial." But (a) it is not trivial if it follows that Godel's Theorem would apply to the mind inasmuch as the mind is an algorithm; in that case, if we do not believe that the incompleteness theorem can apply to the human mind-- and I repeat that to say it *does* apply leads to radical skepticism, since once we abandon the innate conviction that our logico-mathematical faculty apprehends truth we can't get back to it-- then we must deny that the mind consists of algorithms. And (b) to the extent that I was simply offering a *definition*, the charge of "triviality" is neither here nor there. To say "Not only does the data change, the information processing topology itself changes" is to create a false sense of mystery. In any algorithm that contains if/then statements, certain changes in data will change the flow of the algorithm; that is, if you like, they will change "the information processing topology." We can nonetheless quite easily understand the algorithm/data distinction. I agree, of course, that the algorithms of the mind do change, because I think there is more to the mind than algorithms, and algorithms-- that is, mnemonics, shortcuts, methods, procedures, habits, etc.-- are created by the mind by means of faculties and for purposes that transcend algorithmic computation. For those who deny that there is more to the mind than algorithms, I think it is illegitimate to speak of the algorithm changing as a means of escaping unwelcome consequences of this reductionist view of mind.
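The if/then point is easy to exhibit (a Python sketch of my own, not anything from the thread): the code below never changes, yet different data sends execution down different branches, and data accumulated earlier can redirect later flow.

```python
# A fixed algorithm whose "information processing topology" is steered
# entirely by data: the code is static; only the inputs and stored state vary.
def make_judge():
    state = {"threshold": 5.0}          # data, not algorithm

    def observe(x):
        # Learning here is just a data update; no code is rewritten.
        state["threshold"] = 0.5 * state["threshold"] + 0.5 * x

    def judge(x):
        # Which branch executes depends on the data seen so far.
        if x < state["threshold"]:
            return "below"
        return "at-or-above"

    return observe, judge

observe, judge = make_judge()
first = judge(4)        # 4 < 5.0, so the if-branch runs
observe(0)              # threshold drifts down to 2.5
second = judge(4)       # same input, different branch: 4 >= 2.5
```

Whether one calls the changed behavior a "changed algorithm" or merely "changed data" is, as the exchange above shows, exactly where the two sides part ways.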

Don't be impressed by the big numbers, e.g., 10^12,000,000 etc. Here's something simple that a computer can't do: represent 2/3 with perfect accuracy. You, human reader, *can.* It's easy: 0.666..., that is, in words, "zero point six repeating." But "repeating" means "repeating to infinity." Our minds can grasp concepts of infinity without too much difficulty, but a computer is incorrigibly finite. It has to represent 2/3 as the binary equivalent of something like 0.666666666666667: accurate to many decimal places, but ultimately rounded off. If we made a computer that represented 2/3 by 10^12,000,000 sixes after the decimal point before finishing with a 7, it would still be less accurate than your humble human mind, dear reader. Computers are prisoners of finitude; we are not.
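For what it's worth, the rounding Nathan describes is easy to exhibit in binary floating point (a Python illustration; the exact digits depend on the float width, and note that symbolic rational types store 2/3 as the pair of integers rather than as a decimal expansion):

```python
# A binary double holds only a rounded approximation of 2/3.
from fractions import Fraction

x = 2 / 3
print(repr(x))                 # finitely many digits, e.g. 0.6666666666666666

# The float's exact value is a finite binary fraction, not 2/3:
exact = Fraction(x)            # numerator/denominator of the stored double
print(exact == Fraction(2, 3)) # False -- off in the last bit
```

A `Fraction(2, 3)` sidesteps the rounding by storing the numerator and denominator exactly, though of course any *decimal expansion* a finite machine prints must still be truncated somewhere.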
