
February 16, 2007

Comments

Nato

Ah, well, I think humans are pretty neat and am a fairly hardcore humanist, enough that I'm not even quite a transhumanist. Certainly we can become one with computers, but I think there are pretty good reasons for the instinct and emotion stuff and probably computers will either end up having the same, more or less, or not be very smart at all. That's a long discussion, though.

froclown

Well, my view is that life is a sort of symbiotic relationship in which information mediates itself via evolution, not just in biology but in physical chemistry generally. The nature of this information entity is to develop itself in complexity, and thus material structures become more and more complex, and these in turn create more complex objects and affect the environment in more complex ways.

Inherently, the human being is a computer on which this information expresses itself as software, though it's a software that is inseparable from the hardware.

In any event, as "nature" generally creates the environment and organisms must then be fitted to that environment, in our case we have created the city, which is a post-natural environment of technology. Thus we must ourselves be fitted to that environment.

As we have shaped the earth, now we must build new bodies to manifest this information. A caterpillar must die so that a butterfly can live. Our instincts are fit for apes living on the savanna; those days are over. The computers have been created, our new home awaits us in cyberspace, humanity has performed its function, and now it's time to rest.

I don't propose that we become more than human any more than I would propose that a chimp can become a man. The apes all died and man took their place in the sun. I say it's time for humans to die; they cannot go further. It's time for a new and superior information medium to take our place.

For that to happen we must integrate our biology (the record of the highest expression of time-binding) with the new technology of the internet, until there is no separation between organic minds and computer circuitry.

Nato

I'm not sure how you derive this apparent loyalty to pure information. Why should you - much less anyone else - worry about the next level of informational manifestation? Will it make people happy? If it doesn't cause any sort of positive emotion, how are we to place any value on it at all?

froclown

I don't place value on "being happy" as humans are mere cogs and specks in the cosmic progression.

If we are happy or sad, in pain or pleasure, it matters not a blink of an eye in the eternal driving of the cosmos.

We humans are but cells in the greater cosmos. Cells blink in and out of life constantly; do you hold a funeral for each one of them? Do you stop your life to concern yourself with how each one of them feels?

No, because they are not important in that way. That they perform their functions so that our bodies can perform theirs is what matters.

What matters is not whether people are happy; it's that humans perform their functions properly and pave the way for the next manifestations. That which is smaller exists for that which is larger. It exists for itself only insofar as happiness and pleasure, as well as pain avoidance, keep us alive and functioning. Without these we might all do stupid things and die, and thus fail to complete our function. But happiness is not in and of itself a worthy goal; if it were, a drug-induced stupor would be the best life. We have discovered it is not, because the opium addict, though enraptured in bliss, is not functional, and thus his life is not desirable.

Nato

It would seem nothing matters unless we decide it does - from what other perspective can one justify value ascription? Why should anyone care more about the eternal driving of the cosmos than, say, the continued propensity for a rock to sit on the ground? There just doesn't appear to be a motivation to project (our oh-so-human) normative reasoning onto the cosmic perspective.

And it's true that the dead bliss of a smackhead isn't a happiness worth wanting, but I think we can imagine more sophisticated, meaningful hedonic goods than chemical euphoria.

froclown

Indeed, but are not all of our emotions and experiences properly used in service to higher principles? The problem with the smackhead, as you say, is not his use of the drug but his addiction to it, that is, his reverence for the chemical euphoria in place of devotion to his natural purpose, his "true WILL" being sublimated by his heroin use as an end in itself. (Perhaps heroin may numb his painful cancer, allowing him to do his job; in this case we feel less contempt for the junkie and more a tendency to pity him, as his drug use is not an excuse to drop out of the common good, he is not a drain on our economy, etc.) Are there cases, such as Holmes's use of cocaine, where drug use is beneficial due to its enhancing of certain skills of observation, energy, or creativity? Thus the value of the drug is not the euphoria in itself; it is the potential use of the drugged state to perform another, less hedonistic task.

Likewise, happiness is not to be taken as an end in itself, but as an indication that one is on the right track. That is, happiness is a chemical reward (a reinforcer, to use the behaviorist's term) which is the natural result of proper action in one's proper sphere of influence.

However, just as we can fool our biology with illusionists' tricks, so too can we fool our internal reward system with drugs, false hopes, delusions, and other means.

If hedonist pleasure is taken as an end in itself (as the Epicureans take it), then we create a sort of short circuit, and as in an electrical circuit, if the power is misdirected the mechanism fails to function correctly.

What if we put a drug or hypnotic spell on all the animals in nature so that they feel pleasure and happiness all the time? What is their motivation to do anything? They will not eat, they will not mate, they will not build nests, or do anything. The whole ecosystem would fail.

Thus, every animal has a complex and dynamic purpose interwoven into the web of life, and the same is true for human beings. No one would say that the purpose of, say, a wolf is to be happy. No, rather the function of a wolf is to eat rabbits, fertilize the grass, and perform other such acts which preserve the balanced homeostasis of nature.

The wolf may very well like to be happy and free of pain, hunger, and fear all of its life. However, part of being a wolf is suffering, hunger, fear, and for some wolves violent death at the hands of farmers.

One's natural function is not necessarily pleasant for oneself; however, for that part of oneself that is intertwined with the information matrix, even the death of that tiny star that is one's whole being is not a loss. Though you and I suffer and feel pain, that is but the shadow of a higher being, the cosmos itself, which is continuous.

As the death of cells, if they perform their function properly, is in service to our body, so too are our suffering and death in service to a higher aspect of oneself. Thus, suffering is an illusion; it is the shadow of a greater joy. All that is transitory are but shadows; they pass and are done, but that which is behind them (information) is continuous and comes not into being nor out of being, but is the ground of beings, or as Heidegger called it, the "BEING of beings."

Or the Goddess Nuit, as it is personified in Liber AL vel Legis.

Thomas

I'm back! My mother-in-law came to visit for a week and after that Nicole and I went camping for a week, but I really do intend to write something lengthy to add to the discussion. For now I will simply try to relieve some apparent confusion on Nathanael's part.


From Wikipedia on the uncertainty principle:
“In quantum physics, the Heisenberg uncertainty principle is a mathematical limit on the accuracy with which it is possible to measure everything there is to know about a physical system. In its simplest form, it applies to the position and momentum of a single particle, and implies that if we continue increasing the accuracy with which one of these is measured, there will come a point at which the other must be measured with less accuracy.”

The Heisenberg Uncertainty Principle does *NOT* assert that the universe has some sort of random component. It merely asserts that there are limits on how precisely conjugate pairs of physical properties, such as position and momentum, can be simultaneously measured.


From Wikipedia on chaos theory:
“Systems that exhibit mathematical chaos are deterministic and thus orderly in some sense; this technical use of the word chaos is at odds with common parlance, which suggests complete disorder. ... As well as being orderly in the sense of being deterministic, chaotic systems usually have well defined statistics. For example, the Lorenz system is chaotic, but has a clearly defined structure. Weather is chaotic, but its statistics—climate—are not.”

Chaos Theory is *NOT* non-deterministic. It is a deterministic mathematical paradigm used to model systems that merely appear non-deterministic.
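A quick sketch (my own, not from the thread) makes this point concrete. The Lorenz system below is fully deterministic: the same initial condition always produces exactly the same trajectory. What earns it the label "chaotic" is sensitive dependence on initial conditions, so a tiny perturbation yields a wildly different trajectory, which can look random even though no randomness is involved.

```python
# Forward-Euler integration of the Lorenz system (an illustrative sketch;
# the step size and parameters are the standard textbook choices).

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(initial, steps=3000):
    """Iterate the map; the same input always gives the same output."""
    state = initial
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0, 1.0, 1.0))          # identical start
c = trajectory((1.0, 1.0, 1.0 + 1e-6))  # perturbed by one part in a million

assert a == b  # deterministic: identical inputs, identical outputs
assert a != c  # chaotic: the tiny perturbation ends up somewhere else
```

The statistics of the chaotic trajectory are nonetheless well defined, which is the sense in which, as the quote puts it, weather is chaotic but climate is not.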


From a Scientific American article on randomness and mathematical proof, http://www.cs.auckland.ac.nz/CDMTCS/chaitin/sciamer.html :
"[Consider two sequences of binary digits]: 01010101010101010101 and 01101100110111100010. The first could be specified to a computer by a very simple algorithm, such as ``Print 01 ten times.'' If the series were extended according to the same rule, the algorithm would have to be only slightly larger; it might be made to read, for example, ``Print 01 a million times.'' The number of bits in such an algorithm is a small fraction of the number of bits in the series it specifies, and as the series grows larger the size of the program increases at a much slower rate. For the second series of digits there is no corresponding shortcut. The most economical way to express the series is to write it out in full, and the shortest algorithm for introducing the series into a computer would be ``Print 01101100110111100010.'' If the series were much larger (but still apparently patternless), the algorithm would have to be expanded to the corresponding size. This ``incompressibility'' is a property of all random numbers; indeed, we can proceed directly to define randomness in terms of incompressibility: A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself.
... ...
Solomonoff represented a scientist's observations as a series of binary digits. The scientist seeks to explain these observations through a theory, which can be regarded as an algorithm capable of generating the series and extending it, that is, predicting future observations. For any given series of observations there are always several competing theories, and the scientist must choose among them. The model demands that the smallest algorithm, the one consisting of the fewest bits, be selected. Stated another way, this rule is the familiar formulation of Occam's razor: Given differing theories of apparently equal merit, the simplest is to be preferred. Thus in the Solomonoff model a theory that enables one to understand a series of observations is seen as a small computer program that reproduces the observations and makes predictions about possible future observations. The smaller the program, the more comprehensive the theory and the greater the degree of understanding. Observations that are random cannot be reproduced by a small program and therefore cannot be explained by a theory. In addition the future behavior of a random system cannot be predicted. For random data the most compact way for the scientist to communicate his observations is for him to publish them in their entirety.
... ...
The minimal program is closely related to another fundamental concept in the algorithmic theory of randomness: the concept of complexity. The complexity of a series of digits is the number of bits that must be put into a computing machine in order to obtain the original series as output. The complexity is therefore equal to the size in bits of the minimal programs of the series. Having introduced this concept, we can now restate our definition of randomness in more rigorous terms: A random series of digits is one whose complexity is approximately equal to its size in bits.
... ...
Three paradoxes delimit what can be proved. The first, devised by Bertrand Russell, indicated that informal reasoning in mathematics can yield contradictions, and it led to the creation of formal systems. The second, attributed to Epimenides, was adapted by Gödel to show that even within a formal system there are true statements that are unprovable. The third leads to the demonstration that a specific number cannot be proved random.
* Russell Paradox—Consider the set of all sets that are not members of themselves. Is this set a member of itself?
* Epimenides Paradox—Consider this statement: ``This statement is false.'' Is this statement true?
* Berry Paradox—Consider this sentence: ``Find the smallest positive integer which to be specified requires more characters than there are in this sentence.'' Does this sentence specify a positive integer?"

The whole article is pretty interesting, and I feel it covers some modes of reasoning that are salient to the discussion. To be honest, though, it seems to me that the definitions of 'randomness' and 'complexity' are somewhat arbitrary (a point that is actually made in the larger article). My intuition tells me that logic and mathematics cannot be used to represent randomness, for they are in the business of predicting outcomes and producing reliable results, whereas randomness is inherently unpredictable and is merely describable by probabilistic distributions.
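The incompressibility definition quoted above is easy to demonstrate in rough form. This sketch (my addition, not from the article) uses zlib's compressed size as a crude stand-in for the length of the "smallest algorithm": a patterned series collapses to a tiny description, while a patternless one of the same length barely compresses at all.

```python
import random
import zlib

# A highly patterned sequence: in effect, "print '01' ten thousand times".
patterned = b"01" * 10000  # 20,000 bytes

# An apparently patternless sequence of the same length. (Pseudo-random,
# so not truly random -- a seeded generator is itself a short program! --
# but zlib cannot find its pattern.)
rng = random.Random(0)
patternless = bytes(rng.getrandbits(8) for _ in range(20000))

patterned_size = len(zlib.compress(patterned))
patternless_size = len(zlib.compress(patternless))

assert patterned_size < 200       # shrinks to a tiny fraction of its size
assert patternless_size > 19000   # stays roughly as large as the original
```

Compressed size only upper-bounds the true algorithmic complexity; Kolmogorov complexity itself is uncomputable, which is what the Berry-paradox argument in the article establishes.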


Nathan said:
“Do scientists work for free, then? For that matter, do they even take vows of poverty like many priests and monks?”

The answer is yes and yes. There are plenty of scientists who work on their own research without getting paid. Even the most prominent minds, such as Einstein and Newton, did vast amounts of research without pay. There have even been cases of scientists and thinkers taking vows of poverty for non-religious reasons, the most recent prominent example being the Russian mathematician Grigori Perelman, who famously turned down the Millennium Prize for solving the Poincaré Conjecture and who refuses to receive any compensation for his work (he currently lives a spartan lifestyle at his mother's house).


There are so many points for me to contend and so little time. A more substantial post will follow soon, I promise. It might be too long and off-topic to post as a comment, so when I write it, I will probably post it on my blog and then post a link to it as a comment.

froclown

Ok, I got it now.

When you go see a movie, the camera has film in it, and that film contains whatever the camera was pointed at. No matter where you point the camera, you will never get an image of the film or the internal workings of the camera onto the film. If you did manage to get the film on film, it would not be the film itself, just an image of the film, and it would be an infinite regress; the camera can never come to project its own film on the screen, and the movie never contains an image of the camera's film.

However, no matter what image is projected on the screen, that image is on the camera's film; thus there is no way not to project the film onto the movie screen, unless you turn off the projector and project nothing.

Now, it is possible to open one camera and point another camera at the film in the first camera, then you can capture the first camera's film on the film of the second camera, and you can project the first camera's film as a movie image. (however, this movie image is printed on the second camera's film).

This is exactly how first and third person perspectives work with the brain.

As a first person, I cannot see my own brain and how it works; I can only see what is printed on my brain. If I look at my brain in a mirror, for example, it's just an image impressed on my brain that refers to my brain; I don't actually experience my brain's workings themselves this way. However, everything I see, touch, taste, smell, think, etc., is the workings of my brain.

Now if you cut open my head, you can observe the workings of my brain; you can see how every time I say I see blue, the blue nerve pathway lights up, etc. However, the fact that you can see my brain, recorded on your brain, does not mean that you have escaped your own first-person perspective.

However, another person's first-person perspective is alien to, and third-person to, my perspective. I cannot get out of the box to perceive my own brain, but anyone else is already outside of my box and can perceive my brain.

Just as any camera can perceive the film and internal workings of any other camera, but no camera can perceive the internal workings of itself.

(Even if we assume a camera that looks in on itself, the part of the workings that is doing the introspecting is part of the camera as a whole, and it can't introspect on that part. It would require an infinite regression of introspective lenses, and there would always be one higher-order part left out. However, two different cameras can each perceive the whole inner workings of the other, each from its own perspective, or vantage point.)

I can see your brain and you can see mine, but no one can see his own brain, you dig?

Nathan Smith

"I can see your brain and you can see mine, but no one can see his own brain, you dig?"

First, it seems that if you cut open my head, it would be easy to set up a system of mirrors or video cameras to allow me to see my own brain just as well as anyone else can see it.

Second, while we can see each other's brains, what we can learn about what's going on in the mind by looking at the brain is partial. At any rate, it's partial in practice. It may also be limited by the nature of the mind if, for example, the mind is non-supervenient. We just don't know.

froclown

If you aim a camera at a mirror, you see the image of the camera, and you see the image of the image of the camera, and so on to infinity.

The original camera itself is not seen in the mirror. The original camera is "the alpha and omega" of the infinite series of images; that is, it's the source of those images, the medium on which those images exist.

Those images are only impressions made on the camera's film, not the camera itself.

The same is true if we look at our own brain in a mirror, or via an MRI or EEG, etc. We can only see impressions of our brain, but those impressions are part of our brain as well. Thus, to have a true image of our brain, we would need the impression of the impressions of the impressions, ad infinitum, of the brain; and as such our own brain is not found in those impressions, but is rather the unperceived substance on which those impressions are imprinted.

In a sense our brain is everything we know, it's the alpha and the omega of our universe. We are aware of the impressed images, but not of the stuff on which those images are impressed.

However, our brains are capable of impressing, on other brains, images of the substance on which other impressions are made.

There is no "von Neumann catastrophe of infinite regression" when one complete system observes another complete system, as when one brain observes another; however, there is no avoiding this problem within a self-referential system of fractal redundancy, such as occurs with introspection.

There will always be that remainder left out of your equations, but this remainder you seek is accounted for by the substrata on which the impression is impressed, or as Heidegger called it, "the BEING of beings."

Val Larsen

Nato (and/or Froclown) appeals to Occam's razor as an argument for not retaining the unnecessary assumption of something beyond the brain (a soul) to account for human behavior and experience. But what are the grounds for an appeal to Occam's razor as a justification for one account of a phenomenon rather than another? It is not hard to understand why we might prefer a simpler account of a phenomenon over a more complicated account that explains the same facts. Simplicity has utility for a cognitive miser, and as finite beings, we are of necessity cognitive misers. But utility is not the same thing as truth, unless one is a thoroughgoing pragmatist. Other than utility, what grounds are there for preferring a simple account of a phenomenon that is adequate over a more complicated account that is equally adequate? I don't think any grounds exist. So in effect, when we deploy Occam's razor, we are choosing to have the reality we affirm be the reality that best conforms itself to our mental limitations. Freeman Dyson has speculated that there may never be a grand unification theory of the universe because we aren't smart enough ever to understand the universe as it really is.

In the case at hand, the simpler account is not fully adequate. A purely physicalist account of the human mind can never validate our intuition that moral mandates are real. If we are nothing but a brain, a series of biochemical reactions, Nietzsche was right that the moral law has no foundation, contrary to the firm intuition of millions. Is greater simplicity an adequate rationale for abandoning our intuition that the moral law is grounded? Nato admits on another thread that religion has utility in that it can facilitate impulse control that would otherwise not occur. So in this case, it isn't even clear that the simpler, purely physicalist account has greater utility. The cost of simplicity is extraordinarily high if the loss of belief in a grounded moral code is the price.

Nato

"Other than utility, what grounds are there for preferring a simple account of a phenomenon that is adequate over a more complicated account that is equally adequate. I don't think any grounds exist."

There are infinite logically possible explanations for almost any set of data. How are we to select one from another? By choosing the one that's really real? Good luck with that line of argument. By choosing the one that seems nicest to believe? Somehow that seems a recipe for disappointment. If Occam's razor provides an objectively superior belief from the standpoint of one's dataset, then that's pretty darned good. After all, to what other dataset do we have (direct) access?

"A purely physicalist account of the human mind can never validate our intuition that moral mandates are real"

I'll see your asseveration and raise you one: My intuition that moral mandates are real is validated by my purely physicalist account. Even if I'm wrong, I can correct my account (while remaining just as physicalist) so that it *does* validate the moral intuition.

froclown

I'm not using Occam's razor, which only applies when you have two models which represent the same event with no clear indication of which one is better.

What I am doing is no different than disproving the absurd claim that there are little men singing in the radio.

If you can show the mechanical process by which every note that is heard in the radio is accounted for, without employing tiny men, then not only is there no need for the tiny men theory, there is no room for it.

What is left for the tiny men to do? And why has no one ever seen a single 3-inch-tall man?

Thus, I dismiss supernaturalism, not because of Occam's Razor but because we can slice open the brain and observe the process by which everything the brain does is accounted for, and we do not see a single supernatural or non-physical mechanism nor is one even hinted at in the structure of the brain.

I mean, it's like if I told you that when your Mom makes toast and jelly, she waves a magick wand over it that makes it good, whereas I hate all other jelly toast. This is because your mom infused the bread with a mysterious quality that is not the bread itself, nor is it a force or a particle or anything that can be known. I have faith that it is there.

Now what if you dissect the bread and find nothing but flour in it, no magick? And you trick me with a blind taste test and I can't tell the difference. Still I demand that the magick only works by going into my eyes; they have to be open for it to work.

Then you find that your mother uses an expensive bread made from rare flour, and thus you say, this flour is the magick you speak of. And I say, no, because flour is an ingredient and the magick is special; it's not an ingredient in the toast.

But you see, there is no reason to believe in this magick, whatever it is, that makes toast good, especially when it's shown that the only toast I like is made with the expensive flour.

There is no room in the toast for bullshit magick stuff. There is likewise no room in your head for such nonsense.

It's just a gay fairy story that people tell themselves to feel special.
If you want to believe in supernatural hokum, you might as well go prance around calling yourself a princess and go cry to your mommy so she can heal your "boo-boos" with her magick kiss.

Nato

Froclown, it's hard to take you seriously after that last paragraph.

froclown

Why? I don't see how belief in God is any different from Plato's world of forms, or from the astral plane, that is, the idea that dreams take place in another, non-material world.

All these notions are things of the world of make-believe, and if you fall for fairy tales like these, you might as well never age beyond 4 years old.

You can have your Santa Claus and eat it too.

Nathan Smith

re: "we can slice open the brain and observe the process by which everything the brain does is accounted for..."

Just for the record, the statement quoted is completely false. Any cognitive scientist will tell you that much remains to be explained in the operations of the mind/brain. That's enough to totally deflate froclown's argument.

Val Larsen

Froclown,
You discredit your argument by lumping together mother's magic wand, Santa Claus, and Plato's world of forms. Plato's realism is a perfectly respectable rational account of things like our concept of a triangle, which has no perfect real world instantiation and yet exists as an intelligible concept. What is this concept's referent? Plato has an answer. Do you?

Val Larsen

Nato writes:
There are infinite logically possible explanations for almost any set of data. How are we to select one from another? By choosing the one that's really real? Good luck with that line of argument. By choosing the one that seems nicest to believe? Somehow that seems a recipe for disappointment. If Occam's razor provides an objectively superior belief from the standpoint of one's dataset, then that's pretty darned good. After all, to what other dataset do we have (direct) access?

Statement 1:
There are infinite logically possible explanations for almost any set of data.

This I accept. In affirming it, you accept a premise of my argument--that there is more than one adequate account of a phenomenon.

Statement 2:
How are we to select one from another?

This is the critical question. My answer is that, in the end, our choice will hinge on faith and taste, not on some objective, rational principle.

Statement 3:
By choosing the one that's really real? Good luck with that line of argument.

I didn't say anything in my original post about how I would choose my "truth." More on that in a moment. But I like the force of your rhetorical question. It implies that there is no rational basis for choosing the really real. I agree with that. That was part of what I was trying to show by calling Occam's Razor into question as an objective standard.

Statement 4:
By choosing the one that seems nicest to believe? Somehow that seems a recipe for disappointment.

Please explain, Nato, why this isn't what you are doing, because that was the force of my original argument against making an appeal to Occam's Razor. For finite beings with limited cognitive capacity, the simplest explanation that fits the facts is the "nicest to believe" because it is the easiest to understand and use. Can you give other grounds for using it to define our accepted truth beyond its utility for creatures with limited minds? If not, then an appeal to it is not dispositive, and that was the main point of my post.

Statement 5:
If Occam's razor provides an objectively superior belief from the standpoint of one's dataset, then that's pretty darned good. After all, to what other dataset do we have (direct) access?

Okay, here is the critical question posed by my original post that you don't explicitly address: "objectively superior" in what sense? If you are saying it is "objectively superior" because it is more adapted to our finite minds, I agree, but again, that makes its use a matter of taste that someone with a Baroque sensibility can rationally reject, since they prefer the more ornate explanation. Can you make explicit the grounds of your "objectively superior" claim?

Val Larsen

Now, what grounds do I have for the truth I affirm? If, as I argued above, we have no objective way of choosing among the infinite set of adequate explanations of a phenomenon, how can we choose what to believe? I have two answers: revelation and tradition. While, as Freeman Dyson speculated and I have argued, a finite mind cannot know the universe as it really is, the infinite mind of God can, if He exists. Thus, the only plausible path for living as a realist, as one whose manner of living is in harmony with the universe as it truly is, is revelation received from an infinite mind which has access to that reality as we never can. In effect, our understanding of truth must be mediated by the understanding of an infinite mind. As finite beings, we can never know for ourselves that the revealed truth (if there is one) is, in fact, an adequate account of the universe as it really is, but we can know that revelation is our only hope of living in harmony with things as they really are. And it is rational for a person to make Pascal's wager and take the leap of faith that is required to accept revelation as a guide for one's life. I'll tackle tradition in the next post.

Val Larsen

Let me now explain why I think tradition is a reliable guide for ordering one's life. My position is basically Hayekian and Burkean. Given the finitude of our minds, we can never create for ourselves, whole cloth, a mode and philosophy of life that is valid. If the many brilliant minds at GOSPLAN couldn't get prices right in the Soviet Union, there is little reason to think that one individual, no matter how brilliant, can discover in one lifetime the principles that should guide one in living a fulfilling life, a more difficult problem than getting a price right. As in matters economic, so in matters ethical and cultural: we need the power of distributed intelligence, the hard-won wisdom of millions of lives lived, to arrive at a set of truths that are an adequate guide to living well. There are, of course, multiple traditions within which one can live well. But I think it nigh on impossible to live an optimally fulfilled life outside of a well-tested cultural and religious tradition.

Nato

Val, the most computationally tractable model is the simplest, something provable a priori from mathematics. I think we can all take finiteness as a given. If you want to predict the future or the past, then:
1) One's experiences either have some minimally stable relationship with the world, or our experiences provide no guide to the world.
2) Rules either stay static in some way, or all prediction is impossible.
3) The most effective way to yield expectations of the world is to apply the most computationally tractable model that systematizes our experiences.

Occam's Razor is, then, a portion of fundamental epistemic rules turned into a rule of thumb for a specific case.
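To make the rule-of-thumb reading concrete, here is a toy sketch (hypothetical data and models, purely illustrative): among candidate models that equally systemize the same observations, prefer the one with the shortest description.

```python
# Toy illustration of Occam's Razor as a tractability rule: among models
# that fit the observations equally well, prefer the one with the
# shortest description. (Hypothetical example; names are mine.)

observations = [(0, 1), (1, 3), (2, 5), (3, 7)]  # (x, y) pairs

def fits(model, data):
    """A model 'systemizes our experiences' if it predicts every observation."""
    return all(model(x) == y for x, y in data)

# Two candidate rules, both consistent with the data:
simple  = lambda x: 2 * x + 1                 # short description
baroque = lambda x: 2 * x + 1 + 0 * (x ** 5)  # needlessly complex, same predictions

candidates = [(2, simple), (6, baroque)]      # (description length, model)

# The razor: among adequate models, take the most tractable one.
best_len, best = min(c for c in candidates if fits(c[1], observations))
```

Nothing here settles which model is True; it only says which adequate model a finite predictor should reach for first.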

I don't see how accepting revelation really advances the case, since then we've added an additional layer of epistemic conundrum: "Is this a revelation from God? How would I know?" Now, arguments from authority have a sort of place, if one can first confirm the authority is based on something (expertise in scientific questions, official status in legal ones), and one can secondly confirm that the authority in question has actually taken the position in question. If we are to do the same with God, well, we seem to be in a difficult position, since the existence and opinions of God have been matters of... some dispute, over the millennia.


Regarding tradition as a guide - this makes good sense, and interestingly Dennett would agree. That said, one should recognize the contingent nature of the histories generating traditions. I mean, which tradition? One of the currently extant ones, or a historical version? How would you choose between them*?

It would seem traditions offer sets of values that, for one reason or another (we don't necessarily have to understand exactly why), worked together in a stable manner when instantiated by humans, so that's good prima facie evidence that there's considerable wisdom to be gleaned. It seems clear, however, that taking tradition seriously is just a (powerful!) tool in our personal heuristic toolbox. We're still all ultimately responsible for working out the best moral system we can. Note this does not imply that there are no right answers - it's just that we each have to work that out based on our own experiences (i.e., we have no one else's experiences).


*This also, of course, applies to Pascal's wager - which leap of faith should one take? There are so many different religions. Perhaps I should be like the Yezidis and seek revelation from Satan's divine mind. They've been doing it for a very long time, and it seems to work for them, persecution by the Muslims aside.

Val Larsen

Nato,
The key terms in your lucid account of the grounds for accepting Occam's Razor are "tractable" and "effective." So your defense of this epistemic rule of thumb is pragmatic. It has practical utility. It appears there is no disagreement between us on this point, since my argument was that the rule has no ground but utility and, thus, is not dispositive when making a TRUTH claim (which is how I thought you were using it when arguing with Nathanel). It retains utility in making the more humble lower-case truth claims that pragmatists make, just for the nonce. Of course, by that standard, the help religion gives one in controlling impulses becomes evidence of the religion's truth. And with that, let me segue to your questions on faith: how do we answer the questions, "Is this a revelation from God? How would I know?" The answer, of course, is that if there is no God, one never could know. If there is one, He would have to reveal Himself to you. Most religious people (including me) affirm that there are other channels of knowing (in addition to reason) that give one the ability to tell whether something is or isn't of God. I'm confident that most if not all major faiths are grounded in one way or another in that mode of knowing and are, therefore, to one degree or another connected to reality (mediated through God's infinite mind) in ways that purely human knowledge never can be. Which brings me to your claim that "We're still all ultimately responsible for working out the best moral system we can." Religious people (and even Burkean conservatives) will tend to disagree with the plain sense of your claim. Both will hold that moral imperatives are given, not worked out by each individual. People may have to struggle with how what is given is to be applied in a particular case (though many will feel that in most cases that is given to them as well), but they do not have to work out those principles for themselves.
And since religious people presume that the moral law that is given to them is warranted by God, the law's "thou shalts" have force. I think Nietzsche correctly argued that, without God, moral laws lose all force. If you disagree, why? One last point. If all or most of our beliefs must ultimately be validated by their pragmatic utility, then belief in God and the practice of religion is strongly supported. (Oddly, its truth is affirmed even on Darwinian grounds, since in developed economies only the religious continue to replicate with sufficient fecundity to replace themselves.) Thus it is unsurprising that William James, the father of pragmatism, had so much respect for religion, which, at least in its major forms, appears to be pragmatically true.

Nato

Well, it seems to me that moral truths are either eternally true straight from logic or not moral truths at all. God declaring them is only notable in that God, being omniscient, knows them. Those portions of Christianity (or any other religion) that say that things are ultimately right and wrong *because* God said so are, from my point of view, amoral.

There's of course a great deal more to be said here, beginning with discussion of force-bearing moral law, but Western philosophy wasn't written in a single blog comment.

froclown

When I press the letters of the alphabet, I see the letters appear on my screen. Where are those letters stored? If I cut open my CPU, I do not see the letters. Where, then, are those letters? OH, they must exist in another dimension beyond space and time, in a magical fairy land of FORMS.

NO! They exist as stored patterns on computer chips, which send out different electrical pulses in response to different input pulses; those pulses are then converted into the image of letters by the structure of the LCD screen or the circuitry that drives the cathode ray tube.

There are no GHOST letters in the CPU, there is no world of Forms, there is no WORLD of Perfect Circles, and there is no Spirit that created material substances, infuses "intelligence," or guides evolution.

Intelligence = mechanical process of data manipulation.
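The stored-pattern point can be sketched in a few lines (an illustrative Python fragment; the variable names are mine, and ASCII/Unicode encoding stands in for whatever a particular chip actually uses):

```python
# The letter "A" is not a ghostly Form stored anywhere; it is a numeric
# pattern (here, the ASCII/Unicode code point 65) that display circuitry
# mechanically converts into a glyph on the screen.

key_press = "A"
stored_pattern = ord(key_press)           # what the chip holds: the number 65
as_bits = format(stored_pattern, "08b")   # the pulse pattern: '01000001'

# "Rendering" is just another mechanical conversion of that same pattern:
rendered = chr(stored_pattern)            # back to the glyph the screen draws
```

At no point does a letter exist anywhere except as a pattern and the machinery that transforms it.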

Nato

Well, I would of course just take the position that biology *is* machinery.
