Archive for November, 2012

I no longer experience words as connecting us to anything essential. I see them as merely events in a language game.

Read Full Post »

Prevalence is a measure of the proportion of people meeting criteria for a given condition in a specific time period. Lifetime prevalence is the proportion of people in a given population who will have the condition at some point in their lives. For example, the lifetime prevalence of asthma in the US is about 13%, which means that 13% of the US population will have asthma at one point or another in their lifetime. Let’s compare that number to the figures for “mental illness.” At some point in their lives, 46% of adults will meet criteria for a psychiatric diagnosis, 30% of adults will meet criteria for alcoholism, and 40-50% of adults will get divorced.

Now, definitions of what constitutes an “illness” certainly differ, and most of the definitions that I looked up in Webster’s relating to health, illness, sickness or disease emphasized a functional definition. That is, they defined health as a practical, working capacity to do things and illness as a loss of this capacity. However, when one looks up “disorder” one finds this: “an abnormal physical or mental condition.” The concept of normality, which is a statistical concept, is in turn defined as “conforming to a type, standard or regular pattern.” Now I ask you, what sort of regular pattern are we seeing when 30-50% of the population fails to conform to it at some point in their lives?

Of course, if you look around you right now you won’t see 50% of the population suffering from “mental illness,” and so it may indeed look like emotional suffering is not so statistically common. But if we take a larger, high-altitude view, our species is suffering from a hell of a lot more emotional disorder than is commonly understood. This is the myth of isolation, which I believe contributes to much of the emotional suffering that we call mental illness.
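The arithmetic behind these figures is worth making explicit. A minimal sketch in Python, using hypothetical round counts (13 asthma cases per 100 people) purely to illustrate the definition:

```python
# Lifetime prevalence: the share of a population that will meet
# criteria for a condition at some point in their lives.
def lifetime_prevalence(cases_ever, population):
    return cases_ever / population

# Hypothetical illustration: 13 of every 100 people develop asthma.
print(lifetime_prevalence(13, 100))   # 0.13, i.e. about 13%

# And the figure quoted for psychiatric diagnoses: 46 of every 100 adults.
print(lifetime_prevalence(46, 100))   # 0.46, i.e. 46%
```

The point of the post survives the arithmetic: when nearly half the population meets the criterion, "abnormal" is doing strange statistical work.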

Read Full Post »

In my opinion it was Kant, not Wittgenstein, who solved all the problems of western philosophy. My reasoning goes like this. The pre-eminent problem bedeviling the corpus of western thought since Plato was the question of how knowledge is possible. The issue is the ontological status of epistemological conclusions. For example, if I conclude that any two circles are similar in that the ratio of their circumference to their diameter is always the same, I naturally posit a cause of this similarity which is consistent and unchanging, since if it were changing it could not have caused the pattern that I perceived. And even if I suggest that the pattern I’m perceiving is not entirely accurate (that in fact the ratio of C to d for any circle is “cake” and not “pi”), still there is something causing the erroneous pattern that I am seeing, and that something must actually exist in order to cause my erroneous perception. So the question, for the neo-Platonic thinkers, becomes “how can I know that I know,” or “how can I have justified true belief and also know that it is justified and be justified in believing it?” The question was later re-described by Hume as the “problem of induction,” and his answer was, quite simply, “you can’t!”

Of course, Hume’s skepticism sent shivers through the community of rationalist philosophers who wanted to continue their dabbling in natural philosophy and their readings of “the book of nature written in the language of mathematics.” And Kant, taking him very seriously, clearly was not going to let his beloved ship of science founder on the reef of British skepticism. So Kant’s answer was this: we can indeed have synthetic a priori knowledge of the contingent (phenomenal) world, but what this means is just that we can employ the heuristic of induction and universality merely as a utilitarian tool or allegorical interpretation of experiences, and not as a literal representation of “the truth.” The problem, as Kant saw it, was that thinkers since Plato had been far too prone to think about the world analogically. This led them to believe that, for example, if a watch has to have a watch maker, then a universe that is as intricately designed and as lawfully described as a watch must also have a universe maker. In Kant’s world, we can still speak about ideas like “pi” or “God,” but must understand that these are simply useful heuristic models of experiences that, though they point beyond the contingent, cannot ever be directly given in experience. Ontological conclusions about the heuristics of reason (about epistemological principles) can never be proven or disproven. They are “the noumenal” and are forever forbidden to us as phenomenal creatures.

In this way, we begin to see numbers as convenient words that we use to describe experiences. Numbers are simply a way to signal our understanding of certain properties of sets. And the idea of a set is itself nothing more than a product of our brain’s evolutionarily adaptive tendency to remember and classify events. Whether numbers and sets are anything more than that cannot be entertained or verified by a contingent mind. Likewise, God is a convenient word that we use to express the idea that a duty is a principle that does not depend on physical contingencies, and that if such an uncontingent principle is to have real causal power, its existence must be guaranteed by an uncontingent cause or being. But whether God is anything more than that convenient idea is not something that can be experienced. And if it cannot be experienced, it cannot be spoken. By describing the limits of our thought, what Wittgenstein re-described as the limits of speech, Kant thus absolved us of ever having to answer Hume’s provocative skepticism about the problem of induction, which was the problem of knowledge which was the problem of everything, in the west.

Read Full Post »

When I was in graduate school studying neuroanatomy I realized one day that the paradigm of brain mapping being taught was not intellectually satisfying to me in one very important way. Here’s the paradigm: peripheral nerve group A is stimulated by an event and fires signals which travel to brain cell group A, which is then stimulated into activity; this is different from peripheral nerve group B being stimulated and sending electrochemical impulses to brain cell group B, and that is why we experience different sensory events as different. For example, if I see the color “green,” this stimulates certain cells in my retina, which send impulses to various regions at the back of my brain, so I see green; whereas when I hear the note middle C on the piano, other brain cells in other parts of my brain are stimulated, and that is why I don’t confuse green with middle C.

Neuroanatomy comes down to brain mapping—a description of which events end up stimulating which groups of cells in your peripheral and central nervous system.

The problem I had with this was that one day I suddenly realized that the model didn’t explain anything at all! How do we, the sensing creature, know that group A’s neurons being stimulated corresponds to “green” events and that group B neurons being stimulated corresponds to “middle C” events? Is there some master control center somewhere else in the brain where cells know that A neurons mean “green” and B neurons mean “middle C?” But if that were the case, then we would just end up with yet another extension of the brain mapping. So is there yet another neural center of representation to code the first representational center? Of course, this could quickly become an infinite regress…
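The regress is easy to see if the brain-mapping paradigm is reduced to a toy model. The sketch below is purely illustrative (the group names are invented): the mapping tells us only which cells fire, so any meaning attached to the firing has to come from a second mapping, which would itself need a third, and so on.

```python
# A toy version of the brain-mapping paradigm: events activate
# distinct cell groups, and that is the whole story the model tells.
stimulus_to_group = {
    "green":    "cell_group_A",
    "middle_C": "cell_group_B",
}

# The model says only WHICH group fires, not what the firing MEANS.
# To recover meaning we need a second map that reads the first...
group_to_meaning = {
    "cell_group_A": "green",
    "cell_group_B": "middle_C",
}

# ...and then a third map to read the second, and so on: the regress.
print(group_to_meaning[stimulus_to_group["green"]])  # green
```

Note that the labels “green” and “middle C” are supplied by the modeler standing outside the map, which is exactly the master control center the paradigm cannot provide.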

It quickly became clear to me that the neuroanatomical model provided no explanation whatsoever for what I named “the subjective character” of human experience (the fact that I clearly experience green as distinct from middle C). Based on this model of brain mapping, how could I ever know the difference between middle C and green, since both seemed to reduce to just one group of neurons or another when mapped onto my central nervous system?
I asked all my professors to clear up this paradox. All but one had no idea what I was talking about. The one professor who understood the question also understood that he did not have the answer.

Philosophers of mind call this the “anesthesia problem.” In order to give an objective account of the human mind, they claim, one must feign anesthesia about all of these so-called subjective experiences, since the objective neuroanatomical models provide no way of locating a neuroanatomical cause of subjectivity. Since to feign anesthesia is clearly ridiculous, they therefore claim that the objective models commit the fallacy of trying to reduce something that is irreducible. Call it “subjectivity” or “intentionality” or “mind,” the objection to the neuroanatomical model is still the same, and, as I will argue, it is misguided.

The problem is that we have forgotten what Kant taught us about models of human experience. In brief, what Kant said was that we are contingent beings trying to understand the world from the perspective of the uncontingent. He cautioned that though this may be a useful approach to many of life’s problems, it is ultimately doomed to fail, because it is clear that we cannot, at least in this life, transcend the contingencies of an existence in a physical body. Thus, he said, we should not be surprised if we constantly come up against the limits of our knowledge, which Wittgenstein said were the limits of language.

What does this have to do with modern neuroanatomy? The mistake that I made as a student was to follow my professors’ lead in assuming that events create distinct and discretely localizable responses in the brain. Certainly at the level of resolution that most modern imaging techniques allow, it looks as though responses to distinct events can be localized with some degree of certainty. However, we also know that the brain is a much more interdependent organ than was previously thought, and that there really is no such thing as a completely isolatable event. The body impacts the brain and the brain impacts the body in a continuous dance of mutuality. For example, cortisol released in response to stress damages the hippocampus and may be implicated in depression and anxiety. Deficiencies in prefrontal executive circuits can impede activities of daily living and result in increases in stress hormones and subjective experiences of anxiety and depression. The list could go on from there, and no doubt will as our science refines its level of resolution. The point is that it is no longer appropriate to think in terms of discrete brain regions; rather, we need to think about the state of the entire brain as it is situated in the body at any given moment.

From this perspective, it is no longer appropriate to talk about “knowing” the difference between middle C and green. To say that we “know” the difference between middle C and green becomes a non sequitur, unless by “know” we are simply describing a propensity for future actions, which seems as preposterous to me as the argument from anesthesia. For to know the difference between C and green would seem to imply that there is something added to the complete description of the whole brain in response to C. But this just recapitulates the mistake that Kant was trying to warn us about. It represents the effort to go beyond a contingent description of the facts and hypothesize an object that can never be given in experience. Kant destroyed this endeavor in his critique of rational psychology, but we clearly still see it cropping up in our cultural habits of experiential description. I think the answer Kant would have proposed to the question I asked as a student would be something like this:

            “The answer is that the whole brain experiences C and green, and the whole brain is in a different electrochemical configuration when stimulated by C as opposed to when it is stimulated by green. What we call ‘knowing the difference between C and green’ is nothing more than the fact that our brains are in a different state at different points in time. There is nothing else to add to the description. You have exhausted all the facts available to your contingent brain. To try to move beyond these facts is to entertain an experience of the uncontingent, and that, my son, you will never achieve in this life.”
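One way to make this imagined answer concrete: model whole-brain configurations as state vectors and notice that “knowing the difference” is nothing over and above the states being different. A toy sketch with invented numbers standing in for whole-brain electrochemical configurations:

```python
# Whole-brain states as vectors; the values are invented placeholders
# for a complete electrochemical description at a moment in time.
state_when_hearing_C = (0.2, 0.9, 0.1, 0.4)
state_when_seeing_green = (0.8, 0.1, 0.7, 0.3)

# "Knowing the difference" is exhausted by the states differing;
# no further labeling step, no homunculus reading off "green" or "C".
def discriminates(state_a, state_b):
    return state_a != state_b

print(discriminates(state_when_hearing_C, state_when_seeing_green))  # True
```

Nothing in the sketch answers what C is like as opposed to green, and on the Kantian reading that is precisely the point: there is no further fact available to a contingent brain.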

Read Full Post »

a colleague once asked what i think is a very important question, which i have continued to ponder for some time: what does a neurophysiological description of behavior add to our understanding of human psychology already described with a functional model? today it hit me: it helps me be less judgmental about my own and others’ emotional vulnerability. if i describe emotional vulnerability in terms of the plasticity (learning) of my hippocampal-amygdala-autonomic response circuit to significant events, or the relatively prolonged period of time that it takes my autonomic nervous system (ANS) to return to baseline after an emotionally upsetting event, then i think i can be less judgmental about myself and others as being “moody” or “sensitive” or “irrational.” it orients me to an understanding of my own inborn limits and allows me to accept those limits rather than judging them as “wrong” or “bad” as i’ve been taught to do for most of my life. example: once when i had a falling out with a colleague she remarked that i “need to examine my issues and why i put up so many emotional walls.” in the context, it was a statement that i experienced as quite pejorative, invalidating and hostile. needless to say, it was not effective in promoting problem solving between us, nor in providing me with useful information about myself (at least at that time…). in fact, i am a person who does not trust easily and finds it difficult to overcome distrust, and the neurophysiological model allows me to redescribe the situation: when i learn to “distrust” a situation or person, i can validate that it is objectively difficult for me to learn “trust” again, simply because my brain is especially plastic in the circuit involving the creation of memories and the linking of those memories (synapses in the hippocampus) to my emotional response circuits (amygdala and ANS).
as the buddhists would say, i can learn to understand the nature of the way things are (the dharma) rather than being stuck in the shoulds of life.

Read Full Post »

i was on my way to asking a friend if he was a buddhist.

the problem is that the question is immediately unanswerable.

like the question are you a pragmatist?

you cannot ask someone who challenges the entire metaphysical world of “definitions” to define themselves as one thing or another.

it’s like asking whether the sentence “this sentence is a lie” is true or false.

the stimulus has no satisfying response class.

why is this a problem for me?

i often find myself disappointed by my friend’s words.

they seem anti-buddhist.

so i want to ask him if he is a buddhist.

because he certainly reminds me of one.

but then, his words disappoint me.

like an orchestra that starts playing schubert in the middle of an all-beethoven concert.


that’s not right.

it’s more like wagner.

schubert would be a pleasant surprise.

i therefore come to a question that demonstrates the inadequacy of words.

are you a buddhist?

Read Full Post »

The classical problem of knowledge is this: if I presume that the evidence of my senses corresponds to something that would exist independently of my senses, then I must assume some sort of continuity of that something. I must assume that my observation is representative of a larger universal set of events. Modern science enshrines this principle in two words: homogeneity (the idea that my observations are a fair sample of the observations that could be made from anywhere else in the universe) and isotropy (the idea that the universe looks the same in every direction, so there is no privileged direction in which the laws of physics differ). Following these principles has, unfortunately, led us to several contradictions. For example, the speed of light is observed to be the same for all observers, regardless of their state of motion. Given that the speed of light is constant for all observers, it represents the maximum attainable speed of any existing phenomenon. (This is because if you could exceed the speed of light, it would no longer appear constant to you: when your car accelerates past another car, the other car’s speed appears to decrease.)

Given that nothing can exceed the speed of light, there are now portions of the known universe which are so far apart that it is not possible for an event in one part of the universe to be known in another part of the universe, because the light could not have traveled that far in the amount of time that the universe has existed (about 14 billion years). Thus, the principles of homogeneity and isotropy cannot be affirmed for all parts of the universe, since it is possible that the laws of physics may have changed in certain parts of the universe but that these changes would remain unknown in other parts of the universe.
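The back-of-the-envelope version of this argument is simple: light covers one light-year per year, so in roughly 14 billion years no signal can have bridged a gap larger than roughly 14 billion light-years. A deliberately naive sketch (it ignores the expansion of space, which in real cosmology actually pushes the true horizon considerably farther out):

```python
# Naive causal-horizon check, ignoring cosmic expansion: two regions can
# have exchanged a signal only if their separation in light-years is no
# more than the age of the universe in years.
AGE_OF_UNIVERSE_YEARS = 14e9  # roughly 14 billion years

def could_be_in_causal_contact(separation_light_years):
    return separation_light_years <= AGE_OF_UNIVERSE_YEARS

print(could_be_in_causal_contact(10e9))  # True: 10 billion ly is within reach
print(could_be_in_causal_contact(30e9))  # False: light has not had time to cross
```

On this simplified picture, any two regions separated by more than the age-times-c distance are causally disconnected, which is all the argument above requires.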

Modern cosmologists try to deal with this paradox through the “inflationary principle,” which states that at some point in the past the universe was small enough to be in complete causal contact (since it was small enough to allow light to traverse the entire known universe in the time that it had existed). At that time the laws of the universe would have been constant throughout and, as the universe expanded, it maintained this homogeneity, since once it expanded past the size that would allow light to traverse the entire universe, there was no longer any way for any event to change this pre-established homogeneity.

The original problem is still present, however, in concealed form, because inflationary theory posits a finite point in the past at which the universe went from complete causal contact to incomplete causal contact. And this represents a change in the laws of the universe (a transition point in the behavior of everything). Presumably, we should be in a position to observe that change, since as we gaze up at the night sky we are actually looking into the “past”: the light reaching our eyes has been traveling towards us for a very long time. It therefore follows that we cannot in fact assume that all observations of the known universe are homogeneous and universal, since we can observe the finite period in time in which the universe’s laws became “uncoupled” in time.

Once again we discover that, contrary to Plato’s myth of the cave, there is no absolute truth. Once again we discover that the statement “there is no absolute truth” is itself a lie and a paradox. Once again we discover the contingency of words, models, ideas, conjectures, hypotheses, sense data. Once again we are reduced to a confounding mass of information that causes us to throw up our hands and utter that ultimate epithet of ignorance: “God.”

Read Full Post »

Older Posts »