Tag Archives: Philosophy
Richard Feynman — How Much Can We Know?
We open our eyes, we see the world, we discern patterns. We theorize, formalize; we use rationality and mathematics to understand and describe everything. How much can we really know, though?
To illustrate what I mean, let me use an analogy. I wish I had the imagination to come up with it, but it was Richard Feynman who did. He was, by the way, quirky enough to compare physics with sex.
Man as Chinese Room
In the previous posts in this series, we discussed how devastating Searle’s Chinese Room argument was to the premise that our brains are digital computers. He argued, quite convincingly, that mere symbol manipulation could not lead to the rich understanding that we seem to enjoy. However, I refused to be convinced, and found the so-called systems response more convincing. It was the counter-argument saying that it was the whole Chinese Room that understood the language, not merely the operator or symbol pusher in the room. Searle laughed it off, but had a serious response as well. He said, “Let me be the whole Chinese Room. Let me memorize all the symbols and the symbol manipulation rules so that I can provide Chinese responses to questions. I still don’t understand Chinese.”
Now, that raises an interesting question — if you know enough Chinese symbols, and Chinese rules to manipulate them, don’t you actually know Chinese? Of course you can imagine someone being able to handle a language correctly without understanding a word of it, but I think that is stretching the imagination a bit too far. I am reminded of the blindsight experiments, where people could see without knowing it, without being consciously aware of what it was that they were seeing. Searle’s response points in the same direction — being able to speak Chinese without understanding it. What the Chinese Room is lacking is the conscious awareness of what it is doing.
To delve a bit deeper into this debate, we have to get a bit formal about Syntax and Semantics. Language has both syntax and semantics. For example, a statement like “Please read my blog posts” has the syntax originating from the grammar of the English language, symbols that are words (syntactical placeholders), letters and punctuation. On top of all that syntax, it has a content — my desire and request that you read my posts, and my background belief that you know what the symbols and the content mean. That is the semantics, the meaning of the statement.
A computer, according to Searle, can only deal with symbols and, based on symbolic manipulation, come up with syntactically correct responses. It doesn’t understand the semantic content as we do. It is incapable of complying with my request because of its lack of understanding. It is in this sense that the Chinese Room doesn’t understand Chinese. At least, that is Searle’s claim. Since computers are like Chinese Rooms, they cannot understand semantics either. But our brains can, and therefore the brain cannot be a mere computer.
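To make Searle’s distinction concrete, here is a minimal sketch of what “mere symbol manipulation” might look like in code. It is purely illustrative and mine, not Searle’s; the patterns and canned replies are made up.

```python
import re

# A toy "Chinese Room": each rule maps an input pattern to a canned reply,
# chosen purely by matching the shape of the input, never by grasping meaning.
# (Hypothetical patterns and replies, for illustration only.)
RULES = [
    (re.compile(r"\bplease read\b", re.IGNORECASE), "I will read it soon."),
    (re.compile(r"\bhow are you\b", re.IGNORECASE), "I am fine, thank you."),
    (re.compile(r"\?\s*$"), "That is an interesting question."),
]
DEFAULT_REPLY = "I see."

def respond(sentence: str) -> str:
    """Return a syntactically well-formed reply by rule lookup alone."""
    for pattern, reply in RULES:
        if pattern.search(sentence):
            return reply
    return DEFAULT_REPLY

print(respond("Please read my blog posts"))  # prints: I will read it soon.
# The reply is grammatical, but nothing in the program represents my desire
# or your belief, i.e. the semantic content. It cannot comply with the
# request; it can only emit symbols.
```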
When put that way, I think most people would side with Searle. But what if the computer could actually comply with the requests and commands that form the semantic content of statements? I guess even then we would probably not consider a computer fully capable of semantic comprehension, which is why, even if a computer actually complied with my request to read my posts, I might not find it intellectually satisfying. What we are demanding, of course, is consciousness. What more can we ask of a computer to convince us that it is conscious?
I don’t have a good answer to that. But I think you have to apply uniform standards in ascribing consciousness to entities external to you — if you believe in the existence of other minds in humans, you have to ask yourself what standards you apply in arriving at that conclusion, and ensure that you apply the same standards to computers as well. You cannot build circular conditions into your standards — for instance, that others have human bodies, nervous systems and an anatomy like you do, and therefore have minds as well, which is what Searle did.
In my opinion, it is best to be open-minded about such questions, and important not to answer them from a position of insufficient logic.
Minds as Machine Intelligence
Prof. Searle is perhaps most famous for his proof that computing machines (or computation as defined by Alan Turing) can never be intelligent. His proof uses what is called the Chinese Room argument, which shows that mere symbol manipulation (which is what Turing’s definition of computation amounts to, according to Searle) cannot lead to understanding and intelligence. Ergo our brains and minds could not be mere computers.
The argument goes like this — assume Searle is locked up in a room where he gets inputs corresponding to questions in Chinese. He has a set of rules to manipulate the input symbols and pick out an output symbol, much as a computer does. So he comes up with Chinese responses that fool outside judges into believing that they are communicating with a real Chinese speaker. Assume that this can be done. Now, here is the punch line — Searle doesn’t know a word of Chinese. He doesn’t know what the symbols mean. So mere rule-based symbol manipulation is not enough to guarantee intelligence, consciousness, understanding etc. Passing the Turing Test is not enough to guarantee intelligence.
One of the counter-arguments that I found most interesting is what Searle calls the systems argument. It is not Searle in the Chinese Room that understands Chinese; it is the whole system, including the ruleset, that does. Searle laughs it off saying, “What, the room understands Chinese?!” I think the systems argument merits more than that derisive dismissal. I have two supporting arguments in favor of the systems response.
The first one is the point I made in the previous post in this series. In Problem of Other Minds, we saw that Searle’s answer to the question whether others have minds was essentially by behavior and analogy. Others behave as though they have minds (in that they cry out when we hit their thumb with a hammer) and their internal mechanisms for pain (nerves, brain, neuronal firings etc) are similar to ours. In the case of the Chinese room, it certainly behaves as though it understands Chinese, but it doesn’t have any analogs in terms of the parts or mechanisms like a Chinese speaker. Is it this break in analogy that is preventing Searle from assigning intelligence to it, despite its intelligent behavior?
The second argument takes the form of another thought experiment — I think it is called the Chinese Nation argument. Let’s say we can delegate the work of each neuron in Searle’s brain to a non-English speaking person. So when Searle hears a question in English, it is actually being handled by trillions of non-English speaking computational elements, which generate the same response as his brain would. Now, where is the English language understanding in this Chinese Nation of non-English speaking people acting as neurons? I think one would have to say that it is the whole “nation” that understands English. Or would Searle laugh it off saying, “What, the nation understands English?!”
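A rough way to picture the Chinese Nation is as a network of very simple units, each blindly following a local rule, the way each person in the nation would. The sketch below is my own toy illustration, not anything taken from Searle or his critics; the weights are hand-picked so that the ensemble computes XOR while no single unit does.

```python
# A hand-wired two-layer network that computes XOR. Each "unit" only sums
# its inputs and compares the sum to a threshold, the kind of dumb local
# rule a single person in the "nation" could follow by rote.
# (Weights are hand-picked for illustration.)

def unit(inputs, weights, threshold):
    """One 'citizen': fire (1) if the weighted sum clears the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor(a, b):
    h1 = unit([a, b], [1, 1], 1)       # fires if a OR b
    h2 = unit([a, b], [1, 1], 2)       # fires if a AND b
    return unit([h1, h2], [1, -1], 1)  # fires if OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
# No individual unit computes XOR; only the ensemble does. Whether that
# entitles us to say the ensemble "understands" anything is precisely what
# is in dispute.
```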
Well, if the Chinese nation could understand English, I guess the Chinese room could understand Chinese as well. Computing with mere symbol manipulation (which is what the people in the nation are doing) can and does lead to intelligence and understanding. So our brains could really be computers, and minds software manipulating symbols. Ergo Searle is wrong.
Look, I used Prof. Searle’s arguments and my counter arguments in this series as a sort of dialog for dramatic effect. The fact of the matter is, Prof. Searle is a world-renowned philosopher with impressive credentials while I am a sporadic blogger — a drive-by philosopher at best. I guess I am apologizing here to Prof. Searle and his students if they find my posts and comments offensive. It was not intended; only an interesting read was intended.
Problem of Other Minds
How do you know other people have minds as you do? This may sound like a silly question, but if you allow yourself to think about it, you will realize that you have no logical reason to believe in the existence of other minds, which is why it is an unsolved problem in philosophy – the Problem of Other Minds. To illustrate – I was working on that Ikea project the other day, and was hammering in that weird two-headed nail-screw-stub thingie. I missed it completely and hit my thumb. I felt the excruciating pain, meaning my mind felt it and I cried out. I know I have a mind because I felt the pain. Now, let’s say I see another bozo hitting his thumb and crying out. I feel no pain; my mind feels nothing (except a bit of empathy on a good day). What positive logical basis do I have to think that the behavior (crying) is caused by pain felt by a mind?
Mind you, I am not suggesting that others do not have minds or consciousness — not yet, at least. I am merely pointing out that there is no logical basis to believe that they do. Logic certainly is not the only basis for belief. Faith is another. Intuition, analogy, mass delusion, indoctrination, peer pressure, instinct etc. are all bases for beliefs, both true and false. I believe that others have minds; otherwise I wouldn’t bother writing these blog posts. But I am keenly aware that I have no logical justification for this particular belief.
The thing about this problem of other minds is that it is profoundly asymmetric. If I believe that you don’t have a mind, it is not an issue for you — you know that I am wrong the moment you hear it because you know that you have a mind (assuming, of course, that you do). But I do have a serious issue — there is no way for me to attack my belief in the non-existence of your mind. You could tell me, of course, but then I would think, “Yeah, that is exactly what a mindless robot would be programmed to say!”
I was listening to a series of lectures on the philosophy of mind by Prof. John Searle. He “solves” the problem of other minds by analogy. We know that we have the same anatomical and neurophysiological wiring, in addition to analogous behavior. So we can “convince” ourselves that we all have minds. It is a good argument as far as it goes. What bothers me about it is its complement — what it implies about minds in things that are wired differently, like snakes and lizards and fish and slugs and ants and bacteria and viruses. And, of course, machines.
Could machines have minds? The answer to this is rather trivial — of course they can. We are biological machines, and we have minds (assuming, again, that you guys do). Could computers have minds? Or, more pointedly, could our brains be computers, and minds be software running on it? That is fodder for the next post.
Brains and Computers
We have a perfect parallel between brains and computers. We can easily think of the brain as the hardware and mind or consciousness as the software or the operating system. We would be wrong, according to many philosophers, but I still think of it that way. Let me outline the compelling similarities (according to me) before getting into the philosophical difficulties involved.
A lot of what we know of the workings of the brain comes from lesion studies. We know, for instance, that features like color vision, face and object recognition, motion detection, and language production and understanding are all controlled by specialized areas of the brain. We know this by studying people who have suffered localized brain damage. These functional features of the brain are remarkably similar to computer hardware units specialized in graphics, sound, video capture etc.
The similarity is even more striking when we consider that the brain can compensate for the damage to a specialized area by what looks like software simulation. For instance, the patient who lost the ability to detect motion (a condition normal people would have a hard time appreciating or identifying with) could still infer that an object was in motion by comparing successive snapshots of it in her mind. The patient with no ability to tell faces apart could, at times, deduce that the person walking toward him at a pre-arranged spot at the right time was probably his wife. Such instances give us the following attractive picture of the brain.
Brain → Computer hardware
Consciousness → Operating System
Mental functions → Programs
It looks like a logical and compelling picture to me.
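That motion-inference workaround, comparing successive snapshots, is essentially what a simple frame-differencing routine does in software when there is no dedicated motion hardware. A minimal sketch of mine, with made-up toy frames:

```python
# Inferring motion by comparing successive "snapshots", the way the
# motion-blind patient did: a software stand-in for a missing hardware
# feature. The frames are tiny made-up grids of pixel intensities.

def moved(prev_frame, next_frame, threshold=0):
    """Report True if any pixel changed by more than the threshold."""
    return any(
        abs(a - b) > threshold
        for row_prev, row_next in zip(prev_frame, next_frame)
        for a, b in zip(row_prev, row_next)
    )

frame1 = [[0, 0, 9],
          [0, 0, 0]]
frame2 = [[0, 0, 0],
          [0, 9, 0]]  # the bright spot has shifted

print(moved(frame1, frame2))  # prints: True; motion inferred, never "seen"
```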
This seductive picture, however, is far too simplistic at best, or utterly wrong at worst. The basic, philosophical problem with it is that the brain itself is a representation drawn on the canvas of consciousness and the mind (which are again cognitive constructs). This abysmal infinite regression is impossible to crawl out of. But even when we ignore this philosophical hurdle and ask ourselves whether brains could be computers, we have big problems. What exactly are we asking? Could our brains be computer hardware, and minds be software running on them? Before asking such questions, we have to ask parallel questions: Could computers have consciousness and intelligence? Could they have minds? If they had minds, how would we know?
Even more fundamentally, how do you know whether other people have minds? This is the so-called Problem of Other Minds, which we will discuss in the next post before proceeding to consider computing and consciousness.
Seeing and Believing
When we open our eyes and look at some thing, we see that damn thing. What could be more obvious than that, right? Let’s say you are looking at your dog. What you see is really your dog, because, if you want, you can reach out and touch it. It barks, and you can hear the woof. If it stinks a bit, you can smell it. All these extra perceptual clues corroborate your belief that what you are seeing is your dog. Directly. No questions asked.
Of course, my job on this blog is to ask questions, and cast doubts. First of all, seeing and touching seem to be a bit different from hearing and smelling. You don’t strictly hear your dog bark; you hear its sound. Similarly, you don’t smell it directly; you smell the odor, the chemical trail the dog has left in the air. Hearing and smelling are three-place perceptions — the dog generates the sound/odor, the sound/odor travels to you, you perceive the sound/odor.
But seeing (or touching) is a two-place thing — the dog there, and you here, perceiving it directly. Why is that? Why do we feel that when we see or touch something, we sense it directly? This belief in the perceptual veracity of what we see is called naive realism. We of course know that seeing involves light (so does touching, but in a much more complicated way), that what we are seeing is the light reflected off an object, and so on. It is, in fact, no different from hearing something. But this knowledge of the mechanism of seeing doesn’t alter our natural, commonsense view that what we see is what is out there. Seeing is believing.
Extrapolated from the naive version is scientific realism, which asserts that our scientific concepts are also real, even though we may not directly perceive them. So atoms are real. Electrons are real. Quarks are real. Most of our better scientists out there have been skeptical about this extrapolation of our notion of what is real. Einstein, probably the best of them, suspected that even space and time might not be real. Feynman and Gell-Mann, after developing theories of electrons and quarks, expressed the view that electrons and quarks might be mathematical constructs rather than real entities.
What I am inviting you to do here is to go beyond the skepticism of Feynman and Gell-Mann, and delve into Einstein’s words — space and time are modes by which we think, not conditions in which we live. The sense of space is so real to us that we think of everything else as interactions taking place in the arena of space (and time). But space itself is the experience corresponding to the electrical signals generated by the light hitting your retina. It is a perceptual construct, much like the tonality of the sound you hear when air pressure waves hit your eardrums. Our adoption of naive realism results in our complete trust in the three-dimensional space view. And since the world is created (in our brain, as perceptual constructs) based on light, its speed becomes an all-important constant in our world. And since speed mixes space and time, a better description is found in a four-dimensional Minkowski geometry. But all these descriptions are based on perceptual experiences and therefore unreal in some sense.
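As an aside, the sense in which “speed mixes space and time” can be made concrete with the usual Lorentz boost: changing to a frame moving at velocity v reshuffles the t and x coordinates, but the Minkowski interval comes out the same. A small numerical check of my own, in units where c = 1:

```python
import math

C = 1.0  # work in units where the speed of light is 1

def boost(t, x, v):
    """Lorentz boost along x with relative velocity v (|v| < 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v / (C * C))
    t_prime = gamma * (t - v * x / (C * C))
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def interval_squared(t, x):
    """Minkowski interval (squared): the quantity all observers agree on."""
    return (C * t) ** 2 - x ** 2

t, x = 3.0, 2.0
t2, x2 = boost(t, x, v=0.6)
print(interval_squared(t, x), interval_squared(t2, x2))  # equal, up to rounding
# The boost mixes the t and x coordinates thoroughly, yet the interval stays
# put; that invariant is what the four-dimensional description is tracking.
```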
I know the description above is highly circular — I talked about space being a mental construct created by light traveling through, get this, space. And when I speak of its speed, naturally, I’m talking about distance in space divided by time, and positing it as the basis for the space-time mixing. This circularity makes my description less than clear and convincing. But the difficulty goes deeper than that. You see, all we have is this cognitive construct of space and time. We can describe objects and events only in terms of these constructs, even when we know that they are only cognitive representations of sensory signals. Our language doesn’t go beyond that. Well, it does, but then we will be talking the language, for instance, of Advaita, calling the constructs Maya and the causes behind them Brahman, which stays unknowable. Or we will be using some other parallel descriptions. These descriptions may be profound, wise and accurate. But ultimately, they are also useless.
But if philosophy is your thing, the discussions of cognitive constructs and unknown causations are not at all useless. Philosophy of physics happens to be my thing, and so I ask myself — what if I assume the unknown physical causes exist in a world similar to our perceptual construct? I could then propagate the causes through the process of perception and figure out what the construct should look like. I know, it sounds a bit complex, but it is something that we do all the time. We know, for instance, that the stars we see in the night sky are not really there — we are seeing them the way they were a few (or a few million or billion) years ago, because the light from them takes a long time to reach us. Physicists also know that the perceived motion of celestial objects needs to be corrected for these light-travel-time effects.
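The light-travel-time correction is simple arithmetic: distance divided by the speed of light tells you how far into the past you are looking, and a moving object has drifted by roughly its velocity times that delay since the light left it. A toy calculation with made-up numbers:

```python
C = 299_792_458.0        # speed of light, in metres per second
LIGHT_YEAR = 9.4607e15   # metres in one light year

def lookback_years(distance_ly):
    """How far in the past we see an object at this distance."""
    return distance_ly * LIGHT_YEAR / C / (365.25 * 24 * 3600)

def position_now(apparent_position_m, velocity_m_s, distance_ly):
    """Where the object is 'now', assuming it kept its (made-up) velocity."""
    delay_s = distance_ly * LIGHT_YEAR / C
    return apparent_position_m + velocity_m_s * delay_s

# A star 4.2 light years away is seen as it was about 4.2 years ago.
print(round(lookback_years(4.2), 1))
# If it drifts sideways at 20 km/s, it is now offset from where we see it
# by roughly velocity times delay (in metres; toy numbers throughout).
print(position_now(0.0, 20_000.0, 4.2))
```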
In fact, Einstein used the light-travel-time effects as the basis for deriving his special theory of relativity. He then stipulated that space and time behave the way we perceive them to, as derived using the said light-travel-time effects. This, of course, is based on his deep understanding that space and time are “the modes by which we think,” but also on the assumption that the causes behind the modes are similar to the modes themselves. This depth of thinking is lost on the lesser scientists who came after him. The distinction between the modes of thinking and their causation is also lost, so that space and time have become entities that obey strange rules. Like bent spoons.
Photo by General Press1
Deferred Satisfaction
The mother was getting annoyed that her teenaged son was wasting time watching TV.
“Son, don’t waste your time watching TV. You should be studying,” she advised.
“Why?” quipped the son, as teenagers usually do.
“Well, if you study hard, you will get good grades.”
“Yeah, so?”
“Then, you can get into a good school.”
“Why should I?”
“That way, you can hope to get a good job.”
“Why? What do I want with a good job?”
“Well, you can make a lot of money that way.”
“Why do I want money?”
“If you have enough money, you can sit back and relax. Watch TV whenever you want to.”
“Well, I’m doing it right now!”
What the mother is advocating, of course, is the wise principle of deferred satisfaction. It doesn’t matter if you have to do something slightly unpleasant now, as long as you get rewarded for it later in life. This principle is so much a part of our moral fabric that we take it for granted, never questioning its wisdom. Because of our trust in it, we obediently take bitter medicines when we fall sick, knowing that we will feel better later on. We silently submit ourselves to jabs, root canals, colonoscopies and other atrocities done to our persons because we have learned to tolerate unpleasantness in anticipation of future rewards. We even work like a dog at jobs so loathsome that they really have to pay us a pretty penny to stick it out.
Before I discredit myself, let me make it very clear that I do believe in the wisdom of deferred satisfaction. I just want to take a closer look because my belief, or the belief of seven billion people for that matter, is still no proof of the logical rightness of any principle.
The way we lead our lives these days is based on what they call hedonism. I know that the word has a negative connotation, but that is not the sense in which I am using it here. Hedonism is the principle that any decision we take in life is based on how much pain and pleasure it is going to create. If there is an excess of pleasure over pain, then it is the right decision. (In the case where the recipients of the pain and the pleasure are distinct individuals, nobility or selfishness also enters the decision, but we are not considering that here.) So the aim of a good life is to maximize this excess of pleasure over pain. Viewed in this context, the principle of deferred satisfaction makes sense — it is one good strategy for maximizing the excess.
But we have to be careful about how much to delay the satisfaction. Clearly, if we wait for too long, all the satisfaction credit we accumulate will go wasted because we may die before we have a chance to draw upon it. This realization may be behind the mantra “live in the present moment.”
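One crude way to see both halves of that argument, defer for a bigger payoff but not so long that you never collect, is to weight each future year’s pleasure by the chance of still being around to enjoy it. This is entirely a toy model of my own; the numbers mean nothing:

```python
# A toy model of deferred satisfaction: each year of study costs a little
# pleasure now in exchange for a larger payoff later, and every future year
# is weighted by a (made-up) probability of still being around to enjoy it.

def expected_excess(yearly_net_pleasure, annual_survival=0.99):
    """Sum of yearly (pleasure minus pain), weighted by survival odds."""
    total, p_alive = 0.0, 1.0
    for net in yearly_net_pleasure:
        total += p_alive * net
        p_alive *= annual_survival
    return total

watch_tv_now  = [5] * 40              # a steady, modest pleasure every year
study_then_tv = [-2] * 5 + [8] * 35   # unpleasant at first, better later

print(round(expected_excess(watch_tv_now), 1))
print(round(expected_excess(study_then_tv), 1))  # comes out ahead, unless
                                                 # the survival odds are much worse
```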
Where hedonism falls short is in the fact that it fails to consider the quality of the pleasure. That is where it gets its bad connotation from. For instance, a Ponzi-scheme master like Madoff probably made the right decisions by this measure, because he enjoyed long periods of luxurious opulence at the cost of a relatively short duration of pain in prison.
What is needed, perhaps, is another measure of the rightness of our choices. I think it is in the intrinsic quality of the choice itself. We do something because we know that it is good.
I am, of course, touching upon the vast branch of philosophy they call ethics. It is not possible to summarize it in a couple of blog posts. Nor am I qualified enough to do so. Michael Sandel, on the other hand, is eminently qualified, and you should check out his online course Justice: What is the Right Thing to Do? if interested. I just want to share my thought that there is something like the intrinsic quality of a way of life, or of choices and decisions. We all know it, because it comes before our intellectual analysis. We do the right thing not so much because it gives us an excess of pleasure over pain, but because we know what the right thing is and have an innate need to do it.
That, at least, is the theory. But, of late, I’m beginning to wonder whether the whole right-wrong, good-evil distinction is an elaborate ruse to keep some simple-minded folks in check, while the smarter ones keep enjoying totally hedonistic (using it with all the pejorative connotation now) pleasures of life. Why should I be good while the rest of them seem to be reveling in wall-to-wall fun? Is it my decaying internal quality talking, or am I just getting a bit smarter? I think what is confusing me, and probably you as well, is the small distance between pleasure and happiness. Doing the right thing results in happiness. Eating a good lunch results in pleasure. When Richard Feynman wrote about The Pleasure of Finding Things Out, he was probably talking about happiness. When I read that book, what I’m experiencing is probably closer to mere pleasure. Watching TV is probably pleasure. Writing this post, on the other hand, is probably closer to happiness. At least, I hope so.
To come back to my little story above, what could the mother say to her TV-watching son to impress upon him the wisdom of deferred satisfaction? Well, just about the only thing I can think of is the argument from hedonism: if the son wastes his time now watching TV, there is a very real possibility that he may not be able to afford a TV later on in life. Perhaps intrinsically good parents won’t let their children grow up into a TV-less adulthood. I suspect I would, because I believe in the intrinsic goodness of taking responsibility for one’s actions and their consequences. Does that make me a bad parent? Is it the right thing to do? Need we ask anyone to tell us these things?
My Life, My Way
After almost eight years in banking, I have finally called it quits. Over the last three of those years, I had been telling people that I was leaving. And I think people had stopped taking me seriously. My wife certainly did, and it came as a major shock to her. But despite her studied opposition, I managed to pull it off. In fact, it is not just banking that I left, I have actually retired. Most of my friends greeted the news of my retirement with a mixture of envy and disbelief. The power to surprise — it is nice to still have that power.
Why is it a surprise, really? Why would anyone think that it is insane to walk away from a career like mine? Insanity is doing the same thing over and over and expecting different results. Millions of people do the same insanely crummy stuff over and over, every one of them wanting nothing more than to stop doing it, even planning on it, only to postpone their plans for one silly reason or another. I guess the force of habit in doing the crummy stuff is greater than the fear of change. There is a gulf between what people say their plans are and what they end up doing, which is the theme of that disturbing movie Revolutionary Road. This gulf is extremely narrow in my case. I set out with a bunch of small targets — to help a few people, to make a modest fortune, to provide reasonable comfort and security to those near me. I have achieved them, and now it is time to stop. The trouble with all such targets is that once you get close to them, they look mundane, and nothing is ever enough for most people. Not for me, though — I have always been reckless enough to stick to my plans.
One of the early instances of such reckless action came during my undergraduate years at IIT Madras. I was pretty smart academically, especially in physics. But I wasn’t too good at remembering details like the names of theorems. Once, this eccentric professor of mine at IIT asked me the name of a particular theorem relating the line integral of the electric field around a point and the charge contained within. I think the answer was Green’s theorem, while its 3-D equivalent (surface integral) is called Gauss’s theorem or something. (Sorry, my Wikipedia and Google searches didn’t bring up anything definitive on that.) I answered Gauss’s theorem. The professor looked at me for a long moment with contempt in his eyes and said (in Tamil) something like I needed to get a beating with his slippers. I still remember standing there in my khaki workshop attire and listening to him, with my face burning with shame and impotent anger. And, although physics was my favorite subject (my first love, in fact, as I keep saying, mostly to annoy my wife), I didn’t go back to any of his lectures after that. I guess even at that young age, I had this disturbing level of recklessness in me. I now know why. It is the ingrained conviction that nothing really matters. Nothing ever did, as Meursault the Stranger points out in his last bout of eloquence.
I left banking for a variety of reasons; remuneration wasn’t one of them, but recklessness perhaps was. I had some philosophical misgivings about the rightness of what I was doing at a bank. I suffered from a troubled conscience. Philosophical reasons are strange beasts — they lead to concrete actions, often disturbing ones. Albert Camus (in his collection The Myth of Sisyphus) warned of it while talking about the absurdity of life. Robert Pirsig, in his epilogue to Zen and the Art of Motorcycle Maintenance, also talked about when such musings became psychiatrically dangerous. Michael Sandel is another wise man who, in his famous lectures on Justice: What is the Right Thing to Do?, pointed out that philosophy could often color your perspective permanently — you cannot unlearn it to go back; you cannot unthink a thought to become normal again.
Philosophy and recklessness aside, the other primary reason for leaving the job was boredom. The job got so colossally boring. Looking out my window at the traffic 13 floors below was infinitely more rewarding than looking at the work on my three computer screens. And so I spent half my time staring out the window. Of course, my performance dwindled as a result. I guess scuttling the performance is the only way to realistically make oneself leave a high-paying job. There are times when you have to burn the bridges behind you. Looking back at it now, I cannot really understand why I was so bored. I was a quantitative developer, and the job involved developing reports and tools. Coding is what I do for fun at home. That and writing, of course. Maybe the boredom came from the fact that there was no serious intellectual content in it. There was none in the tasks, nor in the company of the throngs of ambitious colleagues. Walking into the workplace every morning, looking at all the highly paid people walking around with impressive demeanors of doing something important, I used to feel almost sad. How important could their bean-counting ever be?
Then again, how important could this blogging be? We get back to Meursault’s tirade – rien n’avait d’importance, nothing had any importance. Perhaps I was wrong to have thrown it away, as all of them keep telling me. Perhaps those important-looking colleagues really were important, and I was the one in the wrong to have retired. That also matters little; that also has little importance, as Meursault and my alter ego would see it.
What next is the question that keeps coming up. I am tempted to give the same tongue-in-cheek answer as Larry Darrell in The Razor’s Edge — Loaf! My kind of loafing would involve a lot of thinking, a lot of studying, and hard work. There is so much to know, and so little time left to learn.
Photo by kenteegardin
Everything and Nothing
I once attended a spiritual self-help kind of course. Toward the end of the course, there was this exercise where the teacher would ask the question, “What are you?” Whatever answer the participant came up with, the teacher would tear it apart. For instance, if I said, “I work for a bank as a quantitative finance professional,” she would say, “Yeah, that’s what you do, but what are you?” If I said, “I am Manoj,” she would say, “Yeah, that’s only your name, what are you?” You get the idea. To the extent that it is a hard question to answer, the teacher always gets the upper hand.
Not in my case though. Luckily for me, I was the last one to answer the question, and I had the benefit of seeing how this exercise evolved. Since I had time, I decided to cook up something substantial. So when my turn came, here was my response that pretty much floored the teacher. I said, “I am a little droplet of consciousness so tiny that I’m nothing, yet part of something so big that I’m everything.” As I surmised, she couldn’t very well say, “Yeah, sure, but what are you?” In fact, she could’ve said, “That’s just some serious bullshit, man, what the heck are you?” which is probably what I would’ve done. But my teacher, being the kind and gentle soul she is, decided to thank me gravely and move on.
Now I want to pick up on that theme and point out that there is more to that response than something impressive I made up that day to sound really cool in front of a bunch of spiritualists. The tininess part is easy. Our station in this universe is so mindbogglingly tiny that a sense of proportion is the one thing we cannot afford to have, if we are to keep our sanity — as Douglas Adams puts it in one of his books. What goes for the physical near-nothingness of our existence in terms of space also applies to the temporal dimension. We exist for a mere fleeting instant when put in the context of any geological or cosmological timescale. So when I called myself a “little” droplet, I was being kind, if anything.
But being part of something so vast — ah, that is the interesting bit. Physically, there is not an atom in my body that wasn’t part of a star somewhere, sometime ago. We are all made up of stardust, from the ashes of dead stars. (Interesting that they say from dust to dust and from ashes to ashes, isn’t it?) So, those sappy scenes in sentimental flicks, where the dad points to the star and says, “Your mother is up there, sweetheart, watching over you,” have a bit of scientific truth to them. All the particles in my body will end up in a star (a red giant, in our case); the only stretch is that it will take another four and a half billion years. But it does mean that the dust will live forever and end up practically everywhere through some supernova explosion, if our current understanding of how it all works is correct (which it is not, in my opinion, but that is another story). This eternal existence of the purely physical kind is what Schopenhauer tried to draw consolation from, I believe, but it really is no consolation, if you ask me. Nonetheless, we are all part of something much bigger, spatially and temporally – in a purely physical sense.
At a deeper level, my being part of everything comes from the fact that we are both the inside and the outside of things. I know it sounds like I smoked something I wouldn’t like my children to smoke. Let me explain; this will take a few words. You see, when we look at a star, we of course see a star. But what we mean by “see a star” is just that there are some neurons in our brain firing in a particular pattern. We assume that there is a star out there causing some photons to fall on our retina and create the neuronal firing, which results in a cognitive model of what we call the night sky and stars. We further assume that what we see (night sky and stars) is a faithful representation of what is out there. But why should it be? Think of how we hear stuff. When we listen to music, we hear tonality, loudness etc., but these are only cognitive models for the frequency and amplitude of the pressure waves in the air, as we understand sound right now. Frequency and amplitude are very different beasts compared to tonality and loudness — the former are physical causes, the latter are perceptual experiences. Take away the brain and there is no experience, ergo there is no sound — which is the gist of the overused cocktail conundrum of the falling tree in a deserted forest. If you force yourself to think along these lines for a while, you will have to admit that whatever is “out there” as you perceive it is only in your brain as cognitive constructs. Hence my hazy statement that we are both the inside and the outside of things. So, from the perspective of cognitive neuroscience, we can argue that we are everything — the whole universe and our knowledge of it are all patterns in our brain. There is nothing else.
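The sound example can be made concrete: a pure tone, physically, is nothing but a frequency and an amplitude strung out as pressure samples. A minimal sketch of mine; note that “pitch” and “loudness” appear nowhere in the numbers, only in the listener:

```python
import math

def pure_tone(frequency_hz, amplitude, duration_s=0.01, sample_rate=44100):
    """Pressure samples of a sine wave: the physical side of the story."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * frequency_hz * t / sample_rate)
            for t in range(n)]

samples = pure_tone(frequency_hz=440.0, amplitude=0.5)
print(len(samples), round(min(samples), 3), round(max(samples), 3))
# Everything physical about the tone is in these numbers. "Pitch" and
# "loudness" appear only when a brain turns the numbers into an experience.
```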
Want to go even deeper? Well, the brain itself is part of the reality (which is a cognitive construct) created by the brain. So are the air pressure waves, photons, retina, cognitive neuroscience etc. All convenient models in our brains. That, of course, is an infinite regression, from which there is no escape. It is a logical abyss where we can find no rational foothold to anchor our thoughts and crawl out, which naturally leads to what we call the infinite, the unknowable, the absolute, the eternal — Brahman.
I was, of course, thinking of Brahman (and the notion that we are all part of that major oneness) when I cooked up that everything-and-nothing response. But it is all the same, isn’t it, whichever way you look at it? Well, maybe not; maybe it is just that I see it that way. If the only tool you have is a hammer, all the problems in the world look like nails to you. Maybe I’m just hammering in the metaphysical nails whenever and wherever I get a chance. To me, all schools of thought seem to converge to similar notions. Reminds me of that French girl I was trying to impress a long time ago. I said to her, rather optimistically, “You know, you and I think alike, that’s what I like about you.” She replied, “Well, there is only one way to think, if you think at all. So no big deal!” Needless to say, I didn’t get anywhere with her.