Category Archives: Computers

Of computers and gadgets — why your screen goes blank, what kind of hosting you should get, how to get started with blogging, and so on.

On Large Language Models

The Philosophy

Why are they so intelligent? The moment we speak of intelligence, we step into the arena of philosophy. Intelligence presupposes consciousness and awareness, doesn’t it? Or does it? You see, Artificial Intelligence is fraught with philosophical possibilities.

As teachers and writers, we often distinguish among various categories or levels of knowledge, creating hierarchies like Data, Information, Knowledge, and Wisdom, with insights and creativity likely lurking between the last two. I remember using this framework while teaching text analytics, explaining to my students that as one ascends these higher orders, the information density increases. I illustrated this by showing that two numbers—the mean and standard deviation of their scores—could encapsulate the essential performance of the cohort. What I was subtly implying, of course, was that my own place in this hierarchy was closer to the Wisdom end, where highly distilled information is infused with creativity and intellect to yield a neatly packaged product: wisdom.
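To make that distillation concrete, here is a minimal sketch in Python, with invented scores, of how an entire cohort’s performance collapses into two numbers:

# Distillation: an entire cohort's scores reduced to two numbers.
scores = [62, 71, 55, 88, 90, 67, 74, 81, 59, 78]  # invented data

n = len(scores)
mean = sum(scores) / n
variance = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
std_dev = variance ** 0.5

print(f"mean = {mean:.1f}, standard deviation = {std_dev:.1f}")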

What large language models (LLMs) such as ChatGPT suggest, however, is quite the opposite. The process of creating intelligence—or wisdom, depending on your preference—is not one of distillation and concentration but of granulation. In fact, the entire hierarchy from Data to Wisdom may be fundamentally flawed, at worst, or misleading, at best. Allow me to explain.

In my earlier statistical example, where the mean and spread summarize the cohort’s performance, I could make the model generative. For instance, I could predict a new student’s score by assigning the mean value in the absence of any other information. Alternatively, I could draw a score randomly from a normal distribution defined by the given mean and standard deviation.
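A quick sketch of both options in Python, assuming the hypothetical summary numbers below:

import random

mean, std_dev = 72.5, 11.8  # hypothetical cohort summary

# Option 1: in the absence of any other information, assign the mean.
predicted = mean

# Option 2: draw a score at random from a normal distribution
# defined by the same mean and standard deviation.
sampled = random.gauss(mean, std_dev)

print(f"point prediction: {predicted:.1f}, random draw: {sampled:.1f}")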

When people say that LLMs are merely “predicting the next word,” they are essentially assuming the former: that LLMs determine the most probable next word—akin to assigning the mean score to the new student. A more nuanced practitioner might argue that the LLM generates a random word from a statistical model, much like drawing a random score from the normal distribution. Of course, the word-prediction process is far more complex: the model is “large,” and predictions depend heavily on the context of the conversation.
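Here is a toy next-word model in the same spirit. The probability table is invented for illustration; a real LLM computes such distributions with billions of parameters over a much longer context:

import random

# Toy conditional distribution over the next word, given the context.
next_word_probs = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.25, "roof": 0.15},
}

context = "the cat sat on the"
dist = next_word_probs[context]

# The naive reading of "predicting the next word": take the most
# probable one (like assigning the mean score to the new student).
greedy = max(dist, key=dist.get)

# The more nuanced reading: sample from the distribution (like drawing
# a score from the normal distribution).
words, probs = zip(*dist.items())
sampled = random.choices(words, weights=probs)[0]

print(f"greedy: {greedy}, sampled: {sampled}")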

To build on my toy model, I could create sub-models for specific groups—such as males and females, tall and short students, or individuals of different nationalities and backgrounds—to improve prediction accuracy. In my example, however, segmentation reduces statistical power because the data set becomes too fragmented. For language models, on the other hand, segmentation enhances accuracy. Precisely because they are “large,” LLMs do not suffer from statistical power loss. Instead, their predictions improve. In essence, the more granular the model, the better its performance. But this granularity seems to contradict the traditional Data-Information-Knowledge-Wisdom hierarchy. After all, a fully segmented model is equivalent to the data itself, isn’t it? Does this not suggest that the hierarchy is flawed?
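A small simulation, with invented data, makes the power-loss point above visible: every student below is drawn from the same distribution, yet the per-group means scatter widely simply because each segment contains so few observations.

import random
import statistics

random.seed(0)

# Twenty students, all drawn from the same score distribution, each
# tagged with an (irrelevant) group label to segment on.
cohort = [(random.choice("ABCD"), random.gauss(72, 12)) for _ in range(20)]

# Pooled model: one mean estimated from all twenty scores.
pooled = statistics.mean(score for _, score in cohort)
print(f"pooled mean: {pooled:.1f}")

# Segmented model: one mean per group, each from only a handful of
# scores, so the estimates are noisy. That is the loss of power.
for group in "ABCD":
    group_scores = [s for g, s in cohort if g == group]
    if group_scores:
        print(f"group {group}: n={len(group_scores)}, mean = {statistics.mean(group_scores):.1f}")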

So much for this quasi-philosophical exploration of how LLMs work. Let us now turn to why they appear so intelligent, smart, or wise—or at least, knowledgeable. Ultimately, all these terms may point to the same phenomenon.

How to Start an Internet Business

Starting a business online is easier than you think. Succeeding in one is another story, of course. First of all, you need a product or service, which had better be something that people want. In my experience, what people want most is to make money. Anything that helps them make money is a good product. Second, you need a way of collecting money and delivering the product or providing the service in return for payment. Third, you need to get visibility.


MySQL on Mac OSX Yosemite

If you use XAMPP for dev work on your Mac at home, and updated your OS to Yosemite, you may be temporarily distressed when you find that your MySQLd doesn’t start up. The fix is fairly simple.

Edit /Applications/XAMPP/xamppfiles/xampp. (You may have to use sudo to do this.)

Look for:

$XAMPP_ROOT/bin/mysql.server start > /dev/null &

Add unset DYLD_LIBRARY_PATH just above it, so that it looks like this:

unset DYLD_LIBRARY_PATH
$XAMPP_ROOT/bin/mysql.server start > /dev/null &

Restart MySQLd and it should work.

Robotic Takeover

Years ago, I read this fictional story by Marshall Brain called Manna. It describes the robotic takeover of a fast-food chain by an intelligent system.

Marshall Brain, as you may know, is the founder of HowStuffWorks.com and a well-known speaker, teacher and writer. Although he wrote Manna as fiction, he was so certain that it was the way of our future that he actually patented the system he described (if memory serves). Of course, he was right. I just got this link from a friend about how fulfillment centers work — how do you get same-day or next-day delivery on all those mountains of things you order from the Internet? Here is how. It is astonishing how similar this scenario is to what Marshall Brain described in Manna.


High Performance Blogs and Websites

Do you have a website or a blog and feel that it is getting bogged down with heavy traffic? First of all, congratulations — it is one of those problems that webmasters and bloggers would love to have. But how would you solve it? The first thing to do is to enable PHP acceleration, if your site or blog is PHP-based. Although it should be straightforward in theory, it might take a while to get right. You know what they say — in theory, theory and practice are the same; in practice, they are not. Acceleration, however, is low-hanging fruit, and it will go a long way toward solving your problems.
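To see what an accelerator (an opcode cache) buys you, consider a toy model: compile a script once, then reuse the compiled result for every subsequent request. The sketch below illustrates the idea in Python; it is a language-agnostic caricature, not actual PHP accelerator code.

import functools
import time

# Toy illustration of what a PHP accelerator (opcode cache) does:
# compile a script once, then reuse the compiled result on every
# subsequent request instead of recompiling from source each time.
@functools.lru_cache(maxsize=None)
def compile_script(path: str) -> str:
    time.sleep(0.05)  # stand-in for the cost of parsing and compiling
    return f"<bytecode for {path}>"

def handle_request(path: str) -> str:
    bytecode = compile_script(path)  # cache hit after the first request
    return f"executing {bytecode}"

start = time.time()
for _ in range(100):
    handle_request("index.php")
print(f"100 requests in {time.time() - start:.2f}s")  # ~0.05s, not ~5s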

Once you have extracted all the mileage out of the accelerator solution, it is time to incorporate a Content Delivery Network, or CDN. What a CDN does is serve all your static files (images, style sheets, JavaScript files, and even cached blog pages) from a network of servers other than your own. These servers are strategically placed around the continent (and around the globe) so that your readers receive the content from a location geographically close to them. In addition to reducing the latency due to distance, a CDN also reduces the load on your own server.
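The routing idea at the heart of a CDN can be sketched in a few lines. The edge-server names and coordinates below are invented, and squared Euclidean distance is a crude stand-in for the DNS-based, latency-measured routing that real CDNs use:

# Toy sketch of the core CDN idea: serve each reader from the
# geographically nearest edge server. Coordinates are invented.
edge_servers = {
    "us-east":  (40.7, -74.0),
    "eu-west":  (51.5, -0.1),
    "ap-south": (1.35, 103.8),
}

def nearest_edge(reader_lat: float, reader_lon: float) -> str:
    # Squared Euclidean distance as a crude proxy for network latency.
    return min(
        edge_servers,
        key=lambda name: (edge_servers[name][0] - reader_lat) ** 2
                       + (edge_servers[name][1] - reader_lon) ** 2,
    )

print(nearest_edge(48.9, 2.35))  # a reader in Paris -> "eu-west"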


Man as Chinese Room

In the previous posts in this series, we discussed how devastating Searle’s Chinese Room argument was to the premise that our brains are digital computers. He argued, quite persuasively, that mere symbol manipulation could not lead to the rich understanding that we seem to enjoy. I refused to be persuaded, however, and found the so-called systems response more convincing: the counter-argument that it is the whole Chinese Room that understands the language, not merely the operator or symbol pusher inside it. Searle laughed it off, but had a serious response as well. He said, “Let me be the whole Chinese Room. Let me memorize all the symbols and the symbol manipulation rules so that I can provide Chinese responses to questions. I still don’t understand Chinese.”

Now, that raises an interesting question — if you know enough Chinese symbols and enough rules to manipulate them, don’t you actually know Chinese? Of course you can imagine someone being able to handle a language correctly without understanding a word of it, but I think that is stretching the imagination a bit too far. I am reminded of the blindsight experiments, where people could see without knowing it, without being consciously aware of what it was that they were seeing. Searle’s response points in the same direction: being able to speak Chinese without understanding it. What the Chinese Room lacks is conscious awareness of what it is doing.

To delve a bit deeper into this debate, we have to get a bit formal about syntax and semantics. Language has both. For example, a statement like “Please read my blog posts” has syntax originating from the grammar of the English language: symbols that are words (syntactic placeholders), letters and punctuation. On top of all that syntax, it has content: my desire and request that you read my posts, and my background belief that you know what the symbols and the content mean. That is the semantics, the meaning of the statement.

A computer, according to Searle, can only deal with symbols and, based on symbolic manipulation, come up with syntactically correct responses. It doesn’t understand the semantic content as we do. It is incapable of complying with my request because of its lack of understanding. It is in this sense that the Chinese Room doesn’t understand Chinese. At least, that is Searle’s claim. Since computers are like Chinese Rooms, they cannot understand semantics either. But our brains can, and therefore the brain cannot be a mere computer.

When put that way, I think most people would side with Searle. But what if the computer could actually comply with the requests and commands that form the semantic content of statements? I suspect we would still not consider it fully capable of semantic comprehension; even if a computer actually complied with my request and read my posts, I might not find that intellectually satisfying. What we are really demanding, of course, is consciousness. What more can we ask of a computer to convince us that it is conscious?

I don’t have a good answer to that. But I think you have to apply uniform standards in ascribing consciousness to entities external to you — if you believe in the existence of other minds in humans, you have to ask yourself what standards you apply in arriving at that conclusion, and ensure that you apply the same standards to computers as well. You cannot build circular conditions into your standards — such as requiring that others have human bodies, nervous systems and an anatomy like yours before granting that they have minds, which is what Searle did.

In my opinion, it is best to be open-minded about such questions, and important not to answer them from a position of insufficient logic.

Minds as Machine Intelligence

Prof. Searle is perhaps most famous for his proof that computing machines (or computation as defined by Alan Turing) can never be intelligent. His proof uses what is called the Chinese Room argument, which shows that mere symbol manipulation (which is what Turing’s definition of computation amounts to, according to Searle) cannot lead to understanding and intelligence. Ergo, our brains and minds could not be mere computers.

The argument goes like this — assume Searle is locked up in a room where he gets inputs corresponding to questions in Chinese. He has a set of rules to manipulate the input symbols and pick out an output symbol, much as a computer does. So he comes up with Chinese responses that fool outside judges into believing that they are communicating with a real Chinese speaker. Assume that this can be done. Now, here is the punch line — Searle doesn’t know a word of Chinese. He doesn’t know what the symbols mean. So mere rule-based symbol manipulation is not enough to guarantee intelligence, consciousness, understanding etc. Passing the Turing Test is not enough to guarantee intelligence.
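For the flavor of mere rule-based symbol manipulation, here is a toy Chinese Room in Python. The rule-book entries are invented, and a real rule set would have to handle effectively unbounded input, but the point stands: nothing in the code understands the symbols it shuffles.

# A toy Chinese Room: pure symbol manipulation with no understanding.
# The rule book is a lookup table (entries invented for illustration);
# the operator matches input shapes and copies out the paired output.
rule_book = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我叫小明。",  # "What is your name?" -> "My name is Xiaoming."
}

def operator(symbols: str) -> str:
    # The operator neither reads nor understands the symbols; it just
    # matches them against the rules and returns whatever they dictate.
    return rule_book.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(operator("你好吗？"))  # a fluent-looking reply, with zero comprehension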

One of the counter-arguments that I found most interesting is what Searle calls the systems response. It is not Searle in the Chinese Room that understands Chinese; it is the whole system, including the rule set, that does. Searle laughs it off, saying, “What, the room understands Chinese?!” I think the systems response merits more than that derisive dismissal. I have two supporting arguments in its favor.

The first one is the point I made in the previous post in this series. In Problem of Other Minds, we saw that Searle’s answer to the question of whether others have minds rested essentially on behavior and analogy: others behave as though they have minds (in that they cry out when we hit their thumb with a hammer), and their internal mechanisms for pain (nerves, brain, neuronal firings and so on) are similar to ours. The Chinese Room certainly behaves as though it understands Chinese, but it has no analogs of a Chinese speaker’s parts or mechanisms. Is it this break in analogy that prevents Searle from ascribing intelligence to it, despite its intelligent behavior?

The second argument takes the form of another thought experiment — I think it is called the Chinese Nation argument. Let’s say we delegate the work of each neuron in Searle’s brain to a non-English-speaking person. So when Searle hears a question in English, it is actually being handled by billions of non-English-speaking computational elements, which generate the same response as his brain would. Now, where is the English-language understanding in this Chinese Nation of non-English speakers acting as neurons? I think one would have to say that it is the whole “nation” that understands English. Or would Searle laugh it off, saying, “What, the nation understands English?!”

Well, if the Chinese nation could understand English, I guess the Chinese room could understand Chinese as well. Computing with mere symbol manipulation (which is what the people in the nation are doing) can and does lead to intelligence and understanding. So our brains could really be computers, and minds software manipulating symbols. Ergo Searle is wrong.

Look, I used Prof. Searle’s arguments and my counter-arguments in this series as a sort of dialog for dramatic effect. The fact of the matter is, Prof. Searle is a world-renowned philosopher with impressive credentials, while I am a sporadic blogger — a drive-by philosopher at best. I apologize to Prof. Searle and his students if they find my posts and comments offensive; no offense was intended, only an interesting read.

Problem of Other Minds

How do you know other people have minds as you do? This may sound like a silly question, but if you allow yourself to think about it, you will realize that you have no logical reason to believe in the existence of other minds, which is why it is an unsolved problem in philosophy – the Problem of Other Minds. To illustrate – I was working on that Ikea project the other day, and was hammering in that weird two-headed nail-screw-stub thingie. I missed it completely and hit my thumb. I felt the excruciating pain, meaning my mind felt it and I cried out. I know I have a mind because I felt the pain. Now, let’s say I see another bozo hitting his thumb and crying out. I feel no pain; my mind feels nothing (except a bit of empathy on a good day). What positive logical basis do I have to think that the behavior (crying) is caused by pain felt by a mind?

Mind you, I am not suggesting that others do not have minds or consciousness — not yet, at least. I am merely pointing out that there is no logical basis to believe that they do. Logic certainly is not the only basis for belief. Faith is another. Intuition, analogy, mass delusion, indoctrination, peer pressure, instinct and the like are all bases for beliefs, both true and false. I believe that others have minds; otherwise I wouldn’t bother writing these blog posts. But I am keenly aware that I have no logical justification for this particular belief.

The thing about this problem of other minds is that it is profoundly asymmetric. If I believe that you don’t have a mind, it is not an issue for you — you know that I am wrong the moment you hear it, because you know that you have a mind (assuming, of course, that you do). But I do have a serious issue — there is no way for me to disprove my belief in the non-existence of your mind. You could tell me, of course, but then I would think, “Yeah, that is exactly what a mindless robot would be programmed to say!”

I was listening to a series of lectures on the philosophy of mind by Prof. John Searle. He “solves” the problem of other minds by analogy: we know that we have the same anatomical and neurophysiological wiring, in addition to analogous behavior, so we can “convince” ourselves that we all have minds. It is a good argument as far as it goes. What bothers me about it is its complement — what it implies about minds in things that are wired differently, like snakes and lizards and fish and slugs and ants and bacteria and viruses. And, of course, machines.

Could machines have minds? The answer to this is rather trivial — of course they can. We are biological machines, and we have minds (assuming, again, that you guys do). Could computers have minds? Or, more pointedly, could our brains be computers, and minds be software running on them? That is fodder for the next post.

Brains and Computers

We have a perfect parallel between brains and computers. We can easily think of the brain as the hardware and mind or consciousness as the software or the operating system. We would be wrong, according to many philosophers, but I still think of it that way. Let me outline the compelling similarities (according to me) before getting into the philosophical difficulties involved.

A lot of what we know of the workings of the brain comes from lesion studies. We know, for instance, that features like color vision, face and object recognition, motion detection, and language production and understanding are all controlled by specialized areas of the brain. We know this by studying people who have suffered localized brain damage. These functional features of the brain are remarkably similar to specialized computer hardware units for graphics, sound, video capture and so on.

The similarity is even more striking when we consider that the brain can compensate for the damage to a specialized area by what looks like software simulation. For instance, the patient who lost the ability to detect motion (a condition normal people would have a hard time appreciating or identifying with) could still infer that an object was in motion by comparing successive snapshots of it in her mind. The patient with no ability to tell faces apart could, at times, deduce that the person walking toward him at a pre-arranged spot at the right time was probably his wife. Such instances give us the following attractive picture of the brain.
Brain → Computer hardware
Consciousness → Operating System
Mental functions → Programs
It looks like a logical and compelling picture to me.

This seductive picture, however, is far too simplistic at best, or utterly wrong at worst. The basic philosophical problem with it is that the brain itself is a representation drawn on the canvas of consciousness and the mind (which are again cognitive constructs). This abysmal infinite regression is impossible to crawl out of. But even if we set aside this philosophical hurdle and ask whether brains could be computers, we have big problems. What exactly are we asking? Could our brains be computer hardware, and minds be software running on them? Before asking such questions, we have to ask parallel questions: Could computers have consciousness and intelligence? Could they have minds? If they had minds, how would we know?

Even more fundamentally, how do you know whether other people have minds? This is the so-called Problem of Other Minds, which we will discuss in the next post before proceeding to consider computing and consciousness.