Monday, April 24, 2017

Professor Hubert Dreyfus (1929 - 2017)


UC Berkeley Professor Hubert Dreyfus has passed away at the age of 87. Professor Dreyfus was a hero of mine. He was a fearless rebel at heart, the first to criticize the AI community for its symbolic AI nonsense. They hated him for it, but he was right, of course. Did the AI community ever apologize for their personal attacks on him? Of course not. The AI community has always been full of itself, and it still is.

Dreyfus contributed more to the field of artificial intelligence than its best practitioners. His insistence that the brain does not model the world is an underappreciated tour de force. His ability to connect the works of his favorite philosophers (Martin Heidegger, Maurice Merleau-Ponty) to the workings of the brain was, in my opinion, his greatest intellectual achievement. I wrote an article about this topic in July of last year. Please read it to appreciate the depth of Dreyfus's understanding of a field that rejected him.

The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

The world owes Professor Dreyfus a debt of gratitude. Thank you, Professor.

Monday, April 10, 2017

Signals, Sensors, Patterns and Sequences

[Note: The following is an excerpt from a paper I am writing as part of the eventual release of the Rebel Speech demo program, the world's first unsupervised audio classifier. I have not yet set a date for the release. Please be patient.]

Abstract

Signals, sensors, patterns and sequences are the basis of the brain's amazing ability to understand the world around it. In this paper, I explain how the brain uses them for perception and learning. Although I delve a little into the neuroscience at the end, I restrict my explanation mostly to the logical and functional organization of the cerebral cortex.

The Perceptual System

Four Subsystems

Perception is the process of sensing and understanding physical phenomena. The brain's perceptual system consists of four subsystems: the world, the sensory layer, pattern memory and sequence memory. Both pattern and sequence memories are unsupervised, feedforward, hierarchical neural networks. As explained later, the term "memory" is somewhat inadequate; the networks are really high-level, complex sensory organs. An unsupervised network is one that can classify patterns, objects or actions in the world directly from sensory data. A feedforward network is one in which input information flows in only one direction. A hierarchical network is organized like a tree: higher-level items are composed of lower-level ones.
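
To make this logical organization concrete, here is a minimal structural sketch in C#, my language of choice. The type and member names are illustrative only; none of this is the actual Rebel Speech code:

```csharp
// A minimal structural sketch of the four subsystems. The type and
// member names are illustrative only, not the Rebel Speech code.
using System.Collections.Generic;

// A discrete, precisely timed signal (a spike).
public record Spike(int SensorId, double TimeMs);

public interface ISensoryLayer
{
    // Converts minute changes in the world into timed discrete signals.
    IEnumerable<Spike> Sense(double[] worldSample, double timeMs);
}

public interface IPatternMemory
{
    // Combines concurrent spikes into small concurrent patterns and
    // returns the ids of the patterns detected.
    IEnumerable<int> DetectPatterns(IEnumerable<Spike> spikes);
}

public interface ISequenceMemory
{
    // Detects sequences (transformations) of patterns over time and
    // returns the ids of any recognized top-level sequences (objects).
    IEnumerable<int> DetectSequences(IEnumerable<int> patternIds, double timeMs);
}
```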

The world is the main perceptual subsystem because it dictates how the rest of the system is organized. The brain learns to make sense of the way the world changes over time. Elementary sensors in the sensory layer detect minute changes in the world (transitions) and convert them into precisely timed discrete signals that are fed to pattern memory, where they are combined into small concurrent patterns. These are commonly called "spatial" patterns, but the label is misleading: concurrent patterns are inherently temporal and are used by all sensory modalities, not just vision.
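
As an illustration, here is a minimal C# sketch of such an elementary sensor, reusing the Spike record from the sketch above. I assume, purely for illustration, that a minute change means the input crossing a fixed amplitude threshold:

```csharp
// A minimal sketch of an elementary transition sensor. The fixed
// amplitude threshold is my own illustrative choice.
using System;

public sealed class TransitionSensor
{
    private readonly int _id;
    private readonly double _threshold;
    private double _last;

    public TransitionSensor(int id, double threshold)
    {
        _id = id;
        _threshold = threshold;
    }

    // Emits a precisely timed spike when the input changes enough,
    // otherwise null. Only the time of the transition is reported;
    // the magnitude itself is discarded.
    public Spike? Detect(double input, double timeMs)
    {
        bool fired = Math.Abs(input - _last) >= _threshold;
        _last = input;
        return fired ? new Spike(_id, timeMs) : null;
    }
}
```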

Signals from pattern detectors travel to sequence memory where sequences (transformations) are detected. Sequence memory is the seat of attention and of short- and long-term memory. It is also where actual object recognition occurs. An object is a top-level sequence, i.e., a branch in the sequence hierarchy. A recognition event is triggered when the number of signals arriving at a top sequence detector surpasses a preset threshold. Recognition signals from sequence memory are fed back to pattern memory; they are part of the mechanism the brain uses to deal with noisy or incomplete patterns in the sensory stream.
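
For concreteness, here is a minimal C# sketch of such a top-level detector. The counting and reset policy are my own illustrative choices; the actual mechanism is more involved:

```csharp
// A minimal sketch of a top-level sequence detector. Recognition fires
// when the number of signals arriving within the current recognition
// cycle surpasses a preset threshold; the Recognized event stands in
// for the feedback signals sent back to pattern memory.
using System;

public sealed class TopSequenceDetector
{
    private readonly int _objectId;
    private readonly int _threshold;
    private int _signalCount;

    public TopSequenceDetector(int objectId, int threshold)
    {
        _objectId = objectId;
        _threshold = threshold;
    }

    // Subscribers (pattern memory) receive the id of the recognized object.
    public event Action<int>? Recognized;

    public void OnSignal()
    {
        if (++_signalCount >= _threshold)
        {
            Recognized?.Invoke(_objectId); // feedback to pattern memory
            _signalCount = 0;              // start a new recognition cycle
        }
    }
}
```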

Sequence memory can also generate motor signals but that is beyond the scope of this paper. What follows is a detailed description of each of the four subsystems.

(to be continued)

Thursday, March 16, 2017

Thalamus Prediction

Concurrent Pattern Hierarchy

This is just a short post to make a quick prediction about the internal organization of the thalamus, a relatively small but complex area of the brain that is thought to serve primarily as a relay center between various sensors and the sensory cortex. Given my current understanding of the brain and intelligence, I predict that the parts of the thalamus that process sensory signals (e.g., the lateral and medial geniculate nuclei) will be found to be hierarchically organized. The function of the hierarchy is to discover small concurrent patterns in the sensory space. These are commonly called "spatial patterns" in neuroscience, but I don't like the word "spatial" for these patterns because I find it misleading; in my view, all patterns are temporal, even visual ones. Here are some of the characteristics of the thalamic pattern hierarchy as predicted by my current model:
  • The hierarchy consists of a huge number of pattern detectors organized as binary trees.
  • The bottom level of the hierarchy receives signals from sensors.
  • The hierarchy has precisely 10 levels. This means that the most complex patterns have 1024 inputs.
  • Every level in the hierarchy makes reciprocal connections with the first level of the cerebral cortex.
  • Every pattern detector receives recognition feedback signals from the first level of the cerebral cortex.
The cerebral cortex (sequence memory) can instantly stitch these elementary patterns together to form much bigger entities of arbitrary complexity. A number of researchers in artificial general intelligence (AGI), such as Jeff Hawkins and Subutai Ahmad of Numenta, assume (incorrectly in my view) that both concurrent and sequential patterns are learned and detected in the cortical columns of the cerebral cortex. In my model of the cortex, the cortical columns are used exclusively for sequence learning and detection while concurrent patterns are learned and recognized by the thalamus.

Stay tuned.

Edit 3/16/2017, 2:42 PM:

I should have elaborated further on the binary tree analogy. I prefer to call it an inverse or upside-down binary tree. That is to say, each node (pattern detector) in the tree receives only two inputs from lower-level nodes, while each node may send output signals to any number of higher-level nodes. It is a binary tree in the sense that the number of inputs doubles each time one climbs a level in the hierarchy.
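
For concreteness, here is a minimal C# sketch of such an upside-down binary tree. The prediction specifies only the structure, so no learning rule is shown; the names are illustrative:

```csharp
// A minimal sketch of the predicted pattern hierarchy: each detector
// has exactly two lower-level inputs, so a detector at level n covers
// 2^n elementary sensor inputs (2^10 = 1024 at the tenth level).
public sealed class PatternDetector
{
    public PatternDetector? Left { get; }
    public PatternDetector? Right { get; }
    public int Level { get; }

    // Number of elementary sensor inputs covered by this detector;
    // it doubles at every level, as described above.
    public int InputCount => 1 << Level;

    // Level 0: a leaf fed directly by one elementary sensor.
    public PatternDetector() => Level = 0;

    // An internal node combining two detectors from the level below.
    public PatternDetector(PatternDetector left, PatternDetector right)
    {
        Left = left;
        Right = right;
        Level = left.Level + 1; // assumes both children share a level
    }
}
```

Note that nothing in this structure prevents a node from being referenced by many higher-level nodes, which is why the tree is inverse: two inputs per node, any number of outputs.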

Saturday, January 7, 2017

Raising Money for AI Research

Smartphone Apps

I refuse to solicit or accept money from anyone to finance my research because I don't want to be indebted to or controlled by others. So I recently came up with a plan to put some of the knowledge I have acquired over the years to good use, and to do it in a way that does not reveal my hand too much. I am working on two intelligent mobile applications, described below. Let me know if you think they might be useful to you.

1. Crystal Clear Smartphone Conversations

The first app will filter out all background sounds other than the user's voice during a call. It will also repair or clean up the user's voice by filling in missing signals if necessary. It can be activated or deactivated at the touch of a button. Advantage: crystal clear conversations.

2. Voice-based Security

The second app will use both voice and speech recognition to eliminate passwords. It does this by asking the user to read a random word or phrase. The app can be used for unlocking the phone, accessing accounts, etc. If your voice changes over time, or if you want to give someone else access to your accounts, the app can be reset in an instant. Advantage: high security with no passwords to remember. A minimal sketch of the flow follows.
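
Here is the sketch in C#. The two interfaces are hypothetical placeholders for the recognition engines; this is not the app's actual code:

```csharp
// A minimal sketch of the challenge-response flow described above.
// ISpeechRecognizer and IVoiceVerifier are hypothetical interfaces.
using System;

public interface ISpeechRecognizer { string Transcribe(byte[] audio); }
public interface IVoiceVerifier   { bool MatchesEnrolledVoice(byte[] audio); }

public sealed class VoiceUnlock
{
    // Challenge phrases invented for illustration.
    private static readonly string[] Phrases =
        { "blue morning river", "seven quiet lanterns", "amber fox at noon" };

    private readonly ISpeechRecognizer _speech;
    private readonly IVoiceVerifier _voice;
    private readonly Random _rng = new();

    public VoiceUnlock(ISpeechRecognizer speech, IVoiceVerifier voice)
    {
        _speech = speech;
        _voice = voice;
    }

    public string NextChallenge() => Phrases[_rng.Next(Phrases.Length)];

    // Unlock only if the right phrase was spoken (speech recognition)
    // by the right person (voice recognition).
    public bool TryUnlock(string challenge, byte[] audio) =>
        _voice.MatchesEnrolledVoice(audio) &&
        string.Equals(_speech.Transcribe(audio), challenge,
                      StringComparison.OrdinalIgnoreCase);
}
```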

Development

Although I think the first app has a better chance of being successful, I believe the second one is also doable. Some in the voice authentication and security business may disagree, but the human voice is very much like a fingerprint: every voice is unique in subtle ways that current technologies may not be able to capture. I use Microsoft Visual Studio and C# exclusively for programming. I will be using the Xamarin cross-platform tools to deploy the apps on Windows Phone, iPhone and Android phones. I don't anticipate needing GPU coprocessing.

I will release beta-test versions as soon as they are ready. Given my schedule, I expect the first app to be ready in two or three months.

The Ultimate Goal

If either app is successful, I may venture into the hearing aid business. My plan is to generate enough funds to finance an artificial intelligence and computer research and development company. I believe that the requirements of true intelligence call for a new type of computer hardware and a better way to create software. My ultimate goal (or dream) is to build a truly intelligent bipedal robot that can do all your chores around the house, such as cleaning, preparing food, babysitting the kids, doing the laundry and gardening. A tall order, I know.

Wednesday, November 30, 2016

True Artificial Intelligence Will Arrive Suddenly and Will Stun the World

Abstract

In this article, I argue that true artificial intelligence, aka artificial general intelligence or AGI, may arrive on the world scene within the next ten to fifteen years, or even sooner. It will not be a gradual process: it will arrive suddenly and take the world completely by surprise.

True AI Will Not Come from the Mainstream AI Community

When I say that the arrival of true AI will take the world by surprise, what I mean is that it will come from an unexpected place. Don't wait for the mainstream AI community to figure out intelligence. That will not happen. Knowing what I know about the brain and intelligence, I have no doubt that mainstream AI scientists are completely clueless as to how to even approach the problem. They are clueless because over 99% of AI research money currently goes into funding deep learning, which, as I have explained elsewhere, is a hindrance to progress toward true AI. The most important ingredient in intelligence is time. And yet, amazingly, time is a mere afterthought in AI research, especially in deep learning.

There are a handful of AI researchers who do understand the crucial importance of time to intelligence but, as I explained in my previous article, they are handicapped by their continued adherence to a representational approach to intelligence. In other words, in spite of all the hype, they are still doing symbolic AI or GOFAI. Please read The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI for more on this topic.

Another obvious reason that the mainstream AI community is clueless is that they believe the brain performs some kind of massive parallel computation on sensory inputs. They assume that the brain continually generates an internal model of the world using statistical calculations on its input signals. The problem with this view is that neurons are far too slow for that kind of signal processing. The surprising truth is that the brain does not compute anything when it perceives the world. The brain assumes that the world is deterministic and lets the world do its own computations. It learns how the world behaves and expects that behavior to be perfect and never deviate. The mechanism is akin to an automatic coin sorting machine, in which the different sizes of the coins automatically determine which slots they belong in.
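
To make the analogy concrete, here is a toy coin sorter in C#. The diameters and slot names are my own rough illustrative values; the point is only that the slot is determined directly by the coin's size, with no statistics or internal model anywhere:

```csharp
// A toy version of the coin-sorter analogy: the slot is determined
// directly by the coin's size. Sizes are rough illustrative values.
public static class CoinSorter
{
    // Each slot admits coins up to a given diameter (mm), smallest first.
    private static readonly (double MaxDiameter, string Slot)[] Slots =
    {
        (18.0, "dime"), (20.0, "penny"), (22.0, "nickel"), (25.0, "quarter")
    };

    public static string Sort(double diameterMm)
    {
        // The machine computes nothing; the coin simply falls until a
        // hole fits it, the way the deterministic world does its own work.
        foreach (var (max, slot) in Slots)
            if (diameterMm <= max) return slot;
        return "reject";
    }
}
```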

True AI Will Arrive Suddenly

A truly intelligent system, such as the human brain, consists of multiple, highly integrated modules. What I mean is that every module that makes up an intelligent system has a specific function, organization and operation that complement the other modules. No single module can function in isolation; it is not possible to solve one aspect of intelligence without also solving all the others. In other words, one cannot understand sensory perception without also understanding motor behavior, and vice versa. There will be no gradual evolution in which advances are made a little at a time and machines grow steadily more intelligent until they reach human-like intelligence. True AI will appear suddenly.

The Secret of True AI Will Come from a Completely Unexpected Source

The most surprising thing about the arrival of true AI on the world scene will not be that it is finally here (although that will certainly make the front pages) but where it came from. I am not going to say too much about this other than the following. True AI is so counterintuitive that it would take us (humanity) hundreds, if not thousands, of years to figure it out on our own. Fortunately for us, there is an ancient source of scientific knowledge about the brain and intelligence that the world has chosen to ignore. I have worked for more than a decade to decipher and understand this knowledge, and I have made great progress. But whether or not I publish my work is not up to me. The only caveat here is that I am a known internet nut. Stay tuned.

The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI
Why Deep Learning Is a Hindrance to Progress Toward True AI
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut

Sunday, July 10, 2016

The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

In Memoriam: Professor Hubert Dreyfus (1929 - 2017)

Abstract

In this article, I argue that mainstream artificial intelligence is about to enter a new AI winter because, in spite of claims to the contrary, its practitioners are still using a representational approach to intelligence, aka symbolic AI or GOFAI. This is a criticism that Hubert Dreyfus has been making, to no avail, for half a century. I further argue that the best way to shed the representationalist baggage is to abandon the observer-centric approach to understanding intelligence and adopt a brain-centric approach. On this basis, I conclude that timing is the key to unlocking the secrets of intelligence.

The World Is its Own Model

Hubert Dreyfus is a professor of philosophy at the University of California, Berkeley. Dreyfus has been the foremost critic of artificial intelligence research (What Computers Still Can't Do) since its early days. The AI community hates him for it. Here we are, many decades later, and Dreyfus is still right. Drawing on the work of the famed German philosopher Martin Heidegger and the French philosopher Maurice Merleau-Ponty, Dreyfus has not changed his argument in all those years. Using Heidegger as a starting point, he argues that the brain does not create internal representations of objects in the world. The brain simply learns how to see the world directly, something that Heidegger referred to as presence-at-hand and readiness-to-hand. Dreyfus gave a great example of this in his paper Why Heideggerian AI Failed and how fixing it would require making it more Heideggerian (pdf). He explained how roboticist Rodney Brooks solved the frame problem by moving away from the traditional but slow model-based approach to a non-representational one:
The year of my talk, Rodney Brooks, who had moved from Stanford to MIT, published a paper criticizing the GOFAI robots that used representations of the world and problem solving techniques to plan their movements. He reported that, based on the idea that “the best model of the world is the world itself,” he had “developed a different approach in which a mobile robot uses the world itself as its own representation – continually referring to its sensors rather than to an internal world model.” Looking back at the frame problem, he writes:
And why could my simulated robot handle it? Because it was using the world as its own model. It never referred to an internal description of the world that would quickly get out of date if anything in the real world moved.
Deep Learning's GOFAI Problem

By and large, the mainstream AI community continues to ignore Dreyfus and his favorite philosophers. Indeed, they ignore everyone else, including the psychologists and neurobiologists who are more than qualified to know a thing or two about intelligence and the brain. AI's biggest success, deep learning, is just GOFAI redux. A deep neural network is actually a rule-based expert system; AI programmers simply found a way (gradient descent, fast computers and lots of labeled, pre-categorized data) to create the rules automatically. The rules are of the form "if A then B," where A is a pattern and B is a label or symbol representing a category.
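
To illustrate the point, here is a deliberate caricature in C#: once training is done, the network behaves like a fixed table of pattern-to-label rules. This is an analogy, not a claim about how any deep learning library is implemented:

```csharp
// A caricature of a trained network as a fixed table of
// pattern-to-label rules of the form "if A then B".
using System;
using System.Collections.Generic;

public sealed class RuleBasedClassifier
{
    // Rule: if pattern A (here, a feature key) then label B.
    private readonly Dictionary<string, string> _rules = new();

    public void LearnRule(string pattern, string label) =>
        _rules[pattern] = label;

    // Brittle by construction: a pattern with no matching rule fails
    // outright, mirroring the failure mode described in the next paragraph.
    public string Classify(string pattern) =>
        _rules.TryGetValue(pattern, out var label)
            ? label
            : throw new InvalidOperationException("No rule for this pattern.");
}
```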

The problem with expert systems is that they are brittle: presented with a situation for which there is no rule, they fail catastrophically. This is what happened back in May to one of Tesla Motors' cars while on autopilot. The neural network failed to recognize a situation and caused a fatal accident. This is not to say that deep neural nets are bad per se. They are excellent in controlled environments, such as the factory floor, where all possible conditions are known in advance and humans are kept at a safe distance. But letting them loose in the real world is asking for trouble.

As I explain below, the AI community will never solve these problems until they abandon their GOFAI roots and their love affair with representations.

The Powerful Illusion of Representations

The hardest thing for AI experts to grasp is that the brain does not model the world. They have all sorts of arguments to justify their claim that the brain creates representations of objects in the world. They point out that fMRI scans can pinpoint areas of the brain that light up when a subject is thinking about a word or a specific object. They argue that imagination and dreams are proof that the brain creates representations. These are powerful arguments and, in hindsight, one cannot fault the AI community too much for believing in the illusion of representations. But then again, it is not as if knowledgeable thinkers such as Hubert Dreyfus have not pointed out the fallacy of their approach. Unfortunately, mainstream AI is allergic to criticism.

Why the Brain Does Not Model the World

There are many reasons; here are a few:
  • The brain has to continually sense the world in real time in order to interact with it. The perceptions last only a short time and are mostly forgotten afterwards. If the brain had a stored (long-term) model of the world, it would only need to update the model occasionally. There are not enough neurons in the brain to store a model of the world and, besides, the brain's neurons are too slow to perform the complex computations that an internal model would require.
  • It takes the brain a long time (years) to build a universal sensory framework that can instantly perceive an arbitrary pattern. However, when presented with a new pattern (which is almost all the time, since we rarely see the same exact thing more than once), the cortex instantly adapts existing memory structures to see the new pattern; no new structures are learned. A neural network, by contrast, must be trained on many samples of the new pattern. It follows that the brain does not learn to create models of objects in the world. Rather, it learns how to sense the world by figuring out how the world works.
  • The brain should be understood as a complex sensory organ. Saying that the brain models the world is like saying that a sensor models what it senses. The brain builds a huge collection of specialized sensors that sense all sorts of phenomena in the world. The sensors are organized hierarchically, but they are still just sensors (detectors) that respond directly to specific phenomena in the world. For example, we may have a high-level sensor that fires when grandma comes into view, but it is not a model of grandma. Our brain cannot model anything outside itself, because our eyes do not see grandma; they just sense changes in illumination. To model something, one must have access to both the subject and the object: an artist can model something by looking at both the subject and the painting. The brain must sense things directly. It has only the signals from its senses to work with.
To Understand the Brain, Be the Brain

The most crippling mistake that most AI researchers make is that they try to understand intelligence from the point of view of an outside observer. Rather, they should try to understand it from the point of view of the intelligence itself. They need to adopt a brain-centric approach to AI as opposed to an observer-centric approach. They should ask themselves, what does the brain have to work with? How can the brain create a model of something that it cannot see until it learns how to see it?

Once we put ourselves in the brain's shoes, so to speak, representations no longer exist because they make no sense. They simply disappear.

Timing is the Key to Unsupervised Learning

The reason that people like Yann LeCun, Quoc Le and others in the machine learning community are having such a hard time with unsupervised learning (the kind of learning that people do) is that they do not try to "see" what the brain sees. The cortex only has discrete sensory spikes to work with. It does not know or care where they come from. It just has to make sense of the spikes by figuring out how they are ordered. Here is the clincher. The only order that can be found in multiple sensory streams of discrete signals is temporal order: they are either concurrent or sequential. Timing is thus the key to unsupervised learning and everything else in intelligence.
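
Here is the idea reduced to a few lines of C#. The 10-millisecond coincidence window is an arbitrary value chosen for illustration; the principle is simply that two discrete signals are either concurrent or sequential, nothing else:

```csharp
// The only two temporal relations available between discrete signals.
// The coincidence window is an assumed illustrative value.
public enum TemporalOrder { Concurrent, Sequential }

public static class SpikeTiming
{
    private const double WindowMs = 10.0; // assumed coincidence window

    public static TemporalOrder Classify(double spikeAMs, double spikeBMs) =>
        System.Math.Abs(spikeAMs - spikeBMs) <= WindowMs
            ? TemporalOrder.Concurrent
            : TemporalOrder.Sequential;
}
```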

One only has to look at the center-surround design of the human retina to realize that the brain is primarily a complex timing mechanism. It may come as a surprise to some that we cannot see anything unless there is motion in the visual field; this is why the human eye continually makes tiny movements called microsaccades. Movements in the visual field generate precisely timed spikes that depend on the direction and speed of the movements. The way the brain sees is completely different from the way computer vision systems work. They are not even close.

New AI Winter in the Making

Discrete signal timing should be the main focus of AI research, in my opinion. Timing in the brain is very precise, on the order of milliseconds, something neurobiologists and psychologists have known for decades. But the AI community thinks they know better. They don't. They are lost in a world of their own making. Is it any wonder that their field goes from one AI winter to the next? Artificial intelligence research is entering a new winter as I write this, but most AI researchers are not aware of it.

See Also

Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI
Why Deep Learning Is a Hindrance to Progress Toward True AI
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut

Monday, July 4, 2016

Why We Have a Supernatural Soul

Abstract

In this article, I argue that we consciously experience something that is provably nonexistent in the physical or material universe. Therefore, it can only be the result of a non-material entity.

From Neuronal Pulses to the Illusion of Distance

To deny the existence of an immaterial or supernatural soul is to stop believing one's own eyes. The amazing colorful 3D vista we think we see in front of our eyes is entirely supernatural. Why? Because there is no 3D vista in our visual cortex or anywhere else; our visual cortex and our entire brain are just a bunch of firing neurons. Space and distance are not functions or properties of neuronal pulses. Every pulse is pretty much identical to every other. The only thing that matters in the brain, as far as intelligence is concerned, is the temporal relationship between the pulses: they are either concurrent or sequential.

We certainly do not sense biochemicals and electric pulses flowing through our axons, synapses and dendrites. We see a fabulous, dynamic model of the world in glorious 3D. Something must have translated those neuronal firings into a colorful 3D vista. Call it spirit, soul or whatever. But it certainly exists and it is not material, a billion materialists claiming otherwise notwithstanding.

Why (Space) Distance Is an Illusion of the Soul/Spirit

It can be logically shown that space (distance) does not exist at all. It is an illusion, i.e., a creation of the mind. I posted an article on this topic back in 2010; let me just repeat the main argument here. The reason that space/distance is an illusion is that the existence of space leads to an infinite regress. Over the years, I have found that almost everything that is fundamentally wrong with classical physics has to do with infinite regress. Note that physical space is defined as a collection of positions existing apart from particles. The idea is that, in order for any physical entity or property to exist, it must exist at a specific position in space. But if a position is a physical entity that exists, it, too, must exist at a specific position. In other words, if space exists, where is it? One can posit a meta-space for space, and a meta-meta-space for the meta-space, but this quickly turns into an infinite regress. The only possible conclusion is that there is no such thing as space. It is an illusion of perception.

The Society of the Soul

Again, we must ask the essential question. If space/distance does not exist, why do we see and consciously experience a 3D vista? Where does it come from? The answer should be obvious. Since it comes from neither the brain nor the external physical universe, it must come from some other realm, a parallel but complementary realm. It must be a non-physical phenomenon. This is undeniable.

I hypothesize that every soul/spirit consists of a huge number of individual parts (call them qualia, if you wish), each of which is distinct from the others but all of which belong to a single entity, the soul. The function of a quale is to give a unique meaning to the neuronal pulse it is associated with, in order to distinguish it from the others. In other words, there is a unique quale for every conscious pulse event in the cortex. The illusion of space is a manifestation of a subassembly of "positional" qualia. The soul is thus a society of qualia.

Conscious versus Unconscious Neurons

But what about the cerebellum, which is completely unconscious while being very active during waking hours? The cerebellum is a parallel brain, a pure automaton: a supervised sensorimotor machine that handles routine tasks for us (e.g., walking, balancing, maintaining posture) while the conscious cortex is busy thinking about other things. Why is it unconscious? Obviously, as an automaton, it does not need to be conscious. Its function is not to pay attention to anything in particular but to make it possible for the brain to focus on more important matters. Without it, we would not be able to walk and speak, or even think, at the same time.

In my opinion, future neurological studies will reveal a fundamental physicochemical difference between the workings of cortical neurons and cerebellar neurons. There is something qualitatively special about the physiology of some (not all) cortical neurons that makes it possible for the qualia to interface with them. I am also willing to bet that future experimental research will show that this special property is missing in animals; only human cortices have it.

See Also

Why Space (Distance) Is an Illusion