Of Brains and Computers

Why AI isn’t getting more human

Conversations about AI are difficult. This is partly because people use different definitions, or do not stand consistently behind any one definition, instead forming their opinions from some fluctuating intuition of what AI is. For example, reacting to news of an improved image-classification algorithm that demonstrates near human-level skill on that task with concerns about AI reaching human-level intelligence or replacing humans is plain wrong. The mention of human-level skill ignites the violent fear of replacement or endangered superiority status while ignoring the fact that high performance on a set task differs fundamentally from general intelligence or emergent consciousness. This article elaborates on why this fear of replacement is unfounded, preceded by a brief discussion of the common analogy between brains and computers, and concludes that narrow AI is the only AI we can realistically discuss. Machine-learning tools perfectly complement our brains’ strengths and weaknesses, and it is because of these complementary characteristics that AI-human collaboration is the effective work setup of the future.

The age-old comparison
Brains are often compared to computers, and aspects of human cognition are likened to algorithms. Following the rise of AI, such resemblances have been increasingly discussed. In what ways are computers, programs, and algorithms similar to brains and neuronal circuits? In virtually all ways, if you believe John von Neumann, a mathematician who was an avid defender of equating the brain to a computing machine. The posthumous publication of his book “The Computer and the Brain” (1958) preceded the philosophical contention of Multiple Realizability, which gave rise to the idea that cognitive states can be implemented in different physical states. The philosopher Hilary Putnam and cognitive scientist Jerry Fodor compared the mind-brain relationship to software and hardware, and further generalized that the same mental state could be realized in a medium other than the brain.

The software-hardware analogy starts breaking down if you consider that cognition (aka the mind) is not some program “run” on the brain: you cannot effectuate a change in cognition without a change in the brain (e.g., temporary synaptic changes). In a way, the mind is an abstraction of the brain, a descriptive model of how functions emerge from the brain’s biological properties, unlike different programs running on a computer (which do not permanently affect the hardware). Nevertheless, there are obvious similarities between the way a brain and a computer transmit information, e.g., both send messages via electrical signals through a seemingly binary system (although neuronal firing is more complex and differentiated). We can find mathematical analogies of cognitive functions and study how the brain solves problems for technical inspiration. After all, it is the product of many million years of evolutionary selection, and the most versatile physical implementation of intelligence that exists.

In a 2017 paper, neuroscientist and DeepMind co-founder Demis Hassabis argues for neuroscience-inspired artificial intelligence, holding that studying brain architecture should be central to AI research, as it is the only existing proof that such an intelligence is even possible. Based on Marr’s levels of analysis, we should look at the brain on an algorithmic and computational level, acknowledging the obvious differences in implementation, to gain transferable insights into how intelligence can be constructed and realized. We recognize biology as a source of inspiration in state-of-the-art convolutional neural networks, which share several characteristics of neural computation such as nonlinear transduction, divisive normalisation, and max pooling of inputs (Yamins and DiCarlo, 2016). These operations are derived from single-cell recordings of the mammalian visual cortex; on a higher level, we also find neural network architectures that loosely follow the hierarchical organisation of cortical systems. Similarly, biologically inspired computation on multiple levels has been applied to methods in natural language processing (see Hassabis’ mention of Hinton et al., 2012). Reinforcement learning, one of the hottest methods in AI research, is also rooted in animal learning (more specifically, in the idea of temporal-difference methods from animal conditioning experiments).
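The three operations mentioned above can be sketched in a few lines of NumPy. This is a toy 1-D illustration, not code from any particular model: the function names, the pooling window, and the normalization constant are all illustrative assumptions.

```python
import numpy as np

def relu(x):
    # Nonlinear transduction: a neuron-like unit responds only above
    # threshold; negative inputs are silenced.
    return np.maximum(0.0, x)

def divisive_normalization(x, sigma=1.0):
    # Each response is divided by the pooled activity of its neighbours,
    # dampening strong responses when overall activity is high.
    return x / (sigma + np.sum(np.abs(x)))

def max_pool_1d(x, window=2):
    # Max pooling: keep only the strongest response in each local window.
    trimmed = x[: len(x) // window * window]
    return trimmed.reshape(-1, window).max(axis=1)

responses = np.array([-1.0, 2.0, 0.5, 3.0, -0.5, 1.0])
activated = relu(responses)              # -> [0. 2. 0.5 3. 0. 1.]
pooled = max_pool_1d(activated)          # -> [2. 3. 1.]
normalized = divisive_normalization(pooled)
```

Chained together, the three steps mimic a crude feedforward stage of visual processing: thresholded firing, local winner-take-all, and gain control by the surrounding population.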

…but different
Heuristic recourse to the brain, and the translation of certain efficient methods of its neural circuits into algorithmic architectures, has led to great advances within computer science and AI research. But this does not take away from the fact that artificial intelligence and its foreseeable development are qualitatively, drastically different from human intelligence. While we can find or create mathematical analogies of function, the bottom line is that artificial intelligence, in its narrow specificity on the one hand and superhuman accuracy and processing speed on the other, differs categorically from general human intelligence: brains and AI systems are good at very different things. Some of these differences stem from neurobiological features of the brain. Information processing in the brain depends on a range of constraints (e.g., myelination, availability of neurotransmitters, neurotransmitter diffusion time, and the prior history of neuronal firing), in contrast to the comparatively straightforward mechanics of a computer (the fixed clock speed of a microprocessor).

The fallacy
The idea of current state-of-the-art AI naturally progressing to human levels with growing power rests on the mistaken assumption that we are moving up a single linear spectrum of intelligence, from low to high. When talking about the future of AI, we often see images of an “intelligence spectrum” like these:

Original articles for the left and right figures are from the popular site waitbutwhy.com.

The issue with graphs like these is that presenting intelligence as a one-dimensional, linearly increasing characteristic ignores the qualitative differences between types of intelligence. While “weak” AI is getting more powerful within the narrow domains of its particular applications, e.g., image classification, this advancement will never equal general intelligence, which is qualitatively different. The key distinction, which these graphs ignore, is between strong and weak, or general and narrow, AI. Narrow AI systems display intelligence within a particular field and thus perform highly specialized tasks. General AI is the idea of a system that would resemble the intertwined range of cognitive abilities and general understanding of humans; such systems are still the stuff of sci-fi movies and are not just “more” or “stronger” narrow AI. The issue arises when we, in response to a new feat of narrow AI, start discussing worries or exaggerated hopes that would only apply to general AI. The underlying assumption of these graphs is that one form simply precedes the other, and that high-performing narrow AI should be expected to transition into generalized (human-like) intelligence, and from there swiftly to super-human levels (a notion perhaps supported by the unfortunate terminology of weak and strong: after all, weakness can turn into strength with sufficient training). This is anything but given. Individual ML algorithms can perform specific tasks, e.g., a genetic algorithm predicting the binding affinities of proteins, or a convolutional neural network performing face detection. Think of the immense number of tasks our brain can perform within the field of perception alone, and take into account the incredible interconnectedness of our cognitive functions — unlike computer systems, our brain is not modular, i.e., it is not an arrangement of separate functional units.
A nudity-detection algorithm suddenly developing semantic understanding, or the prospect of assembling countless specialized algorithms in a way that approximates the architecture of over 10^14 synapses, is not even remotely within the scope of our near future. However, seeing machine learning as a toolbox for automation does not take away from the transformative power that these tools hold.

The strength of “weak” AI
Deep reinforcement learning is revolutionizing novel drug design by generating models of chemical compounds that have specific desired physical, chemical, and bioactive properties (Popova et al., 2018). In 2016, Google reduced its energy consumption for cooling by up to 40% with the help of highly accurate deep-learning-based predictions of power usage effectiveness. These applications are fascinating, powerful, and radically innovative, and also completely dissociated from any progression towards consciousness or general understanding. We must remember that AI is not on a natural progression to human intelligence, and that these complementary strengths are innately linked to each system’s structure and material constraints. Efforts to make deep learning algorithms more similar to human cognitive architectures, like DiCarlo’s 2018 CORnet models, or endeavours like Markram’s Human Brain Project, are fascinating, but their similarities to our biological neural nets are artificially constructed. AI does not develop like our brain by itself — any similarities must be engineered. In conclusion, AI is not getting more human, and yet it is the most fascinating technology we could ever have dreamed of.

All Rights Reserved for Arianna Dorschel
