Notes On Intelligent Machine Design: Sapient Mimicry

The prospect of human-like machine intelligence seems to dazzle and thrill the public to no end. Consider the 2018 Scientific American article titled A New Supercomputer Is The World's Fastest Brain-Mimicking Machine, which discusses brain emulation at great length. The principal question that few people are asking in relation to the topic, however, is this: why start from the design premise that the [intelligent] machine should be as maximally similar to us [humans] as possible?

We already know, by and large, what the human system can and cannot do; we just do not know precisely how in every detail (the brain, for instance, is not fully understood, which is why it cannot yet be replicated). In the design of non-intelligent machines the normative principle is to account for operations which humans cannot perform, rather than for operations which they can. Yet when designing intelligent machines the desire is completely different and the movement is towards maximal sapiency. There are some good reasons to emulate human brain function, such as in the design of a partial cortex-replacement module for brain-damaged patients, but in most fields of machine intelligence maximal similarity is not required. Indeed, one would actually have to degrade certain present machine capabilities to make an intelligent machine maximally similar to ourselves, because an intelligent machine of average human intelligence (IQ 100) could do numerous things that humans cannot, and could do them much faster.

Consider the relevant numbers. Neurons, the nerve cells which process and transmit electrochemical signals in our brains, fire at most around 200 times per second, that is, roughly once every 5 milliseconds. There are approximately 100 billion neurons in any given human brain, and each neuron connects to roughly 1,000 others. Thus the simple equation: 100 billion x 200 x 1,000 = 20 million billion signals transmitted per second. Such a large number might strike one as indicative of great speed, but the transmission rate of a system means little unless it is compared to some other information-exchange system, and the human brain is quite slow compared to signalling over copper wire, slower still compared to fiber optics. Thus a true AI capable of doing everything a human mind can do would not merely maintain memory far better; it would also think much faster. Speed, however, should not be confused with processing power.
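The arithmetic behind that figure is easy to verify. A minimal sketch in Python, using only the rough, order-of-magnitude values cited above (none of these constants are precise measurements):

```python
# Back-of-envelope estimate of the brain's raw signalling rate,
# using the approximate figures cited in the text above.

NEURONS = 100e9         # ~100 billion neurons per human brain
FIRING_RATE_HZ = 200    # ~200 firings per second (one every 5 ms)
CONNECTIONS = 1_000     # ~1,000 synaptic connections per neuron

signals_per_second = NEURONS * FIRING_RATE_HZ * CONNECTIONS
print(f"{signals_per_second:.0e} signals per second")  # -> 2e+16, i.e. 20 million billion
```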

Despite the fact that computers are much, much faster at transmitting data, the human brain is far more efficient in its arrangement and storage of information. For example, in 2013 a team of researchers at Japan's RIKEN institute used the K supercomputer, then the fourth-fastest in the world, to simulate human brain activity: reproducing a single second of the activity of roughly one percent of the brain's neuronal network took about 40 minutes on 82,944 processors, a testament to the organ's innate complexity. Yet we require only the roughly 15 centimeters and 3 pounds of mushy, gray matter suspended within our skulls (slightly less for women, as male brains are, on average, larger than female brains). Thus the obvious line of future design development is to continue emulating the compact efficiency of human (and other animal) brains whilst moving as far away as possible from the emulation of human neurons, given their sluggishness in comparison to computer wiring.
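To put that result in perspective, here is a crude calculation of how far the simulation was from real time, using the figures above; the extrapolation to a whole brain naively assumes linear scaling, an assumption made purely for illustration:

```python
# How far the 2013 K-computer simulation was from running in real time.
# The whole-brain figure is a naive linear extrapolation, for illustration only.

wall_clock_s = 40 * 60      # ~40 minutes of compute time...
simulated_s = 1.0           # ...to reproduce 1 second of biological activity...
brain_fraction = 0.01       # ...for roughly 1% of the brain's network

slowdown_partial = wall_clock_s / simulated_s        # 2,400x slower than real time
slowdown_whole = slowdown_partial / brain_fraction   # ~240,000x for a whole brain
print(f"{slowdown_partial:,.0f}x slower than real time (1% of the brain)")
print(f"~{slowdown_whole:,.0f}x slower, naively extrapolated to the whole brain")
```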

More interesting, at least to me, than either of these design trajectories are those areas of machine function which bear no direct or obvious human comparison. Much of this falls under the rubric of machine vision: infrared sensing, meta-image creation and so on. All of these functions are unique to our creations and thus augment our own sensory arsenal. The problem might best be summed up by the question: why build a replica of a human hand when one could build a better hand? Even if one wished to replace a missing human hand, mere replication is fine, but improving upon the prevailing design is better still. When one is designing a boat, the designer does not try to make the boat as maximally humanoid as possible, and this holds true for virtually every mechanical device. Whilst this is obvious upon introspection and is thus, in certain circles, implicit, it needs to be made explicit. The move from implicit design philosophy (preconditioning which trends towards particular eventualities) to explicit design philosophy (present-conditioning towards a particular eventuality) is analogous to moving from the purely instinctual to the theoretical, from gut-feeling to formal logic, and is for that reason so much the more efficacious.


Sources

  1. Andrian Kreye et al. (2018) The State of Artificial Intelligence.
  2. John C. Mosby. (2018) The Real Key To Protecting US National Security Interests In Space? Launch Capability. Modern War Institute.
  3. Mindy Weisberger. (2018) A New Supercomputer Is The World’s Fastest Brain-Mimicking Machine. Scientific American.
  4. Neurons & Circuits.

Intelligence: Artificial & Otherwise

To speak sanguinely about artificial intelligence (AI), real and speculative, one must first ask the question: is AI possible? Before that question can even be rendered answerable, however, one must define one's terms, especially given the proclivity for intelligence-as-such, that is, intelligence as a material process, to be conflated with and constrained wholly to sapience (human intelligence). A definition of intelligence-as-such constrained solely to human intelligence is self-refuting, as it amounts to the claim that intelligence is a human-exclusive process (which it is not). It may be the case (and indeed is likely) that the concept of intelligence is unique to humans, but the process thereby described is clearly not. No one would contend that pigs, dogs, monkeys and dolphins lack their own unique forms of non-sapient intelligence. However, if one theorizes from the ludicrously anthropocentric1 position that human intelligence is the sum-total of intelligence-as-such, then clearly AI (often used synonymously with MI, or Machine Intelligence) has not yet been developed and is, indeed, impossible. This is conceptually egregious.

Intelligence is a particular configuration of matter: a durable process of some system which allows for the processing of information (both internal and external to the originary entity) and which then allows the system to react, in some way, to the information so processed. Thus defined, AI is not only possible but already actual. This is to say that a contemporary computer IS artificially intelligent. It is not conscious of its intelligence, but there is no reason why any given entity must be conscious of its intelligence in order to display it, because intelligence is a function of a particular material configuration. The complexity of intelligence, however, prohibits the kind of simple, all-encompassing characterization available for flight, swimming, lifting or running. For example, if a roboticist were to create a fully-functional machine that, in every detail, imitated the structure of a bat, no one would say that this machinic creation wasn't really capable of flight; if it were swooshing about a room on its metallic wings, one would readily admit it was flying without a qualm. Similarly, if this same genius roboticist were to create a fully-functional replica of a fish, place it into a stream and watch it slip through the liquid, no one would say that this replica-fish was not really swimming. Yet when it comes to computers performing tasks such as mathematical problem-solving, the cry "that isn't real intelligence" is invariably raised.
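To make the functional definition concrete, consider a deliberately trivial example: a hypothetical thermostat-style controller (invented here purely for illustration, not drawn from any real system). It processes information external to itself and reacts to it, and so satisfies the definition in a minimal, entirely non-conscious way:

```python
# A deliberately trivial system that satisfies the functional definition above:
# it processes information external to itself (a temperature reading) and
# reacts to it. It is "intelligent" in this minimal sense, with no consciousness.

def react(temperature_c: float) -> str:
    """Process an input and return an action; no awareness required."""
    if temperature_c < 18.0:
        return "heater on"
    if temperature_c > 24.0:
        return "heater off"
    return "hold"

for reading in (15.2, 21.0, 26.7):  # simulated sensor inputs
    print(f"{reading} C -> {react(reading)}")
```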

Sam Harris elaborates upon the issue, “We already know that it is possible for mere matter to acquire ‘general intelligence’—the ability to learn new concepts and employ them in unfamiliar contexts—because the 1,200 cc of salty porridge inside our heads has managed it. There is no reason to believe that a suitably advanced digital computer couldn’t do the same.”2

Writing the same year, Benjamin H. Bratton makes a similar case, “Unless we assume that humanlike intelligence represents all possible forms of intelligence – a whopper of an assumption – why define an advanced A.I. by its resemblance to ours? After all, “intelligence” is notoriously difficult to define, and human intelligence simply can’t exhaust the possibilities. Granted, doing so may at times have practical value in the laboratory, but in cultural terms it is self-defeating, unethical and perhaps even dangerous.” And somewhat later in his text, “Contemporary A.I. research suggests instead that the threshold by which any particular arrangement of matter can be said to be “intelligent” doesn’t have much to do with how it reflects humanness back at us. As Stuart Russell and Peter Norvig (now director of research at Google) suggest in their essential A.I. textbook, biomorphic imitation is not how we design complex technology. Airplanes don’t fly like birds fly, and we certainly don’t try to trick birds into thinking that airplanes are birds in order to test whether those planes “really” are flying machines. Why do it for A.I. then?”3

Why indeed? Of course, artificial intelligence-as-such and the desire to create artificial intelligence which is human-like, or human-exact, are two completely different issues. It may be that a process for creating human-like machine intelligence is at some point discovered and deemed eminently desirable. Whatever is decided in the future, I would recommend the acronym SEAI (Sapient Emulating Artificial Intelligence) to differentiate, with brevity and clarity, general artificial intelligence from human-like artificial intelligence systems.

1Anthropocentrism has two principal classes: (a.) the belief that humans are the most, or among the most, significant entities in the known universe; (b.) the belief that humans are the fundamental, indispensable or central component of all existence, which leads to the interpretation of reality solely through human-familiar conceptions. All uses of 'anthropocentrism' in this paper are of the (b.) type. The author finds no fault with definition (a.) and has remarked extensively upon this topic elsewhere; see: Kaiter Enless. (2018) Suzerain. Logos.

2Sam Harris. (2015) Can We Avoid A Digital Apocalypse? A Response To The 2015 Edge Question. SamHarris.org.

3Benjamin H. Bratton. (2015) Outing AI: Beyond The Turing Test. The New York Times.