AI: From Imagination to Abstraction

Development is an iterative process. Photo by NOAA on Unsplash

This article revisits the ideas that have formed our conception of artificial minds and intelligent systems, from the mystical elements of myth to the foundations of formal reasoning that serve as the building blocks of Artificial Intelligence.

Historical Conception that Led to AI

McCorduck (2004) suggested that the conception of artificial intelligence can be traced back to antiquity, to the ancient Greeks. The Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent robots (such as Talos) and artificial beings (such as Galatea and Pandora) [1]. In fact, some methods used in the AI community today, such as word ontologies, have roots that trace back to Aristotle.

Paracelsus, a Swiss alchemist, wrote about a procedure that, he claimed, could fabricate an artificial man. In this period, it was believed that such automata, moving mechanical devices made in imitation of human beings, possessed the magical ability to answer questions.

A postulated interior of the Duck of Vaucanson (1738–1739). Source: https://en.wikipedia.org/wiki/Automaton

In the early 17th century, René Descartes postulated that the bodies of animals are nothing more than complex machines, but that the mind is of a different substance [this is the famous mind-body problem]. During the same era, Thomas Hobbes presented a mechanical, combinatorial theory of cognition, stating that “[…] reason is nothing but reckoning”, that is, that reason can be modeled as algorithmic computation. Gottfried Leibniz invented the binary numeral system and envisioned a universal calculus of reasoning by which arguments could be decided mechanically. George Boole, in his investigation of the fundamental laws that govern the mind, derived a symbolic system for expressing reasoning, now called Boolean algebra [1].
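
To see what “reason as reckoning” looks like in Boole’s system, consider a small worked example (my illustration, not drawn from the original texts): logical laws become algebraic identities over the values 0 and 1, so an inference such as modus ponens reduces to checking that an expression always evaluates to 1 (true). Below, De Morgan’s law and modus ponens written as Boolean identities:

$$
\neg(A \land B) = \neg A \lor \neg B,
\qquad
\big(A \land (A \rightarrow B)\big) \rightarrow B = 1 .
$$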

During the 19th and early 20th centuries, fiction writers imagined artificial men and thinking machines; such works include Mary Shelley’s Frankenstein (1818) and Karel Čapek’s R.U.R. (Rossum’s Universal Robots) (1920), as well as speculation such as Samuel Butler’s “Darwin among the Machines” (1863). Since then, AI has become a common topic in science fiction.

In the early 20th century, the Principia Mathematica of Bertrand Russell and Alfred North Whitehead (1910–1913) formalized the language of reason (logic) and sought to ground mathematics in formal proof. During the 1920s and 1930s, Ludwig Wittgenstein and Rudolf Carnap led philosophy toward the logical analysis of knowledge. Alonzo Church developed the lambda calculus to investigate computability using recursive functional notation.
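
As a brief illustration of Church’s notation (an example of mine, not from the original sources): a lambda term denotes an anonymous function, computation proceeds by substitution (beta reduction), and even numbers can be encoded as functions, as in the Church numeral for two:

$$
(\lambda x.\; x + 1)\, 2 \;\to_{\beta}\; 2 + 1 = 3,
\qquad
\underline{2} \;\equiv\; \lambda f.\, \lambda x.\, f\,(f\,x).
$$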

In the 1950s, Alan Turing proposed the Turing test [2] as a measure of machine intelligence. Around the same time, Isaac Asimov published the Three Laws of Robotics. Herbert Gelernter and Nathan Rochester (of IBM) conceived a theorem prover for geometry, a proof assistant that validates logical deductions by exploiting a semantic model of the domain in the form of diagrams of typical cases. In the 1960s, Ray Solomonoff [3–4] laid a mathematical theory of AI by introducing universal Bayesian methods for inductive inference and prediction, and Joseph Weizenbaum (of MIT) built ELIZA, a program that held natural-language conversations through simple pattern matching [5].
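
To make “pattern matching” concrete, here is a minimal sketch in the spirit of ELIZA (the keywords and templates are my own illustration, not Weizenbaum’s original script): the program scans an utterance for a keyword pattern and reflects the matched fragment back in a canned template.

```python
import re

# Illustrative rules in the spirit of ELIZA, not the original script:
# each pair is (keyword pattern, response template).
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when no keyword matches

print(respond("I am worried about the future"))
# -> Why do you say you are worried about the future?
```

ELIZA’s apparent understanding comes entirely from such surface-level transformations; no model of meaning is involved.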

Ideas that led to AI research


In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics, and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The coming together of foundational concepts from cybernetics (which described control and communication in machines and living things), information theory (which gave information a universal unit of measure, the bit, and described digital signals), and the theory of computation (which proved that a Turing-complete system can model any computation) formed the basic ground on which an electronic brain seemed possible.

In 1950, Turing posed a question that has given substantial direction to AI: “whether or not it is possible for machinery to show intelligent behavior”. The operational test he proposed for answering it is now known as the Turing Test.

Building on the insight that computers which manipulate numbers can also manipulate symbols, and that symbol manipulation (through a symbolic language) might capture the essence of human thought, Allen Newell and Herbert Simon programmed the Logic Theorist (1955), which eventually proved 38 of the 52 theorems in Russell and Whitehead’s Principia Mathematica, some more elegantly than the originals.
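
To suggest what “manipulating symbols rather than numbers” means in practice, here is a hypothetical toy (far simpler than the Logic Theorist, which performed heuristic search over Principia’s axioms): propositions are plain strings, and repeatedly applying one inference rule, modus ponens, derives new propositions from old ones.

```python
# Toy forward-chaining inference: propositions are symbols (strings) and
# the only rule is modus ponens ("from P and P -> Q, conclude Q").
# An illustrative sketch, not the Logic Theorist's actual algorithm.
def forward_chain(facts, implications):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)  # apply modus ponens
                changed = True
    return derived

facts = {"it_rains"}
implications = [("it_rains", "streets_wet"), ("streets_wet", "drive_slowly")]
print(sorted(forward_chain(facts, implications)))
# -> ['drive_slowly', 'it_rains', 'streets_wet']
```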

As a young Assistant Professor of Mathematics at Dartmouth College, John McCarthy decided to bring together a group to clarify and develop ideas about artificial thinking machines. “He picked the name ‘Artificial Intelligence’ for the new field. He chose the name partly for its neutrality; avoiding a focus on narrow automata theory and avoiding cybernetics which was heavily focused on analog feedback, as well as him potentially having to accept the assertive Norbert Wiener as guru or having to argue with him” [6].

The proposal [7] goes on to discuss computers, natural language processing, neural networks, the theory of computation, abstraction, and creativity (areas still considered relevant to the work of the field). Thereafter, several directions were initiated or encouraged by the conference, such as the rise of symbolic methods, systems focused on limited domains (early expert systems), and the contrast between deductive and inductive systems [8].

Summary

From the mystical realm of myth and alchemy to the formal domain of mathematical logic and theoretical computer science, the ideas behind AI have been woven into the fabric of human endeavor, culminating in the 21st-century search for intelligent behavior that mimics human-like responses. The coming together of ideas from fields such as cybernetics, information theory, and the theory of computation formalized the field of Artificial Intelligence.

References:

[1]. McCorduck, P. (2004). Machines Who Think (2nd ed.). Natick, MA: A. K. Peters. ISBN 978-1-56881-205-2.

[2]. Turing, A. M. (2009). Computing machinery and intelligence. In Parsing the Turing Test (pp. 23–65). Springer, Dordrecht.

[3]. Solomonoff, R. J. (1964). A formal theory of inductive inference. Part I. Information and control, 7(1), 1–22.

[4]. Solomonoff, R. J. (1964). A formal theory of inductive inference. Part II. Information and control, 7(2), 224–254.

[5]. Weizenbaum, J. (1966). ELIZA-a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.

[6]. Nilsson, N. J. (2010). The Quest for Artificial Intelligence. Cambridge University Press.

[7]. McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

[8]. McCorduck, P. (2004). Machines Who Think (2nd ed.). A. K. Peters.

This post is licensed under CC BY 4.0 by the author.