Artificial Intelligence

The field of artificial intelligence, abbreviated as AI, has been turbulent and ever-changing in its scope since its creation. This essay traces its inception, history, development, and crises, follows its transition from the idealistic to the pragmatic, touches on the current state of the art, and concludes with a thought on where the future of AI research lies.

In order to understand artificial intelligence, it is important to grasp the historical context that birthed it. Although humans have dreamt of creating intelligent, artificial beings for almost as long as they have been recording history, it was not until WWII that the conditions were ripe for actually expanding and formalizing machine intelligence. It all started with the need to design machines capable of responding to external feedback and adjusting themselves to it. Norbert Wiener, an American mathematician at MIT, worked on an automatic anti-aircraft fire-control system during WWII that adjusted its aim according to input and feedback from radar – work that proved tremendously important to our understanding of intelligent, adaptable systems. Wiener believed that most of what we think of as intelligent behavior is in fact the result of feedback mechanisms, and this principle led to the development of feedback theory.
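
To make the principle concrete, here is a minimal sketch in Python of a negative-feedback loop: the controller repeatedly measures the error between its current output and a target reading, and corrects a fraction of it. The gain, step count, and target value are purely illustrative and are not taken from Wiener's actual work.

def feedback_loop(target, gain=0.5, steps=20):
    # illustrative gain and step count, not anything from Wiener's system
    output = 0.0
    for _ in range(steps):
        error = target - output   # feedback: compare current behavior to the goal
        output += gain * error    # adjust in proportion to the error
    return output

print(feedback_loop(10.0))  # the output converges toward the target, 10.0

However crude, this is the essence of the idea: behavior is continually corrected by the discrepancy it produces.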

Although these early attempts had points of contact with contemporary AI, and were somewhat in its spirit, it is most commonly held that AI needed actual digital computers in order to advance, so outside of this brief mention of mechanical systems, this essay concentrates on digital efforts. It can thus be safely said that artificial intelligence, as a branch of computer science, was founded in 1956 at the Dartmouth summer conference on AI. Among the fathers of AI who attended this conference were John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon; McCarthy had coined the term itself in 1955.

One of the most important early contributions to this field, although not directly tied to AI, was the invention of the Lisp programming language by John McCarthy in 1958. Lisp was one of the pillars of the academic AI community, and it still has a strong presence through its modern dialects. Another great accomplishment was the creation of Terry Winograd's SHRDLU in 1968 – a natural language processing program capable of manipulating virtual geometric objects by processing and following commands written in English.

In its earliest stages, AI was closely bound to the academic community, and its goals and its promoters were unrealistically optimistic – Minsky, for example, stated on numerous occasions that the ultimate goal of AI, creating a general intelligence much like a human's, would be achieved within a dozen years. Such promises, and the failure to meet them, led to large government funding cuts and the near-disappearance of corporate funding – this era was dubbed the first AI winter, and it lasted throughout the seventies.

Another large sub-field, or approach, within AI was also experiencing severe issues at the time of the first AI winter – connectionism – which sought to achieve artificial intelligence by means of distributed networks of nodes and links, much like biological neurons. Like many other techniques, it was first hailed as a solution to most of the problems AI faced (understanding natural language, general intelligence, et cetera), but was quickly shown not to be as promising as advertised. Minsky and Papert detailed various problems that this approach faced in their 1969 book “Perceptrons”.
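
For illustration, the connectionist building block Minsky and Papert analyzed can be sketched in a few lines of Python: a single perceptron node that fires when the weighted sum of its inputs crosses a threshold. The weights below are chosen purely for illustration to implement logical AND; no choice of weights for one such node can implement XOR, which is the classic example of its limits.

def perceptron(inputs, weights, bias):
    # weighted sum of the inputs followed by a hard threshold
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, weights=[1.0, 1.0], bias=-1.5))  # illustrative weights: logical AND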

After such a devastating period in the seventies, the field went through a second era of enthusiasm and commercial success, following the rise of expert systems – programs well suited to providing expert knowledge and solving problems in a specific domain, such as assisting with medical diagnosis. Another phenomenon of this period was the second rise of connectionism. Most importantly, many of the technique's initial problems – like the single-layer perceptron's inability to represent functions that are not linearly separable – were solved by algorithms such as backpropagation, which made it practical to train networks with hidden layers and gave the idea the much needed room to expand.
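
As a rough illustration of what changed, the sketch below trains a tiny one-hidden-layer network on XOR using backpropagation – exactly the kind of function a single-layer perceptron cannot represent. The layer size, learning rate, and iteration count are illustrative choices, not anything prescribed by the history above.

import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]
n_hidden, rate = 3, 0.5
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]  # two input weights + bias
w_out = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]                  # hidden weights + bias

def forward(x1, x2):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_hid]
    y = sigmoid(sum(w_out[i] * h[i] for i in range(n_hidden)) + w_out[-1])
    return h, y

for _ in range(10000):
    for (x1, x2), t in data:
        h, y = forward(x1, x2)
        d_y = (y - t) * y * (1 - y)                                   # error signal at the output
        d_h = [d_y * w_out[i] * h[i] * (1 - h[i]) for i in range(n_hidden)]
        for i in range(n_hidden):                                     # propagate the error backwards
            w_out[i] -= rate * d_y * h[i]
            w_hid[i][0] -= rate * d_h[i] * x1
            w_hid[i][1] -= rate * d_h[i] * x2
            w_hid[i][2] -= rate * d_h[i]
        w_out[-1] -= rate * d_y

# usually converges to the XOR truth table; rerun if a random start lands in a poor local minimum
for (x1, x2), t in data:
    print((x1, x2), round(forward(x1, x2)[1], 2), "expected", t)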

However, this period came to an end as well. Funding once again ran shorter and shorter, results did not meet expectations, and general stagnation ensued – but the beginnings of a new trend could be observed: the commercialization of techniques and a turn toward practical research topics. The former goal of creating a general intelligence, along with the enthusiasm for the field – fueled largely by unrealistic expectations and a poor understanding of how complex intelligence really is – was waning.

Today, AI has left behind its unstable period of constant, overly optimistic enthusiasm and permanently affirmed its place among other disciplines. However, it has also completed the aforementioned transition into the commercial and the pragmatic. The academic community has created a powerful toolbox for the IT industry to utilize, and AI is now used in many aspects of our lives: cars can learn, our phones can learn, Google learns about your interests and provides better search results (and ads) accordingly – and soon, thanks to the rise of extremely cheap electronics and robotics, we will have ever more real-life interactions with intelligent machines.

However, there are still ambitious projects that aim to create “hard” AI, or artificial general intelligence (AGI), because such endeavors are finally becoming less idealistic and utopian, and more practical. One very promising example is the OpenCog project – an open source initiative based in Hong Kong that is currently experimenting with intelligent Nao robots, and building a comprehensive free/libre open source C++ AI framework around this work.

Essentially, there are three main directions that research into AGI could take. The first, which is what OpenCog is doing, is based around crafting a specific framework – a collection of algorithms, data structures, and techniques, both existing and new – that operates on some kind of knowledge base while adapting itself to its environment. The second approach uses evolution and connectionism to reach the same goal (a minimal sketch of this idea follows below). This is the course one of my own projects has taken – the basic idea being that if intelligence once arose through evolution, as it did with humans (and some other animals), then it could arise again – except this time humans would be simulating the environment in which evolution takes place. The main problem with this approach is that it is very computationally expensive, and that it depends greatly on the initial conditions and the basic rules we set up. The third and most direct approach would be to simulate a biological brain outright. The problem here is that the desired emergent behavior could depend on seemingly unrelated processes in the brain's biochemistry (or even quantum physics) that we either do not know about yet, or that we mistakenly deem irrelevant to the simulation.
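
To give a flavor of the second, evolutionary route, here is a deliberately small Python sketch. Everything in it is illustrative: the genome is just a vector of numbers standing in for network weights, and the fitness function is a hypothetical placeholder where a real project would plug in its simulated environment and decode the genome into an actual network.

import random

GENOME_LEN, POP_SIZE, GENERATIONS = 8, 30, 100

def fitness(genome):
    # Placeholder "environment": reward genomes whose values approach a fixed
    # target pattern. A real simulation would score an agent's behavior instead.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, sigma=0.1):
    # small random perturbation of every gene
    return [g + random.gauss(0, sigma) for g in genome]

def crossover(a, b):
    # for each position, inherit the gene from one parent or the other
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 4]          # selection: keep the fittest quarter
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children                # elitism plus new offspring

print("best fitness:", round(fitness(max(population, key=fitness)), 4))

Even at this toy scale, the problems mentioned above show up immediately: the outcome depends entirely on how the fitness function and the rules of the simulated world are defined, and the cost grows quickly with the size of the genomes and the population.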

Maybe the answer lies in a combination of these three approaches. Only two things are certain: AI is here to stay, and we have seen only the tip of the iceberg.
