While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in his 2004 paper (PDF, 127 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 92 KB) (link resides outside of IBM), published in 1950. In this paper, Turing, often referred to as the "father of computer science," asks the question, "Can machines think?" From there, he offers a test, now famously known as the "Turing Test," in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas from linguistics.
Stuart Russell and Peter Norvig later published Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking versus acting:
- Systems that think like humans
- Systems that act like humans
- Systems that think rationally
- Systems that act rationally
Alan Turing’s definition would have fallen under the category of “systems that act like humans.”
In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems that make predictions or classifications based on input data.
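The idea of an algorithm making classifications based on input data can be illustrated with a minimal sketch: a 1-nearest-neighbor classifier in plain Python. The dataset, labels, and function names here are invented purely for illustration and are not taken from any particular system described in this article.

```python
from math import dist

# Toy labeled dataset: (feature vector, label) pairs.
# The features and labels are hypothetical, chosen only to illustrate the idea.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

def classify(point):
    """Predict a label for `point` by returning the label of the
    closest training example (1-nearest-neighbor)."""
    _, label = min(training_data, key=lambda pair: dist(pair[0], point))
    return label
```

A point near the "cat" examples, such as `(1.1, 0.9)`, would be classified as `"cat"`; one near the "dog" cluster as `"dog"`. Real machine learning systems learn far richer decision rules from far larger datasets, but the input-to-prediction pattern is the same.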
Over the years, artificial intelligence has gone through many cycles of hype, but even to
skeptics, the release of OpenAI’s ChatGPT seems to mark a turning point. The last time generative AI loomed this large, the breakthroughs were in computer vision, but now the leap forward is in natural language processing. And it’s not just language: Generative models can also learn the grammar of software code, molecules, natural images, and a variety of other data types.
The applications for this technology are growing every day, and we're just starting to explore the possibilities. But as the hype around the use of AI in business takes off, conversations around ethics become critically important. To learn where IBM stands in the conversation around AI ethics, read more here.