EDITOR’S NOTE: This is part one of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
Artificial Intelligence is not a new concept or endeavor. In October 1950, Alan Turing published “Computing Machinery and Intelligence,” posing the question: Can machines think? Since then, the concept has been studied at length, with an immediately recognizable example being IBM Watson, which memorably defeated Jeopardy! champions Ken Jennings and Brad Rutter in 2011. AI has been captured and fictionalized in movies, video games, and books. And even if we are not aware of it, AI underlies many of the technical tools we use every day.
In recent months, both the capabilities of AI and its availability to the general public have been rapidly expanding and making headlines, from AI-generated photorealistic images to the prolific use of ChatGPT. Proponents of the technology point to its promising applications, such as diagnosing rare illnesses. Critics worry that it will displace certain jobs and voice concerns over cognitive biases. And of course, as portrayed in seemingly every movie on the topic, there is always the risk that AI will take over the world.
So, what exactly is “Artificial Intelligence”?
In general, AI may be thought of as an umbrella term for systems designed to “think.” It draws on branches of nearly every discipline: computer science and programming; psychology and behavioral modeling; mathematics and statistics; economics; ethics; and linguistics, among many others.
In its first AI Risk Management Framework, NIST defines an “AI system” as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” Most current definitions include capabilities traditionally associated with “human” intelligence—capabilities like reasoning, learning, self-improvement, common sense, and problem-solving.
Much of the AI that exists today may be considered “narrow” or “weak” AI—AI capable of completing a specific or limited set of tasks. Narrow AI is, in some cases, so pervasive in our daily lives that it no longer receives focused attention (for example, the natural language processor on your smartphone that can add a reminder to your calendar or provide restaurant options in response to a voice prompt). As systems like ChatGPT have shown in recent months, however, general accessibility to AI is only likely to increase, compounding the questions surrounding its use. Those questions concern the data used to train or prompt the system, as well as the intended—and unintended—results of using AI.
But why does it matter?
How AI is defined and described will have major implications for its development, implementation, and regulation—as well as for other software, applications, and technologies. Communicating the definition of “AI” will be equally important for regulators and policymakers seeking to govern its use and for individual system users seeking to comply with responsible-use requirements. The terms are amorphous, the functionalities and underlying disciplines are multitudinous, and potential applications will be weighed not only against the limits of AI capabilities, but also within the context of societal ethos and legal frameworks.