The IT Law Wiki
The development of AI will shape the future of power. The nation with the most resilient and productive economic base will be best positioned to seize the mantle of world leadership.
— NSCAI Interim Report, at 9.
The development of full artificial intelligence could spell the end of the human race.
— Stephen Hawking, interview with the BBC (Dec. 2014).

Definitions

Artificial intelligence (AI) is

  • [a]n umbrella term that is used to refer to a set of sciences, theories and techniques dedicated to improving the ability of machines to do things requiring intelligence.[1]
  • a science and a set of computational technologies that are inspired by — but typically operate quite differently from — the ways people use their nervous systems and bodies to sense, learn, reason, and take action.[2]
  • [t]he capability of a device to perform functions that are normally associated with human intelligence such as reasoning, learning, and self-improvement.[3]
  • [a]ny artificial system that performs tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from [its] experience and improve [its] performance.... [It] may solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.[4]
  • [c]onsidered to comprise software and/or hardware that can learn to solve complex problems, make predictions or undertake tasks that require human-like sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action. Examples are wide-ranging and expanding rapidly. They include, but are not limited to, AI assistants, computer vision systems, biomedical research, unmanned vehicle systems, advanced game-playing software, and facial recognition systems as well as application of AI in both Information Technology (IT) and Operational Technology (OT).[5]
  • the collection of computations that at any time make it possible to assist users to perceive, reason, and act. Since it is computations that make up AI, the functions of perceiving, reasoning, and acting can be accomplished under the control of the computational device (e.g., computers or robotics) in question.[6]

AI at a minimum includes

Brief History of AI

Endowing computers with human-like intelligence has been a dream of computer experts since the dawn of electronic computing. Although the term "Artificial Intelligence" was not coined until 1956, the roots of the field go back to at least the 1940s,[7] and the idea of AI was crystallized in Alan Turing's famous 1950 paper, "Computing Machinery and Intelligence." Turing's paper posed the question: "Can machines think?" It also proposed a test for answering that question,[8] and raised the possibility that a machine might be programmed to learn from experience much as a young child does.

The field of Artificial Intelligence (AI) can be traced back to a 1956 workshop organized by John McCarthy, held at Dartmouth College. The workshop's goal was to explore how machines could be used to simulate human intelligence. Disciplines that contribute to AI include computer science, economics, linguistics, mathematics, statistics, evolutionary biology, neuroscience, and psychology, among others.

In the ensuing decades, the field of AI went through ups and downs as some AI research problems proved more difficult than anticipated and others proved insurmountable with the technologies of the time. It wasn't until the late 1990s that research progress in AI began to accelerate, as researchers focused more on sub-problems of AI and the application of AI to real-world problems such as image recognition and medical diagnosis. An early milestone was the 1997 victory of IBM's chess-playing computer Deep Blue over world champion Garry Kasparov. Other significant breakthroughs included DARPA's Cognitive Agent that Learns and Organizes (CALO), which led to Apple Inc.'s Siri; IBM's question-answering computer Watson's victory in the TV game show "Jeopardy!"; and the surprising success of self-driving cars in the DARPA Grand Challenge competitions in the 2000s.

The current wave of progress and enthusiasm for AI began around 2010, driven by three factors that built upon each other: the availability of big data from sources including e-commerce, businesses, social media, science, and government; which provided raw material for dramatically improved machine learning approaches and algorithms; which in turn relied on the capabilities of more powerful computers.[9]

This growth has advanced the state of Narrow AI, which refers to algorithms that address specific problem sets like game playing, image recognition, and navigation. All current AI systems fall into the Narrow AI category. The most prevalent approach to Narrow AI is machine learning, which involves statistical algorithms that replicate human cognitive tasks by deriving their own procedures through analysis of large training data sets. During the training process, the computer system creates its own statistical model to accomplish the specified task in situations it has not previously encountered.
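
A minimal sketch of this training process, in Python with invented data (the feature values and labels below are purely illustrative), shows a program deriving its own decision rule from labeled examples and then classifying an input it never saw during training:

# Toy illustration of machine learning: the program builds its own statistical
# model (here, one average feature vector per label) from training examples,
# rather than following rules a human wrote down.

def train_centroids(examples):
    """Compute one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid lies closest to the input."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid)) ** 0.5
    return min(centroids, key=lambda label: distance(centroids[label]))

# Hypothetical labeled training data: (feature vector, label) pairs.
training_data = [
    ([1.0, 1.2], "cat"), ([0.9, 1.0], "cat"),
    ([3.0, 3.1], "dog"), ([3.2, 2.9], "dog"),
]
model = train_centroids(training_data)
print(predict(model, [1.1, 0.95]))   # a previously unseen input -> "cat"

The same structure underlies far larger systems: more data, richer models, and more powerful computers, but still a statistical procedure learned from examples rather than explicitly programmed rules.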

Experts generally agree that it will be many decades before the field advances to develop General AI, which refers to systems capable of human-level intelligence across a broad range of tasks.[10] Nevertheless, the growing power of Narrow AI algorithms has sparked a wave of commercial interest.

Three Waves of Development

Another approach to understanding AI is to consider the waves in which the technology has developed, rather than a specific or singular definition. John Launchbury[11] provides a framework that conceptualizes AI as having three waves, based on differences in capabilities with respect to perceiving, learning, abstracting, and reasoning. These waves can broadly be described as follows:

  • Wave 1 – Expert or rules-based systems;
  • Wave 2 – Statistical learning, perceiving, and prediction systems; and
  • Wave 3 – Abstracting and reasoning capability, including explainability.

The first wave of AI is represented by expert knowledge or criteria developed in law or other authoritative sources and encoded into a computer algorithm, which is referred to as an expert system. Examples of expert systems include programs that do logistics scheduling or tax preparation. Expert systems are strong with respect to reasoning, as they reflect the logic and rules that are programmed into them. Human tax experts, for example, understand the rules of the tax code, and these rules can be programmed into software that yields a completed tax return based on the inputs provided. First-wave systems continue to yield benefits and are an active area of AI. Expert systems are not strong, however, when it comes to perceiving, learning, or abstracting to a domain outside the one programmed into the system.
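
A minimal sketch of a first-wave (expert) system in Python, with invented brackets and rates that do not reflect any real tax code: the expertise is captured entirely in rules a human wrote down, so the program reasons reliably within them but cannot learn or handle cases outside them.

# Toy "expert system": hand-coded, tax-like rules encoded directly as program
# logic. All figures are hypothetical and chosen only for illustration.

def tax_due(income, filing_status):
    """Apply fixed rules supplied by a human expert; nothing is learned from data."""
    if filing_status not in ("single", "joint"):
        raise ValueError("no rule covers this filing status")  # outside the programmed domain
    standard_deduction = 12000 if filing_status == "single" else 24000
    taxable = max(0, income - standard_deduction)
    if taxable <= 10000:
        return taxable * 0.10
    return 10000 * 0.10 + (taxable - 10000) * 0.20

print(tax_due(50000, "single"))   # deterministic result dictated entirely by the rules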

Second-wave AI technology is based on machine learning, or statistical learning, and includes natural-language processing (e.g., voice recognition) and computer-vision technologies, among others. In contrast to first-wave systems, second-wave systems are designed to perceive and learn. Second-wave AI systems have nuanced classification and prediction capabilities, but they have minimal reasoning ability and no contextual capability. Examples of second-wave systems include voice-activated digital assistants, applications that assist healthcare workers in selecting appropriate treatment options or making diagnoses, and self-driving vehicles.
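
A minimal sketch of the statistical-prediction side of second-wave AI, again with invented numbers: the program fits its own model (a straight line) to observed data and predicts a value it has not encountered, without any hand-coded rules, but it has no grasp of context and cannot explain its answer.

# Toy statistical prediction: ordinary least-squares fit of y = a*x + b to
# hypothetical observations, then prediction for an unseen x.

def fit_line(points):
    """Derive the line's slope and intercept from the data itself."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

observations = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # hypothetical readings
slope, intercept = fit_line(observations)
print(slope * 5 + intercept)   # predicted value for x = 5, which was never observed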

Third-wave AI technologies combine the strengths of first- and second-wave AI and are also capable of contextual sophistication, abstraction, and explanation. An example of third-wave AI is a ship that can navigate the sea without human intervention for a few months at a time while sensing other ships, navigating sea lanes, and carrying out necessary tasks.

Overview

"Artificial intelligence is more than the simple automation of existing processes: it involves, to greater or lesser degrees, setting an outcome and letting a computer program find its own way there. It is this creative capacity that gives artificial intelligence its power. But it also challenges some of our assumptions about the role of computers and our relationship to them."[12]

AI attempts to emulate the results of human reasoning by organizing and manipulating factual and heuristic knowledge. Areas of AI activity include expert systems, natural language understanding, speech recognition, vision, and robotics.

Examples of AI already in use include: communicating with computers in natural language, deriving new insights from transport data, operating autonomous and adaptive robotic systems, managing supply chains, and designing more life-like video games. Applied AI is already changing business practices across financial services, law, medicine, accounting, tax, audit, architecture, consulting, customer service, manufacturing and transport. . . . AI could improve the functioning of most digital operations, products and services. Wherever a process uses digital data, AI may enable us to use that data more effectively and in new ways.[13]

What has made AI possible is

the confluence of four advancing technologies . . . vast increases in computing power and progress in machine learning techniques . . . breakthroughs in the field of machine perception . . . [and] improvements in the industrial design of robots.[14]

Cybersecurity

Today's AI has important applications in cybersecurity, and it is expected to play an increasing role in both defensive and offensive cyber measures. Currently, designing and operating secure systems requires significant time and attention from experts. Automating this expert work, partially or entirely, may increase security across a much broader range of systems and applications at dramatically lower cost, and could increase the agility of the Nation's cyber defenses. Using AI may help maintain the rapid response required to detect and react to the landscape of evolving threats.

Military

Challenging issues are raised by the potential use of AI in weapon systems.[15] The United States has incorporated autonomy in certain weapon systems for decades, allowing for greater precision in the use of weapons and safer, more humane military operations. Nonetheless, moving away from direct human control of weapon systems involves some risks and can raise legal and ethical questions.

"The key to incorporating autonomous and semi-autonomous weapon systems into American defense planning is to ensure that U.S. Government entities are always acting in accordance with international humanitarian law, taking appropriate steps to control proliferation, and working with partners and Allies to develop standards related to the development and use of such weapon systems. The United States has actively participated in ongoing international discussion on Lethal Autonomous Weapon Systems, and anticipates continued robust international discussion of these potential weapon systems. Agencies across the U.S. Government are working to develop a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.

Safety

Use of AI to control physical-world equipment leads to concerns about safety, especially as systems are exposed to the full complexity of the human environment. A major challenge in AI safety is building systems that can safely transition from the 'closed world' of the laboratory into the outside 'open world' where unpredictable things can happen. Adapting gracefully to unforeseen situations is difficult yet necessary for safe operation. Experience in building other types of safety-critical systems and infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners about verification and validation, how to build a safety case for a technology, how to manage risk, and how to communicate with stakeholders about risk.

Economic impact

[B]etween now and 2030, artificial intelligence will . . . increase global gross economic product by $13 trillion.[16]

AI's central economic effect in the short term will be the automation of tasks that could not be automated before. This will likely increase productivity and create wealth, but it may also affect particular types of jobs in different ways, reducing demand for certain skills that can be automated while increasing demand for other skills that are complementary to AI. Analysis by the White House Council of Economic Advisers (CEA) suggests that the negative effect of automation will be greatest on lower-wage jobs, and that there is a risk that AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality. Public policy can address these risks, ensuring that workers are retrained and able to succeed in occupations that are complementary to, rather than competing with, automation. Public policy can also ensure that the economic benefits created by AI are shared broadly, and assure that AI responsibly ushers in a new age in the global economy.

References

  1. Unboxing Artificial Intelligence: 10 steps to protect Human Rights, at 24.
  2. One Hundred Year Study on Artificial Intelligence, at 4.
  3. ITU, "Compendium of Approved ITU-T Security Definitions," at 23 (Feb. 2003 ed.) (full-text).
  4. U.S. Congress, H.R. 4625 and S. 2217 (Dec. 12, 2017).
  5. U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, at 7-8.
  6. Computer Science and Artificial Intelligence, at 1.
  7. See, e.g., Warren S. McCulloch & Walter H. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," 5 Bull. of Mathematical Biophysics 115 (1943).
  8. Restated in modern terms, the "Turing Test" (also called the "Imitation Game") puts a human judge in a text-based chat room with either another person or a computer. The human judge can interrogate the other party and carry on a conversation, and then the judge is asked to guess whether the other party is a person or a computer. If a computer can consistently fool human judges in this game, then the computer is deemed to be exhibiting intelligence.
  9. A more detailed history of AI is available in the Appendix of the AI 100 Report — One Hundred Year Study on Artificial Intelligence.
  10. Preparing for the Future of Artificial Intelligence, at 7-9.
  11. A DARPA Perspective on Artificial Intelligence.
  12. Artificial Intelligence: Opportunities and Implications for the Future of Decision Making, at 5.
  13. Growing the Artificial Intelligence Industry in the UK, at 8.
  14. Jerry Kaplan, "Humans Need Not Apply – A Guide to Wealth and Work in the Age of Artificial Intelligence" 38-39 (2015).
  15. See generally Artificial Intelligence and National Security.
  16. Artificial Intelligence: A Roadmap for California, at 4.

External resources

  • Frank Chen, "AI, Deep Learning, and Machine Learning: A Primer," Andreessen Horowitz (June 10, 2016) (full-text).
  • Kate Crawford, "Artificial Intelligence's White Guy Problem," The New York Times (June 25, 2016) (full-text).