The development of AI will shape the future of power. The nation with the most resilient and productive economic base will be best positioned to seize the mantle of world leadership.
— NSCAI Interim Report, at 9.
The development of full artificial intelligence could spell the end of the human race.
— Stephen Hawking, interview with the BBC (Dec. 2014).

Definitions

Artificial intelligence (AI) is

  • [a]n umbrella term that is used to refer to a set of sciences, theories and techniques dedicated to improving the ability of machines to do things requiring intelligence.[1]
  • a science and a set of computational technologies that are inspired by — but typically operate quite differently from — the ways people use their nervous systems and bodies to sense, learn, reason, and take action.[2]
  • [t]he capability of a device to perform functions that are normally associated with human intelligence such as reasoning, learning, and self-improvement.[3]
  • [a]ny artificial system that performs tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance.... They may solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.[4]
  • [AI technologies] are considered to comprise software and/or hardware that can learn to solve complex problems, make predictions or undertake tasks that require human-like sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action. Examples are wide-ranging and expanding rapidly. They include, but are not limited to, AI assistants, computer vision systems, biomedical research, unmanned vehicle systems, advanced game-playing software, and facial recognition systems as well as application of AI in both Information Technology (IT) and Operational Technology (OT).[5]
  • the collection of computations that at any time make it possible to assist users to perceive, reason, and act. Since it is computations that make up AI, the functions of perceiving, reasoning, and acting can be accomplished under the control of the computational device (e.g., computers or robotics) in question.

AI

include[s] the following:
(1) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
(2) An artificial system developed in computer software, physical hardware, or another context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
(3) An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
(4) A set of techniques, including machine learning, that is designed to approximate a cognitive task.
(5) An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.[6]

AI at a minimum includes

Brief History of AI

Endowing computers with human-like intelligence has been a dream of computer experts since the dawn of electronic computing. Although the term "Artificial Intelligence" was not coined until 1956, the roots of the field go back to at least the 1940s,[8] and the idea of AI was crystallized in Alan Turing's famous 1950 paper, "Computing Machinery and Intelligence." Turing's paper posed the question: "Can machines think?" It also proposed a test for answering that question,[9] and raised the possibility that a machine might be programmed to learn from experience much as a young child does.

The field of Artificial Intelligence (AI) can be traced back to a 1956 workshop organized by John McCarthy, held at Dartmouth College.[10] The workshop's goal was to explore how machines could be used to simulate human intelligence. Disciplines that contribute to AI include computer science, economics, linguistics, mathematics, statistics, evolutionary biology, neuroscience, and psychology, among others.

In the ensuing decades, the field of AI went through ups and downs as some AI research problems proved more difficult than anticipated and others proved insurmountable with the technologies of the time.

Waves of AI

The Defense Advanced Research Projects Agency (DARPA), which has funded AI R&D since the 1960s, has described the development of AI technologies in terms of three waves.[11] The waves are distinguished by the varying abilities of the technologies in each to perceive rich, complex, and subtle information; to learn within an environment; to abstract to create new meanings; and to reason in order to plan and reach decisions.[12]

  • First wave: handcrafted knowledge. The first wave of AI technologies has the ability primarily to perceive and reason, but no learning capability and poor handling of uncertainty. For such technologies, researchers and engineers create sets of rules to represent knowledge in well-defined domains for narrowly defined problems. The TurboTax software, an expert system, is one example. Rules are built into the application, which then turns input information into tax form outputs, but it has only a rudimentary ability to perceive and no ability to learn (e.g., about a new tax law) or to abstract beyond what it is programmed to know (a minimal rule-based sketch follows this list).
  • Second wave: statistical learning. Starting in the 1990s, a second wave of AI technologies was developed with more nuanced abilities to perceive and learn, some ability to abstract, minimal reasoning ability, and no contextual ability. For these systems, engineers create statistical models for specific problem domains and train them on big data. Generally, while such systems are statistically powerful, they can be individually unreliable, especially in the presence of skewed training data (e.g., a face recognition system trained on a limited range of skin tones can be powerful for similar faces, but highly unreliable for individuals outside of the training spectrum). As noted by DARPA, these technologies are "dependent on large amounts of high quality training data, do not adapt to changing conditions, offer limited performance guarantees, and are unable to provide users with explanations of their results."[13] Additional examples of second wave AI technologies include voice recognition and text analysis.
  • Third wave: contextual adaptation. The third wave of AI technologies is oriented toward making it possible for machines to adapt to changing situations (i.e., contextual adaptation). Engineers create systems that construct explanatory models of real-world phenomena, and "AI systems learn and reason as they encounter new tasks and situations." Examples of third-wave technologies include explainable AI (XAI).
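
To make the first wave concrete, here is a minimal, illustrative sketch in Python of a handcrafted-knowledge system in the spirit of the TurboTax example above. The tax brackets, rates, and function name are invented for illustration and are not drawn from any source cited in this article.

```python
# Minimal sketch of a first-wave, rule-based ("handcrafted knowledge") system.
# The brackets and rates below are invented for illustration only; a real tax
# engine encodes thousands of such expert-written rules.

def tax_owed(taxable_income: float) -> float:
    """Apply hand-written bracket rules to a single input value."""
    brackets = [                 # (upper bound of bracket, marginal rate) - illustrative
        (10_000, 0.10),
        (40_000, 0.20),
        (float("inf"), 0.30),
    ]
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if taxable_income > lower:
            owed += (min(taxable_income, upper) - lower) * rate
            lower = upper
    return owed

# The system only "knows" what its rules state: a new tax law requires a human
# to rewrite the rules, because nothing here learns from data.
print(tax_owed(55_000))
```

Because every rule is written by hand, such a system cannot generalize beyond its rules; that limitation is what the second and third waves attempt to address.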

It was not until the late 1990s that research progress in AI began to accelerate, as researchers focused more on sub-problems of AI and the application of AI to real-world problems such as image recognition and medical diagnosis. An early milestone was the 1997 victory of IBM's chess-playing computer Deep Blue over world champion Garry Kasparov. Other significant breakthroughs included DARPA's Cognitive Assistant that Learns and Organizes (CALO), which led to Apple Inc.'s Siri; IBM's question-answering computer Watson's victory in the TV game show "Jeopardy!"; and the surprising success of self-driving cars in the DARPA Grand Challenge competitions in the 2000s.

The current wave of progress and enthusiasm for AI began around 2010, driven by three factors that built upon each other: the availability of big data from sources including e-commerce, businesses, social media, science, and government, which provided raw material for dramatically improved machine learning approaches and algorithms, which in turn relied on the capabilities of more powerful computers.[14]

This growth has advanced the state of Narrow AI, which refers to algorithms that address specific problem sets like game playing, image recognition, and navigation. All current AI systems fall into the Narrow AI category. The most prevalent approach to Narrow AI is machine learning, which involves statistical algorithms that replicate human cognitive tasks by deriving their own procedures through analysis of large training data sets. During the training process, the computer system creates its own statistical model to accomplish the specified task in situations it has not previously encountered.
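
As an illustration of that training process, the sketch below derives a simple statistical model (per-class feature averages) from a handful of labeled examples and then classifies an input it has never seen. The toy data, the nearest-mean method, and every name in the code are assumptions made purely for illustration; real machine learning systems train far more sophisticated models on far larger data sets.

```python
# Minimal sketch of second-wave "statistical learning": instead of hand-written
# rules, the program derives its own decision procedure from labeled examples.
# Data and method are illustrative assumptions, not taken from the sources above.

from statistics import mean

# Toy training set of (feature vector, label) pairs; in practice this is "big data".
training = [
    ((1.0, 1.2), "cat"), ((0.8, 1.0), "cat"), ((1.1, 0.9), "cat"),
    ((3.0, 3.2), "dog"), ((3.3, 2.9), "dog"), ((2.9, 3.1), "dog"),
]

# "Training": summarize the examples into a statistical model (per-class means).
model = {}
for label in {lbl for _, lbl in training}:
    points = [x for x, lbl in training if lbl == label]
    model[label] = tuple(mean(p[i] for p in points) for i in range(len(points[0])))

def predict(x):
    """Classify an unseen input by its distance to the learned class means."""
    return min(model, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, model[lbl])))

print(predict((1.0, 1.1)))  # -> "cat", a case never seen during training
```

Note that if the training examples are skewed, the learned averages are skewed as well, which is the unreliability described for second-wave systems above.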

Experts generally agree that it will be many decades before the field advances to develop General AI, which refers to systems capable of human-level intelligence across a broad range of tasks.[15] Nevertheless, the growing power of Narrow AI algorithms has sparked a wave of commercial interest.

Overview

"Artificial intelligence is more than the simple automation of existing processes: it involves, to greater or lesser degrees, setting an outcome and letting a computer program find its own way there. It is this creative capacity that gives artificial intelligence its power. But it also challenges some of our assumptions about the role of computers and our relationship to them."[16]

AI attempts to emulate the results of human reasoning by organizing and manipulating factual and heuristic knowledge. Areas of AI activity include expert systems, natural language understanding, speech recognition, vision, and robotics.

Examples of AI already in use include: communicating with computers in natural language, deriving new insights from transport data, operating autonomous and adaptive robotic systems, managing supply chains, and designing more life-like video games. Applied AI is already changing business practices across financial services, law, medicine, accounting, tax, audit, architecture, consulting, customer service, manufacturing and transport. . . . AI could improve the functioning of most digital operations, products and services. Wherever a process uses digital data, AI may enable us to use that data more effectively and in new ways.[17]

What has made AI possible is

the confluence of four advancing technologies . . . vast increases in computing power and progress in machine learning techniques . . . breakthroughs in the field of machine perception . . . [and] improvements in the industrial design of robots.[18]

Cybersecurity

Today's AI has important applications in cybersecurity, and is expected to play an increasing role for both defensive and offensive cyber measures. Currently, designing and operating secure systems requires significant time and attention from experts. Automating this expert work partially or entirely may increase security across a much broader range of systems and applications at dramatically lower cost, and could increase the agility of the Nation's cyber-defenses. Using AI may help maintain the rapid response required to detect and react to the landscape of evolving threats.
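
One common defensive pattern, sketched below purely for illustration, is statistical anomaly detection: a baseline of normal activity is learned from logs, and observations that deviate sharply from that baseline are flagged for human review. The baseline figures, threshold, and function name are hypothetical and are not drawn from the sources cited in this article.

```python
# Illustrative sketch (not from the sources cited above) of one way statistical
# techniques are used defensively: flag activity that deviates sharply from a
# learned baseline, so analysts are alerted faster than manual review allows.

from statistics import mean, stdev

# Hypothetical baseline: failed logins per hour observed during normal operation.
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(failed_logins: int, z_threshold: float = 3.0) -> bool:
    """Flag an observation more than z_threshold standard deviations above normal."""
    return (failed_logins - mu) / sigma > z_threshold

print(is_anomalous(5))    # False: within the normal range
print(is_anomalous(40))   # True: likely worth an analyst's attention
```

A production system would combine many such signals and periodically retrain its baseline as normal traffic patterns change.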

Military

Challenging issues are raised by the potential use of AI in weapon systems.[19] The United States has incorporated autonomy in certain weapon systems for decades, allowing for greater precision in the use of weapons and safer, more humane military operations. Nonetheless, moving away from direct human control of weapon systems involves some risks and can raise legal and ethical questions.

"The key to incorporating autonomous and semi-autonomous weapon systems into American defense planning is to ensure that U.S. Government entities are always acting in accordance with international humanitarian law, taking appropriate steps to control proliferation, and working with partners and Allies to develop standards related to the development and use of such weapon systems. The United States has actively participated in ongoing international discussion on Lethal Autonomous Weapon Systems, and anticipates continued robust international discussion of these potential weapon systems. Agencies across the U.S. Government are working to develop a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.

Safety

Use of AI to control physical-world equipment leads to concerns about safety, especially as systems are exposed to the full complexity of the human environment. A major challenge in AI safety is building systems that can safely transition from the 'closed world' of the laboratory into the outside 'open world' where unpredictable things can happen. Adapting gracefully to unforeseen situations is difficult yet necessary for safe operation. Experience in building other types of safety-critical systems and infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners about verification and validation, how to build a safety case for a technology, how to manage risk, and how to communicate with stakeholders about risk.

Economic impact

[B]etween now and 2030, artificial intelligence will . . . increase global gross economic product by $13 trillion.[20]

AI's central economic effect in the short term will be the automation of tasks that could not be automated before. This will likely increase productivity and create wealth, but it may also affect particular types of jobs in different ways, reducing demand for certain skills that can be automated while increasing demand for other skills that are complementary to AI. Analysis by the White House Council of Economic Advisors (CEA) suggests that the negative effect of automation will be greatest on lower-wage jobs, and that there is a risk that AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality. Public policy can address these risks, ensuring that workers are retrained and able to succeed in occupations that are complementary to, rather than competing with, automation. Public policy can also ensure that the economic benefits created by AI are shared broadly, and assure that AI responsibly ushers in a new age in the global economy.

References

  1. Unboxing Artificial Intelligence: 10 steps to protect Human Rights, at 24.
  2. One Hundred Year Study on Artificial Intelligence, at 4.
  3. ITU, "Compendium of Approved ITU-T Security Definitions," at 23 (Feb. 2003 ed.) (full-text).
  4. U.S. Congress, H.R. 4625 and S. 2217 (Dec. 12, 2017).
  5. U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, at 7-8.
  6. Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019, Pub. L. No. 115-232, 132 Stat. 1636, 1695 (Aug. 13, 2018) (codified at 10 U.S.C. § 2358, note).
  7. Computer Science and Artificial Intelligence, at 1.
  8. See, e.g., Warren S. McCulloch & Walter H. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," 5 Bull. of Mathematical Biophysics 115 (1943).
  9. Restated in modern terms, the "Turing Test" (also called the "Imitation Game") puts a human judge in a text-based chat room with either another person or a computer. The human judge can interrogate the other party and carry on a conversation, and then the judge is asked to guess whether the other party is a person or a computer. If a computer can consistently fool human judges in this game, then the computer is deemed to be exhibiting intelligence.
  10. See J. McCarthy et al., "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (Aug. 31, 1955) (full-text).
  11. See "DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies" (Sept. 7, 2018) (full-text).
  12. Arati Prabhakar, former Director of DARPA, "Powerful but Limited: A DARPA Perspective on AI," presentation at National Academies of Sciences, Engineering, and Medicine workshop, Robotics and Artificial Intelligence: Policy Implications for the Next Decade (Dec. 12, 2016) (full-text).
  13. See "DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies" (Sept. 7, 2108) (full-text).
  14. A more detailed history of AI is available in the Appendix of the AI 100 Report — One Hundred Year Study on Artificial Intelligence.
  15. Preparing for the Future of Artificial Intelligence, at 7-9.
  16. Artificial Intelligence: Opportunities and Implications for the Future of Decision Making, at 5.
  17. Growing the Artificial Intelligence Industry in the UK, at 8.
  18. Jerry Kaplan, "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence" 38-39 (2015).
  19. See generally Artificial Intelligence and National Security.
  20. Artificial Intelligence: A Roadmap for California, at 4.

External resources

  • Nick Bostrom, "Superintelligence: Paths, Dangers, Strategies" (Oxford Univ. Press, 2014).
  • Frank Chen, "AI, Deep Learning, and Machine Learning: A Primer," Andreessen Horowitz (June 10, 2016) (full-text).
  • Kate Crawford, "Artificial Intelligence's White Guy Problem," The New York Times (June 25, 2016) (full-text).