The Risk of Artificial Intelligence in Cyber Security and the Role of Humans
Abstract:
This paper presents and analyzes
reported failures of artificially intelligent systems and extrapolates that analysis
to future AIs. We suggest that both the frequency and the seriousness of future AI
failures will steadily increase. AI Safety can be improved based on ideas developed
by cybersecurity experts. For narrow AIs, safety failures are at the same, moderate
level of criticality as in cybersecurity; for general AI, however, failures have
a fundamentally different impact. A single failure of a superintelligent system
may cause a catastrophic event without a chance for recovery. The goal of cybersecurity
is to reduce the number of successful attacks on the system; the goal of AI Safety
is to make sure zero attacks succeed in bypassing the safety mechanisms. Unfortunately,
such a level of performance is unachievable. Every security system will eventually
fail; there is no such thing as a 100% secure system.

Future generations may look
back at our time and identify it as one of intense change. In a few short decades,
we have morphed from a machine-based society to an information-based society, and
as this Information Age continues to mature, society has been forced to develop
a new and intimate familiarity with data-driven and algorithmic systems. We use the term
"artificial agents" to refer to devices and decision-making aids that rely on automated, data-
driven, or algorithmic learning procedures. Such agents are becoming an intrinsic
part of our regular decision-making processes. Their emergence and adoption lead
to a bevy of related policy questions.
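The claim that there is no such thing as a 100% secure system can be made concrete with a simple probabilistic sketch. The snippet below is not part of the original paper; the per-attack bypass probability and the attack counts are assumed values chosen only for illustration. It shows how even a very reliable safety mechanism accumulates a near-certain chance of at least one successful bypass over enough independent attack attempts, a level of residual risk that is tolerable for narrow AI but potentially unrecoverable for a superintelligent system.

    # Illustrative sketch (assumed numbers, not from the paper): cumulative
    # probability that a safety mechanism with a small per-attack bypass
    # probability p is breached at least once over many independent attacks.

    def cumulative_failure_probability(p_per_attack: float, attacks: int) -> float:
        """Probability that at least one of `attacks` independent attempts succeeds."""
        return 1.0 - (1.0 - p_per_attack) ** attacks

    if __name__ == "__main__":
        p = 1e-4  # assumed chance that a single attack bypasses the safety mechanism
        for n in (1_000, 10_000, 100_000):
            print(f"{n:>7} attacks -> P(at least one breach) = "
                  f"{cumulative_failure_probability(p, n):.3f}")

Under these assumed numbers, the chance of at least one breach rises from roughly 10% after 1,000 attacks to essentially 100% after 100,000, which is the sense in which every security system eventually fails.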
Keywords: AI Safety, Cybersecurity, Failures, Superintelligence, Algorithms, Advanced Persistent Threats (APT).