DOD Committed to Ethical Use of Artificial Intelligence

The Defense Department is prioritizing ethical considerations and collaboration in its approach to developing and fielding military applications of artificial intelligence, a top Pentagon technology official said today.

Michael C. Horowitz, the director of the emerging capabilities policy office in the office of the undersecretary of defense for policy, underscored the United States' commitment to leading the international conversation surrounding artificial intelligence during a panel discussion in Washington on setting rules and expectations for emerging technologies in national security.


Underpinning this commitment, Horowitz said, is a comprehensive set of policy decisions within DOD that governs the development and fielding of autonomous weapon systems, ethical artificial intelligence strategy, and the development of responsible artificial intelligence strategy and pathways.

U.S. leadership, in codifying these principles, is now driving responsible artificial intelligence policy formulation among international partners, he said.


"If you look at NATO's ethical AI principles, for example, they're very similar to the Defense Department's ethical AI principles and that's not that's not an accident," Horowitz said. "It reflects in many ways the sort of common values and perspective on how we're thinking about... when we would want to use AI and how."

He said the U.S. also led on the international stage by issuing its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy in February.


"That's a set of strong norms that lay out principles of what responsible use looks like that we're now working to bring other countries on board to endorse since we think that bringing the international community together on this issue, that there is a lot of possibility for cooperation and we want to encourage the rest of the world to take these issues as seriously as the department has," Horowitz said. "And in looking at our allies and partners, we're really encouraged by that."

That commitment to the responsible development of artificial intelligence, and its transparency concerning the development of policy surrounding emerging technologies, is also how the U.S. has distinguished itself from its global competitors, he said.

He said all DOD policy surrounding artificial intelligence and emerging technology is publicly available.


"That's in contrast to some of the competitors of the United States who are a lot less transparent in what their policies are concerning the development and use of artificial intelligence and autonomous systems, including autonomous weapons systems," Horowitz said. "And we think that there's a real distinction there."

At the same time, the U.S. has remained committed to being at the leading edge of emerging technologies, including artificial intelligence, Horowitz said.


He said the rapid advance of the technology has opened up a wide array of use cases for artificial intelligence beyond defense. The U.S. continues to be "an engine of innovation when it comes to AI."

"The Defense Department does lots and lots of different experimentation with emerging technologies," Horowitz said. "And we both want to do them in a safe and responsible way, but also want to do them in a way that can push forward the cutting edge and ensure the department has access to the emerging technologies that it needs to stay ahead."




Title: A method for ethical AI in Defence: A case study on developing trustworthy autonomous systems

Abstract: What does it mean to be responsible and responsive when developing and deploying trusted autonomous systems in Defence? In this short reflective article, we describe a case study of building a trusted autonomous system - Athena AI - within an industry-led, government-funded project with diverse collaborators and stakeholders. Using this case study, we draw out lessons on the value and impact of embedding responsible research and innovation-aligned, ethics-by-design approaches and principles throughout the development of technology at high translation readiness levels.
Comments: 10 pages, 2 tables, pre-print approved for publication in the Special Issue Reflections on Responsible Research and Innovation for Trustworthy Autonomous Systems in the Journal of Responsible Technology
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI)
ACM classes: K.4.0; K.5


Assessing ethical AI principles in defense

Mark MacCarthy, Nonresident Senior Fellow, Governance Studies, Center for Technology Innovation

November 15, 2019

On October 31, the Defense Innovation Board unveiled principles for the ethical use of AI by the Defense Department, which call for AI systems in the military to be responsible, equitable, reliable, traceable, and governable. Though the recommendations are non-binding, the Department is likely to implement a substantial number of them. The special focus on AI-based weapons arises because their speed and precision make them indispensable in modern warfare. Meanwhile, their novel elements create new and substantial risks that must be managed successfully to take advantage of these new capabilities.

What makes AI weapons systems so controversial?

The chief concern of the Board was the possibility that an AI weapon system might not perform as intended, with potentially catastrophic results. Machine learning incorporated into weapons systems might learn to carry out unintended attacks on targets that the military had not approved and escalate a conflict. They might in some other way escape from the area of use for which they had been designed and launched, with disastrous outcomes.

As former Secretary of the Navy Richard Danzig has noted, whenever an organization is using a complex technological system to achieve its mission, it is to some extent playing "technological roulette." Analyses of the 1979 Three Mile Island nuclear power incident and the 1986 Challenger Space Shuttle disaster have shown that a combination of organizational, technical, and institutional factors can cause these systems to behave in unintended ways and lead to disasters. The Department of Defense has devoted substantial resources to unearthing the causes of the 1988 incident in which the cruiser USS Vincennes shot down an Iranian civilian airliner, killing 290 people. That tragedy had nothing to do with advanced machine learning techniques, but AI weapons systems raise new ethical challenges that call for fresh thinking.

Principles for AI weapons systems

Among these principles, several key points stand out. One is that there is no exemption from the existing laws of war for AI weapons systems, which should not cause unnecessary suffering or be inherently indiscriminate. Using AI to support decisionmaking in the field "includes the duty to take feasible precautions to reduce the risk of harm to the civilian population."

The Defense Department has always tested and evaluated its systems to make sure that they perform reliably as intended. But the Board warns that AI weapon systems can be "non-deterministic, nonlinear, high-dimensional, probabilistic, and continually learning." When they have these characteristics, traditional testing and validation techniques are "insufficient."
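To make concrete why a one-off demonstration test falls short for such systems, consider a minimal statistical sketch. It is written in Python purely for illustration; the simulation stub and its 2% failure rate are invented assumptions, not anything from the Board's report.

```python
import random

def run_engagement_simulation(seed: int) -> bool:
    """One stochastic trial of a hypothetical non-deterministic system.

    Returns True if the system stayed within its authorized envelope.
    In practice this would drive a high-fidelity simulation; here it is
    a stand-in with an assumed 2% out-of-envelope rate.
    """
    return random.Random(seed).random() > 0.02

def estimate_failure_rate(trials: int = 10_000) -> tuple[float, float]:
    """Estimate the out-of-envelope rate with a ~95% error half-width."""
    failures = sum(not run_engagement_simulation(seed) for seed in range(trials))
    p = failures / trials
    half_width = 1.96 * (p * (1 - p) / trials) ** 0.5  # normal approximation
    return p, half_width

if __name__ == "__main__":
    p, hw = estimate_failure_rate()
    # A single passing demo run tells us almost nothing; only aggregate
    # statistics over many randomized trials are informative.
    print(f"estimated failure rate: {p:.4f} +/- {hw:.4f}")
```

The point of the sketch is that for a probabilistic system the meaningful test artifact is a failure-rate estimate with an error bar, not a pass/fail result from one run.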

The Board strongly recommended that the Department develop mitigation strategies and technological requirements for AI weapons systems that “foreseeably have a risk of unintentional escalation.” The group pointed to the circuit breakers established by the Securities and Exchange Commission to halt trading on exchanges as models. They suggested analogues in the military context including “limitations on the types or amounts of force particular systems are authorized to use, the decoupling of various AI cyber systems from one another, or layered authorizations for various operations.”
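As a rough sketch of that circuit-breaker analogy, the following Python fragment trips once a pre-authorized ceiling is exceeded and refuses further requests until re-authorization. The interface, thresholds, and units are all hypothetical; the Board's report describes the idea only at the level of policy.

```python
from dataclasses import dataclass, field

@dataclass
class EngagementBreaker:
    """Hypothetical circuit breaker for an autonomous system.

    Mirrors the exchange-trading analogy: once cumulative engagements or
    force exceed a pre-authorized ceiling, the breaker trips and every
    subsequent request is refused until a human re-authorizes it.
    """
    max_engagements: int      # ceiling authorized for this mission
    max_force_units: float    # abstract cap on total force employed
    engagements: int = 0
    force_used: float = 0.0
    tripped: bool = field(default=False, init=False)

    def authorize(self, force: float) -> bool:
        """Approve one engagement request, or trip the breaker."""
        if self.tripped:
            return False
        if (self.engagements + 1 > self.max_engagements
                or self.force_used + force > self.max_force_units):
            self.tripped = True   # analogous to a trading halt
            return False
        self.engagements += 1
        self.force_used += force
        return True

breaker = EngagementBreaker(max_engagements=3, max_force_units=10.0)
print([breaker.authorize(f) for f in (4.0, 4.0, 4.0)])  # [True, True, False]
```

The same pattern covers the report's other suggestions: decoupling systems means each gets its own breaker, and layered authorizations means higher ceilings require sign-off from higher echelons.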

The Department’s 2012 directive 3000.09 recommended that commanders and operators should always be able to exercise “appropriate levels of human judgment” over the use of autonomous weapons in the field. The idea was that the contexts in which AI systems might be used in the military differ in so many crucial details that no more precise rules can be formulated in the abstract. The Board agreed with this reasoning. It did not try to make this guidance more precise, saying instead it is “a standard to continue using.” But it did add other elements to this guidance through a discussion of an off switch for AI weapons systems.

The Board publicly debated whether humans should be able to turn off AI weapons systems, even after they have been activated. The discussion seemed to turn on whether the systems would have to be slow enough for humans to intervene, which in many cases would defeat the purpose. In the end, the Board agreed that there had to be an off switch, but it might have to be triggered automatically without human intervention. In this way, the Board recognized the reality that “due to the scale of interactions, time, and cost, humans cannot be ‘in the loop’ all the time.” Others, including Danzig, have noted “communications and processing speed tilt the equation against human decision making.” The report moves beyond reliance on human decisionmakers to recommend designing systems that can disengage or deactivate automatically when they begin to go off course.
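A minimal sketch of such an automatic off switch follows, assuming (hypothetically) a controller that must both report positions inside an authorized area and check in before a timeout; silence or a boundary violation triggers disengagement with no human in the loop.

```python
import time

class Watchdog:
    """Automatic 'off switch' sketch: disengages on geofence breach or silence."""

    def __init__(self, timeout: float, in_authorized_area) -> None:
        self.timeout = timeout                        # seconds of allowed silence
        self.in_authorized_area = in_authorized_area  # callable: position -> bool
        self.last_heartbeat = time.monotonic()
        self.engaged = True

    def heartbeat(self, position) -> None:
        """Called by the controller; disengage if it has strayed off course."""
        if self.engaged and not self.in_authorized_area(position):
            self.disengage("left authorized area")
        self.last_heartbeat = time.monotonic()

    def poll(self) -> None:
        """Called on a timer; disengage if the controller has gone silent."""
        if self.engaged and time.monotonic() - self.last_heartbeat > self.timeout:
            self.disengage("heartbeat lost")

    def disengage(self, reason: str) -> None:
        self.engaged = False
        print(f"DISENGAGE: {reason}")  # a real system would safe the platform

# Usage with a simple bounding-box geofence:
wd = Watchdog(timeout=0.5,
              in_authorized_area=lambda p: 0 <= p[0] <= 10 and 0 <= p[1] <= 10)
wd.heartbeat((3, 4))    # in bounds: stays engaged
wd.heartbeat((12, 4))   # out of bounds: prints "DISENGAGE: left authorized area"
```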

Implementing these principles

The Board reported that there have already been exercises with DOD personnel to see how some of the principles would work in practice. It would be especially important to implement one of the Board's most thoughtful and most consequential recommendations: to develop a risk management typology. This framework would categorize AI-based military applications by "their ethical, safety, and legal risk considerations," with rapid adoption of mature technologies in low-risk applications and greater precaution in less mature applications that might lead to "more significant adverse consequences."
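The report stops at prose, but the typology's core logic can be sketched as a small decision table. The tiers and adoption postures below are invented for illustration and are not the Board's actual categories.

```python
from enum import Enum

class Maturity(Enum):
    MATURE = "mature"
    EMERGING = "emerging"

class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"

def adoption_posture(maturity: Maturity, risk: RiskTier) -> str:
    """Map an application's (maturity, risk) pair to an adoption posture."""
    if maturity is Maturity.MATURE and risk is RiskTier.LOW:
        return "rapid adoption"
    if risk is RiskTier.HIGH:
        return "extended test and evaluation with senior-level review"
    return "staged pilots with monitoring"

print(adoption_posture(Maturity.MATURE, RiskTier.LOW))     # rapid adoption
print(adoption_posture(Maturity.EMERGING, RiskTier.HIGH))  # extended review
```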

Next steps might be for the Board or Department leaders to reach out to the group of AI researchers seeking to discourage scientists from working on AI military research and the human rights groups seeking an international treaty banning fully autonomous weapons. The Department's aim of seeking reliable weapons systems that do not engage in unintended campaigns coincides with the critics' aim to prevent the development of out-of-control systems that violate the laws of war. The Board's report can be seen by both sides as a signal of good faith and a demonstration that there is much common ground as the basis for a discussion.



Technical report | A Method for Ethical AI in Defence

Recent developments in artificial intelligence (AI) have highlighted the significant potential of the technology to increase Defence capability including improving performance, removing humans from high-threat environments, reducing capability costs and achieving asymmetric advantage. However, significant work is required to ensure that introducing the technology does not result in adverse outcomes. Defence's challenge is that failure to adopt AI in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harms. To explore how to achieve ethical AI in Defence, a workshop was held in Canberra from 30 July to 1 August 2019 with 104 people from 45 organisations in attendance, including representatives from Defence, other Australian government agencies, the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC), civil society, universities and Defence industry.

The workshop was designed to elicit evidence-based hypotheses regarding ethical AI from a diverse range of perspectives and contexts and produce pragmatic methods to manage ethical risks on AI projects in Defence.

Twenty topics emerged from the workshop, including: education, command, effectiveness, integration, transparency, human factors, scope, confidence, resilience, sovereign capability, safety, supply chain, test and evaluation, misuse and risks, authority pathway, data subjects, protected symbols and surrender, de-escalation, explainability and accountability.

These topics were categorised into five facets of ethical AI:

  • Responsibility – who is responsible for AI?
  • Governance – how is AI controlled?
  • Trust – how can AI be trusted?
  • Law – how can AI be used lawfully?
  • Traceability – how are the actions of AI recorded?

A further outcome of the workshop was the development of a practical methodology that could support AI project managers and teams to manage ethical risks. This methodology includes three tools: an Ethical AI for Defence Checklist, Ethical AI Risk Matrix and a Legal and Ethical Assurance Program Plan (LEAPP).
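The report's tools are documents rather than software, but the Ethical AI Risk Matrix can be pictured as a simple likelihood-by-severity register keyed to the five facets above. The scales, scores, and example entries in this Python sketch are invented for illustration.

```python
from dataclasses import dataclass

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

@dataclass
class EthicalRisk:
    facet: str          # Responsibility, Governance, Trust, Law or Traceability
    description: str
    likelihood: str     # key into LIKELIHOOD
    severity: str       # key into SEVERITY

    def score(self) -> int:
        return LIKELIHOOD[self.likelihood] * SEVERITY[self.severity]

register = [
    EthicalRisk("Traceability", "decisions not logged for later audit",
                "possible", "severe"),
    EthicalRisk("Law", "protected symbols not reliably recognised",
                "rare", "severe"),
]

# Rank risks so the highest-scoring items get mitigation plans first.
for risk in sorted(register, key=EthicalRisk.score, reverse=True):
    print(risk.score(), risk.facet, "-", risk.description)
```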

It is important to note that the facets, topics and methods developed are evidence-based results of a single workshop only, rather than exhaustive of all ethical AI considerations (there were many more ideas expressed that may be valid under further scrutiny and research). Furthermore, A Method for Ethical AI in Defence does not represent the views of the Australian Government. Additional workshops are recommended, and more stakeholders should be engaged, to further explore appropriate frameworks and methods for using AI ethically within Defence.

Key information

Authors: Kate Devitt, Michael Gan, Jason Scholz and Robert Bolia

Publication number: DSTG-TR-3786

Publication type: Technical report

Publish date: February 2021

Ethical Principles for Artificial Intelligence in National Defence

  • First Online: 08 November 2022


Mariarosaria Taddeo, David McNeish, Alexander Blanchard & Elizabeth Edgar

Part of the book series: Digital Ethics Lab Yearbook (DELY)


Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This chapter provides one such framework. It identifies five principles (justified and overridable uses; just and transparent systems and processes; human moral responsibility; meaningful human control; reliable AI systems) and related recommendations to foster ethically sound uses of AI for national defence purposes.


https://www.gov.uk/government/publications/future-force-concept-jcn-117

https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF

Roberts et al. (2020).

https://www.csa.gov.sg/~/media/csa/documents/publications/singaporecybersecuritystrategy.pdf

https://www.nisc.go.jp/eng/pdf/cs-senryaku2018-en.pdf

https://www.business.gov.au/news/budget-2019-20

www.aitesting.org

https://assets.kpmg/content/dam/kpmg/xx/pdf/2018/04/next-major-defense-challenge.pdf

https://www.nato.int/docu/review/articles/2019/02/12/natos-role-in-cyberspace/index.html

https://www.un.org/en/sections/un-charter/un-charter-full-text/

https://www.loc.gov/rr/frd/Military_Law/pdf/ASubjScd-27-1_1975.pdf

https://www.icrc.org/en/doc/resources/documents/misc/57jm93.htm

https://www.roke.co.uk/products/startle

https://breakingdefense.com/2019/03/atlas-killer-robot-no-virtual-crewman-yes/

https://www.oecd.org/going-digital/ai/principles/

It should be noted that the High-Level Expert Group's principles also include provisions for human control, but given its focus on trustworthy AI, these are more flexible. For example, it allows that less human oversight may be exercised so long as more extensive testing and stricter governance are in place.

Acalvio Autonomous Deception. (2019). Acalvio. 2019. https://www.acalvio.com/

Asaro, P. (2012). On banning Autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94 (886), 687–709. https://doi.org/10.1017/S1816383112000768


BehavioSec: Continuous Authentication Through Behavioral Biometrics. (2019). BehavioSec. 2019. https://www.behaviosec.com/

Boardman, M., & Butcher, F. (2019). An exploration of maintaining human control in AI enabled systems and the challenges of achieving it . STO-MP-IST-178.


Boulanin, V., Carlsson, M. P., Goussac, N., & Davidson, D. (2020). Limits on autonomy in weapon systems: Identifying practical elements of human control . Stockholm International Peace Research Institute and the International Committee of the Red Cross. https://www.sipri.org/publications/2020/other-publications/limits-autonomy-weapon-systems-identifying-practical-elements-human-control-0

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., & Dafoe, A., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. ArXiv:1802.07228 [Cs], February. http://arxiv.org/abs/1802.07228

Brunstetter, D., & Braun, M. (2013). From jus ad bellum to jus ad vim: Recalibrating our understanding of the moral use of force. Ethics & International Affairs, 27 (01), 87–106. https://doi.org/10.1017/S0892679412000792

DarkLight Offers First of Its Kind Artificial Intelligence to Enhance Cybersecurity Defenses. (2017). Business Wire . 26 July 2017. https://www.businesswire.com/news/home/20170726005117/en/DarkLight-Offers-Kind-Artificial-Intelligence-Enhance-Cybersecurity

DeepLocker: How AI Can Power a Stealthy New Breed of Malware. (2018). Security intelligence (blog). 8 August 2018. https://securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/

Department for Digital, Culture, Media & Sport. (2018). Data Ethics Framework . https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework

DIB. (2020a). AI principles: Recommendations on the ethical use of Artificial Intelligence by the Department of Defense . https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF

DIB. (2020b). AI principles: Recommendations on the ethical use of Artificial Intelligence by the Department of Defense - supporting document . Defence Innovation Board [DIB]. https://media.defense.gov/2019/Oct/31/2002204459/-1/-1/0/DIB_AI_PRINCIPLES_SUPPORTING_DOCUMENT.PDF

Docherty, B. (2014). Shaking the foundations: The human rights implications of killer robots . Human Rights Watch. https://www.hrw.org/report/2014/05/12/shaking-foundations/human-rights-implications-killer-robots

Ekelhof, M. (2019). Moving beyond semantics on Autonomous weapons: Meaningful human control in operation. Global Policy, 10 (3), 343–348. https://doi.org/10.1111/1758-5899.12665

Enemark, C. (2011). Drones over Pakistan: Secrecy, ethics, and counterinsurgency. Asian Security, 7 (3), 218–237. https://doi.org/10.1080/14799855.2011.615082

Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18 (3), 303–329. https://doi.org/10.1007/s11023-008-9113-7

Floridi, L. (2016a). Mature information societies—A matter of expectations. Philosophy & Technology, 29 (1), 1–4. https://doi.org/10.1007/s13347-016-0214-6

Floridi, L. (2016b). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374 (2083), 20160112. https://doi.org/10.1098/rsta.2016.0112

Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31 (1), 1–8. https://doi.org/10.1007/s13347-018-0303-9

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in Society. Harvard Data Science Review . https://doi.org/10.1162/99608f92.8cd550d1 .

Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26 (3), 1771–1796. https://doi.org/10.1007/s11948-020-00213-5

Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14 (3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Fraga-Lamas, P., Fernández-Caramés, T. M., Suárez-Albela, M., Castedo, L., & González-López, M. (2016). A review on internet of things for defense and public safety. Sensors (Basel, Switzerland), 16 (10). https://doi.org/10.3390/s16101644

Gavaghan, C., Knott, A., Maclaurin, J., Zerilli, J., & Liddicoat, J. (2019). Government use of artificial intelligence in New Zealand, Final report on phase 1 of the law Foundation’s artificial intelligence and law in New Zealand project’. In New Zealand Law Foundation . https://www.cs.otago.ac.nz/research/ai/AI-Law/NZLF%20report.pdf

International Telecommunications Union. (2017). Minimum Requirements Related to Technical Performance for IMT-2020 Radio Interface(s) . 2017. https://www.itu.int/pub/R-REP-M.2410-2017

Japanese Society for Artificial Intelligence [JSAI]. (2017). Ethical Guidelines . http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Johnson, A. M., & Axinn, S. (2013). The morality of Autonomous robots. Journal of Military Ethics, 12 (2), 129–141. https://doi.org/10.1080/15027570.2013.818399

King, T. M., Arbon, J., Santiago, D., Adamo, D., Chin, W., & Shanmugam, R. (2019). AI for testing today and tomorrow: Industry perspectives. In 2019 IEEE international conference on Artificial Intelligence Testing (AITest) (pp. 81–88). IEEE. https://doi.org/10.1109/AITest.2019.000-3


Kott, A., Swami, A., & West, B. J. (2017). The internet of Battle things . ArXiv:1712.08980 [Cs], December. http://arxiv.org/abs/1712.08980

Lysaght, R. J., Harris, R., & Kelly, W. (1988). Artificial intelligence for command and control . ANALYTICS INC WILLOW GROVE PA. https://apps.dtic.mil/docs/citations/ADA229342

McMahan, J. (2013). Forward. In R. Jenkins, M. Robillard, & B. J. Strawser (Eds.), Who should die? The ethics of killing in war (pp. ix–xiv). Oxford University Press.

Mirsky, Y., Mahler, T., Shelef, I., & Elovici, Y. (2019). CT-GAN: Malicious tampering of 3D medical imagery using deep learning . ResearchGate. https://www.researchgate.net/publication/330357848_CT-GAN_Malicious_Tampering_of_3D_Medical_Imagery_using_Deep_Learning/figures?lo=1

Mökander, J., & Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI. Minds and Machines , 1–5. https://doi.org/10.1007/s11023-021-09557-8

NATO. (2020). NATO 2030: United for a new era . Brussels. https://www.nato.int/nato_static_fl2014/assets/pdf/2020/12/pdf/201201-Reflection-Group-Final-Report-Uni.pdf

O’Connell, M. E. (2014). The American way of bombing: How legal and ethical norms change. In M. Evangelista & H. Shue (Eds.), The American way of bombing changing ethical and legal norms, from flying fortresses to drones . Cornel University Press.

Rigaki, M., & Elragal, A. (2017). Adversarial deep learning against intrusion detection classifiers (p. 14). Luleå tekniska universitet, Datavetenskap.

Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2020). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. AI & SOCIETY, 36 . https://doi.org/10.1007/s00146-020-00992-2

Schubert, J., Brynielsson, J., Nilsson, M., & Svenmarck, P. (2018). Artificial intelligence for decision support in command and control systems , p. 15.

Sharkey, A. (2019). Autonomous weapons systems, killer robots and human dignity. Ethics and Information Technology, 21 (2), 75–87. https://doi.org/10.1007/s10676-018-9494-0

Sharkey, N. (2010). Saying “no!” to lethal Autonomous targeting. Journal of Military Ethics, 9 (4), 369–383. https://doi.org/10.1080/15027570.2010.537903

Sharkey, N. (2012a). Killing made easy: From joysticks to politics. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 111–128). MIT Press.

Sharkey, N. E. (2012b). The Evitability of Autonomous robot warfare. International Review of the Red Cross, 94 (886), 787–799. https://doi.org/10.1017/S1816383112000732

Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24 (1), 62–77. https://doi.org/10.1111/j.1468-5930.2007.00346.x

Sparrow, R. (2016). Robots and respect: Assessing the case against Autonomous weapon systems. Ethics & International Affairs, 30 (1), 93–116. https://doi.org/10.1017/S0892679415000647

Taddeo, M. (2012a). Information warfare: A philosophical perspective. Philosophy and Technology, 25 (1), 105–120.

Taddeo, M. (2012b). An analysis for a just cyber warfare. In Fourth international conference of cyber conflict . NATO CCD COE and IEEE Publication.

Taddeo, M. (2013). Cyber security and individual rights, striking the right balance. Philosophy & Technology, 26 (4), 353–356. https://doi.org/10.1007/s13347-013-0140-9

Taddeo, M. (2014a). Just information warfare. Topoi , 1–12. https://doi.org/10.1007/s11245-014-9245-8

Taddeo, M. (2014b). The struggle between liberties and authorities in the information age. Science and Engineering Ethics , 1–14. https://doi.org/10.1007/s11948-014-9586-0

Taddeo, M. (2017a). The limits of deterrence theory in cyberspace. Philosophy & Technology, 31 . https://doi.org/10.1007/s13347-017-0290-2

Taddeo, M. (2017b). Trusting Digital Technologies Correctly. Minds and Machines 27 (4), 565–68. https://doi.org/10.1007/s11023-017-9450-5 .

Taddeo, M. (2019a). The challenges of cyber deterrence. In C. Öhman & D. Watson (Eds.), The 2018 yearbook of the digital ethics lab (pp. 85–103). Springer. https://doi.org/10.1007/978-3-030-17152-0_7

Taddeo, M. (2019b). Three ethical challenges of applications of artificial intelligence in cybersecurity. Minds and Machines, 29 (2), 187–191. https://doi.org/10.1007/s11023-019-09504-8

Taddeo, M., & Floridi, L. (2018). Regulate artificial intelligence to avert cyber arms race. Nature, 556 (7701), 296–298. https://doi.org/10.1038/d41586-018-04602-6

Taddeo, M., McCutcheon, T., & Floridi, L. (2019). Trusting artificial intelligence in cybersecurity is a double-edged sword. Nature Machine Intelligence, 1 (12), 557–560. https://doi.org/10.1038/s42256-019-0109-1

Tamburrini, G. (2016). On banning autonomous weapons systems: From deontological to wide consequentialist reasons. In B. Nehal, S. Beck, R. Geiβ, H.-Y. Liu, & C. Kreβ (Eds.), Autonomous weapons systems: Law, ethics, policy (pp. 122–142). Cambridge University Press.

The UK and International Humanitarian Law 2018. (n.d.) Accessed 1 Nov 2020. https://www.gov.uk/government/publications/international-humanitarian-law-and-the-uk-government/uk-and-international-humanitarian-law-2018

US Army. (2017). Robotic and autonomous systems strategy . https://www.tradoc.army.mil/Portals/14/Documents/RAS_Strategy.pdf

Yang, G.-Z., Bellingham, J., Dupont, P. E., Fischer, P., Floridi, L., Full, R., Jacobstein, N., et al. (2018). The grand challenges of science robotics. Science Robotics, 3 (14), eaar7650. https://doi.org/10.1126/scirobotics.aar7650

Zhuge, J., Holz, T., Han, X., Song, C., & Zou, W. (2007). Collecting Autonomous spreading malware using high-interaction honeypots. In S. Qing, H. Imai, & G. Wang (Eds.), Information and communications security (Lecture Notes in Computer Science) (pp. 438–451). Springer.


Acknowledgement

We are very grateful to Isaac Taylor for his work and comments on an early version of this chapter and to Rebecca Hogg and the participants of the 2020 Dstl AI Fest for their questions and comments, for they enabled us to improve several aspects of our analysis. We are responsible for any remaining mistakes.

Mariarosaria Taddeo and Alexander Blanchard’s work on this chapter has been funded by the Dstl Ethics Fellowship held at the Alan Turing Institute. The research underpinning this work was funded by the UK Defence Chief Scientific Advisor’s Science and Technology Portfolio, through the Dstl Autonomy Programme. This chapter is an overview of UK Ministry of Defence (MOD) sponsored research and is released for informational purposes only. The contents of this paper should not be interpreted as representing the views of the UK MOD, nor should it be assumed that they reflect any current or future UK MOD policy. The information contained in this chapter cannot supersede any statutory or contractual requirements or liabilities and is offered without prejudice or commitment.

Author information

Authors and Affiliations

Oxford Internet Institute, University of Oxford, Oxford, UK

Mariarosaria Taddeo

Alan Turing Institute, London, UK

Mariarosaria Taddeo & Alexander Blanchard

Defence Science Technology Laboratory (Dstl), Salisbury, UK

David McNeish & Elizabeth Edgar


Corresponding author

Correspondence to Mariarosaria Taddeo.

Editor information

Editors and Affiliations

Jakob Mökander

Center for Information Technology Policy, Princeton University, Princeton, NJ, USA

Marta Ziosi


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Taddeo, M., McNeish, D., Blanchard, A., Edgar, E. (2022). Ethical Principles for Artificial Intelligence in National Defence. In: Mökander, J., Ziosi, M. (eds) The 2021 Yearbook of the Digital Ethics Lab. Digital Ethics Lab Yearbook. Springer, Cham. https://doi.org/10.1007/978-3-031-09846-8_16

DOI: https://doi.org/10.1007/978-3-031-09846-8_16

Published: 08 November 2022

Publisher: Springer, Cham

Print ISBN: 978-3-031-09845-1

Online ISBN: 978-3-031-09846-8


Center for Ethics and the Rule of Law – University of Pennsylvania (CERL, Penn)

Ethical Dilemmas in the Global Defense Industry


The Conference

The defense industry operates at the intersection of the public and private sectors in a global arena and routinely interacts with foreign legal systems and diverse cultures.  Navigating these different contexts creates challenges for the defense industry, particularly where legal and ethical norms conflict.  How should a defense industry company conduct business in countries where government officials operate according to different moral norms?  Should the defense industry be responsive to ethical objections to technological developments in the context of surveillance or controversial new weapons such as autonomous weapons systems?  Should the global defense industry be held to a higher standard than other industries given the sensitive and potentially controversial nature of its enterprise?  Domestically, other pressing questions arise.  Should partnerships between the defense industry and institutions of higher learning be encouraged?  Do such partnerships raise ethical concerns?

The purpose of this conference, held in partnership with Lockheed Martin Corporation, is to inspire constructive discussion pertaining to such questions, by bringing  together distinguished practitioners and scholars from the private sector, academia, government service and the military to engage in an in-depth exploration of the moral and legal challenges facing the global defense industry.

WEDNESDAY, APRIL 15, 2015

4:30 – 6:00 pm: Senior Associate. Partnering with "front-line" militaries has become a centerpiece of President Obama's counter-terrorism policy. Yet the governments those militaries serve might be described as sophisticated criminal organizations, whose core objective is the use of public office to amass personal gain. Though human rights considerations do constrain some delivery of U.S. military assistance, the problem may be broader than the Leahy Law, for example, draws it. Are these really the best partners in the effort to combat extremism? What precautions are being taken to avoid associating the U.S. with the abuses of these governments?

6:00 – 7:00 pm

7:30 – 9:00 pm

THURSDAY, APRIL 16, 2015

8:00 – 8:30 am: Breakfast

8:30 – 8:35 am

8:35 – 9:45 am: Chair of the Executive Board of CERL

9:45 – 10:15 am: Break

10:15 – 11:30 am: Transparency International UK

11:30 am – 1:00 pm: Dr. Patricia Harned, CEO, Ethics and Compliance Officers Association. There is evidence that organizations can empower individual employees to make good decisions in everyday business, by creating cultures and programs that foster ethics and compliance. Dr. Harned will present findings from the Ethics Research Center's (ERC) longitudinal study of the industry through the Defense Industry Benchmark (DIB), a project of the Defense Industry Initiative (DII).

1:00 – 2:15 pm: Session moderated by Professor George R. Lucas Jr., University of Notre Dame

2:15 – 2:45 pm: Break

2:45 – 4:00 pm: Claire O. Finkelstein

Participants

Mr. Jamal Ahmed

Vice President, Internal Audit and Chief Ethics Officer, Day&Zimmermann  

Major General Thomas E. Ayres

Deputy Judge Advocate, U.S. Army

Judge Harold Berger

Managing Shareholder, Berger & Montague, P.C.

Ms. Sarah Chayes

Senior Associate, Democracy and Rule of Law, Carnegie Endowment for International Peace

Mr. William R. Craven

Federal Systems

Professor Michael Davis

Illinois Institute of Technology, Philosophy

Ms. Arlene Fickler

Partner, Schnader Harrison Segal & Lewis LLP

Professor Claire Finkelstein

University of Pennsylvania, Law and Philosophy

Ms. Ashling Gallagher

Research Fellow for Center for Ethics and the Rule of Law, University of Pennsylvania

Professor Kevin Govern

Ave Maria School of Law

Mr. Paul Haaga, Jr.

Former Acting President and CEO of NPR

Dr. Patricia Harned

CEO, Ethics and Compliance Officers Association and Ethics Resource Center

Professor Nancy F. Hite

Tufts University, International Affairs

Mr. Eric Kantor

Deputy General Counsel and Chief Compliance Officer, General Electric Aviation Operation

Major General (ret.) Robert Latiff, Ph.D.

University of Notre Dame, Reilly Center for Science, Technology, and Values

Professor Sarah E. Light

University of Pennsylvania, Legal Studies and Business Ethics

Professor George R. Lucas Jr.

United States Naval Academy, Ethics

Professor Duncan MacIntosh

Dalhousie University, Philosophy

Dr. Leo S. Mackay Jr.

Vice President, Ethics & Business Conduct, Lockheed Martin Corporation

Ms. Blair C. Marks

Director, Ethics Awareness and Operations, Lockheed Martin Corporation 

Professor Christopher W. Morris

University of Maryland, Philosophy

Professor Philip M. Nichols

Mr. C. Edward Peartree

Department of State

Mr. Dean Popps

Former Assistant Secretary of the Army for Acquisition, Logistics and Technology

Mr. Mark Pyman

Director, International Defense & Security Programme, Transparency International UK

Mr. Ilya Rudyak

Director of Research for Center for Ethics and the Rule of Law, University of Pennsylvania

Mr. Timothy Schultz

Director, Business Ethics and Compliance for Raytheon Company

Professor Joshua I. Schwartz

George Washington University, Government Contracts Law

Lt. Gen (ret.) Harry E. Soyster

Center for Immigration Studies

Professor Jessica Tillipman

George Washington University, Law

Mr. Frank Vogl

Co-Founder, Transparency International

Ms. Gay Walling

Corporate and Foundation Relations Officer, University of Pennsylvania

Professor Patricia Werhane

DePaul University, Philosophy

Brigadier General (ret.) Stephen Xenakis

Center for Translational Medicine; Physicians for Human Rights

Professor Christopher R. Yukins

George Washington University, Government Procurement Law

Mr. Jules Zacher

Council for a Livable World, Attorney at Law

Background Readings

Recent articles

Defence Groups Quiet on Anti-Corruption Measures, Financial Times, April 27, 2015

Blackwater's Legacy Goes Beyond Public View, New York Times, April 14, 2015

Ex-U.S. Army Colonel Tied to Tilton Equity Firm Reaches Plea Deal, Reuters, April 8, 2015

Blackwater: One of the Pentagon's Top Contractors for Afghanistan Training, The Nation, March 31, 2015

Ethics in Government Act of 1978, 5 U.S.C. App. § 101 (2012).

Foreign Corrupt Practices Act of 1977, 15 U.S.C. §§ 78dd-1 (1998).

Procurement Integrity Act of 1988, 41 U.S.C.A. §§ 2102 (2011)

Requirement of Exemplary Conduct, 10 U.S.C. § 3583 (2014)

DOD 5500.07-R, The Joint Ethics Regulation (2011).

Exec. Order No. 12674, “Principles of Ethical Conduct for Government Officers and Employees,” (Apr. 12, 1989).

Federal Acquisition Regulation (FAR) 3.103 (2013). Part 1 | Part 2 | Part 3  

Standards of Ethical Conduct for Employees of the Executive Branch, 5 C.F.R. pt. 2635 (2014).

American Association of University Professors,  Academic Freedom and National Security in a Time of Crisis

Association of American Universities,   National Defense Education and Initiative: Meeting America’s Security Challenges in the 21st Century  (2006).

Barton H. Halpern, Keith F. Snider,  Products that Kill and Corporate Social Responsibility: The Case of U.S. Defense Firms  , 38.4 Armed Forces & Society (2012).

Brenda Kowske,  Ethical Dilemmas Across Cultures , CEO Middle East, Sept. 2007, at 54.

Charlie Cray, Lee Drutman,  Corporations and the Public Purpose: Restoring the Balance , 4.1 Seattle J. for Social Justice (2005)

Connie Glaser,  Doing a Good Job Isn’t Enough – ‘Cultural Astuteness’ is Needed to Succeed  , Business First – Louisville (July 2009).

David Ginsberg and Robert Bohn,  Let’s Get Personal: A Guide to the Interpretation and Implementation of the FAR Personal Conflicts of Interest Rules , 47.4 The Procurement Lawyer 11 (2012).

Deborah G. Johnson,  Technology with no Human Responsibility? , J. Bus. Ethics (2014).

David Miller, Tom Mills,  Counterinsurgency and Terror Expertise: The Integration of Social Scientists into the War Effort , 23 Cambridge Review of International Affairs (2010).

Doreen Lustig,  The Nature of the Nazi State and the Question of International Criminal Responsibility of Corporate Officials at Nuremberg: Revisiting Franz Neumann’s Concept of Behemoth at the Industrialist Trials , 43 N.Y.U. J. INT’L L. & POL. 965 (2011).

Edmund F. Byrne,  Assessing Arms Makers’ Corporate Social Responsibility  , 74 J. Bus. Ethics (2007).

Gavin Maitland,  The Ethics of the International Arms Trade  , 7.4 Bus. Ethics (1998).

George Lucas,  The Ethics of Defense and Private Security Contracting , in Military Ethics: What Everyone Needs to Know (forthcoming). 

George Lucas,  Legal and Ethical Precepts Governing Emerging Military Technologies: Research and Use , 5 Utah L. Rev. (2013).

Henry A. Giroux,  The Militarization of US Higher Education After 9/11 , 25.5 Theory, Culture, & Society (2008).

John Bryan Warnock,  Principled or Practical Responsibility: 60 Years of Discussion , 41 Pub. Cont. L.J. 881 (2012).

Joseph C. Bryce, Thomas J. Gibson, and Daryn E. Rush,  Ethics in Government , 29 Am. Crim. L. Rev. 315 (1991).

Joseph W. Yockey,  Choosing Governance in the FCPA Reform Debate , Journal of Corporation Law 881 (2012).

Joshua Newberg and Richard Dunn,  Keeping Secrets in the Campus Lab: Law, Values, and Rules of Engagement for Industry-University R&D Partnerships  (2002).

Leslie Green,  Legal Positivism , The Stanford Encyclopedia of Philosophy (Sept. 2009).

Margot Cleveland, Christopher M. Favo, Thomas J. Frecka, Charles L. Owens,  Trends in the International Fight Against Bribery  .

Mark Pyman, Regina Wilson, Dominic Scott, The Extent of Single Sourcing in Defense Procurement and its Relevance as a Corruption Risk: A First Look, 20.3 Defense and Peace Economics 215 (2009).

Michael N. Tennison, Jonathan D. Moreno,  Neuroscience, Ethics, and National Security: The State of the Art , 10.3 Plos Biology (2012).

Nancy Hite-Rubin,  A Corruption, Military Procurement and FDI Nexus? , in Greed, Corruption, and the Modern State: Essays in Political Economy (Susan Rose-Ackerman and Paul Lagunes eds., forthcoming).

Peter Hayes,  Corporate Freedom of Action in Nazi Germany , Bulletin of the German Historical Institute 29 (2008).

Philip Brey,  Anticipatory Ethics for Emerging Technologies , 6 Nanoethics (2012).

Philip M. Nichols,  The Business Case for Complying with Bribery Laws , 49.2 Am. Bus. L.J. 325 (2012).

Robert Latiff,  Ethical Issues in Defense Systems Acquisition  , in Routledge Handbook of Military Ethics (George Lucas ed., 2015).

Robert Rhoads,  The U.S. University as a Global Model: Some Fundamental Problems to Consider , 7.2 InterActions: UCLA Journal of Education and Information Studies (2011). 

Ryan Jay Lambrecht,  The Big Payback: How Corruption Taints Offset Agreements in International Defense Trade  (2012).

Steven L. Schooner,  Desiderata: Objectives for a System of Government Contract Law , 11 Public Procurement Law Review 103 (2002).

Steven L. Schooner and Nathaniel E. Castellano,  Review Essay: Reading the Dream Machine: The Untold Story of the Notorious V-22 Osprey , 43.3 Public Contract Law Journal 391 (2014).

Tim Wilson,  A Review of Business-University Collaboration, Department for Business , Innovation and Skills (2012).

Tim Shorrock,  Blackwater: One of the Pentagon’s Top Contractors for Afghanistan Training , The Nation, Mar. 31, 2015.

Transparency International,  Building Integrity and Countering Corruption in Defense and Security: 20 Practical Reforms  (2011).

Transparency International,  Codes of Conduct in Defense Ministries and Armed Forces: What Makes a Good Code of Conduct?  (2011).

Transparency International, Defense Offsets: Addressing the Risks of Corruption & Raising Transparency (2010).

Transparency International,  Organized Crime, Corruption, and the Vulnerability of Defense and Security Forces  (2011).

Required Readings

Session 1:  Fiduciary Duties and Moral Obligations: Addressing Corruption in a Multicultural Environment

Philip M. Nichols,  To Whom Does a Defense Business Owe a Duty When There is an Opportunity to Pay a Bribe  Abstract | Paper

Nancy Hite-Rubin,  A Corruption, Military Procurement and FDI Nexus?, in  Greed, Corruption, and the Modern State: Essays in Political Economy (Susan Rose-Ackerman and Paul Lagunes eds., forthcoming).

Jessica Tillipman and Vijaya Surampudi,  The Compliance Mentor-Protege Program: Improving Compliance in Small to Mid-Sized Contractors  Abstract | Paper

Christopher Yukins, Mandatory Disclosure: A Case Study in How Anti-Corruption Measures Can Affect Competition in Defense Markets Abstract | Paper

Transparency International, Report,  Building Integrity and Countering Corruption in Defense and Security: 20 Practical Reforms  (2011). Excerpt 

Session 2:  Assessing Legal Standards in the Defense Industry from an Ethical Perspective

Robert Latiff,  Ethical Issues in Defense Systems Acquisition, in  Routledge Handbook of Military Ethics (George Lucas ed., 2015). 

Duncan MacIntosh,  The Sniper and the Psychopath: a Parable in Defense of the Weapons Industry.  Abstract | Paper

Kevin Govern,  Procurement Integrity  Abstract | Paper

Charlie Cray, Lee Drutman,  Corporations and the Public Purpose: Restoring the Balance , 4.1 Seattle J. for Social Justice (2005).

Session 3:   Ethical Dilemmas in Expertise and New Technologies

Michael Davis,  Ethical Issues in the Global Arms Industry: A Role for Engineers  Abstract | Paper

Patricia H. Werhane,  Silo Mentality and Its Ethical Challenges in the Defense Industry [and elsewhere in all organizations]  Abstract | Paper

Philip Brey,  Anticipatory Ethics for Emerging Technologies , 6 Nanoethics (2012). Excerpt

Session 4:  Should Universities Partner with the Defense Industry?

Association of American Universities ,  National Defense Education and Initiative: Meeting America’s Security Challenges in the 21st Century  (2006). Excerpt

Joshua Newberg and Richard Dunn,  Keeping Secrets in the Campus Lab: Law, Values, and Rules of Engagement for Industry-University R&D Partnerships  (2002). Excerpt

Henry A. Giroux,  The Militarization of US Higher Education After 9/11 , 25.5 Theory, Culture, & Society (2008). Excerpt

For any questions regarding the conference or registration, please contact: Jennifer Cohen at  [email protected]

    Harned will present findings from the Ethics Research Center's (ERC) longitudinal study of the industry through the Defense Industry Benchmark (DIB), a project of the Defense Industry Initiative (DII). 1:00 - 2:15 pm. Session 3Ethical Dilemmas in New TechnologiesModerator: Professor George R. Lucas Jr., University of Notre Dame.