Conscious Ethical Warplanes

By Deshna Naruka, University of Florida

I’ve learned that you may often receive questions for which you have no answer. During a particularly charged Q&A session, in the midst of an advocacy campaign as an International Humanitarian Law (IHL) advocate with the American Red Cross, I was asked what the rules of warfare are regarding the use of Artificial Intelligence (AI). How can one distinguish between a civilian and a combatant, and trust autonomous weapons to do the same?

As autonomous air warfare technology becomes a reality, exemplified by the X-62A Variable In-flight Simulator Test Aircraft (VISTA) flown and tested in recent years (Eddins, 2024), the ability of AI systems to adhere to the principles of distinction and proportionality is crucial to upholding international humanitarian and ethical standards. Through my research with AI models such as LEABRA (Local, Error-driven and Associative, Biologically Realistic Algorithm), a neural-network learning algorithm that mirrors aspects of human cognition, I seek to understand how such technology can use pattern recognition to mimic functions of the human brain. Models such as LEABRA help AI both process and interpret complex environments in order to make decisions, in theory much as a human would.

LEABRA uses invariant object recognition, a concept from psychology and neuroscience in which an object's representation moves up through a visual hierarchy and becomes increasingly distinct, regardless of variations in its location, size, or angle. Through invariant object recognition, AI models can mimic this aspect of human perception. Tools such as convolutional neural networks (CNNs) help AI models recognize patterns and features in data in a way that resembles hierarchical human perception. LEABRA can thus engage in many stages of human information processing; however, there are limitations. Because models like LEABRA lack phenomenal consciousness, they are not self-aware and have no experience of the world itself, so they cannot perfectly replicate human information processing. What are the implications, for the ethical employment of AI, of humans and machines perceiving differently?
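The role convolution and pooling play in position-invariant recognition can be illustrated with a minimal sketch. This is my own toy example in Python with NumPy, not code from LEABRA or any CNN library: a single hand-set edge-detector filter, followed by a global max-pool, responds identically to a feature wherever it appears in the image.

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" cross-correlation: slide the kernel over the image
    # and record the weighted sum at each position (a feature map).
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def global_max_pool(feature_map):
    # Keep only the strongest response, discarding *where* it occurred.
    return feature_map.max()

# A hand-set vertical-edge detector (in a trained CNN this would be learned).
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

def make_image(col):
    # 8x8 image containing a bright vertical bar at column `col`.
    img = np.zeros((8, 8))
    img[:, col] = 1.0
    return img

# The same bar at two different positions...
r_left = global_max_pool(conv2d(make_image(1), kernel))
r_right = global_max_pool(conv2d(make_image(5), kernel))
# ...produces the same pooled response: the feature is recognized
# regardless of its location, a simple form of invariance.
```

Stacking such filter-and-pool stages is what gives a visual hierarchy its tolerance to shifts in location, and, with more layers, to changes in size and angle as well.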

My journey as an IHL advocate with the American Red Cross has given me profound awareness of the moral and ethical importance of this topic, especially with regard to means of warfare. Efforts to connect these two fields of ethics and technology are underway: the military use of AI is to be “in accord with States’ obligations under international humanitarian law, including its fundamental principles” (US Department of State, 2023). Joining my research on LEABRA with my advocacy for IHL reflects a broader vision for the future of AI: one that balances technological innovation with humanitarian responsibility.

One IHL principle, proportionality, forbids attacks on military objectives when the expected harm to civilians and civilian objects is excessive in relation to the anticipated military advantage. In a past discussion with fellow IHL advocates, I spoke about how vague the guidelines for this principle can be. Essentially, it is a judgment call: the final outcome lies with the one holding the trigger. When it comes to autonomous weapons and AI, that choice suddenly seems even more ambiguous. An autonomous weapon’s ability to perceive the fairest and most proportional choice will matter greatly in a future conflict. AI has powerful potential to mitigate human suffering through enhanced precision and decision-making in conflict zones. However, that potential can only be realized by continuing to design AI systems that are aligned with human thinking, understanding, and respect for the rules of warfare.

Amid rapid technological development, my dual focus on advanced AI models and humanitarian law deepens my understanding of how to balance AI use with ethical principles in times of conflict. Only then can we ensure that AI technologies serve not to control humanity but to enhance its well-being, and to further technological advancement for our societies.


Eddins, J. M. (2024, May 20). The United States Air Force’s focus on AI research and development. Airman Magazine.

US Department of State Bureau of Arms Control, Verification and Compliance. (2023). Political declaration on responsible military use of artificial intelligence and autonomy.

* The views, opinions, and/or findings contained in this presentation are those of the author and should not be interpreted as representing the official views, position, or policies, either expressed or implied, of the United States Government, the Department of Defense, the United States Air Force or the United States Space Force. The intent of this academic seminar is to discuss publicly available science with thought leaders in the field.