Space Catastrophe!

By Amber Joneleit, Purdue University

Imagine that you’re an astronaut out on a spacewalk, partnered with an artificial intelligence (AI) system instead of another crew member. Just like in your favorite sci-fi stories, future AI teammates could be vital to the success of space missions. But what would happen if your AI partner suddenly forgot how to regulate life support systems after being updated to handle a new crisis? That would be catastrophic. Catastrophic interference is the tendency of a neural network to abruptly and drastically forget some of what it has previously learned when it learns new information. So, how can we prevent neural networks from forgetting important information they’ve already learned?

In artificial neural networks, knowledge is stored in the architecture and in the weights connecting the computational nodes. Neural networks learn through backpropagation, adjusting each weight according to its contribution to the network’s output error. Take a network that has already learned something and train it on a new set of data, and it will adapt to the new data, potentially losing everything it previously learned. Just imagine this happening to you on a spacewalk: after inspecting a damaged solar panel to diagnose an issue, you forget how to maneuver back to the airlock. You would never get anything done without relearning each specific task, which would limit your usefulness and put you in perpetual danger.
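To make this concrete, here is a minimal sketch in Python with NumPy (the toy tasks, sizes, and learning rate are illustrative assumptions, not any particular mission system): a tiny one-layer network learns one set of input-to-output associations, is then trained on a second set, and is tested again on the first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task A and task B: two sets of random input -> target associations.
X_a, Y_a = rng.standard_normal((8, 10)), rng.standard_normal((8, 5))
X_b, Y_b = rng.standard_normal((8, 10)), rng.standard_normal((8, 5))

def train(X, Y, W, epochs=5000, lr=0.02):
    """Error-driven learning: nudge each weight to reduce output error."""
    for _ in range(epochs):
        error = X @ W - Y          # how wrong the network is right now
        W = W - lr * X.T @ error   # gradient step on the weights
    return W

def task_error(X, Y, W):
    return float(np.mean((X @ W - Y) ** 2))

W = train(X_a, Y_a, np.zeros((10, 5)))
print("Task A error after learning A:", task_error(X_a, Y_a, W))  # near zero

W = train(X_b, Y_b, W)  # now train the SAME weights on task B
print("Task A error after learning B:", task_error(X_a, Y_a, W))  # jumps up
```

The second print shows the error on task A jumping back up: the very weights that stored task A were repurposed to fit task B.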

Astronauts, thankfully, don’t have this problem. As humans, our brains are built to use complementary conscious and nonconscious learning systems, encoding new memories through an interaction between the medial temporal lobe (MTL) and the neocortex. Conscious learning and memory leverage the MTL to encode and recall unique episodes after experiencing them just once! The spacewalk you imagined at the beginning of this essay is an example of the conscious learning and memory system at work.

The conscious learning system can encode memories quickly because it uses sparse activation patterns, which allow even similar memories to be encoded distinctly and in vivid detail. For example, before astronauts go on a spacewalk, they go through intense training in which they are completely submerged in water to simulate a low-gravity environment while performing their procedure. That first training session is quite memorable: they will likely remember specific, vivid details about the experience and how they felt, even after completing the same task many times since. This contrasts with the dense activation patterns of the neocortex, which recruit many overlapping nodes to create a large web of interconnected general knowledge. These neocortical memories take a large number of training repetitions to solidify. Conscious, episodic memories formed in the MTL are consolidated through repeated reactivation, interleaved with other generalized memories in the neocortex to form this interconnected knowledge web. After many, many repetitions of spacewalk training, the mission procedure becomes second nature. Astronauts don’t have to think back to a specific training instance to perform the procedure; they know it intuitively. This is because the task procedure has solidified in nonconscious memory, where it can resist catastrophic interference.
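To see how inhibition helps, here is a small illustration in Python with NumPy (the unit counts, noise level, and the k-winners-take-all rule are illustrative assumptions, not a model of the MTL): two similar input patterns are encoded under weak and strong inhibition, and we measure how much their binary codes overlap.

```python
import numpy as np

rng = np.random.default_rng(1)

def kwta(x, k):
    """k-winners-take-all: inhibition silences all but the k strongest units."""
    code = np.zeros_like(x)
    code[np.argsort(x)[-k:]] = 1.0
    return code

def overlap(a, b):
    """Fraction of active units that two binary codes share."""
    return float(np.sum(a * b) / np.sum(a))

# Two similar experiences: the second is a noisy copy of the first.
x1 = rng.standard_normal(1000)
x2 = x1 + 0.5 * rng.standard_normal(1000)

print("dense code (500 winners):", overlap(kwta(x1, 500), kwta(x2, 500)))  # high
print("sparse code (20 winners):", overlap(kwta(x1, 20), kwta(x2, 20)))    # lower
```

Under strong inhibition only the most strongly driven units fire, so the two similar experiences end up recruiting noticeably different sets of units, which lets them be stored as distinct memories.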

At the Air Force Research Laboratory’s Autonomy Capability Team 3, we are exploring this concept with the Local, Error-driven and Associative, Biologically Realistic Algorithm (LEABRA), an AI model that incorporates conscious and nonconscious memory mechanisms to reduce catastrophic interference when training neural networks. Artificial neural networks generally use dense, overlapping activation patterns similar to the neocortex’s, which makes them more susceptible to catastrophic interference: the same nodes that hold learned information must be altered to learn new information. The neocortex isn’t as susceptible to memory interference because it works together with the conscious memory system in the MTL to encode and retain memories. Complementary learning systems can effectively encode and remember new information: conscious learning is critical for encoding new memories quickly without interference, while nonconscious learning is critical for building a general statistical model of the world. This process of encoding episodes and consolidating them into general knowledge is what lets humans retain previously learned information. Using these mechanisms, neural networks are more likely to create new representations for new memories rather than overwriting old ones.
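As a cartoon of this complementary-systems idea (a toy sketch only, not the LEABRA architecture; the class names, sizes, and learning rate are invented for illustration), we can pair a fast episodic store that records each episode in one shot with a slow network that needs many repetitions:

```python
import numpy as np

class FastEpisodicStore:
    """MTL-like memory: records an episode once, recalls it by similarity."""
    def __init__(self):
        self.keys, self.values = [], []

    def encode(self, x, y):              # one-shot learning
        self.keys.append(x)
        self.values.append(y)

    def recall(self, x):                 # the most similar stored episode wins
        sims = [float(k @ x) for k in self.keys]
        return self.values[int(np.argmax(sims))]

class SlowCortexNet:
    """Neocortex-like network: dense weights that change a little per exposure."""
    def __init__(self, n_in, n_out, lr=0.01):
        self.W = np.zeros((n_in, n_out))
        self.lr = lr

    def train_step(self, x, y):          # small error-driven update
        self.W += self.lr * np.outer(x, y - x @ self.W)

    def predict(self, x):
        return x @ self.W

rng = np.random.default_rng(2)
x, y = rng.standard_normal(10), rng.standard_normal(5)

mtl, cortex = FastEpisodicStore(), SlowCortexNet(10, 5)
mtl.encode(x, y)                         # remembered after a single exposure
for _ in range(500):                     # consolidation: many replayed repetitions
    cortex.train_step(x, y)

print("one-shot recall correct:", np.allclose(mtl.recall(x), y))        # True
print("slow net error:", float(np.mean((cortex.predict(x) - y) ** 2)))  # small
```

The store answers correctly after a single exposure, while repeatedly replaying its episodes into the slow network gradually weaves them into the dense weights, mirroring consolidation.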

The key to reducing catastrophic interference in neural networks may not be to create artificial intelligence, but rather artificial consciousness, by mimicking brain functions. Artificial neural networks can be altered to reduce catastrophic interference using the same concepts that conscious learning and memory systems use, including higher inhibition to create sparse activation patterns and a combination of error-driven and self-organized learning (O’Reilly et al., 2011). Memory interference can also be reduced by changing the way we train neural networks: if new information is interleaved with retrievals of previously learned information, the network can adapt its weights to the new memories while retaining its ability to retrieve old ones, as the sketch below illustrates. Back on our spacewalk, these consciousness mechanisms can reduce catastrophic interference in memory and allow both AI systems and astronauts to manage multiple tasks without forgetting. With this, we can create reliable AI that helps astronauts with whatever they may need.
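Returning to the toy network from earlier (again with illustrative sizes and learning rates, not a specific system), the sketch below contrasts sequential training on task B alone with training that interleaves B’s examples with replayed examples from task A:

```python
import numpy as np

rng = np.random.default_rng(0)
X_a, Y_a = rng.standard_normal((8, 20)), rng.standard_normal((8, 5))
X_b, Y_b = rng.standard_normal((8, 20)), rng.standard_normal((8, 5))

def train(X, Y, W, epochs=5000, lr=0.02):
    for _ in range(epochs):
        W = W - lr * X.T @ (X @ W - Y)   # error-driven weight updates
    return W

def task_error(X, Y, W):
    return float(np.mean((X @ W - Y) ** 2))

W = train(X_a, Y_a, np.zeros((20, 5)))               # learn task A first

W_seq = train(X_b, Y_b, W.copy())                    # B alone: A is overwritten
W_mix = train(np.vstack([X_a, X_b]),                 # B interleaved with
              np.vstack([Y_a, Y_b]), W.copy())       # replayed A examples

print("Task A error, sequential :", task_error(X_a, Y_a, W_seq))  # rises sharply
print("Task A error, interleaved:", task_error(X_a, Y_a, W_mix))  # stays near zero
```

Interleaving works because every weight update is pulled toward a solution that satisfies both tasks at once, so the old associations are refreshed rather than overwritten.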

References

O’Reilly, R. C., Bhattacharyya, R., Howard, M. D., & Ketz, N. (2011). Complementary learning systems. Cognitive Science, 38(6), 1229–1248. https://doi.org/10.1111/j.1551-6709.2011.01214.x

* The views, opinions, and/or findings contained in this presentation are those of the author and should not be interpreted as representing the official views, position, or policies, either expressed or implied, of the United States Government, the Department of Defense, the United States Air Force, or the United States Space Force. The intent of this academic seminar is to discuss publicly available science with thought leaders in the field.