Offline periods during training mitigated ‘catastrophic forgetting’ in artificial neural networks

Depending on age, humans need 7 to 13 hours of sleep every 24 hours. During this time, a lot happens: heart rate, breathing, and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so the brain.

“The brain is very busy when we sleep, repeating what we’ve learned throughout the day,” said Maxim Bazhenov, PhD, professor of medicine and sleep researcher at the University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

In previously published work, Bazhenov and colleagues reported how sleep builds rational memory, the ability to recall arbitrary or indirect associations between objects, people, or events, and protects against forgetting old memories.

Artificial neural networks take advantage of the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: When artificial neural networks learn sequentially, new information overwrites old information, a phenomenon called catastrophic forgetting.

“In contrast, the human brain continually learns and incorporates new data into existing knowledge,” Bazhenov said, “and typically learns best when new training is interspersed with periods of sleep for memory consolidation.”

Writing in the November 18, 2022 issue of PLOS Computational Biology, lead author Bazhenov and colleagues discuss how biological models can help mitigate the threat of catastrophic forgetting in artificial neural networks, increasing their utility across a spectrum of research interests.

The scientists used spiking neural networks that artificially mimic natural neural systems: instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain points in time.
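To illustrate what transmitting information as discrete spike events means in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, a common abstraction in spiking network models. The model choice and parameter values are illustrative assumptions, not details taken from the study.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# decays toward rest, integrates incoming current, and emits a discrete
# spike (an event at a point in time) whenever it crosses a threshold.
# Parameters are illustrative, not taken from the paper.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    v = v_rest
    spike_times = []
    for t, i_t in enumerate(input_current):
        # Leak toward rest, plus integration of the input current
        v += dt / tau * (v_rest - v) + i_t
        if v >= v_threshold:           # threshold crossing -> spike event
            spike_times.append(t)      # information is carried by spike times
            v = v_reset                # reset after the spike
    return spike_times

# Example: a constant drive produces a regular train of discrete spikes
print(simulate_lif(np.full(100, 0.12)))
```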

They found that when the spiking networks were trained on a new task, but with occasional offline periods mimicking sleep, catastrophic forgetting was mitigated. Like the human brain, the study authors said, “sleeping” allowed the networks to replay old memories without explicitly using old training data.
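As a rough, hypothetical illustration of interleaving task training with offline “sleep” periods (a schematic of the general idea, not the authors’ algorithm), a training loop might alternate ordinary task updates with noise-driven phases in which weights change only through a local, Hebbian-style rule, so that previously strengthened pathways can reactivate without any task data:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(100, 100))   # toy recurrent weights

def awake_training_step(weights, batch):
    """Placeholder for ordinary supervised training on the new task."""
    pass

def sleep_phase(weights, steps=200, lr=1e-3):
    """Noise-driven replay with a local Hebbian update (illustrative only)."""
    for _ in range(steps):
        drive = rng.poisson(0.05, size=weights.shape[0])   # random spiking input
        activity = (weights @ drive > 1.0).astype(float)   # crude spiking response
        # Hebbian co-activation: strengthen synapses between co-active units
        weights = np.clip(weights + lr * np.outer(activity, drive), -1.0, 1.0)
    return weights

# Alternate new-task training with offline sleep phases
for batch in range(3):                     # stand-in for new-task batches
    awake_training_step(weights, batch)    # would update weights on the task
    weights = sleep_phase(weights)
```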

Memories are represented in the human brain by patterns of synaptic weight: the strength or amplitude of the connection between two neurons.

“When we learn new information,” Bazhenov said, “neurons fire in a specific order and this strengthens the synapses between them. During sleep, the spike patterns learned during our waking state repeat themselves spontaneously. It’s called reactivation or replay.”

“Synaptic plasticity, the capacity to be altered or shaped, is still present during sleep and can further enhance the synaptic weight patterns that represent a memory, helping to prevent forgetting or to enable transfer of knowledge from old tasks to new ones.”
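One simple way to formalize “neurons fire in a specific order and the synapses between them strengthen” is pair-based spike-timing-dependent plasticity (STDP). The constants below are assumptions chosen for illustration and may differ from the plasticity rule actually used in the study.

```python
import numpy as np

# Illustrative pair-based STDP: if a presynaptic spike precedes a
# postsynaptic spike (pre -> post), the synapse is strengthened; the
# reverse order weakens it. Constants are arbitrary toy values.
A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0

def stdp_delta_w(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: potentiation
        return A_PLUS * np.exp(-dt / TAU)
    else:         # post fired before (or with) pre: depression
        return -A_MINUS * np.exp(dt / TAU)

# Replaying the same ordered spike pattern (pre at t=10, post at t=15)
# repeatedly nudges the weight upward, mimicking consolidation:
w = 0.5
for _ in range(50):
    w += stdp_delta_w(t_pre=10.0, t_post=15.0)
print(round(w, 3))
```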

When Bazhenov and his colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting.

“It meant that these networks could learn continuously, just like humans or animals. Understanding how the human brain processes information during sleep may help boost memory in human subjects. Enhancing sleep rhythms may lead to better memory.”

“In other projects, we use computer models to develop optimal strategies for delivering stimulation during sleep, such as auditory tones, to improve sleep rhythms and enhance learning. This may be particularly important when memory is suboptimal, such as when memory declines with aging or in conditions such as Alzheimer’s disease.”

Co-authors include: Ryan Golden and Jean Erik Delanois, both from UC San Diego; and Pavel Sanda, Institute of Computer Science of the Czech Academy of Sciences.

