Catastrophic Forgetting
Catastrophic forgetting is a phenomenon observed in artificial neural networks in which a network trained on one task drastically loses performance on that task after being trained on a different, new task. Essentially, the network 'forgets' the information related to the first task in the process of learning the second one.
How Catastrophic Forgetting Works
For example, consider an artificial neural network trained to identify images of cars. After the model achieves good performance on this task, if we now train the same model to identify trees, it may start to 'forget' the features associated with cars. This happens because neural networks adjust their weights and biases to minimize the error on the current task, which overwrites or interferes with the knowledge encoded for the previous task.
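The overwriting effect can be reproduced in a deliberately tiny setting. The sketch below (an illustrative toy, not a real vision model) fits a single weight to "task A" (slope +2), then continues gradient descent on "task B" (slope -2); because both tasks share the same parameter, optimizing for B destroys the solution for A:

```python
import numpy as np

def mse(w, X, y):
    return float(np.mean((X * w - y) ** 2))

def train(w, X, y, lr=0.1, steps=200):
    # Plain gradient descent on MSE; every step overwrites w
    # with whatever reduces error on the *current* task.
    for _ in range(steps):
        w -= lr * np.mean(2 * (X * w - y) * X)
    return w

X = np.linspace(-1.0, 1.0, 50)
y_a = 2.0 * X    # task A: fit slope +2
y_b = -2.0 * X   # task B: fit slope -2

w = train(0.0, X, y_a)
loss_a_before = mse(w, X, y_a)   # near zero: task A is learned

w = train(w, X, y_b)             # continue training on task B only
loss_a_after = mse(w, X, y_a)    # task A error explodes: forgetting

print(f"task-A loss before: {loss_a_before:.4f}, after: {loss_a_after:.4f}")
```

The same dynamic plays out in deep networks: without any mechanism to protect parameters that matter for earlier tasks, sequential training simply moves the weights to wherever the newest loss surface points.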
This issue is a significant obstacle to the development of continuous learning systems, which need to adapt to new tasks while preserving performance on previous tasks. Various solutions to combat catastrophic forgetting have been proposed, including methods like "elastic weight consolidation," which selectively slows down learning on the important weights for previous tasks, or "progressive neural networks," which retain the learned features of previous tasks and add new capacity for new tasks.