Catastrophic Forgetting in 1D

What is the best way to demonstrate catastrophic forgetting in 1D? Is 1D an oversimplification of a phenomenon seen in billion-parameter neural networks? I am not sure. But here's what I came up with:

Computational Playground
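As one minimal sketch of the idea (the tasks, network size, and learning rate here are illustrative choices, not the post's exact setup): train a tiny MLP on a wave defined on the left half of the interval (task A), then train it only on a different wave on the right half (task B), and measure how the task-A error changes.

```python
# 1-D catastrophic forgetting demo: fit task A, then train only on task B,
# and watch the task-A error climb. Pure numpy; all choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
H = 32  # hidden units
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.3, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # (N, H) hidden activations
    return h, h @ W2 + b2      # (N, 1) predictions

def mse(x, y):
    _, p = forward(x)
    return float(np.mean((p - y) ** 2))

def train(x, y, steps=3000, lr=0.05):
    """Full-batch gradient descent on MSE, updating the global weights."""
    global W1, b1, W2, b2
    n = len(x)
    for _ in range(steps):
        h, p = forward(x)
        err = 2 * (p - y) / n                 # dL/dp
        gW2 = h.T @ err; gb2 = err.sum(0)
        dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
        gW1 = x.T @ dh; gb1 = dh.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

# Task A: a sine wave on [-1, 0].  Task B: a conflicting wave on [0, 1]
# (near x = 0 the two targets disagree, so retraining must overwrite A).
xa = np.linspace(-1, 0, 100)[:, None]; ya = np.sin(np.pi * xa)
xb = np.linspace(0, 1, 100)[:, None];  yb = np.cos(np.pi * xb)

train(xa, ya)
loss_a_before = mse(xa, ya)   # task-A error after learning task A
train(xb, yb)                 # sequential training: task B only
loss_a_after = mse(xa, ya)    # task-A error has degraded ("forgetting")
print(f"task A MSE: {loss_a_before:.4f} -> {loss_a_after:.4f}")
```

Because the tanh units are global basis functions, gradient steps taken to fit task B move the function everywhere, including on task A's interval, which is the whole phenomenon in miniature.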