🧠 TripleNet: Simple architecture that beats SOTA in Continual Learning

Problem: Neural networks forget old tasks when learning new ones (catastrophic forgetting).

Solution: Three parallel paths with different activation functions:

• Yang path (ReLU) — fast adaptation
• Yin path (Tanh) — long-term stability
• Tao path (LeakyReLU) — adaptive balance
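A minimal sketch of what such a three-path layer could look like, assuming each path is a separate linear projection and the outputs are simply summed (the post does not specify the combination rule, so the sum and all names here are assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

class TriplePathLayer:
    """Hypothetical sketch of one TripleNet-style layer:
    three parallel linear paths with different activations,
    summed into a single output. The sum-combination and the
    weight layout are assumptions, not taken from the post."""

    def __init__(self, d_in, d_out, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(d_in)
        # One weight matrix per path
        self.W_yang = rng.normal(0.0, scale, (d_in, d_out))  # ReLU path
        self.W_yin = rng.normal(0.0, scale, (d_in, d_out))   # Tanh path
        self.W_tao = rng.normal(0.0, scale, (d_in, d_out))   # LeakyReLU path

    def forward(self, x):
        yang = relu(x @ self.W_yang)        # fast adaptation
        yin = np.tanh(x @ self.W_yin)       # long-term stability
        tao = leaky_relu(x @ self.W_tao)    # adaptive balance
        return yang + yin + tao

layer = TriplePathLayer(784, 256)
out = layer.forward(np.zeros((1, 784)))    # out.shape == (1, 256)
```

The intuition would be that the bounded Tanh path changes slowly under gradient updates (protecting old tasks), while the unbounded ReLU path adapts quickly to new ones.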

Results on Split MNIST (5 tasks, 1 epoch per task):
• MLP: 49.2%
• EWC: 61%
• Hard ASH (SOTA 2024): 78.3%
• TripleNet: 79.58% 🏆

#TripleNet