Continual learning for PyTorch models. Prevent catastrophic forgetting with one line of code.
```bash
pip install clearn-ai
```
When you fine-tune a neural network on new data, it overwrites most of what it learned before. This is catastrophic forgetting. clearn fixes it.
Wrap your existing PyTorch model. Train on sequential tasks. Inspect what was retained. That's it.
```python
import clearn

# Wrap any PyTorch model with a continual learning strategy
model = clearn.wrap(your_model, strategy="ewc")

# Train on sequential tasks — forgetting is handled automatically
model.fit(task1_loader, optimizer, task_id="q1_fraud")
model.fit(task2_loader, optimizer, task_id="q2_fraud")

# Inspect what was retained
print(model.diff())
```
The retention report shows exactly what your model remembers after each task. No other continual learning library ships this.
```text
RetentionReport
├── q1_fraud: 94.2% retained (-5.8%)
├── q2_fraud: 100.0% (current task)
├── plasticity: 0.87
├── stability: 0.94
└── recommendation: "stable — no action needed"
```
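The per-task percentages are accuracy-based. A minimal sketch of how such a retention figure can be computed, in plain Python (this is illustrative, not clearn's internals, and the helper name is invented):

```python
def retention_pct(acc_now, acc_at_task_end):
    """Share of a task's original accuracy the model still achieves.

    acc_now: accuracy on the old task after later training
    acc_at_task_end: accuracy measured right after that task finished
    """
    return 100.0 * acc_now / acc_at_task_end

# A task that ended at 90% accuracy and now scores 84.8%
# retains roughly 94.2% of its original performance.
print(retention_pct(0.848, 0.90))
```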
Every strategy is a one-line swap. Start with EWC, scale to DER++, or combine with LoRA for LLMs.
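For reference, the idea behind EWC: a quadratic penalty anchors weights that mattered for earlier tasks (weighted by an estimate of the Fisher information) while leaving unimportant weights free to adapt. A minimal sketch of that penalty term in plain Python, not clearn's internals:

```python
def ewc_penalty(params, old_params, fisher, lam=0.4):
    # Quadratic anchor: weights with high Fisher importance are
    # pulled toward their post-task values; low-importance weights
    # stay plastic and can learn the new task. lam trades off
    # stability (remembering) against plasticity (adapting).
    return lam / 2 * sum(
        f * (p - p_old) ** 2
        for f, p, p_old in zip(fisher, params, old_params)
    )
```

In a training loop, this term is added to the new task's loss before backpropagation.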
ResNet-18 trained on 20 sequential tasks (5 classes each). Task 1 accuracy measured after completing all 20 tasks.
| Method | Task 1 Accuracy (after all 20 tasks) |
|---|---|
| Baseline SGD | ~8% |
| clearn EWC | ~82% |
| clearn DER++ | ~88% |
Split CIFAR-100 · 20 sequential tasks · ResNet-18 · SGD lr=0.01
Load pretrained models directly from the Hub. Train with continual learning. Push back when you're done.
```python
import clearn

# Load a pretrained model with continual learning built in
model = clearn.from_pretrained("bert-base-uncased", strategy="lora-ewc")

# Train on your first task
model.fit(loader, optimizer, task_id="sentiment_v1")

# Push to Hub — checkpoint includes full task history
model.push_to_hub("your-name/my-model")
```