DREAM Documentation

Dynamic Recall and Elastic Adaptive Memory

A PyTorch implementation of continuous-time RNN cells with surprise-driven plasticity and liquid time-constants for adaptive neural dynamics.

Overview

DREAM (Dynamic Recall and Elastic Adaptive Memory) is a novel recurrent neural network cell that combines several advanced mechanisms for adaptive sequence processing:

  • Surprise-Driven Plasticity: Local learning rates adapt based on a "surprise" signal derived from prediction error
  • Liquid Time-Constants (LTC): Integration speeds change dynamically based on input novelty
  • Fast Weights: Low-rank weight decomposition enables efficient meta-learning
  • Sleep Consolidation: Memory stabilization during low-surprise periods
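The first three mechanisms can be sketched in a few lines of PyTorch. This is a minimal illustration of the ideas, not the DREAM source: the update rules, the predictor, and every constant below are assumptions chosen for clarity.

```python
import torch

torch.manual_seed(0)

# Illustrative single step of a continuous-time cell whose integration speed
# and local plasticity are modulated by "surprise" (prediction error magnitude).
hidden_dim, input_dim = 16, 8
W_in = torch.randn(hidden_dim, input_dim) * 0.1    # input weights
W_pred = torch.randn(input_dim, hidden_dim) * 0.1  # hypothetical next-input predictor
tau_base, dt, eta = 1.0, 0.1, 0.01

h = torch.zeros(hidden_dim)
x = torch.randn(input_dim)

# 1) Surprise: how badly the cell predicted the incoming input.
surprise = torch.norm(x - W_pred @ h)

# 2) Liquid time constant: high surprise shrinks tau -> faster integration.
tau = tau_base / (1.0 + surprise)

# 3) Continuous-time (Euler) state update using the adapted time constant.
h = h + dt * (-h + torch.tanh(W_in @ x)) / tau

# 4) Surprise-gated local plasticity: per-synapse outer-product update,
#    so each weight gets its own effective learning rate.
W_in = W_in + eta * surprise * torch.outer(h, x)
```

Because the plasticity term in step 4 is scaled per synapse by the current state and input, weight changes are local rather than driven by a single global optimizer step.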

Why DREAM?

Traditional RNNs (LSTM, GRU) have fixed dynamics once trained. DREAM introduces continuous adaptation during both training and inference:

| Feature                     | LSTM/GRU            | DREAM                 |
|-----------------------------|---------------------|-----------------------|
| Time constants              | Fixed               | Adaptive (LTC)        |
| Learning rate               | Global (optimizer)  | Local (per synapse)   |
| Memory update               | Always same speed   | Surprise-modulated    |
| State representation        | Hidden only         | Hidden + Fast Weights |
| Adaptation during inference | ❌ No               | ✅ Yes                |
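The "Hidden + Fast Weights" row can be sketched as a low-rank fast component layered on top of slow, optimizer-trained weights. The Hebbian-style write through a fixed random projection below is an illustrative assumption, not DREAM's actual rule; the point is that the fast state costs only d·r extra numbers instead of d².

```python
import torch

torch.manual_seed(0)

d, r = 16, 4
W_slow = torch.randn(d, d) * 0.1   # slow recurrent weights (trained by the optimizer)
V = torch.randn(r, d) * 0.3        # fixed read/write keys (hypothetical choice)
U = torch.zeros(d, r)              # fast factor: only d*r values to update online

h_prev = torch.randn(d)
h = torch.randn(d)

# Fast write: bind the new state to the (projected) previous state,
# with decay so old associations fade.
eta, decay = 0.5, 0.9
U = decay * U + eta * torch.outer(h, V @ h_prev)

# Effective recurrent weight seen by the cell; the fast part has rank <= r.
W_eff = W_slow + U @ V
```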

Use Cases

DREAM excels at:

  • Online learning: Continuously adapting to new patterns
  • Non-stationary sequences: Data distributions that change over time
  • Few-shot learning: Rapid adaptation to new tasks
  • Memory-intensive tasks: Long-term dependency modeling
  • ASR/Speech: Acoustic patterns with temporal dynamics
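The online-learning and non-stationary use cases come down to one behavior: update faster when predictions are badly wrong. The toy tracker below (purely illustrative, not DREAM) boosts its step size by the surprise signal, so it re-locks quickly after an abrupt distribution shift.

```python
import torch

torch.manual_seed(0)

def track(stream, base_lr=0.05):
    """Track a scalar stream with a surprise-modulated step size (toy sketch)."""
    m = torch.zeros(())
    for x in stream:
        surprise = (x - m).abs()
        lr = torch.clamp(base_lr * (1.0 + surprise), max=1.0)  # boost on surprise
        m = m + lr * (x - m)
    return m

# Stationary segment around 0, then an abrupt shift to mean 5.
stream = torch.cat([torch.randn(200) * 0.1, 5 + torch.randn(50) * 0.1])
m = track(stream)
```

A fixed-rate tracker with the same `base_lr` would still be far from the new mean after 50 post-shift steps; the surprise boost closes the gap almost immediately.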

Quick Example

import torch
from dream import DREAM

# Create model
model = DREAM(
    input_dim=64,
    hidden_dim=128,
    rank=8,
    ltc_enabled=True
)

# Process sequence
x = torch.randn(32, 50, 64)  # (batch, time, features)
output, state = model(x)

print(f"Output shape: {output.shape}")  # (32, 50, 128)

Version Information

  • Version: 0.1.2
  • Last Updated: 2026
  • Maintained by: Manifestro Team
  • License: MIT
