Getting Started

Get up and running with DREAM in minutes.
Installation

Basic Installation

```bash
pip install dream-nn
```

Requirements:
- Python 3.10+
- PyTorch 2.0+
- NumPy 1.24+
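Before installing, you can confirm your environment meets these floors. A quick standard-library check (illustrative helper, not part of DREAM; it only inspects packages that are already importable):

```python
import sys

# DREAM requires Python 3.10+
ok = sys.version_info[:2] >= (3, 10)
print("Python", sys.version.split()[0], "OK" if ok else "too old (need 3.10+)")

# PyTorch 2.0+ and NumPy 1.24+, checked only if already installed
for name, floor in (("torch", (2, 0)), ("numpy", (1, 24))):
    try:
        mod = __import__(name)
    except ImportError:
        print(f"{name}: not installed yet")
        continue
    have = tuple(int(p) for p in mod.__version__.split(".")[:2])
    print(f"{name} {mod.__version__}:", "OK" if have >= floor else "too old")
```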
Development Installation

```bash
# Clone the repository
git clone https://github.com/karl4th/dream-nn.git
cd dream-nn

# Install with dev dependencies
pip install -e ".[dev]"

# Install with audio dependencies
pip install -e ".[audio]"
```

Basic Usage
Single Timestep

```python
import torch
from dream import DREAMConfig, DREAMCell

# Configure
config = DREAMConfig(
    input_dim=80,      # Input feature dimension
    hidden_dim=256,    # Hidden state size
    rank=16,           # Fast weights rank
    ltc_enabled=True,  # Enable liquid time-constants
)

# Create cell
cell = DREAMCell(config)

# Initialize state
batch_size = 4
state = cell.init_state(batch_size)

# Process a single timestep
x = torch.randn(batch_size, config.input_dim)
h_new, state_new = cell(x, state)
print(f"Hidden state shape: {h_new.shape}")  # (4, 256)
```

Sequence Processing
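Conceptually, sequence processing just scans the cell over the time axis, carrying the state forward and collecting every hidden output. A minimal NumPy sketch of that scan pattern (a toy tanh cell, illustrative only, not DREAM's dynamics):

```python
import numpy as np

def toy_cell(x, h, W_x, W_h):
    # One recurrent step: new hidden state from input and previous state
    return np.tanh(x @ W_x + h @ W_h)

def scan_sequence(seq, h, W_x, W_h):
    """Scan the cell over the time axis, collecting every hidden state."""
    outputs = []
    for t in range(seq.shape[1]):            # seq is (batch, time, features)
        h = toy_cell(seq[:, t, :], h, W_x, W_h)
        outputs.append(h)
    return np.stack(outputs, axis=1), h      # (batch, time, hidden), final state

rng = np.random.default_rng(0)
batch, T, in_dim, hid = 4, 100, 80, 256
W_x = rng.standard_normal((in_dim, hid)) * 0.01
W_h = rng.standard_normal((hid, hid)) * 0.01
out, h_T = scan_sequence(rng.standard_normal((batch, T, in_dim)),
                         np.zeros((batch, hid)), W_x, W_h)
print(out.shape)  # (4, 100, 256)
```

DREAM wraps this loop for you in a single `forward_sequence` call: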
```python
# Process a full sequence
sequence = torch.randn(batch_size, 100, config.input_dim)
output, final_state = cell.forward_sequence(sequence, return_all=True)
print(f"Output shape: {output.shape}")  # (4, 100, 256)
```

Stateful Processing
Preserve memory across multiple sequences:

```python
# Initialize state ONCE
state = cell.init_state(batch_size)

# Process multiple sequences
for seq in sequences:
    # State (U, h, adaptive_tau) is preserved between calls
    output, state = cell.forward_sequence(seq, state)
    # The model adapts and remembers!
```

Configuration
DREAMConfig Parameters
| Parameter | Default | Description |
|---|---|---|
| `input_dim` | 39 | Input feature dimension |
| `hidden_dim` | 256 | Hidden state size |
| `rank` | 16 | Fast weights rank |
| `time_step` | 0.1 | Integration time step (dt) |
| `forgetting_rate` | 0.005 | Fast weights decay (λ) |
| `base_plasticity` | 0.5 | Hebbian learning rate (η) |
| `base_threshold` | 0.3 | Surprise threshold (τ₀) |
| `entropy_influence` | 0.1 | Entropy effect on threshold (α) |
| `surprise_temperature` | 0.05 | Surprise scaling (γ) |
| `error_smoothing` | 0.05 | Error EMA (β) |
| `surprise_smoothing` | 0.05 | Surprise EMA (β_s) |
| `target_norm` | 2.0 | Fast weights norm constraint |
| `kappa` | 0.5 | Gain modulation coefficient (κ) |
| `ltc_enabled` | True | Enable liquid time-constants |
| `ltc_tau_sys` | 5.0 | Base LTC time constant |
| `ltc_surprise_scale` | 5.0 | Surprise modulation strength |
| `sleep_rate` | 0.005 | Sleep consolidation rate (ζ_sleep) |
| `min_surprise_for_sleep` | 0.2 | Minimum surprise for sleep (S_min) |
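Several of these parameters govern the fast-weight memory. To make their roles concrete, here is an illustrative NumPy sketch of one decaying, surprise-gated, norm-constrained low-rank Hebbian update, using the table's symbols (toy code, not DREAM's actual update rule):

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim, rank = 8, 4
forgetting_rate = 0.005  # λ: passive decay of fast weights each step
base_plasticity = 0.5    # η: strength of the Hebbian write
target_norm = 2.0        # ceiling on the fast weights' Frobenius norm

U = np.zeros((hidden_dim, rank))     # low-rank fast-weight factor
h = rng.standard_normal(hidden_dim)  # current hidden state
k = rng.standard_normal(rank)        # low-rank key for this step
surprise = 0.8                       # surprise gate in [0, 1]

# Decay existing memory, then write a surprise-gated outer product
U = (1.0 - forgetting_rate) * U + base_plasticity * surprise * np.outer(h, k)

# Project back inside the norm ball so fast weights cannot blow up
norm = np.linalg.norm(U)
if norm > target_norm:
    U *= target_norm / norm
```

High surprise writes strongly; zero surprise leaves only the slow λ decay, which is how the two parameters trade off retention against adaptation.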
Recommended Configurations
For ASR (MFCC 39D)
```python
config = DREAMConfig(
    input_dim=39,  # 13 MFCC + 13 Δ + 13 ΔΔ
    hidden_dim=256,
    rank=16,
    forgetting_rate=0.005,
    base_plasticity=0.5,
)
```

For Audio (Mel Spectrogram 80D)
```python
config = DREAMConfig(
    input_dim=80,  # Mel bins
    hidden_dim=256,
    rank=16,
    ltc_tau_sys=5.0,
    ltc_surprise_scale=5.0,
)
```

For Time Series
```python
config = DREAMConfig(
    input_dim=features_dim,
    hidden_dim=128,
    rank=8,
    ltc_enabled=True,
    ltc_tau_sys=5.0,  # Faster response
)
```

High-Level API
DREAM (LSTM-like interface)

```python
import torch
from dream import DREAM

model = DREAM(
    input_dim=64,
    hidden_dim=128,
    rank=8,
)

# Process a sequence
x = torch.randn(32, 50, 64)  # (batch, time, features)
output, state = model(x, return_sequences=True)
```

DREAMStack (Multi-layer)
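A stack chains layers so that each layer consumes the previous layer's full output sequence. An illustrative NumPy sketch of that wiring for `hidden_dims=[128, 128, 64]` (toy recurrent layers, not DREAM's cells):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_forward(seq, W_x, W_h):
    """Run one toy recurrent layer over a (batch, time, features) sequence."""
    batch, T, _ = seq.shape
    hid = W_h.shape[0]
    h = np.zeros((batch, hid))
    out = np.empty((batch, T, hid))
    for t in range(T):
        h = np.tanh(seq[:, t, :] @ W_x + h @ W_h)
        out[:, t, :] = h
    return out

dims = [64, 128, 128, 64]  # input_dim followed by hidden_dims=[128, 128, 64]
x = rng.standard_normal((32, 50, dims[0]))
for d_in, d_out in zip(dims, dims[1:]):
    W_x = rng.standard_normal((d_in, d_out)) * 0.01
    W_h = rng.standard_normal((d_out, d_out)) * 0.01
    x = layer_forward(x, W_x, W_h)  # layer i's output sequence feeds layer i+1

print(x.shape)  # (32, 50, 64)
```

The final shape follows the last entry of `hidden_dims`; dropout, when enabled, would sit between the layers.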
```python
from dream import DREAMStack

model = DREAMStack(
    input_dim=64,
    hidden_dims=[128, 128, 64],  # 3 layers
    rank=8,
    dropout=0.1,
)
output, states = model(x)
```

Next Steps
- Architecture — Learn how DREAM works internally
- API Reference — Complete API documentation
- Benchmarks — Performance comparison
- Guides — Tutorials and examples