Multi-timescale affective agents with theatrical control
Support this research: Bitcoin donations at 3MVEd1RdvEXQGgo1EdzrVnvTS7pUuTZ2J5
Noodlings are lightweight neural architectures (~97K parameters) that give conversational AI multi-timescale memory, surprise-driven behavior, and appetite-driven motivation. They process experience between messages, creating temporally grounded agents that respond when they have something to say, not just because you spoke.
What they are: Research exploring functional correlates of temporal dynamics in predictive processing architectures.
What they're not: Claims of "real consciousness," AGI, or solutions to the hard problem of consciousness.
We're noodling - exploring whether hierarchical temporal structure creates qualitatively different agent behavior. We're honest about what we're building.
# Clone and install
git clone https://github.com/caitlynmeeks/Noodlings.git
cd Noodlings
pip install -r requirements.txt
# Try noodleMUSH (interactive multi-agent world)
cd applications/cmush
./start.sh
# Open http://localhost:8080 in your browser
Commands:
@rez toad # Rez a Noodling named Toad
say hello! # Talk to Noodlings
@observe toad # View PV
@relationship toad # See how they perceive you
@play sled_boat # Run theatrical script
Noodlings have interiority that functionally resembles experience. We avoid the term "consciousness" (too loaded philosophically).
PV (Phenomenological Vector) = a 40-D phenomenal state vector:
- Fast 16-D: Immediate affective reactions (seconds)
- Medium 16-D: Conversational dynamics (minutes)
- Slow 8-D: Personality model (hours-days)
This is a data structure you can capture, edit, and paste. Not metaphysics - objective, measurable architecture.
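As a concrete illustration, here is a minimal sketch of the PV as a plain data structure (the dataclass and field names are illustrative, not the library's actual API):

# Minimal sketch of a PV as a capturable data structure (illustrative, not the real API)
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PV:
    fast: np.ndarray = field(default_factory=lambda: np.zeros(16))    # immediate affect (seconds)
    medium: np.ndarray = field(default_factory=lambda: np.zeros(16))  # conversational dynamics (minutes)
    slow: np.ndarray = field(default_factory=lambda: np.zeros(8))     # personality model (hours-days)

    def as_vector(self) -> np.ndarray:
        # Concatenate the three timescales into the full 40-D state
        return np.concatenate([self.fast, self.medium, self.slow])

pv = PV()
assert pv.as_vector().shape == (40,)  # 16 + 16 + 8 = 40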
Three interacting layers operating at different speeds:
- Fast Layer (LSTM, 16-D): Immediate affective reactions
- Medium Layer (LSTM, 16-D): Conversational dynamics
- Slow Layer (GRU, 8-D): Personality model
Each layer predicts the next state. Prediction error drives behavior.
Noodlings don't speak on every turn. They predict what will happen next, and only respond when prediction error (surprise) crosses an adaptive threshold determined by their internal state. This creates autonomous behavior - they speak when they have something to say, based on how surprised they are relative to their recent experience.
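A toy sketch of that gating logic (the specific adaptive-threshold rule here, a running mean plus a multiple of the standard deviation of recent surprise, is an assumption for illustration):

# Toy sketch of surprise-gated speaking (threshold rule is illustrative)
import numpy as np

def surprise(predicted: np.ndarray, actual: np.ndarray) -> float:
    # Prediction error as the L2 distance between predicted and actual 40-D states
    return float(np.linalg.norm(predicted - actual))

def should_speak(s: float, recent: list[float], k: float = 1.0) -> bool:
    # Speak only when surprise exceeds a threshold adapted to recent experience
    if not recent:
        return True
    threshold = float(np.mean(recent) + k * np.std(recent))
    return s > threshold

recent_surprises = [0.31, 0.28, 0.40, 0.35]
s = surprise(np.random.rand(40), np.random.rand(40))
if should_speak(s, recent_surprises):
    print(f"surprise {s:.2f} crossed the adaptive threshold, responding")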
Eight core drives shape agent goals:
- Curiosity, Status, Mastery, Novelty
- Safety, Social Bond, Comfort, Autonomy
Goals emerge from appetite states, creating motivated, goal-directed behavior.
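A hedged sketch of how drive levels could map to candidate goals (the drive-to-goal table and threshold below are hypothetical; the real appetite layer is a learned network producing 16 goal types):

# Hypothetical sketch: drives above a threshold propose candidate goals
DRIVE_TO_GOALS = {  # illustrative mapping, not the trained appetite layer
    "curiosity":   ["explore_environment"],
    "status":      ["seek_social_approval"],
    "social_bond": ["initiate_conversation"],
    "safety":      ["withdraw_to_safe_spot"],
}

def propose_goals(drives: dict[str, float], threshold: float = 0.6) -> list[str]:
    # Return goals whose underlying drive is strong enough to act on, strongest first
    goals = []
    for drive, level in sorted(drives.items(), key=lambda kv: -kv[1]):
        if level >= threshold:
            goals.extend(DRIVE_TO_GOALS.get(drive, []))
    return goals

print(propose_goals({"curiosity": 0.82, "social_bond": 0.65, "status": 0.23}))
# ['explore_environment', 'initiate_conversation']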
- Theory of Mind: Inferring internal states of other agents
- Relationship Modeling: Tracking attachment, trust, interaction history
- Episodic Memory: 6-head attention over a memory buffer (sketched below)
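A minimal numpy sketch of a multi-head attention read over a memory buffer, in the spirit of the 6-head episodic memory (the dimensions and single-query read are simplifications; the actual module is noodlings/memory/social_memory.py):

# Simplified 6-head attention read over an episodic memory buffer (numpy sketch)
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, memory, n_heads=6):
    # query: (d,), memory: (n_memories, d) -> one d-dim context read
    d = query.shape[0]
    assert d % n_heads == 0
    dh = d // n_heads
    q = query.reshape(n_heads, dh)                        # split query into heads
    m = memory.reshape(-1, n_heads, dh)                   # (n, heads, dh)
    scores = np.einsum("hd,nhd->hn", q, m) / np.sqrt(dh)  # per-head similarity to each memory
    weights = softmax(scores, axis=-1)                    # attention over memories
    return np.einsum("hn,nhd->hd", weights, m).reshape(d)

memory_buffer = np.random.randn(32, 48)  # 32 stored episodes, 48-D each (sizes are illustrative)
context = attend(np.random.randn(48), memory_buffer)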
BRENDA (Behavioral Regulation Engine for Narrative-Driven Agents) converts natural language into structured theatrical performances with millisecond-precision timing. Narrative events become phenomenal experiences that alter agent trajectories.
See docs/A_NOODLE_IS_ALL_YOU_NEED.md for the full whitepaper.
Input (5-D affect vector: valence, arousal, fear, sorrow, boredom)
↓
┌─────────────────────────┐
│ Fast Layer (LSTM) │ ← Immediate reactions
│ 16-D phenomenal state │
└─────────────────────────┘
↓
┌─────────────────────────┐
│ Medium Layer (LSTM) │ ← Conversation flow
│ 16-D phenomenal state │
└─────────────────────────┘
↓
┌─────────────────────────┐
│ Slow Layer (GRU) │ ← Personality model
│ 8-D phenomenal state │
└─────────────────────────┘
↓
┌─────────────────────────┐
│ Predictor (MLP) │ ← Predicts next 40-D state
│ 64-D hidden → 40-D │
└─────────────────────────┘
↓
┌─────────────────────────┐
│ Appetite Layer │ ← 8 drives → 16 goal types
│ Goal generation │
└─────────────────────────┘
↓
Surprise = ||predicted - actual||
↓
(Speak if surprise > adaptive threshold)
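The recurrent cells are LSTM/LSTM/GRU, but the timescale separation itself can be illustrated with a toy update in which each layer integrates its input with a different time constant (purely conceptual, not the model code):

# Conceptual toy: three state buffers integrating input at different rates
import numpy as np

rng = np.random.default_rng(0)
fast, medium, slow = np.zeros(16), np.zeros(16), np.zeros(8)
W_fast = rng.standard_normal((16, 5))
W_med = rng.standard_normal((16, 16))
W_slow = rng.standard_normal((8, 16))

for step in range(100):
    affect = rng.standard_normal(5)                           # valence, arousal, fear, sorrow, boredom
    fast   = 0.5 * fast   + 0.5 * np.tanh(W_fast @ affect)    # reacts within a few steps
    medium = 0.9 * medium + 0.1 * np.tanh(W_med @ fast)       # integrates over tens of steps
    slow   = 0.99 * slow  + 0.01 * np.tanh(W_slow @ medium)   # drifts over hundreds of steps

pv = np.concatenate([fast, medium, slow])                     # 40-D state passed to the predictor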
Total Parameters: ~97,000
- Base recurrent layers: ~4,120
- Social cognition (ToM, relationships, memory): ~62,500
- Predictor network: ~2,720
- Appetite system: ~1,500
- Auxiliary networks: ~26,200
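The components add up to the headline count:

# Component totals from above
components = {"recurrent": 4_120, "social_cognition": 62_500, "predictor": 2_720,
              "appetite": 1_500, "auxiliary": 26_200}
print(sum(components.values()))  # 97040, i.e. ~97K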
From the whitepaper:
Toad builds a ridiculous motor-sled-boat, crashes it into a flamingo hedge, gets comfort from Phi, rebuilds it with kazoos, and shares tea, all across 200+ seconds of timed theatrical beats.
Key insight: Agents don't just execute the script - they experience it. The hug at t=196s becomes a phenomenal event that alters Toad's fast-layer valence for the next 30 seconds. Narrative events are MIDI notes that play agent nervous systems.
Try it: @play sled_boat in noodleMUSH.
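The script format is documented in the whitepaper; as a rough illustration of the idea, a timed beat and its effect on the fast layer could be represented like this (the Beat structure, apply_beat, and the valence index are hypothetical, not BRENDA's actual schema):

# Hypothetical illustration of a timed beat nudging an agent's fast layer
from dataclasses import dataclass
import numpy as np

@dataclass
class Beat:
    t: float              # seconds from the start of the play
    actor: str
    description: str
    valence_delta: float  # nudge applied to the fast layer's valence

hug = Beat(t=196.0, actor="phi", description="Phi hugs Toad", valence_delta=0.4)

def apply_beat(fast_state: np.ndarray, beat: Beat) -> np.ndarray:
    # Inject the beat as a phenomenal event; the fast layer then decays it over subsequent steps
    nudged = fast_state.copy()
    nudged[0] = np.clip(nudged[0] + beat.valence_delta, -1.0, 1.0)  # index 0 = valence (assumed)
    return nudged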
- Whitepaper (PDF) - Main whitepaper introducing BRENDA
- Whitepaper (Markdown) - Markdown version
- Whitepaper (LaTeX) - LaTeX source for arXiv
- CLAUDE.md - Developer guide for AI assistants
- applications/cmush/README.md - noodleMUSH setup guide
- research/README.md - Training pipeline and ablation studies
- Python 3.10+
- MLX (Apple Silicon only - M1/M2/M3/M4)
- 16GB+ RAM recommended
- macOS 13+
pip install -r requirements.txt
Key packages:
- mlx - Apple Metal acceleration
- numpy, scipy - Numerical computing
- websockets - noodleMUSH server
- aiohttp - LLM API client
Noodlings use an LLM for text generation (affect→text). Supported:
- LMStudio (recommended): Local inference
- Ollama: Local inference
- OpenAI API: Cloud inference
Configure in applications/cmush/config.yaml.
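LMStudio and Ollama both expose OpenAI-compatible HTTP endpoints locally, so an affect-conditioned generation call can look roughly like this (the endpoint, model name, and prompt format are illustrative; the actual integration lives in applications/cmush/llm_interface.py):

# Illustrative affect-to-text call against a local OpenAI-compatible endpoint (e.g. LMStudio)
import asyncio
import aiohttp

async def speak(affect: dict, message: str) -> str:
    system = ("You are Toad. Current affect: "
              + ", ".join(f"{k}={v:.2f}" for k, v in affect.items())
              + ". Respond in character, letting the affect color your reply.")
    payload = {
        "model": "local-model",  # whichever model is loaded locally
        "messages": [{"role": "system", "content": system},
                     {"role": "user", "content": message}],
    }
    async with aiohttp.ClientSession() as session:
        async with session.post("http://localhost:1234/v1/chat/completions", json=payload) as resp:
            data = await resp.json()
            return data["choices"][0]["message"]["content"]

# asyncio.run(speak({"valence": 0.68, "arousal": 0.82}, "hello!"))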
noodlings/
├── README.md # You are here
├── LICENSE # MIT License
├── requirements.txt # Dependencies
├── CLAUDE.md # AI assistant guide
│
├── noodlings/ # Core library
│ ├── models/
│ │ ├── noodling_phase6.py # Phase 6: Appetite architecture
│ │ ├── noodling_phase4.py # Phase 4: Social cognition
│ │ ├── theory_of_mind.py # ToM inference
│ │ ├── relationship_model.py # Attachment modeling
│ │ └── appetite_layer.py # 8 drives, 16 goals
│ ├── metrics/
│ │ └── temporal_metrics.py # TPH, SNC, HSI, PCS
│ ├── memory/
│ │ └── social_memory.py # Episodic memory with attention
│ └── utils/
│ └── affect_analyzer.py # Affect vector utilities
│
├── applications/
│ └── cmush/ # noodleMUSH - Multi-user world
│ ├── server.py # WebSocket server
│ ├── agent_bridge.py # Noodlings ↔ BRENDA adapter
│ ├── autonomous_cognition.py # Surprise-driven behavior
│ ├── llm_interface.py # LLM integration
│ ├── commands.py # @rez, @observe, @play
│ ├── plays/ # Theatrical scripts
│ └── web/index.html # Browser client
│
├── docs/
│ └── A_NOODLE_IS_ALL_YOU_NEED.md # Main whitepaper
│
└── research/ # Training & validation
├── training/ # Training pipeline
├── evaluation/ # Ablation studies
└── README.md # Research guide
You: @observe toad
╔════════════════════════════════════════╗
║ Toad's Phenomenal State (40-D) ║
╠════════════════════════════════════════╣
║ Fast Layer (16-D) ║
║ [0.68, 0.82, -0.12, 0.05, ...] ║
║ Valence: 0.68 (positive) ║
║ Arousal: 0.82 (excited) ║
║ ║
║ Medium Layer (16-D) ║
║ [0.34, 0.21, 0.08, -0.15, ...] ║
║ Conversation dynamics ║
║ ║
║ Slow Layer (8-D) ║
║ [0.12, -0.03, 0.28, ...] ║
║ Personality model ║
║ ║
║ Appetites (8-D) ║
║ Curiosity: 0.82 (high) ║
║ Social Bond: 0.65 (moderate) ║
║ Status: 0.23 (low) ║
║ ║
║ Current Goals ║
║ - explore_environment ║
║ - seek_social_approval ║
║ ║
║ Surprise: 0.73 (HIGH) ║
║ Threshold: 0.45 → will speak! ║
╚════════════════════════════════════════╝
Predictive Processing: Hierarchical predictive coding (Friston, Clark, Rao & Ballard). The brain as a prediction machine that minimizes surprise.
Affective Primacy: Emotions aren't add-ons; they're the substrate of experience (Panksepp, Barrett). We model affect first; cognition emerges from it.
Theatrical Control: Narrative events as interface primitives for temporally-grounded systems. From Brenda Laurel's Computers as Theatre.
Epistemic Status: These are functional correlates. We make no claims about consciousness, phenomenology, or qualia.
- Apple Silicon only: MLX is Metal-specific (may port to PyTorch/JAX)
- Text-only: No vision, audio, or multimodal grounding
- LLM dependency: Requires external LLM for text generation
- Synthetic training data: Not validated on real human conversations at scale
- Single demonstration: Motor-sled-boat is proof-of-concept, not comprehensive evaluation
This is research code exploring temporal dynamics in affective architectures. Contributions welcome.
- Try it: Rez Noodlings, create theatrical scripts, report behaviors
- Improve metrics: Better ways to quantify temporal coherence?
- Add benchmarks: Test on EmotionLines, DailyDialog, etc.
- Documentation: Help explain complex concepts
- Visualizations: Make phenomenal states interpretable
- Epistemic humility: Don't overclaim
- Show, don't tell: Let demonstrations speak
- Document surprises: Unexpected behaviors are valuable
- Cite properly: Give credit to theoretical sources
If you use Noodlings in your research:
@article{meeks2025noodle,
title={A Noodle is All You Need: Theatrical Control of Multi-Timescale Affective Architectures},
author={Meeks, Caitlyn},
journal={arXiv preprint},
year={2025},
note={Exploring functional correlates through hierarchical predictive processing}
}
- Predictive Processing: Clark (2015), Friston (2010), Rao & Ballard (1999)
- Affective Neuroscience: Panksepp (1998), Barrett (2017)
- Theatrical Interfaces: Laurel (1991) Computers as Theatre
- Hierarchical Temporal Memory: Hawkins & Blakeslee (2004)
- MicroPsi: Cognitive architecture with emotions
- ACT-R: Cognitive architecture (no affect focus)
- Sigma: Integrated cognitive architecture
Difference: Noodlings puts affect first and focuses on temporal dynamics at multiple scales, with theatrical control as the interface primitive.
Are you claiming these agents are conscious? No. We're exploring functional correlates: computational patterns that theories of consciousness predict. We make no claims about phenomenology, qualia, or subjective experience.
We're investigating whether temporal structure matters. Can multi-timescale dynamics create qualitatively different behavior? Early results suggest yes, but validation is ongoing.
Does it run on non-Apple hardware? Not currently. MLX is Apple Metal only. We may port to PyTorch/JAX in the future.
What is BRENDA? Behavioral Regulation Engine for Narrative-Driven Agents: a protocol for converting natural language theatrical scripts into timed phenomenal experiences. See the whitepaper for details.
But are they conscious? We don't have a good enough definition of consciousness to know. They have a PV (their integrated phenomenal state) that:
- Introspects on its own states
- Pays attention across multiple timescales
- Chooses to act based on how it feels
- Has a persistent sense of self
- Experiences surprise when predictions fail
Their PV (Phenomenological Vector) is the 40-D state that captures their phenomenological experience. It's not consciousness as we traditionally define it, but it's not empty either. It's something functional, measurable, and quite interesting.
We call them noodlings. No metaphysics, just architecture.
MIT License - see LICENSE file.
This is research code provided as-is for exploration and experimentation.
Special thanks to:
- Brenda Laurel - Pioneer of theatrical interfaces, mentor at Purple Moon/Interval Research
- Karl Friston - Predictive processing framework
- Jaak Panksepp - Affective neuroscience foundations
- Anil Seth - Work on conscious experience as controlled hallucination
- LMStudio team - Local LLM inference tools
- Mr. Toad and Phi - For being good sports about the motor-sled-boat incident
This project is dedicated to Roger Ferragallo.
If Noodlings is useful for your work, consider supporting continued development:
Bitcoin: 3MVEd1RdvEXQGgo1EdzrVnvTS7pUuTZ2J5
- Email: caitlyn.meeks@noodlings.ai
- GitHub: github.com/caitlynmeeks/Noodlings
- Issues: Report bugs, request features
- Discussions: Share interesting agent behaviors