
How the Agent Thinks

Every response your agent gives is built on a carefully assembled context — not a memory dump, not a guess. Here's exactly what goes in, what stays, and what the agent simply cannot know.

🧠

35 years of T1D taught me what actually matters

The hardest part of building an AI agent for diabetes is not the engineering. It's knowing what information actually matters.

A naive implementation dumps everything — full glucose history, every log entry, every note. The result is a bloated prompt where the signal drowns in noise. The model wastes its attention on irrelevant data and misses what actually matters right now.

I've lived with type 1 diabetes for 35 years. That experience — not an algorithm — determined which data gets assembled into every prompt. I know that your active insulin on board matters more than your average from last Tuesday. I know that last night's sleep matters at 9am but not at 9pm. I know that a morning run changes your insulin sensitivity for hours in ways that your CGM graph alone will not tell you.

What the agent knows is an editorial decision. I've been managing T1D every day for 35 years — that's what shaped it.

What the agent reads — built fresh every time

Every time you send a message, the agent assembles your health dashboard from scratch. Nothing is cached from a previous session.

🔴

Recent glucose

Your last hour of readings plus trend direction. Not your full history — the recent window is what determines where you are heading right now.

💉

Active insulin & food

Insulin still on board, carbohydrates still being absorbed. Everything physiologically active in your body right now.

😴

Last night & today

Sleep quality, workouts, steps, stress events. Context that shapes how your glucose behaves today but would be invisible to an agent looking only at glucose numbers.

📋

Your settings

Insulin-to-carb ratios, correction factor, target range, and your personal goals. The rules your agent uses to calibrate every piece of advice.
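Conceptually, the dashboard assembly described above looks something like the following sketch. All names and fields here (`HealthDashboard`, `build_context`) are illustrative assumptions, not the actual Open-D implementation:

```python
from dataclasses import dataclass

# Illustrative sketch only: names and fields are hypothetical,
# not the real Open-D data model.
@dataclass
class HealthDashboard:
    recent_glucose: list[float]    # last ~hour of CGM readings
    trend: str                     # e.g. "rising", "falling", "steady"
    insulin_on_board: float        # units still active
    carbs_on_board: float          # grams still being absorbed
    daily_context: dict[str, str]  # sleep, workouts, steps, stress
    settings: dict[str, float]     # I:C ratio, correction factor, targets

def build_context(dashboard: HealthDashboard) -> str:
    """Assemble the prompt context fresh for a single message."""
    lines = [
        f"Recent glucose ({dashboard.trend}): {dashboard.recent_glucose}",
        f"Insulin on board: {dashboard.insulin_on_board} u, "
        f"carbs on board: {dashboard.carbs_on_board} g",
        f"Today: {dashboard.daily_context}",
        f"Settings: {dashboard.settings}",
    ]
    return "\n".join(lines)
```

The key point the sketch illustrates: the context is rebuilt from current values on every call, so there is nothing stale to cache.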

💬

Short-term memory — this session

The agent also reads the recent history of your current conversation. This is what lets it follow a thread — if you mentioned a correction ten minutes ago, it remembers. This memory lasts for the current session only.

Session memory is not saved to long-term memory automatically. Only what gets written during compaction persists.
📖

Memory compaction — writing the journal

Conversations cannot grow forever — memory has limits. Periodically, the agent does something analogous to writing in a journal: it reads through accumulated raw notes and conversation history and compresses them into a concise narrative summary.

What

Raw notes — everything logged, said, and observed — are distilled into a short, dense summary that captures what matters: patterns confirmed, adjustments made, key moments flagged.

/compact

You can trigger this manually with the /compact command in the chat. The agent also does it automatically when the conversation grows long.

Result

The summary becomes part of long-term memory. The raw notes are retired. The agent carries forward the meaning, not the volume.
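The flow above can be sketched in a few lines. This is a hypothetical illustration of the shape of compaction, not Open-D's code; `summarize` stands in for the model call that writes the narrative summary:

```python
# Hypothetical sketch of the compaction step: raw notes go in,
# a short narrative summary comes out, and the raw notes are retired.
def compact(raw_notes: list[str], summarize) -> dict:
    """Compress raw session notes into a dense summary.

    `summarize` is injected here (it would be a model call in
    practice) so the flow itself stays testable.
    """
    summary = summarize(raw_notes)
    return {
        "summary": summary,  # becomes part of long-term memory
        "raw_notes": [],     # retired: meaning carried forward, not volume
    }
```

The design choice the sketch highlights: after compaction, only the summary survives, which is why the agent "carries forward the meaning, not the volume."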

🌱

Long-term memory — what the agent has learned about you

This is what persists across sessions, across days, across months.

Your profile

  • Name, diagnosis date
  • Preferences and communication style
  • Goals — what you are working on
  • What has and has not worked for you

Patterns noticed

  • "You spike after pasta"
  • "Morning runs drop you 40 pts"
  • "Stress raises your baseline"
  • Confidence grows as evidence accumulates

Key moments

  • Hypo events and their causes
  • Dose adjustments and outcomes
  • Life changes — illness, travel, new job
  • Milestones and breakthroughs
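The three buckets above suggest a simple shape for the long-term store. The following is a hypothetical sketch of that shape (`Pattern`, `LongTermMemory`, and the `evidence` threshold are all assumptions for illustration):

```python
from dataclasses import dataclass, field

# Hypothetical shape of the long-term memory described above.
@dataclass
class Pattern:
    description: str   # e.g. "You spike after pasta"
    evidence: int = 1  # observations supporting the pattern

    def confidence(self) -> str:
        # Confidence grows as evidence accumulates;
        # the threshold of 5 is an arbitrary illustration.
        return "high" if self.evidence >= 5 else "tentative"

@dataclass
class LongTermMemory:
    profile: dict[str, str] = field(default_factory=dict)  # name, goals, preferences
    patterns: list[Pattern] = field(default_factory=list)
    key_moments: list[str] = field(default_factory=list)   # hypos, dose changes, milestones
```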
🚧

Honest limits — what the agent does NOT know

We think being explicit about this matters. An agent that pretends to know everything is more dangerous than one that clearly states its edges.

Not your full glucose history

The agent sees recent readings and long-term summaries — not every data point from the past three years. Summaries capture patterns; they cannot reproduce every individual reading.

Not other apps or the internet

The agent cannot see what you are doing in other apps, browse the web, or access your phone settings. Its world is what you log or sync into Open-D.

Not what you did not log

If you ate something and did not log it, the agent does not know. Gaps in data are flagged as gaps — the agent will say "I don't see any food logged" rather than guessing.

Not things you say in passing

If you say "remember this" in a message, that alone does not save it to long-term memory. It lives in the session. Only compaction writes things permanently. We're working on explicit save commands.

⚙️

Why this is harder than it looks

1

AI models have no built-in memory. Every conversation starts from zero: the agent remembers nothing from the last time you talked.

2

Open-D 'remembers' because we explicitly rebuild what the agent knows on every single message. The health dashboard, the session history, the long-term memory — assembled, structured, and delivered fresh.

3

A simple chat wrapper forgets everything between sessions, knows nothing about your glucose, and has no concept of what happened last night. Open-D is built to solve every one of those problems.
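The per-message rebuild described in point 2 can be sketched as a loop. Everything here is illustrative (the function and parameter names are assumptions); the point is that the model itself is stateless and all "memory" is delivered fresh on each turn:

```python
# Sketch of the per-message rebuild: nothing persists inside the model;
# all context is reassembled and handed over on every single message.
def answer(message, session_history, long_term_memory, dashboard, model):
    context = {
        "dashboard": dashboard,       # rebuilt fresh, never cached
        "session": session_history,   # short-term: this conversation only
        "memory": long_term_memory,   # persists via compaction summaries
    }
    reply = model(context, message)   # the model call itself is stateless
    session_history.append((message, reply))
    return reply
```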

This memory architecture — shaped by 35 years of living with T1D — is the difference between a real diabetes agent and a generic chatbot wearing a health icon.

Want to see this agent work with your actual data?

Try it with realistic T1D data

The demo may not reflect the latest app version.