pcollins.tech
From One Chat Window to a Personal Operations System


#internal-tools #data-sovereignty #ai #telegram #personal-systems


A couple of months ago, I wrote about building my own suite of personal tools in The Digital Root System. The idea was simple: stop renting software, start owning it. Build apps tailored to my life, host them on my own network, and let AI help me do it.

That post was the spark. This one is the fire.

Since then, the system has gone from a handful of standalone apps to something I genuinely rely on every day — a personal admin system with an AI assistant wired into my health tracking, my side projects, and my day-to-day life. All running on a Raspberry Pi behind Tailscale, deployed through my own CI/CD pipeline.

But the biggest lesson wasn't about adding more features. It was about how I talk to the AI.


The single-window problem

When I first wired up the assistant, it was one conversation. One window. I'd ask it to plan a side project, then switch to logging a meal, then jump to managing a task list.

It worked — technically. But it was messy.

The AI would start mixing contexts. I'd be mid-conversation about a coding project and it would reference my gym routine. Important details would get pushed out as the conversation grew. I'd lose the thread of what I was doing because the assistant had lost it too.

It's the same problem as having one giant Slack channel for everything. It technically contains all the information, but good luck finding anything useful when you need it.

I realised the issue wasn't the AI's capability. It was mine — I was treating a powerful tool like a simple chatbot.


Building the foundation

The first couple of weeks were about getting the bones in place.

I built the dashboard in Next.js with a PostgreSQL database and Resend for notifications. The AI assistant ran on the PI SDK with Claude as the underlying model.

The whole stack was containerised with Docker, deployed through GitHub Actions, and locked behind Tailscale with a Caddy reverse proxy. No public endpoints. My network, my rules.
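For flavour, the Caddy side of that setup looks roughly like this. A minimal sketch with a placeholder hostname, not my actual config:

```
# Hypothetical Caddyfile: the dashboard only answers on the tailnet
# hostname, so nothing is exposed to the public internet.
dashboard.my-tailnet.ts.net {
    reverse_proxy localhost:3000
}
```

Because the hostname only resolves inside the Tailscale network, anything outside the tailnet never even gets a DNS answer, let alone a response.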

Within a few days I had a working system — I could manage tasks, track habits, and chat with an AI assistant that had access to all of it.

But it was still just one chat window.


Making it actually useful

The next phase was about turning the assistant from a novelty into a tool I'd actually reach for.

I added task management so it could create, update, and query tasks without me opening a browser. Then came fitness tracking — food logging, workout tracking, Strava and Fitbit sync, and an AI coaching report that could analyse my nutrition and training patterns.

I connected it to Telegram with voice transcription through Whisper, so I could send a voice note from my phone and have it processed as a command. Photo uploads. PDF handling for documents. A memory bank with RAG search so the assistant could remember things across sessions.
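The voice flow is conceptually simple. Here's a hedged sketch: the Telegram file endpoints are real, but the OpenAI-hosted Whisper call stands in for whatever transcription backend you actually run, and the parameter names are illustrative:

```typescript
// Sketch: turn a Telegram voice note into text via Whisper.
// Assumes Node 18+ (global fetch/FormData); error handling omitted.

// Build the download URL Telegram exposes for a resolved file path.
function fileUrl(botToken: string, filePath: string): string {
  return `https://api.telegram.org/file/bot${botToken}/${filePath}`;
}

async function transcribeVoice(
  botToken: string,
  apiKey: string,
  fileId: string
): Promise<string> {
  // 1. Resolve the voice note's file path via getFile.
  const meta = await fetch(
    `https://api.telegram.org/bot${botToken}/getFile?file_id=${fileId}`
  ).then((r) => r.json());

  // 2. Download the OGG audio.
  const audio = await fetch(fileUrl(botToken, meta.result.file_path)).then(
    (r) => r.blob()
  );

  // 3. Hand it to a Whisper transcription endpoint.
  const form = new FormData();
  form.append("file", audio, "voice.ogg");
  form.append("model", "whisper-1");
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  }).then((r) => r.json());
  return res.text;
}
```

The transcript then goes through the same command pipeline as a typed message, so a voice note and a text message are indistinguishable downstream.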

The assistant now had dozens of tools spanning personal productivity, fitness, content, and development.

It was powerful. But I was still cramming everything through a single conversation, and the cracks were showing.


The shift: one bot, many windows

The fix was obvious once I saw the problem clearly.

Telegram forum topics.

Instead of one conversation doing everything, I set up the bot to work across multiple isolated threads — each one its own context, its own memory, its own focus.

A thread for fitness. A thread for coding projects. A thread for life admin. A thread for general chat.

Each thread gets its own AI agent instance with its own conversation history. When the assistant saves something to its memory bank, it's tagged with the thread topic, so searches in the fitness thread only return fitness context. The project thread knows nothing about my diet. The life admin thread doesn't care about my code.
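In code, the scoping is not much more complicated than keying everything by chat and thread. A rough illustration with made-up names, not my actual schema:

```typescript
// Per-thread context scoping (illustrative, not the real implementation).

type MemoryEntry = { topic: string; text: string };

// One conversation history per (chat, thread) pair.
const sessions = new Map<string, string[]>();

// Derive an isolated session key from the chat and forum-topic IDs.
function sessionKey(chatId: number, threadId?: number): string {
  return `${chatId}:${threadId ?? "general"}`;
}

// Memory search scoped to the thread's topic: the fitness thread never
// sees project notes, and vice versa.
function searchMemory(
  bank: MemoryEntry[],
  topic: string,
  query: string
): MemoryEntry[] {
  return bank.filter(
    (e) =>
      e.topic === topic && e.text.toLowerCase().includes(query.toLowerCase())
  );
}
```

The key insight is that isolation falls out of the key, not out of any clever prompt engineering: two threads with different keys simply never share state.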

This isn't just organisation — it's context segregation. And it changed how effective the AI is.


Why this matters

Think about how you work. You don't have one notebook for everything. You have separate spaces for separate concerns — different browser tabs, different folders, different conversations with different people.

AI should work the same way.

When you force a single conversation to handle everything, three things break down:

  1. Topic drift — the AI starts blending contexts, referencing things from unrelated conversations
  2. Context loss — as the conversation grows, earlier details get pushed out of the window and forgotten
  3. Cognitive load — you have to mentally track what the AI does and doesn't know about each topic

Multiple windows solve all three. Each thread is a clean slate for its domain. The AI stays focused because the context is focused. You don't lose information because each thread's memory is scoped to its purpose.

It mirrors how teams work in Slack — you don't discuss meal prep in the coding channel. The separation isn't bureaucracy, it's clarity.

And this isn't a Telegram-specific insight. The same principle applies to any AI agent framework being built right now. Whether it's a PI agent, a LangChain pipeline, or a custom agent running on your own infrastructure — if you're funnelling everything through a single conversation, you're limiting what the model can do for you. Multi-session, context-scoped agents aren't a nice-to-have. They're how you get from "interesting demo" to "tool I actually depend on."


How it's wired up

Here's a simplified view of how the system connects:

                        ┌──────────────────────┐
                        │     Telegram Bot     │
                        │       (Grammy)       │
                        └──────────┬───────────┘
                                   │
                     ┌─────────────┼─────────────┐
                     │             │             │
               ┌─────▼─────┐ ┌─────▼─────┐ ┌─────▼─────┐
               │  Fitness  │ │ Projects  │ │Life Admin │  ...more
               │  Thread   │ │  Thread   │ │  Thread   │  threads
               └─────┬─────┘ └─────┬─────┘ └─────┬─────┘
                     │             │             │
               ┌─────▼─────┐ ┌─────▼─────┐ ┌─────▼─────┐
               │ Own Agent │ │ Own Agent │ │ Own Agent │
               │Own Memory │ │Own Memory │ │Own Memory │
               │Own History│ │Own History│ │Own History│
               └─────┬─────┘ └─────┬─────┘ └─────┬─────┘
                     │             │             │
                     └─────────────┼─────────────┘
                                   │
                    ┌──────────────▼──────────────┐
                    │        Router Agent         │
                    │  (Routes to domain tools)   │
                    └──────────────┬──────────────┘
                                   │
             ┌──────────┬──────────┼──────────┬──────────┐
             │          │          │          │          │
        ┌────▼───┐ ┌────▼───┐ ┌────▼───┐ ┌────▼───┐ ┌────▼───┐
        │Fitness │ │ Tasks  │ │Personal│ │Content │ │  Dev   │
        │ Tools  │ │ Tools  │ │ Tools  │ │ Tools  │ │ Tools  │
        └────┬───┘ └────┬───┘ └────┬───┘ └────┬───┘ └────┬───┘
             │          │          │          │          │
             └──────────┴──────────┼──────────┴──────────┘
                                   │
                    ┌──────────────▼──────────────┐
                    │   Admin System (Next.js)    │
                    │  PostgreSQL  │  Memory Bank │
                    │   Task APIs  │ Strava/Fitbit│
                    └─────────────────────────────┘

Each Telegram thread is a fully isolated conversation. The bot manages sessions keyed by chat and thread ID. If the bot restarts, it replays stored messages to restore full context — nothing is lost.
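The replay logic is nothing clever. Something like this, with an illustrative message schema rather than my real one:

```typescript
// Illustrative restart recovery: rebuild a thread's conversation window
// from persisted messages (schema is made up for the example).

type StoredMessage = {
  chatId: number;
  threadId: number;
  role: "user" | "assistant";
  text: string;
};

function restoreHistory(
  stored: StoredMessage[],
  chatId: number,
  threadId: number
): { role: string; content: string }[] {
  // Replay only this thread's messages, in order, so the agent wakes up
  // with exactly the context it had before the restart.
  return stored
    .filter((m) => m.chatId === chatId && m.threadId === threadId)
    .map((m) => ({ role: m.role, content: m.text }));
}
```

Because the filter runs per thread, a restart restores each workspace independently — the fitness agent and the projects agent each get their own history back, never a merged one.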

Creating a new thread is as simple as /thread fitness or /thread side-project. Listing active threads is /threads. Each one is a dedicated workspace.
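Under the hood, those commands can be as simple as a small registry. An illustrative sketch, not the real handler:

```typescript
// Hypothetical /thread and /threads command logic. In the real bot the
// topic IDs come from Telegram's forum-topic API; here they're faked.

const threads = new Map<string, number>(); // name -> forum topic ID
let nextTopicId = 1;

// "/thread fitness" -> create (or reuse) an isolated workspace.
function handleThreadCommand(input: string): string {
  const match = input.match(/^\/thread\s+(\S+)/);
  if (!match) return "Usage: /thread <name>";
  const name = match[1];
  if (!threads.has(name)) threads.set(name, nextTopicId++);
  return `Thread "${name}" ready (topic ${threads.get(name)})`;
}

// "/threads" -> list active workspaces.
function handleThreadsCommand(): string {
  return [...threads.keys()].join(", ") || "No threads yet";
}
```

Re-running /thread with an existing name just drops you back into that workspace, so the commands double as navigation.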


It's still growing

This system is very much a work in progress. There are parts I'm still building and refining, and there's a bigger story to tell about where it's heading.

But even at this stage, the shift from a single chat window to a multi-channel operations hub has been the single biggest improvement. Not because of any technical complexity — the implementation is surprisingly straightforward — but because it changed how I think about working with AI.

It's not a chatbot. It's infrastructure. And like any good infrastructure, it works best when it's structured with intention.


The barrier to building keeps falling. The question isn't whether you can build your own tools anymore — it's whether you're willing to think carefully about how you use them.

If you're experimenting with AI in your workflow, I'd love to hear how you're structuring the conversation. Reach out on LinkedIn or Twitter.
