Noesis: Building Personal AI Infrastructure
What happens when you stop using AI as a chatbot and start building it into your operating system.
The Lineage
Two ideas converged for me in early 2025.
The first was Daniel Miessler’s work on fabric and his broader thinking about Personal AI (PAI). Daniel’s thesis was simple: AI shouldn’t be a destination you visit. It should be a layer that wraps around you, processing the constant stream of information you’re already swimming in. Fabric operationalized this with “patterns” — reusable prompts you could pipe content through from the command line. Summarize a YouTube video. Extract wisdom from an article. Rate the quality of an argument. The insight wasn’t the prompts themselves. It was the idea that AI should be callable — a tool in your pipeline, not a chat window you alt-tab to.
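The "callable AI" idea is easy to sketch. The following is purely illustrative (a stubbed model call and a hypothetical `run_pattern` helper, not fabric's actual code): a pattern is just a named prompt template applied to whatever text you pipe in.

```python
# Illustrative sketch only, not fabric's implementation. A "pattern" is a
# named prompt template; the model call is stubbed out here.
PATTERNS = {
    "summarize": "Summarize the following in three bullet points:\n\n{input}",
    "extract_wisdom": "List the key insights from the following:\n\n{input}",
}

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call.
    return f"<model output for {len(prompt)} chars of prompt>"

def run_pattern(name: str, text: str) -> str:
    """Apply a named pattern to piped-in text, fabric-style."""
    prompt = PATTERNS[name].format(input=text)
    return call_model(prompt)
```

The entire value is in the calling convention: any text source on one side of the pipe, any named transformation on the other.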
The second was Andrej Karpathy’s articulation of what he called the LLMWiki concept — the idea that an LLM could serve as the connective tissue for your entire personal knowledge system. Not as a search engine bolted onto your notes, but as something that understands the relationships between everything you know and can act on them. Your notes, your calendar, your messages, your meeting recordings — all of it as context for an AI that can reason across the full surface area of your work.
Both of these ideas clicked because they described something I was already groping toward: AI that isn’t a product I use, but infrastructure I operate.
What Noesis Is
Noesis is my personal AI infrastructure. It runs on Claude Code, sits on top of an Obsidian vault, and connects to the systems where my work actually happens — Slack, Gmail, Google Calendar, Confluence, Zoom, and a Plaud voice recorder I carry into every meeting.
It’s not an app. There’s no UI beyond the terminal and Obsidian itself. It’s a system of skills, agents, and tool calls that I invoke throughout my day, all wired together by a set of conventions about how data flows, where state lives, and what gets automated vs. what stays manual.
A typical day looks something like this:
- Morning: I run a briefing that reads my calendar, scans email and Slack, checks Confluence for changes, and cross-references everything against my active work streams and open TODOs. It writes a daily note with the day’s meetings pre-populated and a prioritized summary of what needs attention.
- Before meetings: I run meeting prep on a person or topic. It pulls the relevant person file, recent Plaud recordings we’ve shared, open TODOs involving them, workstream context, and recent comms. One-page brief, ready in seconds.
- End of day: An EOD briefing reconciles what was planned with what actually happened. It matches Zoom and Plaud recordings to calendar events, extracts action items, updates workstream files, and flags anything that went stale.
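For concreteness, here is a minimal sketch of what the morning briefing reduces to. Every source reader here is a stub and every function name is hypothetical; the real system reads live calendars and inboxes.

```python
# Minimal sketch of a morning-briefing pipeline. All source readers are
# stubs and every name here is hypothetical, not the actual system.
def read_calendar():
    return ["09:00 1:1 with Sam", "14:00 roadmap review"]

def scan_inboxes():
    return ["Email: budget approval needed", "Slack: deploy question in #infra"]

def open_todos():
    return ["Draft Q3 plan"]

def build_daily_note() -> str:
    """Assemble the day's meetings and a prioritized attention list into one note."""
    lines = ["# Daily Note", "", "## Meetings"]
    lines += [f"- {m}" for m in read_calendar()]
    lines += ["", "## Needs attention"]
    lines += [f"- {item}" for item in scan_inboxes() + open_todos()]
    return "\n".join(lines)
```

The EOD briefing is the same shape run in reverse: read what the day produced, diff it against what the note planned.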
None of this is magic. It’s plumbing.
Skills, Agents, and Tool Calls
Three primitives make the system work.
Skills are the atomic units — self-contained capabilities with a defined trigger, a set of tools they can use, and a specific job. I have about 50 of them. Some are simple (/now tells me the current date and timezone). Some are complex (/morning-briefing orchestrates calendar, email, Slack, Confluence, and TODO scanning into a single daily note). A skill is just a prompt with access to tools, but the discipline of defining them as discrete units — with names, triggers, and boundaries — is what makes the system composable.
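As a sketch of that discipline, a skill can be modeled as a small record. The fields and registry below are illustrative conventions of mine, not Claude Code's actual skill format.

```python
# Sketch of a skill as a discrete, named unit. Fields and registry are
# illustrative, not Claude Code's actual skill format.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str          # how it's invoked, e.g. "/now"
    trigger: str       # when it should fire
    tools: list[str]   # the only tools it may call
    prompt: str        # the job, stated as instructions

SKILLS = {
    "/now": Skill(
        name="/now",
        trigger="user asks for the current date or timezone",
        tools=["clock"],
        prompt="Report the current date, time, and timezone.",
    ),
}
```

Once skills are data, composition falls out for free: an orchestrating skill can look other skills up by name and chain them.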
Tool calls are how skills interact with the outside world. Reading a file. Searching Slack. Creating a calendar event. Fetching a Zoom transcript. Each tool is an API boundary — the skill says what it wants, the tool executes it. The model doesn’t have direct access to Slack or Gmail; it has access to tools that talk to those systems, with credentials managed through 1Password. This separation matters. It means I can audit what the system did, control what it can reach, and swap implementations without rewriting skills.
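That boundary is easy to make concrete. In this illustrative sketch (not the actual plumbing; all names are hypothetical), the model only ever names a tool and its arguments, while a dispatcher holds the credentials, executes the call, and keeps an audit trail.

```python
# Illustrative tool-call boundary: the model emits (tool, args); the
# dispatcher owns credentials and the audit log. Names are hypothetical.
AUDIT_LOG: list[tuple[str, dict]] = []

def search_slack(query: str, token: str) -> list[str]:
    # Stand-in for a real Slack API call; the token comes from the dispatcher.
    return [f"result for {query!r}"]

TOOLS = {"search_slack": search_slack}

def dispatch(tool: str, args: dict) -> object:
    """Execute a tool on the model's behalf. Credentials never enter the prompt."""
    AUDIT_LOG.append((tool, args))
    secret = "token-from-1password"  # fetched from a secrets manager, not from the model
    return TOOLS[tool](**args, token=secret)
```

Swapping a tool's implementation, or revoking its credential, touches the dispatcher side only; no skill has to change.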
Agents are skills with judgment. Where a skill follows a script, an agent makes decisions about how to approach a problem. I have about 45 of them — a skeptic that pressure-tests reasoning, a devil’s advocate that argues the opposite position, a security architect that evaluates trust boundaries, a strategist that connects dots across workstreams. They can be invoked individually or composed into panels: run a document past a verifier, skeptic, and judge to get a structured critique with dissent preserved.
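A panel is just independent critics run over the same input, with their outputs kept separate rather than merged. A hedged sketch with stubbed agents and hypothetical names:

```python
# Sketch of composing agents into a panel. Each "agent" is a stub; the point
# is the shape: independent critiques, dissent preserved, judge goes last.
def verifier(doc: str) -> str:
    return "claims check out" if "evidence" in doc else "claims unverified"

def skeptic(doc: str) -> str:
    return "weakest assumption: " + doc.split(".")[0]

def judge(doc: str, critiques: dict[str, str]) -> str:
    return "revise" if "unverified" in critiques["verifier"] else "ship"

def run_panel(doc: str) -> dict[str, str]:
    critiques = {"verifier": verifier(doc), "skeptic": skeptic(doc)}
    critiques["judge"] = judge(doc, critiques)  # verdict sits beside the dissent, not over it
    return critiques
```

Keeping each voice's output intact is the design choice that matters: a merged summary would launder the disagreement away.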
The key insight is layering: skills handle routine operations, tool calls provide the interface to external systems, and agents bring judgment to problems that need it. Most of my day is skills. Agents come out for decisions that matter.
Why “Infrastructure”
I call this infrastructure deliberately. It’s not an assistant. It’s not an agent. It’s the plumbing and wiring that makes AI useful across every part of my work, without requiring me to context-switch into “AI mode.”
The blog you’re reading this on is part of it. I write in Obsidian, add publish: true to the frontmatter, and push it to the web from the same tool I use for everything else. No separate CMS. No context switch. Just the vault.
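The publish gate is nothing more than a frontmatter check. A sketch, assuming simple key: value YAML frontmatter between --- fences (not the actual publishing code):

```python
# Sketch: decide whether a vault note gets published by reading its
# frontmatter. Assumes simple "key: value" YAML between --- fences.
def should_publish(note: str) -> bool:
    lines = note.splitlines()
    if not lines or lines[0].strip() != "---":
        return False
    for line in lines[1:]:
        if line.strip() == "---":  # end of frontmatter
            break
        if line.strip() == "publish: true":
            return True
    return False
```

A publisher then walks the vault, collects the notes this returns True for, and pushes them to the site.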
More on the architecture, the design decisions, and the failures in future posts. This is the foundation.
Published on notch.org. Views are my own.