Composable Skills: From Theory to Implementation

In Part 1, I explored why the industry converged on skills over custom agents. The core insight—that skills are just folders of expertise that agents load on demand—has now been adopted by OpenAI, Microsoft, and others as an open standard.
But understanding the philosophy doesn't tell you how to implement it.
When I started building my "digital self"—a system of AI-assisted workflows for content creation, lead nurturing, and daily operations—I immediately hit a practical problem: where does the logic live? And more importantly, how do capabilities compose without duplicating code everywhere?
The Duplication Problem
When I started with Claude Code, I treated Skills and Commands as parallel systems. Need a content creation workflow? Build a skill. Need users to invoke it? Create a command that reimplements the same logic.
This lasted about a week before the duplication became unbearable.
Every change required updates in two places. The skill would drift from the command. Worse, I couldn't compose capabilities—if my content workflow needed to apply voice formatting, I had to inline that logic or accept inconsistency.
The problem was obvious. The solution took longer to find.
Skills as Source of Truth
After experimenting with several approaches (including one backwards attempt where skills called commands), I landed on a pattern that feels right:
Skills are comprehensive workflow packages. Commands are lightweight entry points that invoke them.
┌─────────────────────────────────────────┐
│              ENTRY POINTS               │
│  Auto-trigger: "I want to write..."     │
│                   OR                    │
│  Explicit: /express-mode topic          │
└─────────────────┬───────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────┐
│         SKILL (source of truth)         │
│  .claude/skills/expressing-insights/    │
│  ├── SKILL.md    ← Discovery overview   │
│  ├── workflow.md ← Detailed steps       │
│  ├── sources.md  ← Where to look        │
│  └── frameworks/ ← Content structures   │
└─────────────────┬───────────────────────┘
                  │ composes with
                  ▼
┌─────────────────────────────────────────┐
│     SKILL (specialized capability)      │
│  .claude/skills/voice-formatter/        │
│  ├── SKILL.md    ← Process              │
│  ├── core.md     ← Voice identity       │
│  └── linkedin.md ← Platform rules       │
└─────────────────────────────────────────┘
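The SKILL.md file is what makes a skill discoverable: it opens with YAML frontmatter giving the skill a name and a description the agent uses to decide when to load it. A minimal sketch for expressing-insights (the description wording is illustrative, not my actual file):
---
name: expressing-insights
description: Turn rough notes and source material into structured drafts. Use when the user wants to write a post, article, or update from their own notes.
---
The description does most of the work: it is what auto-triggering keys off when I say something like "I want to write about...".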
The command file becomes trivial:
# Command: /express-mode
Invoke the expressing-insights skill at .claude/skills/expressing-insights/
Pass arguments: $ARGUMENTS
No duplicated logic. The skill is the single source of truth.
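In Claude Code, that command is just a markdown file in the project's .claude/commands/ directory, named after the slash command (express-mode.md for /express-mode). A sketch of the whole file, assuming a description field in the frontmatter:
---
description: Draft content using the expressing-insights skill
---
Invoke the expressing-insights skill at .claude/skills/expressing-insights/
Pass arguments: $ARGUMENTS
That is the entire file. When the skill changes, nothing here needs to change.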
Skills Compose with Skills
The real power comes from composition. My content creation skill (expressing-insights) handles what to say—gathering sources, structuring content, choosing frameworks. But it doesn't know how I sound.
That's a separate concern. So I built a voice-formatter skill that handles tone, platform adaptation, and anti-patterns for me specifically. When expressing-insights finishes drafting, it explicitly invokes voice-formatter:
## Composition with Voice Formatter
After creating draft structure, invoke voice-formatter skill.
Do not duplicate voice logic here—voice-formatter is the source of truth.
Two skills, one responsibility each, composable.
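The voice-formatter side mirrors this shape. A sketch of what its SKILL.md frontmatter might look like (wording illustrative):
---
name: voice-formatter
description: Apply my voice and platform formatting to a finished draft. Use after content is drafted and needs tone and platform adaptation, e.g. for LinkedIn.
---
Because it has its own description, it can also trigger on its own, say when I paste a draft and ask for a LinkedIn version.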
Why This Matters Now
This isn't just personal preference. The industry has already converged.
When Anthropic open-sourced the Agent Skills spec in December 2025, OpenAI, Microsoft, and Cursor adopted it within weeks. The pattern works because it solves a real problem: how do you package expertise in a way that's portable, auditable, and composable?
Skills are becoming the atomic unit of AI agent capability. The agent itself is increasingly just an orchestrator.
The Three Principles
If you're building AI workflows—whether with Claude Code, Cursor, or any agent framework—consider these principles:
1. Skills = Source of Truth
Put the complete workflow logic in skills. Include discovery documentation, detailed steps, reference materials, and optional automation scripts. Skills are comprehensive packages.
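In practice that means the SKILL.md body stays short and points at the rest of the package, so the agent only loads the files a given task needs. A sketch, using the expressing-insights layout from above:
# Expressing Insights
Gather sources, structure the argument, and produce a draft.
- Full step-by-step process: read workflow.md
- Where to find material: read sources.md
- Content structures to choose from: see frameworks/
- After drafting, hand off to the voice-formatter skill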
2. Commands = Consistent Execution
Commands serve two purposes. First, they can invoke skills when you want explicit control over triggering a complex workflow. But they also stand alone—when you have a prompt you run repeatedly and want it to execute the same way every time, a command ensures consistency.
My /daily-brief command doesn't invoke a skill. It's a detailed prompt that orchestrates my morning routine: checking calendars, reviewing pipelines, scanning inboxes. The value is consistency. Same steps, same order, every morning. No skill needed—just a reliable prompt.
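A command like that is nothing more than a saved prompt at .claude/commands/daily-brief.md. The steps below are an illustrative sketch, not my actual brief:
Run my morning brief, in this order:
1. Check today's calendar and flag meetings that need prep.
2. Review the sales pipeline and list deals with no activity this week.
3. Scan the inbox for anything that needs a reply before noon.
4. Summarize all of it in a brief I can read in two minutes.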
3. Skills Compose with Skills
When you notice orthogonal concerns (what vs. how, create vs. validate, draft vs. format), separate them into composable skills. Each skill stays focused. The composition happens through explicit invocation.
When to Apply This
Create a Skill when:
- The workflow has multiple steps or supporting files
- The capability should auto-trigger based on context
- The logic will be reused across different entry points
Create a Command when:
- You have a prompt you run repeatedly and want consistent execution
- You need explicit invocation for an existing skill
- The workflow fits in a single file but requires reliability
Compose Skills when:
- Two capabilities are orthogonal
- One skill might be useful independently
- Separation keeps each skill focused
The Temptation to Over-Architect
There's a trap here worth naming.
When I started building AI workflows, my instinct was to create specialized agents for everything. A code reviewer agent. A PM agent. A designer feedback agent. Each with detailed system prompts encoding how to think about their domain.
This felt productive. It was actually counterproductive.
Every few months, the underlying model improves. Claude gets better at code review. It gets better at thinking like a PM. It gets better at design critique. If I've built elaborate scaffolding around yesterday's limitations, I'm now fighting the model instead of leveraging it.
The principle: only build skills for what's specific to you that models can't know.
My voice-formatter skill exists because Claude can't know how I sound. My content sources skill exists because Claude can't know where I keep my notes. These are personal context, not general capability.
But "how to do a code review"? The model already knows. Adding structure there just dilutes the model's native intelligence with my amateur attempt at encoding expertise.
This is why loose scaffolding matters. Skills should be nimble and selective—filling gaps in personal context, not reimplementing what the model does well.
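To make "personal context" concrete: the voice-formatter's core.md holds only things no model could infer about me. The entries below are invented for illustration:
# Voice: core identity
- Short sentences. One idea per paragraph.
- No hype words ("game-changing", "revolutionary").
- Lead with the problem, not the conclusion.
- Never open with a rhetorical question.
Nothing in there teaches the model how to write. It only encodes preferences the model cannot guess.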
The architecture should grow with model improvements, not calcify around old limitations.
The Shift in Thinking
I started this project thinking about what my AI assistant should do. I've ended up thinking about what reusable capabilities I'm building—and more importantly, what I should leave to the model.
The difference is subtle but significant. Tasks are one-off. Capabilities compound. But over-engineered capabilities decay.
Every workflow I build now is a potential skill. Every skill might compose with others I haven't written yet. The architecture becomes a library, not a list.
That's the mental model shift: from building agents to building skills. From solving problems to building capabilities.
If you're in the early days of building AI-assisted workflows, start with the skill. The commands and composition will follow.
This is Part 2 of the Skills-First Architecture series.
- Part 1: Skills vs Agents — Why the industry converged on skills
- Prequel: The Hidden Cost of MCP Servers — The context overhead problem that skills solve
Building something similar? I'm curious how others are approaching AI workflow architecture. What patterns are emerging in your work?

