I Built a Claude Skill That Creates Claude Skills (Using Claude)

I used Claude Code to build a tool that creates Claude Skills: open-source CLI tooling for validating and packaging skills at scale, built by AI, for AI, and used by engineering teams. Includes practical examples for SDLC automation.

Here's a sentence I didn't expect to write: I used Claude Code to build a tool that helps Claude Code build better tools for Claude Code.

If that sounds recursive, it is. And that's exactly what makes it useful.

At Launch Consulting, we spend significant time figuring out how AI fits into the software development lifecycle. Not the aspirational vision—the actual day-to-day work of writing code, reviewing architecture, and shipping features. One thing became clear quickly: general-purpose AI agents need specialized capabilities for specific workflows.

That's where Claude Skills come in. And that's why I built a tool to make creating them easier.

What Are Claude Skills?

Claude Skills are modular capability packages that extend what Claude can do. Think of them as onboarding documents that teach Claude specialized procedures.

Each skill is a folder containing:

  • A SKILL.md file with metadata and instructions
  • Optional scripts for executable code
  • Optional reference documentation
  • Optional assets like templates or examples

When you ask Claude to do something, it scans available skills and loads only what's relevant. A skill might occupy only 30-50 tokens of context until it's needed; once activated, Claude loads its full instructions.

The advantage: You can give Claude deep procedural knowledge without bloating every conversation with context it doesn't need.
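
To make that concrete, here is a minimal sketch of what a SKILL.md might contain. The example skill is hypothetical, and I'm assuming the standard frontmatter fields (name and description) from Anthropic's documented format:

---
name: git-helper
description: Helps with common git commands, branching workflows, and commit conventions. Use when the user asks about git.
---

# Git Helper

Step-by-step procedures Claude follows once the skill activates, plus pointers
to any bundled scripts or reference docs.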

Why Build Yet Another Tool?

When Anthropic released Claude Skills in October 2025, they included a skill-creator skill in their repository. It works well: it asks questions, guides you through decisions, and generates the basic structure.

But I needed something different.

At Launch, we work with engineering teams that want to integrate AI into existing development processes. That means:

  • Skills need to be versioned and reviewed like code
  • Validation should run in CI/CD pipelines
  • Packaging should be automated
  • Teams need standalone tools, not just conversational guides

Anthropic's skill-creator is conversational. Mine is programmatic.

I built a skill-creator that includes:

  • Standalone CLI tools for validation and packaging
  • Comprehensive Python type hints for safety
  • Cross-platform packaging for distribution
  • Security-focused implementation
  • Full test coverage

You can run these tools outside of Claude. You can integrate them into your build process. You can version them in Git.
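
For example, a CI job could run the bundled validator over every skill in a repository and fail the build on errors. Here is a minimal sketch, assuming skills live under a skills/ directory and that the validator exits with a non-zero status on failure (check the repository for the exact behavior):

bash

# Hypothetical CI step: validate every skill before packaging
for skill in skills/*/; do
  python .claude-plugin/scripts/validate_skill.py "$skill" || exit 1
done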

The Meta Part: Using Claude to Build Tools for Claude

I used Claude Code to build this entire project. The irony isn't lost on me.

Claude Code generated the validation scripts, wrote the packaging utilities, created the test framework, and structured the documentation. I guided it, reviewed everything, and made decisions about architecture.

This is what working with AI actually looks like: not replacing developers, but building tools faster than you could build them manually.

The project took about 8 hours of actual work. Writing it from scratch would have taken 2-3 days. Claude Code handled the tedious parts (YAML parsing, file validation, directory structure management) while I focused on what the tool should do and how teams would use it.

How It Works

The tool provides three main capabilities:

Skill Generation

Ask Claude to create a skill, and it generates the complete structure:

my-skill/
├── SKILL.md
├── scripts/
│   └── helper.py
├── references/
│   └── documentation.md
└── assets/
    └── template.json

Validation

Run validation from the command line:

bash

python validate_skill.py path/to/skill/

The validator checks:

  • Required YAML frontmatter fields
  • Naming conventions (lowercase, hyphens only)
  • Description constraints (no XML tags)
  • File structure requirements
  • Security issues in bundled code
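
To make those checks concrete, here is a simplified sketch of what a frontmatter and naming check might look like. It's illustrative only, not the repository's actual validate_skill.py:

python

# Simplified frontmatter/naming check (illustrative, not the real validator)
import re
import sys
from pathlib import Path

import yaml  # pip install pyyaml

REQUIRED_FIELDS = {"name", "description"}
NAME_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")  # lowercase, hyphens only

def validate_skill(skill_dir: Path) -> list[str]:
    errors: list[str] = []
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.exists():
        return ["missing SKILL.md"]

    text = skill_md.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return ["SKILL.md is missing YAML frontmatter"]

    frontmatter = yaml.safe_load(text.split("---")[1]) or {}
    missing = REQUIRED_FIELDS - frontmatter.keys()
    if missing:
        errors.append(f"missing frontmatter fields: {', '.join(sorted(missing))}")
    name = str(frontmatter.get("name", ""))
    if name and not NAME_PATTERN.match(name):
        errors.append("name must be lowercase with hyphens only")
    if "<" in str(frontmatter.get("description", "")):  # rough no-XML-tags check
        errors.append("description must not contain XML tags")
    return errors

if __name__ == "__main__":
    problems = validate_skill(Path(sys.argv[1]))
    for problem in problems:
        print(f"ERROR: {problem}")
    sys.exit(1 if problems else 0)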

Packaging

Create distributable skill plugins:

bash

python package_skill.py path/to/skill/ --output my-skill.zip

The packager validates first, then creates a properly structured zip file that works across Claude.ai, Claude Code, and the Claude API.
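
Conceptually, that flow is validate-then-zip. A minimal sketch (illustrative, not the actual package_skill.py; it reuses the validate_skill function sketched in the Validation section):

python

# Illustrative validate-then-zip flow; the real packager does more
# (error reporting, output naming, structure checks)
import zipfile
from pathlib import Path

def package_skill(skill_dir: Path, output: Path) -> None:
    errors = validate_skill(skill_dir)  # check sketched under Validation
    if errors:
        raise ValueError(f"validation failed: {errors}")

    with zipfile.ZipFile(output, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(skill_dir.rglob("*")):
            if path.is_file():
                # Keep the skill folder as the top-level entry in the archive,
                # e.g. my-skill/SKILL.md, my-skill/scripts/helper.py
                zf.write(path, path.relative_to(skill_dir.parent))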

Practical Applications for Software Teams

The real question: what skills should engineering teams actually build?

Based on my experience with Azure quality programs and client work at Launch, here are examples that provide immediate value:

Architecture Review Assistant

Analyzes Terraform or Bicep files for Azure best practices, cost optimization opportunities, and security compliance issues. References your organization's architecture standards and flags deviations.

Deployment Safety Checker

Pre-deployment validation that checks for feature flags, rollback procedures, monitoring setup, and runbook completeness. Integrates with your deployment pipeline.

Incident Postmortem Generator

Structures blameless postmortems following your team's format. Extracts action items from Slack threads or incident channels, tracks follow-up tasks.

Architecture Decision Record Writer

Creates ADRs in your team's format with proper context, alternatives considered, and consequences documented. Maintains consistency across decision documentation.

These aren't theoretical. They solve real problems that consume engineering time.

When I built a RAG system at Microsoft that processed Azure executive meeting notes, it reduced processing time from 4 hours to 5 minutes. That saved over 100 engineering hours per quarter. The value wasn't the technology; it was automating work that previously required manual effort.

Skills do the same thing. They automate procedural knowledge that would otherwise require explanation in every conversation.

What Makes This Different

Anthropic's skill-creator is excellent for interactive skill creation. Mine serves a different need.

Anthropic's Approach:

  • Conversational guidance through skill creation
  • Asks clarifying questions about requirements
  • Generates templates with prompts
  • Best for one-off skill creation

My Approach:

  • Standalone CLI tools that work outside Claude
  • Programmatic validation and packaging
  • Integration with CI/CD pipelines
  • Type-safe Python with comprehensive error handling
  • Best for teams building multiple skills at scale

Neither is better; they solve different problems. If you're creating a single skill, use Anthropic's. If you're building a library of skills for your team with version control and automated testing, mine might help.

Getting Started

The tool is open source and available on GitHub: jgardner04/claude-skills-skill

Installation:

Clone the repository:

bash

git clone https://github.com/jgardner04/claude-skills-skill.git
cd claude-skills-skill
pip install pyyaml

Create Your First Skill:

Ask Claude:

Create a new skill called "git-helper" that helps with git commands and workflow

Claude will ask clarifying questions, generate the structure, and validate the output.

Validate a Skill:

bash

python .claude-plugin/scripts/validate_skill.py path/to/skill/

Package for Distribution:

bash

python .claude-plugin/scripts/package_skill.py path/to/skill/ --output skill-name.zip

The packaged skill works across Claude.ai, Claude Code, and the Claude API.

The Real Value

Building this tool taught me something about working with AI: the goal isn't to replace human judgment with automation. It's to automate the parts that don't require judgment so humans can focus on the parts that do.

Claude Skills are valuable because they capture procedural knowledge, the "how" of specific tasks. But someone still needs to decide what procedures matter, how they should work, and whether they're actually helping.

That's the pattern I've seen in effective AI integration. In my work on Azure quality programs, we didn't eliminate measurement—we focused on measuring the right things. The principle was simple: measure inputs you control, not just outputs you hope for.

Skills follow the same pattern. They don't replace engineering expertise. They package it in a way that makes it reusable, consistent, and scalable.

What's Next

The tool is complete and works well for current use cases. I have no immediate roadmap because it solves the problem it was built to solve.

If you find issues or want features, open an issue on GitHub. Pull requests welcome.

More importantly, if you build interesting skills for software development workflows, share them. The value of this ecosystem grows when teams share what actually works.

Try It

If you're working with Claude Code and building multiple skills, give this tool a try. If you're at a consulting firm or enterprise team thinking about how to integrate AI into your development process, this might be a useful starting point.

The code is MIT licensed. Use it, modify it, share it.

And if you're interested in how teams like Launch Consulting are integrating AI into software development workflows (not the hype, the actual implementation), reach out. We're figuring this out in real time, with real clients, solving real problems.
