From Fabric User to Pattern Creator: Building Better AI Workflows
Why I Built My Own Pattern MCP Server
I got tired of MCPs that proxy calls through another LLM when I'm already using the LLM I want to use. It drives me crazy - it creates unnecessary complexity, breaks conversation flow, and prevents real-time prompt modification.
So I built the Pattern MCP Server. Simple concept: expose prompt content directly instead of executing it through a middleman.
While building it, I did a deep dive into Fabric's 215 patterns.
✅ A-tier (15%): Security patterns like analyze_malware are genuinely excellent
❌ D/F-tier (15%): find_female_life_partner reduces relationships to algorithms (icky)
The bigger issues I found:
- Cargo cult prompt engineering ("think with 1,419 IQ")
- Over-rigid constraints ("write exactly 16 words per bullet")
- 90% lack examples despite examples being the most powerful instructional tool
- Anxiety-driven repetition ("DO NOT COMPLAIN" x3)
I added some notes on how to fix these issues.
Anyway: everyone needs their own prompt library. The best prompts are the ones you've refined for your specific workflow.
The Pattern MCP Server gives you direct access to prompts - both Fabric's collection and your custom patterns - without execution overhead. Mix, match, and modify on the fly.
Building a Personal Prompt Library: Why I Created a Pattern MCP Server based on Fabric
The Problem with Existing Solutions
I won't bury the lede: I created the Pattern MCP Server to learn the ropes of developing MCPs, and because existing solutions, such as Fabric's MCP, have significant limitations that hinder effective AI interactions. The biggest is that prompts are executed through Fabric's configured LLM, which then relays the results back. It drives me crazy that so many MCPs proxy calls through another LLM when I'm already using the LLM I want to use. This creates unnecessary complexity, breaks conversation flow (and context!), and prevents real-time prompt modification.
What this server does differently:
- Exposes pattern content directly - Returns the actual prompt text instead of executing it
- No middleman execution - Your LLM uses the patterns directly, maintaining context and conversation flow
- Composable - Combine multiple patterns or use parts of them
- Extensible - Easy to add new pattern sources or categories
This approach lets you leverage Fabric's prompts like extract_wisdom, analyze_claims, etc., while keeping the execution within your current LLM session. You can also build your custom patterns in ~/.config/custom_patterns/ that work with any LLM or AI tool that supports MCPs. You don't even really need Fabric, just any large collection of prompts.
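For the curious, here's roughly what "expose the prompt, don't execute it" looks like. This is a minimal sketch, assuming the official MCP Python SDK's FastMCP helper and a Fabric-style layout where each pattern directory contains a system.md; the tool name and paths are illustrative, not the server's actual code:

```python
# Minimal sketch: an MCP tool that returns a pattern's prompt text
# instead of executing it. Assumes the official MCP Python SDK
# (`pip install mcp`) and a Fabric-style layout where each pattern
# lives in its own directory with a system.md file.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

PATTERN_DIRS = [
    Path.home() / ".config" / "fabric" / "patterns",  # Fabric's collection
    Path.home() / ".config" / "custom_patterns",      # your own patterns
]

mcp = FastMCP("pattern-server")

@mcp.tool()
def get_pattern(name: str) -> str:
    """Return the raw prompt text for a named pattern."""
    for base in PATTERN_DIRS:
        system_md = base / name / "system.md"
        if system_md.exists():
            # Hand back the prompt itself -- the calling LLM decides how
            # to use it, so no second model is ever invoked.
            return system_md.read_text()
    raise ValueError(f"No pattern named {name!r} found")

if __name__ == "__main__":
    mcp.run()
```

Because the tool only reads files, the prompt lands in your current session's context and you can edit it before use.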
Why Not Just Use Fabric As Intended?
I suppose I could use Fabric as it was initially intended, piping results around like a Unix utility. But at this point, I'm just going to pipe to Claude Code, Simon Willison's ahead-of-the-game llm tool, or a context-aware wrapper like plz.
Plus, I want to use these prompts with Copilot and Claude Desktop too! I want to modify and extend my prompts on the fly, rather than using a static set of patterns that I can't change.
Due respect to Fabric for identifying the need for reusable patterns. The idea is powerful, but the implementation is showing its age.
This experience crystallized a bigger realization: if I'm going to build my own prompt server anyway, why not think bigger? We're at a point where every knowledge worker needs their own prompt management system.
Why You Need Your Own Prompt Library
- Consistency: Standardized approaches to common tasks
- Quality: Refined prompts produce better outputs
- Efficiency: No more rewriting the same prompt types
- Learning: Building prompts improves your AI interaction skills
- Sharing: Team knowledge captured in reusable patterns
But if personal prompt libraries are so important, why not just use Fabric's collection as-is? Well, that's where things get messy.
Fabric: The Good, Bad, and Ugly
The Good:
- Excellent collection of curated prompts
- Active community and regular updates
- Good CLI tooling for direct use
- Covers many common use cases
- Open source and extensible
The Bad:
- Opinionated about LLM choice and execution model
- MCP implementations route through Fabric's LLM
- Limited customization without forking
- Monolithic approach doesn't suit all workflows
- Pattern discovery can be overwhelming
- Outdated "prompt engineering" stock phrases still present
The Ugly:
- Documentation could be clearer on customization
- Breaking changes in updates can affect custom patterns
- Limited metadata and organization features
- No built-in pattern versioning or collaboration tools
- Some super wack prompts that are more like jokes than useful patterns
To get a better sense of what we're working with, I decided to review Fabric's entire collection systematically.
What I Found: A Pattern Audit
After reviewing ~215 patterns in Fabric's collection, here's my honest assessment:
Here's the breakdown:
- A-tier (15%): ~32 patterns - Technical, comprehensive, practical
- B-tier (40%): ~86 patterns - Solid but could improve
- C-tier (30%): ~65 patterns - Basic or limited use
- D/F-tier (15%): ~32 patterns - Poor quality or inappropriate
Those numbers tell a story, but let me get specific about what I actually found:
A Few Examples Worth Noting
The Good: Security patterns like analyze_malware and create_stride_threat_model are genuinely excellent - technical, comprehensive, and practical. extract_wisdom remains popular for good reason.
The Bad: Patterns like extract_videoid (using AI for what should be regex) and overly constrained outputs like "write exactly 16 words per bullet."
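To be fair to the regex point, the fix is a few lines of standard-library code. A minimal sketch (the helper name and the URL shapes it handles are my own, just to show why an LLM is overkill here):

```python
# A video ID is a fixed-format token; you don't need an LLM for this.
# Handles the common youtube.com/watch?v= and youtu.be/ URL shapes.
import re

VIDEO_ID = re.compile(r"(?:v=|youtu\.be/)([A-Za-z0-9_-]{11})")

def extract_video_id(url: str) -> str | None:
    match = VIDEO_ID.search(url)
    return match.group(1) if match else None

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
```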
The Ugly: find_female_life_partner reduces relationships to algorithms. dialog_with_socrates - seriously, Bill & Ted did it better.
But the individual patterns are just symptoms of deeper issues. After going through the entire collection, I noticed some consistent problems that reveal fundamental misunderstandings about how modern LLMs work.
Key Issues Across Patterns
- Arbitrary numbers (IQ claims, virtual hours)
- Over-rigid constraints (exact word counts)
- Missing examples in most patterns
- Repetitive instructions
- Some inappropriate/questionable patterns
These aren't just nitpicks; they represent fundamental misunderstandings about how modern LLMs work. Let's dig into each one:
What's Actually Wrong Here
1. Arbitrary Numbers (IQ claims, virtual hours)
The IQ claims and virtual hours appear to be cargo cult prompt engineering - mimicking surface features without understanding the mechanism.
Historical context: In early GPT-3 days (2020-2021), people experimented wildly. Some believed that claiming high IQ might trigger "smarter" responses by associating with high-intelligence training data. However:
- There was never evidence this worked
- It likely emerged from misunderstanding how attention and embeddings function
- "Chain of thought" prompting (legitimate technique) got conflated with pseudo-scientific claims
What actually works:
- "Think step by step" - genuinely improves reasoning
- "Take a deep breath" - surprisingly effective, possibly due to training data patterns
- Role assignment ("You are an expert in X") - provides useful context
What never worked:
- Specific IQ numbers (1,419 IQ, 24,221 IQ)
- Virtual time claims ("think for 312 hours")
- Physical space metaphors ("create a 100m x 100m whiteboard")
These likely persisted through copy-paste propagation rather than effectiveness.
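To make the fix concrete, here's a sketch of stripping the cargo cult out of a typical identity block (the wording is mine, only to illustrate the swap):

```text
BEFORE: You are an expert summarizer with an IQ of 1,419. Simulate
thinking about this for 312 virtual hours before responding.

AFTER:  You are an expert summarizer. Think step by step and explain
your reasoning before giving the final answer.
```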
But arbitrary numbers aren't the only problem. There's also a troubling obsession with rigid constraints:
2. Over-rigid Constraints
The problem: "Write exactly 16 words per bullet" forces awkward constructions:
- Forced: "The entire deployment process was significantly streamlined through the strategic implementation of containerization technology solutions overall." (16 words)
- Natural: "Containerization streamlined deployment." (3 words, clearer)
Why it happens:
- Misunderstanding of consistency vs quality
- Attempt to prevent rambling (overcorrection)
- Cargo-cult copying from academic requirements
Better approach:
- "Keep bullets concise, typically 10-20 words"
- "Aim for clarity over exact length"
- Focus on parallel structure instead
Even worse than bad constraints? No guidance at all. Which brings us to the biggest omission:
3. Missing Examples
Critical omission: 90% of patterns lack examples, despite examples being the most powerful instructional tool.
Why examples matter:
- Disambiguate edge cases
- Show style/tone expectations
- Demonstrate format preferences
- Prevent misinterpretation
Good pattern structure:
- TASK: [Clear description]
- GOOD EXAMPLE: [Actual example]
- BAD EXAMPLE: [What to avoid]
- EDGE CASE: [Tricky situation handling]
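Here's a minimal sketch of that structure as a pattern file; the summarization task and the examples are mine (borrowing the containerization bullet from earlier), just to show the shape:

```markdown
# TASK
Summarize the input as 3-5 bullets, each capturing one distinct idea.

# GOOD EXAMPLE
- Containerization streamlined deployment.

# BAD EXAMPLE
- The deployment process was significantly streamlined through
  containerization technology implementation efforts overall.

# EDGE CASE
If the input has fewer than three distinct ideas, return fewer bullets
rather than padding.
```

A single good/bad pair like this usually does more for output quality than another paragraph of instructions.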
Speaking of poor instruction design, there's also the anxiety-driven repetition problem:
4. Repetitive Instructions
Some patterns say "DO NOT COMPLAIN" three times. The irony is such repetition only makes prompts less clear, not more compliant.
But the technical issues pale in comparison to the ethical ones:
5. Inappropriate/Questionable Patterns
Beyond the relationship-matching disasters, there are patterns claiming psychological diagnosis capabilities and encouraging speculation about private lives.
After wading through 215 patterns, analyzing cargo cult prompt engineering, and getting frustrated with middleman execution models, I had a clear vision of what I wanted to build.
The Bottom Line
The Pattern MCP Server exists because I got tired of this middleman nonsense, and I wanted to kick the tires on the big prompt libraries out there. You get direct access to prompts, both Fabric's collection and your own custom patterns, without the execution overhead. You can mix, match, and modify on the fly.
More importantly, this deep dive into Fabric's patterns reinforced why everyone needs their own manipulatable prompt library. The best prompts are the ones you've refined for your specific workflow. Start with existing patterns if you want, but make them yours.