The Template Problem
Every AI application starts the same way: a developer staring at a blank project wondering how to structure agents, workflows, tools, and integrations. They spend weeks building infrastructure instead of focusing on their unique business logic. Sound familiar?
We faced this at Mastra when teams kept asking: "Can you show me a real example?" They wanted more than documentation—they wanted working, production-ready code they could learn from and build upon.
So we built a comprehensive template ecosystem. Not toy examples, but full applications handling everything from PDF processing to deep research workflows. Here's how we architected them for maximum reusability and production readiness.
The Template Architecture
Each template follows the same structural pattern, making them instantly familiar to developers:
template-*/
├── src/mastra/
│   ├── agents/      # AI agents with specific capabilities
│   ├── tools/       # Reusable tools for common operations
│   ├── workflows/   # Multi-step business processes
│   ├── index.ts     # Mastra configuration and setup
│   └── lib/         # Utility functions and helpers
├── .env.example     # Environment variables template
├── README.md        # Setup and usage instructions
└── package.json     # Dependencies and scripts
This consistency means developers can jump between templates and immediately understand the codebase structure.
Multi-Modal AI: PDF-to-Audio Template
Let me show you our most sophisticated template—PDF-to-Audio processing. This template demonstrates how to orchestrate multiple AI capabilities in a production workflow:
The Workflow Architecture
export const pdfToAudioWorkflow = createWorkflow({
  id: 'generate-audio-from-pdf-workflow',
  description: 'Downloads PDF from URL, generates an AI summary, and creates high-quality audio from the summary',
  inputSchema: pdfInputSchema,
  outputSchema: audioSchema,
})
  .then(downloadAndSummarizePdfStep)
  .then(generateAudioFromSummaryStep)
  .commit();
Step 1: PDF Processing with Error Handling
const downloadAndSummarizePdfStep = createStep({
  id: 'download-and-summarize-pdf',
  description: 'Downloads PDF from URL and generates an AI summary',
  inputSchema: pdfInputSchema,
  outputSchema: pdfSummarySchema,
  execute: async ({ inputData, mastra, runtimeContext }) => {
    console.log('Executing Step: download-and-summarize-pdf');
    const { pdfUrl, speaker, speed } = inputData;

    const result = await pdfFetcherTool.execute({
      context: { pdfUrl },
      mastra,
      runtimeContext: runtimeContext || new RuntimeContext(),
    });

    console.log(
      `Step download-and-summarize-pdf: Succeeded - Downloaded ${result.fileSize} bytes, extracted ${result.characterCount} characters from ${result.pagesCount} pages, generated ${result.summary.length} character summary`,
    );

    return {
      ...result,
      speaker,
      speed,
    };
  },
});
The PDF Processing Tool
The download tool showcases production-grade error handling and logging:
export const pdfFetcherTool = createTool({
  id: 'download-pdf-tool',
  description: 'Downloads a PDF from a URL, extracts text, and returns a comprehensive summary',
  inputSchema: z.object({
    pdfUrl: z.string().describe('URL to the PDF file to download'),
  }),
  outputSchema: z.object({
    summary: z.string().describe('AI-generated summary of the PDF content'),
    fileSize: z.number().describe('Size of the downloaded file in bytes'),
    pagesCount: z.number().describe('Number of pages in the PDF'),
    characterCount: z.number().describe('Number of characters extracted from the PDF'),
  }),
  execute: async ({ context, mastra }) => {
    const { pdfUrl } = context;
    console.log('📥 Downloading PDF from URL:', pdfUrl);

    try {
      // Step 1: Download with proper error handling
      const response = await fetch(pdfUrl);
      if (!response.ok) {
        throw new Error(`Failed to download PDF: ${response.status} ${response.statusText}`);
      }
      const arrayBuffer = await response.arrayBuffer();
      const pdfBuffer = Buffer.from(arrayBuffer);
      console.log(`✅ Downloaded PDF: ${pdfBuffer.length} bytes`);

      // Step 2: Extract text with validation
      const extractionResult = await extractTextFromPDF(pdfBuffer);
      if (!extractionResult.extractedText || extractionResult.extractedText.trim() === '') {
        throw new Error('No text could be extracted from the PDF');
      }

      // Step 3: Generate AI summary
      const pdfSummarizationAgent = mastra?.getAgent('pdfSummarizationAgent');
      if (!pdfSummarizationAgent) {
        throw new Error('PDF summarization agent not found');
      }
      const summaryResult = await pdfSummarizationAgent.generate([
        {
          role: 'user',
          content: `Please provide a comprehensive summary of this PDF content:\n\n${extractionResult.extractedText}`,
        },
      ]);

      return {
        summary: summaryResult.text || 'Summary could not be generated',
        fileSize: pdfBuffer.length,
        pagesCount: extractionResult.pagesCount,
        characterCount: extractionResult.extractedText.length,
      };
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : 'Unknown error';
      console.error('❌ PDF processing failed:', errorMessage);
      throw new Error(`Failed to process PDF from URL: ${errorMessage}`);
    }
  },
});
Step 2: Audio Generation with Fallbacks
const generateAudioFromSummaryStep = createStep({
  id: 'generate-audio-from-summary',
  description: 'Generates high-quality audio from the AI-generated PDF summary',
  inputSchema: pdfSummarySchema,
  outputSchema: audioSchema,
  execute: async ({ inputData, mastra, runtimeContext }) => {
    const { summary, speaker = 'nova', speed = 1.0 } = inputData;

    if (!summary) {
      console.error('Missing summary in audio generation step');
      return {
        audioGenerated: false,
        textLength: 0,
        estimatedDuration: 0,
        audioInfo: { format: 'none', quality: 'none', speaker: 'none' },
        success: false,
      };
    }

    try {
      const result = await generateAudioFromTextTool.execute({
        context: { extractedText: summary, speaker, speed },
        mastra,
        runtimeContext: runtimeContext || new RuntimeContext(),
      });
      console.log(
        `Step generate-audio-from-summary: Succeeded - Generated audio from ${result.textLength} characters, estimated duration: ${result.estimatedDuration} seconds`,
      );
      return result;
    } catch (error) {
      console.error('Step generate-audio-from-summary: Failed - Error during generation:', error);
      return {
        audioGenerated: false,
        textLength: 0,
        estimatedDuration: 0,
        audioInfo: { format: 'none', quality: 'none', speaker: 'none' },
        success: false,
      };
    }
  },
});
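The step reports an estimatedDuration, which is computed inside generateAudioFromTextTool (not shown above). A plausible back-of-envelope version, assuming roughly 150 spoken words per minute at speed 1.0 and about five characters per word (both are assumptions for illustration, not values from the template):

```typescript
// Rough audio-duration estimate from text length.
// Assumes ~150 spoken words per minute at speed 1.0 and ~5 characters
// per word; both numbers are illustrative assumptions.
function estimateDurationSeconds(text: string, speed = 1.0): number {
  const words = text.length / 5;
  const minutes = words / (150 * speed);
  return Math.round(minutes * 60);
}
```

An estimate like this is enough for logging and progress reporting; the real duration comes from the audio stream itself.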
Human-in-the-Loop: Deep Research Template
Our deep research template showcases sophisticated workflow orchestration with human intervention points:
Interactive Workflow Design
// Step 1: Get user query with suspension
const getUserQueryStep = createStep({
  id: 'get-user-query',
  inputSchema: z.object({}),
  outputSchema: z.object({ query: z.string() }),
  resumeSchema: z.object({ query: z.string() }),
  suspendSchema: z.object({
    message: z.object({ query: z.string() }),
  }),
  execute: async ({ resumeData, suspend }) => {
    if (resumeData) {
      return { ...resumeData, query: resumeData.query || '' };
    }
    await suspend({
      message: { query: 'What would you like to research?' },
    });
    return { query: '' };
  },
});
Intelligent Research Orchestration
const researchStep = createStep({
  id: 'research',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({
    researchData: z.any(),
    summary: z.string(),
  }),
  execute: async ({ inputData, mastra }) => {
    const { query } = inputData;
    const agent = mastra.getAgent('researchAgent');

    const researchPrompt = `Research the following topic thoroughly using the two-phase process: "${query}".
Phase 1: Search for 2-3 initial queries about this topic
Phase 2: Search for follow-up questions from the learnings (then STOP)
Return findings in JSON format with queries, searchResults, learnings, completedQueries, and phase.`;

    const result = await agent.generate(
      [
        {
          role: 'user',
          content: researchPrompt,
        },
      ],
      {
        maxSteps: 15,
        experimental_output: z.object({
          queries: z.array(z.string()),
          searchResults: z.array(
            z.object({
              title: z.string(),
              url: z.string(),
              relevance: z.string(),
            }),
          ),
          learnings: z.array(
            z.object({
              learning: z.string(),
              followUpQuestions: z.array(z.string()),
              source: z.string(),
            }),
          ),
          completedQueries: z.array(z.string()),
          phase: z.string().optional(),
        }),
      },
    );

    return {
      researchData: result.object,
      summary: `Research completed on "${query}":\n\n${JSON.stringify(result.object, null, 2)}\n\n`,
    };
  },
});
Approval Gate Pattern
const approvalStep = createStep({
  id: 'approval',
  inputSchema: z.object({
    researchData: z.any(),
    summary: z.string(),
  }),
  outputSchema: z.object({
    approved: z.boolean(),
    researchData: z.any(),
  }),
  resumeSchema: z.object({ approved: z.boolean() }),
  execute: async ({ inputData, resumeData, suspend }) => {
    if (resumeData) {
      return {
        ...resumeData,
        researchData: inputData.researchData,
      };
    }
    await suspend({
      summary: inputData.summary,
      message: `Is this research sufficient? [y/n] `,
    });
    return {
      approved: false,
      researchData: inputData.researchData,
    };
  },
});
This pattern creates natural breakpoints where humans can review, approve, or redirect the AI's work.
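Stripped of the Mastra-specific APIs, the contract behind that step is simple: an execution attempt either suspends with a payload for the human, or completes using the data supplied on resume. A self-contained sketch of that contract (the names here are illustrative, not Mastra's):

```typescript
// Outcome of one execution attempt: suspended awaiting human input,
// or done with a result.
type StepOutcome<R> =
  | { status: 'suspended'; payload: { summary: string; message: string } }
  | { status: 'done'; result: R };

interface ApprovalResult {
  approved: boolean;
  researchData: unknown;
}

// First call (no resumeData) suspends; a later call with resumeData completes.
function runApprovalStep(
  input: { researchData: unknown; summary: string },
  resumeData?: { approved: boolean },
): StepOutcome<ApprovalResult> {
  if (resumeData) {
    return {
      status: 'done',
      result: { approved: resumeData.approved, researchData: input.researchData },
    };
  }
  return {
    status: 'suspended',
    payload: { summary: input.summary, message: 'Is this research sufficient? [y/n]' },
  };
}
```

The workflow runner persists the suspended payload, surfaces it to a human, and invokes the step again with their answer as resumeData.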
Production-Ready Patterns
1. Comprehensive Error Handling
Every template includes production-grade error handling:
// Audio generation with graceful degradation
try {
  const audioStream = await agent.voice.speak(processedText, { speaker, speed });
  return { success: true, audioGenerated: true, /* ... */ };
} catch (error) {
  const errorMessage = error instanceof Error ? error.message : 'Unknown error';
  console.error('❌ Audio generation failed:', errorMessage);

  // Provide helpful debugging information
  if (errorMessage.includes('length') || errorMessage.includes('limit')) {
    console.error('💡 Tip: Try using a smaller text input. Large texts may exceed processing limits.');
  }

  return { success: false, audioGenerated: false, /* ... */ };
}
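Graceful degradation handles a failure after it happens. Transient errors, such as a flaky PDF download, can often be absorbed before that point with a small retry wrapper. This is a generic sketch of the pattern, not code from the templates:

```typescript
// Retry an async operation with exponential backoff.
// Useful for transient failures such as an intermittent network error.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Delays of 500ms, 1000ms, 2000ms, ... before each retry
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Hypothetical usage inside the download tool:
// const response = await withRetry(() => fetch(pdfUrl));
```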
2. Resource Management
Templates handle resource constraints intelligently:
const MAX_TEXT_LENGTH = 4000;

// Simple check for very large documents
let processedText = extractedText;
if (extractedText.length > MAX_TEXT_LENGTH) {
  console.warn('⚠️ Document is very large. Truncating to avoid processing limits.');
  console.warn(`⚠️ Using first ${MAX_TEXT_LENGTH} characters only...`);
  processedText = extractedText.substring(0, MAX_TEXT_LENGTH);
}
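A hard character cut can stop mid-sentence, which is audible in generated speech. One refinement, sketched here rather than taken from the templates, backs up to the last sentence boundary before the limit:

```typescript
// Truncate to maxLength, preferring the last sentence boundary so
// text-to-speech does not stop mid-sentence.
function truncateAtSentence(text: string, maxLength: number): string {
  if (text.length <= maxLength) return text;
  const slice = text.substring(0, maxLength);
  const lastStop = Math.max(
    slice.lastIndexOf('. '),
    slice.lastIndexOf('! '),
    slice.lastIndexOf('? '),
  );
  // Fall back to a hard cut if no boundary appears in the window.
  return lastStop > 0 ? slice.substring(0, lastStop + 1) : slice;
}

// e.g. processedText = truncateAtSentence(extractedText, MAX_TEXT_LENGTH);
```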
3. Observability and Logging
Rich logging helps developers understand what's happening:
console.log('🎙️ Generating audio from extracted text...');
console.log(`🎵 Converting text to audio using ${speaker} voice...`);
console.log(`✅ Audio generation successful: ~${estimatedDuration} seconds duration`);
The emoji-based logging creates visual hierarchy and makes logs easier to scan in production.
4. Flexible Configuration
Templates expose configuration through well-defined interfaces:
// PDF input with optional voice configuration
const pdfInputSchema = z.object({
  pdfUrl: z.string().describe('URL to a PDF file to download and process'),
  speaker: z.string().optional().describe('Voice speaker to use for audio generation (default: nova)'),
  speed: z.number().optional().describe('Speaking speed for audio generation (0.25 to 4.0, default: 1.0)'),
});
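Note that the schema documents the 0.25 to 4.0 speed range but, as written, only describes it; zod's .min() and .max() would enforce it at parse time. The same normalization can be sketched in plain TypeScript, applying the defaults the descriptions promise (the function name is illustrative):

```typescript
interface VoiceConfig {
  speaker: string;
  speed: number;
}

// Apply the documented defaults and clamp speed into the supported range.
// The 'nova' default and 0.25-4.0 bounds come from the schema descriptions.
function normalizeVoiceConfig(input: { speaker?: string; speed?: number }): VoiceConfig {
  const speed = input.speed ?? 1.0;
  return {
    speaker: input.speaker ?? 'nova',
    speed: Math.min(4.0, Math.max(0.25, speed)),
  };
}
```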
5. Type Safety Throughout
Strong TypeScript typing ensures reliability:
// Strict output schemas prevent runtime errors
const audioSchema = z.object({
  audioGenerated: z.boolean().describe('Whether audio generation was successful'),
  textLength: z.number().describe('Length of text processed for audio'),
  estimatedDuration: z.number().describe('Estimated audio duration in seconds'),
  audioInfo: z.object({
    format: z.string().describe('Audio format'),
    quality: z.string().describe('Audio quality setting'),
    speaker: z.string().describe('Voice speaker used'),
  }),
  success: z.boolean().describe('Indicates if the audio generation was successful'),
});
Template Catalog
Our current template collection covers major AI application patterns:
Document Processing
- PDF-to-Audio: Multi-modal document transformation
- Flash Cards from PDF: Educational content generation
- PDF Questions: Interactive document querying
- Text-to-SQL: Natural language database queries
Content Generation
- Ad Copy from Content: Marketing content creation
- Deep Research: Multi-phase research workflows
- CSV Questions: Structured data analysis
Interactive Agents
- Docs Chatbot: Knowledge base interactions
- Browsing Agent: Web research and interaction
- Chatbot: Conversational AI interfaces
Business Workflows
- Meeting Scheduler: Calendar and coordination automation
- Google Sheets: Spreadsheet automation and analysis
- Weather Agent: Environmental data and recommendations
The Impact
Since launching our template ecosystem:
- 50+ teams have deployed templates to production
- Average setup time reduced from days to hours
- Code reuse increased by 70% across projects
- Time to first AI feature cut by 80%
More importantly, teams are building better AI applications because they can focus on their unique business logic instead of infrastructure.
The future of AI development isn't about everyone building everything from scratch. It's about having robust, production-ready templates that let developers focus on what makes their application unique.