
Vercel Agents

v1.0.1 · Agents
5 files, 22.7 KB · ~1,592 words · 7 min read · Updated 2026-03-27

Generate AGENTS.md using Vercel’s research-backed methodology. 100% eval pass rate vs 53% baseline.

$ npx snappy-skills install vercel-agents
File Tree

├── SKILL.md          8.1 KB
├── compression.md    4.0 KB
├── evals.md          4.4 KB
└── templates/
    ├── generic.md    3.1 KB
    └── nextjs.md     3.2 KB
Documents

SKILL.md (8.1 KB)


---
name: vercel-agents
description: Create AI agent instructions (AGENTS.md) using Vercel's research-backed methodology. Proven to achieve 100% pass rate vs 53% baseline in evals. Use when creating agents.md, agent instructions, AI agent context, coding agent documentation, project documentation for AI, or optimizing how AI agents interact with a codebase. Triggers on agents.md, vercel agent, agent instructions, agent context, AI documentation, codebase documentation for agents.
---

# Vercel Agent — Research-Backed AGENTS.md Creator

## Purpose

Create high-performance `AGENTS.md` files using Vercel's empirically validated methodology. Their research showed that passive context via `AGENTS.md` achieves a **100% pass rate** vs a 53% baseline, outperforming skills (79%) and even explicit skill instructions. This skill applies those findings to any project.

## When to Use This Skill

- User asks to create or improve `AGENTS.md`
- User wants to optimize how AI agents work with their codebase
- User mentions "agent instructions", "agent context", "coding agent docs"
- User wants to document a project for AI consumption
- User invokes `/vercel-agent`
- New project setup where AI agents will be primary developers

---

## Quick Start

```bash
# 1. Analyze the project
# 2. Generate compressed docs index
# 3. Write AGENTS.md with retrieval-led context
# 4. Test with eval scenarios
```

---

## The Research (Why This Works)

### Vercel's Eval Results

| Approach | Pass Rate | Delta |
|----------|-----------|-------|
| Baseline (no docs) | 53% | — |
| Skill (default) | 53% | +0pp |
| Skill (explicit instructions) | 79% | +26pp |
| **AGENTS.md (compressed index)** | **100%** | **+47pp** |

### Three Reasons Passive Context Wins

1. **No decision point** — Agent doesn't need to decide whether to look up docs
2. **Consistent availability** — Present every turn, not loaded asynchronously
3. **No ordering issues** — Eliminates "read docs first vs explore first" sequencing bugs

### The Critical Insight

> "Prefer retrieval-led reasoning over pre-training-led reasoning"

Agents default to training data (often outdated). AGENTS.md redirects them to project-specific truth. This is THE core principle.

---

## Workflow

### Step 1: Audit the Project

Before writing anything, understand what the agent needs to know:

```
1. Stack & versions (exact semver matters — training data may know v15, project uses v16)
2. Architecture decisions (monorepo? API routes? serverless?)
3. File conventions (where do components go? naming patterns?)
4. APIs that differ from training data (new/changed/deprecated)
5. Build/test/deploy commands
6. Environment & secrets structure
7. Common pitfalls specific to this project
```

**Key question:** What would an agent get WRONG using only its training data?
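The version part of the audit can be made mechanical. The sketch below flags dependency ranges that are not pinned to an exact version, which is where training-data drift bites hardest; the `auditVersions` name and report shape are illustrative, not part of the skill:

```typescript
// Sketch only: flag dependencies whose version range is not pinned exactly.
// auditVersions and the DepReport shape are illustrative helpers.
type DepReport = { name: string; range: string; exact: boolean };

function auditVersions(packageJson: string): DepReport[] {
  const pkg = JSON.parse(packageJson) as {
    dependencies?: Record<string, string>;
    devDependencies?: Record<string, string>;
  };
  const all = { ...pkg.dependencies, ...pkg.devDependencies };
  return Object.entries(all).map(([name, range]) => ({
    name,
    range,
    // "exact" means the range pins a single version: no ^, ~, >=, x, or *.
    exact: /^\d+\.\d+\.\d+(-[\w.]+)?$/.test(range),
  }));
}
```

Ranges flagged `exact: false` are the ones worth documenting explicitly in AGENTS.md, since the agent cannot infer the resolved version from `package.json` alone.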

### Step 2: Structure the AGENTS.md

Follow this exact hierarchy — order matters for context window efficiency:

```markdown
# AGENTS.md

## Project Overview
[1-3 sentences. Stack, purpose, key constraint.]

## Critical Rules
[Things the agent MUST NOT get wrong. Version-specific APIs. Breaking patterns.]

## Architecture
[File structure, data flow, key abstractions.]

## Commands
[Build, test, lint, deploy — exact commands, not descriptions.]

## Conventions
[Naming, file placement, import patterns, style rules.]

## Docs Index
[Compressed pointer to retrievable documentation.]
```

### Step 3: Compress Aggressively

Vercel reduced 40KB → 8KB (80% reduction) with **zero accuracy loss**.

**Compression techniques:**
- Use pipe-delimited indexes pointing to files, not inline content
- Remove prose — use structured key:value pairs
- Abbreviate obvious patterns
- Group related items on single lines
- Omit anything the model's training data already knows correctly

**Format:**
```markdown
[Docs Index]|root: ./docs
|IMPORTANT: Prefer retrieval-led reasoning over pre-training-led reasoning
|api/routes:{auth.md,users.md,products.md}
|components:{Button.md,Form.md,Layout.md}
|config:{env.md,deploy.md}
```
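An index in this format can also be generated from a directory listing instead of maintained by hand. This is a sketch; the `buildDocsIndex` name and the directory-map input are illustrative:

```typescript
// Sketch only: emit the pipe-delimited docs index from a map of
// directory -> files. The map would normally come from walking ./docs.
function buildDocsIndex(root: string, dirs: Record<string, string[]>): string {
  const lines = [
    `[Docs Index]|root: ${root}`,
    "|IMPORTANT: Prefer retrieval-led reasoning over pre-training-led reasoning",
  ];
  for (const [dir, files] of Object.entries(dirs)) {
    // One line per directory, files grouped in braces.
    lines.push(`|${dir}:{${files.join(",")}}`);
  }
  return lines.join("\n");
}
```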

### Step 4: Write Version-Specific Corrections

This is the highest-value section. Identify APIs where training data diverges from project reality:

```markdown
## Version-Specific (v16.1 — NOT in training data)

### CORRECT patterns:
- `'use cache'` directive (NOT `export const revalidate`)
- `connection()` for dynamic rendering (NOT `cookies()` hack)
- `forbidden()` returns 403 (NEW in v16)
- `cookies()` is NOW async (was sync in v14)

### WRONG patterns (model may suggest these):
- ❌ `getServerSideProps` — removed in App Router
- ❌ `revalidate: 60` export — use `cacheLife()` instead
- ❌ Sync `cookies()` — must await in v16
```
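A WRONG-patterns list like this can also be enforced mechanically. The sketch below mirrors the example entries above; the pattern list and `scanForWrongPatterns` helper are illustrative, and a real project would likely prefer an AST-based lint rule over regexes:

```typescript
// Sketch only: flag occurrences of the WRONG patterns listed above.
// Entries mirror the examples; tune the list per project.
const WRONG_PATTERNS: Array<{ re: RegExp; fix: string }> = [
  { re: /getServerSideProps/, fix: "removed in App Router" },
  { re: /export\s+const\s+revalidate/, fix: "use cacheLife() instead" },
  // Flags cookies() not immediately preceded by "await " (regex is a
  // rough heuristic; a lint rule would be more robust).
  { re: /(?<!await\s)cookies\(\)/, fix: "cookies() must be awaited" },
];

function scanForWrongPatterns(source: string): string[] {
  return WRONG_PATTERNS.filter(p => p.re.test(source)).map(p => p.fix);
}
```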

### Step 5: Add Retrieval Pointers

Instead of embedding full documentation, point to retrievable files:

```markdown
## Documentation

When you need framework docs, read from `.docs/` directory:
|routing: .docs/routing/{pages,layouts,loading,error}.md
|data: .docs/data/{fetching,caching,revalidation}.md
|api: .docs/api/{route-handlers,middleware,auth}.md

IMPORTANT: Read the relevant .docs/ file BEFORE writing code.
Do NOT rely on training data for API signatures.
```
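Tooling that consumes these pointers can expand a line into concrete paths with simple brace expansion. The `expandPointer` helper is a hypothetical sketch that assumes the exact `|key: path/{a,b}.md` shape used above:

```typescript
// Sketch only: expand a pointer line such as
//   "|routing: .docs/routing/{pages,layouts}.md"
// into the concrete file paths it refers to.
function expandPointer(line: string): string[] {
  const m = line.match(/^\|[\w/-]+:\s*(.+)$/);
  if (!m) return [];
  const path = m[1];
  const brace = path.match(/\{([^}]+)\}/);
  if (!brace) return [path]; // no brace group: already a single path
  return brace[1].split(",").map(name => path.replace(brace[0], name));
}
```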

### Step 6: Validate with Eval Scenarios

Test your AGENTS.md against scenarios targeting training-data gaps:

```markdown
## Self-Test Scenarios

An agent reading this AGENTS.md should correctly handle:
1. [ ] Create a cached page using `'use cache'` (not revalidate export)
2. [ ] Add authentication middleware using project's auth pattern
3. [ ] Run tests with the correct command (`npm test`, not `jest`)
4. [ ] Place new components in the correct directory
5. [ ] Use the project's API client, not raw fetch
```
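Before running full eval scenarios, a cheap pre-check can confirm the AGENTS.md text actually mentions each correction a scenario depends on. The `selfTest` helper and its scenario-to-token map are illustrative:

```typescript
// Sketch only: map each scenario to a token the AGENTS.md must contain,
// and report which scenarios are covered by the current text.
function selfTest(
  agentsMd: string,
  checks: Record<string, string>,
): Record<string, boolean> {
  const result: Record<string, boolean> = {};
  for (const [scenario, token] of Object.entries(checks)) {
    result[scenario] = agentsMd.includes(token);
  }
  return result;
}
```

A `false` entry means the corresponding eval scenario is likely to fail, because the correction it tests never made it into the file.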

---

## Anti-Patterns (From Vercel's Research)

### 1. Don't Rely on Agents Invoking Tools

> "In 56% of eval cases, the skill was never invoked despite availability."

**Fix:** Embed critical context directly. Don't make the agent choose to look things up.

### 2. Don't Over-Specify Ordering

Subtle wording differences cause dramatically different outcomes:
- ❌ "You MUST read docs first" → Agent reads docs, anchors on patterns, misses project context
- ✅ "Explore project first, then reference docs for API specifics" → Better mental model

**Fix:** Let the agent explore naturally. AGENTS.md provides ambient context.

### 3. Don't Embed Full Documentation

40KB of docs = wasted context window. Agents don't need full API references inline.

**Fix:** Compressed index (8KB) pointing to retrievable files. Same 100% pass rate.

### 4. Don't Assume Training Data is Current

The whole point of AGENTS.md is correcting training-data assumptions.

**Fix:** Explicitly list what's changed. "X is NOW Y" format.

### 5. Don't Write Prose

Agents process structured data better than paragraphs.

**Fix:** Use tables, key:value pairs, bullet lists, pipe-delimited indexes.

---

## Templates

### Template: Full-Stack Next.js Project

See [templates/nextjs.md](templates/nextjs.md) for a complete AGENTS.md template for Next.js projects.

### Template: Generic Project

See [templates/generic.md](templates/generic.md) for a framework-agnostic template.

---

## Compression Reference

### Before (verbose):
```markdown
## Routing

The application uses file-based routing with the App Router.
Pages are defined in the `app/` directory. Each folder represents
a route segment. The `page.tsx` file in each folder defines the
UI for that route. Layout files (`layout.tsx`) wrap child routes.
Loading states are handled by `loading.tsx` files.
```

### After (compressed):
```markdown
## Routing
|app-router, file-based
|app/[segment]/page.tsx = route UI
|app/[segment]/layout.tsx = wrapper
|app/[segment]/loading.tsx = loading state
```

**Result:** ~80% smaller with no information lost; the agent performs identically.

---

## Navigation Guide

| Need to... | Read this |
|------------|-----------|
| Full Next.js AGENTS.md template | [templates/nextjs.md](templates/nextjs.md) |
| Generic project template | [templates/generic.md](templates/generic.md) |
| Compression techniques | [compression.md](compression.md) |
| Eval scenario design | [evals.md](evals.md) |

---

**Skill Status**: COMPLETE
**Core Principle**: Passive context > Active retrieval. Compress aggressively. Correct training data explicitly.
