⚠️ This document is AI-translated. The Chinese version is the authoritative source.

AI Workflow Patterns

This document is intended for two types of readers: beginners (new to programming or AI tools) and experienced developers. Part one helps you build the right mindset, part two covers universal workflow patterns, and part three covers Presto project-specific practices. Feel free to skip to the sections relevant to you.


Part One: What Is AI-Assisted Development

AI Programming Assistants: Your Coding Partner

An AI programming assistant is not an "automatic code-writing machine" -- it's more like a coding partner who's always online.

Imagine you're writing an essay: you have a knowledgeable friend sitting next to you, and you can ask them at any time "how should I organize this section" or "help me check this syntax." They'll give you suggestions, but the final decision is yours -- you need to judge whether the suggestions are reasonable and whether to adopt them. An AI programming assistant plays exactly this role.

Currently popular AI programming assistants include:

  • Claude Code: Anthropic's command-line AI tool, the primary tool for the Presto project
  • OpenCode: An open-source terminal AI programming assistant compatible with multiple models
  • Cursor, Windsurf: AI assistants integrated into editors

The patterns and principles in this article apply to all AI programming tools, with specific demonstrations primarily using Claude Code.

What AI Can and Cannot Do

Things AI is good at:

  • Generating code scaffolding and implementations from descriptions
  • Explaining the logic of existing code
  • Spotting common bugs and security issues
  • Refactoring code, improving naming and structure
  • Writing test cases and documentation
  • Quickly getting up to speed in unfamiliar tech stacks

Things AI is not good at:

  • Understanding your business requirements (it can only infer from your description)
  • Guaranteeing 100% correct code (it will confidently write buggy code)
  • Making architectural decisions (it lacks a global project perspective unless you provide sufficient context)
  • Handling the latest libraries and APIs (training data has a cutoff date)
  • Replacing your judgment (it's a tool, not a decision-maker)

AI Makes Mistakes -- This Is Normal

This is the most important thing to understand when using AI programming assistants: AI makes mistakes, and it makes them frequently.

Common types of errors:

| Error Type | Behavior | Example |
|---|---|---|
| Hallucination | Invents non-existent APIs or parameters | Suggests a library's v3 method when the library only goes up to v2 |
| Outdated info | Uses old syntax or practices | Uses React class components instead of hooks |
| Assumptions | Assumes your environment matches the "typical" one | Assumes amd64 architecture when you're actually on arm64 |
| Overconfidence | Gives definitive answers about uncertain things | "This code has no issues" (but edge cases are unhandled) |

The right mindset is: Treat AI output as a "first draft," not a "final draft." Every piece of AI-generated code needs your review and verification.


Part Two: Core Workflow Patterns

Pattern 1: Context Engineering

Beginner explanation: AI doesn't know about your project. You need to write "project rules" into files so AI automatically reads them in every conversation. This is Context Engineering -- providing AI with the right background information.

The Role of CLAUDE.md

CLAUDE.md is an instruction file placed in the project root. Claude Code automatically reads it at startup, essentially giving the AI a "new employee onboarding handbook." Other AI tools have similar mechanisms (e.g., .cursorrules, AGENTS.md) -- the principle is the same.

How to Organize Context

A good context file should include:

  1. What the project is (one-sentence description)
  2. Tech stack and architecture (languages, frameworks used)
  3. Code conventions (formatting, naming, error handling approach)
  4. Hard constraints (security red lines, prohibited practices)
  5. Environment information (OS, architecture, package manager preferences)

Real Example: Presto's CLAUDE.md

Below is an excerpt from the CLAUDE.md actually used by the Presto project, showing a typical context file structure:

```markdown
# Presto -- AI Development Guide

Presto: Markdown -> Typst -> PDF document conversion platform (desktop via Wails + web via Docker).

## Code Conventions

### Go

- gofmt formatting, return error instead of panic

### Frontend (Svelte 5 + TypeScript)

- Svelte 5 runes ($state, $derived, $effect)

## Security

Security measures marked with SEC-XX in code must not be downgraded

## Prohibited Practices

- Do not modify release.yml
- Do not introduce new third-party Go dependencies
```

Note several key points:

  • The opening line clearly states what the project is, giving AI immediate global awareness
  • Code conventions are specific to the language level, not vague "write good code"
  • Security and prohibitions use explicit negative statements, leaving no ambiguity
  • It doesn't need to be long -- precision matters more than thoroughness

Context Hierarchy

In multi-repository projects, context can be organized in layers:

| Level | File | Scope | Content |
|---|---|---|---|
| User level | ~/.claude/CLAUDE.md | All projects | Language preferences, general habits |
| Org level | ai-guide.md | Cross-repo shared | Unified code conventions, Git rules |
| Repo level | Project root CLAUDE.md | Single repo | Tech stack, security constraints, prohibitions |

AI tools merge these contexts by hierarchy, with repo-level rules taking highest priority.
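For instance, a user-level file can stay very short; the contents below are invented purely to illustrate the kind of preference that belongs at this level:

```markdown
# ~/.claude/CLAUDE.md (user level -- applies to every project)

- Reply in English; keep explanations concise
- Prefer standard-library solutions over adding new dependencies
```

Repo-level files then carry only the rules specific to that codebase, as in the Presto excerpt shown earlier.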


Pattern 2: Incremental Development

Beginner explanation: Don't ask AI to write the entire feature in one go. Take it step by step, confirming each step is correct before continuing.

Why You Can't Do It All at Once

Having AI generate large amounts of code at once causes problems to compound:

  • More code means more hidden bugs, and bugs that are harder to isolate
  • In a long generation, AI tracks its own earlier code less reliably, so later code drifts from earlier decisions
  • When something goes wrong, you can't tell which step introduced the issue

Three-Step Strategy

Step one: Scaffold. Have AI generate the code structure and interface definitions, without writing implementations.

```text
"Help me design the structure of a template management module -- just interface definitions and file organization, no implementations."
```

Step two: Implement. Once the scaffold looks good, have AI implement functions one by one.

```text
"Now implement the InstallTemplate function. It needs to handle: downloading, validating the manifest, and registering locally."
```

Step three: Optimize. After the feature works, have AI help improve code quality.

```text
"Review the error handling in InstallTemplate -- are there any missing edge cases?"
```

Between each step, you should run the code, execute tests, and confirm the results. This is the "verification loop" discussed next.
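As a sketch of what step one's output might look like in Go, a scaffold contains only types and interface signatures. All names below are invented for illustration, not Presto's actual API:

```go
package main

import "fmt"

// Manifest is the template metadata the module will manage
// (fields are illustrative only).
type Manifest struct {
	Name    string
	Version string
	Author  string
}

// Installer is the surface to be implemented in step two.
type Installer interface {
	// InstallTemplate downloads, validates the manifest, and registers locally.
	InstallTemplate(url string) (*Manifest, error)
	// RemoveTemplate unregisters and deletes an installed template.
	RemoveTemplate(name string) error
	// ListTemplates reports all locally registered templates.
	ListTemplates() ([]Manifest, error)
}

func main() {
	fmt.Println("scaffold only: implementations arrive in step two")
}
```

Reviewing a scaffold like this takes minutes; reviewing it buried inside five hundred lines of implementation does not.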


Pattern 3: Prompt Iteration

Beginner explanation: The first thing you say to AI (the prompt) is rarely perfect. Adjust your phrasing based on AI's response, just like communicating with a person -- if you're not being clear, rephrase and try again.

Core Iteration Approach

  1. Write an initial prompt
  2. Review AI's output, identify what's unsatisfactory
  3. Add information or adjust requirements, run another round
  4. Repeat until satisfied

Conversation Replay Example

Here's a real iteration process (simplified):

Round 1 -- Initial request:

```text
"Write a function to parse a template's manifest.json."
```

AI produced a basic JSON parsing function, but lacked error handling and had incomplete field validation.

Round 2 -- Adding constraints:

```text
"The required fields in manifest are name, version, author, and frontmatterSchema.
Missing any one of them should return a clear error message indicating which field is missing."
```

AI added field validation, but used panic for error handling.

Round 3 -- Correcting the approach:

```text
"Don't use panic, use error returns. Refer to project conventions: Go code returns errors, never panics."
```

This round, AI produced an implementation that follows project conventions.

Key observation: The round 3 correction could have been avoided -- if CLAUDE.md already stated "return error not panic," AI would have followed it from the first round. This shows how Pattern 1 (Context Engineering) and Pattern 3 (Prompt Iteration) work together: good context reduces iteration rounds.
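The end state of this iteration can be sketched as follows. The struct and function names are invented for illustration, not Presto's actual code; what matters is that it validates every required field and returns errors instead of panicking:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Manifest holds the required fields named in round 2 of the conversation.
type Manifest struct {
	Name              string          `json:"name"`
	Version           string          `json:"version"`
	Author            string          `json:"author"`
	FrontmatterSchema json.RawMessage `json:"frontmatterSchema"`
}

// ParseManifest decodes manifest.json and reports which required field is
// missing, returning an error rather than panicking (the round-3 correction).
func ParseManifest(data []byte) (*Manifest, error) {
	var m Manifest
	if err := json.Unmarshal(data, &m); err != nil {
		return nil, fmt.Errorf("invalid manifest JSON: %w", err)
	}
	missing := ""
	switch {
	case m.Name == "":
		missing = "name"
	case m.Version == "":
		missing = "version"
	case m.Author == "":
		missing = "author"
	case len(m.FrontmatterSchema) == 0:
		missing = "frontmatterSchema"
	}
	if missing != "" {
		return nil, fmt.Errorf("manifest missing required field: %s", missing)
	}
	return &m, nil
}

func main() {
	_, err := ParseManifest([]byte(`{"name":"resume","version":"1.0.0"}`))
	fmt.Println(err) // prints: manifest missing required field: author
}
```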


Pattern 4: Verification Loop

Beginner explanation: After AI writes code, you must verify it's correct. This isn't about "not trusting AI" -- it's a fundamental part of professional development. Human-written code gets tested too.

Four Steps of the Verification Loop

```text
AI generates -> Human reviews -> Test verifies -> Iterate and improve
     ^                                                      |
     └──────────────────────────────────────────────────────┘
```

  1. AI generates: AI produces code based on your prompt
  2. Human reviews: You read the code, checking if the logic is sound and follows project conventions
  3. Test verifies: Run the code, execute tests, check edge cases
  4. Iterate and improve: When issues are found, return to step 1 for AI to revise

How to Spot Common AI Errors

Here are real cases from the Presto project:

Case 1: Package Manager Preference

AI suggested using pip install for Python dependencies. But the Presto project specifies using uv (a faster, more reliable Python package manager). If CLAUDE.md states "use uv for Python, not pip," AI won't make this mistake.

Case 2: Architecture Assumptions

AI generated a Dockerfile using an amd64 base image. But the development machine is Apple Silicon (arm64). This kind of error is subtle -- the code syntax is completely correct, but it won't run on your machine.

Case 3: Information Source

AI copied a configuration from old documentation in its training data, but the project code had already updated the interface. The correct approach is to have AI directly read the current code file (using @file reference), rather than answering from "memory."

Verification checklist:

  • [ ] Does the code compile/run?
  • [ ] Do tests pass?
  • [ ] Are the project-specified tools and conventions being used?
  • [ ] Are there hard-coded environment assumptions (architecture, paths, OS)?
  • [ ] Do the APIs and methods AI referenced actually exist?

Part Three: Presto Project AI Best Practices

This section is for contributors participating in Presto development, introducing AI collaboration approaches that have been proven effective in the project.

Division of Labor: CLAUDE.md vs CONVENTIONS.md

The Presto project uses two files to manage AI context, each with a different focus:

| File | Role | Content | Who reads it |
|---|---|---|---|
| CLAUDE.md | Repo-level constraints | Code conventions, security red lines, prohibitions | AI loads automatically |
| CONVENTIONS.md | Domain knowledge | Template Protocol, dev process, Typst reference | Developers reference as needed |

Why separate them? Because AI has a limited context window. CLAUDE.md stays concise, containing only rules that "must be known in every conversation"; CONVENTIONS.md holds detailed domain knowledge that developers manually include with @CONVENTIONS.md when needed.
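For example, a contributor working on template internals might pull the domain knowledge in only when it is relevant (the prompt below is illustrative):

```text
"@CONVENTIONS.md -- following the Template Protocol section, check whether my manifest.json covers all required fields."
```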

The Role of Organization-Level ai-guide.md

Presto-io is a multi-repository organization (Presto main app, template repos, documentation repo, etc.). Rules shared across repos are centralized in Presto-homepage/docs/ai-guide.md, including:

  • Unified Git commit conventions
  • Cross-repo code style agreements
  • General AI tool configuration recommendations

Each repo's CLAUDE.md references this centralized file, avoiding scattered and inconsistent rules.

AI Workflow for Template Development

When developing Presto templates, the following four-step approach is recommended (from CONVENTIONS.md):

Step One: Analyze the Reference File

Provide the target document (PDF or DOCX) to AI and have it analyze the typesetting characteristics:

```text
"Analyze the typesetting of this resume PDF: margins, font sizes, heading levels, spacing, column layout."
```

AI will extract quantifiable typesetting parameters to serve as the basis for template development.

Step Two: Interactive Confirmation

AI's analysis results need developer confirmation and supplementation:

  • Which typesetting features are fixed, and which should be configurable (placed in frontmatter)?
  • How should Markdown syntax map to typesetting elements (e.g., does > blockquote map to an "abstract box" or a "margin note")?
  • Are there details AI missed (such as headers/footers, watermarks)?

Step Three: Generate Code

After confirming requirements, have AI generate the template code. Remember to proceed incrementally (Pattern 2):

  1. First generate manifest.json and basic structure
  2. Then implement core conversion logic (Markdown -> Typst)
  3. Finally handle edge cases and error handling
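As a tiny sketch of sub-step 2 (core Markdown -> Typst conversion), consider mapping a blockquote to the "abstract box" variant discussed in step two. The function name and Typst styling are invented; the real mapping depends on what was agreed in the interactive confirmation:

```go
package main

import (
	"fmt"
	"strings"
)

// blockquoteToTypst maps a single Markdown blockquote line to a Typst
// block element. Styling values here are placeholders, not Presto's.
func blockquoteToTypst(line string) string {
	text := strings.TrimPrefix(strings.TrimSpace(line), "> ")
	return fmt.Sprintf("#block(fill: luma(245), inset: 8pt, radius: 4pt)[%s]", text)
}

func main() {
	fmt.Println(blockquoteToTypst("> A concise summary of the resume."))
	// prints: #block(fill: luma(245), inset: 8pt, radius: 4pt)[A concise summary of the resume.]
}
```

Generating one small, reviewable mapping at a time like this is exactly what keeps step three incremental.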

Step Four: Test and Preview

Use the template's --example output as test input, run the conversion, and check if the PDF output meets expectations. This step requires manual visual inspection -- AI cannot judge whether typesetting "looks good."

Using @file to Reference Code for AI

When conversing with AI, using @ to reference project files lets AI read the current code directly, instead of "guessing" from training data:

```text
"@internal/template/install.go -- are there any missing edge cases in this function's error handling?"
```

```text
"Referencing @manifest.json's schema definition, help me write frontmatter validation logic."
```

This is more reliable than copying and pasting code snippets -- AI can see the full file context, including imports, type definitions, and adjacent functions.

Different tools have slightly different reference syntax (Claude Code uses @file, Cursor uses @ or drag-and-drop files), but the principle is the same: let AI get information from the code, not from memory.


Final Thoughts

AI programming assistants are powerful tools, but their value depends on the user's judgment.

The core idea behind the four patterns can be distilled into one sentence: Give AI sufficient context, proceed incrementally, and verify continuously.

As AI tools rapidly evolve, specific operational steps will change, but the principles behind these patterns won't -- they are fundamentally good engineering practices, just with new manifestations in the AI era.

The specific operational steps involving AI tools in this article are based on versions from February 2026. Please refer to official documentation after tool updates.
