Tags: AI Engineering, TypeScript, Node.js, Cursor, Vercel AI SDK

Setting Up Your AI Development Environment (TypeScript Edition)

Frank Atukunda
Software Engineer
December 9, 2025
9 min read

TypeScript has become a first-class citizen for AI development.

While Python remains dominant for model training and data science, for the application layer—where you glue models to UIs and databases—TypeScript offers compelling advantages. You get better type safety for structured outputs, a massive ecosystem of web frameworks, and now, first-class support from every major AI provider.

But "just use TypeScript" isn't enough. You need a stack that moves as fast as the models do.

Here is the definitive guide to setting up your AI development environment for 2025.


1. The Runtime: Node.js 23

Forget ts-node. Forget complex build steps for simple scripts.

Node.js 23 (released late 2024) changed the game with native (still experimental) TypeScript type stripping. You can now run .ts files directly.

Install Node.js 23

I recommend using nvm (Node Version Manager) or fnm (Fast Node Manager).

# Install fnm (it's faster)
curl -fsSL https://fnm.vercel.app/install | bash
 
# Install Node 23
fnm install 23
fnm use 23

Why this matters

You can now run a TypeScript AI script like this:

node --experimental-strip-types my-agent.ts

Important caveat: This flag only strips types—it doesn't check them. You're trading compile-time safety for development speed. For production builds or CI/CD, you'll still want proper type checking with tsc.

Use this for:

  • Quick prototyping and scripts
  • Local development iterations

Still use tsc for:

  • Production builds
  • CI/CD pipelines
  • Catching type errors before runtime

2. The Language: TypeScript 5.7+

AI engineering is 50% prompt engineering and 50% schema engineering.

When you ask an LLM for JSON, you need to guarantee the shape of that data. TypeScript is your best friend here.

The Golden tsconfig.json

Run npm init -y and npm install -D typescript @types/node. Then create this config:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "outDir": "./dist"
  },
  "include": ["src/**/*"]
}

Why strict: true? When you define tools for an AI agent using libraries like Zod, strict mode ensures your TypeScript types exactly match what the LLM expects. It prevents "hallucinated" data structures from crashing your app.

Note on ESM: This configuration uses ES Modules ("module": "NodeNext"). If you need CommonJS for legacy reasons, adjust accordingly, but ESM is the future.


3. The Editor: Cursor (The AI Native Choice)

In 2025, coding without AI assistance feels like coding in Notepad.

While GitHub Copilot is great, Cursor has become a popular choice for AI engineers. It's a fork of VS Code, so all your extensions work, but it has AI capabilities baked into the core.

Why Cursor?

  • Composer Mode (Cmd+I): Write multi-file features in one go.
  • Codebase Context: It indexes your entire project, so it knows your database schema when writing your AI prompts.

Cost consideration: Cursor is a paid product (~$20/month). If you prefer free alternatives, VS Code with the Continue.dev extension offers similar AI-assisted coding.

The Secret Weapon: .cursorrules

Create a file named .cursorrules in your project root. This tells the AI how to behave.

# .cursorrules
 
You are an expert AI Engineer.
- Always use the Vercel AI SDK for LLM interactions.
- Use Zod for all schema definitions.
- Prefer functional programming patterns.
- Never hardcode API keys; use process.env.
- Add descriptive comments for complex LLM prompts.

Now, every time Cursor generates code, it follows your engineering standards.


4. The Framework: Vercel AI SDK

Don't write raw HTTP requests to OpenAI. Use the Vercel AI SDK: it gives you one unified interface across providers (OpenAI, Anthropic, Google, and more), so swapping models is a one-line change.

npm install ai @ai-sdk/openai zod dotenv

Why Zod?

Zod is a schema validation library. In AI engineering, it's the bridge between the fuzzy world of LLMs and the strict world of code.

import { z } from 'zod';
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
 
// The LLM must return data matching this shape
const recipeSchema = z.object({
  name: z.string().min(1).describe("Recipe name"),
  ingredients: z.array(z.string()).min(1).describe("List of ingredients"),
  prepTime: z.number().positive().describe("Preparation time in minutes"),
});
 
// Extract the TypeScript type from the schema
type Recipe = z.infer<typeof recipeSchema>;
 
// Use with Vercel AI SDK structured outputs
const result = await generateObject({
  model: openai('gpt-4o'),
  schema: recipeSchema,
  prompt: 'Create a recipe for chocolate chip cookies',
});
 
// result.object is fully typed as Recipe
console.log(result.object.name);

5. Security & Environment Variables

This sounds basic, but it's where most beginners expose API keys.

Step 1: Create .env and .env.example

# .env (NEVER commit this)
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
NODE_ENV=development

# .env.example (commit this as a template)
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
NODE_ENV=development

Step 2: Add .env to .gitignore IMMEDIATELY

echo ".env" >> .gitignore

Step 3: Validate Environment Variables at Startup

Don't wait for runtime errors. Validate your environment on startup:

import 'dotenv/config';
import { z } from 'zod';
 
const envSchema = z.object({
  OPENAI_API_KEY: z.string().min(1, "OpenAI API key is required"),
  NODE_ENV: z.enum(['development', 'production']).default('development'),
});
 
// This will throw a clear error if keys are missing
export const env = envSchema.parse(process.env);
 
// Now use env.OPENAI_API_KEY instead of process.env.OPENAI_API_KEY

This catches configuration errors before your app runs for 10 minutes and then crashes.


6. Common Gotchas & Debugging

AI applications fail differently than traditional apps. Here's what to watch for:

Schema Validation Failures

LLMs don't always follow your schema perfectly. Always handle validation errors:

import { z } from 'zod';
import { generateObject, NoObjectGeneratedError } from 'ai';
import { openai } from '@ai-sdk/openai';

// Define your schema
const recipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
});

// Handle validation errors
try {
  const result = await generateObject({
    model: openai('gpt-4o'),
    schema: recipeSchema,
    prompt: 'Create a recipe',
  });
  console.log(result.object);
} catch (error) {
  // generateObject wraps schema failures in NoObjectGeneratedError
  if (NoObjectGeneratedError.isInstance(error)) {
    console.error("LLM returned invalid data:", error.text);
    // Retry with a more explicit prompt or fallback
  } else {
    throw error;
  }
}

Token Counting & Rate Limits

Always log token usage to avoid surprise bills:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
 
const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Your prompt here',
});
 
console.log(`Tokens used: ${result.usage.totalTokens}`);
// Rough estimate only: input and output tokens are priced differently,
// so for real accounting use promptTokens and completionTokens separately
console.log(`Estimated cost: $${(result.usage.totalTokens / 1_000_000 * 2.5).toFixed(4)}`);
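Because input and output tokens carry different prices, a tiny helper is more honest than one blended rate (the rates below are illustrative, not official pricing):

```typescript
// Per-direction cost estimate; rates are USD per million tokens
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

function estimateCostUSD(
  usage: TokenUsage,
  inputPerMillion: number,
  outputPerMillion: number,
): number {
  return (
    (usage.promptTokens / 1_000_000) * inputPerMillion +
    (usage.completionTokens / 1_000_000) * outputPerMillion
  );
}

// e.g. 2,000 prompt tokens + 500 completion tokens at $2.50 / $10.00 per million
const cost = estimateCostUSD({ promptTokens: 2000, completionTokens: 500 }, 2.5, 10);
console.log(cost.toFixed(4)); // "0.0100"
```

Keep the rates in config, not code, so you can update them when pricing changes.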

Package Confusion

There are TWO OpenAI packages:

  • @ai-sdk/openai - Vercel's adapter (use this with AI SDK)
  • openai - Official OpenAI SDK (use for direct API calls)

Don't mix them. Pick one pattern and stick with it.

Observability

For production apps, integrate observability early:

  • Langfuse or LangSmith for prompt tracking
  • Log every LLM call with: prompt, response, tokens, latency
  • Track costs per feature/user

A minimal logging pattern:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
 
const prompt = 'Explain quantum computing';
const startTime = Date.now();
 
const result = await generateText({
  model: openai('gpt-4o'),
  prompt: prompt,
});
 
// Simple logging pattern
console.log(JSON.stringify({
  timestamp: new Date().toISOString(),
  model: 'gpt-4o',
  prompt: prompt.slice(0, 100), // First 100 chars
  tokens: result.usage.totalTokens,
  latency: Date.now() - startTime,
}));

The "One-Click" Setup Script

Want to spin up a new AI project in 30 seconds?

I've written a setup script that:

  1. Validates Node.js version
  2. Initializes a Node.js project with proper config
  3. Installs TypeScript, Vercel AI SDK, Zod, and Dotenv
  4. Creates the tsconfig.json, .cursorrules, and .gitignore
  5. Creates a "Hello World" AI script
  6. Sets up environment variable templates

Save this as init-ai.sh and run bash init-ai.sh:

#!/bin/bash
set -e
 
echo "Initializing AI TypeScript Project..."
 
# 1. Check Node version
node_version=$(node -v | cut -d'v' -f2 | cut -d'.' -f1)
if [ "$node_version" -lt 23 ]; then
  echo "Node 23+ recommended for native TypeScript support."
  echo "Current version: $(node -v)"
  echo "Install with: fnm install 23 && fnm use 23"
  read -p "Continue anyway? (y/n) " -n 1 -r
  echo
  if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    exit 1
  fi
fi
 
# 2. Initialize project
npm init -y
npm pkg set type="module"
 
# 3. Install dependencies
npm install ai @ai-sdk/openai zod dotenv
npm install -D typescript @types/node
 
# 4. Create tsconfig.json
cat > tsconfig.json <<'EOF'
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "skipLibCheck": true,
    "outDir": "./dist"
  },
  "include": ["src/**/*"]
}
EOF
 
# 5. Create .gitignore
cat > .gitignore <<'EOF'
node_modules/
.env
dist/
.DS_Store
EOF
 
# 6. Create .env.example
cat > .env.example <<'EOF'
OPENAI_API_KEY=your_openai_key_here
NODE_ENV=development
EOF
 
# 7. Create .cursorrules
cat > .cursorrules <<'EOF'
You are an expert AI Engineer.
- Always use the Vercel AI SDK for LLM interactions.
- Use Zod for all schema definitions.
- Prefer functional programming patterns.
- Never hardcode API keys; use process.env.
- Add descriptive comments for complex LLM prompts.
EOF
 
# 8. Create src directory and index.ts
mkdir -p src
cat > src/index.ts <<'EOF'
import 'dotenv/config';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
 
// Validate environment variables
const envSchema = z.object({
  OPENAI_API_KEY: z.string().min(1),
});
 
const env = envSchema.parse(process.env);
 
async function main() {
  console.log('Generating text with GPT-4o...\n');
  
  const { text, usage } = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Tell me a joke about TypeScript and AI.',
  });
  
  console.log(text);
  console.log(`\nTokens used: ${usage.totalTokens}`);
}
 
main().catch(console.error);
EOF
 
# 9. Create README
cat > README.md <<'EOF'
# AI TypeScript Project
 
## Setup
1. Copy `.env.example` to `.env` and add your API keys
2. Install dependencies: `npm install`
3. Run: `node --experimental-strip-types src/index.ts`
 
## Production Build
```bash
npm run build
node dist/index.js
```
EOF
 
# 10. Add build script to package.json
npm pkg set scripts.build="tsc"
npm pkg set scripts.dev="node --experimental-strip-types src/index.ts"
 
echo ""
echo "AI Project Setup Complete!"
echo ""
echo "Next steps:"
echo "1. Copy .env.example to .env and add your API keys"
echo "2. Run: npm run dev"
echo ""
echo "Happy coding!"

What's Next?

You have the environment. Now build something.

Here are the next steps:

  1. Run the setup script and verify everything works
  2. Add observability - Even simple logging helps debug LLM issues
  3. Check out my Tech Stack Cheat Sheet to decide which tools to plug into your new setup
  4. Build a small project - Start with a simple chatbot or data extraction tool

Remember: AI applications are unique. They're probabilistic, expensive to run, and fail in unexpected ways. Build in logging, validation, and cost tracking from day one.

Happy coding!
