🤖 Your First AI Endpoint
Now it’s time to connect your server to the OpenAI API and interact with their models.
We’ll set up an API endpoint that sends messages to OpenAI and returns responses. This is the core of every AI app you’ll build.
🔍 Choosing the Right OpenAI API
OpenAI offers three main ways to interact with models:
- 💬 Chat Completions API – The classic method, still widely used.
- 🤖 Agent SDK – For building complex AI agents (we’ll cover this later).
- ⚡ Responses API – The newest, simplest option (our starting point).
Why start with the Responses API?
It’s clean, flexible, and gives you access to the latest features. It’s also very similar to the Agent SDK, so learning it now sets you up for more advanced work later.
Example:
```js
// Minimal AI request
const response = await client.responses.create({
  model: "gpt-4o-mini",
  input: "Your message here",
});
```

🛠️ Step 1: Import OpenAI in Your Server
Open index.js and add the OpenAI import at the top:
```js
import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai"; // 👈 New import

config();
```

🔧 Step 2: Create the OpenAI Client
Right after your Express app setup, create the client once (not per request):
```js
const app = express();
const PORT = process.env.PORT || 8000;

// Create OpenAI client once
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

app.use(cors());
app.use(express.json());
```

💡 Why once? Creating the client on every request wastes resources and slows down responses.
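To see why the client should be a module-level singleton, here is a small sketch of the pattern in isolation. `makeClientFactory` is a hypothetical helper invented for this illustration (it is not part of the openai package): it builds the client lazily on the first call and hands back the same instance on every later call.

```js
// Sketch of the singleton idea: build the client once, reuse it forever.
// makeClientFactory is a hypothetical helper, not part of the openai package.
function makeClientFactory(create) {
  let client = null;   // cached instance
  let creations = 0;   // how many times `create` actually ran
  return () => {
    if (client === null) {
      client = create();
      creations += 1;
    }
    return { client, creations };
  };
}

// No matter how many requests arrive, `create` runs exactly once.
const getClient = makeClientFactory(() => ({ label: "expensive client" }));
getClient();
getClient();
// getClient().creations is 1
```

Declaring `const openai = new OpenAI({...})` at module scope, as in the step above, gives you this behavior for free: the module body runs once when the server starts.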
🤖 Step 3: Create Your AI Chat Endpoint
Here’s the code:
app.post("/api/chat", async (req, res) => { try { const { message } = req.body;
if (!message) { return res.status(400).json({ error: "Message is required" }); }
// Calling the openaimodel with our request const response = await openai.responses.create({ model: "gpt-4o-mini", input: message, });
res.json({ // output_text is the response from the model we call our variable response response: response.output_text, model: "gpt-4o-mini", success: true });
} catch (error) { console.error("OpenAI Error:", error); res.status(500).json({ error: "Failed to get AI response", success: false }); }});🧐 What’s Happening Here — Step by Step
1. `app.post("/api/chat", ...)` creates a POST endpoint at `/api/chat`. Your frontend will send requests here whenever it needs an AI response.
2. `const { message } = req.body;` reads the `message` property from the JSON body sent by the frontend. Example: `{ "message": "Hello AI!" }` → `message = "Hello AI!"`.
3. Validation check: if no message is provided, the server immediately returns a 400 error: `{ "error": "Message is required" }`. This avoids unnecessary API calls (saving you money).
4. 📡 Calling OpenAI (`openai.responses.create`): this is the exact moment the backend talks to OpenAI’s API. `model` is the AI brain to use (`"gpt-4o-mini"` here for speed and low cost); `input` is the user’s message. OpenAI processes the request and generates a response.
5. 📬 Handling the OpenAI response: the API returns a structured object, and its `.output_text` property contains the AI’s actual text answer. Example: `"Hello there! How can I help you?"`.
6. `res.json({...})` sends the result back to the frontend: the AI response, model name, and a success flag packaged as JSON. Example: `{"response": "Sure! Here's a fun fact...", "model": "gpt-4o-mini", "success": true}`.
7. Error handling with `try/catch`: if anything fails (invalid key, no credits, network issues), the `catch` block sends a safe error JSON instead of crashing the server.
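The non-network steps above (read the body, validate it, shape the reply) can be sketched as pure functions. `validateChatBody` and `shapeChatReply` are hypothetical helpers invented for this illustration, not part of the tutorial’s server code; they just replay the same logic in isolation:

```js
// Hypothetical helpers mirroring the endpoint's pure logic.

// Steps 2-3: read and validate the request body
function validateChatBody(body) {
  const message = body && body.message;
  if (!message) {
    return { ok: false, status: 400, payload: { error: "Message is required" } };
  }
  return { ok: true, message };
}

// Step 6: package the model's answer for the frontend
function shapeChatReply(outputText, model = "gpt-4o-mini") {
  return { response: outputText, model, success: true };
}

// Usage:
validateChatBody({});                       // → { ok: false, status: 400, ... }
validateChatBody({ message: "Hello AI!" }); // → { ok: true, message: "Hello AI!" }
```

Splitting the logic out like this is optional, but it makes the validation and response shaping easy to unit-test without ever touching the OpenAI API.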
📝 Complete index.js Example
```js
import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai";

config();

const app = express();
const PORT = process.env.PORT || 8000;

// Create OpenAI client once
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Middleware
app.use(cors());
app.use(express.json());

// Test route
app.get("/", (req, res) => {
  res.json({ message: "🤖 OpenAI Backend is running!", status: "ready" });
});

// AI Chat endpoint
app.post("/api/chat", async (req, res) => {
  try {
    const { message } = req.body;

    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }

    const response = await openai.responses.create({
      model: "gpt-4o-mini",
      input: message,
    });

    res.json({
      response: response.output_text,
      model: "gpt-4o-mini",
      success: true,
    });
  } catch (error) {
    console.error("OpenAI Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false,
    });
  }
});

// Start server
app.listen(PORT, () => {
  console.log(`🚀 Server running on http://localhost:${PORT}`);
  console.log(`🤖 AI endpoint ready at /api/chat`);
});
```

🧪 Testing the AI Endpoint
Start your server:
```sh
npm run dev
```

Test with curl:

```sh
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Tell me a fun fact about space."}'
```

✅ Example output:

```json
{
  "response": "Did you know a day on Venus lasts longer than its year?",
  "model": "gpt-4o-mini",
  "success": true
}
```

With Postman or Insomnia:
- Method: `POST`
- URL: `http://localhost:8000/api/chat`
- Headers: `Content-Type: application/json`
- Body: `{"message": "Your question here"}`
🔧 Common Issues & Fixes
❌ “401 Unauthorized”
✅ Check your .env for the correct API key. No spaces allowed.
❌ “insufficient_quota”
✅ Add at least $5 credit to your OpenAI account.
❌ “Cannot POST /api/chat”
✅ Make sure your server is running and you’re using POST.
❌ Server crashes
✅ Check for typos in .env and confirm all dependencies are installed.
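To catch key problems before any request is made, you could fail fast at startup. This is an optional sketch with a hypothetical `checkEnv` helper; the whitespace check matches the “no spaces allowed” rule above:

```js
// Hypothetical startup check: verifies OPENAI_API_KEY is set and contains
// no whitespace (a common cause of 401 errors). Call it with process.env.
function checkEnv(env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    return { ok: false, reason: "OPENAI_API_KEY is missing" };
  }
  if (/\s/.test(key)) {
    return { ok: false, reason: "OPENAI_API_KEY contains whitespace" };
  }
  return { ok: true };
}

// Usage, right after config():
// const env = checkEnv(process.env);
// if (!env.ok) { console.error("❌", env.reason); process.exit(1); }
```

Exiting at startup with a clear reason is much easier to debug than a 401 surfacing later inside the chat endpoint.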
🎉 Congratulations! You’ve created your first AI-powered API endpoint.
Your backend can now:
- Accept user messages
- Send them to OpenAI
- Return intelligent responses
- Handle errors safely
Next: Building the Chat Interface — let’s make this talk to a frontend!