
🤖 Your First AI Endpoint

Now it’s time to connect your server to the OpenAI API and interact with their models.

We’ll set up an API endpoint that sends messages to OpenAI and returns responses. This is the core of every AI app you’ll build.


OpenAI offers three main ways to interact with models:

  • 💬 Chat Completions API – The classic method, still widely used.
  • 🤖 Agent SDK – For building complex AI agents (we’ll cover this later).
  • ⚡ Responses API – The newest, simplest option (our starting point).

Why start with the Responses API?
It’s clean, flexible, and gives you access to the latest features. It’s also very similar to the Agent SDK, so learning it now sets you up for more advanced work later.

Example:

// Minimal AI request
const response = await client.responses.create({
  model: "gpt-4o-mini",
  input: "Your message here",
});
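For comparison, the same one-shot request with the classic Chat Completions API looks roughly like this. This is a sketch: `client` is an `OpenAI` instance, and `askViaChatCompletions` is a hypothetical wrapper added here for illustration.

```javascript
// Sketch: the same request via the classic Chat Completions API.
// `client` is an OpenAI client instance; askViaChatCompletions is a
// hypothetical wrapper, not part of the tutorial code.
async function askViaChatCompletions(client, message) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: message }],
  });
  // The reply is nested deeper than the Responses API's flat output_text.
  return completion.choices[0].message.content;
}
```

Notice the extra nesting (`choices[0].message.content`) — one reason the Responses API's flat `output_text` is a friendlier starting point.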

🛠️ Step 1: Import OpenAI in Your Server


Open index.js and add the OpenAI import at the top:

import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai"; // 👈 New import
config();

Right after your Express app setup, create the client once (not per request):

const app = express();
const PORT = process.env.PORT || 8000;

// Create the OpenAI client once
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

app.use(cors());
app.use(express.json());

💡 Why once? Creating the client on every request wastes resources and slows down responses.


🛠️ Step 2: Create the Chat Endpoint

Here’s the code:

app.post("/api/chat", async (req, res) => {
  try {
    const { message } = req.body;
    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }

    // Call the OpenAI model with the user's message
    const response = await openai.responses.create({
      model: "gpt-4o-mini",
      input: message,
    });

    res.json({
      // output_text holds the model's text answer
      response: response.output_text,
      model: "gpt-4o-mini",
      success: true
    });
  } catch (error) {
    console.error("OpenAI Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false
    });
  }
});

🧐 What’s Happening Here — Step by Step

  1. app.post("/api/chat", ...)
    This creates a POST endpoint at /api/chat. Your frontend will send requests here whenever it needs an AI response.

  2. const { message } = req.body;
    Reads the message property from the JSON body sent by the frontend.
    Example: { "message": "Hello AI!" } → message = "Hello AI!".

  3. Validation check
    If no message is provided, the server immediately returns a 400 error:

    { "error": "Message is required" }

    This avoids unnecessary API calls (saving you money).

  4. 📡 Calling OpenAI (openai.responses.create)
    This is the exact moment the backend talks to OpenAI’s API.

    • model: The AI brain to use ("gpt-4o-mini" here for speed + low cost).
    • input: The user’s message.
      OpenAI processes this request and generates a response.
  5. 📬 Handling the OpenAI Response
    The API returns a structured object. The property .output_text contains the AI’s actual text answer.
    Example: "Hello there! How can I help you?".

  6. res.json({...}) — Sending Back to Frontend
    Packages the AI response, model name, and a success flag into JSON for the frontend to use.
    Example:

    {
      "response": "Sure! Here's a fun fact...",
      "model": "gpt-4o-mini",
      "success": true
    }

  7. Error handling with try/catch
    If anything fails (invalid key, no credits, network issues), the catch block sends a safe error JSON instead of crashing the server.
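The one-line validation in step 3 works, but it also lets through non-string values and blank strings. A slightly stricter check could look like this sketch — `validateMessage` is a hypothetical helper, not part of the tutorial code:

```javascript
// Hypothetical helper: stricter validation than the inline `if (!message)` check.
function validateMessage(message) {
  if (typeof message !== "string" || message.trim().length === 0) {
    return { ok: false, error: "Message is required" };
  }
  if (message.length > 4000) {
    // Cap input size so a huge payload can't run up your token bill.
    return { ok: false, error: "Message is too long" };
  }
  return { ok: true, message: message.trim() };
}
```

Inside the endpoint you would call it first and return a 400 with the `error` field whenever `ok` is false.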


Here’s the complete index.js:

import express from "express";
import { config } from "dotenv";
import cors from "cors";
import OpenAI from "openai";

config();

const app = express();
const PORT = process.env.PORT || 8000;

// Create the OpenAI client once
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Middleware
app.use(cors());
app.use(express.json());

// Test route
app.get("/", (req, res) => {
  res.json({
    message: "🤖 OpenAI Backend is running!",
    status: "ready"
  });
});

// AI Chat endpoint
app.post("/api/chat", async (req, res) => {
  try {
    const { message } = req.body;
    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }

    const response = await openai.responses.create({
      model: "gpt-4o-mini",
      input: message,
    });

    res.json({
      response: response.output_text,
      model: "gpt-4o-mini",
      success: true
    });
  } catch (error) {
    console.error("OpenAI Error:", error);
    res.status(500).json({
      error: "Failed to get AI response",
      success: false
    });
  }
});

// Start server
app.listen(PORT, () => {
  console.log(`🚀 Server running on http://localhost:${PORT}`);
  console.log(`🤖 AI endpoint ready at /api/chat`);
});

Start your server:

npm run dev

Test with curl:

curl -X POST http://localhost:8000/api/chat -H "Content-Type: application/json" -d '{"message": "Tell me a fun fact about space."}'

✅ Example output:

{
  "response": "Did you know a day on Venus lasts longer than its year?",
  "model": "gpt-4o-mini",
  "success": true
}

With Postman or Insomnia:

  • Method: POST
  • URL: http://localhost:8000/api/chat
  • Headers: Content-Type: application/json
  • Body: {"message": "Your question here"}
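From JavaScript (a browser page, or Node 18+ with built-in fetch), a test call could look like this sketch — `askAI` is a hypothetical helper name:

```javascript
// Sketch: calling the /api/chat endpoint from JavaScript.
// Assumes the server above is running on localhost:8000.
async function askAI(message) {
  const res = await fetch("http://localhost:8000/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  const data = await res.json(); // { response, model, success }
  return data.response;
}

// askAI("Tell me a fun fact about space.").then(console.log);
```

This is essentially what the frontend in the next chapter will do.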

🚨 Troubleshooting

“401 Unauthorized”
✅ Check your .env for the correct API key. No spaces allowed.

“insufficient_quota”
✅ Add at least $5 credit to your OpenAI account.

“Cannot POST /api/chat”
✅ Make sure your server is running and you’re using POST.

Server crashes
✅ Check for typos in .env and confirm all dependencies are installed.
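Several of these failures come down to the key never reaching `process.env`. A fail-fast check at startup catches that early — a sketch, where `assertApiKey` is a hypothetical helper:

```javascript
// Hypothetical startup guard: throw immediately if the key is missing
// or contains whitespace (the usual copy-paste mistake).
function assertApiKey(env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is missing — check your .env file");
  }
  if (/\s/.test(key)) {
    throw new Error("OPENAI_API_KEY contains whitespace — remove any spaces");
  }
}

// Call right after config() in index.js:
// assertApiKey(process.env);
```

Crashing at startup with a clear message beats a cryptic 401 on the first request.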


🎉 Congratulations! You’ve created your first AI-powered API endpoint.

Your backend can now:

  • Accept user messages
  • Send them to OpenAI
  • Return intelligent responses
  • Handle errors safely

Next: Building the Chat Interface — let’s make this talk to a frontend!