If you're a CTO, technical co-founder, or engineering lead evaluating whether to integrate the ChatGPT App SDK into your product, this guide covers the practical realities: what the architecture looks like, what your team needs to build, how it connects to your existing data, and what a production deployment involves.
The architecture at a glance
A ChatGPT App SDK integration has three main components:
- Your hosted web application — a standard web app (React, Next.js, or any framework) that renders the UI your customers see inside ChatGPT. It's hosted on your infrastructure, deployed like any other web app.
- The App SDK client library — installed via npm, this handles the communication protocol between your application and the ChatGPT runtime. It manages the handshake, the context bridge, and the event system.
- Your OpenAI App SDK configuration — registered through OpenAI's developer platform, this defines the entry point for your experience, the permissions your app requests, and the metadata shown to users in ChatGPT's app discovery.
What your application needs to handle
The App SDK client fires events to your application as the conversation progresses. Your application listens for these events and updates its UI accordingly. The most important events to handle are:
- Initialisation — your app receives the initial context (user locale, session ID, any passed parameters) and renders its welcome state
- Intent signals — as the conversation progresses, ChatGPT signals the user's intent to your application ("user wants to see product options", "user is ready to book", "user wants to modify their selection")
- Action confirmations — when the user confirms an action (placing an order, booking a slot, submitting an enquiry), your application sends the outcome back through the SDK and triggers your backend logic
Connecting to your existing data
One of the most common questions from technical teams is: "does the App SDK need to replace our existing product catalogue / booking system / CRM?" The answer is no — your existing systems stay exactly as they are. The App SDK application acts as a presentation layer that calls your existing APIs.
For example, if you run a restaurant booking system on ResDiary or OpenTable, your App SDK application calls their API to check availability and create bookings. If you have a product catalogue in Shopify, your App SDK application calls the Shopify Storefront API to fetch products and create orders. The App SDK is the conversation interface; your existing systems are the data source.
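As a sketch of the presentation-layer pattern, here is roughly what a Storefront API call looks like from the App SDK application. The shop domain, access token, and API version below are placeholder assumptions; the request-building is separated from the network call so the shape is easy to see:

```typescript
// The App SDK UI calls your existing Shopify Storefront API rather than
// owning any product data itself. Domain, token, and API version are
// placeholders — substitute your own values.
interface StorefrontRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildProductQuery(
  shopDomain: string,
  token: string,
  first: number
): StorefrontRequest {
  const query = `{
    products(first: ${first}) {
      edges { node { id title } }
    }
  }`;
  return {
    url: `https://${shopDomain}/api/2024-01/graphql.json`,
    headers: {
      "Content-Type": "application/json",
      "X-Shopify-Storefront-Access-Token": token,
    },
    body: JSON.stringify({ query }),
  };
}

// In the App SDK app you would then POST this with fetch(req.url, { ... }).
const req = buildProductQuery("example.myshopify.com", "placeholder-token", 5);
```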
This means integration is typically less invasive than teams expect. You're building a new front-end channel — similar in scope to building a mobile app — not re-platforming your backend.
Authentication and user identity
If your integration requires a logged-in user (e.g. the customer placing an order needs to be identified for delivery, or a patient booking needs to match a record), you have two options:
- Session-based flows — the App SDK experience creates a session token at the start of the interaction, which your backend associates with the completed transaction. The customer then receives an email confirmation with a link to their account. No login required during the conversation.
- OAuth handoff — for tighter integration with user accounts, you can initiate an OAuth flow from within the App SDK experience. The customer authenticates with your system, and the experience continues in the authenticated context.
For most B2C use cases, the session-based approach is lower friction and converts better. OAuth is more appropriate for SaaS products where the customer already has an account and the integration is about surfacing account-specific data.
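A minimal sketch of the session-based flow, assuming an in-memory store and illustrative names (a real backend would persist sessions and enforce expiry): the backend issues an anonymous token at the start of the conversation, then associates the completed transaction with it.

```typescript
import { randomUUID } from "node:crypto";

interface Transaction {
  orderId: string;
  email: string; // collected during the conversation, used for confirmation
}

// Illustrative in-memory store; use a database with TTLs in production.
const sessions = new Map<string, Transaction | null>();

function startSession(): string {
  const token = randomUUID();
  sessions.set(token, null); // session exists, no transaction yet
  return token;
}

function completeTransaction(token: string, tx: Transaction): boolean {
  if (!sessions.has(token)) return false; // unknown or expired session
  sessions.set(token, tx);
  // Here you would send the confirmation email with an account link.
  return true;
}

const token = startSession();
completeTransaction(token, {
  orderId: "ord_001",
  email: "customer@example.com",
});
```

Because the token is created before the user is identified, the conversation never stalls on a login screen — identity attaches at the end, via the email confirmation.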
What the development process looks like
For a team building their first App SDK integration, the typical development path is:
- Discovery (1 week) — map the user journey you want to support, define the UI components needed, confirm the API endpoints your application will call, agree on the data contracts
- Scaffold and SDK setup (2–3 days) — initialise the Next.js project, install the App SDK client library, configure the OpenAI developer console registration, set up local development with DevMode
- Core journey build (2–3 weeks) — implement the UI components, wire up the SDK event handlers, connect to your backend APIs, handle the happy path end-to-end
- Edge cases and polish (1 week) — error states, loading states, empty states, mobile layout, accessibility
- Deployment and testing (3–5 days) — deploy to Vercel (or your chosen host), configure the production App SDK registration, test on live ChatGPT with real users
Total for an experienced team: 6–8 weeks. For teams new to the App SDK, add 2–3 weeks for the learning curve on the SDK event model and DevMode setup.
Testing with App SDK DevMode
OpenAI provides a DevMode for the App SDK that lets you test your experience locally without publishing it publicly. In DevMode, you load a local URL into the ChatGPT interface by pressing a keyboard shortcut and entering your localhost address. This gives you full hot-reload development capability — you can iterate on your UI in real time against the live ChatGPT runtime.
DevMode requires a ChatGPT Plus subscription. For teams without existing Plus subscriptions, budget for this as part of your development setup.
Production deployment considerations
Your App SDK application is a standard web application and deploys like any other. The key requirements for production are:
- HTTPS — the ChatGPT runtime will only load your application over HTTPS. Vercel, Netlify, and most modern hosting providers handle this automatically.
- CORS configuration — your application needs to allow requests from the ChatGPT origin. This is straightforward to configure but easy to miss.
- Performance — the App SDK experience loads inside ChatGPT, so slow load times are directly visible to users. Aim for an initial load under one second. Next.js with static generation is ideal.
- Error handling — the App SDK experience should degrade gracefully if your backend is unavailable. A clear "something went wrong, try again" state is better than a broken experience.
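The CORS requirement above can be sketched as a small origin check. The ChatGPT origins listed here are an assumption — confirm the current values against OpenAI's documentation before shipping:

```typescript
// Assumed ChatGPT origins; verify against OpenAI's current documentation.
const ALLOWED_ORIGINS = new Set([
  "https://chatgpt.com",
  "https://chat.openai.com",
]);

function corsHeaders(requestOrigin: string | null): Record<string, string> {
  if (requestOrigin && ALLOWED_ORIGINS.has(requestOrigin)) {
    return {
      "Access-Control-Allow-Origin": requestOrigin,
      Vary: "Origin", // caches must key on Origin when it is echoed back
    };
  }
  return {}; // no CORS headers for disallowed origins
}
```

Echoing the specific origin back (with `Vary: Origin`) rather than sending a wildcard keeps the door open for credentialed requests later, which a wildcard would rule out.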
What does it cost to run?
Running costs are modest. Your application is a standard web app — hosting on Vercel's free or pro tier covers most use cases. The App SDK itself has no per-request cost beyond the ChatGPT conversation tokens, which your users are already paying for as part of their ChatGPT Plus subscription.
The only additional cost to model is if your backend APIs have per-request pricing — for example, if you're calling an external booking API that charges per transaction. This is the same cost you'd incur through any other integration channel.
Getting help
The App SDK is still relatively new and the available documentation, while improving, assumes a high level of familiarity with the ChatGPT platform. Teams that are new to the SDK typically find the first two weeks the steepest — the DevMode setup, the event model, and the ChatGPT registration process all have sharp edges.
If you want to move faster and with more confidence, a discovery session with an experienced App SDK developer can shortcut this significantly. We've built multiple integrations on the platform and can review your proposed architecture, identify the likely pain points, and give you a clear path to production.
Book a discovery session to get started, or see our fixed-fee packages if you're ready to commission a full build.