How to Build a ChatGPT UI, Using Only AI
Recent advances in front-end AI tooling have significantly accelerated the path from concept to working prototype. In a conversation with my friend Jonathan M., we discussed how these tools have evolved to the point where building applications with AI is becoming remarkably fast.
To demonstrate this, I set out to build a local-first, ChatGPT-style interface with a simple prompt library, using only browser-based tools and the latest OpenAI API. The objective was to document the process and provide guidance for others who want to try it themselves. The result was a fully functional application, built and deployed in just over an hour.
It actually took longer to write this article than to build the application, which I find amazing.
While I’ve used lovable.dev extensively in recent projects, for this experiment I chose to work with bolt.new, another AI-assisted development environment optimised for rapid prototyping. Just like with lovable, it allowed me to move quickly from concept to execution, with minimal configuration and a high degree of flexibility.
The application I built:
This article outlines how it was built, including implementation details, integration challenges, and lessons learned. Hopefully, this will help others looking to explore what’s now possible with modern AI and front-end tooling.
Defining the Application
Before writing any code, I used ChatGPT to define the scope and structure of the application. The idea was: a local-first, browser-based ChatGPT interface with a persistent prompt library. But instead of starting with a requirements document (like I did in previous articles), I refined the concept through a structured Q&A conversation.
This is how the core of the application was defined:
Q: I want to build a simple ChatGPT-style chatbot interface, with chat threads on the left and a prompt library on the right. The prompt library should store a title, a label, and the prompt itself. Prompts can be re-run, edited, filtered by label, and sorted. Any questions?
A: Yes — a few.
That exchange, and the clarifying questions that followed, settled the core design decisions.
With this defined, I had the structure: a left-hand sidebar for managing threads, a central chat interface styled like ChatGPT, and a right-hand prompt library with filtering, editing, and execution functionality.
The Build Process
With the specification complete, I opened Bolt and began the implementation phase. Rather than dump the brief in, which would have kickstarted an immediate build, I first told it that I would be giving it a brief and instructed it to read it carefully. I then asked it if it had any questions.
This kicked off a short Q&A loop to clarify the requirements and confirm alignment. Once that was complete, I directed Bolt to begin the build.
The application was scaffolded using Vite with React and TypeScript, styled with TailwindCSS, with Zustand for state management and localStorage for persistence. The project was structured into clear, self-contained components.
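For context, here's a minimal sketch of what a Zustand store persisted to localStorage can look like. The shapes and names are illustrative, not Bolt's actual code:

    import { create } from 'zustand';
    import { persist } from 'zustand/middleware';

    interface Message { role: 'user' | 'assistant'; content: string }
    interface Thread { id: string; title: string; messages: Message[] }
    interface Prompt { id: string; title: string; label: string; text: string }

    interface ChatStore {
      threads: Thread[];
      currentThreadId: string | null;
      prompts: Prompt[];
      createThread: (title: string) => void;
      addPrompt: (prompt: Prompt) => void;
    }

    export const useChatStore = create<ChatStore>()(
      persist(
        (set) => ({
          threads: [],
          currentThreadId: null,
          prompts: [],
          createThread: (title) =>
            set((state) => {
              const thread: Thread = { id: crypto.randomUUID(), title, messages: [] };
              return { threads: [...state.threads, thread], currentThreadId: thread.id };
            }),
          addPrompt: (prompt) =>
            set((state) => ({ prompts: [...state.prompts, prompt] })),
        }),
        { name: 'chat-app-storage' } // the localStorage key (name is illustrative)
      )
    );

The persist middleware is what makes the app local-first: every state change is mirrored into the browser's localStorage, so threads and prompts survive a page reload without any backend.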
From this point forward, development proceeded incrementally, with each new capability implemented via short, conversational prompts.
What Worked Well
Once the initial structure was in place, the core features came together quickly. Bolt was able to interpret and implement each of the main components with minimal intervention. The following areas worked as expected, either on the first attempt or with only minor adjustments:
Thread Management
The left sidebar included all the expected thread functionality: creating new threads, renaming and deleting existing ones, and switching between conversations.
This functionality was implemented cleanly and matched the behaviour seen in typical chat interfaces.
Chat Interface
The central chat window also came together efficiently, with ChatGPT-style message bubbles and streamed assistant responses.
Prompt Library
The right-hand sidebar, which was the main differentiator for this app, was well executed: prompts could be saved with a title and label, then edited, filtered by label, sorted, and re-run.
Settings & API Integration
The settings modal allowed users to enter their own OpenAI API key and choose which model to use.
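Under the hood, local-first settings persistence only needs a few lines. A sketch with illustrative names; the key never leaves the browser:

    // Sketch of local-first settings persistence (storage key name is illustrative).
    const SETTINGS_KEY = 'chat-app-settings';

    interface Settings {
      apiKey: string;
      model: string; // e.g. 'gpt-4o-mini'
    }

    export function saveSettings(settings: Settings): void {
      localStorage.setItem(SETTINGS_KEY, JSON.stringify(settings));
    }

    export function loadSettings(): Settings | null {
      const raw = localStorage.getItem(SETTINGS_KEY);
      return raw ? (JSON.parse(raw) as Settings) : null;
    }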
What I Added or Had to Fix
While Bolt handled much of the initial scaffolding, several core features were added or refined through direct input and intervention, particularly around API integration, user-experience polish, and prompt reuse.
Prompt Creation from Messages
One addition was the ability to save any message bubble directly into the prompt library (in retrospect, I had assumed it would be built, but never actually documented it anywhere). I asked Bolt to implement the following:
“When I hover over a message bubble, I want to click ‘Add Prompt’, which copies the message content into the prompt text.”
Bolt responded by adding a hover icon to each message which, on click, copied the message content into a new prompt entry in the library.
This feature transformed past messages into a reusable library of high-quality prompts, which was the point of this demo app.
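The handler behind that icon is only a few lines. A sketch building on the store shape from earlier (names are illustrative):

    // Copies a message bubble's content into the prompt library.
    function handleAddPrompt(messageContent: string) {
      useChatStore.getState().addPrompt({
        id: crypto.randomUUID(),
        title: messageContent.slice(0, 40), // default title, editable later
        label: 'untagged',                  // assumed default label
        text: messageContent,
      });
    }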
Fixing Prompt Execution
Initially, clicking “Run” on a saved prompt inserted it into the thread, but failed to trigger an actual API request. I clarified the expected behaviour:
“When I click ‘Run’ on a saved prompt, it should also execute it - just like sending a new message.”
Bolt updated the logic so that saved prompts were inserted into the active thread and executed immediately, just like a freshly typed message.
This change made the prompt library fully interactive.
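The fix is conceptually tiny: running a saved prompt should reuse the exact same code path as sending a typed message. A sketch, with sendMessage standing in for the app's real send logic:

    // Running a saved prompt = sending its text as a new message,
    // so it both inserts into the thread and triggers the API call.
    declare function sendMessage(text: string): Promise<void>;

    async function runPrompt(prompt: { text: string }) {
      await sendMessage(prompt.text);
    }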
Upgrading to OpenAI’s /v1/responses API
One of the more significant changes was the migration from the older /v1/chat/completions endpoint to the newer /v1/responses API, which supports advanced features such as web search, structured output, and multi-turn reasoning.
Bolt couldn't access documentation on its own, so I manually copied and pasted the full API reference into our session. Once it had processed the schema, I instructed:
“Update the integration to use the new Responses API, and enable web search support.”
Bolt adapted the code accordingly, though a few issues surfaced along the way; they're covered under “What Broke” below.
Once configured, the upgraded API gave the app web search, structured output, and multi-turn reasoning.
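For reference, a minimal non-streaming call to the new endpoint looks roughly like this. It's a sketch based on the documentation I pasted into Bolt; the web-search tool type and the response parsing are the details worth re-checking against the live docs:

    // Minimal call to /v1/responses with web search enabled.
    async function askWithWebSearch(apiKey: string, question: string): Promise<string> {
      const res = await fetch('https://api.openai.com/v1/responses', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify({
          model: 'gpt-4o-mini',
          input: question,
          tools: [{ type: 'web_search_preview' }],
        }),
      });
      if (!res.ok) throw new Error(`Responses API error: ${res.status}`);

      const data = await res.json();
      // The response carries an array of output items; message items contain
      // 'output_text' content parts.
      return (data.output ?? [])
        .filter((item: any) => item.type === 'message')
        .flatMap((item: any) => item.content ?? [])
        .filter((part: any) => part.type === 'output_text')
        .map((part: any) => part.text)
        .join('');
    }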
It's worth pointing out that Bolt (and Lovable) are trained on past data but can consume new documentation and apply it effectively. This demonstrates both the flexibility of the build process and the importance of human-in-the-loop oversight when working with AI-powered development tools.
Sidebar Collapse Fix
Finally, I addressed a lingering layout issue: the left sidebar appeared to collapse visually, but continued to occupy layout space. I prompted (with a screenshot to demonstrate):
“The left sidebar collapse still results in a messed-up interface. Make it behave like the right sidebar.”
Bolt iterated on the styles and behaviour until the issue was resolved. The final result ensured that the collapsed left sidebar gave up its layout space entirely and matched the behaviour of the right.
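The shape of the eventual fix, as far as I could tell, was to animate the panel's width to zero rather than merely sliding it off-screen, since a translate-only animation leaves the original width reserved. A sketch; Bolt's final classes differed:

    // Collapsing via width + overflow-hidden releases the layout space.
    function ThreadSidebar({ collapsed }: { collapsed: boolean }) {
      return (
        <aside
          className={`h-full overflow-hidden transition-all duration-300 ${
            collapsed ? 'w-0' : 'w-64'
          }`}
        >
          {/* thread list goes here */}
        </aside>
      );
    }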
What Broke (and What I Learned)
Despite the speed of the build, a few issues inevitably surfaced. Interestingly, every major bug was introduced by Bolt, but also resolved by Bolt once the problem was pointed out. I didn’t need to write a single line of code to fix them. The key was knowing what to ask.
Infinite Thread Creation Loop
At one point, sending a message triggered an endless cascade of new threads. This turned out to be a race condition: the message was dispatched before the new currentThreadId had actually been committed to state.
After I flagged the problem, Bolt rewrote the logic to use useEffect and split message handling cleanly across state updates.
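The resulting pattern looked roughly like this: queue the message, create the thread, and let an effect fire once the new id has actually committed. It reuses the useChatStore sketch from earlier; pendingMessage and sendToThread are illustrative names:

    import { useEffect, useState } from 'react';

    // sendToThread stands in for the app's real message-sending logic.
    declare function sendToThread(threadId: string, text: string): void;

    function ChatInput() {
      const [pendingMessage, setPendingMessage] = useState<string | null>(null);
      const { currentThreadId, createThread } = useChatStore();

      function handleSend(text: string) {
        if (!currentThreadId) {
          setPendingMessage(text);         // defer: the thread id doesn't exist yet
          createThread(text.slice(0, 40)); // title from the first few words
          return;
        }
        sendToThread(currentThreadId, text);
      }

      useEffect(() => {
        // Runs after the state update commits, so the id is guaranteed current.
        if (currentThreadId && pendingMessage) {
          sendToThread(currentThreadId, pendingMessage);
          setPendingMessage(null);
        }
      }, [currentThreadId, pendingMessage]);

      return null; // input UI elided
    }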
Tailwind Version Conflict
The project was scaffolded using Tailwind CSS v4 (beta), which caused utility classes like space-x-1 to misbehave. This broke the build.
I asked Bolt to fix it, and it responded by downgrading to Tailwind v3.4 and updating the configuration files accordingly.
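In practice that meant pinning tailwindcss to the 3.x line (npm install -D tailwindcss@3.4) and restoring a v3-style config. A sketch of the latter; the generated file will differ:

    // tailwind.config.ts — minimal v3-style configuration.
    import type { Config } from 'tailwindcss';

    export default {
      content: ['./index.html', './src/**/*.{ts,tsx}'],
      theme: { extend: {} },
      plugins: [],
    } satisfies Config;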
File Search Tool Failures
When setting up the OpenAI /v1/responses endpoint integration, Bolt added both web and file search tools. File search caused errors due to missing vector_store_ids, which weren’t part of the local build.
I asked Bolt to remove file search and retain only web search. It adjusted the configuration without any further issues.
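The change itself amounted to one line in the tools array (a sketch, with tool names as given in the docs I supplied):

    // Keep only web search; file_search needs vector_store_ids, which a
    // purely local build has nowhere to source from.
    const tools = [
      { type: 'web_search_preview' },
      // { type: 'file_search', vector_store_ids: ['vs_...'] }, // removed
    ];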
Responses API Format Errors
The initial integration with OpenAI’s /v1/responses endpoint failed due to incorrect use of the input_text field. Since Bolt can’t read external documentation, I pasted the full API reference into the chat.
With the docs available, I asked Bolt to update the integration. It corrected the input format, removed unsupported fields, and handled streaming and tool responses as expected.
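For the curious, the corrected structured-input shape looked roughly like this (per the pasted reference; verify against the current docs):

    // Structured input for /v1/responses: user content parts use 'input_text'
    // (assistant text comes back as 'output_text' parts).
    const input = [
      {
        role: 'user',
        content: [{ type: 'input_text', text: 'Summarise the latest AI news.' }],
      },
    ];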
Sidebar Collapse Issues
Of all the bugs, this was the most persistent. While the right-hand sidebar (prompt library) collapsed cleanly on the first attempt, the left-hand thread panel did not. It appeared to slide out of view but continued to occupy layout space, breaking the central layout and affecting responsiveness.
I asked Bolt to make the left sidebar behave the same way as the right. It took several rounds of prompting and iteration, with changes to Tailwind classes, transitions, and overflow logic, before the behaviour was consistent across both sides.
What stood out was the asymmetry: the right-hand collapse was implemented perfectly on the first try, yet the left took repeated corrections. The root cause was never clear, which made fixing it unexpectedly painful. This is probably the most common problem when using any AI tooling at the moment.
Summary of the Process
I started by asking ChatGPT to generate a product specification. Rather than stopping there, I prompted it to ask clarifying questions. Through that back-and-forth, the brief evolved into something much tighter and more practical. It was a collaborative definition phase, not just a lazy writing exercise.
I used the same approach with Bolt. I told it I’d be supplying a specification, and invited it to ask questions if anything was unclear. It did exactly that, identifying potential ambiguities and raising assumptions that needed to be addressed. That Q&A loop became the real planning process. Only once the spec and context were clear did we proceed with the build.
And for the most part, it worked. Bolt handled the structure well, got most of the functionality right on the first pass, and resolved issues when they were pointed out.
I asked Bolt to use the OpenAI Responses API, which would enable web search. It wasn't able to search the web for documentation on its own, so I provided the docs myself. Once that was done, it implemented calls to the new endpoint relatively seamlessly.
The only real sticking point was a UI bug in the left-hand sidebar collapse, which took several iterations to fix. It was surprising that the right-hand collapse worked perfectly from the start, but the left required so much rework.
You can try the app here: 🔗 https://whimsical-salamander-f73faf.netlify.app
It runs entirely in your browser and uses local storage for persistence. You'll need to supply your own OpenAI API key, though. To do so, click the cog in the top right, which opens the settings panel. I'd also suggest defaulting to GPT-4o mini to keep costs down.
Don’t have an OpenAI API key? You can create one here - just sign in and generate a new key.
For anyone experimenting with AI-assisted development, this is the workflow to adopt: don’t just give orders. Let the AI interrogate your thinking. Invite its questions. That loop — where you define, and the system refines — is where the real leverage lives.