From Sardinia to the App Store: Building an AI-Powered App (Second in the “Entrepreneurial PM” Series)
Forget another generic “how-to” article. This is the raw, unfiltered story of how an idea can become a live product despite setbacks.
Picture a turquoise beach in Sardinia — not where you’d expect a lightbulb moment. Yet that’s precisely where Abstract Daily was born: AI-powered summaries of the latest research in various fields.
This is the second article in my “Entrepreneurial PM” series (you can find the first one here if you missed it). I will tell you how this whole thing went from a random thought to a product launch and all the crazy stuff that almost stopped it from happening.
The Inspiration
So, the idea hit me when I was on vacation around September 2024. I was experimenting with ChatGPT and other AI models when I noticed that these tools tend to favor information that confirms what they already “know.” Mainly because of how they’re built and the information they’re trained on. Even the latest “deep research” models follow a similar structure:
For someone who regularly dives deep into research papers, I immediately spotted an opportunity. Combining this insight with my personal goal to refresh my coding skills and dive head-first into the AI-agent hype, I envisioned Abstract Daily — a mobile app offering AI-generated research summaries that would operate in the following way:
Why was I brainstorming this while vacationing in Sardinia? That is the million-dollar question.
Early Market Research
I did a quick market scan and noted that while web-based AI tools like Elicit and Consensus were already established, mobile solutions remained relatively unexplored. Honestly, I didn’t validate the product deeply: competitive analysis and technical feasibility assessments were done, but from a product-management lifecycle perspective, much more should have happened before the first line of code. Given my personal learning goal, I was fine with that.
Building a Minimal Lovable Product (MLP)
I planned to start small and build a basic version of the app first — AKA a walking skeleton. This way, I could get it in front of people quickly, hear what they thought, and fill in the gaps from the initial research I hadn’t fully explored yet.
My main goal with this early version was to figure out the core problem Abstract Daily solves for people (the “job to be done”) and what makes it different from other apps (its “unique selling point”). I’d worry about how to make money from it, how to grow it, and how to make it handle more users later.
Honestly, a big part of this was also for me to learn and grow by actually launching something. When it came to choosing the technology, I mostly went with what I already knew from my work at www.w3lcome.com. For the AI agent stuff, I picked the tools with the most learning materials available to teach myself along the way.
For more details on how I went from idea to live product, I created a “geek section” below. For everyone else, the important thing is that in a few weeks, the application was almost ready, but…
Setback 1: The Knee Surgery
I started building the product plan around late September. In total, it took me about five weeks of work to reach the MLP:
One week on product/architecture: research, user flows, diagrams, JTBD, and opportunity mapping.
Three weeks on coding: primarily Swift and Firebase functions (with a solid one week alone just for setup and config).
Just as I was about to launch, I found myself facing ACL reconstruction surgery and meniscus repair, setting me back another month and a half.
Setback 2: The Apple Saga
When I finally got back on my feet — literally — I tried to release the app on the App Store. Suddenly, Apple kept rejecting my submission without a clear reason. The back-and-forth with Apple engineering support lasted more than a month. Eventually, with a friend’s help, I managed to get approved.
By then, I had lost close to three months to these setbacks. To top it off, I still had a full-time PM job with a travel schedule that included trips to Southeast Asia. The time-to-market for Abstract Daily stretched to about 5–6 months — far from ideal, but I cut myself some slack, considering I had bigger fish to fry.
Abstract Daily: Live at Last!
Today, Abstract Daily is finally live. Friends and family are using it, and now that key tracking features feeding the AI’s learning loop are in place, it’s ready for broader testing.
Here’s what I’ve learned launching an end-to-end AI product:
End-to-end (E2E) Products Are More Than Code: Design, UX, ASO, tracking, and translations take significant effort. I always knew that, but it feels different when you have to do everything yourself.
Data is Gold: The more data the agent has, the smarter it becomes. Small user bases mean limited insights to fine-tune the model. Scaling is not just about attracting more users but also about gathering enough data for personalization and accuracy.
Success Metrics Are Tough: Measuring the AI’s effectiveness isn’t straightforward — accuracy, speed, and impact all matter. If the AI is supposed to solve a user’s “job to be done,” how do we measure success? This is harder than it sounds and probably THE most important question for those working with AI agents. Here is a good article on this topic.
With the product live, I’m leaning into the bullseye methodology from the book Traction to find the best distribution channels.
GEEKS’ CORNER: The Technical Bits
Skip to the conclusion if you’re not into tech.
On Preparing for LLM-Assisted Coding
If you want LLMs to code effectively with you, the magic happens long before writing any prompts. It starts with deeply understanding your users — mapping their Jobs-to-be-Done (JTBD), carefully outlining user flows, and crafting precise architectural diagrams. You also need explicit data structures, API schemas, acceptance criteria, and clearly defined success metrics. In short, the more accurate your vision, the better your results from the LLM.
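As a concrete illustration of what “explicit data structures” can mean in practice, here is a minimal sketch of the kind of typed record you might pin down before prompting an LLM. The field names here are hypothetical stand-ins, not Abstract Daily’s actual schema:

```python
# Illustrative data contract to hand an LLM before asking it to write
# code against it. All field names are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class PaperSummary:
    paper_id: str            # e.g. an identifier from a papers API
    title: str
    field_of_study: str
    summary: str             # the AI-generated abstract digest
    source_url: str
    keywords: list[str] = field(default_factory=list)
```

Handing the LLM a contract like this (instead of prose) keeps generated frontend, backend, and agent code agreeing on the same shape.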
For Abstract Daily, here’s the stack and approach:
Frontend: SwiftUI, UIKit, MVVM, Firebase SDK
Backend & AI Agents: Firebase Cloud Functions (Python), Firebase Firestore, Firebase Messaging, Semantic Scholar API, OpenAI API
Product Management & Research: ChatGPT + Gemini + Claude.ai for ideation, research, and defining success metrics; Miro AI for diagrams; Notion AI for planning and docs.
Development & Debugging: Gemini + Claude + ChatGPT for coding support and debugging
Visual: DALL-E 3/ChatGPT for visual assets
Architectural Setup & Agents
I will focus mainly on the architectural setup and the agents. For starters, I tested some of the no-code AI agent builders; they are great for optimizing or automating tasks at a personal level (a repetitive task at work or personal budgeting, for example) but not ideal for a live product.
On the AI side, the whole thing started as nothing more than fancy workflows but has evolved over the last few weeks. Let’s quickly break down a few concepts:
Workflows → A sequence of steps or tasks that transforms an input into a specific output (as simple as that).
Chatbots → A slightly more advanced version of a workflow. They can have some back-and-forth but typically solve one problem at a time (I see chatbots as workflows on steroids).
AI Agents → Specialized systems designed for specific tasks, often working more independently within those tasks. They have some built-in intelligence to follow instructions without constant help.
Agentic AI → A broader idea where systems can perform tasks, adapt, learn, and make decisions within a particular area to reach a goal. They are more proactive and can figure things out as they go.
Autonomous AI → Systems with the ability to operate independently across open-ended challenges.
The progression from Workflow to Autonomous AI is fuzzy; I think of it as progressively adding more independence, adaptability, and decision-making power to the system.
Remember that even among the giants of the current AI world, defining these terms is still a work in progress.
OpenAI published a blog post that defined agents as “automated systems that can independently accomplish tasks on behalf of users.” AI lab Anthropic says that agents “can be defined in several ways,” including “fully autonomous systems that operate independently over extended periods.” Salesforce has a lot of content around the topic.
I like the definitions above, and Abstract Daily’s agents range from fancy workflows to basic agentic AI, depending on the request.
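To make the workflow-vs-agent distinction above concrete, here is a toy Python sketch. The fetch/summarize/notify steps are hypothetical stand-ins, not the app’s real functions: the workflow is a fixed sequence, while the agent inspects its state and picks its own next step until the goal is reached.

```python
# Toy illustration of "workflow" vs "agent". The three steps below
# are hypothetical stand-ins for real pipeline stages.

def fetch(topic: str) -> str:
    return f"papers about {topic}"

def summarize(text: str) -> str:
    return f"summary of {text}"

def notify(summary: str) -> str:
    return f"push: {summary}"

# Workflow: a fixed sequence -- input in, output out, no decisions.
def workflow(topic: str) -> str:
    return notify(summarize(fetch(topic)))

# Agent: a loop that inspects state and chooses the next tool itself
# until the goal is reached (here, a toy "goal test" on the string).
def agent(topic: str, max_steps: int = 5) -> str:
    state = topic
    for _ in range(max_steps):
        if state.startswith("push:"):       # goal reached
            return state
        elif state.startswith("summary"):   # decide: deliver it
            state = notify(state)
        elif state.startswith("papers"):    # decide: condense it
            state = summarize(state)
        else:                               # decide: gather data
            state = fetch(state)
    return state
```

Both produce the same output here, but the agent version leaves room for branching, retries, and tool choice, which is where the added independence comes from.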
As a picture is worth a thousand words, here it is:
More on the Approach & Learnings
With the logic and flow above, at some point I got stuck: larger tasks took around 9 minutes to finish — too slow for users. After revisiting some content on parallel computing (college finally paid off), I broke the work into five parallel Cloud Functions, slashing processing time from 9 minutes to 38 seconds.
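The fan-out idea behind that speedup can be sketched locally. In this sketch, `ThreadPoolExecutor` stands in for the five parallel Cloud Functions, and `process_chunk` is a hypothetical worker, not the app’s real code:

```python
# Sketch of the fan-out behind the 9-min -> 38-s improvement: split
# one large batch into N independent chunks and process them in
# parallel. ThreadPoolExecutor stands in for parallel Cloud Functions.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(papers: list[str]) -> list[str]:
    # Stand-in for one worker summarizing its slice of the batch.
    return [f"summary:{p}" for p in papers]

def fan_out(papers: list[str], workers: int = 5) -> list[str]:
    # Split the batch into `workers` roughly equal chunks.
    chunks = [papers[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(process_chunk, chunks)
    # Flatten; result order interleaves because chunks run independently.
    return [s for chunk in results for s in chunk]
```

The same pattern applies server-side: each chunk becomes an independent function invocation, so wall-clock time drops to roughly the slowest chunk rather than the sum of all of them.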
Cost-wise, it’s been surprisingly cheap — around $0.33 for our modest start. But scaling isn’t just about costs: the real challenge is getting enough quality user data to improve the AI continuously. The better the data, the smarter the agents become, and the closer we get to providing genuinely valuable insights.
More modern models would significantly increase the quality of the aggregator and writer agents. However, costs grow steeply and hit the thousands of dollars very fast. This is a sensible next step if the product scales and delivers real value to users willing to pay (i.e., once there is product-market fit).
For anyone interested in coding, I found a setup that worked for me after some trial and error: using Gemini, ChatGPT, and Claude (all the latest versions) together. I had three separate “digital workspaces,” one in each. One was focused on how the app looks and feels (UI/UX), another on the app’s internal logic (MVVM), and the last on the server-side stuff.
I enjoyed coding the main application logic and the Firebase functions (the backend) the most, so I handled that myself. But for setting up the basic structure and “moving boxes” around on the screen, the LLMs were a huge help and did a lot of the heavy lifting. In the end, roughly 35% of the code was mine and 65% was generated by the LLMs.
Debugging can be tricky, and I still think it’s essential for a developer to be able to handle that. But honestly, the LLMs have gotten so much better at debugging lately — they’re surprisingly good at it! They are also pretty good at the initial setup and configuration of things like IDEs and containers, which I really dislike doing.
The key takeaway here, especially for entrepreneurial product managers, is that if you do a great job defining the problem and the solution, spend time on research, and create clear plans, diagrams, and artifacts, the coding part becomes much smoother. Give the LLMs the smallest possible problem to solve (divide-and-conquer), not only with coding but UX, marketing, etc.
Next Steps & Future Plans
User Acquisition & Feedback: Acquiring enough users for extensive feedback rounds, user interviews, usability tests, and data-driven product refinement toward validating the JTBD and improving the product where necessary — or even pivoting or dropping the product altogether.
Deeper Multi-Agent Collaboration: Increase the quality of the agents without relying on more expensive models by expanding from isolated Cloud Functions to a dynamic agent network using LangChain. I’d love to evolve our agents into a more collaborative network that lets different agents talk with each other about a common task. We’ve built a solid foundation, and this feels like the natural next step to level up the app (and my learning).
Refine AI Learning Loop: Focus on understanding if our AI is truly delivering value. Right now, we’re tracking key metrics like click-through rates and user engagement. We need to rigorously validate if these are the right metrics to ensure the AI is learning effectively and the quality of the aggregation and summaries keeps improving. This is a tough one.
Monetization & Growth: Leverage Traction’s bullseye method to explore distribution channels, aiming for 5K daily active users with solid retention.
Conclusion
It's been quite a journey: from a Sardinian beach to a hospital bed to Apple’s approval. This 5–6 month journey could have taken just a month with fewer obstacles, but I don’t regret the detours. Abstract Daily is live now — a reminder that shipping a product is never just about the code.
Building an end-to-end AI product taught me that success hinges on more than algorithmic prowess. It is challenging to wear all the hats simultaneously (Founder, Researcher, PM, UX Designer, FE Dev, BE Dev, AI Dev, cost management, marketing/ASO, etc.). It's even harder when you already have a full-time job as a PM. But the learnings are worth it, and I have already incorporated some of them into my main job.
Entrepreneurial product management is all about solving real problems — and sometimes, the biggest hurdle is just getting your idea off the beach and into the wild.
My ask: If you’re as curious about the AI world as I am, or simply enjoy tinkering with new tech, I’d love your feedback. Try Abstract Daily and share your thoughts; they are more than welcome!
Where to Find Abstract Daily?
App Store Link: here
#ProductManagement #CareerDevelopment #Innovation #Leadership #Entrepreneurship #AIAgents #AI