How I use AI
Over the past couple of years, LLMs have become an integral part of both my personal and professional life. Here are a few ways I use AI, and a few ways I don’t.
Reading and Writing
“You can't replace reading with other sources of information like videos, because you need to read in order to write well, and you need to write in order to think well.” - Paul Graham
When I read, I still start the traditional way. I rarely summarize books or articles with LLMs. Instead, if I want to dive deeper into a passage or topic while reading, I switch to the LLM and return to the passage when I'm done. I never want to lose my attention span or the ability to connect what I'm reading with something I read in the past. Summarization prompts can ruin a book's essence, so I avoid them.
I take the same approach with writing. For me, writing is thinking, and I’ll never delegate it to an LLM. I do all the writing; the AI helps polish phrases, not generate content. Some folks take the opposite approach and have LLMs write the vast majority of their articles; the result is AI slop. Spotting AI slop is easy, and once I do, I stop reading.
Reading and writing help me to think better, so I will never let the model take over these activities.
Financial Research
I am a value investor and rely on many mental models and data points when I pick stocks. I've written down all my thoughts, the values that matter to me, and the time horizon I care about. This PDF is an example—it contains notes from my favorite investment book, What I Learned About Investing from Darwin. I've uploaded all these notes to a ChatGPT project. Every week I run Deep Research to surface stocks that meet my criteria, with detailed reasoning. It's still early days, but I've already found some interesting companies I might invest in.
Teaching my kid
This is my favorite AI use case. I mostly use Claude Artifacts to create a personalised learning plan for my 9-year-old on any topic he's interested in. We've explored diverse subjects together—algebra, physics, geography, sports, nutrition, and much more.
Here’s the prompt I use to create content for him. I borrowed most of it from a tweet by Dwarkesh Patel.
"Remember, my son is a 9-year-old. Don't share answers; only nudge. Always make the lessons playful and visual. Create fun games when you can. He would benefit most from an explanation style in which you frequently pause to confirm, by asking him test questions, that he’s understood your explanations so far. Particularly helpful are test questions related to simple, explicit examples. When you pause and ask him a test question, do not continue the explanation until he’s answered to your satisfaction. I.e., do not keep generating the explanation—actually wait for him to respond first!”
Together, we've created a personalised learning environment that feels like play for my kid.
Health Insights
I’ve uploaded all my health reports from the past five years into ChatGPT, along with details of a typical day - when and how much I exercise, whether I get enough sun, etc. When I received my latest blood work this month, I had Deep Research look at it, correlate it with the historical reports, and suggest a plan. I printed the plan and took it to my doctor, and he agreed with almost all of it!
Next, I plan to aggregate all the health data I collect daily (from Whoop, Ultrahuman CGM, blood pressure, etc.) and have Deep Research share insights on a weekly basis.
Vibe Coding
Anytime I have an idea that I'm excited about, I vibe code a quick prototype - to visualise the idea better and have a few folks try it out. Tools like Stitch, Figma Make, Firebase Studio, even Claude Artifacts (and so many more) are great at churning out demos that are good enough to solicit feedback.
For production, I switch to real code: Claude Code (my favorite) plus a little Cursor, treating them as assistants and not putting them in charge. For now, they are in charge only when I'm in bug-fixing mode (Codex works great for that as well). I'm firmly in the camp that even in an era where AI coding agents dominate software engineering, anyone building production-grade software should first understand the fundamentals and then leverage AI to turbocharge the build-out.
Search
Search is the killer app of the Gen AI era. I've pretty much moved on from Google Search (although the new AI mode is great!). All the major AI research labs have search built in, and they are all great. I prefer searching with o3. It's slower, but it seems to do a more comprehensive job than any other model.
Knowledge garden
I have thousands of bookmarks and notes from books and articles I've read, and I use Obsidian to jot down all my ideas and fleeting thoughts. I've hooked Claude up to my entire knowledge base (via MCP, uploads, etc.). One query can pull up the Peter Drucker quote I clipped last year and the action item I wrote beneath it. This is the natural evolution of “document everything, make it searchable, and leverage it in your future work.”
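If you're curious what “hooking Claude to a knowledge base via MCP” can look like, here's a minimal sketch of one way to do it: a tiny MCP server that exposes a keyword search over a local Obsidian vault. The vault path, server name, and tool are illustrative placeholders, not my exact setup.

```python
# Minimal sketch: expose an Obsidian vault to an MCP-capable client as a search tool.
# Assumes the official Python MCP SDK (pip install mcp); paths and names are hypothetical.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

VAULT = Path.home() / "Obsidian" / "notes"  # hypothetical vault location

mcp = FastMCP("knowledge-garden")

@mcp.tool()
def search_notes(query: str, limit: int = 5) -> list[str]:
    """Return snippets from markdown notes that mention the query."""
    hits = []
    for note in VAULT.rglob("*.md"):
        text = note.read_text(encoding="utf-8", errors="ignore")
        if query.lower() in text.lower():
            hits.append(f"{note.name}: {text[:300]}")
        if len(hits) >= limit:
            break
    return hits

if __name__ == "__main__":
    mcp.run()  # serves over stdio so a desktop client can attach to it
```

An MCP-capable client such as Claude Desktop can then be pointed at this server, and the notes become queryable mid-conversation.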
Travel Concierge
Turns out, LLMs are also fantastic travel assistants. On a recent trip to Spain and Portugal, I fed the LLM my entire itinerary, details of the family members traveling with me, our dietary constraints, and the kind of holiday we wanted. o3 did a fantastic job of planning every day for us - it sequenced the days so we hit every must-see place, found great cafes (I'm obsessed with good coffee) and spots my son would love, and removed most of the planning stress.
Purchasing products
Most purchases for me now start in chat. When I want to buy an AC for my room, I begin by telling the model what I care about—sustainability, quick cooling, low noise. I share the room's size and ask it to aggregate reviews from sources I trust (Amazon, Reddit, etc.). Hours of tab-hopping shrink to minutes. I use the same approach for most products I buy - laptops, routers, shoes, and much more.
LLMs have transformed every aspect of my personal and professional life. From teaching my son algebra to analyzing my health data, from prototyping ideas to planning trips, these LLMs have become integral to how I work, learn, and live. Having said that, I'm very conscious of not being overly dependent on LLMs for everything. I don't want to lose the ability to reason through problems on my own. I will continue to use LLMs as thinking partners, not replacements for thinking. The key is to harness AI's power while preserving our capacity for independent thought.
I'll leave you all with my current AI stack. The stack will evolve as LLMs improve, but the mental model stays the same: choose the right tool for each use case and always, always maintain your agency.