Javarevisited

A humble place to learn Java and Programming better.

How I’m Learning Machine Learning and AI

10 min read · Aug 10, 2025


credit — LLM Engineering Handbook and Decoding ML

Hello guys, I’m not writing this article as someone who has “mastered” Machine Learning and AI.

I’m writing this as someone who’s 18 months into an incredible journey that’s nowhere near finished.

Just last week, I deployed my first LLM-powered chatbot for a client. Three months ago, I didn't even know what a transformer was.

But two years ago? I thought AI was just science fiction. I knew about ChatGPT and AI tools, but I still thought it would be a while before AI went mainstream.

The wake-up call came during a team meeting when my manager said:

“We need to add some AI features to our product. Can anyone research this?”

Everyone looked around. I volunteered, thinking: “How hard can it be?”

Spoiler alert: I had no clue what I was signing up for.

That night I googled “machine learning tutorial” and got hit with calculus equations, mathematical proofs, and terminology that made my head spin.

I almost gave up.

But something inside me said: “If teenagers on YouTube can build AI projects, so can I.”

Here’s how I went from being completely intimidated by AI to building production ML models, understanding LLMs, and even getting paid to implement AI solutions (while still learning something new every single day).

1. First, I Accepted That AI is Huge and I Don’t Need to Learn Everything

Machine Learning felt like drinking from a fire hose.

People throw around terms like “neural networks”, “gradient descent”, “attention mechanisms”, “fine-tuning”, “reinforcement learning”…

Initially, it made me feel overwhelmed. But then I realized:

AI is not one skill — it’s an entire universe of interconnected fields.

You have traditional machine learning, deep learning, natural language processing and LLMs, computer vision, reinforcement learning, MLOps, and more.

The breakthrough moment came when I stopped trying to learn “everything about AI” and started asking:

“What specific problem do I want to solve?”

That changed everything.

I picked one path: building practical applications with existing tools and APIs.

2. I Started With No-Code/Low-Code AI Tools (And I’m Not Ashamed)

Here’s something most AI tutorials won’t tell you:

You don’t need to build neural networks from scratch to create valuable AI applications.

My first “AI project” was embarrassingly simple:

  • A sentiment analysis tool using Google Cloud’s Natural Language API
  • Built with Python requests and a simple Flask web interface
  • Took me 3 hours, including learning the API documentation
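Here's roughly what that first version looked like. This is a minimal sketch, not the exact code: it assumes a Google Cloud API key in the GOOGLE_API_KEY environment variable and calls the public analyzeSentiment REST endpoint.

```python
# sentiment_app.py - minimal sketch of that first "AI project"
# Assumes a Google Cloud API key is set in the GOOGLE_API_KEY environment variable.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
API_URL = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def analyze_sentiment(text: str) -> dict:
    """Call Google Cloud's Natural Language API and return the document sentiment."""
    response = requests.post(
        API_URL,
        params={"key": os.environ["GOOGLE_API_KEY"]},
        json={"document": {"type": "PLAIN_TEXT", "content": text}},
        timeout=10,
    )
    response.raise_for_status()
    # The API returns a score (-1.0 to 1.0) and a magnitude (overall strength)
    return response.json()["documentSentiment"]

@app.route("/sentiment", methods=["POST"])
def sentiment():
    text = request.get_json().get("text", "")
    return jsonify(analyze_sentiment(text))

if __name__ == "__main__":
    app.run(debug=True)
```

POST some text to /sentiment and you get a score and magnitude back; that was the whole project.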

But you know what? It worked. It solved a real problem. My team was impressed.

That small win gave me confidence to keep going.

I kept exploring other cloud AI APIs and low-code tools in the same spirit.

No advanced math. No PhD required. Just practical problem-solving.

3. I Built My Foundation While Building Projects

Instead of spending months on theory, I learned concepts as I needed them.

Here’s how I structured my learning:

a) Python and Data Fundamentals

Books that actually helped:

  • Pandas Cookbook by Theodore Petrou — essential for data manipulation

Online courses I actually finished:

  • Kaggle Learn micro-courses — free, focused, and immediately applicable

b) Statistics and Math (Just Enough)

I realized I needed some foundation, but not a PhD level.

Resources that clicked:

c) Machine Learning Concepts

The book that changed everything:

Online platforms I found valuable:

  • Fast.ai — practical deep learning for coders (they have both free courses and paid advanced content)
  • DeepLearning.AI courses on Coursera — Andrew Ng explains complex concepts simply
  • Machine Learning Mastery blog by Jason Brownlee — step-by-step tutorials

4. I Started Following AI Practitioners, Not Just Academics

Theory is important, but I needed to see how people actually build and deploy AI systems.

YouTube channels that became my university:

  • Sentdex — practical Python and ML tutorials
  • Two Minute Papers — keeps me updated on latest AI research
  • Lex Fridman — deep conversations with AI researchers and practitioners
  • AI Coffee Break with Letitia — explains complex papers in simple terms

Newsletters and blogs I read religiously:

  • The Batch by deeplearning.ai — weekly AI news
  • Import AI by Jack Clark — technical but accessible
  • Towards Data Science on Medium — community-driven insights

Twitter/X accounts worth following:

  • @karpathy (Andrej Karpathy) — founding member of OpenAI and former Director of AI at Tesla
  • @jeremyphoward (Jeremy Howard) — Fast.ai founder
  • @fchollet (François Chollet) — creator of Keras

Following these people helped me understand not just the “what” but the “why” behind AI decisions.

5. I Embraced the Era of Large Language Models (And It’s Wild)

18 months ago, GPT-3 was the coolest thing ever.

6 months ago, ChatGPT changed everything.

3 months ago, GPT-4 and Claude made my previous projects look primitive.

Last week, I was catching up on GPT-4o and Claude 3.5 Sonnet.

This field moves incredibly fast.

Here’s how I’m staying current with LLMs:

a) Understanding the Fundamentals

Essential resources:

  • “The Illustrated Transformer” blog post by Jay Alammar — explains attention mechanisms visually
  • Hugging Face NLP Course — free, comprehensive, hands-on
  • AI Engineering by Chip Huyen
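To make the attention idea concrete, here is a toy numpy sketch of scaled dot-product attention, the operation those resources build up to. It is illustrative only: real transformers add learned projections, multiple heads, and masking.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to every key
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of the values

# 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```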

b) Hands-on LLM Development

Platforms I’m actively using:

  • OpenAI API — for production applications
  • Anthropic Claude API — for complex reasoning tasks
  • Hugging Face Hub — for open-source models
  • LangChain — for building LLM applications
  • Pinecone or Weaviate — for vector databases and RAG systems

Current project: Building a document analysis system that combines LLMs with vector search.

6. I’m Learning by Solving Real Problems (Not Just Tutorials)

The breakthrough came when I stopped following tutorials and started building solutions for actual problems.

Projects that taught me the most:

Project 1: Customer Review Analyzer

  • Problem: Help e-commerce client understand customer sentiment
  • What I learned: Text preprocessing, sentiment analysis, data visualization
  • Tools used: Python, pandas, VADER sentiment, Streamlit
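Here is a simplified sketch of the core of that analyzer. The CSV file and column names are made up for illustration; the real project added data cleaning and Streamlit charts on top.

```python
# Simplified sketch of the review analyzer (hypothetical file and column names)
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

df = pd.read_csv("reviews.csv")  # assumed to have a "review_text" column

# VADER's compound score ranges from -1 (very negative) to +1 (very positive)
df["compound"] = df["review_text"].apply(
    lambda text: analyzer.polarity_scores(str(text))["compound"]
)
df["sentiment"] = pd.cut(
    df["compound"],
    bins=[-1.0, -0.05, 0.05, 1.0],
    labels=["negative", "neutral", "positive"],
    include_lowest=True,
)

print(df["sentiment"].value_counts())
```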

Project 2: Content Generation Assistant

  • Problem: Marketing team needed help with social media posts
  • What I learned: Prompt engineering, API integration, content moderation
  • Tools used: OpenAI API, Flask, prompt templates
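At its heart, it was just a prompt template wrapped around an API call. A minimal sketch with the OpenAI Python SDK follows; the model name and template are placeholders, not the client's actual prompts.

```python
# Sketch of a prompt-template-based content helper (placeholder model and prompt)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POST_TEMPLATE = (
    "You are a social media copywriter for {brand}.\n"
    "Write a {tone} post (max 280 characters) announcing: {announcement}\n"
    "Include one relevant hashtag."
)

def draft_post(brand: str, tone: str, announcement: str) -> str:
    prompt = POST_TEMPLATE.format(brand=brand, tone=tone, announcement=announcement)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(draft_post("Acme Coffee", "playful", "our new cold brew line"))
```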

Project 3: Document Q&A System

  • Problem: Company needed to query their internal documentation
  • What I learned: Vector embeddings, RAG architecture, semantic search
  • Tools used: LangChain, ChromaDB, OpenAI embeddings, Streamlit
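The RAG pattern behind it is simpler than it sounds: embed the documents, retrieve the chunks closest to the question, and let the LLM answer only from those chunks. Here is a stripped-down sketch using ChromaDB and the OpenAI SDK directly; the real project wrapped this in LangChain and Streamlit and used OpenAI embeddings, and the documents and model name below are placeholders.

```python
# Bare-bones RAG sketch: ChromaDB for retrieval + an LLM for the answer
import chromadb
from openai import OpenAI

client = OpenAI()
chroma = chromadb.Client()
collection = chroma.create_collection("internal_docs")

# 1. Index: Chroma embeds these documents with its default embedding model
#    (the real project used OpenAI embeddings via LangChain)
docs = [
    "Refunds are processed within 5 business days.",
    "Our VPN must be enabled before accessing internal dashboards.",
]
collection.add(documents=docs, ids=[f"doc-{i}" for i in range(len(docs))])

def answer(question: str) -> str:
    # 2. Retrieve: pull the chunks most similar to the question
    results = collection.query(query_texts=[question], n_results=2)
    context = "\n".join(results["documents"][0])

    # 3. Generate: ask the LLM to answer only from the retrieved context
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```

Swapping the in-memory Chroma client for a hosted vector database like Pinecone or Weaviate changes the retrieval step, but the overall pattern stays the same.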

Each project forced me to learn new concepts, debug real issues, and think about deployment, costs, and user experience.

7. I’m Building My AI Learning System

Since AI moves so fast, I’ve had to develop a system for continuous learning:

a) Daily Learning Routine

  • 15 minutes: Reading AI newsletters over coffee
  • 30 minutes: Working on current project or following a tutorial
  • 10 minutes: Checking Twitter/LinkedIn for AI news and discussions

b) Weekly Deep Dives

  • Saturday mornings: Longer tutorial or new tool exploration
  • Sunday evenings: Writing about what I learned, sharing insights

c) Monthly Experiments

  • Try one completely new AI tool or technique
  • Build a small project around it
  • Document lessons learned

d) Learning Resources I’m Currently Using

Books on my reading list:

Online courses I’m taking:

8. I’m Teaching and Sharing What I Learn

Even though I’m still learning, I’ve found that teaching others helps solidify my own understanding.

How I share my learning:

  • Writing technical blog posts about projects
  • Creating simple tutorials for colleagues
  • Speaking at local developer meetups
  • Contributing to open-source AI projects

Every time I explain transformer attention or RAG architecture, I discover gaps in my understanding.

If I can’t explain it simply, I don’t understand it well enough.

My Honest Advice for Your AI Journey

If you’re starting today, or if you’re feeling overwhelmed by how fast AI is moving — here’s what I wish someone had told me:

AI is not magic, but it is moving incredibly fast.

You don’t need a PhD in machine learning.

You don’t need to understand every algorithm.

You just need to:

  • Start with practical applications, not theory
  • Pick one area to focus on initially (LLMs, computer vision, traditional ML)
  • Build projects that solve real problems
  • Stay curious and embrace continuous learning
  • Don’t try to keep up with everything (you’ll burn out)
  • Focus on fundamentals (they don’t change as fast as the tools)

Even at 45 minutes a day, you'll see significant progress within 3–6 months.

The Reality: I’m Still Learning Every Day

Here’s what I’m currently working on:

  • Understanding RAG systems better (retrieval-augmented generation)
  • Learning about AI agents and tool use
  • Exploring fine-tuning techniques for specific domains
  • Building production-ready AI applications (monitoring, scaling, costs)

Next month, there will probably be three new AI tools that change everything again.

And that’s exciting, not overwhelming.

Final Thought: The Journey is the Destination

In AI, you’ll never feel like you “know everything.” The field is evolving too quickly.

What matters is:

  • Can you solve real problems with AI tools?
  • Do you understand the fundamentals well enough to adapt?
  • Are you building useful applications?
  • Can you learn new tools and techniques as they emerge?

That’s what makes you valuable in the AI era.

Not having memorized every algorithm, but being able to learn, adapt, and build.

Start Your AI Journey Today

Begin with a simple project using existing APIs.

Stay curious about new developments, but don’t chase every shiny new tool.

Build something real. Share what you learn.

You’ll be amazed at how much you can accomplish — and how much there still is to discover.

One model at a time. One project at a time. One day at a time.

The future is being built by people who are learning AI, not just by people who already know it.

If you found this helpful, thank you for reading ❤️

I share more about my AI learning journey, project breakdowns, and practical tutorials. The field moves fast, but we can learn together.

Other AI and Cloud Computing Resources you may like

Thanks for reading this article so far. If you found these resources for learning Machine Learning and AI from scratch useful, including the tools and libraries, then please share this article with your friends and colleagues. If you have any questions or feedback, please drop a note.

P.S. — If you want to start with books, begin with AI Engineering by Chip Huyen and The LLM Engineering Handbook by Paul Iusztin and Maxime Labonne. Both are great books and personal favorites of mine, and they are also highly recommended on Reddit and HN.


Published in Javarevisited

A humble place to learn Java and Programming better.


Written by javinpaul

I am a Java programmer and blogger working on Java, J2EE, UNIX, and the FIX Protocol. I share Java tips on http://javarevisited.blogspot.com and http://java67.com
