πŸš€ Hugo AI Studio

Create beautiful websites with AI through a simple chat interface

Hugo AI Studio is a modern web application that lets users create complete websites by simply describing what they want in natural language. The AI analyzes the description and generates a fully functional Hugo website with custom content.

✨ Features

  • πŸ’¬ Simple Chat Interface - Just describe the website you want
  • πŸ€– AI-Powered Creation - Uses an Ollama LLM to understand your request and generate the site
  • 🌐 Instant Preview - See your website immediately
  • πŸ“₯ Download Ready - Get the complete website files as a ZIP
  • πŸ’Ύ Database Storage - All websites are stored persistently in SQLite
  • 🐳 Docker Containerized - Easy deployment and scaling

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   React Chat    β”‚    β”‚  FastAPI + AI   β”‚    β”‚   Ollama LLM    β”‚
β”‚   Port: 3001    │◄──►│   Port: 8000    │◄──►│  Port: 11434    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚                       β”‚                       β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                 β–Ό
         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚  SQLite DB      β”‚    β”‚     Nginx       β”‚
         β”‚  (Persistent)   β”‚    β”‚   Port: 8080    β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
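
The FastAPI backend talks to the Ollama service over its HTTP API. Below is a minimal sketch of that call, assuming Ollama's standard /api/generate endpoint and the llama3.2 model; the real prompt handling lives in backend/main.py:

import requests

def generate_site_content(description: str) -> str:
    # Ask the local Ollama service to draft content for the described site.
    # Port 11434 is Ollama's default (see the diagram above); inside Docker
    # the backend would address the service by its compose name instead.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2",
            "prompt": f"Write Hugo markdown content for this website: {description}",
            "stream": False,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]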

πŸš€ Quick Start

Prerequisites

  • Docker and Docker Compose
  • 4GB+ RAM (for Ollama LLM)
  • Internet connection (for initial setup)

1. Clone Repository

git clone https://github.com/your-username/hugo-ai-studio.git
cd hugo-ai-studio

2. Start Services

docker-compose -f compose.yml up -d

3. Download AI Model

docker exec hugo-ai-studio-ollama-1 ollama pull llama3.2

4. Access Application

  • Chat interface: http://localhost:3001
  • Backend API: http://localhost:8000
  • Generated sites (Nginx): http://localhost:8080

πŸ’¬ How to Use

  1. Open the chat interface at http://localhost:3001
  2. Describe your website in natural language:
    • "Create a tech blog about artificial intelligence"
    • "Build a portfolio website for a photographer"
    • "Make a business website for a coffee shop"
  3. Wait for the AI to create your website (30-60 seconds)
  4. Preview your site using the provided link
  5. Download the ZIP file with all website files

πŸ“ Project Structure

hugo-ai-studio/
β”œβ”€β”€ frontend/                 # React Chat Interface
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ App.js           # Main chat component
β”‚   β”‚   └── index.js         # React entry point
β”‚   β”œβ”€β”€ public/
β”‚   β”‚   └── index.html       # HTML template
β”‚   β”œβ”€β”€ package.json         # Dependencies
β”‚   └── Dockerfile           # Frontend container
β”œβ”€β”€ backend/                  # FastAPI Backend
β”‚   β”œβ”€β”€ main.py              # API endpoints & AI logic
β”‚   β”œβ”€β”€ requirements.txt     # Python dependencies
β”‚   └── Dockerfile           # Backend container
β”œβ”€β”€ compose.yml              # Docker Compose configuration
β”œβ”€β”€ nginx.conf               # Nginx configuration
β”œβ”€β”€ data/                    # SQLite database storage
β”œβ”€β”€ generated-sites/         # Generated website files
└── ollama-data/            # LLM model storage

πŸ”§ Configuration

Environment Variables
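
Service settings are passed to the containers as environment variables in compose.yml. The sketch below shows how the backend might read them; the variable names OLLAMA_URL and SITES_DIR are illustrative assumptions, so check compose.yml for the actual keys:

import os

# Hypothetical variable names for illustration only - see compose.yml for the real keys.
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://ollama:11434")
SITES_DIR = os.getenv("SITES_DIR", "/app/generated-sites")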

Port Configuration

  • 3001: React Frontend (Chat Interface)
  • 8000: FastAPI Backend (API)
  • 8080: Nginx (Generated Sites)
  • 11434: Ollama (LLM Service)

πŸ› οΈ Development

Local Development Setup

# Frontend
cd frontend
npm install
npm start

# Backend
cd backend
pip install -r requirements.txt
uvicorn main:app --reload

# Ollama (separate terminal)
ollama serve
ollama pull llama3.2

API Endpoints

  • GET /health - Health check
  • POST /api/create-website - Create website from description
  • GET /api/download/{site_id} - Download website ZIP
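
A minimal sketch of driving these endpoints from Python with requests; the JSON field names (description, site_id) are assumptions, so check backend/main.py for the actual request and response schema:

import requests

BASE_URL = "http://localhost:8000"

# Ask the backend to generate a site from a plain-language description.
# The payload and response keys below are assumed - see backend/main.py.
created = requests.post(
    f"{BASE_URL}/api/create-website",
    json={"description": "Create a tech blog about artificial intelligence"},
    timeout=120,
).json()
site_id = created["site_id"]  # assumed response field

# Download the finished site as a ZIP archive.
zip_bytes = requests.get(f"{BASE_URL}/api/download/{site_id}", timeout=60).content
with open("website.zip", "wb") as f:
    f.write(zip_bytes)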

🎯 Example Requests

Tech Blog

"Create a tech blog about artificial intelligence and machine learning with posts about recent developments"

Portfolio Website

"Build a portfolio website for a graphic designer showcasing creative work and client testimonials"

Business Website

"Make a professional website for a local coffee shop with menu, location, and contact information"

πŸ” Troubleshooting

Services Not Starting

# Check service status
docker-compose -f compose.yml ps

# View logs
docker-compose -f compose.yml logs frontend
docker-compose -f compose.yml logs backend
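
If the containers are up but the UI cannot reach the API, the backend health endpoint gives a quick answer. A small Python check (curl against the same URL works just as well):

import requests

# A 200 response means the FastAPI backend is up and reachable on port 8000.
print(requests.get("http://localhost:8000/health", timeout=5).status_code)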

AI Model Issues

# Verify model is downloaded
docker exec hugo-ai-studio-ollama-1 ollama list

# Re-download model
docker exec hugo-ai-studio-ollama-1 ollama pull llama3.2

Port Conflicts

# Stop all services
docker-compose -f compose.yml down

# Check port usage
netstat -tulpn | grep :3001

πŸ“Š System Requirements

  • CPU: 2+ cores recommended
  • RAM: 4GB minimum, 8GB recommended
  • Storage: 10GB for models and generated sites
  • Network: Internet connection for initial setup

🀝 Contributing

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Hugo - Static site generator
  • Ollama - Local LLM inference
  • React - Frontend framework
  • FastAPI - Backend framework
  • Docker - Containerization

Made with ❀️ for easy website creation
