Build Your Own Private ChatGPT: Local AI with Docker, Ollama, Open WebUI, and Z-Image Turbo
1:30 pm in Workshop Track

Everyone is experimenting with AI, but most organizations are sending sensitive data to third-party cloud services without fully understanding the tradeoffs.
What if you could run your own private ChatGPT-style system locally (and even hook up additional agents and management systems like OpenClaw)?
In this hands-on, live-build session, we’ll create a fully functional private AI chat assistant that can write code, generate images, and talk to you through chat and voice interfaces, using:
- Docker
- Ollama
- Open WebUI
- Local LLMs
- Local image generation (Z-Image Turbo)
- Text-to-speech (TTS) using Kokoro AI
Using only a laptop and open-source tools, we’ll build a system that:
- Runs entirely on your own machine
- Keeps company data private
- Remembers conversation history
- Supports multiple models
- Generates images locally
- Can be extended for team use
No enterprise AI budget required. No cloud lock-in. No sending your data to the internet.
By the end of this session, you’ll understand how local AI actually works — and how you can deploy it yourself or inside your organization.
This is not theory. We will build it live.
Workshop Prerequisites:

A working computer with:
- A reasonable CPU running a reasonable OS:
  - Windows (Intel 8th gen or newer)
  - Linux (Intel 8th gen or newer)
  - Mac (Intel 8th gen or newer, or Apple M1 or newer)
- Reasonable RAM:
  - 16 GB of RAM is a good starting point
- A reasonable GPU (optional):
  - Basic ML inference: 12 GB of VRAM recommended
  - Z-Image Turbo: 24 GB of VRAM
- Software preinstalled (we can also configure it in the workshop):
  - Docker
  - Ollama
    - Install a base model with `ollama run phi4-mini`
  - Git
  - Git LFS (optional)
  - curl (on Linux)
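If you want to confirm your machine is ready before the session, a quick smoke test from a terminal might look like this (version output will vary by machine, and the model download is a few gigabytes):

```shell
# Check that the prerequisite tools are on your PATH
docker --version
ollama --version
git --version

# Pull the suggested base model and start an interactive chat;
# type /bye to exit the Ollama prompt
ollama run phi4-mini
```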
The Maungs will be running their workshop demos on an Intel NUC (10th gen i5) with 16 GB of DDR4 for the basic setup. The basic setup includes:
- Docker
- Ollama
- Kokoro TTS
- Open WebUI
- Basic models that can run on CPU with 16GB of RAM
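One common way to stand up the Ollama + Open WebUI pair is a small Docker Compose file. This is a minimal sketch, assuming the default images and ports; `OLLAMA_BASE_URL` points Open WebUI at the Ollama container, and the named volumes persist models and chat history across restarts:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

After `docker compose up -d`, Open WebUI should be reachable at http://localhost:3000 and Ollama's API at http://localhost:11434.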
They will also demo the steps to enable image creation using Z-Image Turbo and add that feature to Open WebUI. This will be shown on a MacBook Pro M3 with 128 GB of VRAM, and/or an NVIDIA DGX Spark (128 GB of VRAM), and/or an Ubuntu server with undisclosed NVIDIA GPUs. ;)