# Getting Started

## Prerequisites
- Docker and Docker Compose (for production)
- Bun 1.2+ (for local development only)
- An LLM provider: an OpenRouter or OpenAI API key, a local Ollama instance (no key needed), or GitHub Copilot for free
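If you want to confirm the tooling before installing, a quick check from a shell works; the exact version output depends on your installs:

```sh
# Verify the prerequisite tooling is present
docker --version        # Docker Engine
docker compose version  # Compose v2 ships as a docker subcommand
bun --version           # should report 1.2+ for local development
```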
## One-liner Install (recommended)
The fastest way to get running. Each script checks for git and Docker, installs them if missing, clones the repo, and starts the app.
### Linux / macOS

```sh
curl -fsSL https://raw.githubusercontent.com/Gurkirat-Singh-bit/Get-that-quick/main/install.sh | sh
```
### Windows (PowerShell — run as Administrator)

```powershell
irm https://raw.githubusercontent.com/Gurkirat-Singh-bit/Get-that-quick/main/install.ps1 | iex
```
Both scripts install to `~/GetThatQuick` and expose the app at http://localhost:12233.
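Once a script finishes, you can sanity-check that the app is reachable; this sketch assumes the server answers plain HTTP at the root path:

```sh
# Prints 200 once the container is up and serving
curl -fsS -o /dev/null -w "%{http_code}\n" http://localhost:12233
```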
## Manual Install (Docker Compose)
If you prefer step-by-step control:
```sh
git clone https://github.com/Gurkirat-Singh-bit/Get-that-quick.git
cd Get-that-quick
docker compose pull
docker compose up -d
```
Open http://localhost:12233 in your browser.
By default, the compose file pulls the published GHCR image:

```
ghcr.io/gurkirat-singh-bit/get-that-quick:latest
```

That means a normal install does not build the app locally.
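To confirm the container came up, or to watch it while debugging, the standard Compose commands apply from the repo directory:

```sh
# Show the status of the compose project's containers
docker compose ps

# Follow the app's logs (Ctrl-C stops following, not the app)
docker compose logs -f
```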
All persistent data is stored at `~/getthatquick/` on your host machine:

```
~/getthatquick/
├── prompts/           # Session JSON files
├── templates/
│   ├── local/         # Your custom templates
│   └── community/     # Synced from GitHub
├── models/            # Downloaded Vosk STT models
└── config/
    └── settings.json  # App configuration
```
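Since all state lives under that one directory, backing up or migrating an install reduces to copying it. A minimal sketch (the archive name is arbitrary):

```sh
# Stop the app so nothing is written mid-backup
docker compose down

# Archive the whole data directory from your home directory
tar -czf getthatquick-backup.tar.gz -C ~ getthatquick

# Bring the app back up
docker compose up -d
```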
## Development Setup

### 1. Clone & Install
```sh
git clone https://github.com/Gurkirat-Singh-bit/Get-that-quick.git
cd Get-that-quick
cd server && bun install
cd ../client && bun install
```
### 2. Start Dev Servers
Terminal 1 — Server (port 3000):

```sh
cd server
bun run dev
```
Terminal 2 — Client (port 5173 with API proxy):

```sh
cd client
bun run dev
```
The Vite dev server proxies `/api` and `/ws` requests to `localhost:3000`.
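In practice this means you develop against the client port only; a request to the proxy reaches the server transparently. The `/api/health` path below is a hypothetical route, substitute a real one from the server:

```sh
# The Vite dev server (5173) forwards this to the API server (3000)
curl -i http://localhost:5173/api/health  # /api/health is a hypothetical example route
```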
### 3. First-Run Setup
On first launch, the Onboarding Wizard will guide you through:
- Welcome — Overview of features
- Voice Model — Download a Vosk STT model (optional — skip if not needed)
- LLM Provider — Select and configure your AI provider
- API Keys — Enter API keys for your chosen provider
- Done — Review your configuration
### 4. Configure an LLM Provider
| Provider | Base URL | Notes |
|---|---|---|
| OpenRouter | https://openrouter.ai/api/v1 | Recommended — access to 200+ models |
| OpenAI | https://api.openai.com/v1 | Direct OpenAI access |
| Ollama | http://localhost:11434/v1 | Local models, no API key needed |
| LM Studio | http://localhost:1234/v1 | Local models, no API key needed |
| GitHub Copilot | (auto) | Free for students — see guide |
| Custom | Any OpenAI-compatible URL | Works with any compatible endpoint |
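For the local providers, you can confirm the OpenAI-compatible endpoint is reachable before pointing the app at it; this assumes the default ports from the table above:

```sh
# List the models a local Ollama instance serves via its OpenAI-compatible API
curl -fsS http://localhost:11434/v1/models

# Same check for LM Studio (start its local server from the LM Studio app first)
curl -fsS http://localhost:1234/v1/models
```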
## Build From Source
Only use this path if you are actively modifying the codebase or testing local changes.
```sh
# Build the client
cd client && bun run build

# The server serves the built client from client/dist/
cd ../server && bun run start
```
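If you want your local changes running in a container rather than via `bun run start`, you can build an image from the checkout instead of pulling from GHCR. This assumes the repo ships a Dockerfile at its root, which you should verify first:

```sh
# Build a locally tagged image from the working tree
docker build -t get-that-quick:dev .
```

You can then point the `image:` field in `docker-compose.yml` at `get-that-quick:dev` and run `docker compose up -d` as usual.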
## Docker Configuration

The `docker-compose.yml` maps:
| Setting | Value |
|---|---|
| Container port | 3000 |
| Host port | 12233 |
| Data volume | ~/getthatquick:/data |
| Image | ghcr.io/gurkirat-singh-bit/get-that-quick:latest |
| Restart policy | unless-stopped |
Environment variables:

- `PORT` — Server port inside the container (default: `3000`)
- `DATA_DIR` — Data directory inside the container (default: `/data`)
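Putting the table and the environment variables together, a single-container `docker run` equivalent of the compose service looks roughly like this; a sketch for reference, the compose file remains the supported path:

```sh
# Host port 12233 -> container port 3000, data persisted under ~/getthatquick
docker run -d \
  --name get-that-quick \
  --restart unless-stopped \
  -p 12233:3000 \
  -e PORT=3000 \
  -e DATA_DIR=/data \
  -v ~/getthatquick:/data \
  ghcr.io/gurkirat-singh-bit/get-that-quick:latest
```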