
Getting Started

Prerequisites

  • Docker and Docker Compose (for production)
  • Bun 1.2+ (for local development only)
  • An LLM API key (OpenRouter, OpenAI, Ollama, or use GitHub Copilot for free)

Quick Install (Scripted)

This is the fastest way to get running. Each script checks for git and Docker, installs them if missing, clones the repo, and starts the app.

Linux / macOS

curl -fsSL https://raw.githubusercontent.com/Gurkirat-Singh-bit/Get-that-quick/main/install.sh | sh

Windows (PowerShell — run as Administrator)

irm https://raw.githubusercontent.com/Gurkirat-Singh-bit/Get-that-quick/main/install.ps1 | iex

Both scripts install to ~/GetThatQuick and expose the app at http://localhost:12233.
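Once a script finishes, you can confirm the app is reachable with a plain HTTP check (a generic curl probe, not a project-specific endpoint):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:12233

A 200 response means the container is up and serving.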


Manual Install (Docker Compose)

If you prefer step-by-step control:

git clone https://github.com/Gurkirat-Singh-bit/Get-that-quick.git
cd Get-that-quick
docker compose pull
docker compose up -d

Open http://localhost:12233 in your browser.
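If the page does not load, the standard Compose commands show container status and logs:

docker compose ps
docker compose logs -f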

By default, the compose file pulls the published GHCR image:

ghcr.io/gurkirat-singh-bit/get-that-quick:latest

That means a normal install does not rebuild the app locally.
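If you do want a locally built image, here is a minimal sketch, assuming the repository ships a Dockerfile at its root (not confirmed on this page):

# Build a local image; the tag name here is illustrative
docker build -t get-that-quick:local .

You would then point the image field in docker-compose.yml at that tag instead of the GHCR one.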

All persistent data is stored at ~/getthatquick/ on your host machine:

~/getthatquick/
├── prompts/            # Session JSON files
├── templates/
│   ├── local/          # Your custom templates
│   └── community/      # Synced from GitHub
├── models/             # Downloaded Vosk STT models
└── config/
    └── settings.json   # App configuration
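Because all state lives under this single directory, a full backup is one archive command (a sketch based on the layout above):

# Archive the data directory from the host
tar -czf getthatquick-backup.tar.gz -C ~ getthatquick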

Development Setup

1. Clone & Install

git clone https://github.com/Gurkirat-Singh-bit/Get-that-quick.git
cd Get-that-quick

cd server && bun install
cd ../client && bun install

2. Start Dev Servers

Terminal 1 — Server (port 3000):

cd server
bun run dev

Terminal 2 — Client (port 5173 with API proxy):

cd client
bun run dev

The Vite dev server proxies /api and /ws requests to localhost:3000.
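A quick way to confirm both processes are up is to probe each port directly (generic HTTP checks, not project-specific endpoints):

curl -I http://localhost:3000   # server, hit directly
curl -I http://localhost:5173   # Vite dev server, which proxies /api and /ws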

3. First-Run Setup

On first launch, the Onboarding Wizard will guide you through:

  1. Welcome — Overview of features
  2. Voice Model — Download a Vosk STT model (optional — skip if not needed)
  3. LLM Provider — Select and configure your AI provider
  4. API Keys — Enter API keys for your chosen provider
  5. Done — Review your configuration
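In a Docker install, the wizard's choices end up in the settings file from the data layout above, so you can inspect or hand-edit the result on the host (a bare-metal dev run may store it elsewhere):

cat ~/getthatquick/config/settings.json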

4. Configure an LLM Provider

Provider          Base URL                       Notes
OpenRouter        https://openrouter.ai/api/v1   Recommended — access to 200+ models
OpenAI            https://api.openai.com/v1      Direct OpenAI access
Ollama            http://localhost:11434/v1      Local models, no API key needed
LM Studio         http://localhost:1234/v1       Local models, no API key needed
GitHub Copilot    (auto)                         Free for students — see guide
Custom            Any OpenAI-compatible URL      Works with any compatible endpoint
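For the local providers, it is worth confirming the endpoint responds before configuring the app; Ollama and LM Studio both expose the standard OpenAI-compatible model-listing route:

curl http://localhost:11434/v1/models   # Ollama
curl http://localhost:1234/v1/models    # LM Studio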

Build From Source

Only use this path if you are actively modifying the codebase or testing local changes.

# Build the client
cd client && bun run build

# The server serves the built client from client/dist/
cd ../server && bun run start
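With the server started, the freshly built client should be reachable on the server's own port (3000 by default, per the Docker Configuration section below):

curl -I http://localhost:3000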

Docker Configuration

The docker-compose.yml defines the following settings:

Setting           Value
Container port    3000
Host port         12233
Data volume       ~/getthatquick:/data
Image             ghcr.io/gurkirat-singh-bit/get-that-quick:latest
Restart policy    unless-stopped

Environment variables:

  • PORT — Server port inside container (default: 3000)
  • DATA_DIR — Data directory inside container (default: /data)
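For reference, here is the same configuration expressed as a plain docker run command (a sketch mirroring the table above; the container name is illustrative):

docker run -d \
  --name get-that-quick \
  -p 12233:3000 \
  -v ~/getthatquick:/data \
  -e PORT=3000 \
  -e DATA_DIR=/data \
  --restart unless-stopped \
  ghcr.io/gurkirat-singh-bit/get-that-quick:latest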