Turning a Dusty Gaming PC into a Proxmox Homelab with Tailscale, Terminus, and Claude Code
February 15, 2026
I was staring at my old gaming PC last month – an i7-10700K with 32GB of RAM and an RTX 2070 Super – gathering dust under my desk. It hadn’t been turned on in over a year. Meanwhile, I was paying for cloud VMs to run dev environments, spinning up throwaway EC2 instances for testing, and SSHing into various machines with no consistent way to manage them. The irony finally hit me: I had a perfectly capable machine doing nothing while I was burning money on compute I didn’t need.
That realization kicked off a weekend project that turned into one of the most satisfying infrastructure builds I’ve done. I wiped that dusty gaming rig, installed Proxmox VE, set up a proper dev server, connected everything with Tailscale, added Terminus for server management, and got Claude Code running on it for AI-assisted development. The result is a homelab that rivals the cloud setups I was paying for, running on hardware I already owned.
Why Proxmox Over Bare Metal Linux
Before jumping into the install, I want to address the obvious question: why not just install Ubuntu Server and call it a day? I considered it, but Proxmox gives you something bare metal doesn’t – the ability to run multiple isolated environments on a single machine without the overhead of managing everything manually.
Proxmox VE is a free, open-source virtualization platform built on Debian that supports both KVM virtual machines and LXC containers. The web-based management interface means I can spin up new environments, take snapshots before risky changes, and manage resources without ever touching the physical machine.
The mental model that clicked for me was thinking of Proxmox as my own personal AWS region. Each VM or container is like an EC2 instance, but with zero spin-up cost and no hourly billing.
Installing Proxmox VE
The install itself was dead simple. I downloaded the Proxmox VE ISO, flashed it to a USB drive with Balena Etcher, and booted from it. The entire installation took about 10 minutes.
# Flash the ISO (from my laptop)
sudo dd if=proxmox-ve_8.3-1.iso of=/dev/sdb bs=4M status=progress
During the install, I made a few deliberate choices:
- Filesystem: ZFS in RAID0 (single disk, so no redundancy needed for a dev homelab)
- Network: Set a static IP on my home network (192.168.1.100)
- Hostname: pve.local
After rebooting, the Proxmox web interface was immediately available at https://192.168.1.100:8006. The first thing I did was remove the enterprise repository nag and add the no-subscription repo so updates would work without a license:
# SSH into the Proxmox host
ssh root@192.168.1.100
# Disable enterprise repo (requires paid subscription)
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
# Add the no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# Update and upgrade
apt update && apt dist-upgrade -y
One thing that surprised me was how lightweight Proxmox itself is. The hypervisor layer uses barely 2GB of RAM, leaving 30GB available for VMs and containers. Coming from a cloud mindset where every GB costs money, having this much headroom felt luxurious.
Setting Up the Dev Server VM
This was the core of the build – a dedicated development VM where I’d do all my coding and testing. I created an Ubuntu 24.04 LTS VM through the Proxmox web UI with the following specs:
| Resource | Allocation |
|---|---|
| CPU | 6 cores (of 8 available) |
| RAM | 16GB |
| Disk | 200GB (ZFS thin-provisioned) |
| Network | vmbr0 (bridged) |
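For reference, the same VM could be created from the Proxmox host shell instead of the web UI. This is a sketch, not the exact commands I ran – the VM ID (100), the storage pool name (local-zfs), and the ISO filename are assumptions that depend on your setup:

```shell
# Create the dev VM from the Proxmox host shell
# (VM ID 100, "local-zfs" pool, and ISO name are assumptions)
qm create 100 \
  --name dev-server \
  --cores 6 \
  --memory 16384 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-zfs:200 \
  --cdrom local:iso/ubuntu-24.04-live-server-amd64.iso \
  --ostype l26
qm start 100
```

The web UI does exactly this under the hood, so scripting it with `qm` is handy once you start stamping out VMs regularly.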
After the OS install, I configured it as a proper dev environment:
# Essential dev tools
sudo apt update && sudo apt install -y \
build-essential \
git \
curl \
wget \
vim \
tmux \
htop \
jq \
unzip \
apt-transport-https \
ca-certificates \
gnupg \
lsb-release
# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Install Go (for my TUI projects)
wget https://go.dev/dl/go1.23.6.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.23.6.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
# Install Node.js via nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc
nvm install --lts
# Install GitHub CLI
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
sudo apt update && sudo apt install gh -y
The beauty of doing this in a VM is snapshots. Before I make any major changes – upgrading kernels, testing new toolchains, breaking things on purpose – I take a snapshot through the Proxmox UI. If something goes sideways, I roll back in seconds. This is something you simply can’t do as easily on bare metal.
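Snapshots are also scriptable from the Proxmox host, which is convenient before automated maintenance. A sketch, assuming the dev VM has ID 100:

```shell
# Take a named snapshot before a risky change (VM ID 100 is an assumption)
qm snapshot 100 pre-kernel-upgrade --description "Before kernel upgrade"

# List snapshots for the VM
qm listsnapshot 100

# Roll back if things go sideways
qm rollback 100 pre-kernel-upgrade
```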
Tailscale: The Networking Glue
This is where the setup went from “local server” to “accessible from anywhere.” Tailscale creates a WireGuard-based mesh VPN that connects all your devices into a single private network. No port forwarding, no dynamic DNS, no firewall rules to maintain.
I installed Tailscale on three things: the Proxmox host itself, the dev server VM, and my laptop.
# Install Tailscale (same command on Proxmox host and dev VM)
curl -fsSL https://tailscale.com/install.sh | sh
# Start and authenticate
sudo tailscale up
# Verify the connection
tailscale status
After authenticating each device, they all appeared on my Tailscale network with stable IPs in the 100.x.x.x range. The moment I realized the power of this setup was when I closed my laptop at home, drove to a coffee shop, opened it back up, and SSH’d into my dev server as if I was still on my home network. No VPN client configuration, no connection drops, just seamless access.
I also enabled MagicDNS in the Tailscale admin console, which lets me use hostnames instead of IPs:
# Instead of remembering IPs
ssh user@100.64.0.2
# I can just use the hostname
ssh user@dev-server
Tailscale on the Proxmox Host
Installing Tailscale directly on the Proxmox host was a deliberate decision. It means I can access the Proxmox web UI from anywhere without exposing port 8006 to the internet:
# Access Proxmox web UI from anywhere via Tailscale
# Just open https://proxmox:8006 in a browser on any Tailscale-connected device
This is a massive security improvement over the alternative approaches. No port forwarding on the router, no public-facing management interfaces, no VPN concentrators to maintain. Tailscale handles the encryption, authentication, and routing automatically.
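You can tighten this further with Tailscale ACLs, so only your own devices – not, say, a shared machine you once authenticated – can reach the management ports. A sketch of the access rules (edited in the admin console); the tag name is an assumption:

```json
// Tailscale ACL sketch (admin console → Access Controls)
// "tag:homelab" is a hypothetical tag applied to the Proxmox host and VMs
{
  "tagOwners": {
    "tag:homelab": ["autogroup:admin"]
  },
  "acls": [
    // Only members of my tailnet may reach the Proxmox UI, SSH, and Terminus
    {"action": "accept", "src": ["autogroup:member"], "dst": ["tag:homelab:8006,22,3000"]}
  ]
}
```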
Terminus: Managing It All from One Place
With multiple VMs and containers running, I needed a way to manage everything without keeping a dozen terminal tabs open. That’s where Terminus came in – it’s a self-hosted server management platform that gives you a clean web UI for SSH connections, server monitoring, and file management.
I deployed Terminus as a Docker container inside an LXC container on Proxmox. The setup was straightforward:
# docker-compose.yml for Terminus
version: '3.8'
services:
terminus:
image: ghcr.io/terminus-terminal/terminus:latest
container_name: terminus
ports:
- "3000:3000"
volumes:
- terminus_data:/data
environment:
- SECRET_KEY=your-secret-key-here
restart: unless-stopped
volumes:
terminus_data:
# Deploy it
docker compose up -d
# Verify it's running
docker ps
Once Terminus was running, I added all my servers – the Proxmox host, the dev VM, and a few other machines I manage. The killer feature is the browser-based terminal with split-pane support. I can have SSH sessions to four different machines visible simultaneously, all within a single browser tab.
What makes Terminus particularly useful in a homelab context is the server monitoring dashboard. At a glance, I can see CPU usage, memory consumption, and disk space across all my machines. When I’m running a heavy build on the dev server while simultaneously pulling Docker images, I can spot resource contention immediately without switching between htop sessions.
With Tailscale in the mix, I access the Terminus web UI from anywhere. Sitting at a coffee shop, I pull up https://terminus:3000 in my browser and have full terminal access to every machine in my homelab. No SSH keys on my phone, no remembering IP addresses, just a clean web interface that works from any device.
Claude Code: AI-Powered Development on Local Hardware
The final piece of the puzzle was getting Claude Code running on the dev server. Having AI-assisted development available on a machine with 16GB of RAM and 6 CPU cores dedicated to it means I’m not fighting for resources with my browser and Slack.
# Install Claude Code via npm
npm install -g @anthropic-ai/claude-code
# Verify the installation
claude --version
# Set up the API key
export ANTHROPIC_API_KEY="your-api-key-here"
# Add to bashrc for persistence
echo 'export ANTHROPIC_API_KEY="your-api-key-here"' >> ~/.bashrc
The workflow I settled into is this: I SSH into the dev server from my laptop (or through Terminus from my phone), start a tmux session, and fire up Claude Code in my project directory. The dev server has all the tooling installed – Go, Node.js, Docker, GitHub CLI – so Claude Code can execute commands, run tests, and interact with the full development environment.
# My typical session
ssh dev-server
tmux new -s work
cd ~/projects/my-project
claude
What makes this setup particularly powerful is the CLAUDE.md file pattern I’ve written about before. Each project on the dev server has its own CLAUDE.md with project-specific context, so when I jump between projects, Claude Code immediately understands the codebase, architecture, and conventions.
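For illustration, here’s the shape of one of those files – every detail below is hypothetical, since the contents vary per project:

```markdown
# CLAUDE.md (illustrative sketch)

## Project overview
Go TUI app; entry point is cmd/main.go.

## Commands
- Build: `go build ./...`
- Test: `go test ./...`
- Lint: `golangci-lint run`

## Conventions
- Table-driven tests; no global state
- Conventional Commits for commit messages
```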
Why Run Claude Code on the Dev Server Instead of Locally
Running Claude Code on the dev server instead of my laptop has a few advantages I didn’t anticipate:
- Persistent sessions: tmux sessions survive SSH disconnections. I can start a complex refactoring task, close my laptop, and reconnect hours later to check on progress.
- Consistent environment: The dev server has the exact same tooling every time. No “works on my machine” problems between my laptop and the build environment.
- Resource isolation: Heavy operations like building Docker images or running test suites don’t slow down my laptop. The dev server handles the compute while my laptop stays responsive for browsing docs and taking notes.
- Remote access from anywhere: Combined with Tailscale, I can SSH into the dev server from my phone using Terminus and check on long-running Claude Code tasks from literally anywhere.
The Final Architecture
After a weekend of setup and a week of refinement, here’s what the complete homelab looks like:
| Component | Role | Access Method |
|---|---|---|
| Proxmox VE | Hypervisor + resource management | Web UI via Tailscale (https://proxmox:8006) |
| Dev Server VM | Primary development environment | SSH via Tailscale (ssh dev-server) |
| Docker LXC | Containerized services and testing | SSH + Docker CLI |
| Terminus | Server management dashboard | Web UI via Tailscale (https://terminus:3000) |
| Claude Code | AI-assisted development | CLI on Dev Server via tmux |
| Tailscale | Secure mesh networking | Installed on all nodes |
The total cost of this setup was effectively zero dollars in new hardware. The gaming PC was already paid for, Proxmox is free, Tailscale’s free tier handles up to 100 devices, Terminus is open source, and Claude Code just needs an API key I was already paying for.
Compare that to running equivalent cloud infrastructure:
- EC2 t3.xlarge (4 vCPU, 16GB): ~$120/month
- 100GB EBS storage: ~$10/month
- Data transfer: variable
- Total: ~$130+/month or ~$1,560/year
My homelab runs the same workloads for the cost of electricity, which, on a machine that’s mostly idle between dev sessions, is negligible.
Lessons from the Build
A few things I learned during this process that aren’t obvious from tutorials:
Proxmox networking can be confusing at first. The default bridge (vmbr0) works for most setups, but if you start adding VLANs or complex network configurations, spend time understanding how Linux bridges work before making changes. I bricked my network config once and had to plug in a monitor and keyboard to fix it.
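For orientation, this is roughly what the default bridge config Proxmox writes looks like – the physical NIC name (enp3s0 here) is an assumption and will differ on your hardware:

```text
# /etc/network/interfaces – default Proxmox bridge layout (sketch)
# enp3s0 is a placeholder NIC name; check yours with `ip link`
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
```

Understanding that vmbr0 owns the IP and the physical NIC is just a bridge port is the key insight – editing the wrong stanza is exactly how I bricked my config.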
LXC containers are underrated. For services that don’t need a full kernel (like Docker hosts, web servers, or monitoring tools), LXC containers use a fraction of the resources a full VM would consume. My Terminus container uses about 256MB of RAM compared to the 2GB minimum a VM would need.
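Creating one from the host shell is quick. A sketch, assuming an Ubuntu template has been downloaded – the CT ID, template filename, and storage pool are assumptions; note `nesting=1` is needed to run Docker inside the container:

```shell
# Create an unprivileged LXC container for Docker workloads
# (CT ID 200, template name, and storage pool are assumptions)
pct create 200 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
  --hostname docker-host \
  --cores 2 \
  --memory 2048 \
  --rootfs local-zfs:32 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 \
  --features nesting=1
pct start 200
```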
Snapshot before everything. I cannot stress this enough. Before installing Tailscale, before changing network configs, before upgrading anything – take a snapshot. The 30 seconds it takes has saved me hours of rebuilding.
Tailscale subnet routing is powerful. If you install Tailscale on the Proxmox host and enable subnet routing, you can access your entire home network through Tailscale without installing it on every device. I didn’t set this up initially and wish I had.
# Enable subnet routing on Proxmox host
sudo tailscale up --advertise-routes=192.168.1.0/24
Key Learnings
- Old gaming hardware makes excellent homelab infrastructure – An i7 with 32GB of RAM is more than enough to run multiple VMs and containers simultaneously for development workloads
- Proxmox provides cloud-like flexibility for free – Snapshots, live migration readiness, and web-based management eliminate the need for paid virtualization platforms
- Tailscale eliminates networking complexity entirely – No port forwarding, no dynamic DNS, no firewall rules. Install it, authenticate, and your devices are connected
- Terminus centralizes server management beautifully – A single web UI for monitoring and terminal access across all machines beats juggling SSH sessions manually
- Running Claude Code on a dedicated dev server beats local execution – Persistent tmux sessions, consistent tooling, and resource isolation create a superior development experience
- LXC containers are the homelab sweet spot – For services that don’t need full VM isolation, containers use dramatically fewer resources while providing the same functionality
- Snapshots are the homelab equivalent of infrastructure as code – They provide the same rollback safety net that Terraform state gives you in the cloud
- The best infrastructure is the hardware you already own – Before spinning up another cloud instance, look at what’s sitting unused on your desk
The most satisfying part of this entire project wasn’t the technical setup – it was hearing that old gaming PC’s fans spin up for the first time in a year, knowing it was finally doing something productive. Every cloud engineer should have a homelab. It’s the fastest way to experiment with infrastructure concepts without worrying about billing alerts, and there’s something deeply satisfying about running your own little datacenter from under your desk.