Pacey Conning

Linux Systems Enthusiast

Self-taught technologist with 8+ years of Linux experience. Running a multi-server home lab, experimenting with local AI/LLMs, and building practical tools for real-world problems.

About Me

Who Am I

I'm a self-taught technologist and passionate hobbyist with over 8 years of Linux experience. I run a multi-server home lab hosting everything from AdGuard and Jellyfin to a local llama.cpp inference server for AI experimentation.

My approach is hands-on and practical—I learn by building and breaking things. Whether it's configuring an Arch install, managing a Proxmox Virtual Environment, or writing Python tools to manage local LLMs, I enjoy solving real problems with the right technology.
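
As a taste of what those Python tools look like, here is a minimal sketch that inventories a local GGUF model directory and prints each model's size. The directory path is only an example for this sketch, not my actual layout.

```python
#!/usr/bin/env python3
"""Minimal sketch: list local GGUF models and their sizes.

The models directory below is an example path, not a real layout.
"""
from pathlib import Path

MODELS_DIR = Path("/srv/llm/models")  # example location for GGUF files


def list_models(models_dir: Path) -> list[tuple[str, float]]:
    """Return (filename, size in GiB) for every .gguf file found."""
    models = []
    for path in sorted(models_dir.glob("*.gguf")):
        size_gib = path.stat().st_size / 1024**3
        models.append((path.name, size_gib))
    return models


if __name__ == "__main__":
    for name, size_gib in list_models(MODELS_DIR):
        print(f"{name:60s} {size_gib:6.1f} GiB")
```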

These days, I leverage AI tools like OpenCode to accelerate my development workflow while maintaining a strong foundation in systems administration and infrastructure.

Tech Stack

Linux (Systems)
Docker (Containers)
Python (Development)
Bash (Scripting)
llama.cpp (AI/LLM)
Nginx (Web Server)
React (Frontend)
TypeScript (Language)
OpenCode (AI Tools)
Git (Version Control)

Home Lab Infrastructure

Multi-server environment hosting containerized services, local AI inference, and self-hosted applications

Hardware Setup

Dell Precision T7810 (Primary Server): Xeon workstation, DDR4-2400, dual RTX GPUs
Mini PC (Network & Media): Low-power x86, 2.5GbE networking
NAS (Storage Server): Multi-bay storage, RAID configuration
2.5GbE Network (Infrastructure): 5-port gigabit switch, high-speed links

Running Services

llama.cpp Server (AI/LLM, Dell T7810): Local AI inference server running 15+ optimized GGUF models for privacy-first machine learning (see the sketch after this list).
AdGuard Home (Network, Mini PC): Network-wide ad blocking and DNS management for enhanced privacy and performance.
Jellyfin (Media, Mini PC): Self-hosted media streaming server for movies, TV shows, and music.
n8n (Automation, Dell T7810): Workflow automation platform for integrating services and automating tasks.
OpenWebUI (AI/LLM, Dell T7810): Web interface for interacting with local LLM models via llama.cpp.
Immich (Media, Mini PC): Self-hosted photo and video management platform with automatic backups.
Nginx Proxy Manager (Infrastructure, Mini PC): Reverse proxy with SSL termination for secure service access.
OpenMediaVault (Storage, NAS): NAS management with centralized storage and qBittorrent integration.
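
For a sense of how the llama.cpp server fits into the rest of the stack, here is a minimal sketch of querying its OpenAI-compatible chat endpoint from Python. The URL, port, and model name are placeholders for this example, not my actual configuration.

```python
#!/usr/bin/env python3
"""Minimal sketch: query a local llama.cpp server.

llama.cpp's built-in server exposes an OpenAI-compatible HTTP API;
the address and model name below are placeholders, not my real setup.
"""
import json
import urllib.request

LLAMA_URL = "http://192.168.1.50:8080/v1/chat/completions"  # example address


def ask(prompt: str) -> str:
    """Send a single chat message and return the model's reply text."""
    payload = {
        "model": "local-gguf",  # llama-server answers with whatever model it was started with
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        LLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarise why self-hosting an LLM is useful in one sentence."))
```

OpenWebUI talks to the same llama.cpp server, so both the web UI and scripts like this stay entirely on the local network.
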
8+ Active Services · 3 Servers · 15+ LLM Models · 10+ Years of Linux

Why Self-Host?

Privacy, control, and learning. By running services locally, I maintain complete ownership of my data, reduce dependence on cloud providers, and gain hands-on experience with production-grade infrastructure. Every service teaches something new about systems administration, networking, or application deployment.

Learning Journey

A self-directed path through Linux, infrastructure, and emerging technologies

~2015 · Learning

Linux Journey Begins

Started exploring Linux as a hobby, learning command line basics and system administration fundamentals.

2020 · Infrastructure

First Home Lab Setup

Built initial home server infrastructure, experimenting with self-hosted services and network configuration.

2023 · Growth

Multi-Server Expansion

Expanded to multi-server environment with containerization (LXC/Docker), running AdGuard, Jellyfin, and more.

2024 · AI/ML

Local AI/LLM Deep Dive

Deployed llama.cpp inference server, built model management tooling, and integrated AI into daily workflows with OpenCode.

2024-2025 · Development

Web Development & Portfolio

Learned React, Next.js, and TypeScript to build modern web applications. Created this portfolio and CycleSync PWA.

What's Next?

Seeking opportunities to transition hobby expertise into professional roles in systems administration, infrastructure engineering, or DevOps. Continuing to expand knowledge in Kubernetes, infrastructure as code, and advanced networking while contributing to open-source projects.

Get In Touch

Open to opportunities in systems administration and infrastructure

I'm currently open to new opportunities and collaborations. Feel free to reach out!

paceyconning@protonmail.com

Find me on GitHub