Portfolio

Dakota Christopher Rose

IT Support Technician  ·  Systems Administrator  ·  Builder

Professional Summary

Technically adept and people-focused Level 2 IT Support Technician with a proven track record in endpoint management, enterprise device deployment, and Microsoft 365 administration. Experienced supporting over 1,400 devices across multiple campuses in a dynamic K-12 school district. Skilled in solving novel technical issues, mentoring junior technicians, and bridging the gap between users and engineers. Committed to professional development through lab-based learning, certifications, and hands-on experimentation.

Technical Skills

Endpoint Management (Mosyle, Microsoft Intune)
Apple & Windows Device Support
Microsoft 365 Admin Centers & Azure
MFA, Conditional Access, Compliance
Active Directory & Group Policy
PowerShell Scripting
Pi-hole, PiVPN, NTP, Web Server Configuration
Network Troubleshooting (DHCP, DNS, ARP)
nslookup, traceroute, network diagnostics
Ticketing Systems (Incident IQ)
Knowledge Base Management
Escalation Management & Customer Empathy

Certifications

Microsoft 365 Fundamentals (MS-900) · Earned
Incident IQ Agent & Admin Certifications · Earned
SC-900 – Security, Compliance, and Identity Fundamentals · In Progress
Apple Device Support Certification · In Progress

How I Built and Deployed a Web Server from Scratch Using Proxmox

From bare-metal VM creation to a live HTTPS website — Proxmox, Ubuntu Server, Nginx, and Let's Encrypt.


Website Hosting Project (original)

The original self-hosted Ubuntu web server build — from bare metal to public access.


Project Write-Up

How I Built and Deployed a Web Server from Scratch Using Proxmox

There's a difference between renting a $5/month VPS and actually understanding what a web server is. I wanted the latter. This is a full walkthrough of how I built a web server from the ground up — from creating a virtual machine inside my home lab to serving a real website over HTTPS on a public domain. No managed hosting, no control panels, no hand-holding. Just Linux, a terminal, and a willingness to break things and fix them.

If you want to follow along, you'll need a machine running Proxmox VE, a domain name, and some patience. Everything else used here is free and open source.

What is Proxmox?

Proxmox Virtual Environment is a free, open source hypervisor — software that lets you run multiple virtual machines on a single physical machine. Think of it like VMware or VirtualBox, but built for servers and serious about it. Each virtual machine is completely isolated, has its own operating system, its own resources, and its own network identity. This means I can run a web server, a VPN, a home automation system, and a dozen other things all on the same hardware without them interfering with each other.

If you're building a home lab or a personal server setup, Proxmox is one of the best foundations you can use.

Part 1 — Creating the Virtual Machine

The first decision when creating a VM is the machine type. Proxmox offers two options: i440FX (the default) and Q35. Q35 is a modern Intel chipset emulation that supports PCIe, UEFI boot, and TPM. For any new VM, Q35 is the right choice — the i440FX chipset dates from 1996 and exists mainly for legacy compatibility.

Alongside Q35, I chose UEFI over the traditional SeaBIOS. UEFI is the modern firmware standard that replaced the old BIOS on consumer hardware around 2012. It supports secure boot, larger disks, and faster startup. There's no reason to use SeaBIOS on a new VM.

For the virtual hardware, the choices that matter most for performance are the disk bus and the network adapter. Both should be set to VirtIO — Proxmox's paravirtualized drivers purpose-built for virtual machines. Regular emulated hardware adds overhead because the hypervisor has to pretend to be real hardware. VirtIO cuts out that pretense and gives near-native performance.

For resources, a web server serving static files doesn't need much. Two CPU cores and 2GB of RAM is comfortable. The disk can be as small as 20GB — Ubuntu Server with Nginx installed uses less than 5GB. I enabled memory ballooning, which allows Proxmox to reclaim unused RAM from the VM when needed.
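For reference, the same VM can be created from the Proxmox host shell with the qm CLI. This is a sketch rather than the exact build: the VM ID (100), storage name (local-lvm), and bridge (vmbr0) are placeholders to adjust for your environment.

```shell
# Create a Q35/UEFI VM with VirtIO disk and NIC, 2 cores, 2GB RAM with ballooning
qm create 100 \
  --name webserver \
  --machine q35 \
  --bios ovmf \
  --efidisk0 local-lvm:1 \
  --cores 2 \
  --memory 2048 \
  --balloon 1024 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:20 \
  --net0 virtio,bridge=vmbr0 \
  --ostype l26
```

The same options are what the web UI wizard sets behind the scenes; the CLI just makes the choices explicit and repeatable.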

Part 2 — Installing Ubuntu Server

I used Ubuntu Server 24.04 LTS. LTS stands for Long Term Support — Canonical supports these releases with security patches for five years, making them the right choice for anything you want to run reliably long-term. Non-LTS interim releases get only nine months of support, which isn't enough for a server.

A few installer choices worth calling out. For storage layout, I used LVM (Logical Volume Manager) rather than a plain partition layout. LVM makes it possible to resize partitions, add disks, and take snapshots without reinstalling the OS — the right foundation for a server you expect to grow. I also installed OpenSSH Server during setup, which is how all server management happens from this point forward.
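One reason LVM pays off: if you later enlarge the VM's virtual disk in Proxmox, the extra space can be claimed without a reinstall. A sketch of that workflow, assuming Ubuntu's default volume names (ubuntu-vg/ubuntu-lv) and a disk at /dev/sda with the LVM partition as partition 3:

```shell
# After growing the virtual disk in the Proxmox UI:
sudo growpart /dev/sda 3                                  # extend the partition table entry
sudo pvresize /dev/sda3                                   # tell LVM the physical volume grew
sudo lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv    # grow the LV and the filesystem in one step
```

The device and partition numbers are examples; check yours with lsblk first.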

After installation, the first thing to do is set a static IP address. A server with a changing IP is a server you'll constantly be chasing. Ubuntu uses Netplan for network configuration. First, find your interface name:

ip a

Then edit the Netplan config at /etc/netplan/00-installer-config.yaml:

network:
  version: 2
  ethernets:
    enp6s18:            # replace with your interface name
      dhcp4: false
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]

One important note: use sudo netplan try rather than sudo netplan apply when testing changes. netplan try applies the config but automatically reverts after 120 seconds if you don't confirm it. If you make a mistake and lose your SSH connection, it rolls back on its own.
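The safe sequence looks like this:

```shell
sudo netplan try    # applies the config and waits for confirmation
# press Enter within 120 seconds to keep the change; otherwise it reverts
ip a                # confirm the interface shows the static address
```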

Note: After setting a static IP, run ip a again and confirm only one IP appears on the interface. A lingering DHCP lease can coexist with your static address and cause subtle networking issues — including breaking port forwarding rules at the router level.

Part 3 — Hardening the Server

A fresh Ubuntu Server install is reasonably secure but not hardened. These are the steps I take on every new server before it touches the internet.

Update Everything

sudo apt update && sudo apt upgrade -y

Then set up automatic security updates so you don't have to think about it:

sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure --priority=low unattended-upgrades
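For reference, the dpkg-reconfigure step writes a small config file at /etc/apt/apt.conf.d/20auto-upgrades; after answering yes it should contain:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```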

SSH Key Authentication

Password authentication over SSH is convenient but weak. SSH key authentication uses something you have (a private key file) rather than something you know (a password). Generate a key pair on your local machine:

ssh-keygen -t ed25519 -C "your-label-here"

Ed25519 is a modern elliptic curve algorithm — shorter keys, more secure than RSA. Set a passphrase when prompted. It encrypts your private key file so that even if someone gets your machine, they can't use your key without it.

On Windows, copy your public key to the server with this PowerShell one-liner:

type $env:USERPROFILE\.ssh\id_ed25519.pub | ssh user@server-ip "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"
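On Linux or macOS, the standard ssh-copy-id utility does the same thing in one step, including setting the right permissions on the server side:

```shell
# copies the public key into ~/.ssh/authorized_keys on the server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server-ip
```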

Test that key-based login works first, then open /etc/ssh/sshd_config and set:

PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
X11Forwarding no
MaxAuthTries 3

Restart SSH after saving: sudo systemctl restart ssh
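A safety net worth knowing: sshd can validate its own configuration before you restart it. A typo in sshd_config can otherwise lock you out of a remote machine.

```shell
# -t parses the config and reports errors without restarting anything
sudo sshd -t && sudo systemctl restart ssh
```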

Firewall (UFW)

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

Only three ports open: SSH, HTTP, and HTTPS. Every other port is silently dropped.
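Verify the rule set after enabling:

```shell
sudo ufw status verbose   # shows default policies and the three allowed ports
```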

Fail2ban

Fail2ban watches log files for repeated failed authentication attempts and automatically bans offending IPs. Install it, then create /etc/fail2ban/jail.local:

[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
port    = 22

Then enable and start the service:

sudo systemctl enable fail2ban
sudo systemctl start fail2ban
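To confirm the jail is active and see what it's doing (using the sshd jail defined above):

```shell
sudo fail2ban-client status         # lists active jails
sudo fail2ban-client status sshd    # shows failed attempts and currently banned IPs
```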

Part 4 — Installing and Configuring Nginx

sudo apt install nginx -y
sudo systemctl enable nginx
sudo systemctl start nginx

Nginx uses a clean pattern for managing multiple sites. Config files live in /etc/nginx/sites-available/. To activate a site, you create a symlink to it in /etc/nginx/sites-enabled/. To deactivate it, you delete the symlink without touching the original config.

Remove the default site and create your web root:

sudo rm /etc/nginx/sites-enabled/default
sudo mkdir -p /var/www/mysite/html
sudo chown -R $USER:$USER /var/www/mysite/html

Create a config file at /etc/nginx/sites-available/mysite:

server {
    listen 80;
    listen [::]:80;

    server_name yourdomain.com www.yourdomain.com;
    root /var/www/mysite/html;
    index index.html index.htm;

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;

    location / { try_files $uri $uri/ =404; }
    server_tokens off;
    location ~ /\. { deny all; }

    access_log /var/log/nginx/mysite_access.log;
    error_log  /var/log/nginx/mysite_error.log;
}

Enable the site and verify it's working:

sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
sudo ss -tlnp | grep nginx

Important: Always run sudo ss -tlnp | grep nginx after starting Nginx. A service can show as "running" without actually listening on any ports — this is a real condition that will fool you if you only check service status. The ss command shows ground truth: what ports are actually open at the network level right now.

Part 5 — Going Public

DNS

Find your public IP: curl ifconfig.me

Then create two A records at your domain registrar pointing to that IP — one with a blank host (root domain) and one with www. Use dnschecker.org to verify propagation before going further.
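You can also check propagation from the command line. Querying a public resolver directly shows what the rest of the internet sees (dig ships in Ubuntu's dnsutils package):

```shell
dig +short yourdomain.com A @1.1.1.1       # should print your public IP
dig +short www.yourdomain.com A @1.1.1.1
```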

Note: Most residential internet connections have dynamic public IPs that can change after router reboots. If your IP changes, your DNS records become stale and your site goes offline. The long-term solution is DDNS (Dynamic DNS), which automatically updates your records when your IP changes.

Port Forwarding

In your router's admin interface, create two rules forwarding ports 80 and 443 (TCP) to your server's local IP address. Also add a DHCP reservation that associates your server's MAC address with its static IP — without this, the router may hand that IP to another device or create conflicting entries that break port forwarding silently.

HTTPS with Let's Encrypt

Verify port 80 is reachable externally using a port-checking tool before running Certbot. Let's Encrypt verifies domain ownership over HTTP and enforces a rate limit on failed attempts — confirm everything is working first.
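A quick sanity check, run from a network outside your LAN (a phone hotspot works well): the site should answer over plain HTTP before Certbot gets involved.

```shell
curl -sI http://yourdomain.com | head -n 1   # expect an HTTP 200 (or a redirect)
```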

sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

Certbot handles everything: it verifies domain ownership, obtains the certificate, updates your Nginx config, and sets up automatic renewal. When asked whether to redirect HTTP to HTTPS, say yes. The certificate is valid for 90 days and renews automatically — you never have to think about it again.
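To confirm the renewal machinery actually works without spending a real certificate, Certbot offers a dry run against Let's Encrypt's staging environment. On apt installs, a systemd timer handles the scheduled renewals:

```shell
sudo certbot renew --dry-run             # simulates renewal end to end
systemctl list-timers | grep certbot     # the renewal timer installed by the package
```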

What I Learned

The most important thing this project taught me is that understanding the stack matters more than any individual tool. When things broke — and they did break — being able to reason through each layer (is the VM running? is the service running? is it bound to the right port? is the firewall open? is the traffic reaching the server?) made the difference between a five-minute fix and an hours-long mystery.

A few specific lessons worth passing on: always verify with ss -tlnp after starting any network service. Use netplan try instead of netplan apply for network changes on remote machines. Check ip a after configuring a static IP to make sure no duplicate DHCP lease is lingering. And if SSH ever breaks completely, the Proxmox console gives you direct VM access regardless of network state — it's the equivalent of plugging a keyboard directly into the server.

This stack — Proxmox, Ubuntu Server, Nginx, Let's Encrypt — is the foundation that a huge portion of the internet runs on. Understanding it from first principles, rather than clicking through a managed hosting control panel, gives you the ability to debug anything, optimize for your specific needs, and build whatever comes next.

This server is part of an ongoing home lab project. Future additions include a reverse proxy configuration for application backends, VLAN network isolation, and dynamic DNS automation.

Project Write-Up

Website Hosting Project (original)

Overview

This project documents the process of self-hosting a secure website (https://iamdakota.com) using an Ubuntu Server installation, NGINX web server, basic firewall setup, and manual HTML page creation. All steps were performed on personal hardware and configured from scratch as part of a technical learning initiative.

Getting Started

I began by repurposing an older laptop. I downloaded Ubuntu Server and used Rufus to image it onto a USB drive. After booting from the USB, I went through the guided installation process. One major hurdle was during the network configuration stage — I had to resolve a mismatch between my static IP settings and the subnet address format. This required understanding how subnets are written in CIDR format and adjusting values so my IP fit correctly within the subnet.

Installing NGINX & UFW

Once the server was online, I ran system updates and installed NGINX using sudo apt install nginx. To secure the server, I enabled UFW (Uncomplicated Firewall) and allowed HTTP, HTTPS, and SSH traffic. This ensured only the necessary ports were exposed to the network.

Understanding Port Forwarding

Port forwarding was required to expose the web server to the public internet. This involved logging into my router and forwarding ports 80 (HTTP) and 443 (HTTPS) to my server's internal IP address. Port forwarding allows external traffic on specific ports to be passed through the router and directed to a device inside the local network. Without this, external users would be blocked from accessing the server entirely.

Note: Your internet service provider assigns a public IP address to your home network — this is the address users on the internet will use to reach your server. This public IP is separate from your internal IP and is what you'll need to reference when configuring DNS records with your domain registrar.

Pointing the Domain to Your Server

To make the website accessible using a domain name like iamdakota.com, I logged into my domain registrar and added a DNS record that points to the public IP address provided by my internet service provider. Specifically, I created an A record for the root domain (and optionally for www) that targets the server's public IP. This allows browsers to resolve the domain name to the actual IP of my server.

Note: DNS changes can take a few minutes to propagate depending on the TTL settings.

Creating HTML Files

With NGINX serving the default page, I navigated to /var/www/html, which is the default web root on Ubuntu. To add new pages, I created directories using mkdir and then added HTML files inside them.

Important: mkdir creates directories, not HTML files. If you attempt to open a directory directly using nano, it will fail. You must first create the directory (e.g., /projects), then use nano /projects/project-name.html to create a file within it.

Switching to SSH and Local Editing

Once SSH access was working reliably, I began editing HTML files locally on my Windows machine using Sublime Text. Before committing those changes to the live server, I would open the files directly in my browser to preview them. This allowed me to test and refine the layout and styling before manually recreating or updating the files on the server using nano over SSH.

Summary

This project served as an entry point into self-hosted web infrastructure, covering core topics like Ubuntu server management, static IP setup, firewall configuration, port forwarding, DNS setup, and simple web development. It laid the foundation for the more comprehensive Proxmox-based build documented in the newer project writeup.

Let's Connect

Get In Touch

Currently open to new opportunities and interesting conversations.