I’ve been there. It’s 11 PM, you just pushed a hotfix, and suddenly your production site throws a 500 Error because the internet connection dropped halfway through the file upload. Drag-and-drop FTP is a gamble, and manual git pull on the server is a recipe for downtime.
If you are serious about engineering, you need a pipeline that is boringly reliable. But here is the problem: most tutorials on the web are teaching you dangerous habits. They tell you to disable SSH security checks or give your CI runner full root access to your server. That is a security nightmare waiting to happen.
In this guide, we are going to build a bulletproof deployment pipeline. We won’t just copy files; we will implement an Atomic Symlink Swap (for zero downtime) and secure it with Restricted SSH Commands. Let’s fix your workflow.
🛡️ SECURITY & LIABILITY DISCLAIMER:
- Advanced Users Only: This guide modifies `authorized_keys` with restricted commands. A typo here can permanently lock you out of your server. Always keep a secondary root session open while applying these changes.
- Private Repo Requirement: If your code is private, you MUST follow the “Deploy Key” instructions before Step 3, otherwise the deployment script will crash.
- Linux Exclusive: The atomic swap uses `mv -T`, a command exclusive to GNU/Linux. Do not run this on macOS or BSD servers.
- No Warranty: This pipeline is provided “as is”. The author assumes no liability for downtime, data loss, or failed deployments. Test in a staging environment first.
Why Automation Beats “Sudo Git Pull”
Atomic Deployment is a strategy where the new version of your application is fully built in a background folder before it goes live. Traffic is switched instantly using a symbolic link, ensuring the site is never in a broken or “half-uploaded” state.
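The idea is easy to see with a throwaway directory (paths here are made up for the demo): each deploy lands in its own timestamped folder under `releases/`, and `current` is just a symlink that the web server actually serves from.

```shell
# Illustrative layout: one folder per release, plus a "current" symlink.
APP=$(mktemp -d)
mkdir -p "$APP/releases/20240101120000"
ln -s "$APP/releases/20240101120000" "$APP/current"
readlink "$APP/current"
```

Switching versions is then just repointing `current` — the web server never sees a half-copied directory.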
According to the 2024 State of CI/CD Report by the CD Foundation, elite performers who automate their deployment frequency are significantly less likely to suffer from change failures. Manual deployments introduce human error—forgetting to run `npm install` or missing a migration. Automation forces consistency.
But we aren’t just automating; we are strictly following the principle of Least Privilege. If your GitHub account gets hacked, we want to ensure the attacker can’t use your deployment keys to wipe your server.
Prerequisites: What You Need
- A VPS: Ubuntu/Debian preferred (DigitalOcean, Linode, Hetzner, or AWS EC2).
- Terminal Access: You need `sudo` privileges on the server.
- A GitHub Repository: Where your code lives.
- Coffee: Optional, but recommended.

Step 1: Generating the SSH Keys (The Right Way)
Forget RSA keys. They are the floppy disks of cryptography—clunky and outdated. We are going to use Ed25519, which is faster and more secure.
Open your local terminal (not the server) and run this:
ssh-keygen -t ed25519 -C "github-actions-deploy" -f ~/.ssh/github_deploy_key
Crucial Note: When asked for a passphrase, press Enter (leave it empty). Since GitHub Actions runs automatically, it can’t type a password for you. “But isn’t that insecure?” you ask. Yes, normally. That’s why in Step 2, we are going to lock this key down so tight it won’t matter.
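If you prefer a fully non-interactive version of the same command (handy when scripting your setup), `-N ""` supplies the empty passphrase up front so no prompt ever appears — a small sketch, writing to a temporary directory here rather than `~/.ssh`:

```shell
# Same key generation, no prompts: -N "" sets the empty passphrase, -q silences output.
KEYDIR=$(mktemp -d)
ssh-keygen -t ed25519 -C "github-actions-deploy" -N "" -q -f "$KEYDIR/github_deploy_key"
ls "$KEYDIR"
```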
Step 2: The “Valet Key” Strategy (Restricted SSH)
This is where 90% of tutorials fail. They tell you to paste your public key into `authorized_keys` and move on. This gives GitHub full shell access to your server.
Instead, we are going to use the command="..." option. Think of this like a valet key for your car: it starts the engine, but it won’t open the trunk or glovebox.
- Copy the public key content: `cat ~/.ssh/github_deploy_key.pub`
- Log into your VPS.
- Edit the authorized keys file: `nano ~/.ssh/authorized_keys`
- Paste the key, but prepend the restriction configuration like this:
command="/usr/local/bin/deploy.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAAC3NzaC1lZDI1NTE5...
What did we just do?
| Restriction | What it does |
|---|---|
| `command="/.../deploy.sh"` | Forces the server to run ONLY this script. Even if the attacker runs `ssh user@host "rm -rf /"`, the server ignores it and runs deploy.sh instead. |
| `no-port-forwarding` | Prevents the attacker from using your server as a VPN to pivot into your internal network. |
| `no-pty` | Prevents allocating an interactive terminal session (shell). |
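A useful side effect of the forced command: sshd still records whatever the client *tried* to run in the `SSH_ORIGINAL_COMMAND` environment variable, so your script can log blocked attempts. A minimal sketch (the variable is simulated below so the snippet is self-contained; in a real forced-command session sshd sets it for you):

```shell
# Inside a forced-command session, sshd exports SSH_ORIGINAL_COMMAND.
# Simulated here for the demo:
SSH_ORIGINAL_COMMAND='rm -rf /'
echo "Audit: client requested '${SSH_ORIGINAL_COMMAND:-<interactive>}', running deploy.sh anyway"
```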
🛑 STOP! Are you using a Private Repository?
If yes, the script in Step 3 will fail with a “Permission Denied” error. Why? Because we disabled SSH Agent Forwarding in Step 2 for security, so your server effectively has “no ID” when it tries to talk to GitHub.
The Fix (Do this now): You must give your server a read-only “Deploy Key”.
- On your VPS, generate a new identity key: `ssh-keygen -t ed25519 -C "server-identity" -f ~/.ssh/id_ed25519_github_identity`
- Copy the public key: `cat ~/.ssh/id_ed25519_github_identity.pub`
- Go to your GitHub Repo -> Settings -> Deploy Keys -> Add Deploy Key.
- Paste the key and ensure “Allow write access” is UNCHECKED.
Now your server has its own “ID card” to clone the repo, but no power to push code changes. This maintains the “Least Privilege” security model.
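One wrinkle worth noting: because this identity key has a non-default filename, `git clone` over SSH won't find it on its own. A short entry in the server's `~/.ssh/config` tells SSH to present it when talking to github.com — a sketch; adjust the path if you named your key differently:

```shell
# Tell SSH on the VPS to use the identity key for GitHub.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host github.com
  User git
  IdentityFile ~/.ssh/id_ed25519_github_identity
  IdentitiesOnly yes
EOF
chmod 600 ~/.ssh/config
```

`IdentitiesOnly yes` stops SSH from offering every key it can find, which avoids "too many authentication failures" errors on busy servers.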
Step 3: Creating the Atomic Deployment Script
Now we need that deploy.sh script we referenced above. This script will handle the “Atomic Swap” logic.
On your VPS, create the file:
sudo nano /usr/local/bin/deploy.sh
Paste in this logic. I’ve designed this to prevent the dreaded “404 not found” error that happens with standard `ln -sf` commands.
#!/bin/bash
set -euo pipefail

# Configuration
REPO_URL="[email protected]:yourusername/your-repo.git"
DEPLOY_DIR="/var/www/your-app"
RELEASES_DIR="$DEPLOY_DIR/releases"
TIMESTAMP=$(date +%Y%m%d%H%M%S)
NEW_RELEASE_DIR="$RELEASES_DIR/$TIMESTAMP"

# 1. Clone the new version into a fresh directory
mkdir -p "$RELEASES_DIR"
git clone --depth 1 "$REPO_URL" "$NEW_RELEASE_DIR"

# 2. Build steps (modify as needed)
cd "$NEW_RELEASE_DIR"
# npm install --production
# composer install --no-dev

# 3. The Atomic Swap
# We create a symlink to the new folder, then atomically move it over the old one.
# 'mv -T' is the secret sauce here (Linux only). '-sfn' lets us overwrite a
# stale temp link left behind by a previously failed run.
ln -sfn "$NEW_RELEASE_DIR" "$DEPLOY_DIR/current_tmp"
mv -T "$DEPLOY_DIR/current_tmp" "$DEPLOY_DIR/current"

# 4. Cleanup old releases (keep last 5); xargs -r skips rm when there is nothing to delete
cd "$RELEASES_DIR"
ls -dt -- */ | tail -n +6 | xargs -r -d '\n' rm -rf --

# 5. Reload services
# sudo systemctl reload nginx

echo "Deployed successfully to $NEW_RELEASE_DIR"
Make it executable:
sudo chmod +x /usr/local/bin/deploy.sh
Why mv -T? Standard `ln -sf` actually unlinks the file for a split second before creating the new link. During that millisecond, your site is down. mv -T calls the `rename` syscall, which is an atomic operation in Linux. No downtime, ever.
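You can watch the swap stay atomic on any Linux box with a throwaway demo under `/tmp`:

```shell
# Two fake releases; "current" flips from v1 to v2 with no gap in between.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/releases/v1" "$DEMO/releases/v2"
ln -s "$DEMO/releases/v1" "$DEMO/current"
ln -sfn "$DEMO/releases/v2" "$DEMO/current_tmp"   # stage the new link
mv -T "$DEMO/current_tmp" "$DEMO/current"         # rename(2): atomic swap
readlink "$DEMO/current"
```

At every instant during the `mv`, `current` resolves to a complete release directory — either v1 or v2, never nothing.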
Step 4: Configuring GitHub Secrets
Go to your GitHub Repository -> Settings -> Secrets and variables -> Actions. Add these secrets:
- SSH_PRIVATE_KEY: The content of the private key you generated in Step 1.
- SSH_HOST: Your VPS IP address.
- SSH_USER: The username you log in with (e.g., `ubuntu` or `deploy`).
- SSH_KNOWN_HOSTS: Run `ssh-keyscan -H YOUR_VPS_IP` on your local machine and paste the output here.
The “Known Hosts” Trap
Many guides tell you to use StrictHostKeyChecking=no. Do not do this. It leaves you vulnerable to Man-in-the-Middle attacks. By storing the `known_hosts` signature as a secret, we verify we are connecting to our server, not an imposter.
Step 5: The GitHub Actions Workflow
A recent GitLab Global DevSecOps Report revealed that 67% of code comes from open-source libraries, highlighting significant supply chain risks. Because of this, we will NOT use the popular third-party appleboy/ssh-action. We will use native SSH commands. It’s safer and one less dependency to break.
Create .github/workflows/deploy.yml in your repository:
name: Deploy to VPS

on:
  push:
    branches: [ "main" ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Configure SSH
        env:
          SSH_USER: ${{ secrets.SSH_USER }}
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
          SSH_HOST: ${{ secrets.SSH_HOST }}
          SSH_KNOWN_HOSTS: ${{ secrets.SSH_KNOWN_HOSTS }}
        run: |
          mkdir -p ~/.ssh/
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/deploy_key
          chmod 600 ~/.ssh/deploy_key
          # Use the pinned host key from the secret -- running ssh-keyscan here
          # would defeat the MITM protection we just set up.
          echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
          echo "Host production" >> ~/.ssh/config
          echo "  HostName $SSH_HOST" >> ~/.ssh/config
          echo "  User $SSH_USER" >> ~/.ssh/config
          echo "  IdentityFile ~/.ssh/deploy_key" >> ~/.ssh/config
          echo "  StrictHostKeyChecking yes" >> ~/.ssh/config
      - name: Trigger Deployment
        run: ssh production "true"
        # The command "true" is ignored because of the authorized_keys restriction;
        # the server runs /usr/local/bin/deploy.sh instead.
Troubleshooting Common Errors
Even the best plans hit snags. Here are the most common errors I’ve encountered when setting this up.
| Error Message | Likely Cause | The Fix |
|---|---|---|
| Permission denied (publickey) | Incorrect file permissions on the server. | Run `chmod 600 ~/.ssh/authorized_keys` and `chmod 700 ~/.ssh` on the VPS. |
| Host key verification failed | The SSH_KNOWN_HOSTS secret is missing or mismatched. | Re-run `ssh-keyscan -H IP` and update the GitHub Secret. |
| Dial TCP Timeout | Firewall (UFW/AWS Security Group) blocking Port 22. | Ensure Port 22 is open to the world (0.0.0.0/0) BUT protected by key-only auth and Fail2Ban. |
Conclusion
You now have a deployment pipeline that puts you in the top 1% of developers regarding security and reliability. You aren’t just flinging files over the internet; you are using cryptographic verification, restricted execution scopes, and atomic filesystem operations.
If you found this helpful, your next step is to set up a “Rollback” script. Since we keep the last 5 releases in our folder, a rollback is as simple as repointing the symlink to the previous timestamp. Happy deploying!
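Here's one way that rollback could look — a hedged sketch, demonstrated on a temporary directory so it's safe to run anywhere (on a real server, point `DEPLOY_DIR` at `/var/www/your-app`):

```shell
# Repoint "current" at the second-newest release. Timestamped names sort
# chronologically, so `sort -r` puts the newest first and line 2 is "previous".
DEPLOY_DIR=$(mktemp -d)   # demo only; use /var/www/your-app in production
mkdir -p "$DEPLOY_DIR/releases/20240101000000" "$DEPLOY_DIR/releases/20240102000000"
ln -s "$DEPLOY_DIR/releases/20240102000000" "$DEPLOY_DIR/current"

PREVIOUS=$(ls -d "$DEPLOY_DIR"/releases/*/ | sort -r | sed -n '2p')
ln -sfn "${PREVIOUS%/}" "$DEPLOY_DIR/current_tmp"
mv -T "$DEPLOY_DIR/current_tmp" "$DEPLOY_DIR/current"   # same atomic swap as deploy
readlink "$DEPLOY_DIR/current"
```

Because it reuses the same staged-symlink-plus-`mv -T` trick as the deploy script, a rollback is just as atomic as a deploy.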