A cheap live streaming server setup does not require expensive managed platforms or enterprise infrastructure. With a $5 VPS, some open source live streaming tools, and about thirty minutes of configuration, you can build a fully functional budget live streaming relay server that ingests RTMP streams and delivers HLS video to hundreds of concurrent viewers. This guide walks through every step, from provisioning the server to configuring nginx-rtmp and optimizing for real-world performance.
Why Self-Host a Live Stream Relay Server?
Commercial live streaming platforms like Mux, Wowza, and AWS MediaLive are excellent for large-scale production. But for indie developers, hobbyist streamers, small community events, or personal projects, their pricing can be prohibitive. A single hour of live streaming through a managed service can cost more than an entire month of VPS hosting.
A self-hosted live stream relay gives you complete control over your streaming pipeline at a fraction of the cost. Here is what you gain by building your own:
- Cost: A capable VPS costs $5-$10/month versus $50-$500+ for managed streaming services. True live streaming on a $5 server is entirely achievable for small audiences.
- Control: You own the entire stack. No vendor lock-in, no arbitrary limits on stream keys, no surprise bandwidth overage fees.
- Privacy: Your stream data stays on your infrastructure. No third-party analytics, no content moderation algorithms, no terms-of-service surprises.
- Customization: Integrate authentication, custom recording pipelines, multi-destination relaying, or webhook notifications exactly the way you want.
- Learning: Understanding streaming infrastructure from the ground up makes you a better developer and gives you transferable DevOps skills.
This is the approach I took when building LatestRelayer, an open-source relay server designed specifically for budget VPS deployments. The principles in this article apply whether you use LatestRelayer, nginx-rtmp, or another open source tool.
Understanding the Streaming Protocol Stack
Before diving into configuration, it helps to understand the protocols involved in live streaming infrastructure. The two that matter most for a budget setup are RTMP and HLS.
RTMP (Real-Time Messaging Protocol)
RTMP is the protocol your streaming software (OBS Studio, Streamlabs, ffmpeg) uses to send video to the server. Originally developed by Macromedia for Flash, RTMP has outlived Flash itself and remains the dominant ingest protocol for live video. It operates over TCP on port 1935 by default, maintains a persistent connection, and supports low-latency delivery of H.264 video and AAC audio.
When you point OBS at rtmp://your-server.com/live with a stream key, OBS establishes an RTMP connection and begins pushing encoded video frames to your server. The server receives these frames and can do several things: record them to disk, relay them to another RTMP server, or convert them to a different protocol for viewer delivery.
HLS (HTTP Live Streaming)
HLS is the protocol your viewers use to watch the stream. Developed by Apple, HLS works by segmenting the live video into short .ts (MPEG-2 Transport Stream) files, typically 2-6 seconds each, and serving them over standard HTTP. A manifest file (.m3u8) lists the available segments, and the video player fetches them sequentially.
The beauty of HLS for a budget server is that it uses standard HTTP. This means you can serve HLS segments through any web server, cache them with a CDN, and handle far more concurrent viewers than a protocol that requires persistent connections. A $5 VPS with 1 TB of bandwidth can serve HLS to hundreds of simultaneous viewers for a typical stream.
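To make the manifest format concrete, here is a minimal sketch in Node-style JavaScript that parses a media playlist into segment entries. The sample playlist is hand-written to mimic the shape of nginx-rtmp's HLS output (the filenames and sequence numbers are illustrative); real players such as hls.js handle many more tags and edge cases.

```javascript
// Minimal HLS media-playlist parser (illustrative only).
function parsePlaylist(m3u8) {
  const lines = m3u8.split('\n').map(l => l.trim()).filter(Boolean);
  const segments = [];
  let pendingDuration = null;
  for (const line of lines) {
    if (line.startsWith('#EXTINF:')) {
      // "#EXTINF:3.000," -> duration in seconds
      pendingDuration = parseFloat(line.slice('#EXTINF:'.length));
    } else if (!line.startsWith('#')) {
      // Any non-tag line is a segment URI
      segments.push({ uri: line, duration: pendingDuration });
      pendingDuration = null;
    }
  }
  return segments;
}

// Hand-written sample shaped like nginx-rtmp's output
const sample = `#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:42
#EXT-X-TARGETDURATION:3
#EXTINF:3.000,
mystream-42.ts
#EXTINF:3.000,
mystream-43.ts
#EXTINF:2.966,
mystream-44.ts`;

console.log(parsePlaylist(sample));
```

A player does essentially this in a loop: re-fetch the manifest every few seconds, diff the segment list, and download whatever is new.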
The Relay Pipeline
The complete pipeline looks like this:
Broadcaster (OBS/ffmpeg)
|
| RTMP push (port 1935)
v
Your Relay Server (nginx-rtmp / LatestRelayer)
|
| Transmux to HLS segments (.ts + .m3u8)
v
Web Server (nginx HTTP / built-in)
|
| HTTP/HTTPS delivery
v
Viewers (browser, VLC, mobile player)
The relay server sits at the center. It accepts the RTMP ingest, transmuxes the video into HLS segments on the fly, and serves those segments over HTTP to any number of viewers. Because it is transmuxing rather than transcoding, it repackages the stream into a new container format without re-encoding the video, which means it uses almost no CPU. This is the key insight that makes live streaming on a $5 server possible.
Cheap Live Streaming Server Setup: Step by Step
Let's build a working relay server from scratch. I will cover two approaches: the traditional nginx-rtmp setup and the lighter-weight LatestRelayer approach.
Step 1: Provision a VPS
Any VPS provider will work. DigitalOcean, Vultr, Linode, and Hetzner all offer capable $5/month plans. For a single-stream relay you need:
- 1 vCPU (transmuxing is lightweight)
- 512 MB - 1 GB RAM
- 1 TB+ monthly bandwidth
- Ubuntu 22.04 or Debian 12 (this guide uses Ubuntu)
Provision the server, SSH in, and run the standard hardening steps:
# Update packages
sudo apt update && sudo apt upgrade -y
# Set up a firewall
sudo ufw allow OpenSSH
sudo ufw allow 1935/tcp # RTMP ingest
sudo ufw allow 80/tcp # HTTP for HLS
sudo ufw allow 443/tcp # HTTPS (optional, for TLS)
sudo ufw enable
# Create a non-root user (if you haven't already)
sudo adduser streamer
sudo usermod -aG sudo streamer
Step 2: Install nginx with the RTMP Module
The nginx-rtmp-module is the workhorse of self-hosted live streaming. It adds RTMP server capabilities to nginx, including live stream ingestion, HLS transmuxing, and relay/push features. On Ubuntu, you can install it from the package manager:
# Install nginx with the RTMP module
sudo apt install -y libnginx-mod-rtmp nginx
# Verify the module loaded
nginx -V 2>&1 | grep rtmp
If you need a more recent version or custom compilation flags, you can build from source. But for most budget setups, the packaged version works perfectly.
Step 3: Configure nginx-rtmp for Live Streaming
The nginx configuration needs two blocks: an rtmp block for ingest and the standard http block for serving HLS. Edit /etc/nginx/nginx.conf:
# /etc/nginx/nginx.conf
worker_processes auto;

events {
    worker_connections 1024;
}

rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;

            # HLS output
            hls on;
            hls_path /var/www/hls;
            hls_fragment 3;
            hls_playlist_length 30;

            # Optional: restrict publishing by IP
            # allow publish 203.0.113.50;
            # deny publish all;

            # Optional: push to another RTMP server
            # push rtmp://backup-server.com/live;
        }
    }
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name your-server.com;

        location /hls {
            alias /var/www/hls;

            # CORS headers for browser playback
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Allow-Methods 'GET, OPTIONS';
            add_header Cache-Control no-cache;

            # MIME types for HLS
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
        }

        location / {
            root /var/www/html;
            index index.html;
        }
    }
}
Create the HLS output directory and set permissions:
sudo mkdir -p /var/www/hls
sudo chown -R www-data:www-data /var/www/hls
sudo systemctl restart nginx
That is a fully functional live streaming relay. Point OBS at rtmp://your-server-ip/live with any stream key (e.g., mystream), and the HLS output becomes available at http://your-server-ip/hls/mystream.m3u8.
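Before pointing OBS at it, you can smoke-test the relay from any machine that has ffmpeg, using a synthetic test source. The host and stream key below are placeholders; the -g 60 flag produces a keyframe every 2 seconds at 30 fps, which keeps segment cutting clean.

```shell
# Push 30 seconds of synthetic video and a test tone to the relay
# (replace your-server-ip and mystream with your own values)
ffmpeg -re -t 30 \
  -f lavfi -i "testsrc2=size=1280x720:rate=30" \
  -f lavfi -i "sine=frequency=440" \
  -c:v libx264 -preset veryfast -b:v 2500k -g 60 \
  -c:a aac -b:a 128k \
  -f flv rtmp://your-server-ip/live/mystream

# In a second terminal, confirm the playlist appears
curl http://your-server-ip/hls/mystream.m3u8
```

If the curl returns an .m3u8 manifest, the full ingest-to-HLS path is working.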
Step 4: Add Stream Key Authentication
Without authentication, anyone who discovers your RTMP port can push a stream. The nginx-rtmp module supports an on_publish callback that sends a POST request to an HTTP endpoint for validation. Add this inside the application live block:
application live {
    live on;
    record off;

    hls on;
    hls_path /var/www/hls;
    hls_fragment 3;
    hls_playlist_length 30;

    # Authentication callback
    on_publish http://127.0.0.1:8080/auth;
    on_publish_done http://127.0.0.1:8080/done;
}
Then create a minimal authentication service. Here is a simple example in Node.js:
// auth-server.js
const http = require('http');

const VALID_KEYS = new Set(['my-secret-key-123', 'backup-key-456']);

const server = http.createServer((req, res) => {
  if (req.url === '/auth') {
    let body = '';
    req.on('data', chunk => body += chunk);
    req.on('end', () => {
      const params = new URLSearchParams(body);
      const streamKey = params.get('name');
      if (VALID_KEYS.has(streamKey)) {
        console.log(`Stream authorized: ${streamKey}`);
        res.writeHead(200);
        res.end();
      } else {
        console.log(`Stream rejected: ${streamKey}`);
        res.writeHead(403);
        res.end();
      }
    });
  } else if (req.url === '/done') {
    console.log('Stream ended');
    res.writeHead(200);
    res.end();
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8080, '127.0.0.1', () => {
  console.log('Auth server listening on port 8080');
});
Run it with node auth-server.js or set it up as a systemd service. The RTMP module will send a POST request with the stream name (your stream key) to the auth endpoint. Return a 200 to allow, or any other status code to reject.
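The callback contract is easy to reason about in isolation: nginx-rtmp sends an application/x-www-form-urlencoded body (fields such as app, name, and addr), and the response status alone decides the outcome. Here is a sketch of just the decision logic, mirroring the VALID_KEYS set above; the sample request bodies are illustrative.

```javascript
// Decide the HTTP status for an on_publish callback body.
const VALID_KEYS = new Set(['my-secret-key-123', 'backup-key-456']);

function publishStatus(body) {
  const params = new URLSearchParams(body);
  const streamKey = params.get('name'); // "name" carries the stream key
  return VALID_KEYS.has(streamKey) ? 200 : 403; // 200 allows; anything else rejects
}

console.log(publishStatus('app=live&name=my-secret-key-123&addr=203.0.113.7')); // 200
console.log(publishStatus('app=live&name=wrong-key'));                          // 403
```

Keeping the decision in a pure function like this also makes it trivial to unit-test before you put it behind an HTTP handler.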
An Affordable Alternative to Live Streaming Services: LatestRelayer
While nginx-rtmp is the industry standard, it was designed as a general-purpose module and carries some overhead that is unnecessary for pure relay use cases. That is why I built LatestRelayer as a purpose-built, lightweight alternative specifically optimized for budget VPS deployments.
LatestRelayer differs from nginx-rtmp in several important ways:
- Lower memory footprint: Designed from the ground up to run on 512 MB servers without swapping.
- Simpler configuration: A single config file with sane defaults means you can be streaming in under five minutes.
- Built-in authentication: Stream key validation is built into the server rather than requiring a separate HTTP callback service.
- Automatic cleanup: HLS segments are automatically purged when a stream ends, preventing disk usage from growing indefinitely.
- Integrated HTTP server: No need to configure a separate nginx HTTP server for HLS delivery.
You can find the source code, documentation, and installation instructions on the LatestRelayer GitHub repository. It is fully open source and designed specifically as an affordable alternative to live streaming services for indie developers and small teams.
Optimizing Performance on a Budget Server
A $5 VPS has real constraints: limited CPU, RAM, and bandwidth. Here are the optimizations that matter most for a low cost live stream relay.
HLS Fragment Tuning
The hls_fragment and hls_playlist_length values directly affect latency, buffering, and disk I/O:
# Lower latency (2-second fragments, 6-second window)
hls_fragment 2;
hls_playlist_length 6;
# More stable (4-second fragments, 20-second window)
hls_fragment 4;
hls_playlist_length 20;
Shorter fragments reduce latency but increase disk I/O and the number of HTTP requests from viewers. On a $5 server, 3-second fragments with a 15-30 second playlist length are a good balance. If you are streaming a live event where latency matters less than stability, use 4-second fragments.
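A useful rule of thumb for what these numbers mean in practice: HLS players commonly buffer around three segments before starting playback, so glass-to-glass latency lands near three times the fragment length, plus a second or two of encoding and network overhead. A throwaway estimator built on that assumption:

```javascript
// Rough HLS latency estimate: players commonly buffer ~3 segments.
function approxLatencySec(fragmentSec, bufferedSegments = 3) {
  return fragmentSec * bufferedSegments;
}

console.log(approxLatencySec(2)); // 6  - the low-latency tuning above
console.log(approxLatencySec(4)); // 12 - the stability-first tuning
```

This is why shaving a second off the fragment length moves latency by roughly three seconds on the viewer side.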
Encoder Settings (OBS / ffmpeg)
Your server is not transcoding, so the encoder settings on the broadcaster side determine everything about quality and bandwidth. For a $5 VPS with 1 TB of monthly transfer, target these settings:
# Recommended OBS settings for budget relay
Video Bitrate: 2500-4000 Kbps (1080p) or 1500-2500 Kbps (720p)
Audio Bitrate: 128 Kbps AAC
Encoder: x264 or hardware (NVENC/QuickSync)
Keyframe Interval: 2 seconds (must match or divide evenly into hls_fragment)
Profile: Main or High
Preset: veryfast (for x264)
The keyframe interval is critical. HLS can only split segments on keyframes, so if your keyframe interval is 4 seconds but your fragment length is 3, you will get irregular segment sizes. Set the keyframe interval to match your fragment length exactly.
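This constraint is easy to check mechanically. A throwaway helper (hypothetical, not part of any tool mentioned here) that flags a mismatched encoder configuration:

```javascript
// HLS cuts segments on keyframes, so the fragment length should be
// a whole multiple of the keyframe interval (both in seconds).
function keyframeConfigOk(keyframeIntervalSec, hlsFragmentSec) {
  return hlsFragmentSec % keyframeIntervalSec === 0;
}

console.log(keyframeConfigOk(2, 2)); // true  - keyframe on every segment boundary
console.log(keyframeConfigOk(2, 4)); // true  - still lands on keyframes
console.log(keyframeConfigOk(4, 3)); // false - the mismatch described above
```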
Bandwidth Calculations
Understanding bandwidth math helps you plan capacity. Here is a practical example:
# Bandwidth calculation for a budget relay
Stream bitrate: 3000 Kbps (3 Mbps)
Concurrent viewers: 50
Total egress: 50 x 3 Mbps = 150 Mbps
Hourly bandwidth: 150 Mbps x 3600s = 540,000 Mb = 67.5 GB/hour
Monthly (1 TB cap): ~14.8 hours of streaming at 50 viewers
# At 720p / 2 Mbps with 20 viewers:
Hourly bandwidth: 20 x 2 Mbps x 3600s = 18 GB/hour
Monthly (1 TB cap): ~55 hours of streaming
For small audiences of 10-30 viewers at 720p, a $5 VPS with 1 TB of bandwidth gives you dozens of hours of streaming per month. That is more than enough for weekly community events, game nights, or personal projects.
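The same arithmetic, as a small helper you can adapt for your own plan. The unit choices are mine: bitrate in Mbps, cap in GB, using the decimal units (1 GB = 8,000 Mb) that VPS providers bill in.

```javascript
// Hours of streaming a monthly bandwidth cap supports.
function streamingHours(bitrateMbps, viewers, capGB) {
  const egressMbps = bitrateMbps * viewers;         // total outbound rate
  const gbPerHour = (egressMbps * 3600) / 8 / 1000; // megabits -> gigabytes per hour
  return capGB / gbPerHour;
}

console.log(streamingHours(3, 50, 1000).toFixed(1)); // "14.8" - 1080p, 50 viewers
console.log(streamingHours(2, 20, 1000).toFixed(1)); // "55.6" - 720p, 20 viewers
```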
tmpfs for HLS Segments
HLS segments are written to disk and deleted within seconds. This constant write/delete cycle can wear on SSDs and add latency. Mount the HLS directory as a tmpfs (RAM disk) to eliminate disk I/O entirely:
# Add to /etc/fstab
tmpfs /var/www/hls tmpfs nodev,nosuid,size=128M 0 0
# Mount it
sudo mount -a
# Verify
df -h /var/www/hls
On a 1 GB VPS, allocating 128 MB for tmpfs is reasonable. A 3 Mbps stream with 3-second fragments produces roughly 1.1 MB segments, and a playlist window of 10 segments uses about 11 MB. Even with multiple streams, 128 MB is generous.
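To size the tmpfs for different settings, the per-segment math is straightforward (decimal megabytes again; this assumes pure transmuxing with a single rendition):

```javascript
// RAM needed for one stream's HLS segment window.
function hlsWindowMB(bitrateMbps, fragmentSec, segmentsInWindow) {
  const segmentMB = (bitrateMbps * fragmentSec) / 8; // megabits -> megabytes
  return segmentMB * segmentsInWindow;
}

console.log(hlsWindowMB(3, 3, 1));  // 1.125 - MB per segment
console.log(hlsWindowMB(3, 3, 10)); // 11.25 - MB for a 10-segment window
```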
Adding HTTPS with Let's Encrypt
Modern browsers require HTTPS for many media APIs. Adding TLS is free with Let's Encrypt and takes about two minutes:
# Install certbot
sudo apt install -y certbot python3-certbot-nginx
# Obtain and install certificate
sudo certbot --nginx -d your-stream-domain.com
# Certbot automatically updates your nginx config
# and sets up auto-renewal via systemd timer
After this, your HLS streams are available at https://your-stream-domain.com/hls/mystream.m3u8 and can be embedded in any web page.
Building a Viewer Page
To embed your live stream on a web page, use hls.js, the open-source JavaScript library that enables HLS playback in browsers that do not natively support it (essentially everything except Safari, which handles HLS natively):
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Live Stream</title>
  <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
  <style>
    video {
      max-width: 100%;
      background: #000;
    }
  </style>
</head>
<body>
  <video id="player" controls autoplay muted></video>
  <script>
    const videoSrc = '/hls/mystream.m3u8';
    const video = document.getElementById('player');

    if (Hls.isSupported()) {
      const hls = new Hls({
        liveDurationInfinity: true,
        liveBackBufferLength: 0,
        maxBufferLength: 10,
        maxMaxBufferLength: 30
      });
      hls.loadSource(videoSrc);
      hls.attachMedia(video);
      hls.on(Hls.Events.MANIFEST_PARSED, () => {
        video.play().catch(() => {});
      });
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
      // Safari native HLS support
      video.src = videoSrc;
      video.addEventListener('loadedmetadata', () => {
        video.play().catch(() => {});
      });
    }
  </script>
</body>
</html>
Place this file in /var/www/html/index.html and you have a complete viewer page. The hls.js configuration options shown above are tuned for live streaming: they limit the buffer to keep latency low and disable the back buffer to minimize memory usage on the client side.
Multi-Destination Relaying
One of the most powerful features of a self-hosted relay is the ability to push your stream to multiple destinations simultaneously. This is especially useful if you want to broadcast to YouTube, Twitch, and your own server from a single OBS session. Add push directives to your nginx-rtmp config:
application live {
    live on;
    record off;

    hls on;
    hls_path /var/www/hls;
    hls_fragment 3;
    hls_playlist_length 30;

    # Relay to YouTube Live
    push rtmp://a.rtmp.youtube.com/live2/YOUR-YOUTUBE-STREAM-KEY;

    # Relay to Twitch
    push rtmp://live.twitch.tv/app/YOUR-TWITCH-STREAM-KEY;

    # Relay to a backup server
    push rtmp://backup.example.com/live/mystream;
}
Your broadcaster sends one stream to your server, and your server fans it out to every destination. This saves upstream bandwidth on the broadcaster side and gives you a single point of control for multi-platform delivery. This capability alone makes a self-hosted live stream relay worth the minimal setup effort.
Monitoring and Troubleshooting
Running a production stream means knowing what is happening in real time. Here are the essential monitoring techniques for a budget relay.
nginx-rtmp Statistics
Enable the built-in stats module by adding a location block to your HTTP server:
location /stat {
    rtmp_stat all;
    rtmp_stat_stylesheet stat.xsl;

    # Restrict access
    allow 127.0.0.1;
    allow YOUR_IP;
    deny all;
}

# Serve the stylesheet referenced above (copy stat.xsl from the
# nginx-rtmp-module source tree into /var/www/html)
location = /stat.xsl {
    root /var/www/html;
}
This gives you a real-time XML feed of active streams, connected clients, bandwidth usage, and uptime. You can also parse this endpoint programmatically to build custom dashboards or alerting.
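As a starting point for parsing the endpoint programmatically, here is a sketch that pulls stream names and client counts out of the stat XML with regular expressions. The sample document is hand-written to mimic the feed's shape — field names can vary by module version, so check your server's actual output, and prefer a real XML parser for anything beyond a quick script.

```javascript
// Extract per-stream client counts from nginx-rtmp /stat XML.
function parseStat(xml) {
  const streams = [];
  // Each <stream>...</stream> block describes one live stream.
  const blocks = xml.match(/<stream>[\s\S]*?<\/stream>/g) || [];
  for (const block of blocks) {
    const name = (block.match(/<name>([^<]+)<\/name>/) || [])[1];
    const clients = Number((block.match(/<nclients>(\d+)<\/nclients>/) || [])[1] || 0);
    streams.push({ name, clients });
  }
  return streams;
}

// Hand-written sample shaped like the module's output
const sampleXml = `<rtmp><server><application><name>live</name><live>
<stream><name>mystream</name><time>120000</time><nclients>12</nclients></stream>
</live></application></server></rtmp>`;

console.log(parseStat(sampleXml)); // [ { name: 'mystream', clients: 12 } ]
```

Polling this every few seconds is enough to drive a simple dashboard or a "stream went live" webhook.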
System Resource Monitoring
On a constrained VPS, keep an eye on CPU, memory, and bandwidth:
# Watch CPU and memory in real time
htop
# Monitor network throughput
sudo apt install -y nload
nload eth0
# Check disk usage (important if not using tmpfs)
df -h /var/www/hls
# Watch nginx error log for issues
sudo tail -f /var/log/nginx/error.log
Common Issues and Fixes
After running budget relay servers across several projects, these are the issues I encounter most often:
- Stream connects but no playback: Usually a keyframe interval mismatch. Ensure your OBS keyframe interval matches or divides evenly into your hls_fragment setting.
- High latency (30+ seconds): Reduce hls_fragment to 2 and hls_playlist_length to 6. Also check that hls.js is configured with a short maxBufferLength.
- Choppy playback for viewers: The broadcaster's upload speed may be insufficient for the configured bitrate. Reduce the video bitrate or switch to 720p.
- Server runs out of disk: HLS segments are not being cleaned up. Check that hls_cleanup on; is set (it is the default) or switch to tmpfs.
- Permission denied errors: Ensure the nginx worker process (usually www-data) has write access to the HLS output directory.
Cost Comparison: Self-Hosted vs. Managed Services
To put the budget advantages in perspective, here is a realistic comparison for a small streamer doing 20 hours per month at 720p with 25 average concurrent viewers:
- Self-hosted ($5 VPS): $5/month. Total annual cost: $60.
- Mux: ~$0.075/min delivery + $0.025/min encoding. ~$120/month for the same usage.
- AWS MediaLive + CloudFront: ~$80-150/month depending on region and configuration.
- Wowza Streaming Cloud: ~$75-149/month on their starter plans.
The managed services offer features you do not get with a bare VPS: adaptive bitrate transcoding, global CDN distribution, analytics dashboards, and 24/7 support. For large audiences or professional broadcasts, those features matter. But for indie projects, community streams, and personal use, the self-hosted approach delivers 95% of the functionality at 5% of the cost.
When to Scale Beyond a $5 Server
A budget VPS has real limits. Consider upgrading when:
- You regularly exceed 100 concurrent viewers: At this point, bandwidth becomes the bottleneck. Move to a $20 VPS with more bandwidth or put a CDN (Cloudflare, BunnyCDN) in front of your HLS output.
- You need adaptive bitrate (ABR): Serving multiple quality levels requires transcoding, which needs significantly more CPU. A 4-vCPU server or dedicated transcoding service becomes necessary.
- You need geographic distribution: If your viewers are spread across continents, a single server means high latency for distant viewers. A CDN or multi-region relay setup helps.
- Uptime is critical: A $5 VPS is a single point of failure. For production use, add redundancy with a backup server and health checks.
The good news is that your nginx-rtmp configuration scales with you. The same config that runs on a $5 VPS works identically on a $40 dedicated server. You just get more headroom.
Wrapping Up
Self-hosting a live streaming relay is one of the best bang-for-the-buck infrastructure projects a developer can take on. For the price of a coffee per month, you get a fully functional streaming server that you control completely. The combination of nginx-rtmp for RTMP ingest, HLS for viewer delivery, and a handful of optimization tweaks makes live streaming on a $5 server not just possible but genuinely practical.
If you want an even simpler path, check out LatestRelayer on GitHub for a purpose-built solution that handles the common cases with less configuration. And if you are interested in how streaming infrastructure fits into a broader application architecture, my article on building AI-powered web applications covers patterns for integrating real-time services into modern web stacks.
The tools are open source, the servers are cheap, and the knowledge is transferable. There has never been a better time to build your own streaming infrastructure.