NGINX Load Balancing Explained: Deploy React and Node.js with Multiple Servers
Learn how to deploy React and Node.js applications using NGINX with load balancing. This complete guide explains server setup, reverse proxy configuration, React static hosting, and load balancing algorithms like Round Robin, Least Connections, IP Hash, and Weighted balancing.
Deploying React and Node.js Applications with NGINX Load Balancing
This guide explains how to deploy a React frontend and Node.js backend using NGINX as a reverse proxy and load balancer. The architecture allows applications to scale and handle large numbers of users by distributing traffic across multiple servers.
Application Architecture
          Internet
             |
             v
     +-----------------+
     |      NGINX      |
     |  Load Balancer  |
     +-----------------+
        |           |
        |           +--> /api/ --> Server A (NodeJS API)
        |           |
        |           +--> /api/ --> Server B (NodeJS API)
        |
        +--> /  --> React static build (served directly by NGINX)
Step 1: Install Required Software
Install Node.js
sudo apt update
sudo apt install nodejs npm
Check the installed versions:
node -v
npm -v
Step 2: Deploy Node.js Backend
Example Node.js API server:
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Response from Server A");
});

app.listen(3000, () => {
  console.log("Server running on port 3000");
});
Install dependencies:
npm install express
Run server:
node app.js
The API runs at:
http://localhost:3000
Step 3: Deploy Backend on Multiple Servers
Run the same backend on two machines.
Server A → 192.168.1.10:3000
Server B → 192.168.1.11:3000
Test servers:
curl http://192.168.1.10:3000
curl http://192.168.1.11:3000
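To tell the two machines apart while testing, the response message can be parameterized instead of hard-coded. A minimal sketch, assuming an environment variable named SERVER_NAME (a hypothetical name, set per machine, e.g. SERVER_NAME="Server B" node app.js):

```javascript
// Build the response text from the server's configured name.
function makeMessage(name) {
  return `Response from ${name}`;
}

// In the Express handler from Step 2, the hard-coded string becomes:
//   res.send(makeMessage(process.env.SERVER_NAME || "unknown server"));
console.log(makeMessage(process.env.SERVER_NAME || "Server A"));
```

With this, the curl tests above show at a glance which backend answered.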
Step 4: Build React Application
Build your React project:
npm run build
This generates a build folder:
build/
├── index.html
├── static/
Step 5: Copy React Build to Server
Typical static location used by NGINX:
/var/www/react-app
Copy files:
sudo mkdir -p /var/www/react-app
sudo cp -r build/* /var/www/react-app
Step 6: NGINX Configuration Location
Main config:
/etc/nginx/nginx.conf
Virtual hosts:
/etc/nginx/sites-available/
/etc/nginx/sites-enabled/
Create configuration:
sudo nano /etc/nginx/sites-available/myapp
Step 7: Basic NGINX Configuration
server {
listen 80;
server_name example.com;
root /var/www/react-app;
index index.html;
location / {
try_files $uri /index.html;
}
}
This serves the React static application.
Step 8: Add API Reverse Proxy
Proxy API calls to backend servers:
location /api/ {
    proxy_pass http://backend_servers;
}
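Note that because proxy_pass carries no URI part here, the /api/ prefix is forwarded to the backend unchanged, so the Node.js routes must be defined under /api/ (writing proxy_pass http://backend_servers/ with a trailing slash would strip the prefix instead). It is also common to pass the original request details through to the backend, as in this sketch:

```nginx
location /api/ {
    proxy_pass http://backend_servers;

    # Let the backend see the original host and client address
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```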
Step 9: Configure Load Balancing
upstream backend_servers {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
Complete Configuration
upstream backend_servers {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}

server {
    listen 80;
    server_name example.com;

    root /var/www/react-app;
    index index.html;

    location / {
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://backend_servers;
    }
}
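Before this configuration takes effect, the site has to be enabled and NGINX reloaded. On a Debian/Ubuntu layout (assumed here, matching the sites-available path from Step 6):

```shell
# Enable the site by symlinking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/

# Validate the configuration before applying it
sudo nginx -t

# Reload NGINX without dropping existing connections
sudo systemctl reload nginx
```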
Load Balancing Algorithms in NGINX
1. Round Robin (Default)
Requests are distributed sequentially across servers.
Request 1 → Server A
Request 2 → Server B
Request 3 → Server A
Request 4 → Server B
upstream backend_servers {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
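The selection logic can be sketched in a few lines of JavaScript (an illustration only, not NGINX internals):

```javascript
// Toy round-robin picker: cycle through the server list in order.
const servers = ["192.168.1.10:3000", "192.168.1.11:3000"];
let next = 0;

function pickServer() {
  const server = servers[next];
  next = (next + 1) % servers.length; // advance the cursor, wrapping around
  return server;
}

// Four calls alternate: A, B, A, B — matching the request table above.
```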
2. Least Connections Load Balancing
Requests go to the server with the least active connections.
Server A → 120 connections
Server B → 30 connections
Next request → Server B
upstream backend_servers {
    least_conn;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
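The idea behind least_conn can be sketched as a simple selection over connection counts (a toy illustration, not NGINX internals):

```javascript
// Pick the server whose active-connection count is lowest.
function pickLeastConnected(activeConnections) {
  return Object.entries(activeConnections).reduce((best, current) =>
    current[1] < best[1] ? current : best
  )[0];
}

// With the counts from the example above:
const counts = { "192.168.1.10:3000": 120, "192.168.1.11:3000": 30 };
// pickLeastConnected(counts) → "192.168.1.11:3000"
```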
3. IP Hash Load Balancing
Same client IP is always routed to the same backend server.
User IP → 192.168.1.50 → Server A
User IP → 192.168.1.60 → Server B
upstream backend_servers {
    ip_hash;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
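Conceptually, the client address is hashed to an index into the server list, so the mapping is deterministic. A simplified sketch (NGINX's real ip_hash uses the first three octets of an IPv4 address; this toy version hashes the whole address):

```javascript
// Map a client IP to a fixed server: same input, same output, every time.
function pickByIpHash(ip, servers) {
  const hash = ip.split(".").reduce((acc, octet) => acc + Number(octet), 0);
  return servers[hash % servers.length];
}

const servers = ["192.168.1.10:3000", "192.168.1.11:3000"];
// pickByIpHash("192.168.1.50", servers) returns the same server on every call,
// which is what keeps session state sticky to one backend.
```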
4. Generic Hash Load Balancing
Traffic is distributed based on a custom hash value.
/product/101 → Server A
/product/202 → Server B
upstream backend_servers {
    hash $request_uri;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
5. Weighted Load Balancing
Assign traffic weight based on server capacity.
Server A weight = 3
Server B weight = 1
upstream backend_servers {
    server 192.168.1.10:3000 weight=3;
    server 192.168.1.11:3000 weight=1;
}
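The effect of the weights can be sketched by expanding the pool in proportion to each weight (a toy model; NGINX actually uses a smoother weighted round-robin, but the long-run ratio is the same):

```javascript
// Repeat each server in the pool according to its weight.
const weighted = [
  { server: "192.168.1.10:3000", weight: 3 },
  { server: "192.168.1.11:3000", weight: 1 },
];

const pool = weighted.flatMap(({ server, weight }) =>
  Array(weight).fill(server)
);
// Out of every 4 requests, 3 hit Server A and 1 hits Server B.
```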
6. Backup Server
The backup server receives traffic only when the primary servers are unavailable.
upstream backend_servers {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
    server 192.168.1.12:3000 backup;
}
7. Failure Handling
upstream backend_servers {
    server 192.168.1.10:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000;
}
If a server fails 3 times (max_fails=3) within 30 seconds (fail_timeout=30s), NGINX marks it unavailable and removes it from the load-balancing pool for the duration of fail_timeout before trying it again.
Full Request Flow
     User Browser
          |
          v
    +-----------+
    |   NGINX   |
    +-----------+
      |       |
      v       v
  Server A  Server B
  (NodeJS)  (NodeJS)
      |       |
      v       v
      Database
Testing Load Balancing
curl http://example.com/api/
Note the trailing slash, which matches the location /api/ block. Repeated requests may be answered by different backend servers, which shows that load balancing is working correctly.
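To watch the distribution directly, send several requests in a loop (this assumes example.com resolves to your NGINX host and that each backend returns a distinct message):

```shell
# With round robin, the responses should alternate between Server A and B.
for i in 1 2 3 4; do
  curl -s http://example.com/api/
  echo
done
```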
Benefits of This Architecture
High availability
Better scalability
Improved performance
Fault tolerance
Efficient traffic distribution
Using this architecture, modern applications can handle large traffic while maintaining stability and performance.



