How to Deploy a MERN Application on DigitalOcean

So today, let's have a look at how to deploy a full MERN application.
You can deploy the complete MERN application on a DigitalOcean VPS using this workflow even if you choose MySQL instead of MongoDB. The only part that changes is the database configuration; everything else in the deployment process remains identical.
I will skip the server creation part in this article; you can use any droplet to follow along.
First, log in to your DigitalOcean server:
ssh root@<server-ip> -i <private-key>
Initial Server Setup with Ubuntu 20.04
Before we jump into the actual deployment, we need to handle a few essential setup tasks on the server. The first step is creating a user account with limited privileges so we don’t rely on the root user for everyday operations. This is an important security practice because it reduces the risk of accidental system‑wide changes or unauthorized access.
Creating a user account with limited privileges
Before we continue with the deployment, let’s create a new user account that we’ll use for day‑to‑day server operations. This is safer than using the root account directly.
adduser <user-name>
This command creates a regular user with limited privileges, which is the recommended way to manage a Linux server.
Granting administrative privileges (without becoming full root)
Even though this user is non‑root, we still need the ability to install packages, manage services, and perform system‑level tasks during setup. For that, we can add the user to the sudo group.
This does not grant full root privileges permanently. Instead, it allows the user to run individual commands as root using sudo, which is safer because every privileged action must be explicitly requested.
usermod -aG sudo <user-name>
Now the new user can perform administrative tasks when needed, but still operates as a normal user by default.
Copying your SSH key to the new user
To log in as this new user using SSH, we need to make sure they have the same SSH key we used earlier. The easiest way is to copy your existing .ssh directory to the new user’s home folder and set the correct ownership.
rsync --archive --chown=asela:asela ~/.ssh /home/asela
This ensures:
The new user can log in using your existing SSH key.
File permissions remain correct.
You don’t need to generate a new key unless you prefer to.
If you want to use a separate SSH key for this user, you can generate a new one and place it in /home/<user-name>/.ssh/authorized_keys instead.
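For example, a dedicated key could be generated like this; the file path and key type here are just illustrative choices:

```shell
# Generate a new ed25519 key pair (example path, empty passphrase via -N '')
ssh-keygen -t ed25519 -f /tmp/newuser_key -N '' -q

# The public half is what belongs in the new user's authorized_keys file
cat /tmp/newuser_key.pub
```

You would then append the printed public key to /home/<user-name>/.ssh/authorized_keys on the server and connect with ssh -i /tmp/newuser_key <user-name>@<server-ip>.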
Next, we’ll enable SSH access through UFW (Uncomplicated Firewall). If you’re new to UFW, think of it as a simple tool that controls which network traffic is allowed into your server. By default, most servers keep all ports closed for safety, and UFW helps you explicitly open only the ones you need. For example, SSH uses port 22, so enabling SSH in UFW ensures you can connect to your server remotely without exposing unnecessary services to the internet.
Checking available UFW application profiles
UFW comes with predefined profiles for common services. You can list them with:
ufw app list
This shows entries like OpenSSH, Nginx Full, and Nginx HTTP. Each profile contains the ports that service needs.
Allowing SSH connections
If OpenSSH appears in the list, you can allow it directly:
ufw allow OpenSSH
This opens port 22 so you can continue connecting to your server via SSH. If this rule isn’t added, enabling the firewall would block SSH and disconnect you.
Enabling UFW and verifying status
Once SSH is allowed, you can safely enable the firewall:
ufw enable
After enabling, check the status to confirm your rules are active:
ufw status
This should show the rules that UFW has enabled.
Later in the article, we’ll configure UFW more precisely for your application—opening only the required ports and keeping everything else locked down. And if all of this sounds unfamiliar, don’t worry. I’ll walk you through UFW step by step, explain what each rule does, and show you how to keep your server secure without needing deep networking knowledge.
Alright, now let's switch to the new user and start the server setup.
Switching to the new user
Now that the user account is created and SSH access is configured, switch into that user so all remaining setup happens under a safer, non‑root environment.
su - asela
The - ensures you load the user's full environment, including their home directory and shell configuration.
Updating package lists
Before installing anything, refresh your package index so the server knows about the latest versions available in the repositories.
sudo apt-get update
This doesn’t install updates yet—it simply updates the list of available packages. It’s a good habit to run this before any installation.
Checking server status with htop
To get a quick overview of your server's performance (CPU usage, memory usage, running processes), you can use htop.
htop
htop gives you a live, interactive dashboard that's much easier to read than the traditional top command. It's especially useful right after deployment to confirm your server is running smoothly and not overloaded.
Installing Node.js
Node.js isn’t included in Ubuntu’s default repositories in its latest LTS form, so the recommended approach is to use the official NodeSource repository. This ensures you always get the correct and up‑to‑date version.
Start by moving to your home directory:
cd ~
Then download and run the NodeSource setup script for Node.js 20.x (the current LTS release):
curl -sL https://deb.nodesource.com/setup_20.x | sudo -E bash -
This script adds the NodeSource APT repository and updates your package list automatically. Once that's done, install Node.js:
sudo apt-get install -y nodejs
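Assuming the NodeSource script ran without errors, a quick way to confirm the installation is to print the installed versions:

```shell
# Each command should print a version string (node's starts with "v")
node -v
npm -v
```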
Installing build tools for npm packages
Some npm packages need to compile native code during installation. To support that, install the build-essential package, which includes GCC, G++, and make.
sudo apt install build-essential
This ensures your environment can handle any package that requires compilation.
Cloning the project repository
With Node.js installed and the server ready, the next step is to bring our project code onto the machine. Use git clone to download your repository:
git clone https://github.com/zaselalk/MERN-Workshop---7th-Batch.git
This creates a new folder containing both the backend and frontend code for your MERN application.
Installing backend dependencies
Before running anything, move into the backend directory and install the required Node packages (replace the folder name with your repository's backend directory):
cd <backend-folder>
npm install
At this point, you may notice errors—especially if your backend expects a database connection. That’s completely normal because the database isn’t installed yet. Let’s fix that next.
Installing MySQL Server
Your backend needs a database to store and retrieve data. Install MySQL Server using:
sudo apt install mysql-server
Once installed, run the security script to configure your MySQL instance:
sudo mysql_secure_installation
This helps you set a root password, remove anonymous users, disable remote root login, and apply other recommended security settings.
Creating the project database
Log in to the MySQL shell as root, then create a database for the application:
sudo mysql
CREATE DATABASE mern_project;
Checking existing MySQL users
Now we need a username and password the application can use to access the MySQL instance.
First, let's see which users currently exist in your MySQL instance. For that, run:
SELECT User, Host FROM mysql.user;
Running the command above shows a number of system users. Let's create a new user for the application database so we can grant permissions only on our database.
Creating a dedicated MySQL user
Update the username with your own, and make sure to set a secure password:
CREATE USER 'asela'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
Granting database privileges
Once the user exists, you need to give them permission to work with your project database:
GRANT ALL ON mern_project.* TO 'asela'@'localhost';
This grants full access (SELECT, INSERT, UPDATE, DELETE, etc.) to all tables inside the database, but only for this database. The user cannot touch other databases on the server.
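To double-check the result, you can list the user's privileges from the same MySQL shell (the username matches the example above):

```sql
SHOW GRANTS FOR 'asela'@'localhost';
```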
Applying the changes
MySQL caches privilege information, so you must reload it:
FLUSH PRIVILEGES;
This ensures the new permissions take effect immediately.
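How you pass these credentials to the backend depends on how it loads configuration; a common pattern is a .env file. A hypothetical sketch, using the database and user created above (the variable names depend on your backend code):

```ini
# Hypothetical .env for the backend (adjust names to your config loader)
DB_HOST=localhost
DB_USER=asela
DB_PASSWORD=your-secure-password
DB_NAME=mern_project
PORT=3000
```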
Now let's try to bring the server up by providing the database information. Once the server is running on the VPS, we can test it using the curl command.
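Assuming the backend listens on port 3000, the check from the server itself could look like this (the entry file name is an assumption, so it is left commented):

```shell
# Start the backend in the background first, e.g.:
# node server.js &

# Then confirm the app responds locally on its port
curl -s http://localhost:3000 || echo "server is not responding yet"
```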
Next, let's make our application available to outside users as well. For that we need a web server, so let's install Nginx.
Installing Nginx as the web server
Nginx will act as the front-facing web server for your application. Install it using:
sudo apt install nginx
After installation, confirm that the service is running; you should see it listed as active (running).
systemctl status nginx
Enabling UFW and allowing necessary services
Before enabling the firewall, make sure SSH is allowed so you don't lock yourself out. Since you already allowed OpenSSH earlier, you can safely turn on UFW:
sudo ufw enable
To see the available application profiles
sudo ufw app list
You’ll typically see profiles like:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
Since we plan to serve the application through Nginx, let's allow the full profile:
sudo ufw allow 'Nginx Full'
Then verify the firewall status with sudo ufw status. It should confirm that Nginx and SSH are allowed while all other ports remain protected.
Opening a custom port (for testing only)
To experiment with temporarily exposing our Node.js server directly, you can open port 3000 with the following command:
sudo ufw allow 3000/tcp
This allows external access to your Node.js app running on port 3000.
Why exposing port 3000 directly is not ideal
Direct exposure is fine for development or internal tools, but not great for production because:
No HTTPS
No caching or rate limiting
No protection from malformed requests
No domain support
That's why most production setups use Nginx as a reverse proxy. Let's remove that rule and set up the reverse proxy.
Remove the rule
Find the specific number of the rule by running the first command and then delete that rule using the second command.
sudo ufw status numbered
sudo ufw delete <number>
Configuring the Nginx Reverse Proxy
Navigate to the Nginx sites‑available directory and create a new configuration file for our app:
sudo nano /etc/nginx/sites-available/mern
Add the following server block:
server {
    listen 80;
    server_name <ip-or-domain>;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
This tells Nginx to listen on port 80 (HTTP) and forward all incoming requests to your Node.js app running on localhost:3000. The headers ensure WebSocket support and proper request forwarding.
Enabling the configuration
Nginx only loads configurations that exist inside sites‑enabled. Create a symbolic link from your new config file:
sudo ln -s /etc/nginx/sites-available/mern /etc/nginx/sites-enabled/
This activates the configuration without duplicating files.
Testing and reloading Nginx
Before applying the changes, test the configuration for syntax errors; if there are none, run the reload command to apply the new configuration.
sudo nginx -t
sudo systemctl reload nginx
If that process succeeds, the server should be working properly. Next, we can map a domain and set up SSL. For that, point a DNS record at the VPS's public IP and update the server block with the newly pointed domain. Afterwards, we can install an SSL certificate for the site.
sudo apt install certbot python3-certbot-nginx
Make sure the server_name in the block is set properly, then run the following command to set up SSL:
sudo certbot --nginx -d mern.zasela.site
Now let's build the React app.
Before building the app, make sure to update the backend URL in the frontend to the new API URL.
npm run build
sudo mkdir -p /var/www/mern-client
sudo cp -r dist/* /var/www/mern-client/
We also need to create a server block for the client. Make sure to update the server name:
server {
    listen 80;
    server_name <server-ip>;

    root /var/www/mern-client;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}
Then enable the config as we did for the API:
sudo ln -s /etc/nginx/sites-available/mern-client /etc/nginx/sites-enabled/
Reload the server to see the site:
sudo systemctl reload nginx
You might notice that the backend stops when we close the terminal because it's running in the foreground. We can run it in the background using PM2, which also adds more capabilities.
Installing PM2
Next let’s install PM2, a process manager for Node.js applications. PM2 makes it possible to daemonize applications so that they will run in the background as a service.
sudo npm install pm2@latest -g
Now you can start the application with pm2 start <entry-file> (for example, pm2 start server.js if that's your backend's entry point).
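A typical PM2 workflow looks like the sketch below; the entry file and process name are assumptions, and the command group is wrapped in a check so it simply does nothing on a machine without PM2:

```shell
if command -v pm2 >/dev/null; then
  # Start the backend as a managed background process
  pm2 start server.js --name mern-api

  # Persist the process list and generate a boot script so the
  # app restarts automatically after a reboot
  pm2 save
  pm2 startup

  # Inspect the managed processes
  pm2 status
fi
```

Once the app runs under PM2, Nginx keeps proxying to localhost:3000 as before, and closing your SSH session no longer stops the backend.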
