So, you’ve got a cool app running locally or on a server, and you want to know how to add access permissions and let it out in the wild? In this guide I’ll add login to a Node.js app using Cloudflare Tunnel (previously known as Argo Tunnels). This is a new and highly-effective way to go from 127.0.0.1:8080 to a hosted https://myapp.com/login, for any app (not just node apps).
Table of Contents
- Intro to Tiddlywiki on Nodejs
- Intro to Cloudflare Tunnel
- Setting up your Nodejs server on Digital Ocean
- Setting up your domain and the cloudflared daemon
- Linking your new Cloudflare Tunnel to your app in the Cloudflare portal
- Provisioning login access to the app with Cloudflare Teams
At Digital Mark, we are passionate about enabling small teams to work together more effectively. Often this means building custom workflows on tools like Airtable and Notion; but sometimes these SaaS products don’t quite have the flexibility we need. As a result, we often find ourselves developing custom database software for things like order tracking, dashboards, and the like. But how can we make ad-hoc software quickly and secure it on the internet with login capability?
TiddlyWiki, our low-code framework for small business databases
Enter Tiddlywiki, our favorite low-code platform, which doubles as an ultra-flexible Node.js framework for building custom database software for small teams.
In the context of small business databases, think of Tiddlywiki as an alternative to Microsoft Access. It’s what you use when you have a data workflow that’s outgrown a paper process or a spreadsheet. In this guide we’ll be using Tiddlywiki to run custom order tracking software for a retail business (a locksmith).
Tiddlywiki is known for running as just a single HTML file in the browser; but leveraging its lesser-known Node.js version unlocks a world of possibilities for server-based applications. There’s only one problem: it’s great to build cool things on the server in node, but how do we then provision access to the server in a way that is secure and low-maintenance? Tiddlywiki offers a web server with basic auth, but is basic auth secure enough for an actual business? (Hint: no!)
We’ll need to:
- Provide stable access to the app from anywhere (i.e. put it on a server).
- Secure the server and node js app so unauthorized people can’t see the data.
- Add login to our node.js app using Cloudflare Tunnel + Cloudflare Zero Trust.
What is Cloudflare Tunnel?
Cloudflare is a major player in the CDN and DNS space. What does this particular alphabet soup mean to you? Well, let’s say you have some really cool application running locally or on a server. You want to add authentication for your team, but how? Should I use Express? Is this where I finally learn AWS Cognito? There are just so many options for creating middleware and authenticating users, and most of them are pretty darn confusing!
Cloudflare, on the other hand, is dead simple. By managing your actual domain, Cloudflare can stand in the middle between your server and the public internet. Since Cloudflare intercepts all traffic bound for your domain, you can set up its built-in suite of tools to create a zero-trust policy in front of your server and app. This means Cloudflare can handle user authentication and issue access tokens. Nice!
What is Cloudflare DNS, anyway?
You can think of the Cloudflare DNS underlying all this as a sort of “Google Maps” application, just for the information superhighway instead of for the roadways. There’s all these IP Addresses out there on servers we want to visit, and Cloudflare’s DNS offers a great way to navigate to the right place. Importantly, by letting Cloudflare manage the DNS for your domain and “proxy” your server, Cloudflare can also do the opposite, and direct people away from your domain if they are in the wrong place.
If you know Nginx, think of this method as that — just a reverse proxy. Except now our reverse proxy comes with a portal and a whole suite of middleware tools built-in, helping us manage access lists and authenticate users.
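To make the Nginx comparison concrete, here’s roughly what the equivalent reverse proxy would look like in a plain Nginx config. This is a hypothetical sketch for illustration only (the hostname and port mirror the ones used later in this guide); you won’t need it for the Cloudflare setup:

```nginx
# Hypothetical Nginx equivalent of what cloudflared's ingress rules will do:
# forward requests for smithy.mydomain.com to the app on port 8089.
server {
    listen 80;
    server_name smithy.mydomain.com;

    location / {
        proxy_pass http://localhost:8089;
        proxy_set_header Host $host;
    }
}
```

The difference is that with Cloudflare Tunnel, the access-control portal and user authentication come built in, and you never have to open port 80 yourself.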
To demonstrate what I mean, I’m going to run through the following in this guide:
- Run an unsecured node js app on a Digital Ocean VPS.
- Lock down the ports and install a daemon on the VPS called “cloudflared,” which will create a secure tunnel between my server and Cloudflare’s DNS. Cloudflare calls these “Tunnels.”
- Integrate my new Tunnel with Cloudflare’s Zero Trust / Teams product, so I can easily add login to our node.js app as well as backend user management.
- Add my email address as an authorized human, and log on in to my app!
It doesn’t matter if your app is running locally on your network (even on your Mac!), on a VPS, a Raspberry Pi, or wherever. By locking down traffic to your domain with cloudflared, only Cloudflare itself has access to your tunnel and to whatever device/app you put on the other side of your tunnel. Cloudflare Teams then allows you to easily provision access using email addresses or Access Control Lists like Active Directory or Ping Identity.
Setup Node JS Tiddlywiki on a Digital Ocean VPS
You’ll need a Digital Ocean account and a credit card.
- We’re using a $5/month droplet that’s billed hourly, so you can destroy it after this tutorial and spend almost nothing.
- If you use the link to DO above, you can also get $100 in credit over 6 months.
- You can use any VPS provider, but we like Digital Ocean because it has a great interface and a super-easy OS image that’s preconfigured with some of the packages we will need. We’ll also get a pre-configured firewall on the server, and an additional cloud firewall in the Digital Ocean dashboard itself. (You may think AWS Lightsail’s entry-level VPS is cheaper at $3.50, but it will cost you an extra dollar for a static IP anyway.)
1. Login to Digital Ocean and Create a Droplet
2. Select the cheapest VPS under Distributions
For our purposes, we like to segregate client data on its own server and domain, and the Basic shared CPU model running Ubuntu usually fits the bill.
Note that a small Tiddlywiki app with 1-3 users should be fine on whatever Digital Ocean is offering on its entry-level VPS. We routinely run this setup locally for customers on old Raspberry Pis, so on a solid VPS near your location it will really scream. That said, I’ve found the $15-$20 range to be the right spend with DO (2GB RAM or more plus premium processor).
Next, scroll down and select a data center near you (or far away, whatever you like). I’m going to leave off all the other extras on this screen.
IMPORTANT: for authentication, I’m going to go ahead and use a password since this is a temporary setup. But you should always use SSH keys if you are letting this thing out in the wild!
3. Select the Marketplace tab and search for the Node.js preconfigured droplet.
To save us some time installing packages, we’ll use a preconfigured OS from Digital Ocean. To find the OS image, click the Marketplace tab and then search “nodejs” in the search bar.
The image we’re looking for is called just NodeJS. It works great because:
- It’s got Node.js installed already as well as NPM, the Node Package Manager.
- It also comes with pm2 for process management, which is the easiest way to manage apps on a remote server.
- There’s a firewall already running with only a few ports open. We can easily control this using the ufw command. We can also use Digital Ocean’s cloud firewall to lock down the server.
- Nginx is pre-installed. We won’t need much of the functionality of Nginx because we will be getting that from Cloudflare; but it’s helpful to have Nginx on-hand, not least because we may want to run apps without cloudflared at some point.
4. Scroll all the way to the bottom of the screen and click “Create Droplet.”
If the Create Droplet button is greyed out, double check that you entered a root password (or selected an SSH key).
5. Woohoo! You can now copy the IP address of your droplet and paste it into your web browser.
You’ll be greeted by a helpful landing page walking you through some of the basic info about your new server.
6. Let’s get Tiddlywiki installed now.
Open a Terminal on your machine and connect to your server over SSH:
ssh root@<your-droplet-ip>
At this point, it’s always a good idea to create a new user so that everything you do isn’t run as root; however, the Cloudflare setup process is best run as root, so that’s what we’ll be doing here. Refer to Digital Ocean’s default helper app in Step 5 above for details on setting up a secure server, which will direct you to this link. Alternatively, this article by Nick Major is a great go-to bookmark for the proper setup of a secure Nodejs server (thanks Nick!).
Once we’re logged in, we’ll note some useful information, including:
- A message about the UFW firewall being enabled, with all ports blocked except port 22 (for SSH), 80 (for HTTP traffic) and 443 (for HTTPS traffic).
- A default SFTP account created so we can easily access files on the server.
- The directory locations of your server Keys and the sample Nodejs app which came with the OS.
Now let’s install Tiddlywiki globally.
npm install -g tiddlywiki
To make sure that went smoothly, let’s verify our Tiddlywiki version:
tiddlywiki --version
In our case the response to this command is 5.2.3 (September 2022). Sweet!
Setting up your domain and cloudflared
- A Cloudflare account.
- A domain name, and the ability to log into your domain registrar and change its nameservers.
1. Log into your Cloudflare account and select “Add a Site.”
2. Enter your domain name and select “Add Site.”
3. Choose the free plan, which is under the paid plans. Click “Continue.”
Cloudflare now allows you to pre-configure a bunch of DNS records, even though Cloudflare doesn’t yet control your domain.
4. For now, let’s just click Continue and we’ll come back to the DNS records later.
The next screen will give you detailed instructions on how to change your nameservers, based on whoever Cloudflare detected as your domain registrar.
5. Go ahead and visit your domain registrar and change your nameservers now.
Note that you may have to give it some time (hours, or even a day) for these changes to take effect. When you’re done, click “Done, check nameservers.”
6. Return to your Cloudflare account and complete the “Quick Start” settings.
On this screen, you’ll be greeted with a “Quick Start Guide.” This is a bit of a misnomer, because it’s actually just a place to set up http-to-https redirects and caching. In item (1), I usually toggle to always redirect to HTTPS. In item (2), I usually forgo caching and compression for this use case. Once you open items 1/2/3 and save the settings, the “Get Started” button will change to say “Finish.”
7. You’re in! Now let’s get started with setting up Cloudflare’s daemon, cloudflared, on the VPS.
I have found that the most consistent way to set up Cloudflare Tunnel is via the CLI on the VPS itself. This is as opposed to using GUI-based tools in the Cloudflare dashboard.
To get started on your VPS, run the following command to install cloudflared (September 2022 from here):
$ wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb && dpkg -i cloudflared-linux-amd64.deb
To verify that worked as expected, run:
cloudflared --version
Your response should look something like:
cloudflared version 2022.9.0 (built 2022-09-07-0832 UTC)
8. Next, we need to authenticate with Cloudflare.
This is an important step to link your server (or other device) with your Cloudflare account.
cloudflared tunnel login
Cloudflared will now return a special URL for you to copy-paste into your browser. Leave your terminal running while you do this.
9. Paste the URL from Step 8 above into the same browser where you’re logged into your Cloudflare account.
You’ll be met with an “Authorize Tunnel” screen, where you simply select the domain you recently added, and click-through to confirm. Once complete you should get a success message in the browser like this:
Back in your terminal, you will see a separate success message telling you that cloudflared is aware that you authenticated using the special URL. The server also tells you where to find your cert files in case you need them later.
10. Now you are ready to create your first Tunnel (previously called “Argo Tunnels”).
My tunnel is being set up to secure an app we call Smithy, so that’s what I’ll name my tunnel. Smithy is just a customized Tiddlywiki running on Node.js. It’s a great low-code starter-pack for anyone looking for a small business POS system with Order Tracking and CRM capability.
Choose whatever name you like and use it with the following command:
cloudflared tunnel create smithy
The command will create a Tunnel with the name provided and associate it with a UUID. The UUID will be returned in the terminal. Look for the last line of the response in the terminal where it says “Created tunnel smithy with id 8675309-arf8-bar4-arf9-95iamuuid671.”
Open a text editor and paste in two important pieces of info from the terminal’s response: your UUID, and the location of the credentials file. In my case the credentials file location looks like this: /root/.cloudflared/8675309-arf8-bar4-arf9-95iamuuid671.json.
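If you’re scripting this setup, you can also pull the UUID straight out of the command’s output instead of copy-pasting by hand. Here’s a small sketch; it simulates the output line with my example UUID, since your real values will differ:

```shell
# Simulate the last line printed by `cloudflared tunnel create smithy`.
# In a real script you would capture the command's actual output instead.
line='Created tunnel smithy with id 8675309-arf8-bar4-arf9-95iamuuid671'

# The UUID is the last whitespace-separated field on that line.
uuid="${line##* }"
echo "$uuid"

# The credentials file lives in ~/.cloudflared, named after the UUID.
creds="/root/.cloudflared/${uuid}.json"
echo "$creds"
```

Saving these two values in variables like this also makes it easy to template the config.yml file we’ll write in Step 12.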
11. Use the tunnel’s UUID from Step 10 above to edit your domain’s DNS records.
September 2022 Update
In the screengrabs below, I have manually edited DNS records. As of 2022, it’s easier and safer (in case CF updates their service domains) to use the CLI:
# example
# cloudflared tunnel route dns <UUID or NAME> <hostname>
# actual in my case
cloudflared tunnel route dns smithy smithy.mydomain.com
Manual Method for Illustrative Purposes
If you used the CLI above to update your DNS records, you don’t need to do this; it’s just to show you what happens when you run the command. From your Cloudflare dashboard, select the domain you added earlier. This brings you to the control panel for that domain. In the blue buttons on top, click DNS.
Using your UUID from Step 10, we will add a CNAME record and append a special Cloudflare domain (“cfargotunnel.com”) to the UUID as pictured:
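In text form, the record we’re creating looks roughly like this (using my example UUID from Step 10; yours will differ):

```
Type     Name      Target
CNAME    smithy    8675309-arf8-bar4-arf9-95iamuuid671.cfargotunnel.com
```

Make sure the record’s proxy status is set to “Proxied” (the orange cloud), since all traffic for this hostname needs to flow through Cloudflare for the tunnel to work.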
12. Edit the cloudflared configuration file.
If you’ve ever configured Nginx, you’ll be in familiar territory here. We need to set up some rules for what happens when (authenticated) people visit our new smithy.mydomain.com URL.
To do so, switch back to the text file where you pasted your UUID and the location of your credentials file. Add lines to the file as follows. Remember to use your own codes and names, not mine.
tunnel: 8675309-arf8-bar4-arf9-95iamuuid671
credentials-file: /root/.cloudflared/8675309-arf8-bar4-arf9-95iamuuid671.json
ingress:
  - hostname: smithy.mydomain.com
    service: http://localhost:8089
  - service: http_status:404
Just like Nginx, you’ll notice that I first defined what happens when a user visits a known location at smithy.mydomain.com. Nothing is happening on port 8089 yet, but we’ll be setting up Tiddlywiki there soon enough.
Also like Nginx, it is important to have a “catch-all” rule at the end, defining what happens for all requests that don’t match the other rules. In this case, anyone who doesn’t visit smithy.mydomain.com will get a 404 error.
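To make the rule matching concrete, here’s a hypothetical config.yml that routes two apps through the same tunnel. The second hostname and port are made up for illustration; only the first rule and the catch-all are needed for this guide:

```yaml
tunnel: 8675309-arf8-bar4-arf9-95iamuuid671
credentials-file: /root/.cloudflared/8675309-arf8-bar4-arf9-95iamuuid671.json
ingress:
  # Rules are evaluated top to bottom; the first hostname match wins.
  - hostname: smithy.mydomain.com
    service: http://localhost:8089
  # A second, hypothetical app sharing the same tunnel.
  - hostname: books.mydomain.com
    service: http://localhost:8090
  # Catch-all: anything that matches no hostname gets a 404.
  # This rule must come last, or cloudflared will refuse the config.
  - service: http_status:404
```

If cloudflared is installed, you can sanity-check a file like this with cloudflared tunnel ingress validate before starting the tunnel.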
13. Name the file config.yml and drop it on your server under /etc/cloudflared.
Cloudflared will look for this file in a few default locations, depending on your OS. Since we can FTP into our server, go ahead and navigate to /etc/. Create a directory called cloudflared and drop the file into the directory.
IMPORTANT: since we’re tinkering around in a tutorial, we may want to start and stop the tunnel at will. Once you like your setup, though, you’ll want to run your tunnel as a “service,” so it stays alive on your machine through reboots and the like. You’ll just need to run cloudflared service install, but we’re not going to do that here.
Linking your new Cloudflare Tunnel to your app
We’ve been having so much fun setting up Cloudflare, we almost forgot about launching our super cool Tiddlywiki node app.
To get our POS system installed and running, let’s go ahead and clone the Smithy repository and launch it on port 8089, the location we set when we were editing cloudflared’s config.yml file.
1. Clone the Smithy app from github to your server.
We’ll go ahead and house the Smithy app right in our root directory.
Now we can pull the latest version of Smithy from github.
git clone https://github.com/philwonski/TW5-SmallBusiness-POS.git
2. Set up Tiddlywiki to run with pm2.
Now let’s run Smithy using the wicked pm2 process manager, so we can easily start-stop it and set it to launch at startup.
We’ll run “pm2 list” just to verify what we’re starting with and to daemonize pm2.
If you’ve followed along since the beginning, this should return an empty list. To prepare pm2 to launch our app(s) at startup, run:
pm2 startup
Now let’s create a very basic shell script to launch tiddlywiki on port 8089 as we specified earlier.
nano smithy.sh
This will launch a text editor in your terminal. Go ahead and paste in the following text:
#!/bin/bash
# smithy launch script
tiddlywiki smithy/TW5-SmallBusiness-POS --listen port=8089
Per the bottom of the terminal, exit nano by pressing Control-X, then type y to say “yes, save it.” You are of course also welcome to create this smithy.sh file on your local machine and drop it in via FTP.
3. Launch Smithy!
pm2 start smithy.sh
Now we’re up and running! You can verify this by running “pm2 list” again, and you’ll see something like this:
root@nodejs-ubuntu-s-1vcpu-1gb-nyc3-12:~# pm2 list
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name               │ mode     │ ↺    │ status    │ cpu      │ memory   │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0  │ smithy             │ fork     │ 0    │ online    │ 0%       │ 3.1mb    │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
To save this configuration so Smithy will launch if the server is rebooted, run:
pm2 save
Great stuff, but we can’t see our app just yet because 8089 is not an open port. So what’s that about?
If you’ve made it this far, you may already understand that the reason we are going through all of this trouble is so that we don’t have to open any ports to use our app. Rather, cloudflared is going to provision access to our app, using our tunnel and our config rules.
Let’s make it happen.
Add login to our node.js app with Cloudflare Zero Trust
Rather than open up a port and expose our app to the internet without protection, we can use Cloudflare Zero Trust aka Cloudflare Teams to set a list of authorized people who can access all the fun we’re having over at port 8089. Cloudflare will also help us add login to our node.js app by redirecting traffic to their hosted login screen.
1. Add your application to your Cloudflare portal.
From your Cloudflare portal, select “Zero Trust.”
Once Cloudflare Zero Trust launches, select “Access” in the left pane and then click to “Add an application.”
On the next screen select “Self Hosted,” since this is an app we’re hosting on our own server. Note that the other option for SaaS allows you to secure third-party SaaS apps as well.
Now we’re almost there!
On the configuration screen, simply add your subdomain from your DNS settings from earlier. Then select your domain in the dropdown.
Below these settings you’ll notice that you can add a custom logo to the login screen. You can also revisit the login methods down here if necessary.
Click “Next” and you will reach the culmination of our journey to add login to nodejs apps. This is where we get to determine who all should be able to access whatever we put behind this tunnel.
You’ll note that I created a new rule called “justPhil.” In this rule I’ll select “Emails” from the dropdown and enter my own email. This means that anytime someone visits smithy.mydomain.com, they will be prompted to enter their email address. If their email address matches the rule (in this case, if they enter my email address), Cloudflare will send a one-time pin to that address.
Typically for our apps, like Smithy, we might add a handful of authorized emails here. You can see that this approach works great for small teams; for larger teams, you’ll want to go by organization (“emails ending in”) or use ACLs like Active Directory. ACLs are outside the scope of this tutorial, but you get the idea: this may not be a true AWS Cognito or Amplify killer because it lacks signup and all that, but it is a really solid and user-friendly way to provision access to internal apps.
In the next screen we can configure CORS and Cookie settings, but these aren’t really necessary for our purposes, so we’ll leave those all as-is.
Finally, click “Add application.”
2. Start your Cloudflare Tunnel to bring the connection live.
Since we opted not to set up cloudflared as a service just yet, we’ll need to start it manually. To do so, simply run:
cloudflared tunnel run smithy
Note that this will tie up your terminal session, and if you kill the tunnel it may delete your config file. Lately I’ve been adding lots of future routes to my config file and just running it as a service from the jump!
# alternative to tunnel run
cloudflared service install
If you ran tunnel run, your terminal should now spit out a bunch of feedback about how it registered itself with Cloudflare’s servers around the US/World. Cool. If you ran service install, you get a 2-line success message.
3. Ohh baby! It’s time to log in!
Finally, simply visit smithy.yourdomain.com and you will be redirected to Cloudflare’s standard login screen, where you’ll have to enter your email address to receive a code. It should look like this:
Note that my email provider sends the code to the “Newsletter” inbox, so don’t forget to check your filtered/spam folders for the code.
Once you enter your code, you will be redirected to the Smithy home screen!
Congratulations! You have now protected your Tiddlywiki app by adding login to nodejs using Cloudflare Tunnel!
As satisfying as that all was, there is still one small caveat.
Remember how our server’s firewall was pre-set to keep all ports closed except 80, 443 and 22? Well, since Cloudflare is protecting our domain based on our rules in config.yml, we’re in good shape from a domain perspective — but what about if someone tries to reach our IP address directly?
Note that, even with cloudflared running, we can still see our Digital Ocean sample app by visiting our IP address directly. It’s serving the sample app on port 80, just like it did before we jumped through all these hoops.
For some admins, leaving this as-is may be useful for the purpose of monitoring the uptime of your server. You can use a free tool like Uptime Robot to make sure the Digital Ocean sample app always resolves, for example. But if you really want to lock things down, you can:
A) Edit your Nginx file to give everybody the middle-finger 404 using a catch-all rule, or
B) Use UFW to actually close ports 80 and 443 (I wouldn’t close 22, because then you can’t access your server via SSH).
I’m going to run with Option B and close the ports, since I don’t need monitoring… I have no doubt my customer will call me the second they can’t access their app!
ufw status numbered
It should return something like this:
     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     LIMIT IN    Anywhere
[ 2] Nginx Full                 ALLOW IN    Anywhere
[ 3] 22/tcp (v6)                LIMIT IN    Anywhere (v6)
[ 4] Nginx Full (v6)            ALLOW IN    Anywhere (v6)
“Nginx Full” is another way of saying that the default Nginx ports, 80 and 443, are set to “Allow In.” To close them, simply run:
ufw delete 2
Confirm it, then run “ufw status numbered” again, since your numbers will change without the old rule there. In my case Nginx Full v6 changed to Number 3, so as a last step I closed it by running “ufw delete 3.”
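If you’d rather script this than eyeball the table, here’s a small sketch that pulls the rule numbers for “Nginx Full” out of the output. The output is simulated below so you can see how it works; on your real server you would capture it from ufw status numbered instead:

```shell
# Simulated output; on a real server: ufw_output="$(ufw status numbered)"
ufw_output='[ 1] 22/tcp                     LIMIT IN    Anywhere
[ 2] Nginx Full                 ALLOW IN    Anywhere
[ 3] 22/tcp (v6)                LIMIT IN    Anywhere (v6)
[ 4] Nginx Full (v6)            ALLOW IN    Anywhere (v6)'

# Grab the leading [ N] index from each line mentioning "Nginx Full".
nums=$(echo "$ufw_output" | grep 'Nginx Full' | sed 's/^\[ *\([0-9]*\)\].*/\1/' | xargs)
echo "$nums"
```

One handy trick: delete the higher-numbered rule first (ufw delete 4, then ufw delete 2), and the renumbering problem described above goes away, since deleting a later rule never shifts the numbers of earlier ones.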
Now you can sleep easy!
If you enjoyed this tutorial, sign up for our newsletter for more great updates from the team and me. Since we try to spare our broader audience from thinking about yaml files and whatnot, you can also reach me or follow me on Twitter @philwonski if jargon is your thing.