An awesome Linux terminal (with Microsoft Terminal and Oh-My-ZSH) on Windows 10

Oh-My-ZSH makes your terminal look and feel awesome – I used it daily on my macOS machines and on Linux desktops in the past too!

I’ve recently switched my Apple MacBook Pro over to a Dell XPS and am now using Windows as my daily driver. More information on the move can be found in an earlier blog post.

I’ve put this quick tutorial together to demonstrate how you can run Oh-My-ZSH and have a really awesome Linux terminal on Windows (knowing that others will be switching back to Windows soon too ;))…

Oh-My-ZSH running on Windows 10 – A modern Linux Terminal for Windows 10

First of all, we need to enable WSL (Windows Subsystem for Linux). We can do this by opening PowerShell as Administrator and then running the following command:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

It should look something like so:

Once completed, Windows will require you to reboot your machine. At the prompt (shown above), type Y and press the return key (aka the “Enter” key). Once logged back in, open up the Microsoft Store, find “Ubuntu 18.04” and install it by clicking the blue Get button:

Install “Ubuntu 18.04 LTS” from the Microsoft Store.

Once installed, we will now go and install “Windows Terminal”. This is a new terminal developed by Microsoft that supports a whole host of goodness and feels much more like iTerm and other modern-day terminal applications. Using the Microsoft Store search box, search for “Windows Terminal” and install that too:

Install Windows Terminal (Preview) from the Microsoft Store.

Great stuff! Once the Windows Terminal is installed, you’ll see the Launch button; click this to open the newly installed Windows Terminal. It should appear as follows:

The Windows Terminal opens PowerShell by default…

The next step is to initiate our WSL distribution. To do this, open up the Start Menu and type (or find in the Programs List) “Ubuntu”; you should see it appear like so:

Find “Ubuntu 18.04 LTS” on the start menu and open it for the first time.

Click the icon to open up the default Ubuntu 18.04 terminal window. As this is the first time it has been run, it’ll first install some required files and you will then be prompted for a UNIX username – this account does not have to be the same as your Windows login, but I would recommend ensuring that it’s lowercase and contains only standard characters.

Once you have set a username and password you will then be presented with the standard BASH terminal like so:

Although we used the default “Ubuntu Logo” icon to open up our Linux terminal for the first time, going forwards we will access this from the Windows Terminal application instead (as it supports all the modern features of a Terminal client).

You can now close this window and we’ll move on to the Windows Terminal configuration…

Now that we have the Windows Terminal open, we can see that it opens PowerShell by default. You can open new tabs, each of which can be a different shell; at the time of writing, Windows Terminal (Preview) includes PowerShell (the default), the standard Windows Command Prompt (cmd) and the Azure Cloud Shell.

The default shell options after installation.

As you’ve probably noticed, at this point we don’t yet have the Ubuntu shell (bash) set up; we will move on to that next…

From the dropdown menu (shown in the screenshot above), click on the Settings option. The Windows Terminal settings are stored in JSON format; clicking the Settings option from the menu will open the file in your default text editor (in my case, Visual Studio Code – the file looks as follows):

It’s worth noting that simply saving this file will automatically update your Windows Terminal configuration (settings are made on the fly when you save changes to this configuration file).

We are now going to add a new profile to our Windows Terminal settings file. To do this, scroll down the file until you find the “profiles” section; we will now paste the following JSON block into it:

        {
            "acrylicOpacity" : 0.75,
            "closeOnExit" : true,
            "colorScheme" : "Bobstar",
            "commandline" : "wsl.exe -d Ubuntu-18.04",
            "cursorColor" : "#FFFFFF",
            "cursorShape" : "bar",
            "fontFace" : "Source Code Pro For Powerline",
            "fontSize" : 10,
            "guid" : "{9caa0dad-35be-5f56-a8ff-afceeeaa6101}",
            "historySize" : 9001,
            "icon" : "ms-appx:///ProfileIcons/{9acb9455-ca41-5af7-950f-6bca1bc9722f}.png",
            "name" : "Ubuntu",
            "padding" : "5, 5, 5, 5",
            "snapOnInput" : true,
            "startingDirectory" : "%USERPROFILE%",
            "useAcrylic" : false
        },

This profile tells Windows Terminal to create a new item on the terminal dropdown menu; the above code block’s commandline value is set to open our WSL distribution named “Ubuntu-18.04”. If you want to check which other WSL distributions you have on your computer, you can run wsl -l, which will output the list of WSL distributions you have installed from the Microsoft Store.

Saving the file at this point will now add the “Ubuntu” option onto your terminal dropdown menu as demonstrated here:

Before opening “Ubuntu” we need to do a few more things, given that the profile configuration you pasted in a few moments ago is customised and requires some additional fonts to be installed on your PC first.

Now download the free Source Code Pro for Powerline font from GitHub and install it. You can also download it here if navigating GitHub is not something you really want to do.

Next up, back in the settings file (the JSON file), locate the “schemes” section and paste the following scheme in:

        {
            "background" : "#2B313A",
            "black" : "#0C0C0C",
            "blue" : "#406BA4",
            "brightBlack" : "#767676",
            "brightBlue" : "#7C9FD1",
            "brightCyan" : "#61D6D6",
            "brightGreen" : "#6D4344",
            "brightPurple" : "#B4009E",
            "brightRed" : "#96BAC9",
            "brightWhite" : "#F2F2F2",
            "brightYellow" : "#F9F1A5",
            "cyan" : "#3A96DD",
            "foreground" : "#B1C0B4",
            "green" : "#FFFFFF",
            "name" : "Bobstar",
            "purple" : "#881798",
            "red" : "#7B5761",
            "white" : "#CCCCCC",
            "yellow" : "#8DA76E"
        },

Next, I personally like to set “Ubuntu” as my default terminal. To do this, locate this section:

"defaultProfile" : "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",

and change to:

"defaultProfile" : "{9caa0dad-35be-5f56-a8ff-afceeeaa6101}",

Great – now close the Windows Terminal application and re-open it. You should notice that you are now presented with the Ubuntu shell by default; it should look something like this:

Our terminal starting to look good…

The next step is to update the default home directory that Ubuntu uses: instead of /home/{user}, we’ll change it so our Linux home directory is mapped to our Windows home directory (C:\Users\{username}).
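To make the mapping concrete, here is a purely illustrative sketch of how a Windows path translates to a WSL path under a given mount root (WSL performs this translation for you; the helper function and the example user “ballen” are just for demonstration):

```shell
# Convert a Windows path to its WSL equivalent under a given mount
# root (purely illustrative string handling; WSL does this for you).
win_to_wsl() {
  root="$1"; win="$2"
  drive=$(printf '%s' "$win" | cut -c1 | tr '[:upper:]' '[:lower:]')
  rest=$(printf '%s' "$win" | cut -c3- | tr '\\' '/')
  printf '%s%s%s\n' "$root" "$drive" "$rest"
}

win_to_wsl /mnt/ 'C:\Users\ballen'   # default automount root -> /mnt/c/Users/ballen
win_to_wsl /     'C:\Users\ballen'   # with root = / in wsl.conf -> /c/Users/ballen
```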

WSL mounts Windows drive letters under /mnt/{drive letter} by default; we will, however, change this in wsl.conf in a few steps (we’ll set the mount root to ‘/’ instead). We therefore need to update our home directory from /home/{username} to /c/Users/{username}, which we do by editing the /etc/passwd file:

sudo nano /etc/passwd

Now, scroll to the bottom of this file; you should see your Linux username as the last entry. Update the home path section to match the above. This screenshot demonstrates what it should be (my username is ballen in this example; you’ll need to replace that with your own ;))
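As a sketch of the edit itself (the entry below is a hypothetical /etc/passwd line for the example user “ballen”; the uid/gid, GECOS field and shell will differ on your system), the change amounts to swapping one field:

```shell
# Hypothetical /etc/passwd entry before the edit:
before='ballen:x:1000:1000:,,,:/home/ballen:/bin/bash'

# Swap the home-directory field from /home/ballen to /c/Users/ballen
after=$(printf '%s\n' "$before" | sed 's|:/home/ballen:|:/c/Users/ballen:|')
echo "$after"
```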

A vital step (to ensure Linux permissions are translated properly for SSH key permissions etc.): we must now create a new file (/etc/wsl.conf) and add some settings. We can do this like so:

sudo nano /etc/wsl.conf

Add the following contents into this file and then save and exit!

[automount]
enabled = true
root = /
options = "metadata,umask=22,fmask=11"

At this point, I’d recommend that you close all Windows Terminal windows and then run the following command from a standard command prompt window. This will ensure that WSL is terminated properly, as the wsl.conf file is re-read when the light-weight Linux container is restarted:

wsl.exe --terminate Ubuntu-18.04

Let’s now take a minute to recap…

At this point we have WSL enabled, Ubuntu 18.04 LTS and the Windows Terminal installed, a custom theme configured, and the default terminal tab set to open bash (through our Ubuntu 18.04 LTS Linux distribution).

We will now move on to updating our Ubuntu 18.04 environment and install and configure Oh-My-ZSH.

Let’s now update Ubuntu:

sudo apt-get update
sudo apt-get upgrade

Next we’ll install ZSH:

sudo apt-get install zsh

Now we’ll install Oh-My-ZSH using the following command:

sh -c "$(curl -fsSL"

When prompted with “Would you like to change your default shell?”, type Y and press the return key (ENTER):

Once the installation is completed, close the terminal.

We’ll now test that Oh-My-ZSH is configured to start up when we open our terminal, open up the Windows Terminal from the start menu and you should now be looking at the default Oh-My-ZSH theme like so:

Next up you will need to download my custom ZSH theme.

Extract the file named bobstar.zsh-theme (from the downloaded ZIP file) and copy this to the following location:


We can now go and enable this theme. To do this, back in the Windows Terminal, type the following command:

nano ~/.zshrc

We will update the Oh-My-ZSH theme from the default robbyrussell theme to our new bobstar theme; update the value in this file as shown here:

Now save the file and exit the file editor.
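If you prefer doing this from the command line, the same edit can be made with sed; the snippet below demonstrates the substitution against a throw-away sample file rather than your real ~/.zshrc:

```shell
# Create a sample file containing the default theme setting
printf 'ZSH_THEME="robbyrussell"\n' > /tmp/zshrc.sample

# Switch the theme value to "bobstar"
sed -i 's/^ZSH_THEME=".*"/ZSH_THEME="bobstar"/' /tmp/zshrc.sample

cat /tmp/zshrc.sample
```

Point sed at ~/.zshrc instead of the sample file to apply the change for real.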

It’s now time to test it all out… Close your terminal and re-open it!

All going well, it should now look a little like this…

Tip: Once you change your home directory to match your Windows home directory path, you should set the Linux ownership settings correctly. This can be achieved by running the following command (assuming your Linux username is the same as your Windows username):

sudo chown -R $USER:$USER /c/Users/$USER 

I moved away from Apple products!

I’ve been using macOS and various iPhones for the past ten years. Apple computers have been my daily driver for general web browsing and as my development machine of choice. As a developer who works on a lot of Linux/UNIX-based projects, having a “proper” terminal was the main deal breaker for a while!

I first started with Mac OS X when I purchased an iMac back in 2009 – it looked cool and sleek, and I loved the aluminium as well as its operating system’s UNIX foundation. A year or so later I decided that I would get myself a MacBook Pro – I opted for the 13-inch i5 model. Given that the RAM and HDD were upgradable on this particular model, I very quickly upgraded the slow 5400rpm mechanical hard drive to an SSD (wow, what a speed boost!) and the 8GB of RAM that it came with to 16GB. It was great: I hooked it up to an external 27-inch monitor, connected an external Bluetooth keyboard and mouse, and all was good!

Three years later (in 2016), it was time to upgrade. This time I opted for the MacBook Pro 15-inch model and maxed out the specs – I went with the 512GB SSD and the i7 processor, knowing the money would be worth it as they’re built so solidly.

Most “old skool” MBP owners will know that the late-2015 MacBook Pro was the last model that didn’t come with soldered-on RAM… I’ve known for a while now that my next MacBook Pro would lock me in on my specs – whatever I upgraded to in future would be “set”, with a soldered storage device and RAM. I also knew that further investment in Apple products would mean that I’d have to start collecting all the “dongles” if I wanted to plug stuff into it! I’m not keen on their latest keyboards either!

I know a lot of people, especially content creators, who have made the switch to premium non-Apple ultrabooks of late, and I too have toyed with the idea of switching my daily driver back to Microsoft Windows (from macOS) for several months now.

I did it! – I bit the bullet and a couple of weeks ago took delivery of a new Dell XPS 15 7590, the machine has an i9 processor, an NVIDIA GTX1660 graphics card and 32 gigs of RAM (upgradable to 64GB).

I’ve already upgraded the NVMe drive to a super-fast 1TB Samsung EVO PRO (far better performance than the one I could have ordered on the Dell website). Although still rather expensive, it’s far superior to any of the latest MacBook Pros and should last me a good while!

I knew that, in order to be happy with this setup and not run back to Apple products in the next month or two, the laptop must have a “premium”, non-plastic feel to it; it needs to be “solid” – the Dell XPS is built using aluminium and carbon fibre! I also needed to ensure that I could set up a development environment on Windows 10 Pro that I would be happy with…

In the past week, I’ve installed and configured WSL (Windows Subsystem for Linux), enabling me to run a Linux terminal on Windows. I’ve also set up Docker and I’m super happy with the results!

Microsoft are also due to release WSL 2 later this year which provides further speed improvements and allows me to run Docker inside WSL – Awesome!

Why didn’t I just go for a Linux-based setup? I already run several Linux-based servers and have used Linux desktop machines a lot in the past, but as a daily driver, I need a machine that fully supports all the software I use (Adobe Photoshop, Lightroom, Microsoft Office, Microsoft Visual Studio – yes, previously I was using a second Windows laptop just for Visual Studio!!), which means that a Linux desktop isn’t really something I could use as a primary laptop. Battery life on Linux laptops (for the most part) sucks too!

Monitoring Nginx on Ubuntu 16.04 with Netdata

Netdata is a fantastic real-time monitoring solution for your Linux based servers and I’ve particularly found it useful for monitoring my web servers.

Netdata provides a web-based dashboard with real-time graphs and alerting; it provides a myriad of information about your servers out of the box and the interface is slick!

The Netdata live dashboard interface

Before we start we’ll make sure that we have the latest packages from the apt repository available to us, we’ll run this command…

sudo apt-get update

We can install the software simply by running the following command and accept the new packages that the installer will ask to install (DO NOT use sudo here, it’ll do it automatically if required!)

bash <(curl -Ss

The installer will prompt you to confirm the build and will advise of the file locations etc (shown below) – simply press return to accept them!

Once you have installed the software, you should then start the daemon and enable it to run at system boot using the following commands:

systemctl start netdata
systemctl enable netdata

By default, Netdata does not monitor Nginx; we need to enable a few things first. Let’s start by editing our Nginx configuration file and enabling the Nginx stub_status module…

vi /etc/nginx/sites-available/default

Paste the following block of code under the location / {} block:

location /stub_status {
        stub_status on;
        allow 127.0.0.1;
        deny all;
}

Now save the file and exit Vim.

We now need to restart Nginx, we can do this by running the following command:

service nginx restart

Assuming all went well, you should now be able to use cURL on the server and “hit” the stub_status endpoint. If it succeeds, the Nginx stub_status module is now enabled and we can move on…
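The stub_status endpoint (queried with, e.g., curl http://localhost/stub_status) returns a small block of plain text. As a sketch, here is how you might pull a single figure out of it – the sample response below is illustrative, not real output from your server:

```shell
# A sample stub_status response (values are illustrative):
status='Active connections: 3
server accepts handled requests
 10 10 10
Reading: 0 Writing: 1 Waiting: 2'

# Extract the active connection count from the first line
active=$(printf '%s\n' "$status" | awk '/^Active connections/ {print $3}')
echo "$active"
```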


Next up we need to tell Netdata about the Nginx stub_status module, we do this by editing one of Netdata’s configuration files, type the following command:

vi /etc/netdata/python.d/nginx.conf

If the following code block does not exist in the file, add it!

localhost:
  name : 'local'
  url  : 'http://localhost/stub_status'

Finally, save the file and then restart the netdata daemon like so:

systemctl restart netdata

You should now be able to access the monitoring dashboard using the following URL (if you’re using a firewall (which you should be ;)), please ensure you’ve allowed TCP port 19999):


Your Netdata dashboard is now available to ALL on the internet. It is therefore recommended that you firewall the connection on TCP 19999 so it is only reachable from trusted networks or client IP addresses. Alternatively, if you have a static IP address, you can add it to the /etc/netdata/netdata.conf file like so (in the [web] section):

 allow connections from = localhost
 allow dashboard from = localhost

Simply add your static IP address alongside localhost; you can add as many IP addresses to the configuration as you like by separating each one with a space!

Once you’ve made the required changes, you should restart the netdata daemon in order for the changes to take effect; you can do this by using the following command:

service netdata restart

There are a ton of other plugins you can configure, such as the web_log plugin for Nginx, which provides monitoring for request types, bandwidth and response codes. Find out more about these plugins and how to enable them at the Netdata website!

Windows Server 2016 as an NFS server for Linux clients

In this post I will explain how you can configure an NFS server on Windows Server 2016 and connect/mount the NFS exports on Linux clients. In my case, I wanted to run a Linux virtual machine whilst ensuring that the actual data resides on physical disks on my host machine; this way the data is automatically part of my nightly backup routine and I don’t need to run separate backup scripts on the VM(s).

A bit of background first…

In my home network I have a single (in an attempt to be eco-friendly) Intel i7 server running Windows Server 2016 Standard edition. I use this server for hosting my family media and files, various database engines and Active Directory for local development (I’m a software engineer by trade!), in addition to several Hyper-V virtual machines that perform various tasks; all of the virtual machines run a derivative of Linux.

I currently have the following virtual machines setup and running on it (under Hyper-V):

  • A Web Server and reverse proxy running Ubuntu Server 16.04 LTS – Hosting Gogs, Minio and various Nginx reverse proxy configurations for sites, services and APIs that sit on other VMs in my network.
  • An OpenVPN server running CentOS 7 – Providing secure VPN tunnel access for me when away from home.
  • A Jenkins server running Ubuntu Server 16.04 LTS – Used for automated code testing and continuous integration.
  • A MineCraft server running Ubuntu Server 16.04 LTS – Used by my daughter and friend to play online together.

In the past I used to run VMware ESXi and hosted everything in its own virtual machine for better isolation and performance. Since then, however, I have tested and been extremely happy with the performance of running virtual machines on top of Hyper-V and Windows Server, so when I re-built my home server several months ago I decided to go down that route instead.

Anyway, enough of all that, let me explain why I have such a need for this kind of set-up…

My home server has 1x SSD (500GB, for the host operating system and local applications) in addition to 2x WD Red 4TB hard drives in a hardware RAID1 configuration; I periodically back up this array over my LAN to a Buffalo NAS device.

My plan is to install a new VM running Ubuntu Server 16.04 that will host an instance of NextCloud, this will provide me, my family and friends with a free alternative to DropBox with masses of space in addition to all the other cool things that NextCloud offer such as encrypted video calls and the like.

By setting up an NFS server on the host operating system, instead of provisioning this Linux VM with a massive virtual hard disk (and taking drive space away from the host OS), I have provisioned it with a single 20GB virtual hard drive and will use NFS shares on my Windows Server to host the files on the physical disk. The files are thus automatically part of my backup routine, alleviating the need to run rsync or rsnapshot etc. on the VM and transfer data at regular intervals.

Installing NFS Server on Windows Server 2016

First up, we need to login to our Windows Server and open up the Server Management tool, once open, click on the large text link labelled “Add Roles and Features” as shown here:

Once you have clicked on the “Add Roles and Features” link you should then be presented with this wizard:

Accept the default “Role-based or feature based installation” and then click Next

On the next screen you’ll be asked to choose the server that you want to add the role or feature to, select your server from the list that appears (you’ll probably only have one in the list anyway!) and then click Next

You will now be presented with a screen titled “Select server roles“, expand the following sections, then check the “Server for NFS” option as shown in the screenshot below:

Once checked, click on the “Next” button…

The next screen will just ask you to “Select features“, you can simply click “Next“!

Finally, you’ll be shown a screen asking you to confirm the installation items, we now choose “Install“, this screen and the selection of features and roles to add should look as follows:

Great! We now have an NFS server running on our Windows 2016 Server!

Creating an NFS share (export)

Now that we have the NFS server installed, we can go and share (or “export”, as NFS likes to call it) a directory. As per my intro notes to this blog post, I plan to add this to my data RAID array.

So first up, let’s go and create a new directory on our data disk (in my case this is my D: drive). I’ve decided to call the directory “NFS” and then, inside that folder, we’ll create another directory called “VSVR-WEB040_data”. This folder will be explicitly shared with my VM (which is named ‘VSVR-WEB040’; the “_data” portion reflects what I will mount the share as locally on the VM, e.g. /data).

Now that you have an NFS server installed, you can share/export numerous directories to individual or multiple VMs, or even to other physical servers in your network.

The result of setting up this directory structure is as follows:-

Next up, we’ll right-click on the newly created folder and choose “Properties” – this will enable us to “Share” it as well as lock down access to a specific IP address (that being my NextCloud VM)…

From the Properties window, select the “NFS Sharing” tab and then click on the button named “Manage NFS Sharing”; this should then display the following window:

Ensure that the above screenshot matches your folder (eg. select all the checkboxes as per the above)

Next we’ll configure the permissions for the share, clicking on the “Permissions” button in the above screenshot will then display the following window:

As you can see from the above screenshot, the permissions for this share are very restrictive by default; this basically says that ALL MACHINES trying to access this share WILL NOT be granted any access.

We should leave the defaults as they are; we will instead create another permission granting access only to our specific VM. To do this, click on the “Add” button; the following screen should then appear:

I’ve entered my virtual server’s IP address into the “Add names” field already; you’ll then need to change the “Type of access” drop-down box to “Read/Write” and check the “Allow root access” checkbox.

Once completed, click the “OK” button!

That’s great, our Permissions form should now look as follows:-

Perfect! – We’re all done on the NFS server side configuration now!

Mounting the NFS share on the client-side

We can now mount the NFS share on our Ubuntu Server (the virtual machine). First we need to install the NFS tools, so we’ll log in to our server (I’m using root, but you should really use a user with sudo rights!)…

sudo apt-get install -y nfs-common

So before we configure fstab to automatically mount our NFS share at system boot, we’ll first test using the command line to make sure everything works as expected…

Before we can mount the NFS share we must first create a mount point, we will do this like so:

sudo mkdir /data

Now that we have created the mount point, we can mount the remote file-system as follows:

sudo mount -t nfs /data

Once mounted, you should be able to run the following command, which will create a text file with some content on the share…

echo "This is a test file" > /data/test.txt

We can now jump back over to our Windows server and check our NFS server directory, we should see a file named test.txt and when we open it in Notepad, the contents should appear as follows:-

All going well, that has hopefully worked a charm for you and we can now move on to ensuring that our VM automatically mounts the NFS share at boot.

If for whatever reason you wish to un-mount the share, you can do it like so:

umount /data

Configuring fstab to mount our NFS share on system boot

Using a text editor on the Linux VM, we will open up the /etc/fstab file and add the following line to the bottom of the file:

 /data nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0

You’ll obviously need to replace the server IP address with your own 😉
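For reference, an fstab entry has six whitespace-separated fields: device (here, server:export), mount point, filesystem type, mount options, dump flag and fsck pass order. A quick sanity check of the shape of the line, using a placeholder address (192.0.2.10 and the export path below are hypothetical stand-ins for your own server details):

```shell
# Hypothetical fstab entry; the server address and export path are
# placeholders for your own NFS server details.
entry='192.0.2.10:/NFS/VSVR-WEB040_data /data nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0'

set -- $entry               # split on whitespace into $1..$6
echo "field count: $#"      # a valid fstab line has 6 fields
echo "mount point: $2"
echo "fs type: $3"
```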

The result should look something like the below:

Once completed, save the file and you can now either attempt to automatically mount the share using this command:

sudo mount -a

…or you can reboot your server!

Once rebooted, login and you can then check and confirm that you have re-mounted your /data directory to the remote server by checking the output of:

df -h

You should be able to see and confirm the remote disk usage as demonstrated here:
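As a sketch of what to look for in that output (the sizes and server address below are illustrative placeholders, not real values):

```shell
# Sample "df -h" output with an NFS mount present (illustrative):
sample='Filesystem                         Size  Used Avail Use% Mounted on
/dev/sda1                           20G  4.1G   15G  23% /
192.0.2.10:/NFS/VSVR-WEB040_data   3.6T  1.2T  2.4T  34% /data'

# Confirm something is mounted at /data and show its source device
printf '%s\n' "$sample" | awk '$NF == "/data" {print $1}'
```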

OK, well that’s it, I hope this has been useful for you 🙂

Deploying and hosting ASP.NET Core applications on Ubuntu Linux

If you’ve read my other blog posts or seen some of my projects on my GitHub profile, you’ll know that I’m a real Linux fan. Although I develop in both PHP and C#, C#-based web applications are commonly hosted on the Microsoft Windows platform (although you could use Mono), so my first choice for web development has previously been a technology stack that let me choose whether to host on Windows, Linux, Mac or BSD.

Microsoft not so long ago released .NET Core – .NET Core is open-source and supports cross-platform operating systems.

I wanted to write a blog post about how to configure and host a simple (but functional) C#/ASP.NET MVC application on Ubuntu Linux. For this tutorial I created a very simple, working but fictional URL shortener service. I’ve built this using ASP.NET Core 5 and Entity Framework 6. Source code available here.

In this tutorial we will use the latest stable version of Nginx (from the Nginx PPA), which will be configured to reverse-proxy requests to our application server running Kestrel. systemd will be used to start our application server and monitor the process.

This tutorial assumes that you already have a user account set up that has been granted sudo rights. My virtual machine is configured with 2GB of RAM.

Download the source code

First up, we’ll download (clone) the C# source code to our user home directory from my public GitHub repository. Using the Git CLI tool we can download as follows:

mkdir ~/source
cd ~/source
git clone

That’s great – we now have the public repository downloaded to our user home directory under a folder named ‘source’.

Further on in this tutorial we will “publish” our application from this directory to our web hosting directory which will then be used to serve the application.

Setup Nginx

We will now add the official Nginx package repository to our server; this will enable us to download the required packages directly from Nginx.

So using vim, we will create a new file under /etc/apt/sources.list.d/ named nginx.list like so:

sudo vi /etc/apt/sources.list.d/nginx.list

The file content should be as follows:

deb xenial nginx
deb-src xenial nginx

Once done, save the file and we now need to update our package sources like so:

sudo apt-get update

If you get an error when running the update command (as per my screenshot below), you will need to import the key first (e.g. sudo apt-key adv --keyserver --recv-keys ABF5BD827BD9BF62) and then re-run the apt-get update command (the key can be found in the error message!!) like so:

Great! We can now get on with installing Nginx:

sudo apt-get install nginx

We can now start Nginx by running:

sudo systemctl start nginx

To test it’s all working as expected, enter the server’s IP or FQDN into a browser and check that you are presented with the Nginx “welcome” page like so:

Install Microsoft .NET Core

We can install .NET Core on our Ubuntu server by running the following commands (taken from the official Microsoft page):

sudo sh -c 'echo "deb [arch=amd64] xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver hkp:// --recv-keys 417A0893
sudo apt-get update
sudo apt-get install dotnet-dev-1.0.1

We can now test that we have it installed by running:

dotnet --help

All going well, you should see the following output:

Let’s move on…

Setup a hosting location for your web application

We need somewhere on our server to deploy to and store our published production web application. This could be a dedicated user’s home directory (making the service run as that user), but for this example we will just store it under /var/webapps/. Let’s create that directory now…

sudo mkdir /var/webapps


Deploy (publish) our web application

Although we’ve downloaded the source code for our application, we still need to “publish” it. Publishing a .NET Core application essentially bundles all the required run-time libraries (in the form of DLLs) and copies across our appsettings.json file as well as any public assets.

Our source code contains our application code but also makes references to external libraries, I’ve used NuGet to manage these dependencies.

We’ll now change into our application’s root directory, which will enable us to pull in/download all the required NuGet packages. Let’s do that now…

cd ~/source/lk2
dotnet restore

This will take a few minutes to complete but once done you should see the following output:

Now that’s completed, we can “publish” our application; this effectively compiles our code and bundles all the required dependencies into a specific directory:

sudo dotnet publish -c Release -o /var/webapps/lk2

The above command states that we want to build our project in the “Release” configuration (for production use), and we have specified the output path as /var/webapps/lk2.

You should see the following output:

We then need to set the permissions so that the user our service will run under (www-data in this example) has the required access rights; we do this by running:

sudo chown -R www-data:www-data /var/webapps/lk2

Let’s see if we can now run the server in this directory as the www-data user…

sudo -u www-data dotnet /var/webapps/lk2/lk2-demo.dll

I have enabled “auto migrations” at startup – Although not recommended for production environments, I wanted to ensure that the user following this tutorial can get up and running as quickly as possible and after all, this tutorial is really to prove the ability to host ASP.NET Core web applications on Linux…

Running the above command, you will see the following output; as you can see, our Entity Framework database migrations have been run automatically for us (and we now have a SQLite database created under /var/webapps/lk2).

You could also use Microsoft SQL Server if you wanted; you’d simply need to edit the appsettings.json file found in /var/webapps/lk2, change the driver to mssql and set the corresponding connection string. You could also check out my recent blog post about installing and using Microsoft SQL Server on Linux.

Let’s now kill the local Kestrel process by pressing CTRL+C on our keyboard and we’ll move on…

Create our systemd file

In order for us to control our web application service (and have it start automatically at boot) we will create a systemd service unit file. Using Vim we can run:

sudo vi /etc/systemd/system/lk2.service

Now copy the following minimal unit file into it (making any path, user and environment variable changes you require for your application or environment – the [Unit], [Service] and [Install] sections are all needed for systemctl enable to work):

[Unit]
Description=LK2 .NET Core App

[Service]
WorkingDirectory=/var/webapps/lk2
ExecStart=/usr/bin/dotnet /var/webapps/lk2/lk2-demo.dll
User=www-data
Restart=always

[Install]
WantedBy=multi-user.target

Great, now save the file!

We can now “enable” the service (making it auto-start at boot time) by running:

sudo systemctl enable lk2.service

Now we can manually start it:

sudo systemctl start lk2.service

If you wanted to check the service status you can run:

sudo systemctl status lk2.service

Checking the status of the service at this point should output the following:

Good stuff – let’s move on…

Configuring Nginx

I appreciate that we’re jumping around a bit here, but now that you have the daemon configured and your application deployed and running, we will head back to Nginx and configure it to reverse-proxy requests to Kestrel (running locally on port 5000)…

In this example I will show you the simplest way to reverse-proxy traffic to your application. In another blog post I will elaborate on this further with Nginx virtual host configurations, in addition to securing your application with HTTPS and setting some security headers, but for now we’ll simply prove this works…

Let’s open the Nginx configuration:

sudo vi /etc/nginx/conf.d/default.conf

Now replace the contents of the file with:

server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Next we will create a proxy configuration file that we can reuse in other virtual host configurations in the future. Create the new file like so:

sudo vi /etc/nginx/proxy.conf

Populate it with the following content:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 10m; 
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

In order for these settings to be picked up by Nginx we must include them in our main Nginx configuration file. To do this, let’s edit the file:

sudo vi /etc/nginx/nginx.conf

So, below the ‘http {‘ line, add ‘include /etc/nginx/proxy.conf;‘ as shown here:
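For clarity, the relevant part of nginx.conf should end up looking something like this excerpt (the surrounding lines will vary by distribution – leave the rest of the existing http block as it is):

```nginx
http {
    include /etc/nginx/proxy.conf;   # our shared reverse-proxy settings
    include /etc/nginx/mime.types;
    # ... rest of the existing http block unchanged ...
}
```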

Save the changes and quickly test that the configuration is still valid:

sudo nginx -t

All being well, we’ll now reload the Nginx configuration like so (alternatively you could use sudo nginx -s reload, which essentially does the same thing!):

sudo systemctl reload nginx.service

If all went well you should be able to access the web application by entering your server’s IP or FQDN in a browser and you should then see your application’s homepage like so:

Deploying code changes

In future, when you have made source changes you will need to commit those to Git, change into the source code directory (cd ~/source/lk2), pull the latest changes (git pull) and then re-run the publish command. Once the latest source has been built you will need to restart the application service by running sudo systemctl restart lk2.service.
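If you find yourself deploying often, those steps can be wrapped in a small helper script. This is a sketch only – the paths and the lk2 service name are assumptions carried over from this tutorial – and the DRY_RUN flag just prints each step so you can review it before running for real:

```shell
#!/bin/sh
# Hypothetical deploy helper for the lk2 app (paths and service name
# are assumptions from this tutorial). DRY_RUN=1 prints each step
# instead of executing it.
deploy_lk2() {
  run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
  }
  run cd "$HOME/source/lk2"
  run git pull
  run sudo dotnet publish -c Release -o /var/webapps/lk2
  run sudo systemctl restart lk2.service
}

# Preview the steps without executing anything:
DRY_RUN=1 deploy_lk2
```

Running it without DRY_RUN=1 would execute the four steps in order, stopping you from mistyping them by hand each time.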

We’re done!

I hope you found this tutorial useful – If there is anything specific you want me to write about/elaborate on this post please drop me an email 🙂

Installing Microsoft SQL Server on Ubuntu Server 16.04

I’m really liking what Microsoft have been doing recently, pushing more of their products through open-source channels and supporting Linux and Mac operating systems.

Earlier this year Microsoft released SQL Server vNext, which can be installed and run on Linux.

In this post I will cover the process of installing and configuring Microsoft SQL Server on an Ubuntu Linux LTS (16.04) server, in addition to creating a database and a user account, configuring the server firewall and automating daily backups of your databases.

This tutorial requires your server to have at least 4GB of RAM (the minimum recommended by Microsoft for SQL Server vNext) and to be running Ubuntu 16.04. It is also recommended that you are logged into your server as a non-root user with sudo rights granted.

Let’s get going…

Installing SQL server

To install SQL server we need to add a package repository, we will do this by running the following commands:

curl | sudo apt-key add -
curl | sudo tee /etc/apt/sources.list.d/mssql-server.list

We will now update our package lists:

sudo apt-get update

We can now install Microsoft SQL server using this command:

sudo apt-get install mssql-server

All going well, Microsoft SQL Server vNEXT is now installed and as per the below screenshot we now need to configure it using the supplied command:


Lets run that now:

sudo /opt/mssql/bin/mssql-conf setup

You will now be asked to accept the terms (if you do, type Y and then press the return key). Next you’ll be asked to set a password for the system administrator (the sa account); the password will need to be a complex one – if you don’t enter a strong enough password you’ll get an error message and will need to re-run the above command to try again.
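As a rough illustration of what “complex” means here – my approximation of the default policy is at least 8 characters drawing on at least three of the four character classes (the real SQL Server policy has further rules) – you can sanity-check a candidate password with a quick shell sketch before running setup:

```shell
# Rough approximation of the SQL Server default password policy:
# at least 8 characters, using 3 of the 4 character classes
# (lowercase, uppercase, digits, symbols).
check_pw() {
  p=$1
  classes=0
  if [ ${#p} -lt 8 ]; then echo weak; return; fi
  case $p in *[a-z]*) classes=$((classes+1)) ;; esac
  case $p in *[A-Z]*) classes=$((classes+1)) ;; esac
  case $p in *[0-9]*) classes=$((classes+1)) ;; esac
  case $p in *[!a-zA-Z0-9]*) classes=$((classes+1)) ;; esac
  if [ $classes -ge 3 ]; then echo ok; else echo weak; fi
}

check_pw 'Str0ng!Passw0rd'   # prints: ok
check_pw 'password'          # prints: weak
```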

That’s it – Microsoft SQL Server is now running on your Ubuntu Linux server.

Before we can connect to it on our server we need to install the CLI tools… we will do this in a later step! You can skip that step if you don’t require CLI access (or don’t wish to do local backups).

Install SQL Server Agent

SQL Server Agent enables the running of scheduled jobs on Microsoft SQL Server. It is not installed out of the box and is totally optional; if you decide you want it installed you can do so as follows:

sudo apt-get install mssql-server-agent

For this to take effect we need to restart the MSSQL service by executing:

sudo systemctl restart mssql-server.service

Great – That’s all done now! Let’s move on…

Installing Full-text search support

OK, so we now have Microsoft SQL Server installed on our Ubuntu machine but, out of the box, it doesn’t support full-text search. No sweat – we can add this using these commands:

sudo apt-get install mssql-server-fts

Now restart the MSSQL service to enable the full-text search functionality…

sudo systemctl restart mssql-server.service

Simple right!

Installing the SQL Server CLI Tools

Like we did for the MSSQL server packages, we need to add another package repository before we can install the CLI tools.

Import the repository key and add the package repository like so:

curl | sudo tee /etc/apt/sources.list.d/msprod.list

Again, we now need to update our package sources:

sudo apt-get update

Then install the CLI tools package like so:

sudo apt-get install mssql-tools unixodbc-dev

We can now access and use the MSSQL CLI tools (sqlcmd – SQL CLI client and bcp – Backup CLI tool) under /opt/mssql-tools/bin/.

You may wish to add this directory to your user’s PATH; alternatively you can symlink the binaries (which I prefer to do) like so:

sudo ln -sfn /opt/mssql-tools/bin/sqlcmd /usr/bin/sqlcmd
sudo ln -sfn /opt/mssql-tools/bin/bcp /usr/bin/bcp

Awesome!  – That’s now installed and ready for us to use further down!

If you would like to test it out, try connecting like so:

sqlcmd -S localhost -U SA -P '<YourPasswordHere>'

Then you can attempt to query the list of databases like so (in sqlcmd, a batch is executed when you enter GO on a line by itself):

SELECT Name FROM sys.Databases;
GO

To exit out of the sqlcmd tool type:

exit

…and press enter!

Configuring our firewall

If you’re not running a firewall on your server (eekk!!) you can skip this step!

Microsoft SQL Server requires TCP port 1433 to be open on your firewall. If you are using IPTables you should add this to your configuration:

-A INPUT -p tcp --dport 1433 -j ACCEPT

UFW users can type this command:

sudo ufw allow 1433

You may wish to further lock down the allowed incoming IP address to only allow connections from your application server and/or office IP address.

Creating a new database

Firstly we need to connect to the server using the sqlcmd tool like so:

sqlcmd -S localhost -U SA -P '<YourPasswordHere>'

Once connected we can run the following statements to create a new database; in this example, we’ll create a database called “MyDatabase”:

CREATE DATABASE MyDatabase;
GO
Now leave the sqlcmd tool again by typing exit.

If you want to delete the database in future (but please leave it for now), you can run the following statement:

DROP DATABASE MyDatabase;

Creating a database user

Although at this point we are able to log in using the sa account, it is best to configure our applications to connect using a dedicated account; a dedicated account limits the damage that any leaked credentials could do, preventing access to (and potential destruction of) all of the databases on our server. First, connect to the server again:

sqlcmd -S localhost -U sa -P '<YourPasswordHere>'

We now need to select the database and then we can create the account and grant permissions like so:

USE MyDatabase;

Next up, we create a server login:

CREATE LOGIN joebloggs WITH PASSWORD = '<NewUserPasswordHere>';

We now create a database user that will be linked to the login name:

CREATE USER joebloggs FOR LOGIN joebloggs;

We can now grant database “ALTER” permission to our new user:

GRANT ALTER TO joebloggs;

And also, grant database “CONTROL” permission to the user:

GRANT CONTROL TO joebloggs;
Again, type exit and press return to leave the CLI tool.

Configuring database backups

I have knocked together a shell script that you can use to automatically back up all user databases on your server. The script connects to your SQL server, gets a list of all databases (minus the system ones) and then proceeds to do a “full backup” of each database.

The script can be found and downloaded here:

If you wish to use this, let’s download it and set it up now…

sudo curl -o /usr/bin/mssqlbackup
sudo chmod +x /usr/bin/mssqlbackup

Let’s see if it works. Type the following command – we expect it to moan at this point, as we’re not passing any arguments and haven’t set our backup directory!

mssqlbackup
By default the script will try to use /var/dbbackups; the backup directory is actually set by the user when they call the script, but for now let’s use this default – we’ll create it now and give our mssql user access to it:

sudo mkdir /var/dbbackups
sudo chown mssql /var/dbbackups
sudo chmod -R 0740 /var/dbbackups
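As a quick aside on what mode 0740 actually grants – owner full access (7), group read-only (4), everyone else nothing (0) – you can demonstrate it on a throwaway directory (this sketch uses GNU stat, which ships with Ubuntu):

```shell
# Create a scratch directory and apply the same mode we used above...
d=$(mktemp -d)
chmod 0740 "$d"
# ...then read the mode back: 7 = owner rwx, 4 = group r, 0 = others none.
stat -c '%a' "$d"   # prints: 740
rmdir "$d"
```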

You can now attempt a database backup if you wish…

mssqlbackup "/var/dbbackups" 5 "localhost" "sa" "<YourPasswordHere>"

Running the above command (assuming you have created the example “MyDatabase”) should show something similar to:

Checking the directory contents (sudo ls -alh /var/dbbackups) should show a list of backups (assuming you have created some already!)

Let’s now take this a step further. Using a cron job we can automate these daily backups, so let’s create a cron job to have all our databases backed up daily at 20:00:

sudo vi /etc/cron.d/mssql-backups

Now add the following content (replace with your password):

0 20 * * * root /usr/bin/mssqlbackup "/var/dbbackups" 30 "localhost" "sa" "<YourPasswordHere>"

Save the file and exit! You should now start noticing automated nightly backups of all your databases – this command is set to keep 30 days’ worth of backups too!
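The 30-day retention works by deleting old backup files; the core of such a sweep can be sketched like this (the .bak extension and file naming are my assumptions for illustration – check the script itself for its exact behaviour):

```shell
# Simulate a backup directory with one stale and one fresh backup...
dir=$(mktemp -d)
touch -d '40 days ago' "$dir/MyDatabase_old.bak"   # beyond the 30-day window
touch "$dir/MyDatabase_new.bak"                    # taken today
# Remove .bak files whose modification time is older than 30 days...
find "$dir" -name '*.bak' -mtime +30 -delete
ls "$dir"   # only MyDatabase_new.bak remains
```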

Managing our databases using Microsoft SQL Server Management Studio

If you have any client machines running Microsoft Windows, you can download SSMS (SQL Server Management Studio) from the Microsoft website; installing and using that, you can connect to your new database server, manage your databases and execute queries etc.

When running SSMS, you will need to enter your server’s host name or IP address and log in using SQL Server Authentication like so:

Keeping your server updated…

To update Microsoft SQL Server you should periodically download the latest package sources and update all of the components that you have installed. If you have installed everything (SQL Server, SQL Server Agent and the full-text search functionality) you should be running the following commands:

sudo apt-get update
sudo apt-get install mssql-server # Updates the SQL Server engine itself
sudo apt-get install mssql-tools unixodbc-dev # Updates the SQL Server CLI tools
sudo apt-get install mssql-server-agent # Only run if you have installed the SQL Server Agent
sudo apt-get install mssql-server-fts # Only run if you have enabled the Full-text search functionality

After upgrading all of your packages it is recommended that you restart the Microsoft SQL server like so:

sudo systemctl restart mssql-server.service

I hope you found this blog post useful and I wish you all the best with running your Microsoft SQL databases on Linux!!

Monitoring Linux Servers with Monitorix

Monitorix is a mature and open-source system monitoring tool for UNIX and Linux; it is written in Perl and is resource-friendly enabling it to be used quite happily on Raspberry Pi’s and embedded devices too.

Although there are other monitoring and graphing tools for Linux that I’ve used before, such as Nagios and Cacti, I wanted to customise and create a single dashboard view for all of my Linux servers; I therefore really just wanted something small that could monitor, graph (using RRD) and enable me to embed the graphs in a custom web page.

Following some research (Googling), I settled on Monitorix.

Monitorix works by running a Perl daemon in the background which monitors and records resource usage of system services, network interfaces, HDD temperatures etc. (these can be customised in the config file, explained below) and has a lightweight, embedded web server which enables you to view the graphs via a web browser.

I’m running Ubuntu Server 16.04 LTS on most of my servers now but the same instructions should work for older versions of Ubuntu and other versions of Debian too!

First of all we need to add the APT repository to our server (you could alternatively download the .deb and install that way instead if you wanted), so first we’ll edit our sources.list file:

sudo nano /etc/apt/sources.list

At the bottom of the file, add the following lines:

# Monitorix repository
deb generic universe

Next we need to import the GPG key for the repository, run the following command to add it:

sudo apt-key add izzysoft.asc

Now we will need to update our local package list cache, run the following command to pick up the new packages that are now available to us so we can then install Monitorix:

sudo apt-get update
sudo apt-get install monitorix

Excellent! At this point you should now be able to navigate to http://{your_server_ip}:8080/monitorix and assuming that your firewall permits it, you should now see the Monitorix landing page.


Graphs will take a few minutes to start outputting statistics but you’re now up and running.

Personally I prefer to change the default port from 8080 to a different number and remove the /monitorix path requirement from the end. This is possible by updating the monitorix.conf file located at /etc/monitorix/monitorix.conf; after making changes you need to restart the Monitorix daemon by running:

service monitorix restart
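For reference, the options involved look roughly like this excerpt of monitorix.conf (the key names here are from memory and may differ between Monitorix versions – check the comments in the shipped file before editing):

```
# Serve the graphs at the root path instead of /monitorix...
base_url = /

# The built-in web server section, where the listen port is set...
<httpd_builtin>
        enabled = y
        port = 8081
</httpd_builtin>
```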

If you intend to monitor your MySQL server, you will need to install the Perl MySQL driver; you can do so by running this command. Note that the MySQL module also requires a non-privileged user to be set up:

sudo apt-get install libdbd-mysql-perl

I also found that initially the statistics for MySQL on my server were not being captured; this was because, on Debian-based systems, there is an additional configuration file found under /etc/monitorix/conf.d/00-debian.conf which was overriding the default MySQL connection parameters, and which you will need to update.

I hope you found this useful. I would certainly recommend checking out the configuration file, as there are loads of possible features and settings to really customise your setup.


Installing PHP7 on Windows Server 2012 R2 and IIS 8

At the time of writing PHP 7.0.6 is the latest stable release and given that we are going to be using Microsoft’s IIS as the web server on Windows Server 2012 R2 (x64) we will need to download the latest version of PHP 7.0.x ensuring that we download the NTS (Non-thread safe) version and the x64 build.

To get PHP7 installed on our server we will break the installation down into a number of separate, logical steps; these are as follows:

  • Download the PHP binaries
  • Download and install the VC14 (Visual C++ 2015) runtime.
  • Install the CGI module in IIS
  • Configure PHP in IIS

Downloading the PHP binaries

You can download the latest PHP binaries from the PHP website found here:

You must pay special attention to the version that you download; given that my server(s) run the 64-bit version of Windows, I will naturally download the x64 NTS (non-thread-safe) version of PHP.

Once downloaded, extract all the files to C:\PHP, this directory will be used to store our PHP binaries and configuration files.

Download and install the Visual C++ 2015 runtime

PHP7 (at the time of writing at least) is compiled with Visual Studio 2015 and thus needs the VC 2015 runtime installed on the server. Again, it is important that you install the version that matches your server’s hardware architecture (x86 or x64) – alternatively, install both and you can’t go wrong!

You can download the latest VC C++ 2015 runtimes from the Microsoft website here:

Install the runtime as described above; it is recommended that you then reboot your server for the changes to take full effect!

Installing the CGI Module for IIS

We now need to install the CGI module to enable IIS to “talk” to PHP.

Using Explorer, open up the Administrative Tools section of the Control Panel, which can be found at this location: Control Panel\All Control Panel Items\Administrative Tools

Once the Server Manager tool opens, you should see an Add Roles link; click that and ensure that “CGI” is ticked as per the below screenshot:


Then click Next and Continue to install the CGI module, it is at this point that I would recommend you restart your server (or IIS at the minimum!)

Note: Although this post is for Windows Server 2012 R2, if you have stumbled across it specifically for installing PHP7 on Windows and are using Windows Vista or 7, you can find the CGI feature under: Control Panel\Programs and Features. After opening the Programs and Features icon you should click on the link to the left labeled Turn Windows Features On or Off – In the tree of services that are then displayed, navigate to Internet Information Services\World Wide Web Services\Application Development Features and then tick CGI.

Installing PHP7

Now that we have the required runtimes installed and IIS has the CGI module enabled we can now start the final part of the setup and that is to install PHP!

Using the Administrative Tools found under the Control Panel again, this time we are going to open up the Internet Information Services (IIS) Manager application:

Now, from the left hand menu click on the server’s name and then from the main panel double click the Handler Mappings icon as shown below:


You will now be presented with the current handler mappings supported by the server, on the right hand side of the window you should see a list of Action links, click on the link named Add Module Mapping… as shown here:


Once the Add Module Mapping window appears, populate the values as follows:


Then click on the Request Restrictions button, tick Invoke handler only if request is mapped to: and select the File radio button…


Now click Ok and Ok again, the module mapping is now configured!

Although not mandatory, it is recommended that you now set a default document so that directory-level access will automatically serve the “index” page; it is common when serving PHP sites to have “index.php” configured as the default index page…

To set a new index page, select the server name from the left hand menu and then double-click on the Default Document icon as shown below:


On the right hand menu of the Default Document window you will have the option to Add a new one, click the Add link and then, in the window that pops up type index.php and then click Save as shown here:


Great stuff! – That’s it; adding a new site with an index.php file in the root of the home directory should now work!

To test it out, create a file named index.php with the following content:

<?php phpinfo(); ?>

Load the file and you should then be able to see all of the PHP runtime configuration and loaded extensions.

At this point we have PHP7 installed in its vanilla form; this means that no other PHP extensions are enabled at present and the timezone etc. has not yet been set.

We will now copy one of the PHP configuration templates to the “working” copy and then make some adjustments. Using the Command Prompt, run the following command:

copy C:\PHP\php.ini-production C:\PHP\php.ini

Firstly we will set the server’s timezone, so find and uncomment this line and then set your timezone according to this list:

;date.timezone =

As I live in England (GB), which is where my server is located, I will use ‘Europe/London’ as my timezone; therefore my line becomes:

date.timezone = Europe/London

Now we will configure the extension directory, so find this section:

; Directory in which the loadable extensions (modules) reside.
; extension_dir = "./"
; On windows:
; extension_dir = "ext"

…and un-comment (remove the preceding “;” character) the extension_dir = “ext” line so that it now becomes:

; Directory in which the loadable extensions (modules) reside.
; extension_dir = "./"
; On windows:
extension_dir = "ext"

Finally we will uncomment some CGI “fixes” for IIS; this will improve security and performance. Uncomment the following lines and set them to match the values below:

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers. Left undefined, PHP turns this on by default. You can
; turn it off here AT YOUR OWN RISK
; **You CAN safely turn this off for IIS, in fact, you MUST.**
cgi.force_redirect = 0

; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI. PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is. For more information on PATH_INFO, see the cgi specs. Setting
; this to 1 will cause PHP CGI to fix its paths to conform to the spec. A setting
; of zero causes PHP to behave as before. Default is 1. You should fix your scripts
cgi.fix_pathinfo = 1

; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate
; security tokens of the calling client. This allows IIS to define the
; security context that the request runs under. mod_fastcgi under Apache
; does not currently support this feature (03/17/2002)
; Set to 1 if running under IIS. Default is zero.
fastcgi.impersonate = 1

Once you’ve saved the php.ini file you can restart your app pool(s) or simply restart IIS for the changes to take effect…

I hope you found this useful!


Setup an OpenVPN site-to-site remote router (OpenVPN client) on Ubuntu Server 14.04 LTS

In my last couple of blog posts (here and here) I demonstrated how to set up an OpenVPN server using Windows Server 2012 R2 and enable IP forwarding to give roaming OpenVPN clients access to the server network; today I will explain how to set up an Ubuntu Server 14.04 LTS based server which we will ultimately use as a site-to-site client router.

To give you some background of what I’m doing, I’m going to use my existing OpenVPN server setup as per my last couple of posts (here and here), but I now want to set up a Raspberry Pi 2 (running Ubuntu Server 14.04 LTS) as a router, given that it consumes little power and is small in size. The plan is to set this up at my mum’s house so that when it’s connected she can access my home network and vice-versa (a site-to-site VPN); this will enable her to access my file servers, store all her photos and ensure that they are always backed up to the cloud (my home file storage server runs FreeNAS using the ZFS file system and automatically backs up to a cloud storage service daily).

A couple of considerations here. Firstly, my mum’s home router is a BT HomeHub and not something you can set static routes on (these kinds of features normally only come with commercial-grade routers); therefore I will instead install a DHCP server on the Raspberry Pi 2, in addition to the OpenVPN client, to enable the publishing of static routes to the clients on her local network. The set-up will ensure that only local network traffic bound for the server network is routed via the Raspberry Pi 2 (over the VPN), while local internet browsing traffic continues to route through the BT HomeHub (the default gateway) to maintain a fast web browsing experience!

Let’s start…

So first of all, I will make the assumption that you already have a clean version of Ubuntu Server 14.04 installed on your Pi2 (or other hardware).

Next, you should set the hostname of the server. It is important that the hostname matches the SSL certificate common name that we will set up later, so for now I would recommend you set the hostname (I’ll use ovpn-router-aaa in this example – just replace it with whatever you wish to use!) and update the /etc/hosts file so that the hostname resolves locally, like so:

echo "ovpn-router-aaa" > /etc/hostname

Next, update the loopback alias line in /etc/hosts (normally this will be line #2) so that it ends with the hostname (unless it already does):

nano /etc/hosts

…and amend the line so that it ends with:            ovpn-router-aaa

Now save the file and try to ping your hostname like so:

ping ovpn-router-aaa

It should resolve and respond; by doing this, the sudo command will stop complaining about being unable to look up the hostname – as well as many other benefits of course! 😉

Next you should give some thought to the IP addresses that you will use. For the new remote network I will be using the 10.10.0.x subnet; you can of course use whatever you like, and I would recommend that you adjust any references to this subnet below if you choose to use a different one.

So, for clarity, will be the static IP address that will be assigned to the BT HomeHub router at my mum’s house, will be the static IP address that will be assigned to the Ubuntu Server that will be running the OpenVPN client and the DHCP server and will provide the routing for our VPN LANs. We will keep – free for any static IP address requirements and will configure the DHCP server running on the Ubuntu Server to distribute a scope of – to internal clients.

Now that we have the IP addressing out of the way, it is time to configure the server’s NIC with these details, so we will now edit the /etc/network/interfaces file like so:

nano /etc/network/interfaces

…and adjust it so it looks similar (or the same if you are using the same IP addressing as myself):

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static

Next you will need to restart the network interfaces (or simply reboot the server) for the changes to take effect.

Create client certificates

We will need to create the SSL certificate for the OpenVPN client (router) that we are configuring, so in order to do this, logon to your Windows Server and open up the command prompt and run the following commands:

cd \
cd "C:\Program Files\OpenVPN\easy-rsa"
build-key.bat ovpn-router-aaa

You can accept the defaults when asked for the Country, Org unit etc. but ensure that when prompted for the Common Name and Name attributes that these match your hostname (eg. ovpn-router-aaa) and ensure that when prompted for a challenge password you skip it!!

If you would like to re-cap the process of creating the client certificates, see my original post here.

Once you have created the client certificates you need to locate the following files in the C:\Program Files\OpenVPN\easy-rsa\keys directory and keep them for later (we will need to upload these to our Ubuntu Server!):

  • ca.crt
  • ovpn-router-aaa.crt
  • ovpn-router-aaa.key

Enable IP forwarding

We now need to enable IP packet forwarding on our Ubuntu Server to enable the routing functionality on our server. To enable IP forwarding we need to run the following commands:

nano /etc/sysctl.conf

We need to uncomment the following line and make sure it is set to ‘1’ (and not ‘0’):

net.ipv4.ip_forward=1
Save the file and now run the following command to enable the change:

sysctl -p /etc/sysctl.conf
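You can confirm the change took effect by reading the value back, either with sysctl or straight from /proc (a value of 1 means forwarding is on):

```shell
# Read the live kernel setting; prints 1 when IP forwarding is enabled.
cat /proc/sys/net/ipv4/ip_forward
```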

Great stuff! – Lets move on…

Install OpenVPN

We will now install OpenVPN and configure it:

apt-get install openvpn

We now need to configure the client VPN configuration file, to do this we will create a new configuration file:

nano /etc/openvpn/vpn.conf

We will need to populate it with the following content (replace XXX.XXX.XXX.XXX with your OpenVPN server public IP address or FQDN):


client
dev tun
proto udp
remote XXX.XXX.XXX.XXX 1194

log-append /var/log/openvpn.log

resolv-retry infinite


ca ca.crt
cert client.crt
key client.key

verb 3


Now upload the ca.crt, ovpn-router-aaa.crt and ovpn-router-aaa.key files to the server (into the /etc/openvpn directory). For best practice (this seems to be the convention in the OpenVPN community), we will now rename both of the ovpn-router-aaa.* files to simply ‘client’:

cd /etc/openvpn
mv ovpn-router-aaa.crt client.crt
mv ovpn-router-aaa.key client.key

We will now configure OpenVPN to start automatically and connect to the VPN at boot, to do this we need to edit the following file:

nano /etc/default/openvpn

…now set (or uncomment) the AUTOSTART directive like so:

AUTOSTART="all"
Save the file!

By restarting the openvpn service the VPN configuration should now try to connect to the remote network; let’s test it out:

service openvpn restart

You can check the status of this by looking at the OpenVPN logs like so:

tail /var/log/openvpn.log

Any error messages should be visible here but you should be all good!

Test ping!

If you followed my other tutorials then you will remember that the IP address for our VPN server (on the tun interface) sits on the 10.8.0.x subnet – with OpenVPN’s defaults the server itself is – therefore let’s see if we can ping it:

Assuming all went well you should now be getting a response back from the server!

Test reboot functionality…

At this point, it may be a good idea to reboot the Ubuntu router, then log in and immediately try pinging the VPN server again (to check that the OpenVPN service has auto-connected at boot).

Install a DHCP server and advertise the static routes to our LAN clients

So at this point we have our router connecting to our remote network and we have tested that we can route to the server over the 10.8.0.x subnet. Now we need our local client machines (those on the client network) to know how to route traffic to the remote (server-side) LAN – in my situation this will be my home network!

We will therefore configure DHCP on the Ubuntu Server and advertise a static route to our clients, to ensure that they don’t have to manually add the static routes on their side:

# Stop OpenVPN service just for interface reasons...
service openvpn stop

# Install the DHCP server...
sudo apt-get install isc-dhcp-server

Now we will tell the DHCP service to only run on our eth0 interface (so we don't try to advertise DHCP leases to other networks, including over the VPN tunnel) by adding eth0 to the INTERFACES=”” directive:

nano /etc/default/isc-dhcp-server

So simply replace the line INTERFACES=”” with INTERFACES=”eth0” and then save the file!
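After the edit, the file should therefore contain:

```shell
# /etc/default/isc-dhcp-server - only serve DHCP on the LAN-facing interface
INTERFACES="eth0"
```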

Now we will set our configuration up; edit the main DHCP service configuration file here:

nano /etc/dhcp/dhcpd.conf

We will now set it up as per our requirements; the following file contains everything that we will need – copy and paste it, and tweak where required:

ddns-update-style none;
log-facility local7;

subnet netmask {
    # We set the BT HomeHub as the default gateway...
    option routers;
    option subnet-mask;
    option broadcast-address;
    # The domain name for the local network
    option domain-name "arads.local";
    # The name of your ActiveDirectory Domain (and other local domains)
    option domain-search "arads.local", "";
    # To resolve computer names from local DNS, I set this to the Windows 2012 R2 Server
    option domain-name-servers;
    # Put all the static routes for your other networks here!!
    option static-routes,;
    # If you have a server running NTP, you can set it here!
    option ntp-servers;
    default-lease-time 86400;
    max-lease-time 86400;
    # The IP range that we will serve...
    range ;
}

Save the file and restart the service like so:

service isc-dhcp-server restart
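If the service fails to start, dhcpd can test-parse the configuration for you before you dig further; a quick sketch (dhcpd's -t flag runs it in test mode and -cf points it at the config file to check):

```shell
#!/bin/sh
# Sanity-check a dhcpd configuration file before (re)starting the service.
check_dhcp_conf() {
    if ! command -v dhcpd >/dev/null 2>&1; then
        echo "dhcpd not installed"
    elif dhcpd -t -cf "$1" >/dev/null 2>&1; then
        echo "syntax OK"
    else
        echo "syntax errors found"
    fi
}

check_dhcp_conf /etc/dhcp/dhcpd.conf
```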

OpenVPN Server Configuration changes

Now we need to log on to our Windows Server and make a slight change: we need to add our new route using the “push” directive, and add a new directive for a CCD (Client Configuration Directory).

On the Windows server, using Notepad (or another text editor) open up the file: C:\Program Files\OpenVPN\config\server.ovpn

Add this new “push” directive (under the ones that already exist):

push "route"

We now need to un-comment (or add) the following directives further down to enable client specific configuration:

client-config-dir ccd

Let's create the ‘ccd’ directory now; this should be created under C:\Program Files\OpenVPN\config – do this now!

Next up, we have to create a new configuration file in the new ccd directory (C:\Program Files\OpenVPN\config\ccd). This file should be named the same as the hostname/certificate common name – in my scenario, the file should therefore be named ‘ovpn-router-aaa’ – and you MUST ensure that the file DOES NOT have a file extension. If you are using, for example, Microsoft Notepad, it will add one automatically, so make sure you remove it if required!

In the file we have to specify the internal route (using the iroute directive); this tells the OpenVPN server that the remote LAN subnet sits behind this particular client – in our case, the Ubuntu router that will be located at my mum's house – so the server knows where to send traffic destined for that LAN.

The contents of the file should be as follows:

# Set a static IP address for the Router's client connection (to OpenVPN)

# Set the internal IP range for this network.
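Since the concrete addresses in this tutorial are specific to my networks, here is a purely illustrative version of the finished ccd file – the 10.8.0.x and 192.168.x.x values are placeholders, ifconfig-push is my assumption for pinning the client's VPN address (the /30 pair shown assumes OpenVPN's default “net30” topology), and iroute is the directive described above – substitute your own values:

```
# Set a static IP address for the Router's client connection (to OpenVPN)
# (placeholder /30 pair, assuming the default net30 topology)
# Set the internal IP range for this network, i.e. the LAN behind this client
# (placeholder subnet)
```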

Save the file and restart the OpenVPN service using the Administrative Tools > Services panel.

Given that we have just restarted the OpenVPN service you should force all your clients/other VPN routers to reconnect to ensure they get the latest configuration!

Configure IPTables

Ubuntu Server comes with iptables installed by default; we need to add two persistent rules to masquerade (NAT) the traffic routed between the tunnel and LAN interfaces, like so:

apt-get install iptables-persistent

# Masquerade (NAT) traffic leaving via the tunnel and the LAN interface...
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Save the current IP tables configuration so it is persistent.
sudo /etc/init.d/iptables-persistent save
sudo /etc/init.d/iptables-persistent reload
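One thing worth checking on the Ubuntu router itself: for packets to actually flow between eth0 and tun0 the kernel must have IP forwarding enabled (the iptables rules above only NAT the traffic; they don't switch forwarding on). A quick sketch to check it:

```shell
#!/bin/sh
# Report whether kernel IP forwarding is enabled on this Linux machine.
ip_forward_status() {
    if [ "$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)" = "1" ]; then
        echo "enabled"
    else
        echo "disabled"
    fi
}

echo "IP forwarding is $(ip_forward_status)"

# If it reports "disabled", enable it (and persist it across reboots) with:
#   sudo sysctl -w net.ipv4.ip_forward=1
#   echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
```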

I would suggest rebooting your Ubuntu Server at this point!!

Adding the new route to the DHCP server on the server network.

We will now RDP to our Windows Server (the one that is running the OpenVPN server) and add the new static route to our DHCP configuration.

Add the new static route (pointing at the remote client network) to your existing routes:


Ensure that the router address is your Windows Server's physical NIC address.

To check that it has been successfully added to the DHCP configuration, renew the IP address on one of your server LAN clients and then run the following command:

route print

Or if you are using a Mac, instead use:

netstat -nr

If you are using a Mac, you should now see the route in the list.


We should now be able to ping the remote client network devices from our client machines (and our Windows server) like so:

# Ping the BT HomeHub (the client network Default Gateway)

# Ping the Ubuntu Server (VPN router)

# Ping out first DHCP client (assuming one is connected!)...

Congratulations – You should now be all up and working!!

A couple of things to keep in mind…

  • If you have any servers with manually entered static IP addresses (not distributed by DHCP) these will need the static routes added manually if you wish to have them accessible to the other subnets.

Enabling OpenVPN clients to access to the LAN.

In my previous post I wrote about how to setup an SSL VPN server on Windows 2012 R2 and enable external network access to the server using OpenVPN.

This article will walk you through the process of configuring IP forwarding on our Windows server and advertising static routes so that VPN clients can access network devices on the LAN; out of the box, OpenVPN will only allow the clients to access resources on the OpenVPN server itself.

This article will cover the following things:

  • Enable IP Forwarding on Windows Server 2012 R2 (so that our VPN traffic can route to our internal network and vice-versa)
  • Add static routes to our server.ovpn configuration so the routes are advertised to the client machines so they understand how to route to our LAN network.
  • Add static routes to our internal network clients (using Windows DHCP and I will also demonstrate adding them manually for servers using static IP addresses) so that LAN clients and servers can “see” the VPN clients. – What goes up must come down!!
  • Use our internal DNS server for name resolution by adding some additional client configuration to the server.ovpn file to enable better hostname resolution for a more “transparent” configuration.

Let's get going…

Enable IP forwarding

To enable IP forwarding on the server we will need to use Regedit (the Windows Registry Editor). This change is very simple to make, and although the same result can be achieved by enabling Routing and Remote Access on the server, there is little point given that we simply don't need it.

On the server, open up Command Prompt and run:

regedit
Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Double click the IPEnableRouter entry and set the Value data field to ‘1’

The result of which should look as follows:


You can now close Regedit!
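If you prefer the command line, the same registry value can be set from an elevated Command Prompt using the built-in reg tool (a sketch – it writes exactly the value described above):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v IPEnableRouter /t REG_DWORD /d 1 /f
```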

At this point I had to restart my server as the IP Forwarding did not appear to work immediately! – I’d therefore recommend that you restart your server at this point too!

Add static routes to our server.ovpn configuration

By adding a static route for our internal network to the server.ovpn file, the route will be downloaded and set on the client machines when they connect to the VPN; this is required so that the client machines understand how to route to our LAN.

In our example we will assume that our internal network subnet is: and we will use the default OpenVPN subnet of for the VPN clients.

To add the static route we need to edit our OpenVPN Server Configuration file; using notepad open the following file:

C:\Program Files\OpenVPN\config\server.ovpn

Now scroll down the file until you find this section:

# Push routes to the client to allow it
# to reach other private subnets behind
# the server. Remember that these
# private subnets will also need
# to know to route the OpenVPN client
# address pool (
# back to the OpenVPN server.
;push "route"
;push "route"

As you can see, there are already two examples of how to add routes, but instead of deleting the examples (the ‘;’ character marks a comment!) we'll add a new one below them:

push "route"

This will tell OpenVPN clients that when the computer tries to access any IP address in that subnet, it should route through our OpenVPN server (as the gateway for this network).

You should also find the following configuration section and uncomment (remove the ‘;’ character) the client-to-client directive as demonstrated below:

# Uncomment this directive to allow different
# clients to be able to "see" each other.
# By default, clients will only see the server.
# To force clients to only see the server, you
# will also need to appropriately firewall the
# server's TUN/TAP interface.
client-to-client

For the changes to take effect, save the file and restart the OpenVPN Service from the Control Panel > Administrative Tools > Services panel.

If all has gone well, your VPN clients should now be able to route to the network.

Add static routes to our LAN connected computers so they can “talk” to our VPN clients

There are a number of ways in which we can advertise the route to our network devices on the LAN – for example, you could add the static route on the primary gateway (e.g. your router) – but for simplicity I will show you how to add these static routes via DHCP. Given that we are using Microsoft DHCP services alongside Microsoft DNS services, it makes sense to do it this way:

Let's open up the DHCP Server MMC by navigating to:

Control Panel > Administrative Tools > DHCP

Expand your current server, expand “IPv4”, then expand your “Scope” and select “Scope Options”. If you don't already have an option set up called:

121 Classless Static Routes

Then add a new route as per this screenshot:


That’s it, now on your internal network machines, the next time they get a new IP address they will also obtain the static route information!

The other way in which you can add these routes (if you have servers or machines that do not get their network configuration from a DHCP server) is to add it manually using the terminal/command prompt.

On a Windows-based PC/Server the command you need to run is:

route add -p mask

This will add a persistent static route for the network (with its netmask) via the gateway address – the gateway here is our Windows Server 2012 R2 server, which is running the OpenVPN server software as well as our DHCP and DNS servers.

To test that the route has been added successfully use the following command to “print” out the routing table:

route print

Now test that the route is working by using an internal network machine to “ping” a connected VPN client by its IP address – pick one that is ping-able, as most client firewalls will block ICMP requests!!

Configure VPN clients to query our internal DNS servers

By default OpenVPN is configured to use a split-tunnel configuration, so client-side DNS settings will default to the ISP's DNS servers; because of this, internal server name resolution will fail (unless you are using a manually updated hosts file).

On my network I’m using Windows DNS services to manage DNS name resolution for all my internal servers and dynamic hostnames from DHCP leases.

Given that we have already added a static route to the internal network, we can now tell the OpenVPN clients to use our internal DNS server. We will also set the domain suffix and search suffix properties so that clients do not have to use the FQDN when attempting to locate hostnames.

Open up the server.ovpn file again as we did when we added the static routes and locate the following configuration block:

# Certain Windows-specific network settings
# can be pushed to clients, such as DNS
# or WINS server addresses. CAVEAT:
# The addresses below refer to the public
# DNS servers provided by
;push "dhcp-option DNS"
;push "dhcp-option DNS"

We will now add our internal DNS server (for external addresses, our DNS server is configured to forward requests to Google's external DNS servers) under the above configuration block:

# Use our internal DNS server
push "dhcp-option DNS"
# Custom Domain and Search Suffix
push "dhcp-option DOMAIN mydomain.local"
push "dhcp-option SEARCH mydomain.local"

Save the file and restart the service again and reconnect all VPN clients for the changes to take effect!

For your reference, you can see my server.ovpn example that is tested as working here.