Set up an Ubuntu 14.04 LTS (MATE) terminal server with LTSP

In this post I’m going to explain how to set up a terminal server that you can use at home or in the office. The terminal server (running LTSP) will enable you to run a central “mainframe” style network where a large number of “diskless clients” (also known as “dumb terminals” or “thin clients”) connect to the central server, which runs all the applications.

Once we have set up our terminal server, you can use any PXE-bootable device to connect to it: a normal desktop PC, a branded “thin client” device, or even a Raspberry Pi as a super cheap thin client solution!

There are various tutorials on the internet for setting up such an environment, but most of them force you to run DHCP on the terminal server itself. In this tutorial I will be configuring a DHCP proxy (dnsmasq) to enable our server to provide PXE boot information whilst still using our existing DHCP server configuration.

So let’s get started…

Installing the operating system

I’ve personally chosen to use the Ubuntu MATE distribution as I prefer the Gnome 2 style desktop environment, but this tutorial should work exactly the same for the standard Ubuntu distribution, Lubuntu, Xubuntu and potentially any other Ubuntu derivative!

So, I’ll assume now that you’ve installed the operating system and logged in using the account that you set up during installation!

At this point it would also be advisable to set a static IP address on your server!
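On Ubuntu 14.04 this is done by editing /etc/network/interfaces; here’s a minimal sketch, assuming your NIC is eth0 and your LAN uses 192.168.1.x addressing (all of these addresses are examples, substitute your own):

```
# /etc/network/interfaces - example static address for the terminal server
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```

Restart networking (or simply reboot) after making the change.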

Installing the LTSP server, dnsmasq and the TFTP server

We need to install the following software, which makes up the core of our setup; let’s run the following commands:

sudo apt-get update
sudo apt-get install ltsp-server dnsmasq tftpd-hpa

As you probably noticed, we are installing three pieces of software here. Firstly, we install the ltsp-server package (note that we’re NOT installing the ltsp-server-standalone package, as that would force the installation of a DHCP server on our terminal server and break our “existing” DHCP set-up). Secondly, we install dnsmasq, which will provide the DHCP proxy functionality that lets us keep using our existing DHCP server on the network whilst still providing the required PXE boot information. Last but not least, we install tftpd-hpa, which provides the TFTP functionality that allows our “thin clients” to download the boot images.

Though dnsmasq can do both DHCP and TFTP, a crucial bug has been identified when it functions as a TFTP server, specifically on Ubuntu 14.04, so we’ll be using the tftpd-hpa package to provide the TFTP service instead.

Building the “thin client” images

Now that the server has the packages installed, we need to generate the “thin client” bootable image. This image is based on the current terminal server’s settings and is then served to our “thin clients” via TFTP when they boot using PXE.

 sudo ltsp-build-client --arch i386

The above command will take some time as it downloads all the packages for this stripped-down Ubuntu-based “thin client” operating system and installs them under /opt/ltsp/i386/. It then compresses this into /opt/ltsp/images/i386.img; this is the actual image that will be served to the client via TFTP.

If you wish to build x64 client images (although these won’t work for older hardware or the Raspberry Pi), simply omit the ‘--arch i386’ from the above command and all other related commands in this post. You’ll also have to pay special attention to the rest of this post, as you’ll need to manually update paths in the configuration files.

It is worth noting that the client boot image will be based on your current Ubuntu version and the current state of packages in its repositories. Since this is merely a thin client operating system, it shouldn’t really matter if the image stays “old”, as long as it works. Remember, the software is actually running on your server, not the client!

Enabling DHCP proxy support in our image

The PXELINUX configuration does not ship with DHCP proxy support enabled by default, so we need to fix this by running the following command:

sudo sed -i 's/ipappend 2/ipappend 3/g' /var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default

Note: If you’ve chosen to build the 64bit version of the LTSP client images, you’ll need to replace the i386 directory with amd64.

It is also important to note that if you update your thin client images using the command below (in the Upgrading your “thin client” boot images section), you’ll have to re-run this command afterwards!

Creating the LTSP configuration for Dnsmasq

Next we need to create a new configuration file so that dnsmasq will provide the PXE boot information and serve our boot image using our new TFTP server.

Create the new configuration file by running this command:

sudo nano /etc/dnsmasq.d/ltsp.conf

…then add the following configuration (copy and paste) to the file:

########################################
# Dnsmasq running as a proxy DHCP
########################################

#
# TFTP
#
#enable-tftp
#tftp-root=/var/lib/tftpboot

#
# DHCP
#
dhcp-range=192.168.1.0,proxy
# Tell PXE clients not to use multicast discovery
# See section 3.2.3.1 in http://tools.ietf.org/html/draft-henry-remote-boot-protocol-00
dhcp-option=vendor:PXEClient,6,2b
# Better support for old or broken DHCP clients
dhcp-no-override
# Enable this for better debugging
#log-dhcp

#
# PXE
#
# Note the file paths are relative to our "tftp-root" and that ".0" will be appended
pxe-prompt="Press F8 for boot menu", 3
pxe-service=x86PC, "Boot from network", /ltsp/i386/pxelinux
pxe-service=x86PC, "Boot from local hard disk"

Before saving the file, ensure that you update the dhcp-range setting to match your network settings and, if applicable, the “i386” in the /ltsp/i386/pxelinux path and the “x86PC” value in the pxe-service entries at the bottom of this configuration file.
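As an example of what such tweaks look like, if your LAN used 10.0.0.x addressing and you’d built amd64 images, the two affected lines would become (the addresses here are assumptions, adjust for your own network):

```
dhcp-range=10.0.0.0,proxy
pxe-service=x86PC, "Boot from network", /ltsp/amd64/pxelinux
```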

The last thing we need to do now is restart the dnsmasq service using the following command:

sudo service dnsmasq restart

Great, we’re now ready to test it out… You can create a virtual machine in Oracle VirtualBox with the network adapter type set to ‘bridged’; start it up (with network boot enabled) and, after a few seconds, you should see the LTSP client login screen like so…

Ubuntu LTSP Client Login screen.

Upgrading your “thin client” boot images in future…

It is possible to upgrade this image to the latest Ubuntu packages (this will include updates for hardware drivers too). This is something that you might want to do once in a while, and for security reasons it is good practice. It’s simply a case of running the following commands from the terminal on your server:

# Regenerate the thin client boot image
sudo ltsp-update-image

# As mentioned above, we need to reset the PXELINUX DHCP proxy settings:
sudo sed -i 's/ipappend 2/ipappend 3/g' /var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default

For these changes to take effect, you’ll need to reboot your “thin clients” so that they download and use the new image.

Changed your server IP address after the initial installation of LTSP?

If after this point you change the IP address on the LTSP server, you need to update your SSH keys and rebuild your client image using the following commands (the first one updates our SSH keys and the following two are our standard “rebuild the client image” commands):

sudo ltsp-update-sshkeys
sudo ltsp-update-image
sudo sed -i 's/ipappend 2/ipappend 3/g' /var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default

…then reboot your “thin clients” for the changes to take effect!

Understanding how the “thin client” devices boot

The network boot process is a two-stage process, so in order to download, decompress and boot into the i386.img (or amd64.img, depending on your choice during installation) we need a network OS bootloader. Once our thin client has got its IP address from our existing network DHCP server, it receives TFTP information from our DHCP proxy (provided by our new terminal server) and then loads the network OS bootloader, which is provided by PXELINUX, a tiny Linux-based bootloader. The “ltsp-build-client” command that we ran above created it for us under /var/lib/tftpboot/ltsp/i386/, and it is approximately 20MB in size. So, the client fetches this pxelinux image directly over TFTP and executes it; PXELINUX then downloads the i386.img image file, decompresses it into a RAM drive, and boots.

Configuring and hosting Laravel 5.x applications on Windows Azure

Over the past three weeks, a colleague and I have been busy (both at work and with a lot of overtime at home) building a new light-weight CMS on top of the Laravel framework. This will power a new recruitment campaign website, which is to be hosted on the Windows Azure cloud platform (our corporate hosting strategy means moving as many sites and applications to Microsoft Azure cloud hosting as possible).

So anyway, today I had a play around with the Azure hosting platform, but with the very small amount of information available on the web it took me at least three hours of trial and error to get my head around how to get Laravel working on what is essentially a Windows environment, which is not something I normally deploy PHP applications/websites to.

I worked on getting our new application automatically deployed (from BitBucket) to Windows Azure Websites (PaaS), so I thought that now would be a good time to document it, in the hope that it may help others in future.

In this guide, I’ve cloned the vanilla Laravel repository here to a private BitBucket repository just to demonstrate how the automatic deployment via BitBucket works.

Configuring the web application instance

First things first: log in to the Azure management portal by visiting https://manage.windowsazure.com/

Once logged in, on the left side of your browser you will see a set of icons. Click on the ‘Web apps’ icon to display the list of web applications/sites you currently have configured, then click on the ‘New +’ link; a new modal window will open, where you can enter your application details like so:

Screenshot 2015-06-26 17.50.10

Once you click on the ‘CREATE WEB APP’ button at the bottom of the window, your application instance will be created and then you’ll be taken back to the ‘WEB APPS’ dashboard where your newly created application will be listed.

The next thing we’ll do is configure the application instance to use the latest version of PHP (5.6 at the time of writing). Click on the application name to bring up the application information section and then, from the top links, click ‘CONFIGURE’.

The following screen will then be displayed, click on PHP 5.6 as shown below:

Screenshot 2015-06-26 17.52.57

Continue scrolling down this page and now add the following environment variables:

Screenshot 2015-06-26 18.52.32

By default, ‘WEBSITE_NODE_DEFAULT_VERSION’ will already be set; simply ignore this. You’ll need to (at a minimum) add the following, with the values shown above in the screenshot:

  • SCM_REPOSITORY_PATH
  • SCM_TARGET_PATH

You’ll need to set the APP_KEY value to a valid 32 character encryption key (you can use the php artisan key:generate command to create one if you wish!), otherwise you’ll encounter a Laravel Encryption exception as demonstrated here.
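If you don’t have a local Laravel install handy to run artisan against, any random 32 character alphanumeric string will do; here’s one way to knock one up from a Unix shell (a sketch of my own, not an official Laravel tool):

```shell
# Generate a random 32-character alphanumeric string suitable for APP_KEY
APP_KEY=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
echo "$APP_KEY"
```

Paste the resulting string into the APP_KEY environment variable in the Azure portal.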

In my example above, I’ve also added APP_ENV and APP_DEBUG for testing purposes, but ideally these should be set appropriately in a production environment. You’ll also have a load more environment variables that you’ll want to add here (basically the ones that you would normally place in your .env file!)

Now, we’re going to update the default application root directory. By default your application will serve the root directory from ‘site\wwwroot’; we need to change this to ‘site\public’ as per this screenshot:

Screenshot 2015-06-26 17.54.03

After adding or modifying the environment variables and setting the virtual application directory path, ensure you SAVE your changes. It’s also a good idea to ‘RESTART’ the application to ensure that these environment variables are passed through and accessible to your Laravel application.

Adding IIS rewrite support

Out of the box, Laravel provides an Apache .htaccess file that routes all requests to the public/index.php file, which is the entry point to our application; this passes all requests to the routing layer, which then dispatches the required controller requests/responses etc.

In order for Laravel to work correctly, we now need to create a new file in our D:\sites\public directory (in your project you’ll want to commit this file to source control to save you having to do it manually in future). The file should be named ‘web.config’, and into it you should copy and paste the content of this GitHub Gist.
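For reference, the file is a standard IIS URL Rewrite configuration; a minimal Laravel rule looks something like the following (this illustrates the kind of rule the Gist contains; the Gist itself is the version you should use):

```
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Laravel" stopProcessing="true">
          <match url="^" ignoreCase="false" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

In plain terms: any request that doesn’t match a real file or directory gets rewritten to index.php, which is exactly what the Apache .htaccess does.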

Restart your web application for the changes to take effect, you’ll now find that your application request routing works the same as it does with Apache or your Homestead development VM.

Enabling Composer support

You should already know by now that you can access your site on a sub-domain of azurewebsites.net, in this example, I can access my test site at: http://bobbytest.azurewebsites.net

In order to enable Composer support for our application, we need to access the Kudu management panel. To do this we simply add ‘scm’ between our application name and the ‘azurewebsites.net’ part, so in this example it would be: http://bobbytest.scm.azurewebsites.net/
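In other words, the Kudu address follows a simple naming convention based on your app name; a quick sketch (bobbytest being my example app name):

```shell
# Kudu lives on the "scm" sub-domain between the app name and azurewebsites.net
APP_NAME="bobbytest"   # replace with your own web app name
KUDU_URL="http://${APP_NAME}.scm.azurewebsites.net/"
echo "$KUDU_URL"
```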

Screenshot 2015-06-26 17.58.25

When the Kudu management panel loads, you should click on ‘Site extensions’ from the top navigation bar…

Next, click on the ‘Gallery’ tab (as at present you won’t have any installed extensions in the ‘Installed’ tab), scroll down to find ‘Composer’ and then click on the blue ‘+’ button to install it.

Screenshot 2015-06-26 17.58.43

Great, now we can move on to configuring the automatic deployment of our application using Git.

We’ll come back to Kudu later as this provides us with CLI access to our hosting instance and enables us to run Composer and Artisan commands.

Configuring automatic deployment from BitBucket

From our web application dashboard, click ‘Setup deployment from source control’, which is situated under the ‘Integrate source control’ section as shown below.

Screenshot 2015-06-26 18.00.58

A modal window will then appear; this will allow you to choose your SCM hosting provider…

Screenshot 2015-06-26 18.01.11

Click on ‘BitBucket’ and then click the next button.

Next you will have to authorise your BitBucket account and then choose the repository that you wish to deploy from; you’ll also have the option to choose the deployment branch.

For this test I’ve simply used ‘master’, but I would highly recommend using a dedicated deployment branch. In our configuration I plan to have both ‘deploy-production’ and ‘deploy-uat’ branches, which can then be used to push-deploy once we are happy with the code state of the ‘master’ branch.

Screenshot 2015-06-26 18.01.29

Clicking on the finish button will start an automatic deployment of your code to your application instance, as shown here:

Screenshot 2015-06-26 18.01.50

Once deployment has finished, the screen will change to the following, showing the latest commit message and author details etc.

Screenshot 2015-06-26 18.05.26

Any new code pushed to the ‘master’ branch will now trigger an automatic deployment of the code (including a “composer install”).

If you’ve deployed a clone of the vanilla Laravel codebase, you should now be able to view your site using your application URL; in this example it would be: http://bobbytest.azurewebsites.net.

If you need to run database migrations, however (i.e. your application uses a database), then you’ll need to access the CLI to run those first, which we’ll cover now…

Running artisan tasks and using the CLI

Using the Debug Console in Kudu we can execute CLI commands: we can check the version of PHP that is running, run artisan console commands, and run composer commands etc.

Let’s now jump back to the Kudu management panel (eg. http://bobbytest.scm.azurewebsites.net) and bring up the CLI: click on ‘Debug Console > CMD’ from the top navigation bar and let’s run a few example commands to check that everything is working as required:

Screenshot 2015-06-26 18.06.59

Change (cd) into the ‘D:\home\site’ directory, which will then enable you to execute artisan commands and general command prompt commands (eg. copy, move etc.)

As per my screenshot above, you can type ‘php -v’ to display the current version of PHP that is running; you can also call your artisan console commands and composer commands as required.

That’s pretty much it….

Well, hopefully this post helped you get your Laravel 5.x application hosted on Windows Azure. Feel free to comment on this post and ask any questions you may have!

Setting up and running a server with Ubuntu Server 14.04 on Raspberry Pi 2

The Raspberry Pi is something that I’ve always been interested in, but I never really got around to actually ordering one… until this week!

In February the Raspberry Pi Foundation released the Raspberry Pi 2. This model sports a quad-core, 900MHz ARMv7 processor with 1GB of RAM, a massive improvement on the previous models, so I thought, why not order one now? It’s far more powerful and will easily be able to run a web server and various other tasks relatively fast!

So this morning my bits came, and in under an hour I had assembled the Pi 2 in the fancy case that I bought for it too. The parts that I ordered and received are as follows:

  • 1x Raspberry Pi 2
  • 1x FLIRC case
  • 1x 5v 2A Micro-USB charger
  • 1x SanDisk Ultra 32GB MicroSD (Class 10) Card (Plus MicroSD to SD adapter)

Originally I really wanted to run FreeBSD 10.1 on it, but after doing some research online I found that FreeBSD is not yet compatible with the RPi2, which was a little disappointing. I knew I had my trusty Ubuntu Server instead, so thought I’d use that!

Downloading the Ubuntu Server 14.04.2 LTS image for ARMv7

You can download the Ubuntu Server 14.04.2 LTS image that is compatible with the Raspberry Pi 2 using this link.

The default username and password for this image is “ubuntu” and “ubuntu”!

Installing the image on to your MicroSD card is covered on the Raspberry Pi website, so instead of repeating what they’ve already documented, see these links for copying the image to your MicroSD card:

Once you’ve formatted and copied the image to the MicroSD card (as documented in the links provided above) it’s time to insert the MicroSD card into your Pi2 and start it up…

Logging in for the first time

Once you have booted the Pi using the Ubuntu Server 14.04 image that we flashed to the MicroSD card, we should now be able to log in; the default username and password is: ubuntu / ubuntu

It’s now a good idea to change this password; to do that, type:

passwd

Then enter your new password!

I’ve personally set a new ‘root’ password, created a new user account and deleted the old ‘ubuntu’ account as a safety precaution; I recommend you do the same!

Install NTP Client

The Raspberry Pi/Pi 2 does not have a real-time clock, so unless NTP is used to set the time on boot-up, the Raspberry Pi 2 will start its clock from Jan 1st 1970 each time. To avoid this, I’ll now install the NTP (Network Time Protocol) client to ensure that each time the Pi boots up it connects to the web to set its time automatically…

sudo apt-get install ntp

You will also find that, by default, the Ubuntu Server image that we are using is set to use UTC (Coordinated Universal Time). I’m now going to set this to my actual timezone (BST), so run the following command to reconfigure your server’s current timezone:

sudo dpkg-reconfigure tzdata

Now, once you’ve set the new timezone, test and ensure that the correct date shows when you run:

date

Excellent stuff… let’s move on!

Installing OpenSSH server

Ideally, we want to run the Pi without a keyboard, mouse or monitor. There is no need for me to keep a keyboard and mouse attached as all configuration can be done remotely using SSH.

I plan to run the RPi2 completely headless, with the only two cables connected being power and Ethernet!

So to do this, we’ll install OpenSSH server like so:

sudo apt-get install openssh-server

Although I don’t recommend it, if you want to enable ‘root’ user logins over SSH then you should edit the SSH server configuration file here: /etc/ssh/sshd_config.

Installing some useful tools

Personally I like using htop; it provides a nice display of running processes and memory utilisation. You can install it like so:

sudo apt-get install htop

Expanding the available disk space

The image we’re using is sized for a 4GB flash card. If, like me, you are using a card larger than 4GB, it is a good idea to expand the partition so we can utilise the entire card; luckily this can be easily achieved by following these simple steps:

sudo fdisk /dev/mmcblk0

You’ll then be presented with a ‘Command’ prompt; press the following keystrokes in order to delete the second partition:

  1. d
  2. 2

Now you need to re-create it; by recreating it, it will then use all the available space. So now press the following keystrokes at the ‘Command’ prompt:

  1. n
  2. p
  3. 2
  4. [enter]
  5. [enter]

Finally, to save and write the changes to disk and then exit out of the fdisk tool, press the following key at the ‘Command’ prompt:

  1. w

Great stuff, we’re done with fdisk now!
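Incidentally, if you ever want to script this (say, to prepare several cards), the same keystrokes can be piped straight into fdisk. The snippet below is a sketch, and the fdisk line itself is left commented out, as you should triple-check the device name before running anything destructive:

```shell
# The keystroke sequence from the steps above, one per line:
# d, 2 (delete partition 2); n, p, 2, <enter>, <enter> (recreate it
# using all the free space); w (write the changes and exit)
FDISK_KEYS='d\n2\nn\np\n2\n\n\nw\n'

# To apply it for real (dangerous - check the device name first!):
# printf "$FDISK_KEYS" | sudo fdisk /dev/mmcblk0
printf "$FDISK_KEYS" | wc -l   # sanity check: 8 lines of input
```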

Now quickly reboot the Pi using this command:

sudo shutdown -r now

Once logged back in, we need to complete the resize of the partition by running this command:

sudo resize2fs /dev/mmcblk0p2

Adding SWAP space

The image is not set up with any swap space; if you would like to add some, you can install a handy tool that will automatically configure a default swap file for you (~2GB by default, but you can change this if you wish).

To install it like I did, run the following command:

sudo apt-get install dphys-swapfile

Woohoo, you are now ready to install whatever server software you require! I won’t cover installing a web server here as I’ve already covered this in some of my other blog posts, and it will be the same regardless of running on a Raspberry Pi 2!

Set up your own Minecraft server on FreeBSD 10.1

My daughter (Ruby) has been bugging me to get her Minecraft, so I thought I’d set her up with her own server that we could both play on together. I’ve compiled this blog post as a quick tutorial on how to set up a Minecraft server on FreeBSD 10.1; hopefully others will find it useful and easy to follow!

If you don’t already have a server, you can rent one from DigitalOcean. They’re $5 a month and I could not recommend them enough; the servers from DigitalOcean all come with SSDs as standard and run Minecraft servers perfectly! If you need one, go get one now and then carry on with this tutorial!

Installing the server

Firstly, ensure that you have a FreeBSD 10.1 server set up and ready to go. Let’s
first install a couple of packages that we will need:

pkg install screen nano wget

Next we’ll install OpenJDK. You may wish to use the Oracle JDK, but I’m not
covering that in this post, so let’s install OpenJDK like so:

pkg install java/openjdk7

Fantastic! – Let’s just check that it is installed correctly by checking the version
number like so:

java -version

As long as you see some OpenJDK Runtime information then it’s looking good and we can move on!

Let’s now download Minecraft to our server and put it into a separate directory,
in case we wish to run more than one game server on our server 🙂

cd ~
mkdir minecraft_server_1
cd minecraft_server_1
wget https://s3.amazonaws.com/Minecraft.Download/versions/1.8.4/minecraft_server.1.8.4.jar --no-check-certificate

So now we have a folder in our home directory called ‘minecraft_server_1’, and in there we have the main Minecraft server application.

Now we have to accept the EULA; simply run this command to ‘accept’ it:

echo "eula=true" > eula.txt

Now we create the main start-up script for this server; run this:

echo "java -Xmx1024M -Xms1024M -jar minecraft_server.1.8.4.jar nogui" > startup.sh
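That one-liner does the job, but as a variation of my own (not from the official Minecraft docs) you can write a slightly more robust script that cd’s to its own directory first, so it can be launched from anywhere:

```shell
# Create a start-up script that always runs from its own directory
cat > startup.sh <<'EOF'
#!/bin/sh
cd "$(dirname "$0")"
# 1GB heap, headless
exec java -Xmx1024M -Xms1024M -jar minecraft_server.1.8.4.jar nogui
EOF
chmod +x startup.sh
```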

Now we’ll create a new user on the server named ‘minecraft’; this user will be used for running all our Minecraft server instances. Create the new user using this command:

adduser minecraft

Then let’s update the file ownership like so:

chown -R minecraft:minecraft .

You can now start your server by running:

screen su -m minecraft -c "sh startup.sh"

If you wish to run multiple servers, you can add a server-port=XXXX setting to the server.properties file to enable you to run multiple game servers on the same physical server.
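For example, a second instance’s server.properties might contain (25566 is just an arbitrary free port I’ve picked for illustration):

```
# server.properties for a second instance
server-port=25566
```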

Congratulations, you should now be able to access your new Minecraft server running on the awesomely stable FreeBSD OS!

Using Supervisor to automatically start and keep your Minecraft server running

I noticed after a couple of days that every now and again the Minecraft server was crashing whilst no one was connected to it, which meant that I had to log into the server via SSH and restart it.

I’ve used Supervisord in the past and it seemed like a good idea in this situation too; the result will be that if the server reboots or the game server crashes, Supervisor will restart the Minecraft server for us automatically…

So lets install it now:

sudo pkg install py27-supervisor

Now we need to enable it by adding it to the rc.conf file; do this using the following command:

sudo sh -c 'echo supervisord_enable=\"YES\" >> /etc/rc.conf'

Now we have to add our Minecraft server configuration to the supervisord.conf file so that Supervisor knows what it needs to execute and how to handle the task etc.

So, edit /usr/local/etc/supervisord.conf and add the following content (obviously make path changes as required):

[program:minecraft_server_1]
command=/usr/local/openjdk7/bin/java -Xmx1024M -Xms1024M -jar minecraft_server.1.8.4.jar nogui
directory=/root/minecraft_server_1
user=minecraft
autostart=true
autorestart=true
stdout_logfile=/root/supervisor_logs/minecraft_server_1/stdout.log
stderr_logfile=/root/supervisor_logs/minecraft_server_1/stderr.log

Now let’s just create our supervisor logs directory like so:

mkdir -p /root/supervisor_logs/minecraft_server_1/

Excellent, now let’s start the service like so:

service supervisord start

That’s great; your Minecraft server should now be running and is managed via Supervisor!

In future you can check the status of your Minecraft server(s) using the supervisorctl tool like so:

supervisorctl

You can start, stop and restart your Minecraft servers using:

stop minecraft_server_1
start minecraft_server_1
restart minecraft_server_1

Have fun and hope this was helpful!

Configure a FreeBSD 10.1 server for hosting Laravel 5.x applications.

In this post I’ll document the installation of a web application hosting environment using FreeBSD 10.1; I will install the core components that will enable you to host a Laravel 4.x/5.x application.

In this tutorial I will be using FreeBSD’s pkg tool to install the packages. pkg is similar to the package management tools that you find on Ubuntu (aptitude) and CentOS (yum), and enables us to install and upgrade packages much faster than compiling from source.

If you are considering running a FreeBSD server in the cloud, I would highly recommend DigitalOcean – you can get a FreeBSD server for $5 a month which runs on quality hardware with SSD disks as standard! I personally host several of my own servers with them and have found them to be great not only for customer service, speed and availability, but also bang on price!

Installing a few tools and utilities before we begin…

Everyone comes to have their favourite shell, editor and set of utilities, and I’m no different… In this first section I’m just going to set up my favourite set of utilities and tools that I use on a regular basis!

Setting the correct server timezone

First up, it’s a good idea to ensure that your server’s timezone is correctly set. You can check your server’s current date and timezone by running:

date

If this is incorrect, you can use the tzsetup tool to set the correct timezone; run the following command and follow the prompts…

sudo tzsetup

Configure NTP for automatic time updates

It’s also a good idea to enable NTP to ensure that our server’s system clock is kept up to date automatically; let’s do this like so:

sudo sh -c 'echo ntpd_enable=\"YES\" >> /etc/rc.conf'
sudo sh -c 'echo ntpd_sync_on_start=\"YES\" >> /etc/rc.conf'

…and we’ll start the service now (instead of waiting for the first reboot)…

sudo service ntpd start

That’s great – we can now rest assured that our server will attempt to keep its system time up to date automatically!

Let’s move on…

Installing BASH

I personally prefer BASH instead of tcsh; nothing else really feels ‘right’ to me, so I’ll install BASH like so:

sudo pkg install bash

Then, as mentioned in the post-installation message for BASH, ensure that you add the fdescfs mount point by adding the following line to /etc/fstab; we can do this automatically like so:

sudo sh -c 'echo "fdesc /dev/fd fdescfs rw 0 0" >> /etc/fstab'

Excellent – Now we’ll remount everything like so…

sudo mount -a

Then set our shell to use BASH:

sudo chsh -s /usr/local/bin/bash {your_username_here}

With that now done, you’ll just need to log out and back in again for the changes to take effect!

Installing Nano

I personally find that Nano as a CLI editing tool is perfect for my requirements and doesn’t give me any issues when using my MacBook Pro with its absence of the ‘Insert’ key (yes, even the ‘i’ key doesn’t work consistently for me either!)

I’m used to using Nano and personally like it very much over the likes of vi, emacs etc.

So I’ll simply install it like so:

sudo pkg install nano

Now it’s installed; if you type:

nano

You should now see Nano on your server – excellent stuff. Press Ctrl+X and let’s move on…

Installing Wget (and cURL)

Wget is a great tool for downloading files from the internet from the console. Some people prefer cURL; personally I use Wget, but it may just be worth installing both…

sudo pkg install wget curl

The reason that I’ve installed both is that it’ll probably save you hassle in the long run, as sometimes installation scripts that you use from the web expect one or the other – such as the Composer one that we’ll run later on in this post!

Install Git Client

As a software developer, and given that I deploy 99% of my code to production servers using Git nowadays, having the Git client installed on my web server is very important, as otherwise deploying later versions of my code would be a hell of a lot of hassle.

sudo pkg install git

Now give it a little test by checking the version number that was just installed:

git --version

Excellent stuff – although technically not required (if you’re just pulling code), you can set your Git username and email address like so:

git config --global user.name "Bobby Allen"
git config --global user.email ballen@bobbyallen.me

Great stuff… let’s move on…

Setting a root password

If you’re using a FreeBSD VPS from DigitalOcean, you’ll find that the root password has not yet been set (FreeBSD by default does not allow root access over SSH too, btw), so DigitalOcean provide a ‘freebsd’ account instead…

To set the ‘root’ password you can use this command:

sudo passwd

Excellent stuff; if you need to access and run commands exclusively as root (rather than just sudo’ing), you can switch user to the root account like so:

su -

Installing Nginx

I personally prefer Nginx for hosting Laravel applications; Nginx is faster and has a much smaller resource footprint than the Apache HTTPd server, so I’ll install Nginx as the web server…

sudo pkg install nginx

Then enable it…

sudo sh -c 'echo nginx_enable=\"YES\" >> /etc/rc.conf'

Now start the service…

sudo service nginx start

Try accessing your server’s IP address and you should get the Nginx ‘Welcome’ page; assuming all went well, let’s move on and install PHP…

Installing PHP

At the time of writing PHP 5.6 is the latest stable release, we’ll now install this as well as all of the modules required by Laravel 5.x:

sudo pkg install php56 php56-gd php56-hash php56-phar php56-ctype php56-filter php56-iconv php56-json php56-mcrypt php56-curl php56-mysql php56-mysqli php56-pdo_mysql php56-sqlite3 php56-pdo_sqlite php56-tokenizer php56-readline php56-session php56-simplexml php56-xml php56-zip php56-zlib php56-openssl openssl

We now need to copy the php.ini-production file out to php.ini to give us a working configuration file (based on the production default settings):

sudo cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini

To ensure that OpenSSL works (Composer, amongst other things, will require this when accessing files over the web using SSL) we need to download a cacert.pem file and configure PHP with its path…

Firstly however we’ll download the latest cacert.pem file from the Curl website and ensure we place it in the /etc/ssl directory like so:

cd /etc/ssl
sudo wget http://curl.haxx.se/ca/cacert.pem

Let’s now edit the php.ini file and set the path so that PHP uses the correct file:

sudo nano /usr/local/etc/php.ini

Change this line (normally found near the very bottom of the PHP.ini file):

;openssl.cafile=

to (we’ve set the path to the cacert.pem file we downloaded and uncommented the option so that it becomes enabled):

openssl.cafile=/etc/ssl/cacert.pem

Save the file and we’re done!

You may also wish to update other settings in this file, such as file upload sizes, the time zone etc., whilst editing it!
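For example, common tweaks look something like this (the values here are purely illustrative – set them to whatever suits your applications):

```ini
; Illustrative php.ini tweaks -- adjust to taste:
upload_max_filesize = 32M
post_max_size = 32M
date.timezone = Europe/London
```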

Excellent stuff! Check that PHP is installed and the modules are available; to do this you can run PHP from the CLI with the -m argument…

php -m

Now that we’ve got PHP installed we now need to configure PHP-FPM to work with Nginx so that the PHP script engine handles PHP requests…

Configuring PHP-FPM

We now need to “glue” Nginx and PHP together. PHP-FPM stands for PHP FastCGI Process Manager and is the SAPI module that we’ll use for this installation.

As we’ve already installed PHP, PHP-FPM should have been installed automatically by default, so we can now enable the PHP-FPM service like so:

sudo sh -c 'echo php_fpm_enable=\"YES\" >> /etc/rc.conf'

and then start the service…

sudo service php-fpm start

We now need to configure Nginx and the PHP-FPM socket side of things…

sudo nano /usr/local/etc/php-fpm.conf

Now we are going to make a few changes in this file. Currently PHP-FPM is listening on a TCP port, but to improve performance we’ll configure it to use a UNIX socket instead (bypassing the TCP layer). First of all, locate the following line:

listen = 127.0.0.1:9000

and change it to:

listen = /var/run/php-fpm.sock

Next we’ll uncomment the listen.owner, listen.group and listen.mode settings, so locate this block and remove the comments (the ‘;‘ from the start of these lines):

;listen.owner = www
;listen.group = www
;listen.mode = 0660

Great, save the changes to the file!
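For reference, on the Nginx side a virtual host then passes PHP requests to that socket with a block along these lines (the vhost template we download later in this post should already do something similar for you):

```nginx
location ~ \.php$ {
    fastcgi_pass   unix:/var/run/php-fpm.sock;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;
}
```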

One more thing we need to do now is add a quick “security fix” to ensure that PHP does not try to execute parts of a path if the requested file does not exist.

So we need to edit our php.ini file like so:

sudo nano /usr/local/etc/php.ini

and locate this line:

;cgi.fix_pathinfo=1

change it to the following (uncomment it by removing the ‘;‘ at the start of the line and set its value to ‘0’) like so:

cgi.fix_pathinfo=0

Save the file and then restart PHP-FPM for the changes to take effect!

sudo service php-fpm restart

Great, we’re done here now!

Installing Composer

Composer is a package manager for PHP and is used heavily by Laravel for dependency and package management…

curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/bin
sudo mv /usr/bin/composer.phar /usr/bin/composer

Now test it out by running…

composer --version

All going well you should see the installed Composer version number!

Installing MySQL Server

Next we’ll install MySQL…

sudo pkg install mysql56-server mysql56-client

Then enable it…

sudo sh -c 'echo mysql_enable=\"YES\" >> /etc/rc.conf'

Now start the service…

sudo service mysql-server start

Test that the server is running by logging into MySQL like so:

mysql -u root

If you see the MySQL CLI prompt all should be good! – Simply type ‘quit‘ to exit and return to your shell!

MySQL, by default, is installed without a root password (for access via localhost at least); it is however recommended that you secure your installation by running the mysql_secure_installation script like so:

sudo mysql_secure_installation
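Once secured, you’ll typically want a dedicated database and user for each application rather than connecting as root. The database name, user name and password below are placeholders – substitute your own (this is standard MySQL 5.6 syntax, not something specific to this setup):

```sql
-- Hypothetical application database and user:
CREATE DATABASE myapp CHARACTER SET utf8 COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON myapp.* TO 'myapp_user'@'localhost' IDENTIFIED BY 'ChangeMe!';
FLUSH PRIVILEGES;
```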

Once you’ve done that we’ll move on…

Installing Redis

For a lot of my applications I like to use Redis for caching, so I’ll install it now like so:

sudo pkg install redis

As with the other services, we now need to add this to the /etc/rc.conf file and start the service, so let’s add it to the file like so…

sudo sh -c 'echo redis_enable=\"YES\" >> /etc/rc.conf'

Now we start the service like so:

sudo service redis start

Now we’ll test that our Redis server is running; let’s enter the Redis CLI tool like so:

redis-cli

Now we can run a couple of commands to check that Redis is responding as we would expect; try a PING and a quick SET/GET and you should get the expected replies back.
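For example, a quick sanity-check session in redis-cli might look like this (the key name is arbitrary; PONG and the stored value coming back confirm the server is healthy):

```
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> set mykey "Hello"
OK
127.0.0.1:6379> get mykey
"Hello"
127.0.0.1:6379> quit
```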

Installing Beanstalk

Beanstalk is a message queue service that is useful when using queues in your Laravel applications.

Let’s install it now…

sudo pkg install beanstalkd

Now lets enable the service like we did previously by adding it to the /etc/rc.conf file like so:

sudo sh -c 'echo beanstalkd_enable=\"YES\" >> /etc/rc.conf'

Now lets start the service by running:

sudo service beanstalkd start

Done – Beanstalkd is now running!
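To point a Laravel 5 application at it, the stock configuration reads the queue driver from your .env file, so (assuming the default config/queue.php, which connects to beanstalkd on localhost) something like this should be all you need:

```
QUEUE_DRIVER=beanstalkd
```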

Installing and configuring a firewall

You may now wish to lock down your new web application server with a software firewall – I definitely recommend you do this now, and I have another post demonstrating how to configure such a firewall on FreeBSD.

Read the post about setting up a Web server firewall on FreeBSD…

Setting up your application

As you probably already know, Nginx much like other web servers is capable of hosting multiple web sites and applications in the form of ‘Virtual Hosts’.

As you may wish to set-up multiple applications/websites on your server we’ll now set-up a virtual-host configuration template and set up a standardised directory structure for your site/application data and log files.

First of all we’ll create some new directories:

sudo mkdir /var/webapps
sudo mkdir /var/log/nginx
sudo mkdir /usr/local/etc/nginx/sites-enabled/
sudo mkdir /usr/local/etc/nginx/sites-available/

These directories give us a common location under which all web applications will sit and be hosted from, a directory for the Nginx access and error logs, and a place to store and enable our Nginx virtual host configuration files for our sites/applications.

In order for Nginx to load our site configurations we need to make a few changes to the main Nginx configuration file, so lets open the file now:

sudo nano /usr/local/etc/nginx/nginx.conf

Firstly, update the worker_processes value to match the number of CPU cores your server has. If you don’t know this off the top of your head, you can detect it on FreeBSD by running:

sysctl hw.model hw.machine hw.ncpu

…the specific information you need is the value of hw.ncpu but the other information that is displayed might be useful to know too!

Next we need to include our virtual host configuration files; we’ll therefore add these lines inside the http block (just before its closing brace ‘}’):

server_names_hash_bucket_size 64;
include /usr/local/etc/nginx/sites-enabled/*;

Great stuff, now save the file and we can move on!
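Putting both changes together, an abridged nginx.conf would look something like this (only the lines we touched are shown – leave the rest of the stock file as it is):

```nginx
worker_processes  4;   # match your hw.ncpu value

http {
    # ... stock settings unchanged ...

    server_names_hash_bucket_size 64;
    include /usr/local/etc/nginx/sites-enabled/*;
}
```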

At this point it’s worth just checking that our Nginx configuration file(s) are still OK and we haven’t introduced any configuration errors; let’s run:

sudo nginx -t

So now unless you have any errors reported by Nginx we’ll move on…

In order to speed things up in future I like to use a template and then just copy it and replace the main parts (e.g. path to the application, domain name etc.), so for speed and simplicity download this template file (nginx_laravelapp_freebsd.conf) and upload its contents to the server in this file:-

sudo nano /usr/local/etc/nginx/sites-available/webapp.tpl

Great! The template is configured specifically for hosting Laravel 4.x and 5.x web applications, but once you’ve copied it for your application you can tweak it as much as you like; for example, you could use it to host a WordPress blog too (just remove the ‘/public’ directory from the end of the ‘root’ directory path setting).

Deploying web sites/applications

Now that the core server is set up, every time you wish to deploy a new web site/application you can follow these steps:

1) Create two new directories, these will be for the web site/application files and the virtual host log files:

sudo mkdir /var/webapps/{APPNAME}
sudo mkdir /var/log/nginx/{APPNAME}

2) Copy the virtual host template file into the sites-available directory so that you have a customisable copy of the template that you can configure to serve your application at a given domain:

sudo cp /usr/local/etc/nginx/sites-available/webapp.tpl /usr/local/etc/nginx/sites-available/{APPNAME}.conf

3) Enable the web application (to be served by Nginx)…

sudo ln -s /usr/local/etc/nginx/sites-available/{APPNAME}.conf /usr/local/etc/nginx/sites-enabled/{APPNAME}.conf

4) You now need to edit the virtualhost file to configure site/application specific variables, edit the file by running:

sudo nano /usr/local/etc/nginx/sites-available/{APPNAME}.conf

…simply replace the following place-holders with your application/site specific settings:

@@APPNAME@@ – This should be the same as the application folder name that you created under /var/webapps/, e.g. ‘testapp‘
@@DOMIN@@ – This should be all hostnames (separated by spaces) that the application will respond to, e.g. ‘mytest.com www.mytest.com‘
@@ENVIROMENT@@ – e.g. ‘production‘

5) Restart Nginx so it starts serving the new site/application:

sudo service nginx restart

Congratulations, your site should now be running on Nginx – If you wish to configure and host other sites/applications in future on this server simply re-run steps 1 to 5!
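Steps 1 to 5 could also be wrapped up in a small (hypothetical) helper function. The directory variables default to the paths used in this post but can be overridden, which also lets you dry-run it as a non-root user:

```shell
# Wraps steps 1-3 of the deployment procedure above; after running it,
# edit the new .conf file (step 4) and restart Nginx (step 5).
newapp() {
    app=$1
    web_root=${WEB_ROOT:-/var/webapps}
    log_root=${LOG_ROOT:-/var/log/nginx}
    nginx_etc=${NGINX_ETC:-/usr/local/etc/nginx}

    mkdir -p "$web_root/$app" "$log_root/$app" &&
    cp "$nginx_etc/sites-available/webapp.tpl" \
       "$nginx_etc/sites-available/$app.conf" &&
    ln -s "$nginx_etc/sites-available/$app.conf" \
          "$nginx_etc/sites-enabled/$app.conf" &&
    echo "Now edit $nginx_etc/sites-available/$app.conf and restart Nginx"
}
```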

File and configuration paths

I figured that it may be worth me just jotting down the key configuration files and their locations (may save you hunting around for them later):

  • Main Nginx Configuration file: /usr/local/etc/nginx/nginx.conf
  • Nginx virtual host configurations:  /usr/local/etc/nginx/sites-available
  • Main PHP.ini file: /usr/local/etc/php.ini
  • PHP extensions configuration: /usr/local/etc/php/extensions.ini
  • PHP-FPM configuration file: /usr/local/etc/php-fpm.conf
  • MySQL configuration file: /usr/local/my.cnf
  • Sudoers configuration file: /usr/local/etc/sudoers
  • SSHd configuration file (great for changing default port etc.): /etc/ssh/sshd_config
  • CA Certificate file (as used by OpenSSL): /etc/ssl/cacert.pem
  • Laravel 4/5 Nginx virtual-host template file:  /usr/local/etc/nginx/sites-available/webapp.tpl
  • Your web site/application data: /var/webapps
  • Your web site/application log files: /var/log/nginx

Configuring a simple web server firewall on FreeBSD 10.1

FreeBSD comes shipped with three software firewalls, but personally I found that IPFW is pretty easy to configure and gets the job done nicely.

This quick post will help you configure your FreeBSD server protected with the IPFW firewall in just a few minutes.

First of all we need to enable the firewall, we do this by adding the following lines to our /etc/rc.conf file:

firewall_enable="YES"
firewall_quiet="YES"
firewall_type="workstation"
firewall_myservices="22 80 443"
firewall_allowservices="any"
firewall_logdeny="NO"

As you can see from the code snippet above, we’re using a base template (workstation) that provides some sane defaults, and then we are allowing specific incoming TCP ports: 22 (SSH), 80 (HTTP) and 443 (HTTPS). If you require other ports, simply add them there!

You may wish to log any denied requests; if so, simply set ‘firewall_logdeny=”YES”‘. As I don’t really want the added disk IO, I’ve disabled that in the example above.

Once done, save the changes and then we can start the firewall using the following command:

sudo service ipfw start

For a web server that really is only going to be serving web traffic over HTTP and HTTPS, this configuration is a good starting point!

More information about IPFW and advanced configuration options can be found here: https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/firewalls-ipfw.html

Get nearest places (rows) from MySQL based on a Latitude and Longitude co-ordinate

I’ve recently been developing a C# desktop application for Microsoft Windows that integrates with Microsoft Flight Simulator and Lockheed Martin’s Prepar3d – the software is designed to monitor the landing rate of a user’s aircraft upon landing and then displays this information to the user to help them perfect their landings whilst flying in the simulated environment.

Anyway, the application gives the user the ability to upload their landing statistics, including their aircraft type, heading and the latitude and longitude of the aircraft upon touch-down, to this website.

The website displays a map and plots the landing on the globe; as well as this, I wanted to determine the closest airport (within 3 km) in order to work out which airport they’ve landed at.

So the scenario is that I have a large database table of ~24,500 airports containing their name, ICAO code and of course their lat/long co-ordinates; I also know the latitude and longitude of the user’s aircraft at touchdown, so with this data I want to return the closest airport (or airports) for use in the application.

So although my use-case is specific to returning the closest airport this can of course be used in any scenario where you want to return the closest location to a given lat/long point such as a user’s/site visitors location, so I thought I’d share the MySQL code that I used to do this with….

-- Set your users current location in LAT/LONG here
set @lat=51.891648;
set @lng=0.244799;

-- The main SQL query that returns the closest 5 airports.
SELECT id, icao, lat, lng, 111.045 * DEGREES(ACOS(COS(RADIANS(@lat))
 * COS(RADIANS(lat))
 * COS(RADIANS(lng) - RADIANS(@lng))
 + SIN(RADIANS(@lat))
 * SIN(RADIANS(lat))))
 AS distance_in_km
FROM airports
ORDER BY distance_in_km ASC
LIMIT 0,5;

You can simply copy and paste this, replace the MySQL variables at the top of the SQL with what in effect could be your user’s current location, and have it query a table such as ‘shops’ to return (and allow your users to see) your closest shops to their current location.

The above SQL example will only return the top 5 closest airports but you can obviously remove or adjust the last line “LIMIT 0,5;” to change this.
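As mentioned earlier, my own use case only cared about airports within 3 km. MySQL lets you filter on the computed column alias with a HAVING clause (even without a GROUP BY), so a variant like this would do it:

```sql
-- Closest airport within 3 km of the user's position (if any):
SELECT id, icao, lat, lng, 111.045 * DEGREES(ACOS(COS(RADIANS(@lat))
 * COS(RADIANS(lat))
 * COS(RADIANS(lng) - RADIANS(@lng))
 + SIN(RADIANS(@lat))
 * SIN(RADIANS(lat))))
 AS distance_in_km
FROM airports
HAVING distance_in_km <= 3
ORDER BY distance_in_km ASC
LIMIT 1;
```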

I hope this was useful to others!

How to host a Laravel (4 and 5) application on a VestaCP account.

I wanted to try out VestaCP this week as I’ve heard good things about it and, as Sentora is currently missing DKIM signing and various other nice things that Vesta has (and I’ve not got the time to implement them), I thought I’d test it out.

Being primarily a PHP developer, my go-to framework of choice for most of my applications these days is Laravel. I have a number of virtual machines that I host my sites and applications on; normally I use my “Conductor” tool to host dedicated applications on one of my DigitalOcean VM‘s, but I wanted to see just how easy it was to get a Laravel 4 (and a Laravel 5) application hosted and running on Vesta.

Firstly, as you’ll soon notice, Vesta serves each website from a folder named ‘/web/yourwebsite.com/public_html’; this is where the web server will serve your web requests from and it is accessible to the public. Knowing this, we need to separate the contents of Laravel’s /public directory from the rest of the application files; my solution is as follows…

  1. Upload the contents of your Laravel’s public directory into the /web/yourwebsite.com/public_html directory.
  2. Open up  /web/yourwebsite.com/public_html/index.php and change this line:
    require __DIR__ . '/../bootstrap/autoload.php';

    to

    require __DIR__ . '/../private/app_data/bootstrap/autoload.php';

    and

    $app = require_once __DIR__. '/../bootstrap/start.php';

    to

    $app = require_once __DIR__ . '/../private/app_data/bootstrap/start.php';

    Save the file and let’s move on…

  3. Create a new directory called ‘app_data‘ under /web/yourwebsite.com/private.
  4. Upload ALL files and folders from your Laravel project’s root directory except the ‘public’ directory.
  5. Edit the paths.php file found under /web/yourwebsite.com/private/app_data/bootstrap and change this line:
    'public' => __DIR__ . '/../public',

    to

    'public' => __DIR__ . '/../../public_html',

    Save the changes!

If all went well you should now see your Laravel application running just fine!

Remember, if the page is blank or you’re getting the ‘Something went wrong’ message, check that you have the correct permissions set and that you’ve updated any database connection details etc.

I’ve also noticed that it’s best to set the ‘url‘ option correctly inside your app.php file, like so:

'url' => 'http://yourdomain.com',

Without this I had some issues with some pages appearing when using the HTML and URL helper classes within Laravel.

Enabling anti-aliased fonts in Netbeans

Having recently switched back to Linux on my main development machine I’ve been using my trusty IDE, Netbeans. I noticed that on Microsoft Windows and Apple OSX Netbeans’ text is anti-aliased by default, but this is not the case on my current Linux distribution (Ubuntu 14.04).

So this is a quick post to demonstrate how to enable this on Linux (I assume it will work for most distributions at least!)

It’s very simple; all we need to do is edit the Netbeans configuration file (with sudo), so using the terminal let’s use Nano to edit the file:

sudo nano /usr/local/netbeans-8.0.2/etc/netbeans.conf

Obviously you may have to change the Netbeans version number to match your directory; as you can tell, at the time of writing the Netbeans version I am using is 8.0.2!

Now, on (or around) line number 46, you should see a string of text like so:-

netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-Dsun.java2d.dpiaware=true -J-Dsun.zip.disableMemoryMapping=true"

We are now simply going to append that line with a couple of additional flags (settings), so lets add -J-Dswing.aatext=true -J-Dawt.useSystemAAFontSettings=on so the line now reads:-

netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-Dsun.java2d.dpiaware=true -J-Dsun.zip.disableMemoryMapping=true -J-Dswing.aatext=true -J-Dawt.useSystemAAFontSettings=on"

Great! – Save the file and restart Netbeans… You should now notice that the IDE is using anti-aliased text!

Installing MailCatcher support in Laravel Homestead

Mailcatcher is an awesome tool for capturing emails generated and sent from your application.

I’m an avid Laravel fan and recently Laravel released Homestead, Homestead is a pre-configured vagrant box for developing Laravel applications.

Previously I had been using Vaprobash for each of my Laravel applications, which had MailCatcher support out of the box – something I loved and now miss (as I’m now using Homestead for new projects). I’m still using Vaprobash for other non-Laravel projects that I’m working on and I’d definitely recommend it to others!

This quick post is going to demonstrate the steps that you need to follow in order to get MailCatcher installed on your Homestead box.

So before you start, make sure that your Homestead Vagrant instance is not running. We need to add a new port forwarding rule to enable us to access the MailCatcher web interface, so let’s do this by editing the file named homestead.rb, which lives in your homestead/scripts/ directory (the directory which you cloned as part of the Homestead installation process).

If you’ve installed Homestead 2 using the Composer global include method, you will find the homestead.rb file here instead: ~/.composer/vendor/laravel/homestead/scripts.

Add the following port forwarding configuration below the existing one; the rule you will need to add is as follows:-

config.vm.network "forwarded_port", guest: 1080, host: 1080

Now you’ve added the port forwarding configuration, save the file and start up the Homestead vagrant instance like so:

vagrant up

Once your Vagrant box is fully started you should notice that the new port forwarding rule is in effect in the vagrant up output (look for the line mapping port 1080).

Next we need to install some dependencies and then install MailCatcher (at the time of writing the version of the Homestead Vagrant box is v0.2.0).

Login to the Vagrant server using this command:

vagrant ssh

Now run the following commands:

sudo apt-get update
sudo apt-get install ruby1.9.1-dev
sudo gem install mailcatcher

Now we need to create an Upstart service on the virtual machine; this will ensure that we don’t have to manually start MailCatcher each time we restart the Vagrant box.

So we’ll do this by creating a new file like so:

sudo nano /etc/init/mailcatcher.conf

Now add the following content and then save the file!

description "Mailcatcher"

start on runlevel [2345]
stop on runlevel [!2345]

respawn

exec /usr/bin/env $(which mailcatcher) --foreground --http-ip=0.0.0.0

At this point you can now either restart the Vagrant box by running vagrant halt && vagrant up or manually start the new MailCatcher service by running this command:

sudo service mailcatcher start

Back on your desktop machine you should now be able to access the MailCatcher web interface using:

http://localhost:1080

Remember, before the web interface will show you emails, you need to configure your application/server to send emails to MailCatcher’s SMTP service, which is running on port 1025.
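For a Laravel 5 application running on the Homestead box itself, that usually just means pointing the stock mail settings at MailCatcher in your .env file, along these lines:

```
MAIL_DRIVER=smtp
MAIL_HOST=localhost
MAIL_PORT=1025
```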

If you wish to have machines/servers other than just the Vagrant box send emails to MailCatcher, you will also need to add a port forwarding rule for port 1025 (in the same way you did for the web interface port 1080).