How to create a PHP linux daemon (service)

There are many ways to automate scripts and applications on Linux. You can obviously create a cron task, which can execute anywhere from once a minute to once a year if you want, but there are other ways to have unattended background tasks execute.
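For comparison, cron schedule lines covering that range look something like this (using the same example script path we'll use later in this post):

```
# m h dom mon dow  command
* * * * *  php -f /usr/share/test_daemon.php   # every minute (cron's most frequent schedule)
0 0 1 1 *  php -f /usr/share/test_daemon.php   # once a year, at midnight on the 1st of January
```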

Today I will explain how you can create a Linux daemon using ‘Upstart’. This tutorial has been carried out on Ubuntu Server 14.04 but will work with various other distributions too. If you wish to find out more about Upstart, check out its project website.

To demonstrate that it works we will create a small PHP script that appends a timestamp to a text file every couple of seconds!

A few benefits of running a PHP script as a daemon as opposed to a cron task are as follows:-

  • You can easily start and stop the execution of these scripts without having to edit the crontab each time (simply issue a single command like ‘stop my-service-name’).
  • You can run tasks more frequently than once per minute (which is currently the crontab’s minimum execution window).
  • Specify which run-levels your script is to be run on.
  • …plus various others!
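To illustrate the second point, the usual cron workaround for sub-minute scheduling is a once-a-minute job that loops and sleeps internally, which a daemon avoids entirely. A rough sketch of that loop (with the sleep shortened to zero so the sketch runs instantly):

```shell
# Cron-style workaround: one job per minute that ticks several times itself.
ticks=""
for i in 1 2 3; do
  ticks="$ticks$i"
  sleep 0   # would be e.g. 'sleep 20' for a 20-second interval
done
echo "$ticks"
```

A daemon simply keeps its own loop running instead, with no crontab involved.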

It is actually amazingly easy to create a Linux daemon using Upstart: you simply create a daemon configuration script under /etc/init (take note, NOT /etc/init.d as per the older System-V init scripts). For this example we’ll save ours as /etc/init/test-daemon.conf, to match the ‘test-daemon’ service name we use later on.

This is what our service script will look like. It’s pretty simple, but there are many more awesome directives you can use in your service configuration script, such as ‘pre-start’ so your service can create initial directories or files etc. that it may depend on (check out the Upstart documentation for a full list of configuration directives).

description "Bobbys Test Daemon"
author "Bobby Allen"

start on startup
stop on shutdown

script
    sudo -u root php -f /usr/share/test_daemon.php
end script
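One stanza worth knowing about that isn’t used above is ‘respawn’, which tells Upstart to automatically restart the daemon if the process ever dies; adding these two lines to the configuration script would enable it (the limit values here are just an example):

```
respawn
respawn limit 10 5
```

The ‘respawn limit 10 5’ line protects against a crash-looping daemon: if it respawns 10 times within 5 seconds, Upstart gives up.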

As you can see, we reference our PHP script within the configuration file; this is what will be run by the service. Keep in mind that a daemon/service is designed to be run continuously in the background and will only stop when told to.

You must also develop your PHP service script to ‘loop’ continuously, as by default a PHP script will run once and then exit. Here is a very quick example script, which should be placed on the filesystem somewhere and then referenced accordingly; in my example I’ve placed it under /usr/share/test_daemon.php.

The contents of the PHP script is as follows:-


<?php
// The worker will execute every X seconds:
$seconds = 2;

// We work out the microseconds ready to be used by the 'usleep' function.
$micro = $seconds * 1000000;

while (true) {
    // This is the code you want to loop during the service...
    $myFile = "/home/ballen/daemontest.txt";
    $fh = fopen($myFile, 'a') or die("Can't open file");
    $stringData = "File updated at: " . time() . "\n";
    fwrite($fh, $stringData);
    fclose($fh);

    // Now before we 'cycle' again, we'll sleep for a bit...
    usleep($micro);
}

You should now be able to start, stop and view the status of your new Linux daemon like so:-

status test-daemon
start test-daemon
stop test-daemon

If you start the daemon using the command ‘start test-daemon‘, give it 10 seconds and then stop it using ‘stop test-daemon‘, you should notice when you read the log file that our daemon has been writing to it every 2 seconds: it should contain 5 timestamps, proving that the daemon is working successfully! (Obviously it will have more if you’ve already been playing with the start command :))
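If you want to sanity-check the counting logic, the sketch below simulates what you should see in the log: five ‘File updated at:’ lines after five 2-second cycles (using a temporary file rather than the real /home/ballen/daemontest.txt):

```shell
# Simulate the daemon's output file: one timestamped line per cycle,
# five cycles over ten seconds at a 2-second interval.
logfile=$(mktemp)
for i in 1 2 3 4 5; do
  echo "File updated at: $(date +%s)" >> "$logfile"
done
lines=$(wc -l < "$logfile")
echo "$lines"
rm -f "$logfile"
```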


How to install Git without having to install Xcode on MacOSX

Today I re-installed one of my Macs with the latest version of OSX (my laptop has been running Mavericks for a while so I thought I’d better upgrade my family iMac too!)

Anyway, annoyingly, if you download Git for MacOSX from the official Git website and install it on your Mac, you’ll notice that unless you have XCode installed, OSX will keep pestering you to install XCode!

Not for long… Download and install the latest version from the Git website and then simply open up your terminal and run the following command:

echo "PATH=/usr/local/git/bin:\$PATH" >> ~/.bash_profile

You’ll now have added the new ‘git’ binary path into your .bash_profile, which will take precedence over Apple’s git stub (bypassing the XCode installation notice and the need to install XCode altogether).

After typing the above command, simply close and re-open the terminal, or to re-load your bash profile without closing and re-opening, simply run:

source ~/.bash_profile

Test it works by checking the version of git like so:

git --version
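The reason this works is simple PATH precedence: the shell uses the first matching binary it finds when scanning PATH left to right. A self-contained sketch using a stub ‘git’ script (the stub and its version string are obviously made up for illustration):

```shell
# Create a throwaway directory containing a fake 'git' binary...
tmp=$(mktemp -d)
printf '#!/bin/sh\necho git version 0.0.0-stub\n' > "$tmp/git"
chmod +x "$tmp/git"

# ...prepend it to PATH: the stub now shadows any 'git' found later on PATH.
result=$(PATH="$tmp:$PATH" git --version)
echo "$result"
rm -rf "$tmp"
```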

Enjoy! – No need to bloat out your Mac anymore if you’re not using XCode 🙂

Setup your own private GitHub server using GitLab and Ubuntu Server 12.04 LTS

I’m the first to admit that I absolutely love writing code and applications, and I’m also a huge fan of GitHub. GitHub is a fantastic tool for hosting and collaborating on public projects such as open-source ones; however, although you can pay to host private Git repositories on GitHub, if you have more than a few you’ll soon start paying big bucks!

Recently I found a fantastic ‘clone’ of GitHub, it’s called GitLab and pretty much has all the features that GitHub does and like GitHub, GitLab is written in Ruby (on Rails).

In this blog post, I will demonstrate how to configure your own GitLab server, specifically on Ubuntu Server 12.04 LTS. For those that follow my blog posts quite closely, you’ll no doubt have realised by now that I do love Ubuntu Server (I prefer the Debian-based distros over the likes of the RedHat derivatives!)

So, this is a little longer than most of my other blog posts on here so I hope you can keep up as we delve into getting the raft of dependencies installed and everything up and running!

Lets get going…

From the console of your server we need to download and install some initial packages, execute the following commands:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y build-essential zlib1g-dev libyaml-dev libssl-dev libgdbm-dev libreadline-dev libncurses5-dev libffi-dev curl git-core openssh-server redis-server checkinstall libxml2-dev libxslt-dev libcurl4-openssl-dev libicu-dev

Next up: by default Ubuntu 12.04 LTS comes bundled with Ruby 1.8, and we need to replace this with Ruby 2.0, so let’s remove the old version and then build version 2.0 from source… (This bit can take some time; you might want to consider running the rest of the installation in a screen session in case you get an SSH idle connection timeout!)

mkdir /tmp/ruby && cd /tmp/ruby
curl --progress | tar xz
cd ruby-2.0.0-p247
./configure
make
sudo make install

Now, lets confirm that we are now running version 2.0 of Ruby:

ruby --version

We will now create a git user for GitLab to use, lets add the new system account like so:

sudo adduser --disabled-login --gecos 'GitLab' git

Next, as we will be installing our RoR dependencies using Bundler, we’ll download the Bundler Gem like so:

sudo gem install bundler --no-ri --no-rdoc

Ok, so that was the first bit out of the way, now lets move on to installing GitLab Shell, we’ll download GitLab Shell with the following commands:-

cd /home/git
sudo -u git -H git clone
cd gitlab-shell
sudo -u git -H git checkout v1.8.0
sudo -u git -H cp config.yml.example config.yml

At the time of writing, the current version of GitLab Shell was v1.8.0; you may want to replace the branch version above with the latest as per the branches listed on the gitlab-shell repository.

Now that we have GitLab Shell installed on the server, we should now edit the config.yml file and set the ‘gitlab_url’ property so that it points to our server, so using Nano we’ll edit and update the file:

nano config.yml

Update this line to reflect your server’s FQDN:

gitlab_url: "http://localhost/"

Now we finalise the installation of GitLab Shell by running the installer:

sudo -u git -H ./bin/install

Setting up MySQL

It is now time to prepare the MySQL database, you can also use Postgres if you wish but I will not be covering that in this post.

So, lets install the MySQL server and client libraries (when prompted for the MySQL root password be sure to remember it as you will need to use it shortly!):

sudo apt-get install -y mysql-server mysql-client libmysqlclient-dev

We will now create a dedicated MySQL user and create a database for GitLab, so lets now login to MySQL like so:

mysql -u root -p

and now execute the following queries; be sure to change the ‘YOUR_PASSWORD_HERE’ string for your own random password, as this will be used by the dedicated GitLab DB account.

CREATE USER 'gitlab'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD_HERE';
CREATE DATABASE IF NOT EXISTS `gitlabhq_production` DEFAULT CHARACTER SET `utf8` COLLATE `utf8_unicode_ci`;
GRANT SELECT, LOCK TABLES, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON `gitlabhq_production`.* TO 'gitlab'@'localhost';

Fantastic! – Now lets just make sure that you can login as expected to MySQL using the new ‘gitlab’ account:

mysql -u gitlab -p

If all went well, you should be able to execute the ‘show databases;‘ command at the MySQL prompt, and you should see the gitlabhq_production database in the list!

Grabbing GitLab

We’ve now got everything prepared so we can now start the actual installation of the GitLab application!

So lets now clone GitLab and switch to the latest ‘stable’ branch:

cd /home/git
sudo -u git -H git clone gitlab
cd /home/git/gitlab
sudo -u git -H git checkout 6-4-stable
sudo -u git -H cp config/gitlab.yml.example config/gitlab.yml

As with GitLab Shell, at the time of writing the latest stable release was 6.4, hence why the above code checked out ‘6-4-stable’; you should change this to the latest if you prefer!

Again, we have some more editing to do in the GitLab configuration file; as before we need to set the server’s FQDN, so let’s open the file first:-

sudo -u git -H nano config/gitlab.yml

Now let’s find the section below and update it accordingly; basically you should be replacing ‘localhost‘ with your server’s FQDN, and this should match the same FQDN that we set up earlier in the GitLab Shell config.yml file…

## Web server settings
host: localhost
port: 80
https: false

Ok, now we are going to set some directory permissions and configure the ‘git’ user’s Git configuration; simple stuff just copy and execute the following commands on your server…

cd /home/git/gitlab
sudo chown -R git log/
sudo chown -R git tmp/
sudo chmod -R u+rwX log/
sudo chmod -R u+rwX tmp/
sudo -u git -H mkdir /home/git/gitlab-satellites
sudo -u git -H mkdir tmp/pids/
sudo -u git -H mkdir tmp/sockets/
sudo chmod -R u+rwX tmp/pids/
sudo chmod -R u+rwX tmp/sockets/
sudo -u git -H mkdir public/uploads
sudo chmod -R u+rwX public/uploads
sudo -u git -H cp config/unicorn.rb.example config/unicorn.rb
sudo -u git -H git config --global "GitLab"
sudo -u git -H git config --global "gitlab@localhost"
sudo -u git -H git config --global core.autocrlf input
sudo -u git -H cp config/initializers/rack_attack.rb.example config/initializers/rack_attack.rb
sudo -u git cp config/database.yml.mysql config/database.yml

We now need to configure the database connection file with our MySQL user account details we created earlier…

sudo -u git -H nano config/database.yml

By default the file looks as follows; update the ‘production‘ section to match the database username and password that we configured earlier (you should only need to change the username and password)…

production:
  adapter: mysql2
  encoding: utf8
  reconnect: false
  database: gitlabhq_production
  pool: 10
  username: root
  password: "secure password"

Now that we’ve set that up, let’s secure the file by setting some more restrictive file permissions:

sudo -u git -H chmod o-rwx config/database.yml

Ok, we are now ready to start the GitLab application install (don’t panic when you see ‘postgres’ in the command below; notice that the command actually says ‘--without’, so this is 100% correct! :D)

cd /home/git/gitlab
sudo -u git -H bundle install --deployment --without development test postgres aws

After that is complete, lets run this command, and type ‘yes’ when prompted:

 sudo -u git -H bundle exec rake gitlab:setup RAILS_ENV=production

After the above command completes, you will be shown the default administrator credentials; the output should look like this:

Administrator account created:

login.........admin@local.host

Lets now configure GitLab to start automatically whenever the server is booted, to do this execute the following commands:

sudo cp lib/support/init.d/gitlab /etc/init.d/gitlab
sudo chmod +x /etc/init.d/gitlab
sudo update-rc.d gitlab defaults 21

Let’s keep the log files in check by enabling logrotate:-

sudo cp lib/support/logrotate/gitlab /etc/logrotate.d/gitlab

Lets now check that everything installed ok on the server and we are not getting any error messages that we should be concerned about:-

sudo -u git -H bundle exec rake gitlab:env:info RAILS_ENV=production

We can now start GitLab and hopefully all will work without any issues:

sudo service gitlab start

Lets compile the assets:

sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production

….this should speed up the initial loading time!

Time to install Nginx

Ok, GitLab is now running but we now need to install a web server, my personal favorite for speed and stability is Nginx, so that is what I will be installing…

sudo apt-get -y install nginx
cd /home/git/gitlab
sudo cp lib/support/nginx/gitlab /etc/nginx/sites-available/gitlab
sudo ln -s /etc/nginx/sites-available/gitlab /etc/nginx/sites-enabled/gitlab

We now need to edit the Nginx gitlab configuration file to set the server’s FQDN and again, this should match the ones we added to both ‘GitLab Shell’ and GitLab’s config.yml files earlier, so:

sudo nano /etc/nginx/sites-available/gitlab

and now restart Nginx like so:

sudo service nginx restart

I’m sure you’ll be over the moon to hear that we are finished!!!

Having issues? 502 error(s)?

First of all, it’s a good idea to run this command to ensure that everything that should be ok, is ok…

sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

I actually found that on my installation it was complaining that the version of Git was out of date; this didn’t really matter too much so I just left it!

I found that the first time I tried accessing GitLab after installation, the CPU and memory were battered by the ‘unicorn_rails’ process. To get around this I updated the ‘timeout’ value in the /home/git/gitlab/config/unicorn.rb file to ‘300’ and then restarted both Nginx and GitLab; once complete I tried accessing GitLab again, and although it took a while, it loaded and all was good! – See more information about this issue here:!topic/gitlabhq/u9AMESyd-N0

I found that after installation, whenever I tried to connect to my server I was being thrown a 502 error by Nginx; I did some digging around on Google and found that to resolve the issue I had to add my server’s FQDN to the localhost line in the /etc/hosts file!

So if you have the same issue, change the line in /etc/hosts from:       localhost ap2

to (appending your server’s FQDN; ‘gitlab.example.com’ here is just an example):       localhost ap2 gitlab.example.com

and restart nginx and GitLab and you should then be good to go!

If you are having other issues, it is well worth knowing where the log files live so that you can view them and get clues as to what may be causing the issue; I’d recommend checking the following log files for errors:-

tail /var/log/nginx/gitlab_error.log -n50
tail /var/log/syslog -n50
tail /home/git/gitlab/log/application.log -n50
tail /home/git/gitlab/log/production.log -n50
tail /home/git/gitlab/log/unicorn.stderr.log -n50
tail /home/git/gitlab/log/unicorn.stdout.log -n50
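If you find yourself doing this a lot, a small loop over the log paths saves some typing (the paths are the same ones listed above; any logs that don’t exist on your machine are simply noted):

```shell
# Tail each of the interesting GitLab/Nginx logs in turn.
checked=0
for f in /var/log/nginx/gitlab_error.log \
         /home/git/gitlab/log/production.log \
         /home/git/gitlab/log/unicorn.stderr.log; do
  echo "== $f =="
  tail -n 50 "$f" 2>/dev/null || echo "(no such log on this machine)"
  checked=$((checked + 1))
done
```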

Setting up Teamspeak 3 on Ubuntu Server 12.04 LTS

I have a few personal VPS servers which I run and have purchased through DigitalOcean. One of my VPS servers acts as a mail server at present; it runs Ubuntu Server 12.04 with Postfix, Dovecot, ViMbAdmin and some SPAM-busting tools. I frequently work on projects with others online, and this normally generates a whole load of emails when all you really need to do is jump on Skype… Skype is great, but sometimes it’s nice to have your own VOIP solution which you personally host, meaning you can have multiple chat participants without much hassle!

As my mail server VPS has more than enough system resources, I decided to turn it into a unified communications server (running both email and VOIP). For my VOIP solution I’ll be hosting a Teamspeak 3 server, which I can use not only to communicate and hold ‘group meetings’ with the guys I work on open-source projects with, but also with my brother and friends when we get online and hit up some Battlefield 3!

So this is a very quick guide that assumes you are using Ubuntu Server 12.04 LTS (this should work fine on other versions of Ubuntu too though), lets begin…

First of all you’ll need to log in to your server; now let’s begin by downloading the latest version of the Linux Teamspeak server from the Teamspeak website:-


Ok, so now lets extract the contents of the downloaded archive like so:-

 tar xzf teamspeak3-server_linux-amd64-

Next we’ll create a user account which Teamspeak will run under on our server; we’ll simply use ‘teamspeak3’ as the username and disable the ability to log in to the server with this account (in effect making it a ‘local daemon account’ only):

sudo adduser --disabled-login teamspeak3

Perfect! – Let’s now move the Teamspeak binaries and configuration files into their new home; we’ll place these under /usr/local/teamspeak3:

sudo mv teamspeak3-server_linux-amd64 /usr/local/teamspeak3

…and change the ownership to our ‘teamspeak3’ user that we set up a few minutes ago…

sudo chown -R teamspeak3 /usr/local/teamspeak3

Fantastic, we are now very nearly done! – The last thing that we should do is get Teamspeak to start on boot, so first we will create a symlink (symbolic link) to the default init script ( that is included in the download archive:-

sudo ln -s /usr/local/teamspeak3/ /etc/init.d/teamspeak3

and now we set it to start on system boot up like so…

sudo update-rc.d teamspeak3 defaults

Here we go… we will now start the Teamspeak 3 server for the first time…

sudo service teamspeak3 start

Woohoo, you now have a Teamspeak 3 server up and running, and as long as you’ve not got a firewall running you should now be able to connect to this server using your server’s hostname or IP address! (If you are running IPTables, see my extended instructions below!)

Just before you get carried away though, on first start-up the server will display a screen containing the server admin credentials and privilege key (token). This screen is very important; the on-screen text makes it obvious why you need these details so I won’t go into too much depth, but make sure you make a note of the settings so you can administer your TS3 server from your TS3 client!

Adding firewall rules for IPTables
If you have a firewall installed you’ll need to enable a few ports, if you are running IPtables on your server the rules required are as follows:-

-A INPUT -p udp --dport 9987 -j ACCEPT
-A INPUT -p udp --sport 9987 -j ACCEPT
-A INPUT -p tcp --dport 30033 -j ACCEPT
-A INPUT -p tcp --sport 30033 -j ACCEPT
-A INPUT -p tcp --dport 10011 -j ACCEPT
-A INPUT -p tcp --sport 10011 -j ACCEPT

So anyway, this is a great solution if you want to run your own hosted VOIP solution for gaming or group chats etc, I hope you have found this post useful!

Using Gmail with Postfix as an SMTP relay

Many times in the past I’ve configured Linux servers (in this case, specifically Ubuntu LTS servers) where I need to receive automated emails such as CRON logs etc. at an external email account, but don’t want or need the hassle of configuring a fully-blown SMTP server with SPF records, DKIM signing etc. So in this very quick tutorial I’ll demonstrate how you can use a standard Google Mail (GMail) account to relay all server emails; this will ensure that emails are safely delivered without being marked as SPAM by many third-party mail server providers.

Beware however that ALL emails sent from the server, regardless of any ‘from address’ or ‘from name’ you set, will appear to come from your actual GMail address and name (Google forces this!), so I’d recommend using this for servers where you are the sole recipient of any emails, such as automated administration emails. If you intend to send emails from a web application server to many public users, I’d recommend either setting up Postfix to send direct (and therefore ensuring that all DNS entries are configured correctly) or using a dedicated SMTP relay service like SendGrid or DNSExit’s Mail Relay Outbound service.

Anyway, lets get a move on…

So technically we will be installing and configuring Postfix as a ‘smarthost’, so first of all lets install Postfix on the server like so:-

apt-get install postfix

Next, the deb-installer will prompt you for your desired Postfix configuration settings, these are the settings that you should use:-

  • Type of mail server: Satellite System
  • Mail name: (the name you want for your outbound mail)
  • SMTP relay host: []:587
  • Postmaster: (You can leave this blank if you wish, this server will not be configured to receive emails)
  • Other destinations: (I’ve left this blank!)
  • Synchronous Queues: (Your choice, I left this as default, this won’t affect the relaying of emails)
  • Network blocks to allow relay: (Leave as default unless you want to allow all machines in your LAN etc to relay through this relay server, make sure you know what you are doing here though!)
  • Mailbox size: (Your choice, this is for incoming email only and therefore is not really important, I just left it as default!)
  • Local address: (I left it as default ‘+’)
  • Listen address: (Again, this is your choice, the default will do if you don’t want to segregate your network cards and access from other internal sub-domains, I therefore just left this as default also!)

Fantastic, we are now nearly there! We now just need to make some changes to the main Postfix configuration file located at /etc/postfix/ and add some extra configuration options to enable TLS and the Gmail account password required to send emails via Gmail.

So now, edit the file using Vi or Nano for example, like so:-

nano /etc/postfix/

…and now add the following lines to the bottom of this file:-

smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous

Now we create the username/password authentication file. Using Vi or Nano, create a new file here: /etc/postfix/sasl_passwd, the contents of which should be as follows (one line):-

[]:587 your.address@gmail.com:PASSWORD

You obviously need to replace the above example with your own email address, and ‘PASSWORD’ should be replaced with your current Gmail password.

For best practice it is advisable to change the ownership of the credentials file that you’ve just created and set restrictive permissions, so that others with access to the server cannot view the Gmail account credentials; let’s do this now like so:

chmod 640 /etc/postfix/sasl_passwd*
chown postfix:postfix /etc/postfix/sasl_passwd*
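As a quick aside, you can verify what mode 640 gives you (owner read/write, group read, nothing for others) on a throwaway file, using GNU stat as found on Ubuntu:

```shell
# Demonstrate the effect of 'chmod 640' on a temporary file.
f=$(mktemp)
chmod 640 "$f"
mode=$(stat -c '%a' "$f")   # GNU stat; on BSD/macOS use 'stat -f %Lp' instead
echo "$mode"
rm -f "$f"
```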

Now we need to rebuild the hash, execute the following command:-

postmap /etc/postfix/sasl_passwd

Now finally, for the changes to take effect lets now restart Postfix so that emails can start being relayed:-

service postfix restart

That is it! – Emails should now route through your Gmail account when the server attempts to send external emails!

As a side note, by default CRON jobs will attempt to email the local users mailbox, you can therefore forward local emails for users to an external email address using Aliases, see one of my other posts on how to configure this.
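As a sketch, such an alias in /etc/aliases might look like this (‘you@example.com’ is a placeholder address; remember to run the ‘newaliases’ command after editing the file):

```
root: you@example.com
```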

You can also add a ‘MAILTO’ setting to the top of any CRON file to force automated CRON output emails to be sent directly to an external account instead of the local account of the user the CRON job is executed under. For more information about the MAILTO setting, see this post over at *NIXCraft.


Installing a simple NFS server for use with VMware ESXi

OK, just a very quick post here. This week I installed my new ESXi host so I can run a few test VMs etc. and wanted to easily mount installation media (eg. ISOs) from my home server (running Ubuntu Server 12.04), so I thought I’d quickly set up an NFS (Network File System) server, as VMware ESXi supports these as ‘datastores’. Here are the simple steps for anyone wanting to achieve the same kind of setup:-

Firstly I created a new folder on my Ubuntu server where the actual data is going to be stored:-

 mkdir -p /data/nfs/install_media

Next we need to install the NFS server software, so we’ll use apt-get to do that like so:-

 apt-get install nfs-kernel-server

We now need to edit the /etc/exports file, so using nano we’ll add a new line to the /etc/exports file like so:-

/data/nfs/install_media,no_root_squash,async,no_subtree_check)

^ Replace the IP address with your ESXi server’s IP address (the address and export options above are just an example), and note that there must be no space between the IP address and the options.

Now we should run the following command to re-export our new configuration (this should also be run each time you add any additional configs/shares to the /etc/exports file!)

 exportfs -ra

…and now we’ll also just restart the NFS server daemon like so..

 service nfs-kernel-server restart

You should now be able to add a new NFS data store from your ESXi server to your remote server 🙂


Setting up internal DNS on Ubuntu Server 12.04 LTS

In the past I’ve never really bothered with internal DNS at home; I’ve simply added host records to my servers/laptops and workstations as and when I need to by updating the hosts file. But now that I have just finished my new loft conversion and have moved my entire office up there, I’ve dedicated some time in my busy schedule to implement some internal network changes and have decided to automate internal host resolution, so I’m going to install BIND on my home Ubuntu 12.04 LTS server. As an avid fan of PowerDNS I would have used that, but due to the need to ‘forward’ unresolved queries I’ve had to use BIND.

So this simple guide is to help anyone else get started by creating their own local DNS server, which will also forward unresolved requests to public DNS servers to enable transparent external DNS lookups too.

The software we are going to use for the DNS server is ISC BIND (version 9), we can install this simply from the terminal of your server like so:-

apt-get install bind9

Now that BIND is installed we are going to edit /etc/bind/named.conf.options and configure BIND to cache requests and forward unresolved queries.

nano /etc/bind/named.conf.options

Ensure that the file is updated (remove the comments from the ‘forwarders’ section and add your external DNS servers); in the below example I’m using Google’s public DNS servers ( and

forwarders {;;
};

On your server (I assume you have configured a static IP address) edit /etc/network/interfaces and we’ll add these three settings (using this server’s own IP address, which in this example is, as the nameserver):-

dns-search home.local
dns-domain home.local

This will ensure that your server now queries itself first before checking the external DNS servers ( and, and by using the dns-search and dns-domain options, instead of typing say ‘server1.home.local‘ in a browser or when using ping etc. you can just type ‘server1‘ and this will resolve automatically too!

Now, for the changes to take effect we need to restart the network interface; to do this run the following command:-

nohup sh -c "ifdown eth0 && ifup eth0"

So now the next thing that we need to do is create the actual zone file for our local domain (which in this example is ‘home.local‘); we’ll do so like so:-

nano /etc/bind/named.conf.local

Add a zone for our local domain like so:-

zone "home.local" IN {
 type master;
 file "/etc/bind/zones/home.local.db";
};

and so that we can do reverse lookups too, we’ll also add a reverse lookup zone:-

zone "" {
 type master;
 file "/etc/bind/zones/";
};

Now we create the actual zone database file for our ‘home.local‘ local domain; we’ll do this like so:-

mkdir /etc/bind/zones
nano /etc/bind/zones/home.local.db

Now add the following content into the file (obviously replace the hostnames/IP address with your own personal setup etc.):-

; Use semicolons to add comments.
; Host-to-IP Address DNS Pointers for home.local
; Note: The extra "." at the end of the domain names are important.
; The following parameters set when DNS records will expire, etc.
; Importantly, the serial number must always be iterated upward to prevent
; undesirable consequences. A good format to use is YYYYMMDDII where
; the II index is in case you make more than one change in the same day.
$TTL 86400 ; 1 day
home.local. IN SOA server1.home.local. hostmaster.home.local. (
 2013091901 ; serial
 8H ; refresh
 4H ; retry
 4W ; expire
 1D ; minimum
)

; NS indicates that 'server1' is a/the nameserver on home.local
; MX indicates that 'mail-server' is the mail server on home.local
home.local. IN NS server1.home.local.
home.local. IN MX 10 mail-server.home.local.

$ORIGIN home.local.

; Set the address for localhost.home.local
localhost IN A

; Set the hostnames in alphabetical order
print-srv IN A
router IN A
server2 IN A
server1 IN A
xbox IN A
mail-server IN A

Great, now save the file and we will create the ‘reverse’ DNS zone file (IP-to-hostname resolution); so now we’ll create a new file like so:-

nano /etc/bind/zones/

and now add the following content, again, replace IP addresses and host names with your own!

; IP Address-to-Host DNS Pointers for the 192.168.0 subnet
@ IN SOA server1.home.local. hostmaster.home.local. (
 2013091901 ; serial
 8H ; refresh
 4H ; retry
 4W ; expire
 1D ; minimum
)
; define the authoritative name server
 IN NS server1.home.local.
; our hosts, in numeric order
1 IN PTR router.home.local.
2 IN PTR server1.home.local.
3 IN PTR xbox.home.local.
5 IN PTR server2.home.local.
9 IN PTR print-srv.home.local.
11 IN PTR mail-server.home.local.

Fantastic! – we’re nearly there, now we simply need to restart the BIND daemon for the changes to take effect, we do this like so:

service bind9 restart

Great, our server should now be able to resolve both external (forwarded DNS) queries and our new local DNS records, so let’s do some testing:-


The response received should look as follows:- has address has IPv6 address 2001:6b0:7::18

That’s great, now let’s list all of our internal machines (a full zone listing) like so:-

host -l home.local

You should now see a full list of the hosts (‘A’ records) that we previously set up, and so one final test – let’s test out a reverse lookup by executing:-


The response should have been: domain name pointer server1.home.local.

Super stuff!! – That’s it, there you have your own internal DNS server which supports query caching and forward lookups… enjoy!

A few things to be aware of/conscious of:-

  • Always remember to increment the ‘serial’ when updating the zone files.
  • Ideally you should ensure that your router/firewall is not allowing public access to your DNS server (port 53), as otherwise your DNS server will be available to everyone on the internet, which obviously isn’t ideal and is a security risk, seeing as it has been set up for local network DNS queries only.
  • In this set-up we configured the server to use itself for DNS lookups; this also needs to be set up on the other clients on your network. If you have a DHCP server you should specify your DNS server’s IP in its settings, as well as the search domain; if you don’t have a DHCP server on your network you should configure these manually on each machine’s network card/interface.
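On the serial point, a small shell snippet can generate a YYYYMMDDII-style serial for you (the ‘01’ index here is just an example; bump it for each further change on the same day):

```shell
# Build a zone serial: today's date (YYYYMMDD) plus a two-digit change index.
index=01
serial="$(date +%Y%m%d)${index}"
echo "$serial"
```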

I hope you found this guide useful!

DLNA Server (miniDLNA/ReadyDLNA) on Ubuntu Server 12.04 LTS

In a previous post on my blog I posted about setting up a home server with the ability to stream music, video and pictures to devices connected to the same network.

I previously used MediaTomb, which to be honest was rather complex to set up and manage, and now, as I am about to buy an XBOX (mainly to play GTA 5), I found that MediaTomb does not support the XBOX and will not work.

So some research on the internet led me to look at and install ReadyDLNA (previously named 'miniDLNA'), which is super-simple to install and configure and works great… a much smaller resource footprint too!

So let's get on and install it; for this set-up I am running an Ubuntu Server 12.04 LTS machine which hosts all my shared files, movies, photos and various other things.

First up we need to install the software, so run the following command from the terminal:

apt-get install minidlna

If you don’t already have folders for storing your movies, music and photo’s you should create them like so:

mkdir -p /media/music
mkdir -p /media/pictures
mkdir -p /media/movies

Now we need to edit the /etc/minidlna.conf file; you should edit it so that it contains the following:-

friendly_name=Bobbys DLNA Server
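The friendly_name is just what appears on your devices; for the server to actually share anything, you also need media_dir entries pointing at the folders created earlier. The optional letter before the path restricts a directory to one media type (A = audio, P = pictures, V = video). A sketch matching the directories above:

```
media_dir=A,/media/music
media_dir=P,/media/pictures
media_dir=V,/media/movies
```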

Now it’s time to reload the minidlna daemon and all should now be working, try connecting and streaming some content from one of you devices!

/etc/init.d/minidlna force-reload

Please note that when changing the settings (such as media paths) in the configuration file you should execute the following command to rebuild the media database:

minidlna -R

You may find that adding the above command as a daily cron job will keep your media library in sync, picking up all new movies, videos and photos.
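For example, a root crontab entry (added via 'crontab -e') might look like the sketch below; the 3am run time and the /usr/bin path are assumptions, so check where your distribution installs the binary with 'which minidlna':

```
# Rebuild the miniDLNA media database every night at 3am
0 3 * * * /usr/bin/minidlna -R
```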

So your new DLNA server should now be found by your devices and you should now be able to browse and stream your media!

Hope you enjoy!

How to view the size of MySQL database(s) via the MySQL CLI

I thought I’d quickly post up this quick MySQL snippet to help others who are trying to quickly view the size of a MySQL database from the MySQL CLI.

The following query displays the size of all your current MySQL databases on your server in megabytes:-

SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB" 
FROM information_schema.TABLES GROUP BY table_schema;
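If you only care about one database, the same query can be filtered with a WHERE clause (replace 'your_database' with the schema name; the ROUND just tidies the output):

```sql
SELECT table_schema AS "Data Base Name",
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Data Base Size in MB"
FROM information_schema.TABLES
WHERE table_schema = 'your_database'
GROUP BY table_schema;
```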

Hope this helps at least a few people; I find this extremely useful when accessing a remote server over SSH and wanting to quickly determine the size of the DB without the need for any GUI clients or exporting the database.

Installing PHP and MySQL on MacOSX using MacPorts for development.

OK, so this is aimed at being a very quick guide to installing a PHP development environment (specifically PHP 5.4, but you can install PHP 5.3 or PHP 5.5 if you wish!) and a MySQL server, enabling you to develop (using the built-in PHP development server) with MySQL on a Mac OS X based machine. To begin, you should download and run the MacPorts .dmg file from the MacPorts website.

After you’ve ran and installed MacPorts, its time to bring up the terminal, next type the following at the terminal prompt:-

sudo port -d selfupdate

….it should now automatically check and update itself.

To install MySQL Server, simply run the following command:-

sudo port install mysql55 mysql55-server

Then we’ll set it to start automatically on system boot up:-

sudo /opt/local/lib/mysql55/bin/mysql_install_db --user=mysql

To be able to access MySQL using a desktop client (over TCP instead of the Unix socket), I'd recommend you edit the MySQL configuration file and comment out the 'skip-networking' line; you can do this like so:-

sudo nano /opt/local/etc/mysql55/macports-default.cnf
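As an illustration only (the exact contents of macports-default.cnf vary between MacPorts versions), the change is simply to comment out the line so MySQL listens on TCP port 3306 as well as on the socket:

```
[mysqld]
# Commented out so desktop clients can connect over TCP (default port 3306):
# skip-networking
```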

You can now start the MySQL server like so:-

sudo /opt/local/share/mysql55/support-files/mysql.server start

Now we’ll configure MySQL to launch at System Startup like so:

sudo launchctl load -w /Library/LaunchDaemons/org.macports.mysql55-server.plist

That's it, we can now move on to installing PHP…

Now let's look at the PHP packages that we want to install…

You can use the 'package search' feature on the MacPorts website to see what packages are available; I'd recommend (and will be installing) the following packages for our 'development set-up':-

  • php54
  • php54-gd
  • php54-curl
  • php54-imagick
  • php54-intl
  • php54-mbstring
  • php54-mcrypt
  • php54-iconv
  • php54-ming
  • php54-mysql
  • php54-soap
  • php54-sockets
  • php54-sqlite
  • php54-xdebug
  • php54-zip

So we’ll now install them like so:-

sudo port install php54 php54-curl php54-imagick php54-intl php54-mbstring php54-mcrypt php54-iconv php54-ming php54-mysql php54-soap php54-sockets php54-sqlite php54-xdebug php54-zip

You may notice that after the first package 'php54' has been installed, a message appears in the console advising you to copy a sample php.ini file into place.

So now that everything is installed, we'll make a working copy of the 'development' distribution copy so that we can make tweaks if needed in future; we'll do this like so:-

sudo cp /opt/local/etc/php54/php.ini-development /opt/local/etc/php54/php.ini

So now, and in future, if you wish to edit this file you can do so like this (again, from the command line):-

sudo nano /opt/local/etc/php54/php.ini

Now, you can run PHP and check the version number from the terminal like so:-

php54 -v

Now we’ll symlink the php54 binary to ‘php’ so we can run PHP using the normal ‘php’ command, like so:-

sudo ln -s /opt/local/bin/php54 /usr/bin/php

So now, you should be able to run:-

php -v

That's great! So now you can start the built-in PHP development server from the Terminal (run it from your project directory, or pass -t /path/to/docroot to serve another directory) like so:-

php -S localhost:8000

Hope you found this useful!