Degrees of value


What’s this all about?

It’s my attempt at categorising the functions of a company, or the roles in a project, in a way that focuses on the value to the end client, hopefully with some ideas on how to increase it.

I’m a software developer, so the examples and ideas are around software development, but feel free to try to apply it to your own industry.

Where do we start?

The first thing we’ll do is create four columns and start placing different functions, actions or roles in them. Honestly, I think this is the most difficult part: figuring out where everything belongs. Anyway, back to the columns; each of the four has some special attributes.


1st degree value

The first column is the stuff the client asked you to do. As a software developer I’ve included things like bug fixing, features, design and documentation.


2nd degree value

This is the stuff the client does not ask you to do but you do anyway, and bill the client for, because it’s part of your expertise and needed for the things in the 1st column. As part of developing software we need servers, to test what we’ve done, etc. It greatly adds value to what finally gets delivered to the client!

As an example, I can deliver the software on a CD to the client and expect them to install it on their server themselves. It’s far more valuable if I can install it for them on their servers; it’s still, in a way, work that was asked for, even if not explicitly.

So this column is kinda like the stuff we ask the client if we can do.


3rd degree value

In this column are the business functions we need to support our value to the end client. It’s things like planning: how to tackle the project, sprint planning, etc.


4th degree value

This is the general business functions like HR, billing, etc.

I’ve gone a bit light on the 3rd and 4th columns because by far most of my time is spent in the 1st and 2nd columns and none in the 4th. It’s in no way a reflection of the importance of the functions or roles in the 3rd and 4th columns; I’ll hopefully show you why in the next section.

Left to Right

The idea here is to see what happens when we start chopping off columns, starting with the 1st. When we chop off the 1st column, the next column becomes redundant.

Some of the items in the 2nd column might seem value-adding, something a client might indeed ask a provider to fulfil. The problem: we don’t provide them as 1st degree value. You’ll likely find companies that offer these as 1st degree value services, and they’ll have a 2nd degree value column full of other things that make them worth engaging with. If a company can’t explain its investments in the 2nd column, it’s likely not worth engaging with.

When we drop the 2nd column we’re left with planning nothing and no way of providing concrete value to the client. Without a 2nd column, the functions in the 3rd column become redundant.

Likewise, when we drop the 3rd column the 4th becomes redundant, because we have an HR role with no employees and no positions to fill.

Right to Left

This is the part where I explain why the columns on the right are equally important. Simply put: without them we can’t do any of the others. Without HR we don’t have resources to do planning or execute those plans. Without planning we can’t do anything, because we don’t know what to do. Without being able to set up servers or development environments we can’t develop software.

Ah, but now you’ll say we’ve been able to function without one of those functions before. If you think hard enough, you’ll likely (I’m still building this model at the moment) find that you filled one of those roles yourself. They’re roles, not job descriptions or people. At its simplest, HR might be a decision to do something, planning a simple thought on how to do it, and then a set of various tasks, some more valuable and some less. This model can be applied to a single person, a single project, a train of thought or maybe writing a blog post.

So in a way the rightmost columns are the most important, if you’d still like to rank them by importance. I’d still consider all of them equally important.

Billable and Non-Billable

So I’ve separated columns one and two as billable and columns three and four as non-billable. I doubt this is accurate but so far it’s working for me.

One of the things I’ve found so far is that the further left you go, the less it becomes about soft skills and the more about domain expertise, and vice versa.

Ultimately it’s the domain expertise that the client needs and splitting it in the middle makes sense so far.

What’s the point of all this?

It’s a different angle on how a company works, or at least the start of such a view. It’s a view focused on the end client and the engagement with the end client.

What to take away from this?

It’s a way to look at a company that helps to systematically increase profits. Say you spend 50% of your resources on the 1st and 2nd columns: you’ll need to bill that work at 200% of its cost just to break even.
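To make the arithmetic concrete, here’s a small sketch of the break-even markup for a given billable fraction. The function and its name are my own illustration, not part of the original model:

```python
def break_even_markup(billable_fraction):
    """Markup needed on billable work (columns 1 and 2) so that revenue
    covers total cost, including the non-billable columns.

    If billable work is 50% of total cost, billing it at cost only
    recovers half, so you must charge 200% of its cost, i.e. a 100%
    markup, just to break even.
    """
    return 1.0 / billable_fraction - 1.0

# 50% billable -> charge double the billable cost to break even
print(break_even_markup(0.5))   # 1.0, i.e. a 100% markup

# 25% billable -> a 300% markup just to break even
print(break_even_markup(0.25))  # 3.0
```

Anything above the break-even markup is profit, which is why shrinking the non-billable columns pays off so directly.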

It’s a way to look at a company that helps streamline it. Start streamlining from right to left, making sure not to drop quality. Having issues delivering quality to a client? Make sure you’re hiring the right people; streamline, and make sure your hiring practices are up to standard.

My role sits mostly in the 1st and 2nd columns. I’m focusing on making things in the 2nd column easier and cheaper while maintaining quality, and in many cases increasing it.

Another interesting thought for a software company is where something like code review fits in. I think where it fits has a lot to do with your attitude towards it. Some people consider code review a necessity of delivery, holding that the software has to have zero technical debt. Others consider it a teaching tool, there to grow the overall skills of the company rather than a condition of delivery. Depending on your view, you can place it in the 2nd or 3rd column: if it’s a condition of delivery, the 2nd column; if it’s a teaching tool, the 3rd.

Also, where do we put project management: 2nd column or 3rd? If we have verifiable resources (talent) for project management, like an employee with a degree in project management, it belongs in the 2nd column. If it’s more a soft skill that we’re applying, it’s a 3rd column thing.

Docker & Golang

I wrote an article about Swift with Docker and made an issue of using Swift with Docker vs Golang with Docker, so I thought I’d write up something about using Golang with Docker to illustrate my point. To start with, I don’t need a base Golang image. The compiled application works with Docker out of the box without adding any libraries, so there’s no need for a base Docker image that contains a runtime.

My Dockerfile for the application

FROM alpine:latest
ADD main /app
ENTRYPOINT ["/app"]

Main file

package main

import "fmt"

// minimal example body
func printSomething() {
    fmt.Println("something")
}

func main() {
    printSomething()
}
Jenkins build script

go get -d
# build a statically linked binary so it runs on Alpine without glibc
CGO_ENABLED=0 go build main.go
docker build -t fritsstegmann/golang .

The result of this process is an image that’s about 6MB. My Swift application’s Docker image is more than 1.1GB.

Docker & Swift Lang

Swift is a relatively new language from Apple that replaces Objective-C for iOS and macOS development. Apple open sourced the language a while ago and it now has official support on Ubuntu (using it on other Linux distributions is a bit troublesome). The issue on other Linux systems is the dynamic linking of libraries. I’ve tried using Swift’s ability to statically link libraries; it succeeded in including the Swift runtime libraries but failed to include other lower-level runtime libraries. I had to create a Docker base image, based on Ubuntu 16.04, that includes the Swift tools for compiling the application. To run the Swift application in Docker we have to compile the application in the container it will run in.

My Dockerfile for the base Swift image

FROM ubuntu:16.04

RUN apt-get update && apt-get install -y wget clang libstdc++6 libicu-dev libbsd-dev libxml2 libcurl3 \
        && cd /root \
        && wget \
        && tar xfzv swift-3.0.1-RELEASE-ubuntu16.04.tar.gz \
        && cd /usr/local/bin \
        && ln -s /root/swift-3.0.1-RELEASE-ubuntu16.04/usr/bin/swift \
        && ln -s /root/swift-3.0.1-RELEASE-ubuntu16.04/usr/bin/swiftc

My Dockerfile for the Swift application

FROM fritsstegmann/swift

RUN mkdir /app
ADD . /app/

WORKDIR /app

RUN swift build --configuration release

ENTRYPOINT [".build/release/app"]

Package file

import PackageDescription

let package = Package(
    name: "app",
    targets: []
)
Main file

func sayHello() {
    print("Hello, World!")
}

sayHello()

My gripe with Swift is that the linking issue makes distributing an application across different Linux distributions difficult. It makes Swift a less attractive option than Golang in a container-crazed world. I’d have preferred to compile the application with Jenkins and then copy only the binaries into the application container.

Laravel & Doctrine Tutorial 2

I wrote an article a while back on Laravel and Doctrine; since then a new library for Laravel and Doctrine integration has appeared, and it’s pretty good.


– To install the Laravel Doctrine package for Laravel 5.2, add the package to the require section of composer.json.

– Add its service provider under the service providers section in config/app.php.

– Run this command to publish the config settings for Doctrine:

php artisan vendor:publish --tag="config"

First Model

I prefer using YAML mappings for my models. By default Laravel Doctrine will look for them in the app directory, which I don’t like. I put my models in a subdirectory called Models, and in that directory a subdirectory called mappings to store the YAML files. To tell Doctrine to look for these files in my custom path I have to update the Doctrine configuration with this:

    'paths'      => [
        base_path('app') . '/Models/mappings',
    ],

In the mappings directory create a mapping file for the User entity and add the following content:

App\Models\User:
  type: entity
  table: users
  id:
    id:
      type: integer
      generator:
        strategy: AUTO
  fields:
    name:
      type: text
    email:
      type: text
    password:
      type: text
    rememberToken:
      type: text

In the Models folder create the User class and add the following content:

namespace App\Models;

use JsonSerializable;

class User implements JsonSerializable
{
    use JsonSerializer;

    public $id;
    public $name;
    public $email;
    public $password;
    public $rememberToken;

    private $__hidden__ = ['password', 'rememberToken'];

    /**
     * User constructor.
     * @param $id
     * @param $name
     * @param $email
     * @param $password
     * @param $rememberToken
     */
    public function __construct($id, $name, $email, $password, $rememberToken)
    {
        $this->id = $id;
        $this->name = $name;
        $this->email = $email;
        $this->password = $password;
        $this->rememberToken = $rememberToken;
    }
}

Two things stand out about this file:

  • The fields are public
  • The JsonSerializer

The fields are public because I feel the getter-setter pattern isn’t relevant anymore, even in languages like Java. In Java the getter, in many cases, returns a reference to a private variable rather than a copy, breaking the whole reason for the pattern. The JsonSerializer is a trait I wrote to exclude some fields from being returned by, for example, an API. It excludes the Doctrine proxy custom fields as well. In this case the password and remember token are excluded, similar to Eloquent.

namespace App\Models;

trait JsonSerializer
{
    /**
     * @param  int    $options
     * @return string
     */
    public function toJson($options = 0)
    {
        return json_encode($this->jsonSerialize(), $options);
    }

    /**
     * Specify data which should be serialized to JSON
     * @return mixed data which can be serialized by json_encode,
     *               which is a value of any type other than a resource.
     * @since 5.4.0
     */
    public function jsonSerialize()
    {
        $vars = get_object_vars($this);

        $r = [];
        if (isset($this->__hidden__)) {
            foreach ($vars as $k => $v) {
                if (!starts_with($k, '__') && !in_array($k, $this->__hidden__)) {
                    $r[$k] = $v;
                }
            }
        } else {
            foreach ($vars as $k => $v) {
                if (!starts_with($k, '__')) {
                    $r[$k] = $v;
                }
            }
        }

        return $r;
    }
}

Create a repository

Thanks to Laravel’s ability to inject services without a ServiceProvider, this step is relatively simple. The two things to note are the automatic binding of an entity manager by Laravel (the Doctrine plugin already created one for us) and how the EntityRepository constructor is called within our constructor.


namespace App\Repositories;

use App\Models\User;
use Doctrine\ORM\EntityManager;
use Doctrine\ORM\EntityRepository;

class UserRepository extends EntityRepository
{
    /**
     * @var EntityManager
     */
    private $em;

    /**
     * UserRepository constructor.
     * @param EntityManager $em
     */
    public function __construct(EntityManager $em)
    {
        $this->em = $em;
        parent::__construct($em, $em->getClassMetadata(User::class));
    }
}

Creating a service

As before, because Laravel knows how to inject an entity manager it knows how to construct a Repository object, so we can create a service-level API object without providing a ServiceProvider to instantiate it.


namespace App\Services;

use App\Repositories\UserRepository;

class UserService
{
    /**
     * @var UserRepository
     */
    private $userRepository;

    /**
     * UserService constructor.
     * @param UserRepository $userRepository
     */
    public function __construct(UserRepository $userRepository)
    {
        $this->userRepository = $userRepository;
    }

    public function findAll()
    {
        return $this->userRepository->findAll();
    }
}

Laravel & Doctrine Tutorial

Doctrine is a widely used independent (not affiliated with a framework) ORM that uses the entity manager pattern.

Why does it exist?

Doctrine uses the entity manager pattern, which makes it a bit odd in the PHP community: most framework ORM tools use the active record pattern. The entity manager pattern splits the database communication layer from the domain layer. Objects are fetched and stored through an entity manager, which controls their access and storage rather than the models themselves. The entity manager is usually given a description (e.g. YAML) of the database and a model that represents the data in the application. In this way the model becomes an abstraction of the database in the application, with the entity manager controlling how and when the data in the model is fetched or persisted.

What is an ORM?

An ORM is a database (or, generically, a data) abstraction in an application that exposes the storage and management of the data in a developer-friendly way. Database tables map to classes and columns to variable declarations. In some programming languages updating a variable automatically updates the matching database record; in others a method must be called to save the data.

Active Record vs Entity Manager

The entity manager pattern is the dominant strategy for accessing data in Java. This quickly turns any talk of Doctrine into an argument over Java vs PHP rather than Active Record vs Entity Manager. Picking a fight over Java vs PHP is rather against the point of what we do, and most of the prominent voices in the PHP community agree. Both languages offer advantages, and each can learn something from the other; in most cases they do. Anyway, back to the database thing. Active Record combines the data representation and the database access. Database operations are done via the models rather than a separate system like an entity manager. Creating or updating a model invokes some database code.
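As a rough sketch of the active record side, in illustrative Python rather than any real ORM (the class and method names are mine): the model itself carries the database responsibility.

```python
class ActiveRecordUser:
    """A model that carries its own persistence logic, active-record style."""

    table = "users"

    def __init__(self, name, email):
        self.name = name
        self.email = email

    def insert_sql(self):
        # A real implementation would hold a connection and use a
        # parameterised query; building the statement here just shows
        # that the database logic lives on the model itself.
        return (f"INSERT INTO {self.table} (name, email) "
                f"VALUES ('{self.name}', '{self.email}')")


user = ActiveRecordUser("frits", "frits@example.com")
print(user.insert_sql())
```

Every instance knows how to save itself, which is convenient but couples the domain object to the database.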

Separate Database from Domain Code

Separating the database access from the domain logic makes the domain logic much faster and lighter. In larger systems that deal with a variety of use cases, the domain logic will not always be driven by an underlying database, e.g. serialising data to be sent via a socket. Tying the database to the domain logic in that context doesn’t make sense and makes creating, destroying and administrating objects much more expensive. In the entity manager pattern it is very cheap to create 10,000 models and persist or manipulate them later.

Laravel and Doctrine

There are a few solutions out there for tying Doctrine into Laravel. We are going to look at “atrauzzi/laravel-doctrine”.

Add the following the require section in composer.json:

    "atrauzzi/laravel-doctrine": "dev-master",
    "doctrine/migrations": "dev-master",

Run this command:

php artisan vendor:publish --provider="Atrauzzi\LaravelDoctrine\ServiceProvider" --tag="config"

In the config folder there should be a new Doctrine config file called doctrine.php. Uncomment the mysql section under database connections. Doctrine does not support the mysql driver and Laravel does not support the mysqli driver, so we need to keep these separate for Laravel and Doctrine. Doctrine provides a few ways to add the mapping information for models; for now we are picking the simplest one and adding a static loadMetadata function to the model.

namespace App\Models;

//use Doctrine\ORM\Mapping as ORM;
use Doctrine\ORM\Mapping\ClassMetadata;

/**
 * @Entity
 * @Table(name="users")
 */
class Users
{
    /**
     * @Id
     * @GeneratedValue
     * @Column(type="integer")
     */
    protected $id;

    /**
     * @Column(type="string", unique=true)
     */
    protected $email;

    /**
     * @return mixed
     */
    public function getId()
    {
        return $this->id;
    }

    /**
     * @param mixed $id
     */
    public function setId($id)
    {
        $this->id = $id;
    }

    /**
     * @return mixed
     */
    public function getEmail()
    {
        return $this->email;
    }

    /**
     * @param mixed $email
     */
    public function setEmail($email)
    {
        $this->email = $email;
    }

    public static function loadMetadata(ClassMetadata $metadata)
    {
        $metadata->setPrimaryTable([
            'name' => 'users'
        ]);

        $metadata->mapField([
            'id' => true,
            'fieldName' => 'id',
            'type' => 'integer'
        ]);

        $metadata->mapField([
            'fieldName' => 'email',
            'type' => 'string'
        ]);
    }
}

In the routes file replace the default Laravel welcome route with the following:

Route::get('/', function (\Illuminate\Contracts\Foundation\Application $app) {

    $em = $app->make('Doctrine\ORM\EntityManager');
    $user = $em->find('App\Models\Users', 1);

    return view('welcome');
});

Why I likely won’t use Doctrine

I don’t really see myself using Doctrine. Most frameworks chose the active record pattern for a reason; the entity manager pattern does not make that much sense in PHP. PHP usually deals with much less data at a time than something like Java and is not constrained by the same memory environment. In the case of PHP, the ease of use of the active record pattern wins, for me, over the system benefits of an entity manager. As for the serialisation benefits mentioned above: because PHP is a dynamically typed language, serialising models on the fly is very simple compared to statically typed languages. I feel the PSR standards will likely go the way of the active record pattern when they get to ratifying database access.

Docker Tutorial

Docker is quickly becoming a new paradigm in software development, so I became curious to find out why it is so special.

Why does it exist?

Originally software came in small self-contained parcels, with no worries about setting up complicated servers or processes to support applications. With the advent of the internet and an interconnected world, that all changed. A business had to buy hardware, and a system administrator spent days setting it up for development or production, with no easy path to scalability. Fast forward a bit and we had more powerful hardware, capable of running smaller virtual computers, allowing crude and inefficient scalability. There was no supporting infrastructure or software tooling, but we could copy virtual computers across hardware and across the internet. Hardware vendors caught up to the new trend and created products that made virtualisation much more viable and efficient, but this approach was never really going to be the best solution. We needed a better way. Linux namespaces give us the scalability of a fully-fledged virtual machine but with much less resource usage: a process runs in Linux in a sandboxed environment with a virtual file-system and access to the underlying hardware through a lightweight wrapper. In this new paradigm we needed a way to replace the disk images of virtual machines and their management (execution, storage and distribution); we call one of these tools Docker.

So what is Docker?

This part of the story is about Linux namespaces, and we start it with something called a process identifier, or PID. When you open an application on your computer it is assigned a number so Windows or Linux can keep track of it; this number is called a PID. Linux stores its list of PIDs in a tree structure similar to a folder hierarchy on a desktop PC. When Linux starts up it creates PID 1, and all other processes are placed under it. The problem is that standard permissions allow processes to access and inspect the tree. Linux namespaces create little virtual trees inside the bigger tree that cannot access or know about processes outside of themselves. This enhances the security of Linux and creates the isolation needed for Docker to function. Linux provides the same concept for network and disk IO. Docker starts an image using these namespaces to isolate the executable within it from the rest of the system. Docker is much more though, because it deals with the creation and distribution of these images too: it uses Git-like functionality to build new images from base images and provides a way of exporting and importing these images into your local repository. I can pull a Linux image from the Docker repository, run a few commands that enable that image to run a Laravel project, save my changes to a new image and publish it back to a Docker repository. Someone else can then download my image and be sure it runs exactly the way it would on my host, using the same Apache, the same PHP and the same version of Laravel.

How do I use Docker?

Docker is a command line tool; most Linux developers prefer using the command line anyway. A normal Docker workflow starts by searching the Docker repository for images that fit the needs of the user, either to run as-is or to extend. The command for searching is "docker search 'image'". To download an image from the repository you use the "docker pull 'image'" command, similar to Git. To execute a command in a Docker image use the "docker run 'image' 'command'" syntax. I suggest using the "-it" flags if you need to make the command interactive. Using the "-d" flag runs the Docker container (a running image) in daemon mode. The "docker run -it ubuntu /bin/bash" command opens up a terminal to the container that allows you to run commands like "apt-get install mysql-server". At this point you would want to save the changes you have made to the container. First we need to find the container ID, which we can do with the "docker ps" command that lists all running containers. To actually save the changes we commit them to a new image using the "docker commit 'container ID' 'new image name'" command. Now we have a brand new Docker image that we can reuse time and again.
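Put together, a session might look like this (the image and package names are only examples, and the commands assume a running Docker daemon):

```shell
# Find and fetch a base image
docker search ubuntu
docker pull ubuntu

# Open an interactive shell in a container and change it
docker run -it ubuntu /bin/bash
# (inside the container) apt-get update && apt-get install -y mysql-server

# Find the container ID, then save the changes as a new image
docker ps -a
docker commit <container ID> myname/mysql-server
```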

Docker Files

Now we come to the really useful part. I mentioned that you can run commands inside a Docker container by running its bash command and then saving the changes with "docker commit". Docker files automate this process by providing a syntax that is stored in a file, making quick revisions and automation scripts possible. A Docker file starts with a FROM tag, which specifies the base image the file is using. The MAINTAINER tag is normally next and provides a reference to the author of the image. A RUN tag allows you to run executables on the system like "apt-get" or "yum". It’s best to chain RUN tag commands using the && operator, because Docker creates a temporary image every time you execute a RUN tag command. WORKDIR is another useful tag; it sets the current active directory and is likely required by some less well designed applications to work. Some Docker images can be run without providing an explicit command ("docker run cassandra"); that’s because the image has an ENTRYPOINT tag, which specifies a default command to use when executing the image. And finally the ADD tag, which adds files from the host file-system to the Docker image.
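A Docker file using these tags might look like the following sketch (the base image, packages, paths and command are only examples):

```dockerfile
FROM ubuntu:16.04
MAINTAINER fritsstegmann

# Chain the RUN commands so Docker creates one intermediate image
RUN apt-get update \
    && apt-get install -y apache2 \
    && apt-get clean

# Set the active directory for the instructions that follow
WORKDIR /var/www

# Copy the project from the host file-system into the image
ADD . /var/www

# Default command used when the image is run without one
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
```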

Why Docker is so useful

Normally you would store a copy of the Docker file somewhere in the project and have a CI server build artefacts using it. The artefacts are deployable components that the DevOps and QAs can use. Here’s the scenario: a developer is asked to implement a new feature; they make some changes and push them up on a new branch. The CI server picks up the changes and builds a new Docker image. The QA pulls the new Docker image and runs it on their own desktop. They test the feature and find a bug. The bug is logged against that Docker build and communicated to the developer. The developer wants to see the bug for themselves, so they pull the image too and go through the same steps. They come across the bug, because they are able to replicate the exact same environment as the QA, and are able to fix it. They push up a new commit and a new image is created. It’s tested and it passes QA. The changes are merged to the master branch in Git and this fires off a production Docker image build. The DevOps team pull the image on the production server and simply execute the "docker run 'image'" command. This solves the problem that, even with the best of intentions, deployment environments have subtle but important differences. A good example is the difference in configuration between Apache 2.2 and Apache 2.4, or a developer using Windows or OS X while the production environment is a Linux server.

Docker Volumes

An important part of knowing Docker is knowing how to manage the data inside a container; for example, Docker does not persist file changes between containers. Docker has the concept of data volumes, similar to AWS’s EBS. Data volumes provide sharing of volumes between containers and persistence on the container file-system. When using Docker files, changes to files being added to an image force Docker to bypass the cache and redo every step to ensure consistency; changes to a data volume bypass this. In other words, when updating an image, file changes in a volume do not trigger a clean Docker build. Docker allows volumes to be created from the command line too, using "docker run -v 'volume' 'image' 'command'". Creating volumes from the command line has one very useful feature: they can mount the host file-system in a container, e.g. "docker run -v 'host dir':'image dir' 'image' 'command'". This allows you to update your project files in real time, reflecting changes immediately. The best way to persist data volumes between container instances is to create a data volume container by calling "docker create -v 'image dir' --name 'image name' 'base image'". The volumes can be mounted into new containers with "docker run -d --volumes-from 'volume name' 'image name'". Multiple run calls like this one can be made and the containers will share the same mounted directory. To back up a data volume use the following command: "docker run --volumes-from 'volume name' -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /'image dir'".
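As a concrete sketch of those volume commands (the directory and image names are examples, and a running Docker daemon is assumed):

```shell
# Mount a host directory into a container for live editing
docker run -v /home/me/project:/var/www my/image apache2ctl -D FOREGROUND

# Create a dedicated data volume container...
docker create -v /dbdata --name dbstore training/postgres

# ...and share its volume between any number of containers
docker run -d --volumes-from dbstore --name db1 training/postgres

# Back the volume up into the current directory
docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu \
    tar cvf /backup/backup.tar /dbdata
```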

Linking Containers

The first and most useful linking tool to learn is port mapping. Server containers normally expose a network port, which by default is not available to the host machine. By specifying the "-P" flag when running a container, the port the container is listening on is mapped to a high-value port on the host machine. With "-p 'host port':'container port'" the container port can be explicitly mapped to the host machine. Docker has a linking service as well; a common and good use case to demonstrate how this works is a web application communicating with a database server. Start a database server with "docker run --name db training/postgres". By explicitly setting the name of the container, the following example is much easier to follow. To run a new container while linking in the database we run: "docker run -d -P --name web --link db:db training/webapp python". By linking the database container to the web container, the web container can inspect the properties of the database container. The properties of the database container are exposed to the web container as environment variables that the web application can use to configure a connection back to the database server.
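A sketch of the database and web example above (image names as in the text; the exact environment variable names Docker injects depend on the linked container's exposed ports):

```shell
# Start the database with an explicit name
docker run -d --name db training/postgres

# Start the web app, linking the database container in as "db"
docker run -d -P --name web --link db:db training/webapp python app.py

# Or map container port 5000 to host port 8080 explicitly instead of -P
docker run -d -p 8080:5000 --link db:db training/webapp python app.py

# The web container sees the database via injected environment variables
docker exec web env | grep DB_
```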


Docker is an amazing tool for standardising the runtime environment of an application, allowing a more structured workflow for server-side applications and more robust testing and deployments.

BDD & Behat

BDD is an amazing way to test software. Behat is a PHP BDD framework that works well with Laravel.

History of testing

Testing has always been a part of software development, but only recently has it been formalised into a work methodology. TDD (test driven development) dictates that the developer writes the tests first, then the actual functionality. TDD suffers from two issues: 1.) it’s very difficult to get right; 2.) it’s very developer-centric, and developers usually only test the happy-case scenario. TDD for functional testing is difficult because it’s very code-orientated. A skilled QA (quality assurance tester) has a different skill set from a developer, and they should feel free to strengthen those core skills instead of developing programming skills.

Why BDD?

BDD was created by combining TDD, Domain Driven Design and object-orientated programming. It focuses on testing the core functionality of the product rather than testing every single part, and provides a means for both technical and business interests to be represented in the tests.

What I like most about BDD is that a QA does not have to write any code to write functioning (not just functional) tests. They rely on a set of pre-written pieces of code that are bound to plain English sentences. The sentences are strung together into a paragraph, and the paragraph represents a test. This approach makes it clear what the test is actually for; the test is self-documenting. The net result is that developers can concentrate on writing code and QAs can concentrate on writing tests.


Behat

Behat is a BDD framework for PHP that has excellent integration with Laravel. It’s installed via composer like any other modern PHP package and has an executable in the *vendor/bin* folder. To add Behat to the project put the following in the require-dev section of the composer file:

"behat/behat": "^3.0",
"behat/mink": "^1.6",
"behat/mink-extension": "^2.0",
"laracasts/behat-laravel-extension": "^1.0"

It’s necessary to install the Mink packages for browser (functional) testing; Mink is an extension package for Behat that allows browser or web testing. Once installed, execute

vendor/bin/behat --init

to create the features directory where the behat files are stored.

In the features directory create a “webpages.feature” file and copy in the following:

Feature: webpages
  In order to prove that Behat works as intended
  We want to test the home page for a phrase

  Scenario: Root Test
    When I am on the homepage

Create a “behat.yml” file in the project directory and copy in the following:

default:
  extensions:
    Laracasts\Behat:
      env_path: .env
    Behat\MinkExtension:
      default_session: laravel
      base_url: http://docrepo.lh
      laravel: ~

Alter the FeatureContext class definition to look like the following:

class FeatureContext extends MinkContext implements Context, SnippetAcceptingContext


Run

vendor/bin/behat --dl

to get a list of the commands you can use when creating new tests.

When you are done creating the tests, run the following

./vendor/bin/behat features/webpages.feature

to execute the tests. You should see the following output:

  In order to prove that Behat works as intended
  We want to test the home page for a phrase

  Scenario: Root Test         # features/webpages.feature:5
    When I am on the homepage # FeatureContext::iAmOnHomepage()

1 scenario (1 passed)
1 step (1 passed)
0m0.14s (29.27Mb)

Features file

The feature file can be called anything; I just used webpages as a way to specify what it tests. The feature file uses the Gherkin language, which defines a set of keywords to differentiate areas and actions. Gherkin was designed especially for describing system behaviour. It gives us the ability to remove logic from behaviour tests.

  • Feature: A high level description of the functionality; this ties in with the functional specification.
  • Scenario: A use case for how a user will use the functionality.
  • Given: Sets up the preconditions for the test, assigns values to variables, …
  • When: A user action, e.g. a user presses a button.
  • Then: Tests an assertion, in other words whether the test passed or failed.
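As a sketch of how these keywords structure a feature file, the following classifies each line of a scenario by its leading keyword. This is purely illustrative, not Behat's parser:

```python
# Minimal, illustrative classifier for Gherkin keywords; real Gherkin
# parsers (like the one inside Behat) are considerably richer.
KEYWORDS = ("Feature", "Scenario", "Given", "When", "Then", "And", "But")

def classify(line):
    """Return the Gherkin keyword starting a line, or 'text' otherwise."""
    word = line.strip().split(" ", 1)[0].rstrip(":")
    return word if word in KEYWORDS else "text"

feature = """\
Feature: Webpages
  Scenario: Root Test
    When I am on the homepage
    Then I should see "Hello"
"""

print([classify(l) for l in feature.splitlines()])
# ['Feature', 'Scenario', 'When', 'Then']
```

Indentation carries no meaning in Gherkin; it is the keyword that decides what role a line plays.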


In the FeatureContext PHP file create the following method

/**
 * @When I click on photography
 */
public function IClickOnPhotography()
{
    $this->getSession()->getPage()->find("css", "[href='/photography']")->click();
}

and add the following under the scenario

And I click on photography

Add the following to the FeatureContext file

/**
 * @Then I am on the photography page
 */
public function IAmOnThePhotographyPage()
{
    if ($this->getSession()->getPage()->has('css', '') == null) {
        throw new Exception(
            'We are not on the photography page'
        );
    }
}
And this in the scenario

Then I am on the photography page

When you run the test you should see

  In order to prove that Behat works as intended
  We want to test the home page for a phrase

  Scenario: Root Test                 # features/webpages.feature:5
    When I am on the homepage         # FeatureContext::iAmOnHomepage()
    And I click on photography        # FeatureContext::IClickOnPhotography()
    Then I am on the photography page # FeatureContext::IAmOnThePhotographyPage()

1 scenario (1 passed)
3 steps (3 passed)
0m0.14s (30.37Mb)


Selenium

Selenium is a Java service that can control the browser for proper functional tests in a production environment, similar to what a user would experience in the real world. It’s a jar file that can be executed from the command line via

java -jar selenium-standalone-*.jar

Behat has the ability to talk to Selenium on our behalf to run the tests. To configure Selenium testing we have to add the Mink Selenium driver to the require-dev section.

"behat/mink-selenium2-driver": "*"

and change the content of the behat.yml file to

default:
    extensions:
        Laracasts\Behat:
            env_path: .env
        Behat\MinkExtension:
            default_session: selenium2
            base_url: http://docrepo.lh
            selenium2: ~

Execute the test with

./vendor/bin/behat features/webpages.feature

You should see a browser pop up and complete the described actions.


I hope you can see the potential for creating quick and flexible tests, and for separating the testing and development of a software product.

Retrofit and Laravel Restful API

Okay, so I’m writing this for two reasons: firstly, someone asked me to, and secondly, I wanted some sort of analytics view on my phone and making a blog post out of my efforts seemed like a good idea.


I already store the page and the type of device that visited my site, so getting a count of all the human visits to my site is easy. I mention human visits because other computers can visit my site as well to try and do something with the information that I publish. Google is a good example of this: they know what search results to return because they use computers to collect the information on websites all across the internet, including my website. I don’t want Google’s visits to count towards the total visits for my articles, so I filter them out.
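The filtering idea can be sketched as a user-agent check. The marker substrings below are common crawler tokens and the function is my own illustration, not the code running on the site:

```python
# Substrings that commonly appear in crawler user-agent headers.
BOT_MARKERS = ("googlebot", "bingbot", "crawler", "spider", "bot")

def is_human_visit(user_agent):
    """Treat a visit as human unless its user agent looks like a crawler."""
    ua = user_agent.lower()
    return not any(marker in ua for marker in BOT_MARKERS)

visits = [
    "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36",
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
]
print(sum(is_human_visit(ua) for ua in visits))  # 1
```

Well-behaved crawlers identify themselves this way, which is enough for rough analytics; it won't catch bots that spoof a browser user agent.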

Firstly, in Laravel we need to create a controller for collecting the data and serialising it to something an Android client can consume (in this case JSON).

class ArticleController extends Controller
{
    public function index()
    {
        $articles = CmsArticle::all();

        /** @var CmsArticle $article */
        foreach ($articles as &$article) {
            $article->total_visits = $article->totalVisits();
            $article->daily_visits = $article->dailyVisits();
        }

        return response()->json($articles);
    }
}

Then we add a route to the controller so we can expose it on a URL.

Route::group(array('namespace' => 'Api', 'prefix' => 'api'), function () {
    Route::resource('articles', 'ArticleController');
});


For the Android application we are going to use the RecyclerView and CardView support libraries to render the data on the screen. RecyclerView is much like ListView but much more memory and processor efficient; for the Android developers reading this, it standardises the view holder pattern. The CardView library is an attempt (a successful one) to help bring modern Android design to older phones.

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // the layout and view IDs here stand in for the project's own resources
        setContentView(R.layout.activity_main);

        mRecyclerView = (RecyclerView) findViewById(R.id.my_recycler_view);

        // use this setting to improve performance if you know that changes
        // in content do not change the layout size of the RecyclerView
        mRecyclerView.setHasFixedSize(true);

        // use a linear layout manager
        mLayoutManager = new LinearLayoutManager(this);
        mRecyclerView.setLayoutManager(mLayoutManager);

        // specify an adapter (see also next example)
        mAdapter = new MyAdapter(myDataSet);
        mRecyclerView.setAdapter(mAdapter);
    }



On to the Android part. To import the libraries needed to build the Android application we add the following to the dependencies list in Gradle.

    compile ''
    compile 'com.squareup.retrofit:retrofit:1.9.0'
    compile 'com.squareup.okhttp:okhttp-urlconnection:2.0.0'
    compile 'com.squareup.okhttp:okhttp:2.0.0'
    compile ''
    compile ''


The first thing we need here is an object that we can use across our application to hold the API data after we receive it.

public class Article {
    private String title;
    private Integer total_visits;

    public Article(String title, Integer total_visits) {
        this.title = title;
        this.total_visits = total_visits;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public Integer getTotal_visits() {
        return total_visits;
    }

    public void setTotal_visits(Integer totalViews) {
        this.total_visits = totalViews;
    }
}
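Retrofit maps the JSON fields onto object fields by name (via Gson by default in Retrofit 1.x), which is why the Java field is called total_visits rather than totalVisits. The idea, sketched in Python with an illustrative payload in the shape the Laravel controller produces:

```python
import json

class Article:
    """Mirror of the JSON shape the API returns: field names match keys."""
    def __init__(self, title, total_visits):
        self.title = title
        self.total_visits = total_visits

# Illustrative payload, shaped like response()->json($articles) output.
payload = '[{"title": "Behat & Laravel", "total_visits": 42}]'

# Deserialise each JSON object into an Article by matching key names,
# the same name-based mapping a JSON binder performs automatically.
articles = [Article(item["title"], item["total_visits"])
            for item in json.loads(payload)]
print(articles[0].title, articles[0].total_visits)  # Behat & Laravel 42
```

Keeping the field names identical on both sides means no mapping configuration is needed anywhere.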

Retrofit requires an API definition defined by a Java interface; here is ours:

import java.util.List;

import retrofit.Callback;
import retrofit.http.GET;
import retrofit.http.Headers;

public interface WebsiteService {

    @Headers("User-Agent: android-api-client")
    @GET("/api/articles")
    void list(Callback<List<Article>> cb);
}

And lastly we need to call the API from Android. Android doesn’t allow web requests on the main thread, so we have to use a callback.

    private void fetchArticles() {
        RestAdapter restAdapter = new RestAdapter.Builder()
                .setEndpoint("http://docrepo.lh") // the site's base URL
                .build();

        WebsiteService service = restAdapter.create(WebsiteService.class);

        service.list(new Callback<List<Article>>() {
            @Override
            public void success(List<Article> articles, Response response) {
                myDataSet = articles;
                Log.i("MA", "Successful API Call");
                for (Article article : articles) {
                    Log.i("MA", article.getTitle());
                }
                mRecyclerView.setAdapter(new MyAdapter(myDataSet));
                mAdapter.notifyDataSetChanged();
            }

            @Override
            public void failure(RetrofitError error) {
                Log.e("MA", error.getMessage());
            }
        });
    }
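The success/failure callback shape is not Android specific: the caller hands over two handlers and the client invokes exactly one of them. A minimal Python sketch of the same pattern, with illustrative names:

```python
# Minimal sketch of the success/failure callback shape Retrofit uses:
# the caller supplies two handlers and exactly one of them is invoked.
def fetch_articles(fetch, on_success, on_failure):
    try:
        on_success(fetch())
    except Exception as error:
        on_failure(error)

results = []
fetch_articles(lambda: ["Behat & Laravel"],   # stand-in for the HTTP call
               on_success=results.extend,
               on_failure=lambda err: results.append(f"error: {err}"))
print(results)  # ['Behat & Laravel']
```

The benefit on Android is that the slow network call runs off the main thread, and only the handler touches the UI.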

The final product is an Android application that shows the amount of page views for each blog post using material design guidelines.

A Good Functional Specification

Functional specifications are the bane and salvation of every developer who has come across them. They are a detailed document on how users or 3rd party actors will use a system. Developers loathe these documents when they read like a bad novel: disorganised and difficult to separate into different use cases or functional parts. A good functional specification is a great reference on how the user will experience the application. It’s a document that helps keep all the stakeholders on track and communicating effectively. It’s a solid foothold for creating technical specifications, documentation and testing guides.

Living Document

Most people assume that a functional specification gets written at the beginning of a project and never changes afterwards. A good functional specification changes during the course of a project. The assumptions and conditions at the start of a project never stay the same until the end, and the specification should reflect that, updating and keeping records of the updates as the project matures. This way it becomes a great document for reflection at the end of the project.


Introduction

The introduction of a functional specification should start by stating the problem that the document addresses; it’s the story of the project commissioner. The introduction should state who all the key stakeholders in the project are. This helps any future work or investigation to pinpoint reliable sources of information. The introduction should also list and describe the terms being used in the project and document, for clear communication.

Out of Scope

This is one of the most important parts of a functional specification because no one thinks about what’s not there. It’s the best place to clear up assumptions, especially for the project commissioner. This section sets the bounds of the project; without it stakeholders will likely drift from the project requirements and start implementing undocumented features that push on the project deadlines and are sometimes unwanted.

Why Use Cases

Use cases are in my opinion the best way to explain how the end user will experience the application. They are in story form, making them easy to write and understand. They provide a neat and clear way to document the features of a system while providing a framework that keeps the writer from creating a novel. They also provide an easy way to write up technical specifications and system documentation: a use case can be copied and pasted into a technical document, and the technical documenter simply writes a paragraph on how the system will accomplish the use case.

User Personalities

If the system has different user experiences and user roles, it’s a good idea to create user personalities, or personas (as they are called in UX). A persona is a good way of getting people to think in terms of the end user experience. They make excellent references in conversation when trying to express an idea. Personas keep developers focused on providing relevant security and user permissions in a multi-tenant system. Designers use them to create great user experiences by imagining the persona using the system as a real user would. It’s advisable to create a separate section before the use cases start to introduce all the personas to the stakeholders; this makes them referenceable in the use cases as well.

User Interaction Sections

It’s in my opinion a good idea to split the functional specification into feature areas, e.g. on an administration system for a blog, list the headers as: blog posts, blog post categories, blog post comments, user administration, etc. This makes it easy for a developer, end user, software architect or documenter to address only the part of the system that they care about at a particular point. Under each of these sections begin listing the use cases. Pay attention to what is not there: if a persona does not have access to an area, state the fact clearly; do not ignore it.


Some parts of the document will only be addressed to a particular stakeholder. The best way to do this is by using coloured text areas and explaining their use in the introduction of the document.

Signing Page

If you require a signing page it should be kept separate from the rest of the document. It should have a clause stating that previous functional specifications are voided by signing this document. This way, every time the functional specification is updated as the project progresses, all the parties are protected by the agreement and the document becomes malleable.

Laravel & Spring Session

I really like both Laravel and Spring, just one problem, Spring HTTP Sessions and Laravel Sessions do not play well together. This blog post details how I made these two technologies work together.

The first problem was that these two technologies store their data under different keys in Redis. The first thing I had to do was extend the Laravel session to use a new Redis session handler, which I called sredis in the config.

In the constructor I could define my own session-key prefix and my own expiry time.

The second problem was that Spring stores its session as a hash while Laravel stores its session as a string key-value pair. Laravel serialises and deserialises the contents of the session in the functions that call the methods below. The fix was to change the contents back to objects, and vice versa, so that we can store them as a Redis hash.
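The difference between the two storage formats can be sketched against a fake Redis; the key names below are illustrative, not the exact keys either framework uses:

```python
import json

# A stand-in for Redis: a SET stores one string value under a key,
# while an HSET stores a field map (hash) under a key.
store = {}

session = {"user_id": 7, "cart": ["book"]}

# Laravel-style: the whole session serialised into a single string value.
store["laravel:sess:abc"] = json.dumps(session)

# Spring-style: one hash per session, one field per session attribute,
# each attribute serialised individually.
store["spring:sessions:abc"] = {k: json.dumps(v) for k, v in session.items()}

print(type(store["laravel:sess:abc"]).__name__)     # str
print(type(store["spring:sessions:abc"]).__name__)  # dict
```

Bridging the two therefore means unpacking Laravel's single serialised string into per-field values before writing, and re-assembling it after reading, which is what the handler below does with PHP's serialize/unserialize.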

The third problem was that Redis encoded the session cookie, which at the time of writing seems impossible to get around, so for now I have put this project to rest.

Have a nice day 🙂

namespace App\fstegmann\srsession;

use SessionHandlerInterface;
use Illuminate\Support\Facades\Redis;

class SpringRedisSessionHandler implements SessionHandlerInterface
{
    private $prefix = null;
    private $expire = null;

    private $redis = null;

    function __construct()
    {
        if ($this->redis == null) {
            $this->redis = Redis::connection();
        }
        $this->prefix = '';
        $this->expire = 1800;
    }

    public function read($sessionId)
    {
        if ($this->redis == null) {
            $this->redis = Redis::connection();
        }

        $data = $this->redis->hgetall($this->prefix . $sessionId);
        foreach ($data as $key => &$d) {
            $d = @unserialize($d);
        }

        return $data;
    }

    public function write($sessionId, $data)
    {
        if ($this->redis == null) {
            $this->redis = Redis::connection();
        }

        $data = @unserialize($data);

        foreach ($data as $key => &$d) {
            $d = serialize($d);
        }
        $this->redis->hmset($this->prefix . $sessionId, $data);
    }

    public function destroy($sessionId)
    {
        if ($this->redis == null) {
            $this->redis = Redis::connection();
        }

        $this->redis->del($this->prefix . $sessionId);
    }

    //Unused Session Handler Methods
    public function gc($lifetime) {}
    public function open($savePath, $sessionName) {}
    public function close() {}
}
namespace App\fstegmann\srsession;

use Illuminate\Support\ServiceProvider;
use Illuminate\Support\Facades\Session;

class SpringRedisSessionServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap the application services.
     * @return void
     */
    public function boot()
    {
        Session::extend('sredis', function ($app) {
            return new SpringRedisSessionHandler;
        });
    }

    /**
     * Register the application services.
     * @return void
     */
    public function register() {}
}