Docker & Golang

I wrote an article about Swift with Docker in which I made an issue of using Swift with Docker versus Golang with Docker, so I thought I’d write up something about using Golang with Docker to illustrate my point. To start with, I don’t need a base Golang image: the compiled application works with Docker out of the box, without adding any libraries, so there’s no need for a base Docker image that contains a runtime.

My Dockerfile for the application

FROM alpine:latest
ADD main /app
ENTRYPOINT ["/app"]

Main file

package main

func printSomething() {
  print("test")
}

func main() {
  printSomething()
}

Jenkins build script

go get -d
go build main.go
docker build -t fritsstegmann/golang .
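
One caveat worth noting (this isn’t part of my original build script): if the compiled binary ends up dynamically linked against glibc, for example when the net package pulls in cgo, it won’t start on Alpine’s musl libc. Disabling cgo forces a fully static binary:

CGO_ENABLED=0 go build main.go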

The result of this process is an image that’s about 6 MB. My Swift application’s Docker image is more than 1.1 GB.

Docker & Swift Lang

Swift is a relatively new language from Apple that replaces Objective-C for iOS and macOS development. Apple open sourced the language a while ago and it now has official support on Ubuntu (using it on other Linux distributions is a bit troublesome). The issue on other Linux systems is the dynamic linking of libraries. I’ve tried using Swift’s ability to statically link libraries: it succeeded in including the Swift runtime libraries but failed to include other lower-level runtime libraries. I had to create a Docker base image, based on Ubuntu 16.04, that includes the Swift tools for compiling the application. To run the Swift application in Docker we have to compile the application in the container it will run in.
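
For reference, the static-linking attempt mentioned above was along these lines (the exact flag is from memory and varies by Swift version, so treat it as a hint rather than gospel):

swift build -Xswiftc -static-stdlib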

My Dockerfile for the base Swift image

FROM ubuntu:16.04

RUN apt-get update && apt-get install -y wget clang libstdc++6 libicu-dev libbsd-dev libxml2 libcurl3 \
        && cd /root \
        && wget https://swift.org/builds/swift-3.0.1-release/ubuntu1604/swift-3.0.1-RELEASE/swift-3.0.1-RELEASE-ubuntu16.04.tar.gz \
        && tar xfzv swift-3.0.1-RELEASE-ubuntu16.04.tar.gz \
        && cd /usr/local/bin \
        && ln -s /root/swift-3.0.1-RELEASE-ubuntu16.04/usr/bin/swift \
        && ln -s /root/swift-3.0.1-RELEASE-ubuntu16.04/usr/bin/swiftc

My Dockerfile for the Swift application

FROM fritsstegmann/swift

RUN mkdir /app
ADD . /app/

WORKDIR /app/
RUN swift build --configuration release

ENTRYPOINT [".build/release/app"]

Package file

import PackageDescription

let package = Package(
    name: "app",
    targets: []
)

Main file

func sayHello() {
  print("Hello, World!")
}

sayHello()

My gripe with Swift is that the linking issue makes distributing an application across different Linux distributions difficult. That makes Swift a less attractive option than Golang in a container-crazed world. I’d have preferred to compile the application with Jenkins and then copy only the binaries into the application container.

Laravel & Doctrine Tutorial 2

I wrote an article a while back on Laravel and Doctrine, since then a new library for Laravel and Doctrine integration has appeared and it’s pretty good.

Installation

– To install the Laravel Doctrine package for Laravel 5.2 add the following to composer under require:

laravel-doctrine/orm:1.1.*

– Add

LaravelDoctrine\ORM\DoctrineServiceProvider::class,

to

config/app.php

under the service providers section.

– Run this command to publish the config settings for Doctrine:

php artisan vendor:publish --tag="config"

First Model

I prefer using YAML mappings for my models. By default Laravel Doctrine will look for them in the app directory, which I don’t like. I put my models in a subdirectory called Models, and in that directory a subdirectory called mappings to store the YAML files. To tell Doctrine to look for these files in my custom path I have to update the Doctrine configuration with this:

    'paths'      => [
        base_path('app'),
        base_path('app') . '/Models/mappings',
    ],

In the YAML mappings directory create the following file

App.Models.User.dcm.yml

and add the following content:

App\Models\User:
  type: entity
  table: users
  id:
    id:
      type: integer
      generator:
        strategy: AUTO
  fields:
    name:
      type: text
    email:
      type: text
    password:
      type: text
    rememberToken:
      type: text

In the Models folder create this file

User.php

and add the following content

namespace App\Models;
use JsonSerializable;

class User implements JsonSerializable
{
    use JsonSerializer;
    
    public $id;
    public $name;
    public $email;
    public $password;
    public $rememberToken;

    private $__hidden__ = ['password', 'rememberToken'];
    
    /**
     * User constructor.
     * @param $id
     * @param $name
     * @param $email
     * @param $password
     * @param $rememberToken
     */
    public function __construct($id, $name, $email, $password, $rememberToken)
    {
        $this->id = $id;
        $this->name = $name;
        $this->email = $email;
        $this->password = $password;
        $this->rememberToken = $rememberToken;
    }
}

Two things stand out about this file:

  • The fields are public
  • The JsonSerializer

The fields are public because I feel that the getter-setter pattern isn’t relevant anymore, even in languages like Java. In Java the getter returns, in many cases, a reference to a private variable rather than a copy, breaking the whole reason for the pattern. The JsonSerializer is a trait I wrote to exclude some fields from being returned by, for example, an API. It excludes the Doctrine proxy custom fields as well. In this case the password and remember-me tokens are excluded, similar to Eloquent.

namespace App\Models;

trait JsonSerializer
{
    /**
     * @param  int    $options
     * @return string
     */
    public function toJson($options = 0)
    {
        return json_encode($this->jsonSerialize(), $options);
    }

    /**
     * Specify data which should be serialized to JSON
     * @link http://php.net/manual/en/jsonserializable.jsonserialize.php
     * @return mixed data which can be serialized by json_encode,
     *               which is a value of any type other than a resource.
     * @since 5.4.0
     */
    public function jsonSerialize()
    {
        $vars = get_object_vars($this);

        $r = [];
        if (isset($this->__hidden__)) {
            foreach($vars as $k => $v) {
                if (!starts_with($k, '__') && !in_array($k, $this->__hidden__)) {
                    $r[$k] = $v;
                }
            }
        } else {
            foreach($vars as $k => $v) {
                if (!starts_with($k, '__')) {
                    $r[$k] = $v;
                }
            }
        }

        return $r;
    }
}
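
A quick sketch of the trait in action (the values are made up): passing the model to json_encode triggers jsonSerialize, which drops the hidden fields and anything prefixed with __.

$user = new User(1, 'Frits', 'frits@example.com', 'secret', 'token123');

// password, rememberToken and __hidden__ are excluded from the output
echo json_encode($user);
// {"id":1,"name":"Frits","email":"frits@example.com"}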

Create a repository

Thanks to Laravel’s ability to inject services without providing a ServiceProvider this step is relatively simple. The two things to note are the automatic binding of an entity manager by Laravel (the Doctrine plugin already created one for us) and how the EntityRepository constructor is called within our constructor.

<?php

namespace App\Repositories;

use App\Models\User;
use Doctrine\ORM\EntityManager;
use Doctrine\ORM\EntityRepository;

class UserRepository extends EntityRepository
{
    /**
     * @var EntityManager
     */
    private $em;

    /**
     * UserRepository constructor.
     * @param EntityManager $em
     */
    public function __construct(EntityManager $em)
    {
        $this->em = $em;
        parent::__construct($em, $em->getClassMetadata(User::class));
    }
}

Creating a service

As before, because Laravel knows how to inject an entity manager it also knows how to construct the Repository object, so we can create a service-level API object without providing a ServiceProvider to instantiate it.

<?php

namespace App\Services;


use App\Repositories\UserRepository;

class UserService
{
    /**
     * @var UserRepository
     */
    private $userRepository;

    /**
     * UserService constructor.
     * @param UserRepository $userRepository
     */
    public function __construct(UserRepository $userRepository)
    {
        $this->userRepository = $userRepository;
    }

    public function findAll()
    {
        return $this->userRepository->findAll();
    }
}
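
To illustrate where this lands (the controller below is my own sketch, not code from the project), Laravel can now inject the service straight into a controller method:

<?php

namespace App\Http\Controllers;

use App\Services\UserService;

class UserController extends Controller
{
    // Laravel resolves UserService -> UserRepository -> EntityManager
    public function index(UserService $userService)
    {
        return response()->json($userService->findAll());
    }
}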

Laravel & Doctrine Tutorial

Doctrine is a widely used, framework-independent ORM that uses the entity manager pattern.

Why does it exist?

Doctrine uses the entity manager pattern, which makes it a bit odd in the PHP community: most framework ORM tools use the active record pattern. The entity manager pattern splits the database communication layer from the domain layer. Objects are fetched and stored through an entity manager, which controls their access and storage rather than the models themselves. The entity manager is usually given a description (e.g. YAML) of the database and a model that represents the data in the application. In this way the model becomes an abstraction of the database in the application, with the entity manager controlling how and when the data in the model is fetched or persisted.
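
As a minimal sketch of that split (persist and flush are Doctrine’s real API; the entity and its setter are placeholders), the model stays a plain object while an EntityManager instance, $em, does all the database work:

$user = new User();                     // plain domain object, no database code
$user->setEmail('someone@example.com');

$em->persist($user);                    // the entity manager queues the object for storage
$em->flush();                           // all pending changes hit the database here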

What is an ORM?

An ORM is a database (or, generically, a data) abstraction in an application that exposes the storage and management of the data in a developer-friendly way. Database tables map to classes and columns to variable declarations. In some programming languages updating a variable automatically updates the matching database record; in others a method must be called to save the data.

Active Record vs Entity Manager

The entity manager pattern is the dominant strategy for accessing data in Java, which quickly turns any talk of Doctrine into an argument over Java vs PHP rather than Active Record vs Entity Manager. Picking a fight over Java vs PHP is rather beside the point, and most of the leading voices in the PHP community agree. Both languages offer advantages, and each can learn something from the other, and in most cases does. Anyway, back to the database thing. Active Record combines the data representation and the database access: database operations are done via the models rather than through a separate system like an entity manager, and creating or updating a model invokes some database code.

Separate Database from Domain Code

Separating the database access from the domain logic makes the domain logic much faster and lighter. In larger systems that deal with a variety of use cases the domain logic will not always be driven by an underlying database, e.g. serialising data to be sent over a socket. Tying the database to the domain logic in that context makes no sense, and it makes creating, destroying and administrating objects much more expensive. In the entity manager pattern it is very cheap to create 10,000 models and persist or manipulate them later.

Laravel and Doctrine

There are a few solutions out there for tying Doctrine into Laravel. We are going to look at “atrauzzi/laravel-doctrine”.

Add the following to the require section in composer.json:

    "atrauzzi/laravel-doctrine": "dev-master",
    "doctrine/migrations": "dev-master",

Run this command:

php artisan vendor:publish --provider="Atrauzzi\LaravelDoctrine\ServiceProvider" --tag="config"

In the config folder there should be a new Doctrine config file called doctrine.php. Uncomment the mysql section under database connections. Doctrine does not support the mysql driver and Laravel does not support the mysqli driver, so we need to keep these settings separate for Laravel and Doctrine. Doctrine provides a few ways to add the mapping information for models. For now we are picking the simplest one and adding a static loadMetadata function to the model.

namespace App\Models;

//use Doctrine\ORM\Mapping as ORM;
use Doctrine\ORM\Mapping\ClassMetadata;

/**
 * @Entity
 * @Table(name="users")
 */
class Users
{
    /**
     * @Id
     * @GeneratedValue
     * @Column(type="integer")
     */
    protected $id;

    /**
     * @Column(type="string", unique=true)
     */
    protected $email;

    /**
     * @return mixed
     */
    public function getId()
    {
        return $this->id;
    }

    /**
     * @param mixed $id
     */
    public function setId($id)
    {
        $this->id = $id;
    }

    /**
     * @return mixed
     */
    public function getEmail()
    {
        return $this->email;
    }

    /**
     * @param mixed $email
     */
    public function setEmail($email)
    {
        $this->email = $email;
    }

    public static function loadMetadata(ClassMetadata $metadata)
    {
        $metadata->setPrimaryTable(array(
            'name' => 'users'
        ));

        $metadata->mapField(array(
            'id' => true,
            'fieldName' => 'id',
            'type' => 'integer'
        ));

        $metadata->mapField(array(
            'fieldName' => 'email',
            'type' => 'string'
        ));
    }
}

In the routes file, add the following to the default Laravel welcome route.

Route::get('/', function (\Illuminate\Contracts\Foundation\Application $app) {

    $em = $app->make('Doctrine\ORM\EntityManager');
    $user = $em->find('App\Models\Users', 1);
    print_r($user);

    return view('welcome');
});

Why I likely won’t use Doctrine

I don’t really see myself using Doctrine. Most frameworks chose the active record pattern for a reason: the entity manager pattern does not make that much sense in PHP. PHP usually deals with much less data at a time than something like Java and is not constrained by the same memory environment. In PHP, for me, the ease of use of the active record pattern wins over the systems benefits of an entity manager. As for the serialisation benefits mentioned above: because PHP is a dynamically typed language, serialising models on the fly is very simple compared to statically typed languages. I suspect the PSR standards will go the way of the active record pattern when they get around to ratifying database access.

Docker Tutorial

Docker is quickly becoming a new paradigm in software development, so I became curious to find out why it is so special.

Why does it exist?

Originally software came in small self-contained parcels, with no worries about setting up complicated servers or processes to support applications. With the advent of the internet and an interconnected world, that all changed. A business had to buy hardware, and a system administrator spent days setting it up for development or production, with no easy path to scalability. Fast forward a bit and we had more powerful hardware, capable of running smaller virtual computers, allowing crude and inefficient scalability. There was no supporting infrastructure or software tooling, but we could copy virtual computers across hardware and across the internet. Hardware vendors caught up with the trend and created products that made virtualisation much more viable and efficient, but this approach was never really going to be the best solution. We needed a better way. Linux namespaces give us the scalability of a fully-fledged virtual machine but with much less resource usage: a process runs in a sandboxed environment with a virtual file-system and access to the underlying hardware through a lightweight wrapper. In this new paradigm we needed a way to replace the disk images of virtual machines and their management (execution, storage and distribution); Docker is one of those tools.

So what is Docker?

This part of the story is about Linux namespaces, and it starts with something called a process identifier, or PID. When you open an application on your computer it is assigned a number so Windows or Linux can keep track of it; this number is called a PID. Linux stores its list of PIDs in a tree structure similar to a folder hierarchy on a desktop PC. When Linux starts up it creates PID 1, and all other processes are placed under it. The problem is that standard permissions allow processes to access and inspect the tree. Linux namespaces create little virtual trees inside the bigger tree that cannot access or know about processes outside themselves. This enhances the security of Linux and creates the isolation needed for Docker to function. Linux provides the same concept for network and disk IO. Docker starts an image using these namespaces to isolate the executable within it from the rest of the system. Docker is much more than that, though, because it also deals with the creation and distribution of these images: it uses Git-like functionality to build new images from base images and provides a way of exporting and importing these images into your local repository. I can pull a Linux image from the Docker repository, run a few commands that enable that image to run a Laravel project, save my changes to a new image and publish it back to a Docker repository. Someone else can then download my image and be sure it runs exactly the way it would on my host, using the same Apache, the same PHP and the same version of Laravel.

How do I use Docker?

Docker is a command line tool; most Linux developers prefer using the command line anyway. A normal Docker workflow starts by searching the Docker repository for images that fit the needs of the user, either to run as is or to extend. The command for searching Docker is “docker search ‘image’”. To download an image from the repository you use the “docker pull ‘image’” command, similar to Git. To execute a command in a Docker image use the “docker run ‘image’ ‘command’“ syntax. I suggest using the “-it” flags if you need to make the command interactive, while the “-d” flag runs the Docker container (a running image) in daemon mode. Using the “docker run -it ubuntu /bin/bash” command opens up a terminal to the container that allows you to run commands like “apt-get install mysql-server”. At this point you would want to save the changes you have made to the container. First we need to find the container ID, which we can do with the “docker ps” command that lists all the running containers. To actually save the changes we need to commit them to a new image using the “docker commit ‘container ID’ ‘new image name’” command. Now we have a brand new Docker image that we can reuse time and again.
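
Strung together, the whole flow looks like this (the image, package and final image name are just examples, and the container ID is a placeholder):

docker search ubuntu
docker pull ubuntu
docker run -it ubuntu /bin/bash
# inside the container: apt-get update && apt-get install -y mysql-server, then exit
docker ps -a                              # find the container ID
docker commit 3f4e8da21acf myname/mysql   # save it as a new image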

Docker Files

Now we come to the really useful part. I mentioned that you can run commands inside a Docker container by running its bash command and then save the changes using “docker commit”. Docker files help automate this process by providing a syntax that is stored in a file, making quick revisions and automation scripts possible. A Docker file starts with a FROM tag, which specifies the base image that the file is using. The MAINTAINER tag is normally next and provides a reference to the author of the image. A RUN tag allows you to run executables on the system like “apt-get” or “yum”. It’s best to chain RUN tags using the && operator, because Docker creates a temporary image every time you execute a RUN tag command. WORKDIR is another useful tag; it sets the current active directory and is sometimes required by less well designed applications. Some Docker images can be run without providing an explicit command (“docker run cassandra”) because the image has an ENTRYPOINT tag, which specifies a default command to use when executing the image. And finally the ADD tag, which adds files from the host file-system to the Docker image.
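
Put together, a small Docker file using these tags might look like this (the package, paths and maintainer are illustrative):

# base image to build on
FROM ubuntu:16.04
MAINTAINER you@example.com

# chain commands with && so Docker creates a single intermediate image
RUN apt-get update && apt-get install -y php7.0-cli

# copy the project in and make it the active directory
ADD . /app/
WORKDIR /app/

# default command when the image is run
ENTRYPOINT ["php", "server.php"]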

Why Docker is so useful

Normally you store a copy of the Docker file somewhere in the project and have a CI server build artefacts from it. The artefacts are deployable components that the DevOps and QA teams can use. Here’s the scenario: a developer is asked to implement a new feature; they make some changes and push them up on a new branch. The CI server picks up the changes and builds a new Docker image. The QA pulls the new Docker image and runs it on their own desktop. They test the feature and find a bug. The bug is logged against that Docker build and communicated to the developer. The developer wants to see the bug for themselves, so they pull the image too and go through the same steps. They come across the bug because they are able to replicate exactly the same environment as the QA, and they fix it. They push up a new commit and a new image is created. It’s tested and it passes QA. The changes are merged to the master branch in Git, which fires off a production Docker image. The DevOps team pulls the image on the production server and simply executes the “docker run ‘image’” command. This solves the problem that, even with the best of intentions, deployment environments have subtle but important differences. A good example is the difference in configuration between Apache 2.2 and Apache 2.4, or a developer using Windows or OS X while the production environment is a Linux server.

Docker Volumes

An important part of knowing Docker is knowing how to manage the data inside a container; for example, Docker does not persist file changes between containers. Docker has the concept of data volumes, similar to AWS’s EBS. Docker volumes provide sharing of volumes between containers and persistence on the container file-system. When using Docker files, changes to files being added to an image force Docker to bypass the cache and redo every step to ensure consistency; changes to a data volume bypass this. In other words, when updating an image, file changes do not trigger a clean Docker build. Docker also allows volumes to be created from the command line using “docker run -v ‘volume’ ‘image’ ‘command’”. Creating volumes from the command line has one very useful feature: they can mount the host file-system in a container, e.g. “docker run -v ‘host dir’:’image dir’ ‘image’ ‘command’”. This allows you to update your project files in real time, reflecting changes immediately. The best way to persist data volumes between container instances is to create a data volume container by calling “docker create -v ‘image dir’ --name ‘image name’ ‘base image’”. The volumes can be mounted into new containers with “docker run -d --volumes-from ‘volume name’ ‘image name’”. Multiple run calls like this one can be made, and the containers will share the same mounted directory. To back up a data volume use the following command: “docker run --volumes-from ‘volume name’ -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /’image dir’”.
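
As a concrete sketch (the dbstore name and /dbdata path are mine), the data volume container workflow from this paragraph looks like:

# create a data volume container, then share its volume with a new container
docker create -v /dbdata --name dbstore training/postgres
docker run -d --volumes-from dbstore --name db1 training/postgres

# back the volume up into the current host directory
docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata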

Linking Containers

The first and most useful linking tool to learn is port mapping. Server containers normally expose a network port, but by default it is not available to the host machine. By specifying the “-P” flag when running a container, the port the container is listening on is mapped to a high-numbered port on the host machine. With “-p ‘host port’:’container port’“ the container port can be explicitly mapped to a host port. Docker has a linking service as well; a good use case to demonstrate how this works is a web application communicating with a database server. Start a database server with “docker run --name db training/postgres”. Explicitly setting the name of the container makes the following example much easier to follow. To run a new container linked to the database, run: “docker run -d -P --name web --link db:db training/webapp python app.py”. By linking the database container to the web container, the web container can inspect the properties of the database container. These properties are exposed to the web container as environment variables, which the web application can use to configure a connection back to the database server.
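
A quick way to see what the link injects (my own check, not from the original post) is to print the environment inside a linked container; the variables prefixed with DB_ come from the link:

docker run --rm --link db:db training/webapp env | grep DB_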

Conclusion

Docker is an amazing tool for standardising the runtime environment of an application, allowing a more structured workflow for server-side applications and more robust testing and deployments.

BDD & Behat

BDD is an amazing way to test software. Behat is a PHP BDD framework that works well with Laravel.

History of testing

Testing has always been a part of software development, but only recently has it been formalised into a work methodology. TDD (test-driven development) dictates that the developer writes the tests first, then the actual functionality. TDD suffers from two issues: 1.) it’s very difficult to get right; 2.) it’s very developer-centric, and developers usually only test the happy-case scenario. TDD for functional testing is difficult because it’s very code-orientated. A skilled QA (quality assurance tester) has a different skill set from a developer, and they should feel free to strengthen those core skills instead of developing programming skills.

Why BDD?

BDD was created by combining TDD, Domain Driven Design and object-orientated programming. It focuses on testing the core functionality of the product rather than testing every single part, and it provides a means for both technical and business interests to be represented in the tests.

What I like most about BDD is that a QA does not have to write any code to write functioning (not just functional) tests. They rely on a set of pre-written pieces of code that are bound to plain-English sentences. The sentences are strung together into a paragraph, and the paragraph represents a test. This approach makes it clear what the test is actually for; the test is self-documenting. The net result is that developers can concentrate on writing code and QAs can concentrate on writing tests.

Behat

Behat is a BDD framework for PHP that has excellent integration with Laravel. It’s installed via composer like any other modern PHP package and has an executable in the vendor/bin folder. To add Behat to the project, put the following in the require-dev section of the composer file:

"behat/behat": "^3.0",
"behat/mink": "^1.6",
"behat/mink-extension": "^2.0",
"laracasts/behat-laravel-extension": "^1.0"

It’s necessary to install the Mink packages for browser (functional) testing; Mink is an extension package for Behat that allows for browser or web testing. Once installed, execute

vendor/bin/behat --init

to create the features directory where the behat files are stored.

In the features directory create a file called “webpages.feature” and copy in the following

Feature:
  In order to prove that Behat works as intended
  We want to test the home page for a phrase

  Scenario: Root Test
    When I am on the homepage

Create a file called “behat.yml” in the project directory and copy in the following

default:
    extensions:
        Laracasts\Behat:
             env_path: .env
        Behat\MinkExtension:
            default_session: laravel
            base_url: http://docrepo.lh
            laravel: ~

Alter the FeatureContext class definition to look like the following

class FeatureContext extends MinkContext implements Context, SnippetAcceptingContext

Run

vendor/bin/behat --dl

to get a list of commands that you can use when creating new tests.

When you are done creating the tests, run the following

./vendor/bin/behat features/webpages.feature

to execute the tests, you should see the following output

Feature:
  In order to prove that Behat works as intended
  We want to test the home page for a phrase

  Scenario: Root Test         # features/webpages.feature:5
    When I am on the homepage # FeatureContext::iAmOnHomepage()

1 scenario (1 passed)
1 step (1 passed)
0m0.14s (29.27Mb)

Features file

The feature file can be called anything; I just used webpages as a way to specify what it tests. The feature file uses the Gherkin language, which defines a set of keywords to differentiate areas and actions. Gherkin was designed especially for describing system behaviour, and it gives us the ability to remove logic from behaviour tests. The keywords are listed below, with an example scenario after the list.

  • Feature: a high-level description of the functionality; this ties in with the functional specification.
  • Scenario: a use case describing how a user will use the functionality.
  • Given: sets up the preconditions for the test, assigns values to variables, and so on.
  • When: a user action, e.g. a user presses a button.
  • Then: tests an assertion, in other words whether the test failed or passed.
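
Put together, a scenario using all of these keywords might read like this (the wording is my own, and each step would still need a matching definition in FeatureContext):

Feature: Blog search
  In order to find old posts
  As a reader
  I want to search articles by keyword

  Scenario: Searching the blog
    Given there is an article titled "Docker Tutorial"
    When I search for "Docker"
    Then I should see "Docker Tutorial" in the results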

Code

In the FeatureContext PHP file create the following method

/**
 * @When I click on photography
 */
public function IClickOnPhotography()
{
    $this->getSession()->getPage()->find("css", "[href='/photography']")->click();
}

and add the following under the scenario

And I click on photography

Add the following to the FeatureContext file

/**
 * @Then I am on the photography page
 */
public function IAmOnThePhotographyPage()
{
    if (!$this->getSession()->getPage()->has('css', '.list-unstyled.photography-photo-list')) {
        throw new Exception(
            'We are not on the photography page'
        );
    }
}

And this in the scenario

Then I am on the photography page

When you run the test you should see

Feature:
  In order to prove that Behat works as intended
  We want to test the home page for a phrase

  Scenario: Root Test                 # features/webpages.feature:5
    When I am on the homepage         # FeatureContext::iAmOnHomepage()
    And I click on photography        # FeatureContext::IClickOnPhotography()
    Then I am on the photography page # FeatureContext::IAmOnThePhotographyPage()

1 scenario (1 passed)
3 steps (3 passed)
0m0.14s (30.37Mb)

Selenium

Selenium is a Java service that can control the browser for proper functional tests in a production environment, similar to what a user would experience in the real world. It’s a jar file that can be executed from the command line via

java -jar selenium-server-standalone-*.jar

Behat has the ability to talk to Selenium on our behalf to run the tests. To configure Selenium testing we have to add the Mink Selenium2 driver to the require-dev section.

"behat/mink-selenium2-driver": "*"

and change the content of the behat.yml file to

default:
    extensions:
        Behat\MinkExtension:
            base_url: http://www.fritsstegmann.co.za/
            default_session: selenium2
            selenium2: ~

Execute the test with

./vendor/bin/behat features/webpages.feature

You should see a browser pop up and complete the described actions.

Conclusion

I hope you can see the potential for creating quick and flexible tests, and for separating the testing and development of a software product.

Retrofit and Laravel Restful API

Okay, so I’m writing this for two reasons: one, someone asked me to, and two, I wanted some sort of analytics view on my phone, and making a blog post out of my efforts seemed like a good idea.

Laravel

I already store the page and the type of device that visited my site, so getting a count of all the human visits is easy. I mention human visits because other computers visit my site as well to try to do something with the information that I publish. Google is a good example: they know what search results to return because they use computers to collect the information on websites all across the internet, including mine. I don’t want Google’s visits to count toward the total visits for my articles, so I filter them out.

First, in Laravel we need to create a controller for collecting the data and serialising it into something an Android client can consume (in this case JSON).

class ArticleController extends Controller
{
    public function index()
    {
        $articles = CmsArticle::all();

        /** @var CmsArticle $article */
        foreach($articles as &$article) {
            $article->total_visits = $article->totalVisits();
            $article->daily_visits = $article->dailyVisits();
        }

        return response()->json($articles);
    }
}
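
The endpoint then returns something like this (the field values here are invented):

[
    {
        "id": 1,
        "title": "Docker Tutorial",
        "total_visits": 1024,
        "daily_visits": 37
    }
]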

Then we add a route to the controller so we can expose it on a URL.

Route::group(array('namespace' => 'Api', 'prefix' => 'api'), function () {
    Route::resource('articles', 'ArticleController');
});

Android

For the Android application we are going to use the RecyclerView and CardView support libraries to render the data on the screen. RecyclerView is much like ListView but far more memory- and processor-efficient; for the Android developers reading this, it standardises the view holder pattern. The CardView library is an attempt (a successful one) to help bring modern Android design to older phones.

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mRecyclerView = (RecyclerView) findViewById(R.id.my_recycler_view);

        // use this setting to improve performance if you know that changes
        // in content do not change the layout size of the RecyclerView
        mRecyclerView.setHasFixedSize(true);

        // use a linear layout manager
        mLayoutManager = new LinearLayoutManager(this);
        mRecyclerView.setLayoutManager(mLayoutManager);

        // specify an adapter (see also next example)
        mAdapter = new MyAdapter(myDataSet);
        mRecyclerView.setAdapter(mAdapter);

        this.fetchArticles();
    }

Gradle

On to the Android part. To import the libraries needed to build the Android application we add the following to the dependencies list in Gradle.

    compile 'com.android.support:appcompat-v7:22.2.1'
    compile 'com.squareup.retrofit:retrofit:1.9.0'
    compile 'com.squareup.okhttp:okhttp-urlconnection:2.0.0'
    compile 'com.squareup.okhttp:okhttp:2.0.0'
    compile 'com.android.support:cardview-v7:22.2.1'
    compile 'com.android.support:recyclerview-v7:22.2.1'

Retrofit

The first thing we need here is an object that we can use across our application to hold the API data after we receive it.

public class Article {
    private String title;
    private Integer total_visits;

    public Article(String title, Integer total_visits) {
        this.title = title;
        this.total_visits = total_visits;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public Integer getTotal_visits() {
        return total_visits;
    }

    public void setTotal_visits(Integer totalViews) {
        this.total_visits = totalViews;
    }
}

Retrofit requires the API to be defined as a Java interface; here is ours:

import java.util.List;

import retrofit.Callback;
import retrofit.http.GET;
import retrofit.http.Headers;

public interface WebsiteService {

    @Headers("User-Agent: android-api-client")
    @GET("/articles")
    void list(Callback<List<Article>> cb);
}

And lastly we need to call the API from Android. Android doesn’t allow web requests on the main thread, so we have to use a callback.

    private void fetchArticles() {
        RestAdapter restAdapter = new RestAdapter.Builder()
                .setEndpoint("http://www.fritsstegmann.co.za/api")
                .build();

        WebsiteService service = restAdapter.create(WebsiteService.class);

        service.list(new Callback<List<Article>>() {
            @Override
            public void success(List<Article> articles, Response response) {
                myDataSet = articles;
                Log.i("MA", "Successful API Call");
                for (Article article : articles) {
                    Log.i("MA", article.getTitle());
                }
                mRecyclerView.setAdapter(new MyAdapter(myDataSet));
                mAdapter.notifyDataSetChanged();
            }

            @Override
            public void failure(RetrofitError error) {
                Log.e("MA", error.getMessage());
            }
        });
    }

The final product is an Android application that shows the number of page views for each blog post, following the material design guidelines.

A Good Functional Specification

Functional specifications are the bane and the salvation of every developer who has come across them. They are a detailed document on how users or third-party actors will use a system. Developers loathe these documents when they read like a bad novel: disorganised and difficult to separate into distinct use cases or functional parts. A good functional specification is a great reference on how the user will experience the application. It’s a document that helps keep all the stakeholders on track and communicating effectively, and a solid foothold for creating technical specifications, documentation and testing guides.

Living Document

Most people assume that a functional specification gets written at the beginning of a project and never changes afterwards. A good functional specification changes during the course of a project: the assumptions and conditions at the start of a project never stay the same until the end. A functional specification should reflect that, being updated, with records kept of the updates, as the project matures. This way it becomes a great document for reflection at the end of the project.

Introduction

The introduction of a functional specification should start by stating the problem that the document addresses; it’s the story of the project commissioner. The introduction should state who all the key stakeholders in the project are, which helps any future work or investigation pinpoint reliable sources of information. The introduction should also list and describe the terms used in the project and the document, for clear communication.

Out of Scope

This is one of the most important parts of a functional specification, because no one thinks about what’s not there. It’s the best place to clear up assumptions, especially for the project commissioner. This section sets the bounds of the project; without it, stakeholders will likely drift from the project requirements and start implementing undocumented features that push out the project deadlines and are sometimes unwanted.

Why Use Cases

Use cases are, in my opinion, the best way to explain how the end user will experience the application. They are in story form, making them easy to write and understand. They provide a neat and clear way to document the features of a system while providing a framework that keeps the writer from creating a novel. They also provide an easy way to write up technical specifications and system documentation: a use case can be copied and pasted into a technical document, and the technical documenter simply writes a paragraph on how the system will accomplish the use case.

User Personalities

If the system has different user experiences and user roles, it’s a good idea to create user personalities, or personas (as they are called in UX). A persona is a good way of getting people to think in terms of the end-user experience, and personas make excellent references in conversation when trying to express an idea. Personas keep developers focused on providing relevant security and user permissions in a multi-tenant system, and designers use them to create great user experiences by imagining the persona using the system as a real user would. It’s advisable to create a separate section before the use cases start to introduce all the personas to the stakeholders; this makes them referenceable in the use cases as well.

User Interaction Sections

It’s in my opinion a good idea to split the functional specification into feature areas, e.g. on an administration system for a blog, list the headers as: blog posts, blog post categories, blog post comments, user administration, etc. This makes it easy for a developer, end user, software architect or documenter to address only the part of the system that they care about at a particular point. Under each of these sections, begin listing the use cases. Pay attention to what is not there: if a persona does not have access to an area, state that fact clearly, do not ignore it.

Notes

Some parts of the document will be addressed only to a particular stakeholder. The best way to do this is with coloured text areas, explaining their use in the introduction of the document.

Signing Page

If you require a signing page, it should be kept separate from the rest of the document. It should have a clause stating that previous functional specifications are voided by signing this document. This way, every time the functional specification is updated as the project progresses all the parties remain protected by the agreement and the document stays malleable.

Laravel & Spring Session

I really like both Laravel and Spring; just one problem, Spring HTTP sessions and Laravel sessions do not play well together. This blog post details how I tried to make these two technologies work together.

The first problem was that these two technologies store their data under different keys in Redis. The first thing I had to do was extend the Laravel session with a new Redis session handler, which I called sredis in the config.

In the constructor I could define my own session-key prefix and my own expiry time.

The second problem was that Spring stores its session as a hash while Laravel stores its session as a string key-value pair. Laravel serialises and deserialises the contents of the session in the functions that call the methods below, so the first thing to do was to turn the contents back into objects, and vice versa, so that we can store them as a Redis hash.

The third problem was the encoding of the session cookie, which at the time of writing seems impossible to get around, so for now I have put this project to rest.

Have a nice day 🙂

namespace App\fstegmann\srsession;

use SessionHandlerInterface;
use Illuminate\Support\Facades\Redis;

class SpringRedisSessionHandler implements SessionHandlerInterface
{
    private $prefix = null;
    private $expire = null;

    private $redis = null;

    function __construct()
    {
        if ($this->redis == null) {
            $this->redis = Redis::connection();
        }
        $this->prefix = 'fritsstegmann.co.za:sessions:';
        $this->expire = 1800;
    }

    public function read($sessionId)
    {
        if ($this->redis == null) {
            $this->redis = Redis::connection();
        }

        $data = $this->redis->hgetall($this->prefix . $sessionId);
        foreach($data as $key => &$d) {
            $d = @unserialize($d);
        }

        return $data;
    }

    public function write($sessionId, $data)
    {
        if ($this->redis == null) {
            $this->redis = Redis::connection();
        }

        $data = @unserialize($data);

        foreach($data as $key => &$d) {
            $d = serialize($d);
        }
        $this->redis->hmset($this->prefix . $sessionId, $data);
    }

    public function destroy($sessionId)
    {
        if ($this->redis == null) {
            $this->redis = Redis::connection();
        }

        $this->redis->del($this->prefix . $sessionId);
    }

    //Unused Session Handler Methods
    public function gc($lifetime) {}
    public function open($savePath, $sessionName) {}
    public function close() {}
}

The matching service provider registers the handler under the sredis driver:

namespace App\fstegmann\srsession;

use Illuminate\Support\ServiceProvider;
use Illuminate\Support\Facades\Session;

class SpringRedisSessionServiceProvider extends ServiceProvider {

	/**
	 * Bootstrap the application services.
	 *
	 * @return void
	 */
	public function boot()
	{
        Session::extend('sredis', function($app) {
            return new SpringRedisSessionHandler;
        });
	}

	/**
	 * Register the application services.
	 *
	 * @return void
	 */
	public function register() {}
}

Finally, point the session driver at the new handler in .env:

CACHE_DRIVER=redis
SESSION_DRIVER=sredis
QUEUE_DRIVER=redis

Laravel & Redis

Laravel and Redis are both great pieces of technology at the forefront of technical innovation. This site uses both of them to provide a superior user experience.

Laravel

Laravel is a PHP framework that, unlike the older PHP frameworks, dropped support for legacy PHP, greatly enhancing what it is capable of. It is an opinionated view of PHP that brings together some of the best practices for developing with PHP. It also has some great features usually found in other languages, like dependency injection and a queuing system.

Redis

Redis is an in-memory, persisted key-value data structure server with some really neat features that make it a perfect fit somewhere in just about any architecture. The common use cases for Redis include caching, session management, NoSQL storage and pub/sub events. There is a great deal of hype around Redis that transcends language boundaries and software disciplines, and that should tell you something.

Working together

The great news is that Laravel comes with really good support for Redis. It supports caching and session management out of the box and can use Redis as a NoSQL store without any major configuration.

By default Laravel tries to use the localhost Redis installation without a username or password, making it really easy to set up in a development environment.

Adding PRedis to composer.json

"require": {
     "illuminate/redis": "5.0.26",
     "laravel/framework": "5.0.26",
     "facebook/php-sdk-v4" : "4.0.23",
     "nesbot/carbon": "1.18.0",
     "predis/predis": "1.0.1"
}

In the config/database.php file

'redis' => [
    'cluster' => false,
    'default' => [
        'host' => '127.0.0.1',
        'port' => 6379,
        'database' => 0,
    ],
],

Setting up Redis as the infrastructure provider

CACHE_DRIVER=redis
SESSION_DRIVER=redis
QUEUE_DRIVER=redis

Accessing Redis as a NoSQL store

$redis = Redis::connection();
$redis->set('key', 'value');
$value = $redis->get('key');
$values = $redis->lrange('lkey', 5, 10);

Using the syntax $redis->"function"(), you can call any function defined in the Redis command list.
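
For example (the key names are invented), hash and counter commands pass straight through to Redis:

$redis = Redis::connection();
$redis->hset('user:1', 'name', 'Frits'); // HSET
$redis->incr('page:views');              // INCR
$redis->expire('page:views', 3600);      // EXPIRE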

Tracking Page Views on this site

public function article($slug)
{
    $article = CmsArticle::where('slug', $slug)->where(array('status_id' => 2))->first();
    $prevArticle = CmsArticle::where('created_at', '<', $article->created_at)->orderBy('created_at', 'desc')->first();
    $comments = CmsComment::where(array('article_id' => $article->id, 'published' => 1))->orderBy('created_at', 'desc')->get();

    $redis = Redis::connection();
    $redis->incr('fritsstegmann.co.za:article:' . $slug);

    return view('article/article')->with(array(
        'article' => $article,
        'prevArticle' => $prevArticle,
        'comments' => $comments
    ));
}
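
For what it’s worth, the totalVisits method used by the API controller earlier could simply read this counter back; the body below is my assumption of how it works, not code from the original post:

public function totalVisits()
{
    $redis = Redis::connection();

    // the counter incremented in article() above
    return (int) $redis->get('fritsstegmann.co.za:article:' . $this->slug);
}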