Saturday 30 March 2013

Ruby on Rails


For years I have been building applications using C#. In recent years I have taken a keen interest in Ruby and Rails. I thought that I would blog about how to get started, to make it easier for me to remember. Hopefully you get a kick out of it.

Background


Ruby on Rails emphasises the use of well-known software engineering patterns and principles, such as the active record pattern, convention over configuration, don't repeat yourself and model–view–controller.

Active Record Pattern


Active record is an approach to accessing data in a database. A database table or view is wrapped into a class. Thus, an object instance is tied to a single row in the table. After creation of an object, a new row is added to the table upon save. Any object loaded gets its information from the database. When an object is updated the corresponding row in the table is also updated. The wrapper class implements accessor methods or properties for each column in the table or view.
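
As a minimal sketch of how this looks with Rails' ActiveRecord (assuming a hypothetical Product model backed by a products table with name and price columns):

product = Product.new          # new object, no row in the table yet
product.name = "Widget"        # accessors generated from the columns
product.price = 9.99
product.save                   # INSERT: a new row is added on save

product = Product.find(1)      # SELECT: load the row with id 1
product.price = 12.50
product.save                   # UPDATE: the corresponding row is updated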

Convention Over Configuration


Convention over configuration is a software design paradigm which seeks to decrease the number of decisions that developers need to make, gaining simplicity, but not necessarily losing flexibility.


Don't Repeat Yourself


In software engineering, don't repeat yourself (DRY) is a principle of software development aimed at reducing repetition of information of all kinds, especially useful in multi-tier architectures. The DRY principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system."


Model–View–Controller


Model–view–controller (MVC) is a software architecture pattern which separates the representation of information from the user's interaction with it. The three parts interact as follows (a small Rails-flavoured sketch follows the list):


  1. A controller can send commands to its associated view to change the view's presentation of the model (e.g., by scrolling through a document). It can also send commands to the model to update the model's state (e.g., editing a document).
  2. A model notifies its associated views and controllers when there has been a change in its state. This notification allows the views to produce updated output, and the controllers to change the available set of commands. A passive implementation of MVC omits these notifications, because the application does not require them or the software platform does not support them.
  3. A view requests from the model the information that it needs to generate an output representation.
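
In Rails these roles map directly onto models, views and controllers. A tiny, hypothetical sketch (the class, action and view names are made up) might look like this:

class PostsController < ApplicationController
  def index
    @posts = Post.all   # the controller asks the model for data
    # the implicit render hands @posts to the view,
    # app/views/posts/index.html.erb, to produce the output
  end
end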





Getting Started


Getting started is very easy. Perform the following steps:
  • Install rails by running gem install rails.
  • Run rails new <application_name> -T. The -T option means --skip-test-unit, i.e. skip the Test::Unit files.
  • Go to the application that you created: cd <application_name>
  • Edit your Gemfile and add the following to the file:
    group :development, :test do
      gem 'rspec-rails'
    end
  • Since I don't like plain old HTML I prefer to use Slim. To make sure Rails uses this template engine, add the following to your Gemfile: gem "slim-rails"
  • Now run bundle install to install your new gems
  • To set up RSpec, run rails generate rspec:install
  • Run rails server and go to http://localhost:3000/ to make sure everything is running
  • Once you are happy, let's get it into version control:
  • Run git init .
  • Run git add -A
  • Run git commit -m "Initial commit"

How easy was that.

Scaffolding


Rails uses the concept of scaffolding. Think of it as a way to generate some code for us. To get started we can create a new Post model:
  • rails generate scaffold Post name:string title:string content:text. This is such a powerful command as it goes and creates a new model, view, controller, specs and a db migration.
  • To create the table in the database, let's run the migration: rake db:migrate
  • Let's make sure that it all works by running rails server
  • Go to http://127.0.0.1:3000/posts and play around with it
How easy was that? We have created a new Post model with a view and controller that saves to the database.


Testing



As anyone who knows me will tell you, I am a huge advocate of Behaviour Driven Development. The thing I love about Ruby is that it makes it so easy. So let's write a test. Open up post_spec.rb and add this test:


it 'should not save post without title' do
  post = Post.new
  expect(post.save).to be_false
end


To make this test pass, we will add the following validation to the Post model:


validates :title, :presence => true
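
In context, the Post model (app/models/post.rb) would then look roughly like this:

class Post < ActiveRecord::Base
  validates :title, :presence => true
end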


Pretty easy. For more information please look at this excellent guide on testing.


Configuration



Configuration in rails is done through code and YAML files. In your rails application all configuration is stored in the config folder. The most common configuration that you will deal with is:
  • config/application.rb – Contains all the application configuration. Rather than putting everything in this file, it is recommended to add your changes to the config/initializers folder (a small sketch follows this list).
  • config/environments/* – Contains environment-specific configuration. This configuration takes precedence over the application configuration.
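
As a sketch of what such an initializer might look like (the file name and settings below are hypothetical), you could configure mail delivery in its own file rather than in config/application.rb:

# config/initializers/mail.rb
ActionMailer::Base.smtp_settings = {
  :address => 'smtp.example.com',
  :port    => 587
}
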
For more information on configuration, this guide will help you out.

Migrations


Rails abstracts the database nicely by introducing us to the concept of migrations. Think of it as a way to version your database. When you generate a model by running:


rails generate model Product name:string description:text


Rails will create a file under the db/migrate folder. This migration will look like:


class CreateProducts < ActiveRecord::Migration
  def change
    create_table :products do |t|
      t.string :name
      t.text :description
      t.timestamps
    end
  end
end


This is just a template and you are encouraged to change it, as you should be thinking about the queries that you will be making against this model.
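
For example, if you know you will often look products up by name, you might extend the generated migration with an index (just a sketch of one possible change):

class CreateProducts < ActiveRecord::Migration
  def change
    create_table :products do |t|
      t.string :name
      t.text :description
      t.timestamps
    end
    add_index :products, :name   # support frequent lookups by name
  end
end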


More information on migrations can be found here.

Asset Pipeline



The asset pipeline provides a framework to concatenate and minify or compress JavaScript and CSS assets. It also adds the ability to write these assets in other languages such as CoffeeScript, Sass and ERB.



The asset pipeline is a great idea. As backend developers we have taken this sort of thing for granted. This pipeline will allow us to work with JavaScript and CSS in a consistent way. So how does it work?



Well, everything for the asset pipeline is stored under app/assets. The convention is that files starting with application are the manifests into which all of your common assets are loaded. All of the assets are managed with a gem called sprockets. The format is as follows: //= require sub/something, which is like the require statement in Ruby. This allows you to manage all the dependencies for your JavaScript and CSS.
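
As a sketch, a typical app/assets/javascripts/application.js manifest might look like the following (the required file names are hypothetical):

//= require jquery
//= require sub/something
//= require_tree .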


Another cool aspect is that this pipeline allows you to chain different technologies together. For example, if you wanted to use CoffeeScript you would name your file something.js.coffee and include the coffee-rails gem; the asset pipeline will now allow you to write all of your front end in CoffeeScript. How cool is that?


For more information I recommend reading the official info on the asset pipeline.

Final Thoughts



This is just a basic introduction to the platform. As you can see, Ruby on Rails is a fantastic platform. However, realise that it is very opinionated, which is not a bad thing. What I love about rails is how easy it is to get going and that testing is not an afterthought. There is a lot more to rails, which I will explore by writing an application that allows me to explore the concepts of lean projects.


To learn more about rails I recommend looking into the online guides and reading the best book on the subject, Agile Web Development with Rails (4th edition).

Wednesday 20 March 2013

NoSQL Journey


NoSQL has been a hot topic for a lot of people. There is a lot that I have learnt about the architecture of these databases and about how distributed systems are built. So why was it that companies started looking at other solutions?

Why NoSQL?


The story is a typical one that I think we can all relate to. As data and transaction volumes started to grow, companies realised that they needed to scale their solutions.

So companies tried to address these challenges by making the problem fit the relational model:

  1. Adding more hardware or upgrading to faster hardware.
  2. Simplifying the database schema, denormalising it, and relaxing durability and referential integrity.
  3. Introducing various query caching layers and separating read-only from write-dedicated replicas.

These solutions drove the rise of what is now known as NoSQL. To scale these kinds of systems we need to think about them in a different way, and to understand large-scale distributed systems we need to understand the CAP theorem.

Brewer’s CAP Theorem


The theorem states that within a large-scale distributed data system, there are three requirements that have a relationship of sliding dependency: Consistency, Availability, and Partition Tolerance.

Consistency – All database clients will read the same value for the same query, even given concurrent updates.
Availability – All database clients will always be able to read and write data.
Partition Tolerance – The database can be split into multiple machines; it can continue functioning in the face of network segmentation breaks.

Brewer’s theorem states that in any given system, you can strongly support only two of the three. Traditional relational databases focus on Consistency and Availability, while the NoSQL movement tends to focus more on Availability and Partition Tolerance (this is tunable in some of the systems). Due to this focus, NoSQL systems are said to be eventually consistent.

Eventually Consistent


Basically the storage system guarantees that if no new updates are made to the object, eventually all accesses will return the last updated value. If no failures occur, the maximum size of the inconsistency window can be determined based on factors such as communication delays, the load on the system, and the number of replicas involved in the replication scheme. The most popular system that implements eventual consistency is DNS (Domain Name System). Updates to a name are distributed according to a configured pattern and in combination with time-controlled caches; eventually, all clients will see the update.

When we look at a lot of our data needs, one starts to wonder whether the data really needs to be available then and there. How many of us have degraded a system because we were logging everything to the same database? Logs are a great example of data that can be eventually consistent (obviously depending on what you are logging).

As stated in the great blog post Eventual Consistency By Example:

We may sum up the eventual consistency model in the following statement:

Given a total number of T nodes, we choose a subset of N nodes for holding key/value replicas, arrange them in a preference list calling the top node "coordinator", and pick the minimum number of writes (W) and reads (R) that must be executed by the coordinator on nodes belonging to the preference list (including itself) in order to define the write and read as "successful".

Here are the key concepts, extracted from the statement above:

  • N is the number of nodes defining the number of replicas for a given key/value.
  • Those nodes are arranged in a preference list.
  • The node at the top of the preference list is called coordinator.
  • W is the minimum number of nodes where the write must be successfully replicated.
  • R is the minimum number of nodes where the read must be successfully executed.

More specifically, values for N, W and R can be tuned (a small sketch follows this list) in order to:

  • Achieve high write availability by setting: W < N
  • Achieve high read availability by setting: R < N
  • Achieve full consistency by setting: W + R > N
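
A small sketch of that arithmetic (the numbers below are hypothetical) makes the trade-offs easy to see:

n = 3                                        # replicas for a given key/value
w = 2                                        # writes that must succeed
r = 2                                        # reads that must succeed

puts "high write availability" if w < n
puts "high read availability"  if r < n
puts "fully consistent"        if w + r > n  # read and write sets must overlap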

Final Thoughts


Hopefully you can see how powerful this information is and how it really gets you thinking about the way you design your systems. I look forward to going through each of the NoSQL databases and seeing how these topics apply to them.

Saturday 16 March 2013

Web Development on Mono

Background


Today I decided that I wanted to try out deploying an ASP.NET MVC + Web API application on Ubuntu.

Why do I want to do that? Well, I have always loved C# and .NET, however I have never really enjoyed the Microsoft ecosystem. I am glad that Mono is around and have often wondered: what if C# and .NET had targeted all platforms?

I wanted to get MVC 4 and Web API to work correctly. So after spending a few hours on it I thought I would share it with you all.

Process


Below I will outline the steps that I took to get this working. I will leave the blood and tears to myself.

Preliminary


The first thing I did was to get Vagrant working. This makes it easier to provision a virtual machine on my MacBook Pro. The image that I used was Ubuntu Server 12.10 amd64 Minimal (VirtualBox 4.2.6). I created a Vagrantfile that contained the following code:


Vagrant.configure("2") do |config|
  config.vm.box = 'quantal64'
  config.vm.provision :shell, :path => 'bootstrap.sh'
  config.vm.network :forwarded_port, guest: 80, host: 9000
end

The bootstrap.sh contained the following:


#!/usr/bin/env bash

apt-get update
apt-get install -y git
apt-get install -y nginx
apt-get install -y g++
apt-get install -y gettext
apt-get install -y pkg-config
apt-get install -y automake

These packages will be used to build Mono and the web server. Once you have all this set up, it is just as easy as running vagrant up and vagrant ssh.

Compiling Mono


Once you have logged into the box, run the following commands:

  • wget http://download.mono-project.com/sources/mono/mono-3.0.7.tar.bz2
  • tar -xvjpf mono-3.0.7.tar.bz2
  • cd mono-3.0.7
  • ./configure --prefix=/usr/local
  • make
  • sudo make install
Once all that finishes you will need to build the web server (XSP):

  • git clone https://github.com/mono/xsp
  • cd xsp
  • ./autogen.sh --prefix=/usr/local
  • make 
  • sudo make install
To test that it works, create yourself a simple example project or use the one I made. To bring up the development web server run sudo xsp4 --port 80 and go to http://localhost:9000/api/service/test. You should see the following:

<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">awesome</string>

Future


I will be going through some of the MVC 4 and Web API features to make sure it all works. Once that is done I will try to do this on AWS by creating an AMI so that I can test some release processes. Thanks for reading and I hope this was just as exciting for you as it was for me.


Saturday 9 March 2013

Lean Projects - Part 2


So some people will read the previous blog and ask why I am being so negative. See, I don't see it that way; I see a place that has a great culture and wants to embrace change and continuous improvement. I also see this as a massive opportunity to try out some of the ideas that I will present. A lot of these ideas are not new topics, however they have been tailored for the environment that I am in.

I am calling this concept Lean Projects: a way for us to embrace lean and agile methodologies in an agency environment.


Kanban


Kanban is a method for developing software products and processes with an emphasis on just-in-time delivery while not overloading the software developers. In this approach, the process, from definition of a task to its delivery to the customer, is displayed for participants to see and developers pull work from a queue.

The thing to note from the above statement is that Kanban works on the concept of a pull model.

Visualise Workflow


As stated above, one of the challenges that we face is to visualise our workflow. One way to get started is to map the value stream. The idea behind value stream mapping is to sketch out how you are working as this will help everyone to understand how the process works.

From this visualisation we can create what is called a Kanban board. The idea behind this board is to have a way to show your team what your value stream looks like. An example of a board looks like this:

This board is the single point of truth for everyone working on a product.

If you would like to know how to manage the process on a Kanban board, please read my previous blog, Launching Australia's Biggest News Site.

Limit Work In Progress


The most important part of the Kanban process is to limit your WIP.

In Kanban, we don’t juggle. We try to limit what we are doing, and get the most important things done, one by one, with a clear focus. It’s not uncommon to find that doing ten things at once takes a week, but doing two things at once takes hours, resulting in twenty things being done by the end of the week.

A great explanation can be found here Why Limit Work In Progress?


Measure and Manage Flow


The important aspect is to make sure that each of the features that is on the board actually goes through to the last step of your flow. We set limits to make sure we don't start picking up more features than the ones already started.

Some teams have the concept of stop the line, where the team gets together to make sure the pipeline clears up for the next features to be pulled in.

Fortunately Kanban has a few ways for us to measure the pipeline, as without measurement we can't improve on things.

Cumulative Flow Diagrams


This diagram is an area graph that depicts the quantity of work in a given state, showing arrivals, time in queue, quantity in queue, and departure. This is a great way to visualise how we are churning features through our work flow.


Lead Time


The term lead time (LT) is just a fancy way to say time to market. It’s essentially the time that elapses (on the calendar) from when something is ordered until it is received. In software development, a feature is ordered when someone (like the product owner) asks someone else (the team) to implement it. The same feature is received when it is deployed to the production system and ready to be used by end users.

Why is this important? Well, we can track how long a feature took. This is a really powerful metric, as it can be used to eliminate the entering of hours spent into a system.

How can this be used in Lean Projects? Well when I estimate I always like to estimate in terms of effort, meaning that it is easier for me to think about how easy/hard something is rather than how long it will take. This effort for me is measured using t-shirt sizes, e.g. S, M and L. Now if we start tracking all the features that have been sized using a t-shirt size we can start measuring the lead time for each of these features. We can then take the average of the lead time and this amount can be used to drive the estimates of the next feature.
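
A rough sketch of that idea in Ruby (the feature data below is made up) is to group finished features by t-shirt size and average their lead times:

features = [
  { :size => 'S', :lead_time_days => 3 },
  { :size => 'S', :lead_time_days => 5 },
  { :size => 'M', :lead_time_days => 8 }
]

averages = features.group_by { |f| f[:size] }.map do |size, group|
  [size, group.map { |f| f[:lead_time_days] }.reduce(:+).to_f / group.size]
end

p Hash[averages]   # => {"S"=>4.0, "M"=>8.0} – drives the next estimate of that size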

Make Process Policies Explicit


As mentioned on Wikipedia:

Until the mechanism of a process is made explicit it is often hard or impossible to hold a discussion about improving it. Without an explicit understanding of how things work and how work is actually done, any discussion of problems tends to be emotional, anecdotal and subjective. With an explicit understanding it is possible to move to a more rational, empirical, objective discussion of issues. This is more likely to facilitate consensus around improvement suggestions.

LeanKit has a great example of how this can be achieved.

Feedback Loops and Improvement Opportunities


The way to improve is to have some form of feedback loop. I am a big fan of Agile Retrospectives. The idea is to meet at a regular time to reflect on how everything is going. With Lean Projects we need to reflect on how efficient the process is and ask the right questions. The way that we can do that is by using the 5 Whys technique. A good metric for improvement is reducing lead time, as this will help you deliver all features in a better way.

Remember to listen to your team as they are closer to the problem than we usually are. Issues usually turn out to be a process problem, not a people problem. Lean Projects are about a bottom-up approach.

Your people are your most important resource. Ensuring their support and cooperation is vital for all major projects.

Specification By Example


Specification by Example is a set of process patterns that facilitate change in software products to ensure that the right product is delivered efficiently. Wikipedia summarises this quite well:

Teams that apply Specification by example successfully commonly apply the following process patterns:
  1. Deriving scope from goals
  2. Specifying collaboratively – through all-team specification workshops, smaller meetings or teleconference reviews
  3. Illustrating requirements using examples
  4. Refining specifications
  5. Automating tests based on examples
  6. Validating the underlying software frequently using the tests
  7. Evolving a documentation system from specifications with examples to support future development

So what does this all mean for Lean Projects? Well, the basic idea is that we want to find a common language to describe the requirements through a set of examples. For this to work we need to do it collaboratively and make sure that the requirements don't become stale, as they will serve as documentation.

Before we can think of an example, we need to think of the value that we are adding. The best format I have found is as follows:

In order to [business value]
As a [role]
I want to [some action]

The first part is the most important and it should reflect one of these values:

  • Protect revenue
  • Increase revenue
  • Manage cost
  • Increase brand value
  • Make the product remarkable
  • Provide more value to your customers

Also think about the role that this feature is for. The reason I mention this is because sometimes we build a feature for the “Highest Paid Person's Opinion”, which most of the time adds no value. A scenario or example has the following format:

Given [context]
When I do [action]
Then I should see [outcome]

The Given step is where you set up the context of your scenario. Every scenario starts with a blank slate. The When step is where you exercise the application in order to accomplish what needs testing. Finally, the Then step is where you verify the result.

This way of working can be thought of as outside-in development. So an example feature would look like this (a sketch of matching step definitions follows the feature):

Feature: Login

In order to access the website content
As a website user
I need to log in to the website

Scenario: valid credentials
Given I am presented with the ability to log in
When I provide the email address "test@mywebsite.com"
And I provide the password "Foo!bar1"
Then I should be successfully logged in

Scenario Outline: invalid credentials
Given I am presented with the ability to log in
When I provide the email address "<email>"
And I provide the password "<password>"
Then I should not be logged in
Examples:
| email | password |
| test@mywebsite.com | Foo!bar2 |
| test@mywebsite.com | Foo |
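
As a rough sketch of how these examples might be automated with Cucumber and Capybara (the paths, field labels and page content below are assumptions, not the real application):

# features/step_definitions/login_steps.rb
Given /^I am presented with the ability to log in$/ do
  visit '/login'
end

When /^I provide the email address "([^"]*)"$/ do |email|
  fill_in 'Email', :with => email
end

When /^I provide the password "([^"]*)"$/ do |password|
  fill_in 'Password', :with => password
end

Then /^I should be successfully logged in$/ do
  page.should have_content('Logged in')
end

Then /^I should not be logged in$/ do
  page.should have_content('Invalid email or password')
end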

Lean Software Development


Lean software development is a translation of lean manufacturing and lean IT principles and practices to the software development domain. Adapted from the Toyota Production System, a pro-lean subculture is emerging from within the Agile community.

Lean software development is guided by seven principles.

Eliminate Waste


Waste is anything that interferes with giving customers what they value at the time and place where it will provide the most value. Anything we do that does not add customer value is waste, and any delay that keeps customers from getting value when they want it is also waste.

Some forms of waste are:

  1. unnecessary code or functionality
  2. starting more than can be completed
  3. delay in the software development process
  4. bureaucracy
  5. slow or ineffective communication
  6. partially done work
  7. defects and quality issues
  8. task switching

Build Quality In


The idea here is to start with the frame of mind that quality is important and needs to be done from the start. Your goal is to build quality into the code from the start, not test it in later. You don’t focus on putting defects into a tracking system; you avoid creating defects in the first place.

Extreme Programming is an excellent example of building quality in; some examples are:

Pair Programming


Pair Programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer, reviews each line of code as it is typed in. The two programmers switch roles frequently.

Test Driven Development


Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, and finally refactors the new code to acceptable standards.

Continuous Integration


Continuous integration (CI) is the practice, in software engineering, of merging all developer workspaces with a shared mainline several times a day.


Refactoring


Code refactoring is a "disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behaviour", undertaken in order to improve some of the nonfunctional attributes of the software. Advantages include improved code readability and reduced complexity to improve the maintainability of the source code, as well as a more expressive internal architecture or object model to improve extensibility.

Create Knowledge


We have all been in the situation where we have only one person that knows the codebase. There will always naturally be individuals that are stronger than others in specific areas. This is what makes great cross-functional teams; however, having only one person know something is what is called a single point of failure.

It is important that the whole team has an understanding of how the system works, so that we can eliminate single points of failure. However, this is easier said than done as not everyone is always interested in every aspect of every system.

The following provide great ways to create knowledge:


Defer Commitment


Emergency responders are trained to deal with challenging, unpredictable, and often dangerous situations. They are taught to assess a challenging situation and decide how long they can wait before they must make critical decisions. Having set a time-box for such a decision, they learn to wait until the end of the time-box before they commit to action, because that is when they will have the most information.

Basically we want to decide as late as possible, especially for irreversible decisions.

If you decide too early, you run the very likely risk that something significant will have changed, meaning your end product might meet the spec, but it might still be the wrong product! This is one reason why so many projects fail.

Deliver Fast


If we can deliver software fast it means we can get real feedback from actual users, which is the reason you are building the feature in the first place. Another really great side effect of delivering software fast is that you will have had to solve how you release your features. Releasing quickly means doing it in a repeatable way, and that means automation. The way to achieve this is through continuous delivery.

Continuous delivery is a pattern language in growing use in software development to improve the process of software delivery. Techniques such as automated testing, continuous integration and continuous deployment allow software to be developed to a high standard and easily packaged and deployed to test environments, resulting in the ability to rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead.

Respect People


I think it’s important to treat everyone with the same respect, whatever their job. It doesn’t matter whether they’re the CEO, a developer, project manager, the receptionist or the cleaner, respect everyone equally.

This comes back to the bottom up approach, giving people the responsibility to make decisions about their work. It is important to build knowledge and develop people who can think for themselves.

Respecting people means that teams are given general plans and reasonable goals and are trusted to self-organise to meet the goals. Respect means that instead of telling people what to do and how to do it, you develop a reflexive organisation where people use their heads and figure this out for themselves.

Optimise The Whole


A lean organisation optimises the whole value stream, from the time it receives an order to address a customer need until software is deployed and the need is addressed. If an organisation focuses on optimising something less than the entire value stream, we can just about guarantee that the overall value stream will suffer.

Looking at the whole picture is important. In the past I have spent time just optimising the development practices, however I believe that if you don't love what you are building then what is the point? We should all care about the overall value stream.

Final Thoughts


All of these thoughts have been going around in my head for years and I am truly excited to put these ideas to work in such a vibrant environment. I look forward to sharing my findings as I love to see how all these thoughts change over time.

Lean Projects - Part 1

I have had the pleasure of starting a new job at a very well established agency. As with every company, managing projects is not easy. However, what makes this company particularly interesting is the sheer growth they are going through.

As with any new start you get the opportunity to look at things from a different point of view. This has given me the ability to assess the way things are done. What I will present is how things are currently done and the parts that I don't personally like. These points will be discussed and I will propose a way of addressing them, which I call Lean Projects.

Project Management


The system of choice for managing a project is called Podio. The cool aspect of this tool is that it keeps all communication in one place; emails should not be used to communicate in a project. All tasks are created and tracked under the specific projects. A lot of tasks are created by the project managers and assigned to a developer. Sometimes these tasks will take precedence over what is currently being worked on. What are some of the issues that I have found with this approach?

  1. The tool doesn't give you overall visibility. This makes it really hard to prioritise, forecast, or even plan.
  2. The tasks are assigned to developers without always knowing what they are working on. Unfortunately this approach is reactive and it causes too much context switching.


Time sheets


The one thing that I have disliked about this tool is that each individual enters the time they spend on each task. Why don't I like this?

  1. No one tracks time efficiently.
  2. The time is never really accurate.
  3. If you have estimated a time, you will more than likely just enter that amount.

So we were introduced to the Scrum Development Pack expansion for Podio, however it still records everything in hours. I have worked on a few agile projects and I have found that this process does not work for software development.

So why do they do it? Well, this is quite simple. In agency work it is all about billable hours. These billable hours are then passed on to the clients. No billable hours = no money = no job. The last statement is a reality, however it makes me sad thinking about it that way.

Requirements


As with any business that delivers software, we need to gather requirements. This is currently achieved by meeting with the client. We have some great people that really understand the client and the product. Once the meeting is over, all of this is documented in Google Docs. These docs are first used as a wiki and then cleaned up to turn into a requirements document.

The developer will add questions to this document that will be answered by either someone internally or the client. The idea is to try to get all the requirements down so that the developer can estimate. Does this sound like waterfall?

Estimates


Since we have the concept of billable hours, we need to estimate the time it will take for a feature to be built. Estimates are done for multiple features; once that is done, the total is often multiplied by a factor to give it some padding. This estimate is needed as the client needs to sign off on it. So how is this estimate carried out? Well, from what I can see, it is done on gut feel, or as I've heard it put, by using your life experience. What are some of the issues with estimating in this manner?

  1. One does not always know what needs to be done and people tend to be very optimistic.
  2. Studies have confirmed that most people's intuitive sense of "90% confident" is really comparable to something closer to "30% confident".
  3. No one ever thinks about the non functional requirements.
  4. A lot of assumptions are made, which usually are wrong.

Some great articles on this topic:


Planning


Unfortunately I have not been involved in any planning sessions, though I expect that this is due to me having just started there. They have the concept of a WIP meeting. During these meetings we go through the whole list of projects that are being worked on. After this we are taken through all the new work coming up. This meeting goes for about 30-60 minutes and is held once a week with the whole team.

My first impression is that it is too much information to deal with in one meeting. This reminds me of a really big standup where half of the team just switches off.

As a development team we meet every day to talk about what we are doing, however I find that we jump into solution mode rather than do analysis. This is hard as we are nerds and we like to think that way. The rest of the teams seem to meet quite regularly too, which is always great to see.

In part 2 I discuss how we are going to tackle each of these issues.

Friday 1 March 2013

System.Spec - Specifications By Example in C#

System.Spec


I am a big fan of TDD and when I first started doing it I was accustomed to using NUnit in C#. Then I started learning Ruby and fell in love with how they were doing TDD/BDD. The gem they use is called RSpec.

For a while I was jealous that we did not have something as cool in C#. There is NSpec, however for me if did not read that well.

So I decided to create System.Spec. We are currently using it at the place I work and we are really enjoying it. As we continue dogfooding this solution we will build more features into it.

An example specification looks like the following:

namespace YourCompany.Specs
{
    using NSubstitute;
    using System.Spec;

    public class ImportantSpecification : Specification
    {
        protected override void Define()
        {
            Describe("A very cool specification", () => {
                It("should do something that is good", () => {
                    var testInterface = Substitute.For<ITestInterface>();
                    testInterface.Received().TestMethod();
                });
            });
        }
    }
}

As you can see there are some C# ceremonies that we need to take into consideration; Ruby allows for a much cleaner syntax. It closely follows the design of Jasmine.

Moving Forward


The following is what I plan on doing in the coming weeks/months:

  1. Add a runner for ReSharper
  2. Add a runner for VS 2010
  3. Add a runner for Xamarin Studio (if possible)
  4. Allow reordering of tests.
I urge you to look at the solution and if you have any thoughts or enhancements just yell out.