Version Numbers

I have been struggling a bit with version numbers.  Normally version numbers are fairly easy:  Major.Minor.Build.Revision.  So, a release number would be something like 1.2.100.0, and subsequent patches (aka revisions) of that release would be 1.2.100.1, 1.2.100.2, etc.  Unfortunately, there are two forces working against this version numbering schema in the software I am currently working on.

First, the industry does not like change.  We have thousands of clients, and our system is critical to their ability to work with their vendors, process orders, or bill customers.  Therefore, once it works, it works, and they never want to change it.  Trying to upgrade these customers is a challenge, so we have clients on old versions of our software.  Moving from 4.2.x.x to 4.4.x.x or 4.5.x.x seems like a major step for them.  We believe they would be more apt to move from 4.2.1.x to 4.2.6.x because the major and minor are the same.

Second, our software is certified for integration with other software based on the major and minor number.  If we change our major and minor number, we need to re-certify our software at a cost.  Hence, we would rather not update our major and minor number.

Given these forces, we have created a version schema of Major.Minor.SubMinor.BuildNumber.  This works fine for a release (e.g., 4.2.6.2200), but for a revision on top of a release this breaks down because we are out of placeholders.  So, we end up having builds like 4.2.6.2200.1 or 4.2.6.22001.  Both of these are ugly.  Does anyone have any better ideas?
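To make the placeholder problem concrete, here is a small sketch (the version numbers are illustrative, not real releases of ours) showing how a four-slot schema compares cleanly until a patch forces a fifth component:

```python
# Illustrative sketch: parsing versions under the
# Major.Minor.SubMinor.BuildNumber schema described above.
def parse_version(s):
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in s.split("."))

# A release and a later release sort correctly with four slots...
assert parse_version("4.2.1.1050") < parse_version("4.2.6.2200")

# ...but a revision on top of a release has nowhere to go except a
# fifth slot, which breaks tooling that expects exactly four components.
patch = parse_version("4.2.6.2200.1")
assert len(patch) == 5
```

Tuple comparison keeps the ordering correct even with the extra slot, but any certification or installer logic keyed to a fixed four-part format will not accept it.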

Code Organization

I have found that people and companies organize their code differently within a class.  The company I work for now organizes its code by type.  For example, here are the regions I found in a file I was recently working on:

#region Public Events
#region Private Fields
#region Constructor
#region Properties
#region Event Handlers
#region Private Methods
#region Public Methods

Personally, I am not a big fan of this organization, and here is why.  I was refactoring some code today and found a public method that is about 100 lines long.  My first reaction was to split it up into smaller private methods to make the class more manageable.  Since all the private methods are grouped together, I need to move each new method into the Private Methods region in order to stay consistent with the organization strategy.  However, I don’t like this, because now the public method calls a couple of private methods that are nowhere near the originating public method.  So, when I am refactoring this code (note that my company does not use ReSharper, which would make this much easier), I am jumping all over the class instead of just moving around 100–120 lines of code.

If I were to organize the code using regions, I would attempt to group code that is used together.  For example:

#region Constructor
#region Properties
#region Grid Updates
#region Save Methods

The drawback to this approach is that you end up with private methods used by both the Grid Updates and Save Methods regions, and therefore you need a #region Utilities (or Misc), which defeats the entire purpose.  So, I guess nothing is perfect.

Do you see the value in organizing code?  How do you do it?  I would be interested in hearing opinions.

A Bit About Burndown Graphs

I have been creating burndown graphs to track whether we are going to hit our commitments for the iteration.  However, I am less than convinced that they determine our true status.  Case in point: below is my burndown graph, where the purple line represents perfect utilization and the blue line represents current execution.  Based on this graph, it looks like we are about half a day behind and should still finish by 7/5.

However, this graph is deceiving, because we could be ahead on development and very behind on QA, so will we still hit our 7/5 date?  In the pure Agile sense, developers would help out with testing when they finish, because it is a team commitment.  In my experience, developers are not as good at testing as professional QA, so what could take a tester one day to complete could take a developer a day and a half.  So, when you find yourself in this situation, do you re-estimate at each scrum meeting to account for this?  Again, it seems that the real world is butting up against the ideal world.

My manager still likes to see burndown graphs, and I believe they are of some use.  So, here is what I did.  I separated QA and development time so I can see our execution in each of these areas:

This is good, but it really does not get me all the way there.  If I really wanted a full understanding of our status, I would have multiple graphs, each representing an area for which we are responsible (QA, development, documentation, etc.).  Each would have its own execution rate, and each would give me knowledge of our status.  For me, though, I look at the above burndown graph plus the remaining user stories and tasks to determine our status.  Looking at any one of these metrics alone is not going to provide the right status; instead, it takes an experienced project manager to understand the context of the project and the current status.
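The point about different execution rates can be sketched numerically (the hours and rates below are hypothetical, just to show the shape of the problem): the iteration finishes when the slowest area finishes, which a single combined burndown line hides.

```python
# Illustrative sketch: projecting days-to-finish per area instead of
# from one combined burndown line. All numbers are made up.
def days_to_finish(remaining_hours, burn_rate_per_day):
    """Days needed to burn down the remaining work at the observed rate."""
    return remaining_hours / burn_rate_per_day

areas = {
    "Development": days_to_finish(40, 20),  # ahead: 2 days of work left
    "QA":          days_to_finish(45, 9),   # behind: 5 days of work left
}

# The iteration finishes when the slowest area does, so a combined
# burndown that pools the hours can look on track while QA slips.
print(max(areas.values()))  # 5.0
```

A combined graph would pool 85 remaining hours against 29 hours burned per day and report roughly 3 days left, masking the 5 days QA actually needs.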

How I Organized Build Files

This may not be that complicated, but I have gone back and forth on this as I am creating the build infrastructure.  Where should I put my build files?  Of course they should be on the build machine, but how should they be organized?  I narrowed it down to two choices:

  1. Put the files in each module, so there is a build folder containing all build files under trunk and in each of the branches.
  2. Create a separate “module” called Build with all the build files for each module.

As I built the NAnt build and include files, I found that every module shares certain tasks (aka targets), like cleaning a solution, building a solution, copying the built files to an output directory, etc.  Therefore, I started creating shared targets which look at generic variables to execute.  For example, the generic build solution target looks like this:

    <target name="BuildSolution">
        <msbuild project="${master.solution.file}">
            <arg value="/p:Configuration=${target}" />
            <arg value="/p:Platform=${platform}" />
            <arg value="/t:Rebuild" />
        </msbuild>
    </target>

Any module can use this target to build its solution, as long as the generic variables are set up correctly.  So, there are going to be generic targets shared by all modules.
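For instance, a module-level build file might set the generic variables and then invoke the shared target like this (the paths and values here are illustrative, not from an actual module):

```xml
<!-- Hypothetical module build file: assign the generic variables,
     then call the shared target defined above. -->
<property name="master.solution.file" value="..\..\Project1\Project1.sln" />
<property name="target" value="Release" />
<property name="platform" value="Any CPU" />
<call target="BuildSolution" />
```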

I also found there are generic targets shared by trunk and all the branches within a module.  For example, one module needs to copy some COM objects to a folder prior to building.  Trunk and all the branches have to do this, so I created one target to be shared by the entire module:

<target name="CopyCOMObjects">

        <echo>Making ${ui.bin.dir} directory</echo>
        <mkdir dir="${ui.bin.dir}" />

        <copy   file="${referenced.assemblies.dir}\COM_OBJECT_1.DLL"
                tofile="${ui.bin.dir}\COM_OBJECT_1.DLL" />

        <copy   file="${ui.dir}\EXECUTABLE.EXE"
                tofile="${ui.bin.dir}\EXECUTABLE.EXE" />

        <attrib readonly="false" hidden="false">
            <fileset>
                <include name="${ui.bin.dir}\COM_OBJECT_1.DLL" />
                <include name="${ui.bin.dir}\EXECUTABLE.EXE" />
            </fileset>
        </attrib>

</target>

So, there are targets shared by all modules and targets shared within a module.  After giving this some thought, I decided the best way to handle this is to create a new module called Build with a sub-folder for each module.  Each module then has a trunk, branches and shared directory.  All in all it looks like this:
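Roughly, the layout is this (folder names are illustrative):

```
Build
├── Core          (properties, tasks, utilities, builds, projects shared by all modules)
├── Project 1
│   ├── Trunk     (builds, obfuscation, project, properties, tasks, utilities)
│   ├── Branches
│   │   └── each branch mirrors the Trunk layout
│   └── Shared    (properties, tasks, builds, obfuscation, utilities shared within the module)
└── Project 2
    └── same layout as Project 1
```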

As you can see, there is a Core folder which contains all the shared properties, tasks, utilities, builds, projects, etc. for every module.  In the Project 1 and Project 2 modules there are branches folders, and each branch contains its individual builds, obfuscation, project, properties, tasks and utilities folders, just like the Trunk folder does.  Further, there is a shared folder containing the properties, tasks, builds, obfuscation, utilities, etc. which are shared within that module.

I like to keep things organized, and as time goes on and I create more builds and branches, I will see whether this structure helps keep me organized or becomes a maintenance nightmare.

Is a Scrum a Scrum Without the Sacred Three Questions?

At a scrum meeting I would often get the following exchange:

Me:  What did you do yesterday?

Developer:  Worked on the Import Form.

Me:  What are you doing today?

Developer:  Working on the Import Form.

Me:  Do you have any impediments?

Developer:  No

After the scrum, my manager would approach me and ask, “How is the import screen coming?”  Given the above exchange, there was little I could do except tell him, “The developer is working on it and he has no impediments.”  I quickly realized these open-ended questions were not providing enough information for me to truly track the iteration and report up to my manager.  Ideally, my manager would give the team autonomy and not ask for weekly status, and my role as a Scrum Master would not necessitate tracking tasks, because the job is about removing impediments and helping the team collaborate.  But that is not a world I have ever lived in.  Rather, I have found it is my responsibility to understand exactly how we are executing in an iteration, what is going to be completed, what is going to be rolled over, and how this affects the overall release.  The above exchange was not providing me this knowledge.

Given the lack of information from the original line of questioning, I needed to either ask probing questions to determine the actual status of the enhancement, or work off a task list and ask the team the status of each task.  So, here is how I plan in the “Agilish” world I live in:

  • On iteration planning day (we execute in two-week iterations) we review each user story
  • For each of the user stories, we talk through the acceptance criteria and determine the tasks that are necessary to complete the story
  • I document the tasks on a spreadsheet with the following information:
    • Task Name
    • Estimate
    • Assigned Team Member

Here is an example:

Now, at each scrum meeting, I work through all the user stories and ask the assigned team member the status of each task.  This gives me the exact status of each user story, which in turn gives me more visibility into the iteration and release status.
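The rollup from tasks to story status can be sketched like this (the story, task names and hours are hypothetical, just to show the shape of the spreadsheet):

```python
# Hypothetical sketch of the task-tracking spreadsheet as data:
# each row has a story, a task name, an estimate (hours), the assigned
# team member, and the hours still remaining.
tasks = [
    {"story": "Import Form", "task": "Build grid",   "owner": "Dev A", "estimate": 8, "remaining": 0},
    {"story": "Import Form", "task": "Save logic",   "owner": "Dev A", "estimate": 6, "remaining": 4},
    {"story": "Import Form", "task": "QA test pass", "owner": "QA B",  "estimate": 5, "remaining": 5},
]

def story_status(tasks, story):
    """Roll task-level remaining hours up into a user-story status line."""
    rows = [t for t in tasks if t["story"] == story]
    total = sum(t["estimate"] for t in rows)
    remaining = sum(t["remaining"] for t in rows)
    return f"{total - remaining}/{total} hours done"

print(story_status(tasks, "Import Form"))  # 10/19 hours done
```

Asking for remaining hours per task, rather than “what are you working on,” is what turns the scrum answers into something reportable.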

So, does tracking tasks on a spreadsheet and leading scrum meetings as a project manager instead of a scrum master make me no longer Agile?  Truly, I don’t care.  Unless my team provides me excellent status, and I always hire absolutely self-motivated and extremely competent developers, this is the only way that has worked for me.

Here are the open questions that I would like to touch on in future posts:

  • How do you get information in the spreadsheet without typing?
  • Are there tools that do this for you?
  • How do burndown graphs work given different execution rates when developers help test or document?

Creating Builds

This is the second post discussing creating an automated build structure.  After selecting CC.NET for our build infrastructure due to cost, stability and personal knowledge, I need to determine which builds to create to begin streamlining our build process.  While Continuous Integration hardliners may not agree, I created two builds: a Commit Build and a Release Build.

Commit Build:

The commit build is executed every time a developer checks into source control for the project.  The purpose of this build is to ensure that the check-in compiles and any automated tests execute successfully.  This build should run in less than five minutes so the developer can get feedback from their check-in as soon as possible.  The build rebuilds the solution using the debug configuration and updates the assembly version with a debug assembly version, but does not copy assemblies to a directory for consumption.  Therefore, the output of the commit build will never be used; this build exists only for commit verification.

Release Build:

The release build is executed each evening at 11:30 PM, as well as on demand by anyone on the team.  The purpose of this build is to create an output which will be tested and potentially released by the team.  While this build should run under 10 minutes, there is less of a time constraint on it, except that, if it is running during the day, developers should not check in while it is retrieving the source, to limit the chances of the build pulling half a check-in.  The release build builds in release mode, updates the assembly version with the build number, copies the output to a share drive and obfuscates the code.  Ideally, the release build would also build the installer, but to do this the installer would need to support build-to-build migration instead of merely the release-to-release migration it supports now.  Therefore, the output is releasable.
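In CC.NET terms, the two builds map naturally onto the two standard trigger types; a ccnet.config fragment might look roughly like this (project names are hypothetical and the source control block is omitted):

```xml
<!-- Illustrative ccnet.config fragment: an interval trigger for the
     Commit Build and a nightly schedule trigger for the Release Build. -->
<project name="Project1-Commit">
  <triggers>
    <!-- Poll source control every 60 seconds; build only on a change. -->
    <intervalTrigger seconds="60" buildCondition="IfModificationExists" />
  </triggers>
</project>

<project name="Project1-Release">
  <triggers>
    <!-- Fire every night at 11:30 PM regardless of changes;
         the web dashboard's Force Build button covers on-demand runs. -->
    <scheduleTrigger time="23:30" buildCondition="ForceBuild" />
  </triggers>
</project>
```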

As I stated above, some may cringe that I have created two builds, believing every check-in should create a testable or releasable output.  The reason I did not go down this path was the excess of builds it creates.  Copying each commit build to a backed-up server would clutter the server with builds which will never be used.  While storage is cheap, there is no reason to waste it on unused output.  Instead, I achieve the benefits of near-instant feedback with a Commit Build and enable the team to create releasable builds in an ad-hoc fashion with a Release Build.

My team supports many different applications and as we move forward we will be creating a Commit and Release build for each supported branch of each supported application.

In the next post on this subject I am going to discuss how I structured my NAnt builds to share properties and targets and decrease modifications to the ccnet.config file.

Creating a Build Infrastructure

Over the past couple of days, I have heard the following:

  • “Hey development, can I get a new build?”
  • “QA failed a bug fix, but we determined the wrong build was installed”

These two statements are symptoms of a poor or non-existent continuous build process.  In my case it is non-existent, so I am working on creating a build infrastructure from the ground up.  I am going to research build software, determine the right builds to execute and work with the team to educate them on the build process.

Creating a build process does not happen overnight, and it will change over time.  Therefore, this will be a series of posts as I work on it.

I am using Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation as a guide during this process.  I have been very impressed with this book as it is easy to read, gives insights into best practices and provides a lofty goal for my build and deployment process.

So, the first step in this process is to create stop-gap builds, and in order to do that I need to choose the CI software which will run them.  I have used CruiseControl.NET before, but it was recommended that I look into Jenkins, which seems to be the next generation of open source CI software.  After installing it and taking it for a test run, I came away with some pros and cons:


The main pro for Jenkins is that it removes the necessity to write NAnt to make the build work; it allows you to select build activities via drop-downs and plug-ins.  Further, Jenkins has a plethora of plug-ins which could be used for any number of build activities.


While I am sure that Jenkins is running the build infrastructure for many companies, in my short evaluation I could not get my build to work without using a .bat script the developers had created to build the software manually.  The only build activity I was using was the “run .bat file” activity, so I was not taking advantage of Jenkins’ major pro.  Through my research I also found numerous critical and high bugs in their bug tracking system, some of which were a bit dated.  Now, this is open source and free, so I don’t have any expectations of grandeur, but since I am not a full-time build coordinator, I don’t want to keep beating my head against the wall trying to figure out why something does not work.

So, given my past experience with CruiseControl.NET, and the fact that it has been well tested and used throughout the industry, I decided to install CC.NET and start creating the build infrastructure with it.

Now that I have the technology, I need to start creating the builds.  In the next post I will discuss the builds I am creating.