Tuesday, October 7, 2014

Optimizing Team Foundation Server Build Time

Fast feedback is important. Knowing you broke something a month ago or just a few minutes ago can make a huge difference.

More and more teams use a build server to compile the code and run a variety of checks on each check-in. This is often referred to as the commit stage. One nice TFS feature that guards the quality of your code base is the Gated Check-in. A Gated Check-in means the whole commit stage is executed on the code a developer wants to check in. Only when the commit stage succeeds is the check-in allowed.

This way, you can be sure that the code on your server is always in a good state. An important characteristic of successful gated check-ins is that they are fast. If developers have to wait a long time, they will start checking in less regularly and find ways around your build server to share code with other developers.

How can you optimize the build time of your Gated Check-in?


What’s acceptable?


An acceptable build time depends on your team. However, from experience you should aim for a Gated Check-in build of less than five minutes. Somewhere around one to two minutes would be perfect, but that’s often hard to achieve. It’s important to measure your build time regularly and optimize whenever it starts creeping up.


1. Check your hardware


The Team Foundation Server build architecture consists of two important parts:
  • Build Controllers
  • Build Agents
A Build Controller connects to Team Foundation Server. It monitors your build queue and contains the logic for executing the build workflow. The actual build runs on a Build Agent. This means that Build Controllers and Build Agents have different resource needs.

A Controller is memory intensive. An Agent is both CPU and disk intensive. When you do a default installation of a TFS Build Server, both the Agent and the Controller are installed on the same machine. This is an easy setup and requires fewer servers, but if build speed becomes an issue, an easy solution is to scale out your build hardware. Moving your Agents to a different server than your Controller is a first and easy step.

Instead of scaling out you can also scale up by using a machine with more CPU cores, more memory and faster disks. I was once at a customer whose build took 90 minutes to run. Because they were constrained by their on-premises hardware, we moved the build server to Azure. By using a larger VM size on Azure, the build time dropped to 15 minutes. That’s a very quick win.


2. What are you building?


When you create a new standard Build Definition in Visual Studio, you get a build that monitors all the code in your team project. This is an easy way to get started, since you probably have only a single source tree with one solution file in it that contains your code.

This is defined in your Source Settings in your Build Definition file as you can see in the following image:

image

Here you see there are two work folders defined. One points to your source code and is marked as Active. The Cloaked working folder makes sure that your Drops folder (which is created by a build that copies the resulting binaries to the TFS server) is not monitored.

When the Build Server starts a new build it gets a fresh copy (in the next section you will see whether this is incremental or from scratch) of all the source files beneath your working folder mappings. If you start adding multiple branches or other folders that contain documentation or other files that are not essential for your build, the build server will download all of those files every time a build runs.

Modifying your Source Settings to only include the files you really need can speed up your build time quite substantially.
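
If you want to check what a build is actually downloading, you can also inspect the working folder mappings through the TFS object model. Here is a minimal PowerShell sketch, assuming the TFS 2013 client assemblies are available on the machine; the collection URL, team project and build definition names are placeholders:

# List the working folder mappings (Map = Active, Cloak = Cloaked) of a build definition.
Add-Type -AssemblyName "Microsoft.TeamFoundation.Client, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
Add-Type -AssemblyName "Microsoft.TeamFoundation.Build.Client, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"

$collection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection(
    (New-Object Uri "http://tfs:8080/tfs/DefaultCollection"))
$buildServer = $collection.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])
$definition  = $buildServer.GetBuildDefinition("MyTeamProject", "Unicorn Gated Check-in")

# Everything that shows up here as Map is downloaded on every build.
$definition.Workspace.Mappings | Select-Object MappingType, ServerItem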


3. How are you getting your sources?


If you create a new Build Definition you will see the following setting in the Process tab:

image

What’s important to note is that the Clean workspace property is set to true. This means that every time a build runs, all the source files are first removed from the Build Server and then the whole source tree is downloaded.

By setting this value to false, you will do an incremental get, just as you would in Visual Studio. The Build Server will only download the files that have changed since the last build.

Now of course, this is a nice default. Having a clean slate every time you run a build makes sure you won’t have any artifacts left from previous builds.

But you know your code and your team best. You can experiment with setting this value to false and checking what it does to your build time.
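
To get a feel for the difference, it is roughly comparable to what you can do in a developer workspace with tf.exe (the local workspace path is just a placeholder):

# Incremental get: only files that changed on the server come down.
cd C:\Builds\Unicorn\src
tf get

# 'Clean workspace' behavior: download everything again and overwrite what is
# already on disk (/force combines /all and /overwrite).
tf get /force /recursive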


4. Where are you getting your sources from?


Normally you connect your Build Server directly to Team Foundation Server. This means that when the Build Server needs to download a newer version of a file, it goes directly to the TFS server and downloads it from there.

By using a Team Foundation Server Proxy you can add a cache that sits between your Build Server and the TFS server. The proxy caches source files and optimizes getting files from TFS. Especially when your Build Server and TFS server are not co-located, a proxy can save you huge amounts of time when you install it close to your Build Server (on the Controller machine or on a separate server in the same location as your build infrastructure).

See the following MSDN article for configuration details: Configure Team Foundation Build Service to Use Team Foundation Server Proxy
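
I won’t repeat the whole configuration here, but as a sketch of one common approach: the build service picks up the proxy settings of the account it runs under. The registry path below is an assumption on my side (it differs per Visual Studio version), so double-check it against the MSDN article; the proxy URL is a placeholder.

# Assumption: TFS 2013 / Visual Studio 12.0 hive. Run as the build service
# account on the build machine, then restart the build service.
$key = "HKCU:\Software\Microsoft\VisualStudio\12.0\TeamFoundation\SourceControl\Proxy"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name Enabled -Value "True"
Set-ItemProperty -Path $key -Name Url -Value "http://tfsproxy.mycompany.local:8081"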


5. How are you building?


There is a huge chance your Build Agent is running on a multi-core machine. MSBuild, which is used in the build template to do the actual compilation of your code, has an option to use multiple cores and parallelize your build process.

If you look at MSDN you will see the following documentation:
/maxcpucount[:number]
/m[:number]

Specifies the maximum number of concurrent processes to use when building. If you don't include this switch, the default value is 1. If you include this switch without specifying a value, MSBuild will use up to the number of processors in the computer. For more information, see Building Multiple Projects in Parallel with MSBuild.

The following example instructs MSBuild to build using three MSBuild processes, which allows three projects to build at the same time:

msbuild myproject.proj /maxcpucount:3


By adding the /m option to your TFS build, you start utilizing multiple cores to execute your build. You can add this option to the MSBuild arguments property on the Process tab. This is also a setting you have to test to make sure it works for you: sometimes you will get Access Denied errors because multiple processes are trying to write to the same folder.


6. Building less


When you are dealing with a huge application that consists of several sub applications, you can split the application into distinct parts that can be built separately. By using NuGet and your own NuGet repository, you can have those parts publish a precompiled, ready-to-go NuGet package. That package can then be used by the other applications that depend on it.

This means that you won’t have to build all your source code every time you run a build. Instead you only build those parts that have changed and reuse the resulting NuGet packages in other parts of your build.

If you have a look at the NuGet documentation you will find some easy steps that allow you to create a package and set up your own NuGet server.
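
As a rough sketch of what publishing such a sub part from your build could look like (the project name, feed URL and API key are placeholders):

# Package a sub application as a precompiled NuGet package and push it to
# your own NuGet repository.
$apiKey = "00000000-0000-0000-0000-000000000000"   # placeholder
nuget pack MyCompany.SubSystem.csproj -Build -Properties Configuration=Release
nuget push MyCompany.SubSystem.1.0.0.nupkg -Source http://nuget.mycompany.local/nuget -ApiKey $apiKey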

And these are my favorite steps for optimizing a TFS build. What are your favorite steps? Anything that I missed? Please leave a comment!

Tuesday, September 23, 2014

My SDN presentations

Last week I gave two presentations and took part in an Ask the Experts session at SDN. If you are in the Netherlands, SDN is a great event to visit!

What you may have missed in C#


C# 6 is coming with a bunch of exciting new features. However, have you already mastered all the previous versions of C#? In this session you will first take a swift tour across all versions of C#, making sure you’re up to date on everything, followed by an in-depth look at the new features of C# 6. Finally, you will learn how Roslyn, the new C# compiler, will change your life as a C# developer.

Slides


Getting some Insight with Application Insight


Do you know what your customers are doing? Can you respond quickly to incidents? Do you know your customers’ favorite features? This session shows you how to get insight into these things and more. You will learn how to use Application Insights to monitor your web, native and desktop applications. Through code examples and real-world scenarios you will see what Application Insights can offer you and how you can start using it right away.

Slides

Ask the experts


At the end of the day, Hassan Fadili (my friend and fellow ALM enthusiast) and I had an hour of Ask the Experts. We had some interesting discussions ranging from PowerShell Desired State Configuration and Release Management to build server optimizations.
All in all, it was a great day!

If you are interested in having a .NET or ALM session at your company, send me a mail or leave a comment!

Tuesday, September 16, 2014

Have you already heard about PowerShell Desired State Configuration?

What?! PowerShell, isn’t that for IT pros? I thought this was a blog by a developer. Well, it’s true that I am a software developer at heart. But that doesn’t mean you should ignore what’s going on in the IT pro world.

Why should I care: meet Bob and Joe


Meet Bob the Developer. Bob works on a great new application: project Unicorn. He uses cool techniques like Angular, ASP.NET MVC, WebAPI and Entity Framework to build a stunning SPA. But in essence his application is a web based app with a SQL Server database.

Bob has a great team of developers and they are producing quite some code. After a couple of weeks, a tester joins the team and asks if there is a testing environment available. So Bob goes off to the IT guys and asks them for a new machine. Fortunate as Bob is, it only takes a couple of days before his machine is ready! He gets the credentials to remotely access his Windows Server. Now, as a true developer, Bob knows how to install IIS and SQL Server by clicking next –> next –> finish. After doing this, he copies his website and database to the machine, fiddles with some configuration settings and he’s good to go.

All of this is done through a set of GUIs like the Control Panel and the Microsoft Management Console. That’s not a big problem for Bob: he knows how to do this. And while he’s at it, he installs his favorite tools and plugins for Windows. Who likes that new Start menu on Windows Server? And having Visual Studio locally on the test environment makes it much easier to debug some stuff.

Meet Joe, the IT pro. Joe is given the job of preparing a production environment for Unicorn. He looks at Bob’s test environment, shudders, and starts working on a production environment with all the bells and whistles required to get a stable and secure setup that’s up to his standards.

Joe uses PowerShell. He needs to configure a lot of machines and he doesn’t want to do it by hand. Instead, he has collected a great number of scripts over time that he stores on his hard drive and shares with some of his colleagues.

Things start breaking down


Until now, this doesn’t sound too bad. Maybe you have been a Bob or a Joe in a situation like this. But then, Joe calls Bob.

Joe: Your application doesn’t work
Bob: Yes it does. It not only works on my machine but also on the test environment
Joe: But it doesn’t work in production and that’s the only thing that matters

Bob, who clicked through all his GUIs, has no idea what changes he made. And so the search begins. After a long and heated search, Bob and Joe decide they really don’t like each other. Eventually the problem is found: a permission setting is required on a logs folder.

So what does this have to do with PowerShell Desired State Configuration?


Can you explain what the following script does?

Configuration ContosoWebsite
{
  param ($MachineName)

  Node $MachineName
  {
    #Install the IIS Role
    WindowsFeature IIS
    {
      Ensure = "Present"
      Name = "Web-Server"
    }
    
    #Install ASP.NET 4.5
    WindowsFeature ASP
    {
      Ensure = "Present"
      Name = "Web-Asp-Net45"
    }
  }
}

It’s not too hard, is it? This is a PowerShell DSC script. It instructs a server to make sure that IIS and ASP.NET 4.5 are installed.
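
As a minimal sketch of how you would apply it (assuming the configuration above is saved as ContosoWebsite.ps1; the machine name is a placeholder):

# Load the configuration, compile it to a MOF file and push it to the node.
. .\ContosoWebsite.ps1
ContosoWebsite -MachineName "WEB01"    # creates .\ContosoWebsite\WEB01.mof
Start-DscConfiguration -Path .\ContosoWebsite -Wait -Verbose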

This script is plain text. It can be read by a developer and an IT pro. Now imagine that Bob had sat down with Joe when he started preparing the test environment. Instead of clicking through a GUI, Bob could have asked Joe to help him create a DSC script. This script describes exactly what state the server should be in.

Since it’s just a script, it can be added to version control. And once it’s in version control, it can be added to the package that your Continuous Integration build creates.

Release Management Update 3 has support for DSC. This means that after your build finishes, Release Management takes your DSC files and applies them to the set of servers in your environment. These machines can start out completely clean: everything is configured automatically when the DSC script is applied. And whenever someone makes a manual change to a machine, the script reruns and the machine corrects itself.

Now that the script is finished, can you imagine how Joe sets up the new production environment?

If you want to know more about PowerShell DSC, have a look at http://powershell.org. They have some great resources on DSC.

Feedback? Questions? Please leave a comment

Tuesday, September 9, 2014

Adding Code Metrics to your Team Foundation Server 2013 Build

When implementing a Deployment Pipeline for your application, the first step is the Commit phase. This step should run as many sanity checks on your code as possible in the shortest amount of time. Later steps will actually deploy your application and start running all kinds of other tests.

One check I wanted to add to a Commit phase was calculating Code Metrics for the code base. Code Metrics perform a static analysis of the quality of your code and help you pinpoint the types or methods that have potential problems. You can find more info on Code Metrics at MSDN.

Extending your Team Foundation Server Build


Fortunately for us, TFS uses a workflow-based process template to orchestrate builds. This workflow is based on Windows Workflow Foundation and you can extend it by adding your own (custom) activities to it.

If you have a look at GitHub you’ll find a lot of custom activities that you can use in your own templates. One of those is the Code Metric activity, which uses the Code Metric Powertool to calculate Code Metrics from the command line.

If you check the documentation, using the Code Metric activity comes down to downloading the assemblies, storing them in version control and then adding the custom activity to your build template.
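
Under the covers the Powertool is just a command-line tool. As a small sketch of what a run looks like on a machine with Visual Studio 2013 installed (the /f and /o switches come from the Powertool's command-line help; the assembly name is a placeholder):

# Run the Code Metrics Powertool against a single assembly and write the
# results to an XML file.
$metricsExe = Join-Path ${env:ProgramFiles(x86)} `
    "Microsoft Visual Studio 12.0\Team Tools\Static Analysis Tools\FxCop\Metrics.exe"
& $metricsExe /f:MyApplication.dll /o:MetricsResult.xml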

Those steps would be enough if you weren’t running Visual Studio/Team Foundation Server 2013. For example, check the following line of code on GitHub:

string metricsExePath = Path.Combine(ProgramFilesX86(),
    @"Microsoft Visual Studio 11.0\Team Tools\Static Analysis Tools\FxCop\metrics.exe");


This code still points to the old version of the Code Metrics Powertool. There were also some other errors in the activity. For example, setting FailBuildOnError to false doesn’t have any effect.

Fortunately, all the activities are open source. Changing the path (to the Visual Studio 12.0 folder that ships with Visual Studio 2013) was easy. Fixing the FailBuildOnError bug was a little harder, since it’s impossible (to my knowledge) to debug the custom activities directly on the Build Server.

But there is a NuGet package for that!


As good developers, we first create a unit test that shows the bug really exists. By fixing the unit test, we then fix our bug. Unit testing Workflow activities is made a lot easier with the Microsoft.Activities.UnitTesting NuGet package.

Using this NuGet package I came up with the following ‘integration’ test:
[TestMethod]
[DeploymentItem("Activities.CodeMetrics.DummyProject.dll")]
public void MakeSureABuildDoesNoFailWhenFailBuildOnErrorIsFalse()
{
    var activity = new CodeMetrics();

    var buildDetailMock = new Mock<IBuildDetail>();
    buildDetailMock.SetupAllProperties();

    var buildLoggingExtensionMock = new Mock<IBuildLoggingExtension>();

    var host = WorkflowInvokerTest.Create(activity);
    host.Extensions.Add<IBuildDetail>(() => buildDetailMock.Object);
    host.Extensions.Add<IBuildLoggingExtension>(() => 
           buildLoggingExtensionMock.Object);
    host.InArguments.BinariesDirectory = TestContext.DeploymentDirectory;
    host.InArguments.FilesToProcess = new List<string> 
    { 
       "Activities.CodeMetrics.DummyProject.dll" 
    };

    host.InArguments.LinesOfCodeErrorThreshold = 25;
    host.InArguments.LinesOfCodeWarningThreshold = 20;

    host.InArguments.MaintainabilityIndexErrorThreshold = 60;
    host.InArguments.MaintainabilityIndexWarningThreshold = 80;

    host.InArguments.FailBuildOnError = false;

    try
    {
        // Act
        host.TestActivity();

        Assert.AreEqual(BuildStatus.PartiallySucceeded,  
                 buildDetailMock.Object.Status);
    }
    finally
    {
        host.Tracking.Trace();
    }
}

I’ve configured the Code Metrics activity to run the analysis against a Dummy project dll with some threshold settings and of course the FailBuildOnError set to false. Fixing the test is left as an exercise to the reader ;)

Extending the Build Template


As a final step I’ve added parameters to the build workflow for configuring the different thresholds and some other important settings. That way, a user can configure the Code Metrics activity by editing the Build Definition:

image

And that’s it! You can download the code with the modified activity code, the workflow unit test and a copy of an implemented build workflow template here.

Useful? Feedback? Please leave a comment!