Tuesday, September 16, 2014

Have you already heard about PowerShell Desired State Configuration?

What?! PowerShell, isn’t that for IT pros? I thought this was a blog by a developer. Well, it’s true that I am a software developer at heart. But that doesn’t mean you should ignore what’s going on in the IT pro world.

Why should I care: meet Bob and Joe


Meet Bob the Developer. Bob works on a great new application: project Unicorn. He uses cool techniques like Angular, ASP.NET MVC, WebAPI and Entity Framework to build a stunning SPA. But in essence, his application is a web-based app with a SQL Server database.

Bob has a great team of developers and they are producing quite some code. After a couple of weeks, a tester joins the team and asks if there is a testing environment available. So Bob goes off to the IT guys and asks them for a new machine. Fortunate as Bob is, it only takes a couple of days before his machine is ready! He gets the credentials to remotely access his Windows Server. Now, as a true developer, Bob knows how to install IIS and SQL Server by clicking next –> next –> finish. After doing this, he copies his website and database to the machine, fiddles with some configuration settings and he’s good to go.

All of this is done through a set of GUIs like the Control Panel and the Microsoft Management Console. Now that’s not a big problem for Bob. He knows how to do this. And while he’s at it, he installs his favorite tools and plugins for Windows. Who likes that new start menu on Windows Server anyway? And having Visual Studio locally on the test environment makes it much easier to debug some stuff.

Meet Joe, the IT pro. Joe is given the job to prepare a production environment for Unicorn. He looks at Bob’s test environment, shudders, and starts working on a production environment with all the bells and whistles that are required to get a stable and secure environment that’s up to his standards.
Joe uses PowerShell. He needs to configure a lot of machines and he doesn’t want to do that by hand. Instead, he has collected a great number of scripts over time, which he stores on his hard drive and shares with some of his colleagues.

Things start breaking down


Until now, this doesn’t sound too bad. Maybe you have been a Bob or a Joe in a situation like this. But then, Joe calls Bob.

Joe: Your application doesn’t work.
Bob: Yes it does. It not only works on my machine, but also on the test environment.
Joe: But it doesn’t work in production, and that’s the only thing that matters.

Bob, who clicked through all his GUIs, has no idea what changes he made. And so the search begins. After a long and heated search, Bob and Joe decide they really don’t like each other. Eventually the problem is found: the logs folder was missing a required permission setting.

So what does this have to do with PowerShell Desired State Configuration?


Can you explain what the following script does?

Configuration ContosoWebsite
{
  param ($MachineName)

  Node $MachineName
  {
    # Install the IIS role
    WindowsFeature IIS
    {
      Ensure = "Present"
      Name   = "Web-Server"
    }

    # Install ASP.NET 4.5
    WindowsFeature ASP
    {
      Ensure = "Present"
      Name   = "Web-Asp-Net45"
    }
  }
}

It’s not too hard, is it? This is a PowerShell DSC script. It instructs a server to make sure that IIS and ASP.NET 4.5 are installed.

This script is plain text. It can be read by a developer and by an IT pro. Now imagine that Bob had sat down with Joe when he started preparing the test environment. Instead of clicking through a GUI, Bob could have asked Joe to help him create a DSC script. This script describes exactly what state the server should be in.
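Applying the configuration is just as readable. Here’s a minimal sketch of how that could look, assuming the configuration above is saved as ContosoWebsite.ps1 (the machine name is a placeholder):

. .\ContosoWebsite.ps1                        # Load the ContosoWebsite configuration
ContosoWebsite -MachineName "TESTSRV01"       # Compile it; this writes TESTSRV01.mof to .\ContosoWebsite
Start-DscConfiguration -Path .\ContosoWebsite -Wait -Verbose

The -Wait and -Verbose switches make the run synchronous and chatty, which is handy the first time you try this.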

Since it’s just a script, it can be added to version control. And now that it is in version control, it can be added to the package that the Continuous Integration build creates.

Release Management Update 3 has support for DSC. This means that after your build finishes, Release Management takes your DSC files and applies them to the servers in your environment. These machines can start out completely clean: everything is configured automatically when the DSC script is applied. And whenever someone makes a manual change to a machine, the script reruns and the machine corrects itself.
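You don’t have to take that self-correcting behavior on faith either. On a machine that has had a configuration applied, you can check for drift yourself:

# Returns True when the machine still matches its last applied configuration
Test-DscConfiguration -Verbose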

Now that the script is finished, can you imagine how Joe sets up the new production environment?

If you want to know more about PowerShell DSC, have a look at http://powershell.org. They have some great resources on DSC.

Feedback? Questions? Please leave a comment!

Tuesday, September 9, 2014

Adding Code Metrics to your Team Foundation Server 2013 Build

When implementing a Deployment Pipeline for your application, the first step is the Commit phase. This step should run as many sanity checks on your code as possible in the shortest amount of time. Later steps will actually deploy your application and start running all kinds of other tests.

One check I wanted to add to the Commit phase was calculating Code Metrics for the code base. Code Metrics perform a static analysis of the quality of your code and help you pinpoint the types or methods that have potential problems. You can find more info on Code Metrics on MSDN.

Extending your Team Foundation Server Build


Fortunately for us, TFS uses a workflow-based process template to orchestrate builds. This workflow is based on Windows Workflow Foundation and you can extend it by adding your own custom activities to it.

If you have a look at GitHub, you’ll find a lot of custom activities that you can use in your own templates. One of those is the Code Metric activity, which uses the Code Metric Powertool to calculate Code Metrics from the command line.
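To give you an idea of what the activity wraps, this is roughly the command it executes. A sketch: the path assumes a default Visual Studio 2013 install, and the file names are placeholders:

# metrics.exe analyzes an assembly (/f:) and writes an XML report (/o:)
$metrics = "${env:ProgramFiles(x86)}\Microsoft Visual Studio 12.0\Team Tools\Static Analysis Tools\FxCop\metrics.exe"
& $metrics /f:MyWebApp.dll /o:MetricsResult.xml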

If you check the documentation, using the Code Metric activity comes down to downloading the assemblies, storing them in version control and then adding the custom activity to your build template.

And that would all be true, if you weren’t running Visual Studio/Team Foundation Server 2013. For example, check the following line of code on GitHub:

string metricsExePath = Path.Combine(ProgramFilesX86(),
    @"Microsoft Visual Studio 11.0\Team Tools\Static Analysis Tools\FxCop\metrics.exe");


This code still points to the old, Visual Studio 2012 version of the Code Metrics Powertool. There were also some other errors in the activity. For example, setting FailBuildOnError to false didn’t have any effect.

Fortunately, all the activities are open source. Changing the path was easy. Fixing the FailBuildOnError bug was a little harder, since it’s impossible (to my knowledge) to debug the custom activities directly on the build server.

But there is a NuGet package for that!


But as good developers, we first create a unit test that shows the bug really exists. By fixing the unit test, we then fix our bug. Unit testing Workflow activities is made a lot easier with the Microsoft.Activities.UnitTesting NuGet package.

Using this NuGet package I came up with the following ‘integration’ test:
[TestMethod]
[DeploymentItem("Activities.CodeMetrics.DummyProject.dll")]
public void MakeSureABuildDoesNotFailWhenFailBuildOnErrorIsFalse()
{
    // Arrange
    var activity = new CodeMetrics();

    var buildDetailMock = new Mock<IBuildDetail>();
    buildDetailMock.SetupAllProperties();

    var buildLoggingExtensionMock = new Mock<IBuildLoggingExtension>();

    var host = WorkflowInvokerTest.Create(activity);
    host.Extensions.Add<IBuildDetail>(() => buildDetailMock.Object);
    host.Extensions.Add<IBuildLoggingExtension>(() => buildLoggingExtensionMock.Object);

    host.InArguments.BinariesDirectory = TestContext.DeploymentDirectory;
    host.InArguments.FilesToProcess = new List<string>
    {
        "Activities.CodeMetrics.DummyProject.dll"
    };

    host.InArguments.LinesOfCodeErrorThreshold = 25;
    host.InArguments.LinesOfCodeWarningThreshold = 20;

    host.InArguments.MaintainabilityIndexErrorThreshold = 60;
    host.InArguments.MaintainabilityIndexWarningThreshold = 80;

    host.InArguments.FailBuildOnError = false;

    try
    {
        // Act
        host.TestActivity();

        // Assert: the build should be partially succeeded, not failed
        Assert.AreEqual(BuildStatus.PartiallySucceeded, buildDetailMock.Object.Status);
    }
    finally
    {
        host.Tracking.Trace();
    }
}

I’ve configured the Code Metrics activity to run the analysis against a dummy project dll, with some threshold settings and, of course, FailBuildOnError set to false. Fixing the test is left as an exercise to the reader ;)

Extending the Build Template


As a final step, I’ve added parameters to the build workflow to configure the different thresholds and some other important settings. That way, a user can configure the Code Metrics activity by editing the Build Definition:

[Screenshot: the Code Metrics parameters in the Build Definition editor]

And that’s it! You can download the modified activity code, the workflow unit test and a copy of an implemented build workflow template here.

Useful? Feedback? Please leave a comment!

Tuesday, September 2, 2014

To version control, or to source control: that’s the question

One of the hardest things in software development is naming things. When designing your architecture, creating a method or adding a new variable: naming it correctly is half the work. This is why design patterns, which create a shared vocabulary, are so important.

But naming doesn’t only apply to our code and designs. We use all kinds of tools, techniques and practices that need to have a name, and some have been around for quite a while.

However, that doesn’t mean there isn’t any confusion about naming those things.

Version control or source control?

One particular area of naming problems is source versus version control. Think about it for a moment: what term do the people around you use? What do you use? And can you describe the difference between the two terms?

For example, I was privileged to hear the following discussion at a customer:

Developer: We would like a way to bring the environment configuration under source control, so we can version and test it.
Ops: You don’t have to test our environment configuration. That’s our job. Our configuration scripts are not code, so we don’t want to store them in source control.

Is it true that source control can only be used for actual source code? Is that the reason we started using source control? Or do you use it for all your artifacts, like documentation and configuration, build and deployment scripts?

When moving to a DevOps culture, discussions like these are not uncommon. Making sure that you have a shared vocabulary with all stakeholders really helps in getting your communication running smoothly.

Switching from source control to version control is a small and simple step in that direction.

Tuesday, August 26, 2014

Do you know Microsoft Test Manager?

Application Lifecycle Management is all about getting traceability, visibility and automation into your software development process. When I see customers implementing ALM, they start with things like source control, project management tooling and build servers. Some of the more advanced development teams start looking at release management to automate their deployments. One area that's often overlooked, however, is testing. All too often I see companies use Excel to track their test cases. Testers spend a lot of time executing their tests manually and tracking their progress. When it comes to goals such as continuous delivery, an inefficient testing process can be a big obstacle.

One of the better kept secrets of the Microsoft ALM implementation is Microsoft Test Manager. MTM can help testers with their work and integrate them fully into the ALM process of the overall team.

In this blog post I want to highlight a few options that got me enthusiastic about using MTM.

Meet Microsoft Test Manager


As developers we use Visual Studio. Project managers use the web interface of TFS and Excel. Testers use Microsoft Test Manager. MTM is specifically created for testers. The application is a lot easier to use than Visual Studio and really helps testers in getting their work done.
You can download a free, 90-day trial of Visual Studio Test Professional to check out all the capabilities of MTM.

Fast forwarding your tests


MTM lets testers create test cases that record the steps to test some functionality. A typical test case is shown in the following screenshot of MTM.
[Screenshot: a test case in MTM]

Here you see a test case with a couple of test actions and an expected result. One of the coolest features of MTM is the ability to record your test steps while manually running the test. The next time you execute the test case, you can fast forward through the steps and only pause on the interesting ones. Recording steps works in a lot of applications. The following screenshot shows how a previously recorded test case is automatically played back. In this case, you will see MTM automatically open your browser, navigate to the correct website, perform some actions on the site and then pause so you can decide if the outcome is correct.

[Screenshot: automatic playback of a recorded test case]

Imagine how much time this can save your testers! Instead of having to manually repeat all steps for every test case they are executing, they can automatically fast forward to the interesting steps in their test.

Data Collection


When running a test case, MTM helps you by automatically collecting all kinds of data that can help in reproducing and fixing bugs. This is done by using so-called Data Collectors. By default, you can collect System Information like the Windows version, resolution, language settings and much more. But as you can see in the following screenshot, this list can easily be expanded:
[Screenshot: the list of available Data Collectors]

One of the options is to record your tester's screen and voice. Or what about IntelliTrace data? When data is collected, it gets automatically attached to the test case or to any bugs created by the tester. No more struggling with testers to get all the information you need to fix a bug. Just configure a Data Collector for them and let them run their tests while you get all the data you want.

One notable option is Test Impact Analysis. When you go full ALM with TFS, you can configure your build servers to deploy to test environments. The build server can analyze what has changed in a certain build and map this to the test cases your testers are running. By combining this data, MTM can make a pretty good prediction of which test cases need to be run on a new version of your application.

Exploratory Testing


What if you have no formal test cases, or you don't have any testers on your team? What if you want to do some testing work as a developer, or let a stakeholder go through your application while making sure you can reproduce what he's done? Meet Exploratory Testing. By starting an Exploratory Testing session from MTM, you can follow your own intuition and test whatever looks important to you. In the meantime, the full power of MTM helps you by recording your actions, allowing you to easily create screenshots and add comments to your testing session. Whenever you encounter a bug, you can attach all relevant recorded data to it and put it in TFS. For example, the following bug was created while running an exploratory testing session. Do you notice the steps that were automatically recorded in the Steps to reproduce panel? You can edit those steps, add extra information and combine this with the automatically collected video or IntelliTrace data.
[Screenshot: a bug created during an exploratory testing session, with automatically recorded steps]

MTM is cool!


These are only three of the features that get me enthusiastic about MTM. But there is a lot more: using parameters, tracking progress, using different configurations, using the web interface and so on. If you want to experiment with MTM, you can download the 90-day trial of Visual Studio Test Professional or get the Brian Keller VM with a couple of Hands-On Labs to quickly get a tour of all MTM has to offer you.

Let me know what you think of it!

Questions? Feedback? Please leave a comment!