Tuesday, September 2, 2014

To version control, or to source control: that’s the question

One of the hardest things in software development is naming things. Whether you're designing your architecture, creating a method or adding a new variable, naming it correctly is half the work. This is why design patterns that create a shared vocabulary are so important.

But naming doesn't only apply to our code and designs. We use all kinds of tools, techniques and practices that need to have a name, and some have been around for quite a while.

However, that doesn’t mean there isn’t any confusion on naming those things.

Version control or source control?

One particular area of naming problems is around source or version control. If you think about it for a moment, what term are the people around you using? What do you use? And can you describe the differences between those two terms?

For example, I was privileged to hear the following discussion at a customer:

Developer: We would like a way to bring the environment configuration under source control so we can version and test it.
Ops: You don't have to test our environment configuration. That's our job. Our configuration scripts are not code, so we don't want to store them in source control.

Is it true that your source control can only be used for actual source code? Is that the reason we started using source control? Or do you use it for all your artifacts like documentation and configuration, build and deployment scripts?

When moving to a DevOps culture, discussions like these are not uncommon. Making sure that you have a shared vocabulary with all stakeholders really helps in getting your communication running smoothly.

Switching from source control to version control is a small and simple step in that direction.

Tuesday, August 26, 2014

Do you know Microsoft Test Manager?

Application Lifecycle Management is all about getting traceability, visibility and automation into your software development process. When I see customers implementing ALM, they start with things like source control, project management tooling and build servers. Some of the more advanced development teams start looking at release management to automate their deployments. One area that's often overlooked, however, is testing. All too often I see companies use Excel to track their test cases. Testers spend a lot of time executing their tests manually and tracking their progress. When it comes to goals such as continuous delivery, an inefficient testing process can be a big obstacle.

One of the better kept secrets of the Microsoft ALM implementation is Microsoft Test Manager. MTM can help testers with their work and integrate them fully into the ALM process of the overall team.

In this blog post I want to highlight a few options that got me enthusiastic about using MTM.

Meet Microsoft Test Manager


As developers we use Visual Studio. Project managers use the web interface of TFS and Excel. Testers use Microsoft Test Manager. MTM is created specifically for testers. The application is a lot easier to use than Visual Studio and really helps testers in getting their work done.
You can download a free, 90-day trial of Visual Studio Test Professional to check out all the capabilities of MTM.

Fast forwarding your tests


MTM lets testers create test cases that record the steps to test some functionality. A typical test case is shown in the following screenshot of MTM.
  Test Case in MTM

Here you see a test case with a couple of test actions and an expected result. One of the coolest features of MTM is the ability to record your test steps while manually running the test. The next time you execute the test case, you can fast forward through the steps and only pause on the interesting ones. Recording steps works in a lot of applications. The following screenshot shows how a previously recorded test case is automatically played back. In this case, you will see MTM automatically open your browser, navigate to the correct website, perform some actions on the site and then pause so you can decide if the outcome is correct.

  image

Imagine how much time this can save your testers! Instead of having to manually repeat all steps for every test case they are executing, they can automatically fast forward to the interesting steps in their test.

Data Collection


When running a test case, MTM helps you by automatically collecting all kinds of data that can help in reproducing and fixing bugs. This is done by using so-called Data Collectors. By default, you can collect System Information like Windows version, resolution, language settings and much more. But as you can see in the following screenshot, this list can easily be expanded:
  image

One of the options is to record your tester's screen and voice. Or what about IntelliTrace data? When data is collected, it gets automatically attached to the test case or to any bugs created by the tester. No more struggling with testers to get all the information you need to fix a bug. Just configure a Data Collector for them and let them run their tests while you get all the data you want.

One notable option is Test Impact analysis. When you go full ALM with TFS, you can configure your build servers to deploy to test environments. The build server can analyze what has changed in a certain build and map this to the test cases that your testers are running. By combining this data, MTM can make a pretty good prediction of which test cases need to be run on a new version of your application.

Exploratory Testing


What if you have no formal test cases or you don't have any testers on your team? What if you want to do some testing work as a developer, or let a stakeholder go through your application while making sure you can reproduce what he's done? Meet Exploratory Testing. By starting an Exploratory Testing session from MTM you can follow your own intuition and test whatever looks important to you. In the meantime, the full power of MTM helps you by recording your actions, allowing you to easily create screenshots and add comments to your testing session. Whenever you encounter a bug, you can attach all relevant recorded data to the bug and put it in TFS. For example, the following bug was created while running an exploratory testing session. Do you notice the steps automatically recorded in the Steps to reproduce panel? You can edit those steps, add extra information and combine this with the automatically collected video or IntelliTrace data.
  image

MTM is cool!


These are only three of the features that get me enthusiastic about MTM. There is a lot more, however: using parameters, tracking progress, using different configurations, using the web interface and much more. If you want to experiment with MTM, you can download the 90-day trial of Visual Studio Test Professional or get the Brian Keller VM, which comes with a couple of Hands On Labs, to quickly get a tour of all MTM has to offer.

Let me know what you think of it!

Questions? Feedback? Please leave a comment!

Tuesday, August 19, 2014

Accessing TFS from behind a proxy

Lately I’ve been working with a client with very strict security rules. One of their policies is that all internet traffic runs through a proxy. This causes some problems when accessing a remote Team Foundation Server over HTTP.

One of the issues we ran into was using Excel to connect to Team Foundation Server to select queries and edit work items.

Excel gave the following error:

image

As you can see at the bottom, a 407 Proxy Authentication error is mentioned. This error happens because, by default, programs like Excel and Visual Studio are not configured to use your proxy settings when connecting to another application.

Proxy settings


Configuring a .NET application to use a proxy can be done by editing its app.config file. If you look at the MSDN documentation you'll see that you can use the defaultProxy element:

<defaultProxy enabled="true|false" useDefaultCredentials="true|false">
    <bypasslist> … </bypasslist>
    <proxy> … </proxy>
    <module> … </module>
</defaultProxy>

The most important settings for our situation are the enabled and useDefaultCredentials attributes. Both should be set to true to make sure that Excel uses the proxy settings when connecting to TFS.

You can also specify a proxy element that specifies which proxy can be used:

<proxy usesystemdefault="True" bypassonlocal="True"/>

By setting usesystemdefault to True, your application will use the proxy settings that are configured in Internet Explorer. In corporate environments these settings are most often configured through a group policy so all computers have the correct settings.
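As a side note: if you're unsure which proxy those system settings actually resolve to, Python's urllib offers a quick (if unorthodox) way to inspect them from a script. On Windows it reads the same Internet Explorer/registry configuration that usesystemdefault refers to:

```python
import urllib.request

# Returns a dict such as {'http': 'http://proxy.corp.local:8080'}.
# On Windows the values come from the IE/registry proxy settings;
# on other platforms from the http_proxy/https_proxy variables.
proxies = urllib.request.getproxies()
print(proxies)
```

The proxy address shown above is a made-up example; run it on the machine where Excel fails to see what that machine is actually configured with.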

Combining these will give you the following:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.net>
        <defaultProxy enabled="true"
                      useDefaultCredentials="true">
            <proxy usesystemdefault="True"
                   bypassonlocal="True"/>
        </defaultProxy>
    </system.net>
</configuration>

Configuring Excel


If you have the 32-bit version of Excel 2013 installed, the executable can be found in:

C:\Program Files (x86)\Microsoft Office\Office15

If you have a different version of Excel (like 2007), you have to look in your Microsoft Office folder for the correct version number.

To add the proxy configuration, create a new text file in this folder named excel.app.config with the following content:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.net>
        <defaultProxy enabled="true"
                      useDefaultCredentials="true">
            <proxy usesystemdefault="True"
                   bypassonlocal="True"/>
        </defaultProxy>
    </system.net>
</configuration>

Make sure you restart Excel so that it loads the new settings. And that's all you have to do. With these settings in place, Excel will use your proxy configuration and allow you to connect to Team Foundation Server.

Questions? Feedback? Please leave a comment!

Tuesday, August 12, 2014

Continuously deploying your database with Release Management - Part 2

In Continuously deploying your database with Release Management – Part 1, we looked at how to use SQL Server Data Tools to bring your database under source control. By using the Publish wizard you can easily deploy your changes to a target database.

But this process was still manual. In this blog post, you’ll use Release Management to deploy both your database and web application to a target server.

Configuring your target environment


When using Release Management you need a couple of components:

  • Team Foundation Server with Release Management installed and configured
  • Build server
  • Deployment Agent
  • Release Management Client

In my scenario, I have TFS running on a virtual machine in Azure (a great way to set up a demo environment!). I've installed both the Deployment Agent and the Client on my local development PC (if you have problems connecting your client and agent to the server, see Connecting release management clients and deployment agents across domains). I've configured Release Management to link to my TFS instance.

Configuring your release path


After you have installed the deployment agent and client on your development machine, you can start configuring your release path.

First you need to add a new server that points to your local PC. By going to Configure Paths –> Servers –> Scan for New you can select your local PC and add it as a new server.

image

Make sure that you configure the Drop Location Access correctly for your server. If your Agent is part of the Team Foundation Server domain and can access the build drop location of your build server by UNC path, you can select the left option. If your Agent is outside of the build domain, select the HTTP option. This means that all sources and tools will be copied to your target machine over HTTP(S).

image

In Release Management, you link your servers together in an Environment. Maybe your application uses both a database server and a web server, and you want to link those two servers together. In this sample we use only one server, which will host both the database and the website.

By going to Configure Paths –> Environments you can create a new Environment. By choosing Link Existing you can pick the server you just added to your environment.

image

Now that you have a server that's part of an environment, it's time to create your Release Path. A Release Path is the workflow that your application will go through. For example, maybe you are running a DTAP street (Development, Test, Acceptance, Production). Your Release Path determines the order in which you want to deploy to those environments, who's responsible for each environment, and which steps are manual (like verifying a deployment) and which are automated (like running tests).

If you don’t have any Stage types defined, first go to Administration –> Manage Pick Lists and Add a new Stage Type.

image

You can now create a new Release Path by going to Configure Paths –> Release Paths and choosing New. In this case, your Release Path has only one Stage since we're using only one environment. If you had multiple environments, you could add multiple stages.

image 

Configuring your Build


Before getting into creating your actual release template, you need to make sure that there is something to deploy. You do so by creating a build that runs on your TFS Build Server and creates the required packages for deploying your application.

To connect your build to Release Management you need a special Build Template. You can download the correct version here. You need to copy this file to the BuildProcessTemplates folder in your source control (create the folder if it doesn’t exist).

After checking in the template file, you can create a build that uses it. You create a new build definition by navigating to Team Explorer –> Builds and selecting New Build Definition.

You can configure a name for your template and some trigger settings but that’s not really important for now.

What is important is the tab Build Defaults. Here you should select the option Copy build output to the following drop folder. This drop folder should be accessible by your Build Server and the Release Management Server.

image

On your Process tab, you need to select the ReleaseTfvcTemplate.12.xaml file that you just added to source control. By selecting the New button you can browse source control and find your file in the BuildProcessTemplates folder.

image

In the bottom part of the process tab you need to configure your build process. You should specify the solution you want to build and add the build and release configurations.

image

Now you can save your Build Definition and fire off a new build! This build will fail because it can't release your application yet. After the build has failed, you can navigate to the drop folder that you configured on the Build Defaults tab. Here you will see a new folder containing a bunch of files and folders.

One of those files should be your MyDatabase.dacpac file. Your website will be stored in the folder _PublishedWebsites\GettingStartedWithSSDT.

Both the location of your dacpac and website are required for configuring your release template.

Configuring your Release Template


Now you can switch back to the Release Management client and take the final steps in setting up Release Management.

At the moment you have the following:

  • A Visual Studio Solution with a website and database project
  • A Build Definition based on Release Management
  • A Release Path with one environment that consists of one server

The final step is creating your Release Template. Your application has two parts: the website and your database. For those parts, you need to create a Component in Release Management.

If you go to Configure Apps –> Components –> New you can create a new Component. To create a component for your website you need to first specify the location in your Build Drop Location.

image

And on the Deployment tab you should configure how your website needs to be deployed. One of the easiest options in Release Management is to choose XCopy. This will just copy your website to a specified location (probably an IIS folder) on your target server.

image

To create a component for your database specify \ as the Build Drop Location to signal that your dacpac is located at the root of your build drop.

image

And for your Deployment Tool you can select the DACPAC Database Deployer. In my case, I’ve used a custom component called DACPAC Database Deployer v12 to deploy to SQL Server 2014. You can find instructions on how to add this component yourself here: Release Management for Visual Studio 2013 and SQL Server 2014 Database Deployments.

image

Creating your Release Template


Now that you have your components, you can use them in your actual Release Template. You start by going to Configure Apps –> Release Templates –> New.
image

Here you map your Release Template to the Release Path you created and to the Build Definition on your TFS server. As you can see, everything is starting to come together.

In your Toolbox you will see a category called Components. By right clicking here and selecting Add you can link your database and website component to this template.

You can now drag your target server and the two components you have created to your workflow.

image

This template will deploy the dacpac to your localdb instance and copy the website to a local folder on your PC. Of course, this template is not production ready yet! You should add backup and rollback steps to make sure that your server stays in a good state when a deployment fails.

But for this demo, you have set up continuous deployment of your database and website!

Running a Release


If you now switch back to Visual Studio and queue a new build you will trigger the release template you just created.

Your build will now succeed and you will have a new database MyReleasedDatabase on your localdb instance. If you check your c:\temp folder you should also find your published website there.

Configuring your connection string


One thing you haven’t taken care of is making sure that the connection string of your website points to your newly configured database.

Release Management supports a process called tokenizing where you define values in your code that you want to be changed during deployment.

You start by making a copy of your web.config file and naming it web.config.token. Make sure that your file is copied to your output directory.

image

In the token file you will see that your connection string is:

<add name="MyModelContainer" connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=(localdb)\v11.0;initial catalog=SSDTSample;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />

To create a token you change your connection string to:

<add name="MyModelContainer" connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient; provider connection string=&quot;__ConnectionString__&quot;" providerName="System.Data.EntityClient" />

By replacing the provider connection string with __ConnectionString__ you’ve introduced a token called ConnectionString. After adding this file to source control you can configure your token further in your Release Management Client.
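Under the hood, the tokenizing step is essentially a find-and-replace over the .token file. A minimal sketch in Python (this is an illustration, not the actual Release Management implementation):

```python
import re

def apply_tokens(template: str, values: dict) -> str:
    # Replace every __Name__ token with its configured value --
    # conceptually what Release Management does when it turns
    # web.config.token into the deployed web.config.
    return re.sub(r"__(\w+)__", lambda m: values[m.group(1)], template)

token_file = 'connectionString="&quot;__ConnectionString__&quot;"'
result = apply_tokens(token_file, {
    "ConnectionString": r"data source=(localdb)\v11.0;initial catalog=MyReleasedDatabase",
})
print(result)
```

The value passed in for ConnectionString here is just an example; in Release Management you configure it per stage, as shown in the next steps.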

By going to your Website component you can add your tokens on the Configuration Variables tab.

image


Now when you navigate to your Release Template you see that your Website component has an extra configuration variable named ConnectionString. Here you can specify the value you want to use for the ConnectionString at deployment.

image

And that’s it. Queue a new Build and watch how your web.config now has the correct connection string.

You now have automatic deployment of your website and database with Release Management!

Questions? Feedback? Please leave a comment

Friday, August 8, 2014

Please restart your PC

When a friend calls you with a computer problem, probably one of the first things you say is:

Have you already restarted your PC?

Well, today I fell into the same trap. I’m currently in the process of installing a Team Foundation Server environment on Azure as a demo environment (If you’ve never done this, a great place to get started is the ALM Ranger guidance found at http://vsarplanningguide.codeplex.com/ which now has a new supplement on installing TFS on Azure IaaS).

So my environment now consists of a TFS Application Tier, a SQL Server 2014 Data Tier and a Build Server.

While in the process of installing SQL Server Reporting Services I ran into the following error when viewing the Test Case Readiness report:

An error has occurred during report processing. (rsProcessingAborted) 
    Query execution failed for dataset 'dsArea'. (rsErrorExecutingCommand)
         The Team System cube either does not exist or has not been processed.

So I started my analysis by running the Team Foundation Server Best Practice Analyzer tool. This gave me the following Reports:

image

image
As you can see, there are some errors regarding the Analysis Cube processing. So I tried to run a manual processing job of the Analysis Cube server.

This great walkthrough helps you in doing a manual start of the warehouse and analysis cube processing.

What it comes down to is navigating to http://localhost:8080/tfs/TeamFoundation/Administration/v3.0/WarehouseControlService.asmx on your application tier and then using ProcessWarehouse, ProcessAnalysisDatabase and GetProcessingStatus to check what's going on.

One thing to be aware of is that the walkthrough is a little outdated. When you call GetProcessingStatus on a TFS 2013 installation you will get the following screen:

image

A new option, includeOnlineHostsOnly, has been added. You need to pass TRUE for this field, otherwise you will get a 500 error when checking the status of your jobs.

Now when checking my job status I found the following:

<Job JobProcessingStatus="Idle" Name="Incremental Analysis Database Sync">
    <LastRun Result="Failed" EndTimeUtc="2014-08-07T15:06:42.29Z" ExecutionStartTimeUtc="2014-08-07T15:06:38.747Z" QueueTimeUtc="2014-08-07T15:06:38.217Z">
         <ResultMessage>[Incremental Analysis Database Sync]: ---> Microsoft.TeamFoundation.Warehouse.WarehouseException: TF221122: An error occurred running job Incremental Analysis Database Sync for team project collection or Team Foundation server TEAM FOUNDATION. ---> Microsoft.TeamFoundation.Warehouse.WarehouseException: TF255040: Install SQL Reporting Services or at a minimum SQL Client Connectivity Tools on the application tier to ensure Analysis Services object model is present for warehouse processing. at Microsoft.TeamFoundation.Warehouse.AnalysisDatabaseSyncJobExtension.RunInternal(TeamFoundationRequestContext requestContext, TeamFoundationJobDefinition jobDefinition, DateTime queueTime, String& resultMessage) at Microsoft.TeamFoundation.Warehouse.WarehouseJobExtension.Run(TeamFoundationRequestContext requestContext, TeamFoundationJobDefinition jobDefinition, DateTime queueTime, String& resultMessage) --- End of inner exception stack trace ---
        </ResultMessage>
   </LastRun>
   <NextRun QueueTimeUtc="2014-08-08T09:06:38.217Z" JobState="QueuedScheduled"/>
</Job>
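When GetProcessingStatus returns a lot of jobs, it can be handy to script the check instead of reading the XML by eye. For example, a few lines of Python that pull the result out of a Job fragment like the one above (the fragment is shortened here):

```python
import xml.etree.ElementTree as ET

# A shortened <Job> fragment as returned by GetProcessingStatus.
xml_text = """
<Job JobProcessingStatus="Idle" Name="Incremental Analysis Database Sync">
    <LastRun Result="Failed" EndTimeUtc="2014-08-07T15:06:42.29Z" />
    <NextRun QueueTimeUtc="2014-08-08T09:06:38.217Z" JobState="QueuedScheduled"/>
</Job>
"""
job = ET.fromstring(xml_text)
last_run = job.find("LastRun")
print(job.get("Name"), "->", last_run.get("Result"))
```

Looping the same parse over every Job element in the full response quickly shows which jobs failed and which are still scheduled.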

So somehow I got an error that the SQL Client Connectivity Tools are missing on the Application Tier. But since I had no idea what else to do, I reran the SQL Server installation just to make sure.

And that’s when I noticed:

image

And that’s it. Apparently I forgot to restart after installing the SQL Server Client Connectivity Tools. After restarting the Application Tier the Analysis Job ran perfectly and my reports are functioning just as you would expect.

Questions? Feedback? Please leave a comment!

Monday, August 4, 2014

Continuously deploying your database with Release Management- Part 1

When it comes to automating the deployment of an application, I always see customers struggling with the database. Updating a web application with Web Deploy or even with a simple XCopy isn't that hard. But a database with schema changes is not a simple XCopy.
Fortunately, Microsoft has released a great tool for dealing with database updates. Unfortunately, this tool isn't widely used yet. This blog series will help you get started with SQL Server Data Tools and set up continuous deployment of your database with Release Management.

Part 1 Meet SSDT

Part 2 Using Release Management to deploy your database

Part 1: Meet SSDT


With Visual Studio 2012 Microsoft released a big update to what was previously known as the Database Project. Starting with VS2012, SQL Server Data Tools overhauled the database project and gives you all you need to bring your database into source control and enable continuous deployment scenarios.

The newest version at the time of writing is SSDT July 2014. If you are running Visual Studio 2013, you will automatically get notified of any new updates in your IDE. If you're running 2012, you can download the update here.

Now let’s start with a simple web application and add a database project to it.

Adding SSDT to your application


The sample application is a simple website that uses Entity Framework Database First to access a table called People.

You can download the source code for the final application here or you can create the project with the following easy steps:
  • Create a database SSDTSample on your LocalDB instance. Add the following table:
-- Creating table 'PersonSet'
CREATE TABLE [dbo].[PersonSet] (
    [Id] int IDENTITY(1,1) NOT NULL,
    [FirstName] nvarchar(max) NULL
);
GO 

-- Creating primary key on [Id] in table 'PersonSet'
ALTER TABLE [dbo].[PersonSet]
ADD CONSTRAINT [PK_PersonSet]
    PRIMARY KEY CLUSTERED ([Id] ASC);
GO 
  • Create a new ASP.NET Web project for MVC named GettingStartedWithSSDT and add this project to source control. In this sample we will use TFVC in a Team Project called SSDTTestProject (for more information on creating a Team Project see Create a team project)
  • Add an Entity Framework Database First model that points to your SSDTSample database
  • Add a MVC5 Controller with views, using Entity Framework
image
image
  • Run your application and navigate to /People
And that’s it for your sample application. You now have a web application that points to a database on your LocalDB. Because you’ve scaffolded the controller, you can now view, create, edit and delete items from your database through your web application.

Add a database project


The following step is to add a Database project to your solution. If you have SSDT installed you can find this project template in the SQL Server folder.
image

By default, this will create an empty project that you can then use to develop your database from scratch or you can start with an existing database.

In this case, you already have a database so you can choose to import the database in your project.
image
All you have to do now is select the database you want to import. You can leave the other settings at their default values.
image
After the import process finishes the database project contains the files that describe all your tables, relationships, stored procedures and views.  If you build this project you will see that the output folder now contains three files:
  • MyDatabase.dacpac
  • MyDatabase.dll
  • MyDatabase.pdb
image

The most interesting one is the dacpac file. This is a complete, self-contained description of your database. This is the file that you can deploy to update an existing database.
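A dacpac can also be deployed outside Visual Studio with the SqlPackage.exe command-line tool that ships with SSDT; the server and database names below are just placeholders:

```
SqlPackage.exe /Action:Publish ^
               /SourceFile:MyDatabase.dacpac ^
               /TargetServerName:"(localdb)\v11.0" ^
               /TargetDatabaseName:MyDatabaseDeploy
```

This performs the same publish operation as the wizard described below, which makes it a useful building block for scripted deployments.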

Manually publishing your database


Now that you have your database project, you can use it to create new databases or update existing ones. This is very easy to do from Visual Studio. Just right-click your project and choose Publish.
image

In this dialog you specify the connection to your server and the name of the database you want to update. If you use a database name that doesn’t exist, the database will be created for you. Now just hit Publish to create your database!

After a couple of seconds you will see the following log:
image
If you now open SQL Server Management Studio (or the SQL Server Object Explorer in Visual Studio) you will see that the database is created with your table in it.
If you get an error regarding your SQL Server version, make sure that the version defined in your database project's properties matches the version of the SQL Server that you are trying to deploy to.

Making a change to your database


Now let's say you want to add a column LastName to your database. You can make your changes in the database project and then update your local database. Or, if you're trying to make as few changes to your process as possible, you can add the column through SQL Server Management Studio and then update your database project.

Let's say you've chosen to use SQL Server Management Studio and added a column LastName nvarchar(max) NULL to your table. Now you want to update your database project so it knows about the schema change. Normally, you would force your developers to create scripts for every change they make. But not with SSDT. SSDT can easily pick up the changes you've made and keep a record of those changes.
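For reference, the change described above comes down to a single statement in Management Studio (written in the same style as the sample database script):

```sql
-- Adding column 'LastName' to table 'PersonSet'
ALTER TABLE [dbo].[PersonSet]
ADD [LastName] nvarchar(max) NULL;
GO
```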

To do this, right-click your project in the Solution Explorer and choose Schema Compare.
image

This will load a new Schema Compare window. If you set the source to your database and the target to your project, you can run a compare operation to see if there were any changes.

image

As you can see, the compare process found the change where you added a column called LastName. On the left it shows the current state of your database, on the right the state of your database project.

By hitting the Update button you can write the newly found changes to your database project.

Updating a database


The previously deployed MyDatabaseDeploy database does not yet have the newly added column. However, the column can easily be added by publishing your database project to the now existing MyDatabaseDeploy database.

All you have to do is right click your database project and select Publish. Now target the MyDatabaseDeploy Database and click Publish. This will automatically apply any changes made in your Database project to your target database.

If there is a chance of data loss (such as dropping or truncating a column), you need to explicitly allow this. If you're still not sure about what's happening, you can let the Publish wizard create a set of scripts that you can manually inspect and adapt to your situation.

That’s it for SSDT. Next time we’ll look at using Release Management to automate the deployment of a dacpac to your servers.

Feedback? Questions? Comments? Please let me know!

Thursday, July 31, 2014

Release Management: The specified path, file name, or both are too long

While working with Release Management a new deployment suddenly gave an error:

ERROR: The deployment for release template 'XXXX' has not been completed successfully. Check the releases history for more details.
Exit: 1

The release history didn't give much information, but the Event Viewer showed the following:

The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.: \r\n\r\n   at System.IO.PathHelper.GetFullPathName()

 

So what’s happening here?

When a new release starts, the Deployment Agent on your server copies all required data from your Release Management server to a local folder on the deployment server. For example, if you have a component MyDatabase that is located in the root of your build package, you can use a backslash in your component configuration to tell Release Management to look for the component at the root of your build drop location.

image

Now all the data in your build folder gets copied over to the deployment server and gets stored at: 

C:\Users\<UserNameForDeploymenyAgent>\AppData\Local\Temp\ReleaseManagement\<ComponentName>\<VersionNumber>

The problem in my case was that the customer not only had a database project but also a really long path and file name in a web project. The website got published to _PublishedWebsites\WebsiteName and was copied to the deployment server. Together with the temp folder and the component name, the resulting path was way too long.
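To get a feeling for how quickly the 260-character limit fills up, here is a small sketch; the names are made up but realistic for this scenario:

```python
# Build the deployment path the agent would use and measure it.
# All names below are illustrative, not taken from a real server.
temp_root = r"C:\Users\RMDeploymentAgentAccount\AppData\Local\Temp\ReleaseManagement"
component = "MyWebsite Component"
version = "20140731.4"
website_file = (r"_PublishedWebsites\GettingStartedWithSSDT"
                r"\Areas\Administration\Views\SomeReallyLongViewName.cshtml")

full_path = "\\".join([temp_root, component, version, website_file])
print(len(full_path))  # already most of the way to the 260-character limit
```

Even with fairly ordinary project names, the temp folder, component name and published-website path together eat up the bulk of the budget.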

Now of course we could have shortened the path name. But the underlying problem was that not just our database was copied, but the complete build package. This is a waste of time and resources.

 

Splitting the build package

To resolve the issue, you can change the build to output each project to its own folder. This way, the build drop location contains a folder MyDatabase with only the database files. It also contains a specific folder for the website with the published website files.

You can configure this by adding the following MSBuild argument to your Build Definition:

/p:GenerateProjectSpecificOutputFolder=true

image

Now the structure of your build drop location changes and contains a separate folder for each project. This means that a folder MyDatabase is created with all the database files in it. Changing your component to point to \MyDatabase now makes sure that only the required data gets copied to your deployment server.

What about Output location?

In TFS 2013 a new option Output location was added to the Build Definition. This property can be set to:

  • SingleFolder
  • PerProject
  • AsConfigured

PerProject does not do what you might initially think. The PerProject option creates a single folder for each project that you specify under the Build –> Projects option at 2.1. This means that if you build two solutions as part of your build, SingleFolder will combine the results of those two solutions into one folder, while PerProject will output a folder for each solution.

AsConfigured won't copy the build output anywhere. Instead, you can attach your own scripts that copy the build output from your build location to your drop folder. This gives you total freedom. In this case, however, using the GenerateProjectSpecificOutputFolder MSBuild argument did the trick.

Comments? Feedback? Questions? Please leave a comment!