Thursday, March 27, 2014

Build here we come!

Only six more days until Microsoft //Build 2014 starts! The schedule still only shows the start and end times, but details are slowly coming out.
So here are my personal hopes and speculations on what Microsoft is going to show us next week.
One source of inspiration is the schedule for the Dutch TechDays conference that will be held two weeks after Build. This week, 16 sessions showed up on the schedule with the following intriguing title:
 

to be announced after //Build

The following speakers will be presenting those sessions:
  • Marcel de Vries (ALM MVP and Microsoft Regional Director for the Netherlands)
  • Alex Thissen (.NET architect at a Dutch company)
  • Rajen Kishna (Microsoft Technical Evangelist focused on Windows 8 and Windows Phone)
  • Maarten Struys (Microsoft Account Technology Strategist focusing on Windows Embedded, Windows Phone and Windows Store apps)
  • Andy Wigley (Microsoft Technical Evangelist focused on Windows Phone)
  • Matthijs Hoekstra (Senior Product Marketing Manager focused on Windows Phone)
  • Bart de Smet (Microsoft Software Development Engineer focused on C#/Reactive Extensions and cloud)
  • Robert Green (Microsoft Technical Evangelist focused on Visual Studio)
So what can we conclude from this list?
There is definitely going to be Windows Phone news. This shouldn’t be a surprise, since Windows Phone 8.1 rumors are all over the internet. Other rumors focus on merging the Windows Phone, Xbox and Windows 8 Stores. Will this be announced this year, or do we have to wait until 2015? We will see!
The other interesting sessions will revolve around ALM, C# and Visual Studio. Since those are the areas I really look forward to, here are some hopes of mine:
  • Roslyn! I really hope we get at least a new CTP or maybe even a beta.
  • Brian Harry already mentioned on his blog that they did a huge update to Visual Studio Online that’s still hidden behind feature flags. In his ALM Summit keynote in Germany he mentioned that Release Management will be added to Visual Studio Online, so that’s at least one thing we can expect. The other feature I really hope for is the merging of Lab Management and Release Management. Currently there is a lot of overlap between those services, and it’s hard to explain this to customers other than ‘wait and see’. Let’s hope Build will clear this up.
  • Project N: at the Visual Studio 2013 launch, a brief demo was shown of compiling C# directly to machine code. Maybe we are going to hear more about it.
  • C# 6: Mads Torgersen already demoed some expected new C# 6 features at NDC, and I hope that together with Roslyn we will get our hands on those new features.
  • Visual Studio 2013 and Team Foundation Server 2013 Update 2 will probably be released. They are currently in CTP 2, and I wouldn’t be surprised if we got a new beta with a Go Live license.
  • I’m also wondering if we will see anything new related to Azure. The name change to Microsoft Azure is at least scheduled for Thursday during Build, so maybe that will go live and we will get some new features?
So those are my hopes and speculations. New ALM and C# stuff would make my Build great. It would also be nice to see Satya Nadella in person, and maybe we’ll even get some ‘Developers, Developers, Developers’.
So what are your thoughts? Any new speculations? Any things you hope for? Or want to grab a cup of coffee during Build? Just leave a comment!




Monday, October 28, 2013

Desugaring foreach

When working with data you often work with collections. This could be an array, one of the .NET Framework collections or maybe even a custom collection type.

Iterating over the elements in a collection is such a common task that the creators of C# added some syntactic sugar to make this task even easier in the form of the foreach statement. Understanding what’s going on behind the scenes can help you when working with collections and it can show you new ways to improve your own collection types.

What if there was no syntactic sugar?

Let’s say you have an array containing some integers and you were asked to display them on screen without using the foreach statement. What would you use?

Well, you could use a for loop like this:

using System;

namespace Foreach
{
    class Program
    {
        static void Main(string[] args)
        {
            int[] numbers = { 1, 1, 2, 3, 5, 8, 13 };

            for (int index = 0; index < numbers.Length; index++)
            {
                int current = numbers[index];
                Console.WriteLine(current);
            }

            Console.ReadLine();
        }
    }
}

By keeping track of where you are in the collection and fetching the current element, you can iterate quite easily over all the elements in the array.

Another way would be to use a while loop:


using System;

namespace Foreach
{
    class Program
    {
        static void Main(string[] args)
        {
            int[] numbers = { 1, 1, 2, 3, 5, 8, 13 };

            int index = 0;
            while (index < numbers.Length)
            {
                int current = numbers[index];
                Console.WriteLine(current);
                index++;
            }

            Console.ReadLine();
        }
    }
}

Under the covers, a for statement is usually implemented using a while loop. If you compare both statements, you see a couple of similarities. Both use an index to keep track of where they are in the collection and check the length of the collection. They also fetch the current item for you so you can use it. Because of these similarities, it would be nice to have a language construct that made it easier to use collections. Fortunately, that’s exactly what the foreach statement does.

Using some syntactic sugar: foreach

The foreach statement is syntactic sugar that allows you to easily work with collections. The previous examples can be rewritten to use foreach:


using System;

namespace Foreach
{
    class Program
    {
        static void Main(string[] args)
        {
            int[] numbers = { 1, 1, 2, 3, 5, 8, 13 };

            foreach (int current in numbers)
            {
                Console.WriteLine(current);
            }

            Console.ReadLine();
        }
    }
}

When you are working with an array, the compiler is smart enough to translate your foreach into a basic for loop. This gives you the best performance when working with arrays.

When you are working with a more complex collection, like a List<int>, you get different behavior. For such a collection, the IEnumerable and IEnumerator interfaces are used to implement the iterator pattern.

In essence, foreach translates into the following C# code:


List<int>.Enumerator e = numbers.GetEnumerator();

try
{
    int v;
    while (e.MoveNext())
    {
        v = e.Current;
        Console.WriteLine(v);
    }
}
finally
{
    IDisposable d = e as IDisposable;
    if (d != null) d.Dispose();
}

Your collection exposes a method called GetEnumerator. The enumerator object keeps track of where it is in the collection and gives you access to the current item. By using the while loop, you can move through all elements in the collection. The try/finally block is added to make sure that you always dispose of the enumerator if it implements IDisposable.

And that’s how the foreach statement works. As you can imagine, there is a whole lot more to creating your own enumerator objects and making sure that your collection can be used in a foreach statement. This involves things like implementing a GetEnumerator method, the IEnumerable interface, using dynamic and using some other syntactic sugar: yield.
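As a small teaser, here is a minimal sketch (the Fibonacci class is my own invention for illustration) of a custom collection that works in a foreach statement, because it implements GetEnumerator using yield:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// A hypothetical collection exposing the first n Fibonacci numbers.
class Fibonacci : IEnumerable<int>
{
    private readonly int _count;

    public Fibonacci(int count)
    {
        _count = count;
    }

    public IEnumerator<int> GetEnumerator()
    {
        int a = 1, b = 1;
        for (int i = 0; i < _count; i++)
        {
            // yield makes the compiler generate the enumerator for us.
            yield return a;
            int next = a + b;
            a = b;
            b = next;
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

class Program
{
    static void Main()
    {
        // Desugars into the GetEnumerator/MoveNext/Current pattern shown above.
        foreach (int number in new Fibonacci(7))
        {
            Console.WriteLine(number);
        }
    }
}
```

Running this prints the same sequence as the array examples: 1, 1, 2, 3, 5, 8, 13.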

But that’s a topic for next time!

Questions? Feedback? Please leave a comment!

Monday, October 21, 2013

Why I love Visual Studio 2013

After only one year, Microsoft released a new version of Visual Studio. The preview and release candidate were already available for a couple of months, but the official RTM was released last week. I've used the preview and release candidate extensively and installed the RTM as soon as the download was available.

If you’re wondering whether it’s worth the trouble to upgrade to VS2013, I encourage you to keep reading!

Project round tripping

Upgrading a product that you rely on every day can be quite exciting. Especially when working in a team, upgrading an application can be something that requires some coordination and planning.

However, that’s not true for Visual Studio. You can upgrade your edition of Visual Studio without bothering your coworkers. Thanks to a feature called project round-tripping, you can work on a project using both Visual Studio 2012 and 2013 without any problems.

This means you won’t have to wait for other members of your team to upgrade. You can upgrade today and start using all the new features without having to wait on a centralized upgrade.

So what are some of those new features?

Debugging enhancements

Microsoft put a lot of work into enhancing the debugger in Visual Studio 2013. One great feature is edit-and-continue support for 64-bit applications. Since most development PCs run on 64-bit hardware, this is a very welcome feature.

In C# 5, async/await was introduced, and the debugger was capable of stepping through a function that used the await operator just as if you were debugging synchronous code. However, the call stack showed that you were actually switching threads, and you lost the context of the call stack you were working on. All this is improved in VS2013. The debugger now nicely shows where you are hitting an async command in your code and helps you keep track of your context.

One other great feature is the ability to see the return value of a method. If you have code like this:

int result = Multiply(Five(), Six());

you normally only see the value of result in the debugger. With VS2013, the Autos window will also show you what Five and Six returned. This can make your life a lot easier. Instead of introducing all kinds of temporary variables during debugging, you can now just use the debugger to view all intermediate results.
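To try this yourself, here is a complete sketch; Five, Six and Multiply are hypothetical helper methods invented for this example, not part of any framework:

```csharp
using System;

class Program
{
    public static int Five() { return 5; }
    public static int Six() { return 6; }
    public static int Multiply(int x, int y) { return x * y; }

    static void Main()
    {
        // Step over this line in the debugger: the Autos window shows
        // the return values of Five(), Six() and Multiply() without
        // needing any temporary variables.
        int result = Multiply(Five(), Six());
        Console.WriteLine(result); // 30
    }
}
```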

Code Lens

Another new feature is Code Lens. Code Lens is a sort of hub that’s projected over the code you’re working on. It shows you all kinds of information that would normally require you to open separate windows or navigate through your solution.

One thing it can always show is the number of references to your class or method. You can easily inspect the references without having to navigate through your solution.

When you are using unit tests, Code Lens also shows whether your unit tests for that particular method are passing or failing. And when you are using Team Foundation Server 2013 or Team Foundation Service, you get even more information, like who edited the code and what changes they made.

Browser Link


Since I’m a web developer, I’m especially thrilled by all the new features that are coming out for web development. Browser Link is definitely at the top of my list, and if you are a web developer, this feature alone makes the upgrade a must-have!

Browser Link is a real-time connection between Visual Studio and any browser that’s running your application. This is done using SignalR, one of the coolest and hottest frameworks released by Microsoft. It makes all sorts of scenarios possible. By default, you get an easy way to refresh your open browsers after you’ve made a change. This allows for a rapid feedback cycle while doing web development.

However, that’s only the beginning. If you install Web Essentials (a must-have extension for every web developer!), you get access to all the new features that the ASP.NET team is experimenting with. This gives you a bunch of new possibilities for Browser Link, like inspection mode and design mode, just as you were used to having in your browser development tools. The ability to sync changes from your browser directly into your code (CSS tweaking!) is also really nice.

And as Microsoft clearly states: this is only the beginning. Browser Link has an open API, and you can start extending it with your own ideas.


And a lot more

In addition to these new features, there is a lot more in VS2013: enhancements to the HTML, CSS and JavaScript editors, new JavaScript profiling tools, Git integration, a new Team Explorer, new Azure capabilities and integration into Visual Studio, and a bunch of other features.

I can say that I really love VS2013. So, why haven’t you upgraded?

Have you already upgraded? What are your favorite new features? Please leave a comment!

Thursday, October 17, 2013

Desugaring object initializers

Do you sometimes shiver when you see yourself writing line after line of assignments when initializing a new object? Maybe something like this:

Foo foo = new Foo();
foo.A = 1;
foo.B = 2;
foo.C = 3;
foo.D = 4;
foo.E = 5;

As you can see, this syntax is quite verbose. Fortunately, C# added some syntactic sugar to make your life easier.

Meet the object initializer

Starting with C# 3, some syntactic sugar was added to help you with initializing objects. Using this new syntax, the previous code can be changed to:

Foo foo = new Foo
{
    A = 1,
    B = 2,
    C = 3,
    D = 4,
    E = 5
};

This syntax is called an object initializer. If you desugar this code you will see the following:

Foo __foo = new Foo();
__foo.A = 1;
__foo.B = 2;
__foo.C = 3;
__foo.D = 4;
__foo.E = 5;

Foo foo = __foo;

A local temporary variable is created by the compiler. In the generated IL code, this variable uses the special naming construct with angle brackets so you don’t get conflicts with your own code. Then the properties are assigned and the temporary variable is assigned to the actual variable.

On a side note, do you think the following code compiles?

Foo foo = new Foo
{
    A = 1,
    B = 2,
};

As you can see, there is a comma following the assignment of B. And yes, this code will compile! Why? The compiler allows the extra comma in object initializers to make code generation easier. You don’t have to check whether you are generating code for the last property you want to set; you can just add the comma.

Anonymous types


While object initializers are a nice form of syntactic sugar in these types of scenarios, they are actually required when working with anonymous types. An anonymous type is a type that is defined by the way it’s initialized:

var myAnonymousType = new { X = 42 };

By using the object initializer syntax, you define a new property called X for your anonymous type. After this line, your object will have a property X. You can’t add another property to it afterwards by doing something like myAnonymousType.Y = "This won't work".

Why? Well, anonymous types are also a form of syntactic sugar. In a future blog post we will look at anonymous types in more detail, but in essence, the compiler generates a class for you with an angle-bracket name and the properties that you use in the object initializer.
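You can see this code generation at work: two anonymous types with the same property names, types and order reuse the same compiler-generated class:

```csharp
using System;

class Program
{
    static void Main()
    {
        var a = new { X = 1 };
        var b = new { X = 2 };

        // Same shape, so the compiler generated only one class for both.
        Console.WriteLine(a.GetType() == b.GetType()); // True

        var c = new { X = 1, Y = 2 };

        // A different shape gets a different generated class.
        Console.WriteLine(a.GetType() == c.GetType()); // False
    }
}
```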

So, that’s the idea behind object initializers. It’s just a nice way of quickly initializing a new object or creating a new anonymous type.

Feedback? Questions? Please leave a comment!

Tuesday, October 8, 2013

Desugaring auto-implemented properties

What do you prefer: a field or a property? Most object oriented gurus will tell you that a property should be preferred. But do you sometimes feel that writing all those properties in C# is a waste of time?

Of course you know that using properties is important to encapsulate the inner workings of your class. By using accessors and mutators you hide the implementation details of your class, and you can add logic like validation or notifications. If you later converted a field to a property, you would change the metadata of your assembly, and dependent applications would have to be recompiled.
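As a quick illustration of that encapsulation, here is a sketch of a property that adds validation without breaking callers (the Person class and its rule are made up for this example):

```csharp
using System;

class Person
{
    private int _age;

    public int Age
    {
        get { return _age; }
        set
        {
            // Validation logic a plain public field could never enforce.
            if (value < 0)
                throw new ArgumentOutOfRangeException("value", "Age cannot be negative.");
            _age = value;
        }
    }
}

class Program
{
    static void Main()
    {
        Person person = new Person();
        person.Age = 42;               // Goes through the setter.
        Console.WriteLine(person.Age); // 42

        try
        {
            person.Age = -1;           // Rejected by the validation.
        }
        catch (ArgumentOutOfRangeException)
        {
            Console.WriteLine("Rejected");
        }
    }
}
```

Callers keep writing `person.Age = 42`; only the property body changed, not the assembly's public shape.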

Most of your properties, however, have empty getters and setters. Still, you have to create a private field and a property with both a get and a set accessor, like this:

class Foo
{
    private int _bar;

    public int Bar
    {
        get { return _bar; }
        set { _bar = value; }
    }
}

When looking at your Foo class in Ildasm, you see the following:

[Ildasm screenshot: the Foo class with get_Bar and set_Bar methods]

As you can see, your property is emitted as get_Bar and set_Bar methods. This is how your property actually works: every place where you read from the property is replaced by a call to get_Bar, and writing is replaced by a call to set_Bar. So here you see an example of syntactic sugar! Instead of having to write the get_Bar and set_Bar methods yourself, the compiler helps you by offering a nice syntax for properties.

Going auto


This is all nice, but what if you don’t have to add any extra code to the getter or setter? This is where auto-implemented properties show their value. An auto-implemented property is syntactic sugar that allows you to write a plain property in a simple way:

class Foo
{
    public int Bar { get; set; }
}


You can specify different accessibility levels for the get and set accessors, for example by making the setter private:

public int Bar { get; private set; }
As you can see, you don’t have to specify a backing field and you don’t have to write the code for the getter and setter. This is all nice, but how does it work behind the scenes?

Using Ildasm on auto-implemented properties


We can inspect the IL that’s being generated by using Ildasm on the Foo class. This gives you the following result:

[Ildasm screenshot: the Foo class with the <Bar>k__BackingField field]
You still have the get_Bar and set_Bar methods, but now you also have a <Bar>k__BackingField field. In C# this would be an illegal name for a field, but in IL it’s perfectly legal. The compiler uses this naming convention to make sure that generated fields don’t conflict with any user-defined names. It also decorates the field with the CompilerGeneratedAttribute. You can verify that this field really exists by using reflection:

Foo foo = new Foo();

Console.WriteLine(foo.Bar); // Displays 0

var fields = foo.GetType().GetFields(
    BindingFlags.Instance | BindingFlags.NonPublic);
var field = fields[0];

Console.WriteLine(field.Name); // <Bar>k__BackingField
field.SetValue(foo, 42);

Console.WriteLine(foo.Bar); // Displays 42

As you can see, the field really exists, and when you know its name you can even access it through reflection. Of course, this isn’t a recommended way to start juggling with your auto-implemented properties. Normally, the backing field of an auto-implemented property is not accessible from your C# code. This also explains why an auto-implemented property requires both a getter and a setter: since you can’t access the backing field from normal code, it would be useless to have an auto-implemented property with only a getter.

So that’s how auto-implemented properties work. By using them, you get a syntax that’s almost as concise as using a field but offers you the possibility of switching to a full property whenever you want. And by desugaring your code, you discovered that an auto-implemented property is just a get and set method with a private backing field.

Questions? Remarks? Please leave a comment!

Tuesday, October 1, 2013

Async and await and the UI thread

A reader posted a question on my blog about the use of ConfigureAwait. His question was about the following paragraph I wrote for the book Programming in C#:

"Both awaits use the ConfigureAwait(false) method because if the first method is already finished before the awaiter checks, the code still runs on the UI thread"

What is this all about? Well, imagine the following scenario. You are working on a WPF or WinForms application and you have some asynchronous code. The beauty of async and await is that it automatically keeps track of the synchronization context you are on and makes sure that your code will run on the correct thread.

Take the following code:

private async void button1_Click(object sender, EventArgs e)
{
    HttpClient client = new HttpClient();
    string result = await client.GetStringAsync("http://microsoft.com");
    label1.Text = result;
}

(You can run this code by creating a new WinForms application with a button and a label. If you then wire up the click event to the previous code, you can run the example).

This code executes an asynchronous request to fetch microsoft.com as a string and then displays the resulting HTML in the label.

In applications like WinForms and WPF, only a single thread has access to the elements on screen. Async and await make sure that the remainder of your method runs on the UI thread so you can access the label. They do so by capturing the current SynchronizationContext and restoring it when the awaited operation is finished.

But what if you don’t want to run the remainder of your method on the UI thread? Maybe you have another asynchronous operation, like writing the content to a file, that doesn’t have to run on the UI thread, and the capturing of the SynchronizationContext and the thread switching will only cost you performance.

You can turn off this behavior by using ConfigureAwait(false):

await SomeFastAsyncOperation().ConfigureAwait(false);

By using ConfigureAwait(false) you disable the capturing of the SynchronizationContext and let the .NET Framework know that your code can continue on any thread that’s available. But now take the following code, where you have multiple asynchronous operations:

   1:  private async void button1_Click(object sender, EventArgs e)
   2:  {
   3:      Console.WriteLine("Current thread: " + Thread.CurrentThread.ManagedThreadId);
   4:      await SomeFastAsyncOperation().ConfigureAwait(false);
   5:      Console.WriteLine("Current thread: " + Thread.CurrentThread.ManagedThreadId);
   6:      await SomeSlowAsyncOperation();
   7:      Console.WriteLine("Current thread: " + Thread.CurrentThread.ManagedThreadId);
   8:  }

The first async operation uses ConfigureAwait(false); the second operation doesn’t. Now you would think that the first thread ID is the UI thread and that the second and third are IDs of different threads. This is true if your first operation actually executes asynchronously. But if that operation finishes really quickly, before you await it, the generated code is smart enough not to switch to another thread. That way, it can suddenly happen that the second operation on line 6 executes on the UI thread.

And that’s what I meant with the paragraph from the introduction of this article:

"Both awaits use the ConfigureAwait(false) method because if the first method is already finished before the awaiter checks, the code still runs on the UI thread"

Meaning that your code should look like this:

   1:  private async void button1_Click(object sender, EventArgs e)
   2:  {
   3:      Console.WriteLine("Current thread: " + Thread.CurrentThread.ManagedThreadId);
   4:      await SomeFastAsyncOperation().ConfigureAwait(false);
   5:      Console.WriteLine("Current thread: " + Thread.CurrentThread.ManagedThreadId);
   6:      await SomeSlowAsyncOperation().ConfigureAwait(false);
   7:      Console.WriteLine("Current thread: " + Thread.CurrentThread.ManagedThreadId);
   8:  }
   9:   
  10:  private Task<string> SomeFastAsyncOperation()
  11:  {
  12:      return Task.FromResult("test");
  13:  }
  14:   
  15:  private async Task<string> SomeSlowAsyncOperation()
  16:  {
  17:      HttpClient client = new HttpClient();
  18:      string result = await client.GetStringAsync("http://microsoft.com");
  19:      return result;
  20:  }

As you can see, both operations on lines 4 and 6 use the ConfigureAwait(false) method. So although the operation on line 12 finishes immediately, the method on line 15 will still be configured correctly.

If you copy this code to your WinForms application you can play around with it by changing the values for ConfigureAwait. Also try to access the label element in combination with those changes so you can clearly see the results.

What do you think about async and await? Have you already used ConfigureAwait or do you have other questions about it? Please leave a comment!

Monday, September 30, 2013

Desugaring the using statement

Do you need to worry about memory management in .NET?

Although .NET is a managed environment with a garbage collector, this doesn’t mean you don’t have to worry about memory management. Of course, it’s different than in a language like C++, where you have to explicitly free memory, but in C# you still have to think about memory management.

The garbage collector in .NET is a big help in freeing memory, and if you were only dealing with managed objects, that would be enough. However, in most applications you don’t deal only with managed objects. Maybe you access a file on disk, a web service or a database. Those resources are unmanaged, and they do have to be freed explicitly.

If you don’t do anything, the garbage collector will eventually kick in and remove unused objects from memory. The classes in the .NET Framework that deal with unmanaged resources implement what's called a finalizer. The finalizer runs code to close your file or database handle. So with a well-designed class, the finalizer will eventually clean up your unmanaged resources. But waiting for the garbage collector makes your code unreliable, because you can’t tell for sure when an external handle will be closed.

Meet IDisposable


To clean up memory in a deterministic way, the .NET Framework offers the IDisposable interface. This interface is pretty simple:

public interface IDisposable
{
    void Dispose();
}

The interface has only one method called Dispose. Calling this method will free any unmanaged resources that an object has. This way you can explicitly determine when a resource should be freed. A lot of the types in the .NET framework implement IDisposable. For example, when you create a new text file you get back a StreamWriter that implements IDisposable:

IDisposable disposableFile = File.CreateText("temp.txt");
disposableFile.Dispose();

When dealing with disposable objects you should call Dispose as soon as possible. But what if an exception happens before you can call Dispose? To make sure that some code always runs, with or without an exception, C# offers you the finally block:

IDisposable disposableFile = null;
try
{
    disposableFile = File.CreateText("temp.txt");
    // do something with your file
}
finally
{
    if (disposableFile != null)
    {
        disposableFile.Dispose();
    }
}

If an exception happens, the finally block still runs, so Dispose is called whenever the file was actually created. This way you can make sure that your resources are released. Writing a try/finally block every time you deal with an IDisposable quickly becomes cumbersome. Fortunately, C# offers some syntactic sugar that can help you.

Desugaring using


When dealing with an IDisposable object, you can use a special statement called the using statement. The previous code with the try/finally block can be changed into the following:

using (var disposableFile = File.CreateText("temp.txt"))
{
    // do something with your file
}

This is a much nicer syntax for working with disposable objects. When you look at the IL this generates you will see the following:

   1:  .method private hidebysig static void  Main(string[] args) cil managed
   2:  {
   3:    .entrypoint
   4:    // Code size       34 (0x22)
   5:    .maxstack  2
   6:    .locals init ([0] class [mscorlib]System.IO.StreamWriter disposableFile,
   7:             [1] bool CS$4$0000)
   8:    IL_0000:  nop
   9:    IL_0001:  ldstr      "temp.txt"
  10:    IL_0006:  call       class [mscorlib]System.IO.StreamWriter [mscorlib]System.IO.File::CreateText(string)
  11:    IL_000b:  stloc.0
  12:    .try
  13:    {
  14:      IL_000c:  nop
  15:      IL_000d:  nop
  16:      IL_000e:  leave.s    IL_0020
  17:    }  // end .try
  18:    finally
  19:    {
  20:      IL_0010:  ldloc.0
  21:      IL_0011:  ldnull
  22:      IL_0012:  ceq
  23:      IL_0014:  stloc.1
  24:      IL_0015:  ldloc.1
  25:      IL_0016:  brtrue.s   IL_001f
  26:      IL_0018:  ldloc.0
  27:      IL_0019:  callvirt   instance void [mscorlib]System.IDisposable::Dispose()
  28:      IL_001e:  nop
  29:      IL_001f:  endfinally
  30:    }  // end handler
  31:    IL_0020:  nop
  32:    IL_0021:  ret
  33:  } // end of method Program::Main

As you can see, your small using statement generates quite a bunch of IL code. On lines 12 and 18 you see the try and finally blocks, and in the finally block you see a null check followed by a call to Dispose on the StreamWriter.

And that’s how you can easily work with unmanaged resources in C#. Make sure that you always dispose of objects that implement IDisposable. The using statement is the easiest way to do this, and by desugaring it you now understand why.

And if you ever find yourself creating a class that uses unmanaged resources, think of IDisposable and a finalizer.
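That combination is known as the dispose pattern. Here is a minimal sketch of it (the handle is only simulated; real code would wrap an actual OS handle):

```csharp
using System;

class ResourceHolder : IDisposable
{
    private IntPtr _handle = new IntPtr(42); // Simulated unmanaged handle.
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        // Resources are already released, so the finalizer can be skipped.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;

        if (disposing)
        {
            // Free managed resources here (other IDisposable objects).
        }

        // Free unmanaged resources here.
        _handle = IntPtr.Zero;
        _disposed = true;
    }

    // Safety net: runs only if Dispose was never called.
    ~ResourceHolder()
    {
        Dispose(false);
    }
}

class Program
{
    static void Main()
    {
        using (var holder = new ResourceHolder())
        {
            // Work with the resource.
        }
        Console.WriteLine("Disposed deterministically");
    }
}
```

Dispose releases the resources deterministically and suppresses the finalizer; the finalizer only acts as a safety net when Dispose was never called.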

Feedback? Questions? Please leave a comment!