oct. 23

You all know that VS 2005 added code snippets to ease typing some of the most common code patterns (properties, constructors, ...). You may also know that you can create your own snippets and use some predefined functions in them. I had been looking for quite a long time for the list of available functions, and today I finally found it on MSDN:

http://msdn2.microsoft.com/en-us/library/ms242312(VS.80).aspx

And I must admit I'm a bit disappointed; I was hoping there would be more available! This list is valid for VS 2005 and also for VS 2008.
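
For reference, here is a minimal sketch of how one of those predefined functions is used inside a snippet file. ClassName() is one of the documented functions (it expands to the name of the enclosing class); the shortcut name and the snippet around it are my own invention, not an official sample:

```xml
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Simple constructor</Title>
      <Shortcut>myctor</Shortcut>
    </Header>
    <Snippet>
      <Declarations>
        <Literal Editable="false">
          <ID>classname</ID>
          <!-- Predefined function: resolved by VS to the enclosing class name -->
          <Function>ClassName()</Function>
          <Default>ClassNamePlaceholder</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[public $classname$()
{
    $end$
}]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```

Save it with a .snippet extension and import it via Tools > Code Snippets Manager, and typing the shortcut expands a constructor named after the current class.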
oct. 22

I was just surfing the Internet, reading a blog I had just discovered, when I found this post (in French): http://blogs.msdn.com/nay/archive/2007/10/10/quand-excel-perd-la-boule.aspx

Excel has some calculation (or display?) problems when dealing with the number 65,535.

Interesting to know, no? (And the associated patch is worth downloading.)

oct. 22
CITCON Brussels 2007

I wrote a very short introduction to CITCON in this post.

The conference is now over, so it's time to talk a bit about it!

This edition was full of surprises for me. First because it was my first time at CITCON (pronounce it "KIT-KON") and my first time participating in an Open Space discussion. The principle? Simple. No conferences, nothing really planned in advance.

On Friday evening, after everyone (between 60 and 100 people) had introduced themselves in a few words, some people started to present the subjects they would like to discuss. Anyone could propose a subject, and everyone could vote for their favorite subjects, in order to plan how the different talks would be scheduled the next day and in which rooms. Interesting concept. In the end, CITCON is really an exchange of points of view, good and bad experiences, between people who all share the same goals: learning and sharing.

Continuous Integration - Improving Software Quality and Reducing Risk

Second surprise: I won a book. Nice. I never win the lotteries I enter, but I do win the ones I don't participate in!

It will probably be an interesting read.

Want to see more of this book? You can find it on Amazon.

But let's come back to the discussions I participated in.

  1. At 10:00: a discussion about the "build pipeline", with maybe 30 participants: how, why, and with which difficulties can we provide an automated build from development to an automated test platform, a QA platform and a production platform? We discussed how recent the technology is, and consequently how "immature" or "not yet ready" the Operations people can be. I especially liked the discussion and Xavier Quesada Allue's point of view about database updates.
  2. At 11:15, Jeffrey Fredrick (one of the organizers of CITCON) presented Crap4J, a new Java utility used to determine where the "crap" is. By "crap", you should understand "risk", as the acronym stands for "Change Risk Analysis and Predictions". How does it work? By combining the cyclomatic complexity and the code coverage to locate the riskiest parts of the code.
  3. At 14:00, Alexander Snaps organized a small talk about "Code Reviews and Delayed Commits". This discussion involved only a few people, among them Dario Garcia Coder, Andrew Binstock, Patrick Smith and myself. It was for me, without comparison, the most interesting discussion (maybe also because there were far fewer people), with some opposing points of view. Should we make code reviews before commits or not? If yes, on which criteria should we do them? Is it possible to ask developers to commit anyway (in a separate repository?) and ask for a code review later on?
  4. At 15:15, Douglas Squirrel wanted to share with us his idea for a new piece of software: "Karma for Continuous Integration". Its goal? Attributing (or removing) "karma points" to developers based on each commit (did it increase the code coverage, decrease the complexity, ...?). During this session he wanted to explain his idea and see what people would expect from such a tool.
  5. And finally at 16:30, Bernard Vander Beken and Alexander Snaps wanted to talk about the problems of Web UI testing. In the end we talked about some tools, possible limitations, how to test Ajax-based applications, and whether we should go as far as mocking the server itself. An interesting talk, with the Google guys sharing their expertise and showing some of what they have done.
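
Back to the Crap4J session for a second. As I understand it, the published CRAP formula combines exactly the two metrics mentioned above, complexity and coverage. Here is a minimal C# sketch of that formula; the method name and sample numbers are mine, not part of the tool:

```csharp
using System;

public static class Crap
{
    // CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
    // where comp is the cyclomatic complexity of method m
    // and cov is its test coverage in percent.
    public static double Score(int complexity, double coveragePercent)
    {
        double uncovered = 1.0 - coveragePercent / 100.0;
        return complexity * complexity * Math.Pow(uncovered, 3) + complexity;
    }

    public static void Main()
    {
        // A complexity-5 method with no coverage at all...
        Console.WriteLine(Score(5, 0));    // 30
        // ...versus the same method fully covered by tests.
        Console.WriteLine(Score(5, 100));  // 5
    }
}
```

Full coverage collapses the score down to the raw complexity, which matches the tool's message: complex code is acceptable risk only if it is well tested.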

What more can I say? Well, I just hope to continue the discussions with some of you guys and to be present at next year's session!

I will probably update this post soon to add links to other interesting people I met there!

oct. 10

Hello All,

one of my colleagues - Damien Pinauldt - proposed a small exercise to us yesterday. After seeing the unexpected results, I thought it could be interesting to share it with you.

The goal of this exercise was to recall how method hiding works in .NET and what the consequences can be when using generics.

Method Hiding

Let's first recall what method hiding is. Of course, you all know that in .NET only the methods you explicitly declare virtual are virtual, as opposed to the Java world where all methods are.
As a consequence, you may face cases where you want to "override" a non-virtual method. We agree, this is highly inadvisable and a bad practice. However you can, and you may run into this obligation from time to time.

So you may be tempted to write something like this:

public class A
{
   public void DoSomething() { }
}
 
public class B : A
{
   public void DoSomething() { }
}

And you would be right. This is an example of method hiding. In such a case, Visual Studio compiles without any error but gives you a warning explaining that there is a name collision between classes A and B. By default, the compiler interprets such code as "Method B.DoSomething hides inherited method A.DoSomething".
To make your (bad) intention explicit (and get rid of the warning), you should use the "new" keyword:

public class B : A
{
  public new void DoSomething()  {}
} 

What is the consequence of such a code ?
Let's write an example and compare with polymorphism and method overriding:

public class A
{
   public void DoSomething() { Console.WriteLine("In A"); }
   public virtual void DoSomethingVirtual() { Console.WriteLine("In A"); }
} 
 
public class B : A
{
   public new void DoSomething() { Console.WriteLine("In B"); }
   public override void DoSomethingVirtual() { Console.WriteLine("In B"); }
} 

 

A a = new A();
a.DoSomething();           //Display "In A"
a.DoSomethingVirtual();    //Display "In A" 
 
B b = new B();
b.DoSomething();           //Display "In B"
b.DoSomethingVirtual();    //Display "In B" 
 
A a1 = new B();
a1.DoSomething();          //Display "In A"
a1.DoSomethingVirtual();   //Display "In B" 

In a word, method resolution is based on the static type (the declared type of the variable) when using method hiding, and on the dynamic type (the actual type of the object) when using polymorphism and method overriding.
I imagine I haven't taught you anything new here.
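
To drive the point home, note that a simple cast is enough to change which hidden method runs, because a cast changes the static type; the overridden method, on the other hand, is unaffected. A quick sketch reusing the A and B classes from above:

```csharp
using System;

public class A
{
    public void DoSomething() { Console.WriteLine("In A"); }
    public virtual void DoSomethingVirtual() { Console.WriteLine("In A"); }
}

public class B : A
{
    public new void DoSomething() { Console.WriteLine("In B"); }
    public override void DoSomethingVirtual() { Console.WriteLine("In B"); }
}

public static class Program
{
    public static void Main()
    {
        B b = new B();
        // The cast changes the static type, so the hidden method from A is chosen...
        ((A)b).DoSomething();        // Displays "In A"
        // ...but not the overridden one: the dynamic type still wins.
        ((A)b).DoSomethingVirtual(); // Displays "In B"
    }
}
```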

Generics and Generated IL

OK, now that these basics have been recalled, let's move on to generics.
We often compare .NET generics with C++ templates and Java generics. While the goals are quite similar, they work in very different ways.
To simplify, we could say .NET sits midway between C++ and Java for code generation, avoiding both the potential C++ "code bloat" and the Java "over-casting". (Do you agree, experts?)
Is this better? Let's just say different.

.NET generics were created to achieve two goals:

  • Improving performance by avoiding the boxing and unboxing encountered when dealing with value types in "generic" solutions (meaning here standard solutions, applicable to many types, and therefore dealing with objects)
  • Ensuring type safety
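
The boxing issue from the first bullet is easy to see by comparing the pre-generics ArrayList with the generic List<T>; here is a quick sketch (the variable names are mine):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public static class Program
{
    public static void Main()
    {
        // Pre-generics: every int is boxed on Add and unboxed (with a cast) on read.
        ArrayList oldList = new ArrayList();
        oldList.Add(42);                 // boxing: int -> object
        int a = (int)oldList[0];         // unboxing, and no compile-time type safety

        // Generics: no boxing, and the compiler rejects wrongly-typed elements.
        List<int> newList = new List<int>();
        newList.Add(42);                 // stays an int
        int b = newList[0];              // no cast needed

        Console.WriteLine(a + b);        // 84
    }
}
```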

In my view, Microsoft realized that to satisfy these two goals they had to face a choice, each branch of which would cost a bit of performance:

  • either generate a "huge" DLL (the typical code bloat produced by some C++ compilers)
  • or perform some casts

The final choice was:

  • Generate one new class / structure / method / ... for each value type using it
  • Generate only one class / structure / method / ... dealing with objects for all reference types. A "compiler trick" ensures type safety and gives the illusion that we work with strongly typed code, inserting casts where appropriate

How can we verify that? "Simply" by combining method hiding and generics (remember, method hiding works with static types!).

Generics and method hiding

To keep things simple, let's avoid generic constraints: let's create a class that hides the ToString method, and a simple generic method that calls ToString().

public class C
{
   public new virtual string ToString()
   {
      return "In Class C";
   }
} 
 
public class Generic
{
   public static void DisplayToString<T>(T t)
   {
      Console.WriteLine(t.ToString());
   }
} 

This case is quite simple. So now let's compare what happens when we call the ToString method directly and via generics:

C c = new C();
object o = new C(); 
 
Console.WriteLine(c.ToString());    //Display "In Class C"
Console.WriteLine(o.ToString());    //Display "Namespace.C" 
 
Generic.DisplayToString(c);         //Display "Namespace.C"
Generic.DisplayToString(o);         //Display "Namespace.C"
Generic.DisplayToString<C>((C)o);   //Display "Namespace.C"

Why such a difference? Simple: the generic method is compiled once, and since T has no constraint, t.ToString() is bound against "object". Inside the generic method the static type is therefore "object", and the hiding C.ToString is never considered.

Surprising? Well, I would say interesting :-) And definitely something to know if you are using the "new" keyword in your code. Maybe it is one more argument to avoid it?
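
As a follow-up, adding a constraint changes the picture: with "where T : C", the compiler binds t.ToString() against C instead of object, so the hiding method wins again. A small sketch (note that my classes here live in the global namespace, so Object.ToString() returns just "C"):

```csharp
using System;

public class C
{
    public new virtual string ToString() { return "In Class C"; }
}

public static class Generic
{
    // No constraint: t.ToString() is bound against object, so Object.ToString runs.
    public static string Show<T>(T t) { return t.ToString(); }

    // Constraint: member lookup starts at C, so the hiding method is found.
    public static string ShowConstrained<T>(T t) where T : C { return t.ToString(); }
}

public static class Program
{
    public static void Main()
    {
        C c = new C();
        Console.WriteLine(Generic.Show(c));            // Displays "C" (the type name)
        Console.WriteLine(Generic.ShowConstrained(c)); // Displays "In Class C"
    }
}
```

In other words, the constraint shifts the static type seen inside the generic method, which is consistent with the "static type decides" rule recalled above.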
