C# Object Creation Time Trials

August 21st, 2008

I’ve been helping a C++ developer get attuned to C# and web development lately.  I’ve talked with C++ developers a number of times through the years, and I tend to get the same sentiments over and over:

1. That C++ is the most efficient language out there and nothing else can compete.

2. That .net is too inefficient to do any real work.

My answer to #1 is: that depends on the developer, and code maintainability is more important to me than code efficiency.  I’ve often been brushed off for that answer; I guess I have to work on my delivery.

The second one is flat-out wrong, but harder to answer.  If you’re dealing with a die-hard C++ developer, you are dealing with a control freak (they will usually admit to that with a grin, as they should).  But convincing a C++ developer that garbage collection is not a huge performance hit can be difficult unless they experience it themselves.

Another part of this argument is that the things a developer thinks are expensive often are not, like creating objects.  I bring this up specifically because of web development, where every page load creates hundreds to thousands of objects, all thrown away a moment later once the page is done loading.

So I devised a simple test: how many objects can I create in a second using C#?  Honestly, I didn’t know.

But first, my machine specs, because results will vary:

Windows Vista SP1
2 GB of RAM (1 GB already in use before my tests run)
2.4 GHz AMD Athlon 3800
Windows Experience Index: 4.1

This is not a beast by any stretch; it isn’t even dual core!

Here is the class I’m creating:

public class Customer
{
    public int Id;
    public string Name;
    public int Age;
}

Here was my test:

[Test]
public void FiveHundredThousandObjects()
{
    List<Customer> list = new List<Customer>();
    for (int i = 0; i < 500000; i++)
    {
        var c = new Customer();
        c.Id = i;
        c.Name = "Name " + i;
        c.Age = 10;
        list.Add(c);
    }
    Assert.AreEqual(500000, list.Count);
}

Quick note: when you run your unit tests through ReSharper, there is an option to have it time your tests for you.  That is what I used to time these tests.  I’m sure that if I had used a console application these numbers could have been even better.
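For anyone who wants to reproduce this outside a test runner, a minimal console version of the same loop, timed with System.Diagnostics.Stopwatch, might look like this (same Customer class as above; the class and program names are just placeholders):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public class Customer
{
    public int Id;
    public string Name;
    public int Age;
}

public static class TimingDemo
{
    public static void Main()
    {
        const int count = 500000;
        var sw = Stopwatch.StartNew();

        var list = new List<Customer>();
        for (int i = 0; i < count; i++)
        {
            var c = new Customer();
            c.Id = i;
            c.Name = "Name " + i;
            c.Age = 10;
            list.Add(c);
        }

        sw.Stop();
        // Stopwatch uses the high-resolution performance counter when one is
        // available, so it is a better yardstick than subtracting DateTimes.
        Console.WriteLine("Created {0} objects in {1} ms",
                          list.Count, sw.ElapsedMilliseconds);
    }
}
```

Build it in release mode and run it outside the debugger for the fairest numbers.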

My average time was 1.5 seconds to create 500,000 Customer objects and add them to a list.  This also means I was creating 500,000 immutable string objects along the way, so really that is 1,000,000 objects created in 1.5 seconds.

Here are some further timings:

Object Count    Time (seconds)
1,000           0.03
10,000          0.03
50,000          0.17
100,000         0.40
250,000         0.90
500,000         1.30
1,000,000       4.40

When was the last time you created that many objects to make a web page?

Now a quick note.  I’m sure you see an upward trend forming in the data presented: 500,000 came in at 1.30 seconds while 1,000,000 came in at 4.40, for instance.  As with any technology, there comes a point where you are being plain stupid.  If you are initializing 1,000,000 objects to create a web page, you’re doing it wrong.  If you are creating 1,000,000 objects in a desktop program, I’d check your reasoning, and then I’d make sure it wasn’t happening too often.

So this is the same argument I give to people who don’t like ORMs because they feel ORMs won’t be efficient enough: this technology isn’t a license to be stupid.  You still have limitations to be aware of.  But it is far better than the alternative 90% of the time.

Anyway, next up is getting him to understand the page life cycle, how to work in a stateless environment, and that object-oriented development techniques still apply in these circumstances.  It should be fun.

  • http://iamabruisedreed.blogspot.com Nate

    First, is it possible that string creation is the problem between the 500k and the 1M runs? I’ve worked on C projects where more string creation caused exponential load time. What I mean is: does .Net new up strings in a different enough way that it could cause exponential problems?

    Second, based on your results, I’ve decided that if I’m going to create 1000 objects, I might as well create 10,000. After all, 10,000 is 10x better than 1000 and I get it for the same cost :)

  • http://mashi@twitter Bjorn Reppen

    >1. that C++ is the most efficient language out there
    >My answer to #1 is: that depends on the developer

    You are right. The List constructor has an overload which could have saved you all the “array” copying that goes on in the background. This would also help you avoid the exponential time curve.

    >2. That .net is too inefficient to do any real work

    If “real work” means a web server handling thousands of simultaneous requests with immediate responses, you have probably just proved above that .net is too slow to do any ‘real’ work. .Net is also often unsuited for real-time apps.

    Also keep in mind when testing that the GC will only kick in after you have been running for a while; performance will drop even more at that point. (Run longer tests.)
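For reference, the overload Bjorn is describing is the List&lt;T&gt;(int capacity) constructor, which allocates the backing array once up front instead of letting it double and copy as the list grows. A minimal sketch (class and program names are placeholders):

```csharp
using System;
using System.Collections.Generic;

public class Customer
{
    public int Id;
    public string Name;
    public int Age;
}

public static class CapacityDemo
{
    public static void Main()
    {
        const int count = 500000;

        // Passing the expected size to the constructor allocates the backing
        // array once; without it, List<T> doubles and copies the array each
        // time it fills up while items are being added.
        var list = new List<Customer>(count);
        for (int i = 0; i < count; i++)
        {
            list.Add(new Customer { Id = i, Name = "Name " + i, Age = 10 });
        }

        // Capacity stays at the requested size because it was never exceeded.
        Console.WriteLine("Count: {0}, Capacity: {1}", list.Count, list.Capacity);
    }
}
```

(As it happens, Chris reports below that pre-sizing the collection made no noticeable difference in his runs.)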

  • http://www.elegantcode.com Chris Brandsma

    Hi Nate,

    String creation in .net is different than in C in that .net strings are immutable and are a managed resource.

    If I do a simple string concatenation:

    string a = "abc";
    string b = "def";

    a = a + b;

    then in C#, the line a = a + b creates a new string that consists of the old a and b. Also, if I have two string variables with the same literal text in them, those variables are actually pointing to the same string (string interning). That is part of being a managed resource.

    I put the string concatenation in there to give things a small “real-world” flavor, knowing full well it would slow things down a bit. After all, to just create 1,000,000 empty objects isn’t saying too much.

    But, as a test, I created 1,000,000 strings in 1.6 seconds using the same test. I also ran a second test where I pre-initialized the collection, thinking this would improve performance; it did not, not noticeably anyway.

    As for the time difference between 1,000 and 10,000…I can’t completely explain that one. My thought is that it happens so fast that what you are seeing is just the ramp-up time for the method. Or my timing mechanism may not be as precise as it is made out to be.
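The string behavior Chris describes (interning of identical text, new objects from concatenation) can be checked directly with Object.ReferenceEquals. A small sketch, noting that automatic interning applies to compile-time string literals:

```csharp
using System;

public static class InterningDemo
{
    public static void Main()
    {
        // Two identical literals are interned by the runtime, so both
        // variables point at the same string object on the heap.
        string a = "abc";
        string b = "abc";
        Console.WriteLine(object.ReferenceEquals(a, b)); // True

        // Concatenation at run time builds a brand-new string object,
        // even though its text matches an existing interned literal.
        string c = a + "def";
        string d = "abcdef";
        Console.WriteLine(object.ReferenceEquals(c, d)); // False
        Console.WriteLine(c == d);                       // True (value equality)
    }
}
```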

  • Pingback: Dew Drop - August 21, 2008 | Alvin Ashcraft's Morning Dew

  • http://www.elegantcode.com Chris Brandsma

    Hi Bjorn,

    The object of the tests was not to eke out every last bit of performance from .net. The code was deliberately written in a normal way to illustrate that .net has good performance without doing anything strange.

    Something I forgot to mention as well: I ran all the tests in debug mode, not release mode. Release mode would have helped performance even more.

    As for your point about .net not being suitable for real time: funny, I know a number of guys who did just that with .net in a large manufacturing environment. I’ll have to tell them their working implementation won’t work. :)