Code Cast 34 – David Starr Sells Out to the Man

OK, bittersweet podcast here: this is David Starr’s final podcast with the Elegant Code Cast.  Why?  Well, you will just have to listen to find that out.

In the meantime, we also talk about Agile, Lean, fitting agile methodologies in with family life, and his new company: Guild 3.  Have a listen and tell us what you think.

Links from the Show:

Link to show

Asp.Net MVC JavaScriptView

For a while now I’ve wanted the ability to generate JavaScript files like I generate HTML files.  Take a little bit of the power of the WebForm view engine (just a little bit) and give that to me for JavaScript.

Why?  I have a few reasons.  Number one is I want to remove all of my JavaScript from the HTML, so the JavaScript is loaded via script tags.  Once that happens, the JavaScript can be cached by the browser, which speeds up my application a bit.  What I need the Asp.Net engine for is to give me accurate links to web site resources (images, web service calls, other JavaScript files, etc).  These are things that I don’t want to hard-wire, mostly because I’ve been bitten by the Asp.Net Routing bug.  So, I want to use the Asp.Net Routing engine to tell me where to find stuff.

Now that I’ve committed to Asp.Net MVC, I’ve come to a realization that making dynamic JavaScript files is well within my reach.  Actually, it is almost there already; Asp.Net MVC ships with a JavaScriptResult.  The downside of that action result is that the object expects you to hand it the JavaScript as a string, which it pushes to the browser as a file.  I want a JavaScriptView method for that.  The other good news about JavaScriptResult is that it doesn’t do much (it just sets response.ContentType to application/x-javascript).  So I could grab all of the View methods, turn them into extension methods, rename them, and then use them for my own underhanded affairs.

Luckily we can all grab the Asp.Net MVC source code and go to town.  I didn’t modify the original source, but I defined a new class (JavaScriptFileResult) and a bunch of extension methods for the Asp.Net MVC Controller to give you JavaScriptView methods.  You can see a usage in the first piece of code.

    public ActionResult JsConstants( )
    {
        return this.JavaScriptView();
    }

Below are the actual extension methods and class needed.  Put this in your project someplace and go to town.

    public class JavaScriptFileResult : ViewResult
    {
        public override void ExecuteResult(ControllerContext context)
        {
            base.ExecuteResult(context);
            HttpResponseBase response = context.HttpContext.Response;
            response.ContentType = "application/x-javascript";
        }
    }

    public static class JavaScriptControllerExtensions
    {
        public static ViewResult JavaScriptView(this Controller controller)
        {
            return JavaScriptView(controller, null /* viewName */, null /* masterName */, null /* model */);
        }

        public static ViewResult JavaScriptView(this Controller controller, object model)
        {
            return JavaScriptView(controller, null /* viewName */, null /* masterName */, model);
        }

        public static ViewResult JavaScriptView(this Controller controller, string viewName)
        {
            return JavaScriptView(controller, viewName, null /* masterName */, null /* model */);
        }

        public static ViewResult JavaScriptView(this Controller controller, string viewName, string masterName)
        {
            return JavaScriptView(controller, viewName, masterName, null /* model */);
        }

        public static ViewResult JavaScriptView(this Controller controller, string viewName, object model)
        {
            return JavaScriptView(controller, viewName, null /* masterName */, model);
        }

        public static ViewResult JavaScriptView(this Controller controller, string viewName, string masterName, object model)
        {
            if( model != null )
            {
                controller.ViewData.Model = model;
            }

            return new JavaScriptFileResult
            {
                ViewName = viewName,
                MasterName = masterName,
                ViewData = controller.ViewData,
                TempData = controller.TempData
            };
        }

        public static ViewResult JavaScriptView(this Controller controller, IView view)
        {
            return JavaScriptView(controller, view, null /* model */);
        }

        public static ViewResult JavaScriptView(this Controller controller, IView view, object model)
        {
            if( model != null )
            {
                controller.ViewData.Model = model;
            }

            return new JavaScriptFileResult
            {
                View = view,
                ViewData = controller.ViewData,
                TempData = controller.TempData
            };
        }
    }


Now the view, here is what it looks like:

    <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
    var UrlList = {
        SubscriptionList: '<%=Url.Action("SubscriptionList", "Channel") %>',
        MoreSubscriptions: '<%=Url.Action("More", "Channel") %>',
        WaitGif: '<%=Url.Content("~/Content/img/ajaxloader.gif") %>'
    };

Like I stated before, I was looking to get URIs for the most part.  But this could also be a strongly typed view where you have to pass in real data.

Finally, loading this view into my page.  You saw the controller action was “JsConstants” above, which was in my HomeController (not shown), so here is how I ask the routing engine for the JavaScript file.

    <script type="text/javascript" src="<%=Url.Content("~/Home/JsConstants") %>"></script>


So far I am happy with this technique.  But if you see something that can be improved, please let me know.
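For context, here is the kind of call-site this enables once the generated file is loaded.  A sketch only: UrlList comes from the view above, but the selector and the use of jQuery's load are my own invention.

```javascript
// UrlList was emitted by the JavaScriptView, so the URL below is whatever
// the routing engine produced; '#subscriptions' is a made-up element id.
$('#subscriptions').load(UrlList.SubscriptionList);
```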

Code Cast 33 – Jimmy Bogard on AutoMapper

AutoMapper is one of those tools that seems to be gaining in popularity lately, generating a lot of talk on the web and at user groups.  Seemed like a good idea to talk with its creator, Jimmy Bogard, and get the run-down on it.

In this code cast we have Elegant Coders Chris Brandsma and Richard Cirerol (listen as Chris completely stumbles over Rich’s last name), and we are joined by Cory Isakson.  For those of you outside of Boise, Cory is a local user group leader, Code Camp presenter, and .NET junkie.  We keep trying to get Cory to blog, but so far he is resisting our calls.

Get the show here

View in iTunes | Any Podcatcher

Playing with JQuery Validation Library, Part 2

Background: JQuery Validator is a JQuery plug-in for validating HTML form fields.  Microsoft has included the JQuery Validator as part of Microsoft’s new CDN, and it looks like JQuery and JQuery Validator will both be included with Asp.Net MVC 2.  You can find Part 1 here.

From where we left off, the next thing to do is validate a field via a web service. 

AJAX Validation

There are actually two ways of validating fields via an ajax call.  The first one I’ll show is the built-in way.  The second is more of a hack, but gives a good background on what is actually going on with the JQuery Validator.

So in part 1 we showed how to do simple validations of a form.  We will start with a simpler form for this example:

    <form id='UserNameForm'>
        <input id='userNameEdit' name='userNameEdit' type='text' />
        <button type='submit'>Submit</button>
    </form>

Obviously the form can be more complex, but I like starting simple.

The JQuery Validation script should look something like this:

    $('#UserNameForm').validate({
        rules: {
            userNameEdit: { required: true }
        }
    });

That script will make sure there is something in userNameEdit.  But now we want to make sure the user name is unique in the system.  So to start, I need a web service to call.  For this example, I’m using Asp.Net MVC.

    public JsonResult VerifyUserName(string userNameEdit)
    {
        return Json(true);
    }

OK, a few things about the web service above.  First: in the current implementation, all user names are valid; that is the Json(true).  If you want to see the failure case, change it to Json(false).  Second: look at the method parameter.  It has the same name as the form field.  That is a convention in the JQuery Validator, and as near as I can tell you cannot override that behavior.  Third: JsonResult stipulates that the method will return its data in the JSON format.  Unlike other web method types, where the output type comes from the caller, this one only returns the specified format.
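To make the naming convention concrete, here is a sketch of the request the validator ends up issuing for the field.  The helper function and the values are mine, not part of the plugin; the point is just that the field's name attribute becomes the query-string key, which is why the action's parameter must be named the same.

```javascript
// Hypothetical helper showing the shape of the validation request:
// the field name becomes the query-string key the action binds to.
function buildRemoteUrl(action, fieldName, value) {
    return action + "?" + encodeURIComponent(fieldName) + "=" + encodeURIComponent(value);
}

console.log(buildRemoteUrl("/Account/VerifyUserName", "userNameEdit", "bob"));
// → /Account/VerifyUserName?userNameEdit=bob
```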

Validation Method 1: remote

Now, let’s modify our script from above to call that web method:

    $('#UserNameForm').validate({
        rules: {
            userNameEdit: { required: true, remote: '<%=Url.Action("VerifyUserName", "Account") %>' }
        }
    });

The new part is the remote in the script.   You give it the name/location of your web service to call to validate the field, and the field name/value are passed in. 

Now about the Url.Action bit in there… If you are not used to Asp.Net MVC, that line will look strange to you.  Here is how I understand it: Url.Action is a method that utilizes the Asp.Net Routing engine to create URLs.  In the case of MVC, I need a URL to a controller method.  The controller is “Account” (the actual class name is AccountController), and the method is “VerifyUserName”.   The cool part: now you can get a valid URL no matter where you are in the project directory hierarchy (that is a problem with WebForms).

But there are downsides: the only data passed to the method is the value of the control being validated.  If you need to pass more information to do the validation, you could be in trouble.
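That said, the plugin does let the remote rule take a full $.ajax settings object instead of a bare URL, and the data option can carry extra values along with the field.  A sketch, assuming a second field exists on the page (the accountTypeEdit selector is hypothetical):

```javascript
$('#UserNameForm').validate({
    rules: {
        userNameEdit: {
            required: true,
            remote: {
                url: '<%=Url.Action("VerifyUserName", "Account") %>',
                // Extra values ride along with the field value; a function
                // is evaluated at validation time, not at setup time.
                data: {
                    accountType: function() { return $('#accountTypeEdit').val(); }
                }
            }
        }
    }
});
```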

Validation Method 2: Validator.addMethod

The next option is to add a custom rule that makes the ajax call and does the validation.  So first we add the custom rule:

    function VerifyUserName(value, element) {
        if ( this.optional(element) )
            return "dependency-mismatch";

        var previous = this.previousValue(element);

        if ( !this.settings.messages[element.name] )
            this.settings.messages[element.name] = {};
        this.settings.messages[element.name].verifyUserName = typeof previous.message == "function" ? previous.message(value) : previous.message;

        if ( previous.old !== value ) {
            previous.old = value;
            var validator = this;
            this.startRequest(element);
            var data = {};
            data[element.name] = value;
            $.ajax({
                url: '<%=Url.Action("VerifyUserName", "Account") %>',
                mode: "abort",
                port: "validate" + element.name,
                dataType: "json",
                data: data,
                success: function(response) {
                    var valid = response === true;
                    if ( valid ) {
                        var submitted = validator.formSubmitted;
                        validator.prepareElement(element);
                        validator.formSubmitted = submitted;
                        validator.successList.push(element);
                        validator.showErrors();
                    } else {
                        var errors = {};
                        errors[element.name] = previous.message = response || validator.defaultMessage( element, "verifyUserName" );
                        validator.showErrors(errors);
                    }
                    previous.valid = valid;
                    validator.stopRequest(element, valid);
                }
            });
            return "pending";
        } else if ( this.pending[element.name] ) {
            return "pending";
        }
        return previous.valid;
    }
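One wiring step that isn't shown above: the function has to be registered as a named rule before "verifyUserName" means anything to the validator.  The plugin's registration call is $.validator.addMethod; something along these lines (the message text is my own):

```javascript
// Register the function under the rule name used in the rules block.
$.validator.addMethod("verifyUserName", VerifyUserName, "That user name is already taken.");
```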

One thing worth knowing about that ajax call (as far as JQuery is concerned): if you need the result back before the method returns, you can add async: false to the settings, which tells JQuery to execute the call serially (hope the method doesn’t take too long).  Either way, I can customize the AJAX call however I need.

Next, we can edit our ‘validate’ method to use the custom rule.

    $('#UserNameForm').validate({
        rules: {
            userNameEdit: { required: true, verifyUserName: true }
        }
    });

So what I did here was take the code from the Validator’s Remote method, and modify the parts I needed for my own purposes.  That said, I had a lot of trouble getting it to work correctly.   I’m sort of hoping someone will tell me what I did wrong actually.

Validation Method 3: Overloaded Submit

The final way I’m going to show to use an AJAX web method to perform a validation check is by overloading the submit callback.   This is what I’ve actually ended up using for a number of projects.  It is the simplest to code and understand.

   1: function CustomSubmit(){
   2:     var _val = $("#CustomerForm").validate({
   3:         rules: {
   4:             FirstName: { required: true },
   5:             LastName: { required: true },
   6:             Email: { required: true, email: true}
   7:         },
   8:         messages: {
   9:             FirstName: "First name is required",
  10:             LastName: "Last name is required",
  11:             Email: {
  12:                 required: "Please enter a valid email address",
  13:                 minlength: "Please enter a valid email address",
  14:                 remote: jQuery.format("{0} is already in use")
  15:             }
  16:         },
  17:         submitHandler: function(form) {
  18:             var data = {userNameEdit:$("#Email").val()};
  19:             $.ajax({
  20:                 url: '<%=Url.Action("VerifyEmail", "Account") %>',
  21:                 dataType: "json",
  22:                 data: data,
  23:                 success: function(response) {
  24:                     var valid = response === true;
  25:                     if ( valid ) {
  26:                         form.submit();
  27:                     } else {
  28:                         _val.showErrors({ "Email": 'Email is already in use.' });
  29:                     }
  30:                 }
  31:             });
  32:         }
  33:     });
  34: }

A couple of key points on this one as well.  First, notice what is happening on line 2: I’m saving the resulting validator so I can use it later.  The action happens at line 17 with the submitHandler.  submitHandler is a callback (event) which gives you a hook into the normal submit process.  Also, you can see on line 26 I am calling form.submit myself.  Then on line 28 I am displaying a custom error message.


Finally, no matter how much checking you do in the client, you still have to recheck everything once you are on the server; sorry, it is true.  Until I find a better way to implement Method 2, I will be using Method 3.  It is the most straightforward way to do validation via AJAX methods I have found so far.

If I can get to a Part 3 (Part 2 took WAY too long), I’ll look into using AJAX to handle the submit, and displaying custom error messages when the submit goes wrong.

Poker Planning for Windows Mobile

I got annoyed this week.  We were doing our planning poker (with the cards) when one of my coworkers broke out his iPhone instead.  He had a planning poker app on his freaking iPhone.  I whipped out my Windows Mobile phone and – did nothing.  I grabbed a deck of cards and moved on.

Afterwards though, I did a couple of quick searches and found nothing.  Near as I can tell there is no Planning Poker app for Windows Mobile.  So I made one.

The app should run on Windows Mobile 5, 6, and 6.5.  It requires the .Net 2.0 Compact Framework to run as well.  If someone really wants, I can compile it down to pre-Windows Mobile 5.  Also, I wrote it with touch screens in mind.  It might work on a non-touch-screen device, but I really don’t know.

The part that made me blog about this, though, was the amount of time it took to make it: about an hour.  Start, finish, and install.  So it is possible to make an app quickly on Windows Mobile if you want to.  OK, reality check: we are not talking about a complicated app here.  The point is, the application did not take any longer to code than a normal WinForms app would.

The entire project is up on CodePlex, code and all.  MS-PL license, so go nuts if you want to.


Playing with JQuery Validation library

New job (2 weeks in), new responsibilities, new tools to play with.  Amazingly (for me), one of the new tools is the JQuery Validation Library.  My diving into this also coincides with ScottGu’s announcement that Microsoft is creating their own CDN for JQuery and JQuery Validation.

Bad news is that this means I’m behind.  That sucks.  I was using JQuery MONTHS before Microsoft officially started supporting it (at least 6 months – but I’m too lazy to look it up).  Anyway, now you can use Microsoft’s bandwidth to get JQuery to your customer’s browser with this line:

<script src="" type="text/javascript"></script>

And you can add the JQuery Validation library like this:

<script src="" type="text/javascript"></script>

Note: see those end tags, great annoyance of my life, but you have to have them; you can’t do <script src="http://…" />.  That sucks, but I’m learning to live with it.

Anyway, back to validation.

The more I play with this library, the more I like it.  It is very configurable, easily style-able, the documentation isn’t half bad, and the code is readable.

Getting Started with JQuery Validation:

The JavaScript Way:  OK, assuming you have JQuery and the JQuery Validation library included, let’s start with a simple customer form:

    <form id="CustomerForm">
        <p>
            <label for="FirstNameEdit">First Name:</label>
            <input id="FirstNameEdit" name="FirstNameEdit" type="text" />
        </p>
        <p>
            <label for="LastNameEdit">Last Name:</label>
            <input id="LastNameEdit" name="LastNameEdit" type="text" />
        </p>
        <p>
            <label for="EmailEdit">Email:</label>
            <input id="EmailEdit" name="EmailEdit" type="text" />
        </p>
        <p>
            <button type="submit">Submit</button>
        </p>
    </form>

Nothing special there.  Just a bunch of inputs and labels.  If you hit the submit button, a postback is fired and the data is sent back to the server.

But, business requirements dictate that all fields are required.  Simple enough, using JQuery the hookup looks like this:

    $("#CustomerForm").validate({
        rules: {
            FirstNameEdit: { required: true },
            LastNameEdit: { required: true },
            EmailEdit: { required: true, email: true }
        }
    });

OK, the first name, last name, and email will now be required.  I also threw a little extra in there: email validation.  If you submit the form, a label like this will be added after any failing field:

    <label class="error" for="FirstNameEdit" generated="true">This field is required.</label>

The key part there is the ‘error’ css class, which is great for styling the messages.  So you can use the ‘label.error’ selector in your css.

Another way

You can also use css classes to setup which fields are required.  It is a nice and simple way to hook everything up, but it is not as easily configurable (as we will see in a moment):

    <form id="CustomerForm">
        <p>
            <label for="FirstNameEdit">First Name:</label>
            <input id="FirstNameEdit" name="FirstNameEdit" type="text" class="required" />
        </p>
        <p>
            <label for="LastNameEdit">Last Name:</label>
            <input id="LastNameEdit" name="LastNameEdit" type="text" class="required" />
        </p>
        <p>
            <label for="EmailEdit">Email:</label>
            <input id="EmailEdit" name="EmailEdit" type="text" class="required email" />
        </p>
        <p>
            <button type="submit">Submit</button>
        </p>
    </form>

Then the JavaScript is actually much simpler:

    $("#CustomerForm").validate();

And you will get the same results.  But, the rest of the samples will use the prior code setup.  Also note: you can mix and match the css style with the JavaScript style.
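As a quick sketch of the mix-and-match point: rules declared via css classes and rules declared in script are merged, so the script only needs to add what the classes can't express.  The minlength rule here is a hypothetical extra, not something from the original form:

```javascript
// class="required" and class="email" in the markup supply those rules;
// script-side rules merge with them, so only extras go here.
$("#CustomerForm").validate({
    rules: {
        EmailEdit: { minlength: 6 }
    }
});
```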

Moving the message

Next problem you might encounter is you want to move the messages to another part of the screen.  Remember, default behavior is the messages go next to the control.

To solve this we will add a div to the page:

    <div id="RegisterErrors" style="display:none"></div>

I can set up the Validation library to use this by going back to the validate method and adding the errorLabelContainer option.

    $("#CustomerForm").validate({
        errorLabelContainer: $("#RegisterErrors"),
        rules: {
            FirstNameEdit: { required: true },
            LastNameEdit: { required: true },
            EmailEdit: { required: true, email: true }
        }
    });

Now this is great, but there is still a problem.  Each of the messages will be “This field is required”.  That doesn’t say which field is the problem.  Let’s add custom messages for each field.


   2:     $("#CustomerForm").validate({

   3:     errorLabelContainer: $("#RegisterErrors"),

   4:     rules: {

   5:             FirstNameEdit: { required: true },

   6:             LastNameEdit: { required: true },

   7:             EmailEdit: { required: true, email: true }

   8:         },

   9:         messages: {

  10:             FirstNameEdit: "First name is required",

  11:             LastNameEdit: "Last name is required",

  12:             EmailEdit: "Valid email address is required"

  13:         }

  14:     });

Which then gives better messages.

For next time

Ok, this is not all the JQuery Validation library can do, there is a lot more, but this is enough for now.  Next post will show how to validate fields using web service results, and various callbacks that can be used to customize the library.

Please, it is just a ‘var’

OK, in a previous post I had a little bit of code with a ‘var’ statement in it.  Actually, there were a few other items in there: a couple of lambdas, lots of nested ‘if’s, etc.   There was a lot of good hand-wringing over how performant the Lambda ‘Where’ function is.  According to the comments, not very performant for very large datasets.  I don’t have a problem with that.

But at some point one of the commenters noticed that the var statement was just as fast as the explicitly called-out type.  Let me spell something out real quick: once compiled, there is no difference between var and explicit types.  ‘var’ is a language shortcut, asking the compiler to figure out the type and inject it.  This is not a runtime feature – ‘var’ is a compile-time feature.

Now, how often do I use var in my code?  All the time.  I use it practically everywhere.  Think of it as a refactoring helper.  Sometimes I have to change a method from returning an IList to an IEnumerable.  If I use a var and a foreach, then none of the rest of my code has to change.  That, to me, is flexible code.

I could make my code more performant by replacing the foreach loops with for loops, but I’ll keep the var’s, thank-you-very-much.  Most of the time I am not dealing with enough data to make that worth it.  Most of the time I’m working with data to get displayed on web and mobile user interfaces.  In those cases, limiting the amount of data I need to process, and then display, is of utmost importance.  If I’m doing a file import (rare, but it does happen), then I switch to the other techniques.

Now, why bother?  Why not always use the fastest method all the time?  Because I find the foreach with ‘var’s easier to read, more flexible, and more refactoring friendly.  For the vast majority of my code, that matters a lot more than eking out every last bit of performance.

Mobile DB Performance Testing: with Hardware

NOTE: this is the same post from a week ago, but now I have added in performance times from actual devices:

1. Palm Treo 750 (300 MHz)
2. Palm Treo Pro (600 MHz)
3. HP iPAQ hw6945 (416 MHz) – oldest device

First lesson from the listings above: CPU is not everything.  The iPAQ has a faster CPU than the Treo 750, but is slower on most tests (could be memory, or Windows Mobile 5 vs. Windows Mobile 6).

A coworker and I were working on some database performance testing for mobile platforms recently.  Our goal was to get some metrics comparing SQL CE, SQLite, and UltraLite (a Sybase product).  Actually, that was my goal; my coworker also wanted to test out Perst, which is an OO database instead of a relational database.

Now a disclaimer: these tests are not authoritative.  This is not a test of everything you could ever want to test on a mobile database.  In fact, the tests are quite simple.  Also, we are not taking money from any group, so there should not be any bias either.  But if anyone would like more authoritative tests: send money.

Finally, performance is not the final statistic for any mobile project, so don’t base your decisions strictly on this.  Notably, how to synchronize the database and memory usage are both huge topics.

On running the tests: the metrics I’m displaying are from a WinCE 6.0 emulator running .Net CF 3.5, not an actual physical device.  Times from physical devices vary, as there are wide differences between device hardware.  On one device I used, the times were 2-3 times faster on the physical device than on the emulator.  The other important bit of info here is .Net CF 3.5: Microsoft reports a 30% performance increase moving from .Net CF 2.0 to .Net CF 3.5.

Finally, all times are reported in milliseconds.  So 1000 is 1 second.

Create Database

This is the first performance category – and frankly, the one I care the least about.  In the code I write, you create the database once, and work with it from there.  But just in case this does matter to you, here it is.

The test consists of creating a brand-new database, along with a few tables.  Here is the SQL that was executed for SQL CE (variations on this were used for each of the other databases):

    CREATE TABLE T1 (i bigint not null PRIMARY KEY, s nvarchar(50))
    CREATE UNIQUE INDEX s_idx ON T1 (s)
    CREATE TABLE Product (id bigint not null PRIMARY KEY, Name nvarchar(50))
    CREATE TABLE Customer (id bigint not null PRIMARY KEY, Name nvarchar(50))
    CREATE TABLE ProductOrder (id bigint not null PRIMARY KEY, ProductId bigint, CustomerId bigint, Price decimal, CreatedDate DateTime)
    CREATE INDEX poc_idx ON ProductOrder (CustomerId)
    CREATE INDEX pop_idx ON ProductOrder (ProductId)
                Emulator   Treo 750   Treo Pro   HP iPAQ
    Perst           3500        952        861       925
    SQLite          3031       1530        970      1730
    UltraLite       3507       2210       1964      3198
    SQLCE           4768       2428       2083      4047

None of them really stand out here.  The difference between 3.5 seconds and 4.7 seconds is negligible to me.

Insert Test


Now this one I care about.  Here we are attempting a bulk insert of 2000 records into the T1 table (see above).  This is a key task if you plan on synchronizing the database with a central server. If you want to see the general code for this, below is the SqlCe code.

    using (DbConnection conn = SqlCeCommon.CreateConnection())
    {
        conn.Open();
        using (var txn = conn.BeginTransaction())
        {
            using (DbCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "INSERT INTO T1 (i, s) VALUES(@i, @s)";
                var p1 = cmd.CreateParameter();
                var p2 = cmd.CreateParameter();
                p1.ParameterName = "i";
                p2.ParameterName = "s";
                p1.DbType = DbType.Int64;
                p2.DbType = DbType.String;
                p2.Size = 50;
                cmd.Parameters.Add(p1);
                cmd.Parameters.Add(p2);
                cmd.Prepare();
                long key = 1999;
                for (long i = 0; i < nRecords; i++)   // nRecords = 2000 in this test
                {
                    key = (3141592621L*key + 2718281829L)%1000000007L;
                    p1.Value = key;
                    p2.Value = System.Convert.ToString(key);
                    cmd.ExecuteNonQuery();
                }
            }
            txn.Commit();
        }
    }
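Incidentally, the key values in that loop come from a linear congruential generator, which produces a repeatable pseudo-random sequence without favoring sequential keys.  A quick sketch of the same sequence (in JavaScript for illustration; BigInt is needed because the intermediate product overflows the 53-bit safe-integer range):

```javascript
// Reproduce the insert test's key sequence: key = (a*key + c) mod m,
// starting from the seed 1999 used in the C# code above.
function keySequence(n) {
    const keys = [];
    let key = 1999n;
    for (let i = 0; i < n; i++) {
        key = (3141592621n * key + 2718281829n) % 1000000007n;
        keys.push(Number(key)); // each key fits comfortably in a double
    }
    return keys;
}

console.log(keySequence(3)); // first few keys of the insert sequence
```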
                Emulator   Treo 750   Treo Pro   HP iPAQ
    Perst           5948       3890       2877      6454
    SQLite          3012       2735       1845      2810
    UltraLite       4235       3485       3157      5359
    SQLCE          11413       9212       8586     12051

First off, SQLite wins this one hands down, nudging out UltraLite and handily beating Perst.  As for SQLCE, I don’t know what happened there.

Update Test


This time, take the text column from the T1 table and update it based on the index.  Again, we are executing these updates one at a time (2000 of them), not in bulk.

                Emulator   Treo 750   Treo Pro   HP iPAQ
    Perst           1650       1386       1719      1741
    SQLite          3000       2897       2696      3775
    UltraLite       9047       6309       5815      6739
    SQLCE           5851       6556       6222      8215

Perst blows the competition away in this test, with SQLite not far behind.  But I would say that SQLCE also gave a respectable showing.  Also, and perhaps more interestingly, the performance times between the emulator and the physical devices are not that different; in the case of SQLCE, the physical device is SLOWER!

Search Tests

The next set of tests is to perform a search based on the primary key and by a text column (indexed).  We are searching for all 2000 records that were inserted (from the insert test), but one record at a time.

Primary Key Column

                Emulator   Treo 750   Treo Pro   HP iPAQ
    Perst           1809       1602       1507      2048
    SQLite          9868       5926       5442      7980
    UltraLite       5637       4143       3829      5541
    SQLCE          17898      14330      15219     15586

Indexed Text Column

                Emulator   Treo 750   Treo Pro   HP iPAQ
    Perst           2435       2903       2243      3016
    SQLite          9838       6015       5700      6470
    UltraLite       5698       4552       3904      5290
    SQLCE          16022      14435      15703     15929

Here you can really see Perst come out into the lead.  But it is the SQLCE times that are rather disheartening.  Problem is, they used to be WORSE!  The SQLCE text search is the one test where I did some real performance tuning.  The biggest difference maker was being VERY specific about parameter types and sizes.  (Note: once I did that for SQLCE, I also did the same for the other providers.)  Considering where SQLCE came out in this test, I give SQLite and UltraLite a pass, but I was surprised at the (lack of) performance of SQLite in this case.

Also, since both columns were indexed, it is nice to see there was not a noticeable difference between selecting on the primary key verses on an indexed text column.

Select Test


This is the most straightforward of the tests.  Make a query that grabs all of the records out of a table, and get the data out of them.  Nice simple loop, no filters.

                Emulator   Treo 750   Treo Pro   HP iPAQ
    Perst           1946       1807       2225      2988
    SQLite          1081        685        557       880
    UltraLite       1945       1588       1146      1666
    SQLCE           1233        965        688      1096

Here again the differences are negligible between the different databases.  UltraLite and Perst are the slowest; SQLite is the fastest, with SQLCE close behind.  But all are between 1 and 2 seconds, so who cares.

Delete Test


Again, a simple test.  Delete everything in the table – one record at a time.

                Emulator   Treo 750   Treo Pro   HP iPAQ
    Perst           6341       4905       4379      7038
    SQLite          2481       2137       1520      2131
    UltraLite       5488       3704       3580      4089
    SQLCE           8807       8931       7816     11439

Some might be surprised by the Perst numbers here.  But after talking it over, this is somewhat expected if you look at the INSERT times.  The INSERT nearly matches the DELETE.  So if it takes a long time to allocate space, it takes a long time to remove it as well.  And no surprise that SQLite runs away with this one.

Join Test


OK, this is actually one of the more significant tests.  There is an issue with joins on mobile databases: they stink.  And the more you add, the worse they get.  A side issue is that you get a VERY limited number of index types (do you want ascending or descending?).

Anyway, the test is simple.  If you look at the Create test you will see the tables we are dealing with.  I add 100 rows to the Product table and 50 records to the Customer table, then 5000 records (each customer has ordered one of each product) into the ProductOrder table.

Then the test (and what is timed) is to execute the following query and extract the results.

    SELECT c.Name as Customer, p.Name as Product, o.Price
    FROM ProductOrder o
    LEFT OUTER JOIN Product p on p.ID = o.ProductId
    LEFT OUTER JOIN Customer c on c.ID = o.CustomerID

OK, if you look at the results, you will see a cop-out: I didn’t get the Perst code written.  Hopefully this will shame my coworker into getting it done.  🙂 (hi Priyesh).  But you have the other three results.

                Emulator   Treo 750   Treo Pro   HP iPAQ
    SQLite          3948       3171       2355      3480
    UltraLite       8780       6651       5747      6683
    SQLCE           5618       6113       5365      7079

Here you see, again, SQLite has run into the lead.  I would also say that UltraLite and SQLCE give a very respectable showing.

Database size

These are mobile devices; space is very limited.  You laptop-toting developers all want to talk about gigabytes; I’m happy to be talking about tens of megabytes, not even hundreds.  So final database size is important.

    Perst       408k *
    SQLite      333k
    UltraLite   480k
    SQLCE       640k

* Incomplete result

Because I did not complete the Perst tests, take that number with a grain of salt.  All of the others are reasonably accurate though.  Amazingly, SQLite came out ahead again.

Conclusions


Again, these are not conclusive tests.  Mileage may vary, other factors may apply in choosing one over the other.  But, if you don’t have a synchronizing application, or you are willing to handle that yourself, then SQLite is an excellent option.  If you are willing to go non-conventional for even more speed, then Perst looks to have some good advantages as well.

SQLite also is not issue-free.  One problem I encountered from time to time was a “Database locked” error.  I usually got this when executing large numbers of transactions.  As luck would have it, I got a new Ado.Net provider (065) and the problem went away.

I will also throw out some reasons for UltraLite while I’m at it.  UltraLite has an excellent sync engine to go with it, and also includes device management software.  A downside is that UltraLite is the one database you cannot deploy for free.  (Hopefully, in the near future we will also have some tests for Sybase SQL Anywhere; it is a more powerful database than UltraLite, but is also non-free.)

Some defense for SQLCE: it actually has some very good integration with SQL Server, including some of the new date/time column types from SQL Server 2008, as well as a serviceable synchronization engine in Microsoft Sync Services.  Unfortunately, I can’t wholeheartedly get behind it; even the Sync Framework for Devices is still in CTP.  The main reason to go with SQLCE is that the SQL used in SQLCE is closest to what is used in SQL Server (or you have a client that insists).  Either way, there is nothing technically wrong with SQL CE.

As for Perst, it is just too new to me yet.  I’ll have to do a post on how you interact with the API, for instance, as it is not an Ado.Net provider.  But the performance for most tests was very good.  I just have to play with the API a bit more.  Also, the most glowing thing I can say about Perst: the Perst team has been EXTREMELY responsive to requests.