
Dynamically Select Columns with Server-Side Paging and Datatables.Net January 14, 2011

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Ajax, ASP.Net, DataTables.Net, jQuery, JSON.Net, New Techniques, Problem Solving.

Source code has been yet again updated!! Read about the changes in DataTablePager Now Has Multi-Column Sort Capability For DataTables.Net. If you are new to DataTables.Net and Sensei’s paging solution and want a detailed study of how it works, work through this post first, then get the latest edition.  Note, code links in this post are to the first version.

The last episode of server-side paging with DataTablePager for DataTables.Net reviewed the basics of a server-side solution that paged records and returned results in the multiples specified by DataTables.Net.  You will want to have read that post before proceeding here.  The older version of the source is included in that post as well, and it will help get you acclimated.  The following capabilities were reviewed:

  • The solution used generics and could work with any collection of IQueryable.  In short, any of the classes from your domain solution could be used.
  • Filtering capability across all properties was provided.  This included partial word matching, regardless of case.
  • The result set was ordered in response to the column clicked on the client’s DataTables grid.

DataTablePager Enhancements

This past month Sensei has added new capabilities to the DataTablePager class that make it an even better fit for use with DataTables.Net.  The new features are:

  • Dynamically select the columns from the properties of your class based on the column definitions supplied by DataTables.Net!!!
  • Exclude columns from sort or search based on configuration by DataTables.Net
  • Mix columns from your class properties with client-side only column definitions; e.g. create a column with <a href>’s that do not interfere with filtering, sorting, or other processing.

Before we jump into the nitty-gritty details let’s review how DataTables.Net allows you to control a column’s interaction with a data grid.  Grab the new source code to best follow along.

DataTables.Net Column Definition

You would think that there would be quite a few steps to keep your server-side data paging solution in concert with a client-side implementation, and that would mean customization for each page.   DataTables.Net provides you with fine control over what your columns will do once displayed in a data grid.  Great, but does that mean a lot of configuration on the server side of the equation?  As we’ll soon see, no, it doesn’t.  The configuration you supply on the client is all that you need to do.

The structure aoColumnDefs is the convention we use for column configuration.  From the DataTables.Net site:

aoColumnDefs: This array allows you to target a specific column, multiple columns, or all columns, using the aTargets property of each object in the array (please note that aoColumnDefs was introduced in DataTables 1.7). This allows great flexibility when creating tables, as the aoColumnDefs arrays can be of any length, targeting the columns you specifically want. The aTargets property is an array to target one of many columns and each element in it can be:

  • a string – class name will be matched on the TH for the column
  • 0 or a positive integer – column index counting from the left
  • a negative integer – column index counting from the right
  • the string “_all” – all columns (i.e. assign a default)

So to control whether a column is included in sorting, you configure it in this manner:

/* Using aoColumnDefs */
$(document).ready(function() {
	$('#example').dataTable( {
		"aoColumnDefs": [
			{ "bSortable": false, "aTargets": [ 0 ] }
		] } );
} );

In other words we are defining that the first column – column 0 – will not be included in the sorting operations.  When you review the column options you’ll see that you can apply css classes to multiple columns, include a column in filtering, supply custom rendering of a column, and much more.

In the example that we’ll use for the rest of the post we are going to provide the following capability for a data grid:

  1. The first column – column 0 – will be an action column with a hyperlink, and we will want to exclude it from sort and filtering functions.
  2. Only display a subset of the properties from a class.  Each of these columns should be sortable and filterable.
  3. Maintain the ability to chunk the result set in the multiples specified by DataTables.Net; that is, multiples of 10, 50, and 100.

Here is the configuration from the aspx page SpecifyColumns.aspx:

"aoColumnDefs" : [
   {"fnRender" : function(oObj){
      return "<a href="&quot;center.aspx?centerid=&quot;">Edit</a>";
   },
     "bSortable" : false,
     "aTargets" : [0]},
   {"sName" : "Name",
     "bSearchable" : true,
     "aTargets": [1]},
   {"sName" : "Agent",
    "bSearchable" : true,
    "bSortable" : true,
    "aTargets" : [2]
   },
   {"sName" : "Center", "aTargets": [3]},
   {"fnRender" : function(oObj){
            return "2nd Action List";
         },
     "bSortable" : false,
     "aTargets" : [4]},
   {"sName" : "CenterId", "bVisible" : false, "aTargets" : [5]},
   {"sName" : "DealAmount", "aTargets" : [6]}
]
  1. Column 0 is our custom column – do not sort or search on this content.  Look at oObj.aData[4] – this is a column that we’ll return but not display.  It’s referred to by its position in the data array that DataTables.Net expects back from the server.
  2. Columns 1 – 3 are data and can be sorted.  Note the use of “sName”.  This will be included in a named column list that corresponds to the source property from our class.  This will be very important later on for us, as it allows us to query our data and return it in any order to DataTables.Net.  DataTables will figure out what to do with it before it renders.
  3. Threw in another custom column.  Again, no sort or search, but we’ll see how this affects the server side implementation later on.  Hint – there’s no sName used here.
  4. Another data column.

To recap, we want to be able to define what data we need to display and how we want to interact with that data by only instructing DataTables.Net what to do.  We’re going to be lazy, and not do anything else – the class DataTablePager will respond to the instructions that DataTables.Net supplies, and that’s it.  We’ll review how to do this next.  Sensei thinks you’ll really dig it.

DataTablePager Class Handles your Client Side Requests

If you recall, DataTables.Net communicates to the server via the structure aoData.  Here is the summary of the parameters.  One additional parameter that we’ll need to parse is the sColumns parameter, and it will contain the names and order of the columns that DataTables.Net is rendering.  For our example, we’ll get the following list of columns if we were to debug on the server:

,Name,Agent,Center,,CenterId,DealAmount

These are all the columns we named with sName, plus a placeholder for each custom column that is not found in our class.  This has several implications.  For one, it means that we can no longer simply use reflection to get at our properties, filter them, and send them back down to the client.  The client is now expecting an array where each row will have 7 items, 5 of which are named and two of which are placeholders for items that the client wants to reserve for itself.  Hence the convention of passing an empty item in the delimited string as shown above.
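A minimal sketch of that parsing, assuming the sColumns value has already been pulled out of aoData (the variable names here are illustrative):

//  Sketch:  split sColumns but keep the empty entries, since their positions
//  mark the client-side only columns
string sColumns = ",Name,Agent,Center,,CenterId,DealAmount";
List<string> columns = sColumns.Split(',').ToList();
//  columns.Count == 7; columns[0] and columns[4] are empty placeholders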

It will also mean that we’ll have to separate the columns that we can filter or sort.  Again this is the reason for leaving the custom column names blank.  In other words, we’ll have to keep track of the items that we can search and sort.  We’ll do this with a class called SearchAndSortable:

public class SearchAndSortable
    {
        public string Name { get; set; }
        public int ColumnIndex { get; set; }
        public bool IsSearchable { get; set; }
        public bool IsSortable { get; set; }
        public PropertyInfo Property{ get; set; }

        public SearchAndSortable(string name, int columnIndex, bool isSearchable, bool isSortable)
        {
            this.Name = name;
            this.ColumnIndex = columnIndex;
            this.IsSearchable = isSearchable;
            this.IsSortable = isSortable;
        }

        public SearchAndSortable() : this(string.Empty, 0, true, true) { }
    }

This summarizes what we’re doing with our properties.   The property ColumnIndex records the position in sColumns where the column occurs.  Since we’ll need access to the actual properties themselves we store them in the SearchAndSortable as well, so that we can reduce the number of calls that use reflection. DataTablePager uses a List of SearchAndSortable objects to track what’s going on.  We fill this list in the method PrepAOData():

//  What column is searchable and / or sortable
            //  What properties from T is identified by the columns
            var properties = typeof(T).GetProperties();
            int i = 0;

            //  Search and store all properties from T
            this.columns.ForEach(col =>
            {
                if (string.IsNullOrEmpty(col) == false)
                {
                    var searchable = new SearchAndSortable(col, i, false, false);
                    var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
                                     .ToList();
                    searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;
                    searchable.Property = properties.Where(x => x.Name == col)
                                                    .SingleOrDefault();

                    searchAndSortables.Add(searchable);
                }

                i++;
            });

            //  Sort
            searchAndSortables.ForEach(sortable => {
                var sort = aoDataList.Where(x => x.Name == BSORTABLE + sortable.ColumnIndex.ToString())
                                            .ToList();
                sortable.IsSortable = (sort[0].Value == "False") ? false : true;
            });

We get the properties from our class. Next we traverse the columns and match the property names with the names of the columns. When there is a match, we query aoData and get the column’s search and sort definitions based on the ordinal position of the column in the sColumns variable. DataTables.Net’s convention for communicating this is of the form:

bSortable_ + column index => “bSortable_1” or “bSearchable_2”

We take care of that with this line of code:

var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE +
                                     i.ToString())
                                     .ToList();
searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;

Now we go through the list of properties again, but this time determine if we should sort any of the columns. That happens in the section marked //  Sort. In the end we have a list of properties that corresponds to the columns DataTables.Net has requested, and we have recorded whether each property can be searched (filtered) or sorted.

For filtering, recall that DataTablePager uses the method GenericSearchFilter().  The only alteration here is that we will only add the properties to our query that are defined as searchable:

//  Create a list of searchable properties
            var filterProperties = this.searchAndSortables.Where(x =>
                                        x.IsSearchable)
                                          .Select(x => x.Property)
                                          .ToList();

The rest of the method is unaltered from the prior version. Pretty cool!! Again, we’ll only get the properties that we declared as legal for filtering. We’ve also eliminated any chance of mixing a custom column in with our properties because we did not supply an sName in our configuration.

The method ApplySort() required one change. On the initial load of DataTables.Net, the client will pass up a request to sort on column 0 even though you may have excluded it. When that is the case, we’ll just look for the first column that is sortable and order by that column.

//  Initial display will set order to first column - column 0
//  When column 0 is not sortable, find first column that is
var sortable = this.searchAndSortables.Where(x => x.ColumnIndex ==
                                         firstColumn)
                              .SingleOrDefault();
if(sortable == null)
{
   sortable = this.searchAndSortables.First(x => x.IsSortable);
}

return records.OrderBy(sortable.Name, sortDirection, true);

After we have filtered and sorted the data set we can finally select only those properties that we want to send to the client.  Recall that we have parsed a variable sColumns that tells us what columns are expected.  We’ll pass these names on to the extension method PropertiesToList().  This method will only serialize a property if its column is included, and since we have already pared down our data set as a result of our query and paging, there is very little performance impact.  Here is the new PropertiesToList method:

public static List<string> PropertiesToList<T>(this T obj, List<string> propertyNames)
{
   var propertyList = new List<string>();
   var properties = typeof(T).GetProperties();
   var props = new List<PropertyInfo>();

   //  Find all "" in propertyNames and insert empty value into list at
   //  corresponding position
   var blankIndexes = new List<NameValuePair>();
   int i = 0;

   //  Select and order filterProperties.  Record index position where there is
   //  no property
   propertyNames.ForEach(name =>
   {
      var property = properties.Where(prop => prop.Name == name.Trim())
         .SingleOrDefault();

      if(property == null)
      {
         blankIndexes.Add(new NameValuePair(name, i));
      }
      else
      {
         props.Add(property);
      }
      i++;
   });

   propertyList = props.Select(prop => (prop.GetValue(obj, new object[0]) ?? string.Empty).ToString())
                                        .ToList();

   //  Add "" to List as client expects blank value in array
   blankIndexes.ForEach(index => {
      propertyList.Insert(index.Value, string.Empty);
   });

   return propertyList;
}
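By way of illustration, a call for our example columns might look like this (the tenant variable is hypothetical):

//  Illustrative only:  columnList mirrors the parsed sColumns names in display order
var columnList = new List<string> { "", "Name", "Agent", "Center", "", "CenterId", "DealAmount" };
List<string> row = tenant.PropertiesToList(columnList);
//  row has 7 entries; index 0 and 4 hold string.Empty for the client-side columns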

You might ask why not just pass in the list of SearchAndSortable and avoid using reflection again. You could, but remember that at this point we have reduced the number of items to the page size of 10, 50 or 100 rows, so the reflection calls will not have that great an impact. Also consider whether you simply want a function that selects only those properties that you need. Using SearchAndSortable would narrow the scope of utility, as you can use this method in areas other than prepping data for DataTables.Net.

Now It’s Your Turn

That’s it.  Play with the page named SpecifyColumns.aspx.  You should be able to add and remove columns in the DataTables.Net configuration and they will just work.  This means, however, that you’ll always have to define your columns in your aspx page.  But since we worked really hard the first time around, DataTablePager will still be able to create paged data sets for any class in your domain.

Source code is here.  Enjoy.


How to Embed Grooveshark Widgets in WordPress.com January 4, 2011

Posted by ActiveEngine Sensei in ActiveEngine, Problem Solving.

Readers of this blog will note that there are many references to music, music that rocks. To share a secret, Sensei was once a band nerd. A drummer band nerd, the one that you never could sit next to during a test because Sensei could not stop tapping rudiments. To this day he is still shunned from meetings. Not a bad strategy when you want to leave and get things done!

Prior to 2010 ClearSpring made a widget that allowed you to insert a Grooveshark playlist or single song into a post. Here are a few different samples.  In 2010 they announced that they would no longer support the widget platform, and, well, the widgets would be going away.

Come 1/1/2011 all the songs on this blog were gone.  “What the …” you say.  “How come I hear music playing now?”  Well, credit goes to Panos at WordPress Tips, who has a great post on how to use gigya code to embed flash objects into your posts.

Here are the steps you need to perform:

  1. Create your Grooveshark song widget.  This can be a playlist or a single song.  For our example we’ll do one song.
  2. Once you have saved your song, select “Get Embedded Code”.  Note that the url will read something similar to this: http://widgets.grooveshark.com/finished?widgetid=23332943 – the parameter widgetid is the code you’ll want to copy, as this links you to your song.
  3. Paste the following code into your post:

Don’t forget to change the widgetID code to match the code of your widget.  You should see the result below:

For a playlist change your gigya code to match the following:

How to Create Server-Side Paging for DataTables.Net with ASP.Net December 19, 2010

Posted by ActiveEngine Sensei in .Net, ActiveEngine, C#, DataTables.Net, Fluent, jQuery, New Techniques, Open Source, Problem Solving, Tutorial.

Source code has been updated!! Read about the changes in Dynamically Select Columns with Server-Side Paging and Datatables.Net. If you are new to DataTables.Net and Sensei’s paging solution and want a detailed study of how it works, work through this post first, then get the latest edition.  Note, code links in this post are to the first version.

A central theme for 2010 has been fluency, or the continual practice of certain methods to such a degree that your performance improves and you produce increasingly polished, effective solutions.  For software development this has meant tools to save time and increase quality.  It also means keeping an eye toward making the users of your solutions more efficient as well.  In the spirit of “fluent solutions”, Sensei will end the year with a post that examines how to create a data paging solution for the jQuery data grid plug-in DataTables.Net.

DataTables can turn an HTML table into a fully functional data grid like the one offered by Telerik.  This plug-in offers client-side sorting and filtering/search, as well as support for server-side processing of data.  It is an extremely feature rich tool created by Allan Jardine, and is itself worthy of a series of posts.  For this post on data paging Sensei recommends that you read through these examples to get an idea of what the data paging service needs to achieve.

Let’s get started with the goals we need to achieve when providing server-side data paging support:

  • Send data to client in the multiples or “chunks” that the client requests, and respond when the size of the sets requested is changed by the user.
  • Re-order the data set if the user clicks on a column heading.  Honor the data set size when returning the data.
  • Filter across all columns of data based on user input.  Implement this as partial matches, and again, honor the data set size.

Remember this is about flexibility, so we have the additional goals of:

  • Create a solution that can be reused.
  • Provide a mechanism to accommodate any type of .Net class using generics.

Essentially we want to be able to write code like so:

var tenants = tenantRepository.GetAll();
var dataTablePager = new DataTablePager<Tenant>(jsonAOData, tenants);
var returnDataSet = dataTablePager.Filter();

Before we proceed, Sensei wants to acknowledge those really smart people whose ideas contributed to this solution:

Zack Owens – jQuery DataTables Plugin Meets C#

Jeff Morris – Using Datatables.net JQuery Plug-in with WCF Services

Dave Ward – ASMX ScriptService mistake – Invalid JSON primitive

You may want to download the source before reading the rest of this post.

Communicating with DataTables

DataTables uses the following parameters when processing server-side data:

Sent to the server:

Type Name Info
int iDisplayStart Display start point
int iDisplayLength Number of records to display
int iColumns Number of columns being displayed (useful for getting individual column search info)
string sSearch Global search field
boolean bEscapeRegex Global search is regex or not
boolean bSortable_(int) Indicator for if a column is flagged as sortable or not on the client-side
boolean bSearchable_(int) Indicator for if a column is flagged as searchable or not on the client-side
string sSearch_(int) Individual column filter
boolean bEscapeRegex_(int) Individual column filter is regex or not
int iSortingCols Number of columns to sort on
int iSortCol_(int) Column being sorted on (you will need to decode this number for your database)
string sSortDir_(int) Direction to be sorted – “desc” or “asc”. (Note that the prefix for this variable is wrong in 1.5.x, where iSortDir_(int) was used.)
string sEcho Information for DataTables to use for rendering

Reply from the server

In reply to each request for information that DataTables makes to the server, it expects to get a well formed JSON object with the following parameters.

Type Name Info
int iTotalRecords Total records, before filtering (i.e. the total number of records in the database)
int iTotalDisplayRecords Total records, after filtering (i.e. the total number of records after filtering has been applied – not just the number of records being returned in this result set)
string sEcho An unaltered copy of sEcho sent from the client side. This parameter will change with each draw (it is basically a draw count) – so it is important that this is implemented. Note that it strongly recommended for security reasons that you ‘cast’ this parameter to an integer in order to prevent Cross Site Scripting (XSS) attacks.
string sColumns Optional – this is a string of column names, comma separated (used in combination with sName) which will allow DataTables to reorder data on the client-side if required for display
array of arrays (mixed) aaData The data in a 2D array

The data sent back is in the following form depicted below. Note that aaData is merely an array of strings – there is no column information. This will present a challenge in that you will not be able to simply serialize a collection and pass back the results.

{
    "sEcho": 3,
    "iTotalRecords": 57,
    "iTotalDisplayRecords": 57,
    "aaData": [
        [
            "Gecko",
            "Firefox 1.0",
            "Win 98+ / OSX.2+",
            "1.7",
            "A"
        ],
        [
            "Gecko",
            "Firefox 1.5",
            "Win 98+ / OSX.2+",
            "1.8",
            "A"
        ],
        ...
    ]
}

As you may be aware, if you wish to use ASP.Net web services to serialize JSON you must POST to the service and instruct it to interpret your parameters as JSON. DataTables will POST variables as value pairs and this won’t work for us when POSTing to a web service. We’ll have to translate the variables to a usable format. Luckily DataTables allows us to intervene with the following code, where we create a JSON string by serializing a structure called aoData:

"fnServerData": function ( sSource, aoData, fnCallback ) {

		        	var jsonAOData = JSON.stringify(aoData);

			        $.ajax( {
                                        contentType: "application/json; charset=utf-8",
				        type: "POST",
				        url: sSource,
				        data: "{jsonAOData : '" + jsonAOData + "'}",
				        success: function(msg){
				            fnCallback(JSON.parse(msg.d));
				        },
				        error: function(XMLHttpRequest, textStatus, errorThrown) {
                            alert(XMLHttpRequest.status);
                            alert(XMLHttpRequest.responseText);

                        }
			        });

Our web service can now de-serialize aoData and parse the appropriate parameters. This gives us important items such as how many records to display, what columns to sort on, and what search terms should be applied in a filter.
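To make that concrete, here is a hedged sketch of what such a web service could look like using the DataTablePager class described next; the service name, Tenant class and repository are illustrative, not lifted from the download:

//  Sketch only - an ASMX ScriptService that accepts the serialized aoData
//  and returns the JSON reply that DataTables parses from msg.d
[WebService(Namespace = "http://tempuri.org/")]
[System.Web.Script.Services.ScriptService]
public class PagingService : System.Web.Services.WebService
{
    [WebMethod]
    public string GetTenants(string jsonAOData)
    {
        IQueryable<Tenant> tenants = new TenantRepository().GetAll();   //  hypothetical repository
        var pager = new DataTablePager<Tenant>(jsonAOData, tenants);
        return JsonConvert.SerializeObject(pager.Filter());
    }
}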

DataTablePager Class

DataTablePager.cs is the work horse of our solution.  It will sort, filter and order our data, and as an extra, serialize the results in the format required by aaData.  Here’s the constructor:

public DataTablePager(string jsonAOData, IQueryable<T> queryable)
        {
            this.queryable = queryable;
            this.type = typeof(T);
            this.properties = this.type.GetProperties(BindingFlags.Public | BindingFlags.Instance);
            this.aoDataList = new List<NameValuePair>();
            this.sortKeyPrefix = new List<string>();

            PrepAOData(jsonAOData);
        }

The parameter jsonAOData is the JSON string that contains the variables iDisplayStart, iDisplayLength, etc.  These will be parsed by the method PrepAOData.  The parameter queryable is the collection of records that will be filtered and parsed into JSON format required by DataTables.
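As a rough sketch of the kind of parsing PrepAOData performs (the actual implementation ships with the source; the parameter names follow the table above):

//  Sketch:  aoData arrives as name / value pairs, so deserialize the pairs
//  and pick out the paging variables.  NameValuePair exposes Name and Value.
var aoDataList = JsonConvert.DeserializeObject<List<NameValuePair>>(jsonAOData);
this.displayStart = int.Parse(aoDataList.Single(x => x.Name == "iDisplayStart").Value);
this.displayLength = int.Parse(aoDataList.Single(x => x.Name == "iDisplayLength").Value);
this.echo = aoDataList.Single(x => x.Name == "sEcho").Value;
this.genericSearch = aoDataList.Single(x => x.Name == "sSearch").Value;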

The method Filter() coordinates all of the work.  It’s pretty simple what we want to do:  filter our data based on each column containing the search term, sort the result, then pull out the number of records we need to include in the page, and finally convert the collection into the format DataTables understands.

public FormattedList Filter()
        {
            var formattedList = new FormattedList();

            //  What are the columns in the data set
            formattedList.Import(this.properties.Select(x => x.Name + ",")
                                                 .ToArray());

            //  Return same sEcho that was posted.  Prevents XSS attacks.
            formattedList.sEcho = this.echo;

            //  Return count of all records
            formattedList.iTotalRecords = this.queryable.Count();

            //  Filtered Data
            var records = this.queryable.Where(GenericSearchFilter());
            records = ApplySort(records);

            //  What is filtered data set count now.  This is NOT the
            //  count of what is returned to client
            formattedList.iTotalDisplayRecords = (records.FirstOrDefault() == null) ? 0 : records.Count();

            //  Take a page
            var pagedRecords = records.Skip(this.displayStart)
                     .Take(this.displayLength);

            //  Convert to List of List
            var aaData = new List<List<string>>();
            var thisRec = new List<string>();

            pagedRecords.ToList()
                    .ForEach(rec => aaData.Add(rec.PropertiesToList()));

            formattedList.aaData = aaData;

            return formattedList;
        }

That said, there is some trickery that goes on to make this happen, because we are creating a solution that is going to work with any IQueryable<T> we supply. This means that the filtering and the sorting will need to be dynamic.

To make the filtering dynamic we will build expression trees that will convert each property to a string, convert the string to lower case, then execute a Contains method against the value of that property.  The method GenericSearchFilter(), called from Filter(), accomplishes this with the following lines of code:

//  Excerpt from GenericSearchFilter
MethodInfo convertToString = typeof(Convert).GetMethod("ToString", Type.EmptyTypes);

var propertyQuery = (from property in this.properties
                     let toStringMethod = Expression.Call(
                         Expression.Call(Expression.Property(paramExpression, property), convertToString, null),
                         typeof(string).GetMethod("ToLower", new Type[0]))
                     select Expression.Call(toStringMethod, typeof(string).GetMethod("Contains"), searchExpression))
                    .ToArray();

We get an array of Expressions that, when executed, will tell us if the value matches our search term. What we want is to include the item if ANY of the properties is a match, so this means we have to OR all of the properties together. That can be accomplished with:

for (int j = 0; j < propertyQuery.Length; j++)
{
  //  Nothing to "or" to yet
  if (j == 0)
  {
    compoundOrExpression = propertyQuery[0];
  }

  compoundOrExpression = Expression.Or(compoundOrExpression,
                                              propertyQuery[j]);
}
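Once the loop finishes, the compound expression still has to be wrapped in a lambda before Where can use it. A minimal sketch, assuming paramExpression is the ParameterExpression created for T earlier in GenericSearchFilter:

//  Sketch:  turn the expression tree into a predicate that Where() accepts
var predicate = Expression.Lambda<Func<T, bool>>(compoundOrExpression, paramExpression);
var filtered = this.queryable.Where(predicate);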

So with what is listed above we would be able to match all properties against a single search term. Pretty cool. But DataTables raises the bar even higher. If you were to go to the samples page and filter using multiple partial words you would find that you could perform some very effective searches with phrases like “new chic”. This would select all records that had properties containing “new” OR “chic”. Imagine the scenario where your user wants to find all cities “New York” or “Chicago”. We’ve all been there where we have a grid and can only search for one term, or worse, where we have to add a row to a search filter grid and constantly push a “query” button to perform our searches. DataTables does all of this with one search box – just type and the filtering begins.

GenericSearchFilter() handles that scenario. First the search term is parsed into individual terms if there is a ” ”  supplied in the string. This means we will have to perform the propertyQuery for each term that we have. To return all of the records that correspond to each term we still need to perform the OR in groups, but then we need to AND these predicates together so we can get all of the groups per individual term. Here’s the source, edited slightly for readability:

//  Split search expression to handle multiple words
var searchTerms = this.genericSearch.Split(' ');

for (int i = 0; i < searchTerms.Length; i++)
{
    var searchExpression = Expression.Constant(searchTerms[i].ToLower());

    //  For each property, create a contains expression
    //  column => column.ToLower().Contains(searchTerm)
    //  Edited for clarity - create the array propertyQuery logic is here ...
    var propertyQuery = ...

    //  Inner loop for grouping all OR's for this search term
    for (int j = 0; j < propertyQuery.Length; j++)
    {
        //  Nothing to "or" to yet
        if (j == 0)
        {
            compoundOrExpression = propertyQuery[0];
        }

        compoundOrExpression = Expression.Or(compoundOrExpression, propertyQuery[j]);
    }

    //  First time around there is no And, only first set of or's
    if (i == 0)
    {
        compoundAndExpression = compoundOrExpression;
    }
    else
    {
        compoundAndExpression = Expression.And(compoundAndExpression, compoundOrExpression);
    }
}

So GenericSearchFilter will build a humongous expression tree for all the properties in your class. To make this usable for the Where clause we convert it using Expression.Lambda, and our Where clause just goes about its merry way. Because we have used generics, you can supply any class from your assemblies. One caveat, and Sensei is trying to find a resolution: if you have a string property that is set to null, the expression tree fails. You’ll note that in the classes supplied in the sample, the properties of type string in the Tenant class are defaulted to empty in the constructor.  A small price to pay for some great functionality.

To sort our data we use the method ApplySort():

private IQueryable<T> ApplySort(IQueryable<T> records)
{
    string firstSortColumn = this.sortKeyPrefix.First();
    int firstColumn = int.Parse(firstSortColumn);

    string sortDirection = "asc";
    sortDirection = this.aoDataList.Where(x => x.Name == INDIVIDUAL_SORT_DIRECTION_KEY_PREFIX + "0")
                                   .Single()
                                   .Value
                                   .ToLower();

    if (string.IsNullOrEmpty(sortDirection))
    {
        sortDirection = "asc";
    }

    return records.OrderBy(this.properties[firstColumn].Name, sortDirection, true);
}

An extension method OrderBy accepts the name of a column and the sort direction as parameters. The parameter initial indicates whether this is the first sort clause, so we can accomplish a multi-property sort with syntax like:

var sortedRecords = records.OrderBy("State", "desc", true)
                                      .OrderBy("City", "asc", false);

public static IOrderedQueryable<T> OrderBy<T>(this IQueryable<T> source, string property, string sortDirection, bool initial)
        {
            string[] props = property.Split('.');
            Type type = typeof(T);
            ParameterExpression arg = Expression.Parameter(type, "x");
            Expression expr = arg;
            foreach (string prop in props)
            {
                // use reflection (not ComponentModel) to mirror LINQ
                PropertyInfo pi = type.GetProperty(prop);
                expr = Expression.Property(expr, pi);
                type = pi.PropertyType;
            }
            Type delegateType = typeof(Func<,>).MakeGenericType(typeof(T), type);
            LambdaExpression lambda = Expression.Lambda(delegateType, expr, arg);

            string methodName = string.Empty;

            //  Asc or Desc
            if (sortDirection.ToLower() == "asc")
            {
                //  First clause?
                if (initial && source is IOrderedQueryable<T>)
                {
                    methodName = "OrderBy";
                }
                else
                {
                    methodName = "ThenBy";
                }
            }
            else
            {
                if (initial && source is IOrderedQueryable<T>)
                {
                    methodName = "OrderByDescending";
                }
                else
                {
                    methodName = "ThenByDescending";
                }
            }

            object result = typeof(Queryable).GetMethods().Single(
                    method => method.Name == methodName
                            && method.IsGenericMethodDefinition
                            && method.GetGenericArguments().Length == 2
                            && method.GetParameters().Length == 2)
                    .MakeGenericMethod(typeof(T), type)
                    .Invoke(null, new object[] { source, lambda });
            return (IOrderedQueryable<T>)result;
        }

All good things …
It’s been a long ride, this post. A lot of code discussed, a lot of ground covered. The solution is here.  As always, play around and see how this can help you. If anything breaks, tell Sensei. If you have improvements, tell Sensei. DataTables is a great tool for your arsenal, hopefully the DataTablePager can help you integrate paging with large datasets as part of your solution offering.

Right now Sensei wants to sign off by toasting to you for wading through all of this, and for having the desire to build up your skills.  Obtaining fluency in what you do is a hard road to travel, but it’s worth it because you get things done quicker and better with each session.

Be sure to read about the latest version in “Dynamically Select Columns with Server-Side Paging and Datatables.Net”.

How Embedded Scripting Makes Dynamically Generated Test Data Possible in ASP.Net – DataBuilder Part 2 November 6, 2010

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, ASP.Net, C#, CS-Script, DataBuilder, JSON.Net, NBuilder, Problem Solving.

Part 2 of a 3 part series.  For the latest DataBuilder capabilities, read this post or download the new source code from here.

Last episode Sensei unveiled a useful little tool called DataBuilder.  DataBuilder helps you to generate test data for your domain objects.  Just point DataBuilder to your assemblies, and with the magic of NBuilder and CS-Script you can create test data as JSON.  How is this possible?  This post will focus on the behind the scenes magic that makes DataBuilder so flexible.

The main problem that DataBuilder solves is that to create test data for your classes you normally need to fire up Visual Studio and a project, create code, compile, etc., and this can cause needless context switching and headache.  What if you simply wish to mock up a UI and need some data sets to work with?  DataBuilder helps in that you can create test data for any existing assembly.  You can also create different types of test data based on whatever criteria you need.  This is accomplished by taking the input supplied in the Snippet Editor screen, compiling it to an in-memory assembly and executing it.  No need to fire up Visual Studio and add a TestGeneration project to your .Net solution.

The “dynamic” nature of DataBuilder is implemented with CS-Script.  In short, CS-Script is an embedded scripting system that uses ECMA-compliant C#, with full access to the CLR and OS.  For an in-depth review see Oleg Shilo’s fantastic article on CodeProject where he describes his product.

As Oleg describes, CS-Script will compile your code into an assembly, load that assembly into a separate app domain, then execute that assembly.  There are two scenarios that can be used to host your script: the Isolated Execution Pattern, where the host and script have no knowledge of each other, and the Simplified Hosting Model for two-way type sharing between the host and the script.  The Simplified Hosting Model allows the script file to access assemblies loaded in the host, as well as pass data back to the host.  DataBuilder uses the Simplified Hosting Model.

Before we get into the particular DataBuilder code, let’s review some samples that Oleg has provided.  The scenario presented is one where you wish to remotely load a script and execute it, and the recommendation is to use interface inheritance to avoid the task of using reflection to invoke the method.

// Host contains this interface:
public interface IWordProcessor
{
    void CreateDocument();
    void CloseDocument();
    void OpenDocument(string file);
    void SaveDocument(string file);
}

//  The script file implements the interface
public class WordProcessor : IWordProcessor
{
    public void CreateDocument() { ... }
    public void CloseDocument() { ... }
    public void OpenDocument(string file) { ... }
    public void SaveDocument(string file) { ... }
}

//  Host executes the script
AsmHelper helper = new AsmHelper(CSScript.Load("script.cs", null, true));

//the only reflection based call
IWordProcessor proc = (IWordProcessor)helper.CreateObject("WordProcessor");

//no reflection, just direct calls
proc.CreateDocument();
proc.SaveDocument("MyDocument.cs");

There are other methods for invoking methods and scripts. It’s well worth your time reading through the script hosting guidelines as Oleg covers performance, reflection, interface alignment with duck typing and other facets that are important to CS-Script.

Now let’s focus on DataBuilder’s embedded scripting implementation.  DataBuilder uses the interface inheritance approach to execute the script that you supply.  Here’s the interface:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DataGenerator.ScriptHost
{
    public interface IScriptRunner
    {
        void RunScript();
        void RunScript(Dictionary<string, object> parameters);
    }
}

And here is an implementation of the interface:

//CSScript directives - DO NOT REMOVE THE css_ref SECTION!!!
//css_ref System.Core;
//css_ref System.Data.ComponentModel;
//css_ref System.Data.DataSetExtensions;
//css_ref System.Xml.Linq;

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using DataGenerator.Core;
using DataGenerator.ScriptHost;
using System.Linq.Expressions;
using System.Linq;
using Newtonsoft.Json;
using FizzWare.NBuilder;
//  Add a reference to your assemblies as well!!
using UnRelatedAssembly;

public class CreateTestFile : IScriptRunner
{
    public void  RunScript(Dictionary<string,object> parameters)
    {
        var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

        parameters["JsonDataSet"] = JsonConvert.SerializeObject(agents);
    }

    public void  RunScript()
    {
 	    throw new NotImplementedException();
    }
}

The script host is derived from ScriptHostBase.  ScriptHostBase will perform the compilation of your script with the method CompileScript(), as well as fetching any remote assemblies that you want to include.  This is a great point of flexibility as it allows you to point to any assembly that you have access to.  Assemblies can come from multiple locations, and as long as you know the namespaces you can include the classes from those assemblies in your scripts.

        /// <summary>
        /// Compile a script and store in a runner object for later
        /// execution
        /// </summary>
        protected void CompileScript()
        {
            if(string.IsNullOrEmpty(this.Script))
            {
                throw new ArgumentNullException("ScriptHostBase - CompileScript : Script can not be blank");
            }

            if (string.IsNullOrEmpty(this.TypeName))
            {
                throw new ArgumentNullException("ScriptHostBase - CompileScript : TypeName can not be blank");
            }

            //  Has an assembly already been loaded?
            string names = string.Empty;
            AppDomain appDomain = AppDomain.CurrentDomain;

            var assemblyPaths = appDomain.GetAssemblies()
                                    .ToList()
                                    .Select(x => x.FullName)
                                    .ToList();

            var fizzWare = assemblyPaths.Where(x => x.Contains("FizzWare.NBuilder"))
                                            .SingleOrDefault();

            var assemblyLoadList = new List<string>();
            assemblyLoadList = this.AssemblyPaths.ToList();

            //  Load if needed
            if (fizzWare != null)
            {
                string remove = assemblyLoadList
                                     .Where(x => x.Contains("FizzWare.NBuilder"))
                                     .SingleOrDefault();
                assemblyLoadList.Remove(remove);
            }
            else
            {
                string path = ConfigurationManager.AppSettings["FizzWarePath"].ToString();
                assemblyLoadList.Add(path);
            }

            Assembly compiler = CSScript.LoadCode(this.Script, assemblyLoadList.ToArray());
            AsmHelper asmHelper = new AsmHelper(compiler);
            this.runner = asmHelper.CreateObject(this.TypeName);
        }

You may be scratching your head at the lines of code that explicitly load FizzWare.NBuilder.  When first constructing DataBuilder, Sensei struggled with getting NBuilder to compile with the new script.  CS-Script uses an algorithm to probe directories for assemblies as well as probing scripts to resolve namespaces.  In some cases, this probe will NOT locate a namespace based on the naming conventions of an assembly. CS-Script has provisions for handling those scenarios, allowing you to explicitly load an assembly.  The issue Sensei had at first was that the first execution of a script would complete successfully as NBuilder would be loaded.  The problem lay with the second run of the script, as an exception would be thrown claiming that NBuilder was already loaded and hence there was no need to explicitly load it again!  The work around is to query the loaded assemblies and, if NBuilder is loaded, remove the path to the FizzWare.NBuilder assembly from the assembly load list and prevent the script from reloading NBuilder.

Classes derived from ScriptHostBase are responsible for implementing ExecuteScript method.  In this implementation StringScriptHost derives from ScriptHostBase and has the following ExecuteScript method:

        /// <summary>
        /// Compile a script and invoke
        /// </summary>
        public override void ExecuteScript()
        {
            base.CompileScript();

            IScriptRunner scriptRunner = (IScriptRunner)this.runner;
            scriptRunner.RunScript(Parameters);
        }

Other script hosts can be created to handle scenarios where scripts are stored in a document database, in text fields in SQL Server, or elsewhere.

The process of including your import statements, locating any scripts located on a share and passing parameters to scripts is all controlled by the ScriptController.  There are two constructors with one allowing you to specify the script location:

public ScriptController(string scriptShare){}

With the ScriptController you can execute snippets that you type free form with the method ExecuteSnippet.

public void ExecuteSnippet(string snippet, Dictionary<string, object> parameters)
        {
            Enforce.ArgumentNotNull<string>(snippet, "ScriptController.ExecuteAdHoc - snippet can not be null");

            //  Wrap snippet with class declaration and additional using ;
            snippet = snippetHeader + this.UsingFragment + snippetClassName +
                        snippet + snippetFooter;

            var scriptHost = new StringScriptHost();
            scriptHost.Script = snippet;
            scriptHost.TypeName = "AdHoc";
            scriptHost.Parameters = parameters;
            scriptHost.AssemblyPaths = this.assemblyPaths.ToArray();

            scriptHost.ExecuteScript();
        }

Another method, ExecuteScript, is used for executing script files that you have saved on a share.  As you read through the ExecuteSnippet method, you’ll note that the controller combines the required using statements and class declaration with your snippet.  It’s really just concatenating strings to build a complete script in the format displayed above in the CreateTestFile.cs code.

You create a Dictionary<string, object> called parameters and pass this to the ScriptController.Execute methods.  This allows you great flexibility, as the scripts can instantiate different objects and return them to the host application for further use.  In the case of DataBuilder we are expecting a JsonDataSet object, which is our serialized test data in the form of JSON.
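A short usage sketch (the snippet string is whatever you typed in the editor; the variable names are illustrative):

//  Sketch:  execute a free-form snippet and read the serialized result back out
var parameters = new Dictionary<string, object>();
var controller = new ScriptController();
controller.ExecuteSnippet(snippet, parameters);
string jsonDataSet = parameters["JsonDataSet"].ToString();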

That’s it.  Hopefully you find DataBuilder and these posts useful.  CS-Script is quite powerful and can allow you to execute operations without the need to constantly recompile your projects.  It also allows you to execute operations dynamically.  DataBuilder would not be possible without it.  When duty calls and fluent solutions are needed, CS-Script and embedded scripting are pretty incredible. Source code is here.

Dynamically Create Test Data with NBuilder, JSON and .Net October 24, 2010

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Ajax, ASP.Net, C#, Fluent, LINQ, Open Source, Problem Solving.

Part 1 of a 3 part series.  For the latest DataBuilder capabilities, read this post or download the new source code from here.

Building test data should be as easy as this:

var agentList = Builder<SalesAgent>.CreateListOfSize(5)
                           .WhereTheFirst(1)
                                  .Have(x => x.FirstName = "James")
                                  .And(x => x.LastName = "Kirk")
                            .AndTheNext(1)
                                  .Have(x => x.FirstName = "Bruce")
                                  .And(x => x.LastName = "Campbell")
                            .Build()
                            .ToList();

Wouldn’t it be nice if all the properties of your objects were automatically populated:

Product:
       Id              : 1
       Title           : "Title1"
       Description     : "Description1"
       QuantityInStock : 1

NBuilder provides you with a great fluent interface to accomplish this with ease.  You can even achieve scenarios where you create hierarchies of data, set property values on a range of objects in a list, and even create a specified range of values that you can use to populate other objects.  Peruse through the samples and you will see that NBuilder quite capably maps values to the public properties of your objects.  A real time saver.

Sensei is going to kick it up a notch and provide you with a means to create test data without having to recompile your projects.  This is ideal for when you want to create UI prototypes.  DataBuilder uses CS-Script and NBuilder to create a web based data generation tool that can read assemblies and will allow you to script a process that will generate test data in the form of JSON.

This adventure is split into two parts.  First a quick demo, then instructions on how to configure DataBuilder for your environment.  A deeper discussion of CS-Script and embedded scripting in .Net will be part of the sequel to this action/adventure, as we all know the second movie in the series is always the best!

Operating DataBuilder

In short you have three things to do:

  • Identify the assemblies that contain the objects you want to generate test data for.  The path to the files can be anywhere on your system.  For convenience there is a folder called Assembly that you can copy the files to.  Multiple assemblies from different locations can be imported.
  • Create the import statements.
  • Create the code snippet with the NBuilder statements that will generate your data.

Here’s a screen shot of DataBuilder with each section that corresponds with the three goals stated above.

And here is an example that we’ll be working with.

var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName ="Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

parameters["JsonDataSet"] = JsonConvert.SerializeObject(agents);

Note that after the end of the code that creates the objects, you need to include a statement

parameters["JsonDataSet"] = JsonConvert.SerializeObject(List);

Without that statement you will not get your data serialized.  If you’ve entered the data as shown, hit the Build button and the resulting JSON is placed in the output box.  That’s it.  Looking through the output you’ll note that the first two sales dudes are James Kirk and Bruce Campbell, while the remaining records are completed by NBuilder.

[{"FirstName":"James","LastName":"Kirk","Salary":1.0,"RegionId":1,"RegionName":"RegionName1","StartDate":"\/Date(1287892800000-0400)\/"},{"FirstName":"Bruce","LastName":"Campbell","Salary":2.0,"RegionId":2,"RegionName":"RegionName2","StartDate":"\/Date(1287979200000-0400)\/"},{"FirstName":"FirstName3","LastName":"LastName3","Salary":3.0,"RegionId":3,"RegionName":"RegionName3","StartDate":"\/Date(1288065600000-0400)\/"},{"FirstName":"FirstName4","LastName":"LastName4","Salary":4.0,"RegionId":4,"RegionName":"RegionName4","StartDate":"\/Date(1288152000000-0400)\/"},{"FirstName":"FirstName5","LastName":"LastName5","Salary":5.0,"RegionId":5,"RegionName":"RegionName5","StartDate":"\/Date(1288238400000-0400)\/"}]

You can also load a script and execute it.  That’s done on the “Script Loader” tab.  The location of the scripts is set in the WebConfig and the key name is ScriptPath.  Here’s the screen shot:

Anatomy of a DataBuilder Script

Here’s the complete C# script file that builds your data.  It’s just a class:

//CSScript directives - DO NOT REMOVE THE css_ref SECTION!!!
//css_ref System.Core;
//css_ref System.Data.ComponentModel;
//css_ref System.Data.DataSetExtensions;
//css_ref System.Xml.Linq;

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using DataGenerator.Core;
using DataGenerator.ObjectTypes;
using DataGenerator.ScriptHost;
using System.Linq.Expressions;
using System.Linq;
using Newtonsoft.Json;
using FizzWare.NBuilder;
//  Add a reference to your assemblies as well!!
using UserDeploymentDomain;

public class CreateTestFile : IScriptRunner
{
    public void  RunScript(Dictionary<string, object> parameters)
    {
        var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

        parameters["JsonDataSet"] = JsonConvert.SerializeObject(agents);
    }

    public void  RunScript()
    {
 	    throw new NotImplementedException();
    }
}

The very top section “CSScript Directives” is required by CS-Script.  These are directives that instruct the CS-Script engine to include assemblies when it compiles the script.  The imports section is pretty straight forward.

You’ll note that the script inherits from an interface.  This is a convention used by CS-Script to allow the host and script to share their respective assemblies.  Sensei will discuss that in the next post.  The RunScript method accepts a Dictionary<string, object> that contains the parameters.  This will house the JsonDataSet that is expected for the screen to display the output of your data.

Advanced NBuilder Experiments
The beauty of NBuilder is that you can create test data that goes beyond “FirstName1”, and allows you to quickly create data that matches what the business users are used to seeing. If you think about it, you should be able to generate test data that will exercise any rules that you have in the business domain, such as “Add 5% tax when shipping to New York”. With the scripting capability of DataBuilder you can create suites of test data that can evolve as you test your system. You could also use the JsonDataSet to create mocks of your objects, and maybe use them for prototyping your front end.

We’ll do a quick sample. Our scenario is to assign real regions to sales agents. Furthermore, we want to choose from only a range of regions and assign them at random.

First we build the Regions:

var regions= Builder<Region>.CreateListOfSize(4)
	.WhereTheFirst(1)
		.Have(x => x.State = "Texas")
	.AndTheNext(1)
		.Have(x => x.State = "California")
	.AndTheNext(1)
		.Have(x => x.State = "Ohio")
	.AndTheNext(1)
		.Have(x => x.State = "New York")
	.Build();

Now we’ll create the SalesAgents and, using the Pick method from NBuilder, we’ll randomly assign a region to the sales agents:

var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereAll()
                           .HaveDoneToThem(x => x.RegionName = Pick<Region>.RandomItemFrom(regions).State)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

The result set now has the range of states distributed to the Sales Agents. Looks like James Kirk needs to cover Texas. You may need to view the source to see the output.

[{"FirstName":"James","LastName":"Kirk","Salary":1.0,"RegionId":1,"RegionName":"Texas","StartDate":"\/Date(1287892800000-0400)\/"},{"FirstName":"Bruce","LastName":"Campbell","Salary":2.0,"RegionId":2,"RegionName":"Texas","StartDate":"\/Date(1287979200000-0400)\/"},{"FirstName":"FirstName3","LastName":"LastName3","Salary":3.0,"RegionId":3,"RegionName":"California","StartDate":"\/Date(1288065600000-0400)\/"},{"FirstName":"FirstName4","LastName":"LastName4","Salary":4.0,"RegionId":4,"RegionName":"California","StartDate":"\/Date(1288152000000-0400)\/"},{"FirstName":"FirstName5","LastName":"LastName5","Salary":5.0,"RegionId":5,"RegionName":"Ohio","StartDate":"\/Date(1288238400000-0400)\/"}]

Configure DataBuilder For Your Environment
Given that DataBuilder is loading assemblies, you will want to run it either in your dev environment or on a test server where your co-workers won’t mind if you need to take IIS up and down. Also, you’ll want to work with a copy of your assemblies in case you need to make a quick change. There are times when IIS will not release a file, and if you need to make changes to the assemblies themselves it’s more convenient to copy them after you’ve re-compiled.

There are two settings you need to change in the web.config to match your environment.

ScriptPath – Point this to the share where you want to save any scripts. DataBuilder will scour the directory and list anything you place in there.

FizzWarePath – This needs to point to the location of the NBuilder dll. Most likely this will be the bin folder of the DataBuilder website. In the follow-up post Sensei will explain what this does.
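As a point of reference, here’s a sketch of what those entries might look like in the appSettings section of web.config — the key names follow the settings above, but the paths are placeholders you’d swap for your own:

<appSettings>
  <!-- Share where DataBuilder looks for and saves scripts (placeholder path) -->
  <add key="ScriptPath" value="\\devserver\DataBuilderScripts" />
  <!-- Location of the NBuilder dll, typically the site's bin folder (placeholder path) -->
  <add key="FizzWarePath" value="C:\inetpub\wwwroot\DataBuilder\bin" />
</appSettings>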

Wrapping Up For Now

We covered a lot on the whirlwind tour of DataBuilder.  There’s a lot more that is of interest, particularly with respect to the embedded scripting provided by CS-Script.  In the next installment we’ll cover the scripting aspect in more detail.  For now, download the source, experiment, and have fun building your data sets.  Here’s the source for DataBuilder with unit tests.

Deserializing to Persistent AnonymousTypes with JSON.Net October 9, 2010

Posted by ActiveEngine Sensei in .Net, .Net Development, ActiveEngine, C#, Problem Solving.
Tags: , , ,
1 comment so far

A few weeks back Sensei unleashed a crazy idea regarding a class AnonymousType that could persist values from an anonymous object.  The AnonymousType, created by Hugo Benocci, models an individual object.  In a sense this is a hyper-charged Dictionary of properties that represent an object.  It’s meta data.  This is similar to a concept called the Adaptive Object Model, the theory that you create mechanisms to describe what your objects should do.  Instead of having a class for SalesAgent or Car you have classes that represent the classes, attributes, relationships and behavior in your domain.  In other words, you create a meta data modeler and feed it the criteria that would represent SalesAgent, Car, etc.

Having a “sound-of-one-hand-clapping” moment, Sensei realized that while “Persistent AnonymousTypes” was in the title of the post, no mechanism for serializing the AnonymousType was included!!  “What the …”.  Jeeezz!  “Hell, that should be easy”, Sensei says.  Grab JSON.Net and with elbow grease make it work, right?  Anybody?

One thing that should be immediately clear is that all the meta data is locked up in the AnonymousType object, so you can’t just write:

string json = JsonConvert.SerializeObject(anonymousType);

Instead we need a way to represent all the properties of our AnonymousType and preserve each property’s name, its type, and the underlying value.  Something like:

public class NameTypeValue
{
  public string Name { get; set; }
  public Type Type{get; set;}
  public object Value { get; set; }
}

And wouldn’t it be nice if we could take a serialized stream of an actual object and convert that into an AnonymousType?  Thinking further ahead, it would be rather easy to pass around a list of NameTypeValues, as you could easily send and receive this object from a web client or other front end, building yourself a modeling or code generation tool.

Serializing the object depicted above is pretty trivial.  Using a Func<Dictionary<string, object>, string, string> we can serialize any way we wish with two tiny methods:

public string ToJSON(Func<Dictionary<string, object>, string, string> function, string jsonObjectName)
{
    return function(_Values, jsonObjectName);
}

///  Method to serialize.  You can come up with your own!!
public string SerializeWithJObject(Dictionary<string, object> values, string name)
{
    var jsonObject = new JObject();

    foreach (KeyValuePair<string, object> property in values)
    {
        jsonObject.Add(new JProperty(property.Key, property.Value));
    }

    return jsonObject.ToString();
}
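Putting the two together looks something like this — a sketch that assumes AnonymousType can be newed up and populated via the Set method shown later in this post:

//  'agent' assumes AnonymousType has a parameterless constructor and the
//  Set method shown further down.
var agent = new AnonymousType();
agent.Set("Name", "Sales Guy Rudy");
agent.Set("Department", 45);

//  SerializeWithJObject matches the Func<Dictionary<string, object>, string, string>
//  signature that ToJSON expects, so the method group can be passed straight in.
string json = agent.ToJSON(agent.SerializeWithJObject, "Agent");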

If there is another mechanism for serialization that you wish to use you are free to come up with your own.  For illustration here is the JSON output of an AnonymousType for a sales agent, and followed by the JSON for an actual Agent object:

Agent JSON ==> {"Name":"Sales Guy Rudy","Department":45}

AnonymousType JSON ==> {  "Name": "Sales Guy Rudy",  "Department": 45}


Now that we can simply serialize our AnonymousType with the output matching that of an actual object, we just need a way to interpret a JSON stream and build an AnonymousType.  Along with the discussion, Sensei will talk about the second “sound-of-one-hand-clapping” moment he had when working with JSON.Net.  As you may have already surmised, you need to describe the Type of each property in order for deserialization to happen.  Sensei didn’t, and took a trip to the valley of frustration.

Ok.  We have a stream of JSON with the Name, Value and Type of each property for an object.  AnonymousType has a Set method to set a new property:

        /// <summary>
        /// Sets the value of a property on an anonymous type
        /// </summary>
        /// <remarks>Anonymous types are read-only - this saves a value to another location</remarks>
        public void Set(string property, object value) {
            this.Set<object>(property, value);
        }

        /// <summary>
        /// Sets the value of a property on an anonymous type
        /// </summary>
        /// <remarks>Anonymous types are read-only - this saves a value to another location</remarks>
        public void Set<T>(string property, T value) {

            //check for the value
            if (!this.Has(property)) {
                this._Values.Add(property, value);

            }
            else {

                //try to set the value
                try {
                    this._Values[property] = value;
                }
                catch (Exception ex) {
                    throw new Exception(
                        string.Format(
                            AnonymousType.EXCEPTION_COULD_NOT_ACCESS_PROPERTY,
                            property,
                            (value == null ? "null" : value.GetType().Name),
                            ex.Message
                            ),
                            ex);
                }
            }

        }

It’s straightforward to accept a NameTypeValue object and perform:

public void AddProperty(string objectName, NameTypeValue nameTypeValue)
{
    //  Object doesn't exist?  Add.
    if (objects.ContainsKey(objectName) == false)
    {
        objects.Add(objectName, new List<NameTypeValue>());
    }

    var properties = objects[objectName];

    //  All properties are unique
    var existingProperty = properties.Where(x => x.Name == nameTypeValue.Name)
                                     .SingleOrDefault();

    if (existingProperty == null)
    {
        properties.Add(nameTypeValue);
    }
}

and taking this a step further, a List<NameTypeValue> can supply all properties for an object:

properties.ForEach(x => { anonymousType.Set(x.Name, x.Value); });

Accepting a JSON stream of a List<NameTypeValue> should be easy-cheesey mac-n-peasey.  The first version of this looked like the following:

public AnonymousType DeserializeFromJSONProperties(string objectName, string json)
{
  Enforce.ArgumentNotNull(objectName, "AnonFactory.Deserialize - objectName can not be null");
  Enforce.ArgumentNotNull(json, "AnonFactory.Deserialize - json can not be null");

  List<NameTypeValue> propertyList =
      JsonConvert.DeserializeObject<List<NameTypeValue>>(json);

  //  Add properties.  Make sure int is not deserialized to a long since JSON.Net
  //  makes best guess
  propertyList.ForEach(x => AddProperty(objectName, x));

  return CreateAnonymousType(objectName);
}

But one-moooorrree-thing!  Sensei discovered that JSON.Net, when presented with an integer like 5, will deserialize to the largest possible type when not presented with a target.  In other words, when you have this JSON:

{"Department" : 45}

and deserialize to an object, it must accommodate the largest possible type in order to avoid truncating the data.  That means an int is deserialized as Int64!!  The first round of testing was quite aggravating, as AnonymousType would accept the property into its schema, but when you went to fetch that value later on you would get an exception.  In other words, when you did this:

//  Found in JSONTests.MakeItFail()
var anonFactory = new AnonFactory();
var darrellDept = new NameTypeValue();
darrellDept.Name = "Department";
darrellDept.Value = 45;

var darrellName = new NameTypeValue();
darrellName.Name = "Name";
darrellName.Value = "Darrell";

var propertyList = new List<NameTypeValue>();
propertyList.Add(darrellDept);
propertyList.Add(darrellName);

//  Create JSON stream of properties
string darrellPropertyJSON = JsonConvert.SerializeObject(propertyList);

//  Try to deserialize and create an AnonymousType object
var otherDarrell = anonFactory.DeserializeFromJSONProperties("Agent", darrellPropertyJSON);
Assert.AreEqual(otherDarrell.Get<int>("Department"), 45);

you got an InvalidCastException.

Luckily you have the Type so you can perform a conversion as you deserialize the property and add it to AnonymousType’s Dictionary<string, object>.  Here’s the new version:

propertyList.ForEach(x => AddProperty(objectName, ConvertTypeFromDefinition(x)));

private NameTypeValue ConvertTypeFromDefinition(NameTypeValue nameTypeValue)
{
  if (nameTypeValue.Type != nameTypeValue.Value.GetType())
  {
    nameTypeValue.Value = Convert.ChangeType(nameTypeValue.Value, nameTypeValue.Type);
  }

  return nameTypeValue;
}

When you look at the new version of the AnonymousType project you’ll note that serializing is handled by the AnonymousType itself, while a factory class is used for building an AnonymousType from NameTypeValues and for deserializing JSON as well.  Sensei struggled a bit with this, as on the one hand if AnonymousType was responsible for serializing itself, should it also be able to deserialize a stream?  On the other hand, a factory seemed logical since you could have a registry of AnonymousType objects, thereby centralizing the creation and management of AnonymousTypes.  Don’t like it – create your own and share!  Regardless, looks like we can fit through the mini-Stargate now.  Here’s version 2.

Janga – A Validation Framework with a Fluent API September 26, 2010

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Business Processes, C#, Design Patterns, Expression Trees, Fluent, LINQ, New Techniques, Problem Solving.
Tags: , , , , , ,
add a comment

Why can’t we write code that reads like this:

bool passed = employee.Enforce()
                    .When("Age", Compares.IsGreaterThan, 45)
                    .When("Department", Compares.In, deptList)
                    .IsValid();
if(passed)
{
    SomeProcess();
}

One of the enduring challenges for software developers and business is to create abstractions that accurately represent concrete rules for business operations.  As opposed to operating like our tribal ancestors where you had to kill a goat, start a fire and listen to the blind boy tell the tale told for thousands of years, today we’d like to be able to read stories ourselves.  Hopefully the story that we read matches the reality of what we have implemented in our code.  Many nested if statements can quickly make verifying that the code matches the story very difficult.

A fluent validation API can assist with this.  Look at the code at the top of the post.  You can show that to most people without having to get out the smelling salts.  For your fellow developers it creates a succinct way to express precisely what the logic is.  They’ll love you for it.

Janga is a fluent validation framework for creating such an API.  There are three goals to be met here, and Janga fulfills them all:

Goal 1 – Be able to chain “When” clauses together.  Each test – represented by the “When” clause – needs to be chained together.

Goal 2 – Accept a test on any object property where the test criteria is defined in the form of x <= y at runtime.  The types of objects and their properties will not be known until runtime, so our framework must be able to analyze an object and construct a test against each property as it is presented.  This is NOT the specification pattern, where you define delegates ahead of time.

Goal 3 –  Flexibly handle errors by either halting on the first error, or by proceeding with each test and logging each error as it is encountered.

The code Sensei will present here fulfills all of these goals and gives us the fluent magic we see in the sample at the top of this post.  Before we delve into the details, the sources for the ideas and explanations of lambda expressions, fluent APIs, and expression trees should be acknowledged and applauded, because they got Sensei thinking along the right path:

Fluent Validation API

Roger Alsing – Fluent Argument Validation Specification

Raffaele Garofalo – How to write fluent interface with C# and Lambda.

Lambdas, Expression Trees, Delegates, Predicates

Expression Tree Basics – Charlie Calvert’s Community Blog

Marc Gravell – Code, code and more code.: Explaining Expression

Marc Gravell – Code, code and more code.: Express yourself

Implementing Dynamic Searching Using LINQ (check the section regarding dynamic expressions.)

Creating this API is a twisted cluster-wack of a Zen puzzle.  The code for this solution consists of one class and three extension methods.  We’ll make use of generics, delegates and expression trees to evaluate our When clauses.  In the end we’ll see that with very little code we get a lot of mileage.  It took Sensei a long time to wrap his head around how to piece all of these things together, so hopefully the explanation will be clear.  Note that the solution has tests that demonstrate how to use the framework, so if you want to skip the madness and just try things out, go for it.

Goal 1:  Chaining When clauses together

To get the ball rolling, there is an extension method Enforce that accepts the object you wish to evaluate and encapsulates it in a Validation class.

public static Validation<T> Enforce<T>(this T item, string argName,
    bool proceedOnFailure)
{
    return new Validation<T>(item, argName, proceedOnFailure);
}

Creating a chain of tests is accomplished with the Validation class and successive calls to the extension method When.  Validation encapsulates the object you wish to test.  In our examples that’s Employee.  Employee is passed on to When, which executes a test and stores the results in Validation.  After the test, When returns Validation, and this creates the opportunity to execute another extension method.

public class Validation<T>
{
    public T Value { get; set; }
    public string ArgName { get; set; }
    public bool ProceedOnFailure { get; set; }
    public bool IsValid { get; set; }
    public IList<string> ErrorMessages { get; set; }

    public Validation(T value, string argName)
    {
        this.ArgName = argName;
        this.Value = value;
        this.ProceedOnFailure = false;

        //  Set to valid in order to allow for different chaining of validations.
        //  Each validator will set value relative to failure or success.
        this.IsValid = true;
        this.ErrorMessages = new List<string>();
    }

    public Validation(T value, string argName, bool proceedOnFailure)
    {
        this.ArgName = argName;
        this.Value = value;
        this.ProceedOnFailure = proceedOnFailure;

        //  Set to valid in order to allow for different chaining of validations.
        //  Each validator will set value relative to failure or success.
        this.IsValid = true;
        this.ErrorMessages = new List<string>();
    }
}

Signature of When (note that we return Validation):

public static Validation<T> When<T>(this Validation<T> item, string propertyName, Compare compareTo, object propertyValue)

Before we continue on with reviewing dynamic evaluation by the When clause, you could stop here and still have a useful mechanism for creating validation routines.  That is, you could create an extension method for each validation you want to perform.  One example could be:

public static Validation<Employee> LastNameContains(
        this Validation<Employee> employee, string compareValue)
{
    var result = employee.Value.LastName.Enforce("LastName",
                  employee.ProceedOnFailure).Contains(compareValue);

    employee.IsValid = result.IsValid;

    result.ErrorMessages
            .ToList()
            .ForEach(x => employee.ErrorMessages.Add("LastName => " + x));

    return employee;
}

So instead of Enforce().When you will use Enforce().LastNameContains("Smi").  You will have to create a new method for each condition, but this is still quite expressive and would go a long way toward keeping things organized.  This would be more in the spirit of the specification pattern.
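A hedged usage sketch, assuming an Employee instance named employee:

//  Sketch: chain the hand-rolled extension above instead of a generic
//  When clause.  "Employee" is the ArgName used in error messages, and
//  true means proceed on failure rather than throwing.
bool passed = employee.Enforce("Employee", true)
                      .LastNameContains("Smi")
                      .IsValid;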

Goal 2:  Dynamically Evaluating Tests at Runtime

As stated, the “tests” are performed with extension method When.  When accepts the Validation object, along with propertyName and the propertyValue that you are testing.  The enum Compare determines the type of test to perform.  The comparisons are:

public enum Compare
{
    Equal = ExpressionType.Equal,
    NotEqual = ExpressionType.NotEqual,
    LessThan = ExpressionType.LessThan,
    GreaterThan = ExpressionType.GreaterThan,
    LessThanOrEqual = ExpressionType.LessThanOrEqual,
    GreaterThanOrEqual = ExpressionType.GreaterThanOrEqual,
    Contains = ExpressionType.TypeIs + 1,
    In = ExpressionType.TypeIs + 2
}

The magic of When stems from the use of expression trees compiled into delegates.  As defined on MSDN, an expression tree is:

Expression trees represent code in a tree-like data structure, where each node is an expression, for example, a method call or a binary operation such as x < y.

You can compile and run code represented by expression trees. This enables dynamic modification of executable code, the execution of LINQ queries in various databases, and the creation of dynamic queries.

This gives you the ability, at runtime, to dynamically evaluate an expression in the form x == y, also referred to as a binary expression.  And in our case, we wish to evaluate:  Employee.Age == 45.  The delegate takes care of presenting the type of the Expression and its components to the runtime engine.

Marc Gravell explains the difference between a delegate and an Expression as:

  • The delegate version (Func<int,int,bool>) is the belligerent manager; “I need you to give me a way to get from 2 integers to a bool; I don’t care how – when I’m ready, I’ll ask you – and you can tell me the answer”.
  • The expression version (Expr<Func<int,int,bool>>) is the dutiful analyst; “I need you to explain to me – if I gave you 2 integers, how would you go about giving me a bool?”
  • In standard programming, the managerial approach is optimal; the caller already knows how to do the job (i.e. has IL for the purpose). But the analytic approach is more flexible; the analyst reserves the right to simply follow the instructions “as is” (i.e. call Compile().Invoke(…)) – but with understanding comes power. Power to inspect the method followed; report on it; substitute portions; replace it completely with something demonstrably equivalent, etc…

.NET 3.5 allows us to create “evaluators” with Lambda Expressions compiled as delegates that will analyze an object type, the comparisons we can make, and the values we want to compare dynamically. It will then execute that tiny block of code. This is treating our code as a set of objects.  A graph representing this tree looks like so:

Each node on the tree is an Expression. Think of this as a “bucket” to hold a value, a property or an operation.  For the runtime engine to know what the type and parameters of the Expressions are, we create a delegate from the Lambda expression of that node.  In other words, we let the compiler know that we have an expression of type Employee and will evaluate whether Employee.Age is equal to 45.

To accomplish the magic at runtime, you need to set up “buckets” to hold Employee.Age or Employee.FirstName and their values with their respective type for evaluation.  Furthermore we want to be able to evaluate any type of binary expression, so our Expression will make use of generics and a tiny bit of reflection so that we have code that “parses” the object and its properties dynamically.

The Extension Method When:

public static Validation<T> When<T>(this Validation<T> item, string propertyName, Compare compareTo, object propertyValue)

Creating the delegate of the Lambda expression:

//  Determine type of parameter.  i.e. Employee
ParameterExpression parameter = Expression.Parameter(typeof(T), "x");

//  Property on the object  to compare to.  i.e. Employee.Age
Expression property = Expression.Property(parameter, propertyName);

//  The propertyValue to match.  i.e 45
Expression constant = Expression.Constant(propertyValue, propertyValue.GetType());

This takes care of the X and Y of the binary expression, but the next task is to create the comparison as an Expression as well:

Expression equality = CreateComparisonExpression<T>(property, compareTo, constant);

The type of comparison is determined by the enum Compare.  Once these steps are completed we convert the expression into a delegate with the statement:


var executeDelegate = predicate.Compile();
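The snippet doesn’t show where predicate comes from; presumably the equality Expression is wrapped in a lambda first — a sketch using the standard expression tree API:

//  Assumed intermediate step (not shown above): wrap the comparison in a
//  lambda so it can be compiled into a Func<T, bool>.
Expression<Func<T, bool>> predicate =
    Expression.Lambda<Func<T, bool>>(equality, parameter);

var executeDelegate = predicate.Compile();

//  The compiled delegate can then be run against the wrapped object.
bool result = executeDelegate(item.Value);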

If you are worried about performance and the use of reflection, note that a static cache of the compiled delegates will greatly minimize the impact.  Basically you’ll take the performance hit on the first run but not on subsequent runs.
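The caching itself isn’t shown in the post; one way it could look — a sketch, not Janga’s actual code — is a static dictionary of compiled delegates keyed by type, property and comparison:

//  Sketch only -- not Janga's actual code.  Cache compiled delegates so
//  Compile() runs once per key (.NET 3.5 friendly: Dictionary plus a lock).
private static readonly object _sync = new object();
private static readonly Dictionary<string, Delegate> _cache =
    new Dictionary<string, Delegate>();

private static Func<T, bool> GetOrCompile<T>(
    string key, Expression<Func<T, bool>> predicate)
{
    lock (_sync)
    {
        Delegate cached;
        if (!_cache.TryGetValue(key, out cached))
        {
            cached = predicate.Compile();
            _cache.Add(key, cached);
        }
        return (Func<T, bool>)cached;
    }
}

//  A key might look like:  typeof(T).FullName + "." + propertyName + "." + compareTo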

Goal 3:  Error Reporting

For error reporting, Validation requires the name of the object via the property ArgName, and asks that you specify whether you wish to halt when there is an error.  This is accomplished with ProceedOnFailure.  An error log is created when you want all tests to complete despite their respective results.  When you want to halt on the first error and throw an exception, set ProceedOnFailure to false.

Reporting the errors themselves takes place in each When clause, and this is implemented at the end of the When extension method.

//  Report Error handling
if(item.IsValid == false)
{
    if(item.ProceedOnFailure)
    {
        item.ErrorMessages.Add("When " + item.ArgName + "."
            + propertyName + " " + compareTo.ToString()
            + " " + propertyValue + " failed.");
    }
    else
    {
        throw new ArgumentException("When " + item.ArgName + "."
            + propertyName + " " + compareTo.ToString()
            + " " + propertyValue + " failed.");
    }
}

Finally we need to return the Validation object so that we can chain another When operation.

To recap, When is a dynamic filter where at runtime, code is evaluated and created on the fly to analyze and execute a tree representing code as an object.  The expression trees can be applied to any object and evaluate the object’s properties.  Holy snikes!!!  If that doesn’t scare you, how ’bout chaining When’s together by always returning a Validation object so that you can continue to apply another extension method to it.  Twisted Zen mind torture indeed, since we have complicated looking code so that we can write less complicated “business code”.

Here is the source code with unit tests.

Fail Often, Fail Fluently March 27, 2010

Posted by ActiveEngine Sensei in ActiveEngine, Agile, Fluent, Mythology, Problem Solving, software economics.
add a comment

What do you do when your Scenario or User story just sucks? You’ve haggled with your peers over how to implement it, the user has changed tunes and come over to your side of things by realizing that they want two things at the same time, but now that you’ve listened to everybody and re-worked your logic, you’ve just spent 6 or 7 extra hours testing. Now, you doubt that anybody really knows what the original intent of your use case was because there are so many different variants and vagaries from all the meetings, emails, and hallway tests.

Now succumb to the brain death of Sarbanes-Oxley. Where is the traceability in all the discussion threads? How do you prove that you have what you want and that transactions are preserved and yada-yada-yada it just works? Before the project you thought that your team was like these guys:

But in reality you are this crew:

Sensei won’t pretend that there is a cool Zen technique to avoid hard work or failure. Maybe this type of failure of communication is a test of your core skills and your “fluency”. Look at Elvis’ team. They’re practicing. They’ve been over the material again and again and again. That’s three again’s for each of the yada’s. To get to that point where they can adjust to his direction they’ve done much on their own time acquiring skills. Years of practice and adjustment.

Your project is like that path to acquiring a skill set, gaining mastery, being fluent. You have to build for flexibility, for change. You CAN NOT give in to YAGNI just because this week you think you know all the answers. You won’t create a fan base that way. And just because something is written down does not mean that it’s set in stone. Remember Moses and the stone tablets? Even though he could part the waters he still had to go up the hill twice.  Things will go wrong, but if you put in the time your adjustments, while painful after a long haul, won’t be that bad.  6 hours could have been 6 days.  Be thankful you have good partners.

Getting to 11 February 13, 2010

Posted by ActiveEngine Sensei in ActiveEngine, Fluent, New Techniques, Problem Solving, software economics.
Tags: , , , , , , ,
add a comment

In the past Sensei has written insane tomes regarding time travel and how your best intentions really get you nowhere. The story today is about getting to 11, which as Nigel says is one more than ten, putting you over the top.  Consider for a moment the times that you really think you’re like this guy to the right.  Yep, you think you have a Martin Fowler sized audience when you are coding.  The scientists of the future will study my code and say “Here, this is the start of the great insight.  How interesting.”  In reality you are like Spinal Tap, unaware of how absurd you can be.  Code too complex, but it goes to 11!   Most blokes keep it at 10, but when you need to put it over the top you take it up a notch.  That extra notch.  That’s 11.

Here’s a thought – what about 6?  Is it viable?  Can you be flexible by doing a 6, just good enough to not paint yourself into a corner?  “Perfection is a process, viable is an end state.”  As a developer you may not be able to judge what a 6 is.  If you’re in tune with your fan base you’ll know, but that can only come from wisdom born out of great mistakes.   For those of you who study Budo you may recall the concept of short and long and how relative scale can shift your advantage.  Your opponent may have a sword and you only a dagger.  Short and long makes a big difference, but you can alter that equation with a small maneuver.  Once you’re inside and beyond the sword’s cutting range you have the advantage, as your dagger is now long enough to finish the skirmish.  Change the scale.

Years back Sensei was given the task of reducing shelf space utilized by paper by 25%.  The CFO arrived at this goal via the scientific method.  It was scientific since at Sensei’s company if you don’t do what the CFO says, it is axiomatic that you are in deep doo-doo.  Laws of hierarchy and all.  Now imagine rooms filled with documents related to contractors, accounts receivable, human resources, legal contracts, project management, etc.  Yah, DISPARATE is the word.  Not meta data, just a meta-mess.

Now in the best of all worlds where you need to get to 11, you would have time to survey all document types and refine each attribute set before you design your system for document categorization.  This foundation becomes your data model in a database, and many would claim that you should create a data table per document type to house the varying number of attributes. But you have 2 million sheets of paper to scan, and in 12 months re-construction at your offices begins, so you need to be able to walk into a room, quickly categorize all documents, throw them into boxes, scan them, and automatically assign the meta data to each document and store the thing.  Oh, and if you miss a document type or need more attributes you don’t want to go back to your database, add or modify a table, re-gen your data access layer, and add the attribute to your screen all before you adjust your categorization.  And remember, you need to ship out 80 to 100 boxes every 2 weeks so you need to keep the data entry flowing.  Finally, you are told that some projects can have up to 50 different types of documents, but no one is sure to what degree the project documentation is complete, so the number of document types per project is not known and NOBODY HAS THE TIME TO GO THROUGH THE SHELVES AND CREATE DOCUMENT TYPES BEFORE ANY DATA ENTRY IS POSSIBLE!

Play the song, ’cause it adds to the excitement!!

Several key decisions solved this mess, and the solution was simple enough that temps could walk into a room, categorize, and pack documents into boxes for scanning.  The error rate ranged between 1 – 5% per department.  These were not solutions cranked up to 11, they were 6’s:

  • No change to database schema or screens will be made, ever.  A document was modeled with four tables: a base Document table, a Document-Type table, a Document-Attribute table that contained all attributes per Document-Type, and finally a Document-Attribute-Value table where the meta data was stored (a hypothetical sketch of this model appears after this list).  This way each Document-Type could be created with simple data entry.  One data entry screen was developed that could create data controls on the fly per attribute type.
  • Each document shall have a bar-coded coversheet.  Nothing gets scanned without meta data.  EVV-ARR.
  • Import data from existing systems. The meta data for your documents resides in many of your accounting, job cost, and budget systems.  Once document types are known, dumping data from accounts receivable and / or accounts payable allows you to assemble thousands of cover-sheets for all invoices.  Quite literally you create a stack of paper for all possible types of invoices for all accounts, walk into a room, pull documents off the shelf, attach coversheets, and keep the sheets you didn’t use.  Now, since the unused sheets have a bar-code, run these through your bar-code reader and create delete records for what you didn’t use.  Now you have a complete, accurate manifest of what was on the shelf and what was packed away.  When the scanned images come back you can inspect them against the manifest.
  • People can work better with paper.  As stated in the last bullet point, creating all possible types of document coversheets per account or project and printing them allows you to quickly categorize all documents.  With minimal or no data entry and a stack of coversheets, anyone can now go through shelves and associate the coversheets with the appropriate documents.  In other words, the subject matter experts have a tangible, traceable system that they can hand off while supervising someone who does the grunt work.  Not sure where you finished with your categorization?  Just look at your stack of coversheets.  Want to inspect accuracy?  Grab a document and compare it to the categories printed on the coversheet.
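Here’s that hypothetical sketch of the four-table model — class and property names are assumptions for illustration, not the original schema:

//  A minimal sketch of the Document / Document-Type / Document-Attribute /
//  Document-Attribute-Value model described above.  All names are assumed.
public class Document
{
    public int Id { get; set; }
    public int DocumentTypeId { get; set; }
}

public class DocumentType
{
    public int Id { get; set; }
    public string Name { get; set; }              //  e.g. "Invoice"
}

public class DocumentAttribute
{
    public int Id { get; set; }
    public int DocumentTypeId { get; set; }       //  attributes defined per type
    public string Name { get; set; }              //  e.g. "AccountNumber"
}

public class DocumentAttributeValue
{
    public int DocumentId { get; set; }
    public int DocumentAttributeId { get; set; }
    public string Value { get; set; }             //  meta data stored as text
}

Adding a new document type or attribute is then plain data entry against these tables — no schema change, no re-generated data access layer.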

What?  Process management with paper?  That sucks!  No it really doesn’t.  You see, a 6 to you really is an eleventy-one for your user community, who is really busy.  Yep, you have to be smart with your database design by focusing on one key area and that’s it.  The rest of the effort is imports with SSIS packages, CSV files, and printed coversheets.  But it’s easy for the users to use paper, and that keeps a flow going.  2 million sheets of paper scanned in a year.  Maybe a 6 isn’t all that bad after all.

Chang-chang-a-ching-changa-langa-langa: Why Your User Community is Fluent in English and You Are Not the King February 6, 2010

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Business Processes, Coaching, Fluent, Problem Solving, software economics.
Tags: , , , , , ,
1 comment so far

Get ready for the sound of one hand clapping, but first, fire off the song as it gets your head straight.

Some of you want to be Elvis too much.  Sensei’s going to tell you a story so you know what he’s talking about.  You see, users of your apps are waaaay smarter than you, and spend more time in their fields than you ever will.   You need a little love.  It’s called fluent interaction.  Fluent. Interaction.  Lord have mercy.

Process mapping helps, but in the end that takes you to overly scientific abstractions, and while user stories help some, they too stray when you are the sole author.  You in the chair, just the important details from the user, but mostly you.  Should you consider yourself not Mort but an Elvis, you may want to ask yourself which Elvis you want to be:

Kick-ass Karate Elvis
Drug Ridden Elvis Wanna Be

Back to the story.  Last episode, in a spate of productivity and a dose of SQL-NoSQL fever, Sensei created a slim document management solution that can be quickly applied to an existing framework with minimal impact to database schema and code base.  Sitting around the conference room table the comment arose from Annie, the project lead from the Sales group:

“Why do I have to save a commission record first before I can attach a document?  That interrupts my flow.  I want to put in everything that I want and save, period.  No dialog box thingy prompting to save first, come back and do something else.  Why can’t we just do it?”

Long silence.  The sound of one hand clapping.

One of Sensei’s report-to’s jumped in:  “Because in order to associate the document to the commission you have to save that commission first in the database, then take the id from the record and associate the document with it.  This allows you to retrieve it later on.”

Annie:  So.  Can’t that just happen behind the scenes?  If it’s two steps the sales gal won’t do it.  She’s got calls to make.

Sensei drifted out into research land, or as normal people call it, he spaced out for a bit.  NetFlix sprang to mind, iPhone too, where you delete, it does it, but you can bring it back.  Take the confirmation response out of the equation.  Give the user a chance to undo their mess, but don’t get in their way.  It’s fun to pretend to be the King, but what a wake up slap.  The technology was right, but the user wasn’t seeing the benefit because “putting the stuff in was too clunky”.  Sensei went and did what Annie wanted.  Annie thinks it’s great.  Good technology made better by the user, not the King.

Fluent.  Interaction.  Lord have mercy.  You see, Annie’s right, and user stories, UML and other brain death would never capture the essence of her perspective, particularly after she used the software.  Yeah, soft deletes are great in theory, but you are not thinking like a user.  In order to be a better King, you gotta give the concert they want to hear.  You have to know that the fans have created you, have shaped your persona.  You have to know your fans, almost be them.

Elvis had a comeback concert in 1968, but it almost didn’t happen as there was a huge fight with NBC.  The network insisted that the show would be like a Bing Crosby special given that the air date was during the Christmas holiday season.  Elvis wanted an intimate environment where he could perform up close, live with his fans.  He thrived off of close contact with his fans.  Know your audience.  Elvis was right, and it helped re-launch his singing career and revive his legend.  It was one of his best performances.  For the fans.

You need to listen to your users.  Spend the time to hone your craft, but work even harder to make them fans.  What do they need?  Is the concert for them or for you?  Are you learning just to be smart or for their benefit?  Fluent solutions require interaction with the fans.  Thank you.  Thank you very much.
