
jsRazor – New Way of Rendering Json That Is Pure Genius May 9, 2013

Posted by ActiveEngine Sensei in ActiveEngine, Ajax, jQuery, JSON.Net, KnockoutJS, Open Source, Problem Solving.

Click this link immediately, and you’ll see wonderful things!  Sensei has said many times that simple is better, and rgubarenko of MakeItSoft has a brand spanking new template engine that is so simple it makes you speechless.  His premise:  every possible rendering task can be accomplished as a random combination of two functional primitives:

repeat – repeats fragment of HTML code for each object in array
toggle – shows or hides fragment of HTML depending on boolean flag

Here is an example taken from the github site.  To get the output displayed below:

[Image: jsRazor example output]

from a Json data set that looks like this:

var data_Themes = [ { name: "Dreaming Theme", colors: ["#2A1910", "#9E7064", "#B0967C", "#E7435E", "#6D4F3F"] },

{ name: "Moth Theme", colors: ["#30382D", "#565539", "#78765F", "#403F2B"] },

{ name: "5 Dark Theme", colors: ["#000000", "#280705", "#2E0500", "#3B0000", "#3C1100"] },

{ name: "Blue Volcano Theme", colors: ["#5077FF", "#8A84FF", "#81C1FF"] } ];

You will have a template that looks like this:

<div id="example">
  <ul>
    <!--repeatfrom:themes-->
    <li> 
      <div class="name">{name} ({CountColors} colors)</div>
      <!--repeatfrom:colors-->
      <div class="wrap">
        <div class="color" style="background-color:{item};">
          <!--showfrom:dark-->
          <span style="color:white">{item}</span>
          <!--showstop:dark-->
          <!--showfrom:light-->
          <span style="color:black">{item}</span>
          <!--showstop:light-->
        </div>
        <div class="rgb">({R},{G},{B})</div>
      </div>
      <!--repeatstop:colors-->
    </li>  
    <!--repeatstop:themes-->
  </ul>
</div>

The controller that creates the final result is as follows:

// get initial template
var tmp = document.getElementById("example").innerHTML; 
// repeat theme objects (pass array of themes to repeat functional)
tmp = $.jsrazor.repeat(tmp, "themes", data_Themes, function (tmp, idx, item)
{
  // repeat inner color objects (pass array of colors of the current theme item)
  tmp = $.jsrazor.repeat(tmp, "colors", item.colors, function (tmp, idx, item)
  {
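    // hex2rgb is a helper from the full example (not shown in this post); it converts "#RRGGBB" to an object with r, g, b and a mid brightness value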
    // use toggle to show dark or light color text (to contrast with the background)
    tmp = $.jsrazor.toggle(tmp, "dark", hex2rgb(item).mid <= 128);
    tmp = $.jsrazor.toggle(tmp, "light", hex2rgb(item).mid > 128);
    // output RGB representation of the color as custom value
    tmp = tmp
      .replace("{R}", hex2rgb(item).r)  // red
      .replace("{G}", hex2rgb(item).g)  // green
      .replace("{B}", hex2rgb(item).b); // blue
    // return processed template for current INNER item
    return tmp;
  });
  // color counter is not a part of JSON, so we output it as custom value
  tmp = tmp.replace("{CountColors}", item.colors.length);
  // return processed template for current item
  return tmp;
});
// put processed output back
document.getElementById("example").innerHTML = tmp;

This looks very promising.  In fact, this could be a very cool way of writing reports in jQuery!  Think about it – with a tool like jLinq for filtering data sets and jsRazor, you could replace SQL Server Reports.


ApprovaFlow: A Quick Primer On RavenDB December 15, 2011

Posted by ActiveEngine Sensei in .Net, .Net Development, ActiveEngine, Approvaflow, ASP.Net, Business Processes, JSON.Net, Open Source, RavenDB, Workflow.

This post focuses on getting started with RavenDB, so we’ll set aside our focus on workflows for a bit.  It’s included in the ApprovaFlow series as it is an important part of the workflow framework we’re building.  To follow along you might want to get the source code.

RavenDB is a document database that provides a flexible means for storing object graphs.  As you’ll see, a document database presents you with a different set of challenges than a traditional relational database.

The storage “unit” in RavenDB is a schema-less JSON document.  Because you are working with documents you now have the flexibility to define documents differently; that is, you can support variations to your data without having to re-craft your data model each time you want to add a new property to a class.  You can adopt a “star” pattern for SQL as depicted here, but querying can become difficult.  Raven excels in this situation and one such sweet spot is:

Dynamic Entities, such as user-customizable entities, entities with a large number of optional fields, etc. – Raven’s schema free nature means that you don’t have to fight a relational model to implement it.
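
To make the schema-less storage concrete, here is a sketch of what a stored Person document might look like (the fields mirror the Person class used later in this post):

{
  "FirstName": "Bob",
  "LastName": "Smith",
  "DepartmentId": 139
}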

Installing and Running RavenDB

The compiled binaries are easy to install.  Download the latest build and extract the files to a share.  Note that in order to run the console you are required to install Silverlight.  To start the server, navigate to the folder where you extracted the files and double click “Start.cmd”.  You will see a screen similar to this one once the server is up and running:

The console will launch itself and will resemble this:

How To Start Developing

In Visual Studio, reference Raven with Raven.Client.Lightweight.  For CRUD operations and querying this will be all that you will need.

First you will need to connect to the document store.  It is recommended that you do this once per application.  That is accomplished with

var documentStore = new DocumentStore {Url = "http://localhost:8080"};
documentStore.Initialize();

Procedures are carried out using the Unit of Work pattern, and in general you will be using this type of block:

using(var session = documentStore.OpenSession())
{
   //... Do some work
}

RavenDB will work with Plain Old C# Objects and only requires an Id property of type string.  An identity key is generated for Id during the session.  If we were to create multiple steps we would have identities created in succession.  A full discussion of the alternatives to the Id property is here.

Creating a document from your POCOs’ object graphs is very straightforward:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Id { get; set; }
    public int DepartmentId { get; set; }
    // ...
}

var person = new Person();

using(var session = documentStore.OpenSession())
{
   session.Store(person);
   session.SaveChanges();
}
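
To see the succession of identity keys, here is a quick sketch (the literal Id values are illustrative, assuming the default key generation):

using(var session = documentStore.OpenSession())
{
   var first = new Person();
   var second = new Person();
   session.Store(first);    //  e.g. Id becomes "Person/1"
   session.Store(second);   //  e.g. Id becomes "Person/2"
   session.SaveChanges();
}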

Fetching a document can be accomplished in two ways:  by Id or with a LINQ query.  Here’s how to get a document by Id:

string personId = "Person/1";  //  Raven will have auto-generated a value for us.
using(var session = documentStore.OpenSession())
{
   var fetchedPerson = session.Load<Person>(personId);
   //Do some more work
}

You’ll note that there is no casting or conversion required as Raven will determine the object type and populate the properties for you.

There are naturally cases where you want to query for documents based on attributes other than the Id. Best practice dictates that we should create static indexes on our documents, as these will offer the best performance. RavenDB also has a dynamic index feature that learns from queries fired at the server, and over time these dynamic indexes are memorialized.
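
For a sense of what a static index looks like with the client API, here is a minimal sketch (the index class and its name are assumptions for the Person class shown above):

public class People_ByDepartmentId : AbstractIndexCreationTask<Person>
{
    public People_ByDepartmentId()
    {
        //  Map each document to the fields we want to query on
        Map = people => from person in people
                        select new { person.DepartmentId };
    }
}

//  Register all indexes in the assembly with the server, once at startup
IndexCreation.CreateIndexes(typeof(People_ByDepartmentId).Assembly, documentStore);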

For your first bout with RavenDB you can simply query the documents with LINQ.   The test code takes advantage of the dynamic feature.  Later you will want to create indexes based on how you will most likely retrieve the documents.  This is different from a traditional RDBMS solution, where the data is optimized for querying.  A document database is NOT.

Continuing with our example of Person documents we would use:

int departmentId = 139;

using(var session = documentStore.OpenSession())
{
   var people = session.Query<Person>()
                          .Where(x => x.DepartmentId == departmentId)
                          .ToList();
}

In the source code for this post there are more examples of querying.

Debugging, Troubleshooting and Dealing with Frustration

Given that this is something new and an open source project you may find yourself searching for help and more guidelines.  One thing to avail yourself of while troubleshooting is the fact that RavenDB has a REST interface, and you can validate your assumptions – or worse, confirm your errors – by using curl from the command line.  For example, to create a document via http you issue:

curl -X POST http://localhost:8080/docs -d "{ FirstName: 'Bob', LastName: 'Smith', Address: '5 Elm St' }"

Each action that takes place on the RavenDB server is displayed in a log on the server console app.  Sensei had to resort to this technique when troubleshooting some issues when he first started.  This StackOverflow question details the travails.

Another area that threw Sensei for a loop at first was the way RavenDB writes and maintains indexes.  In short, indexing is a background process, and Raven is designed to be “eventually consistent”.  That means there can be a latency between when a change is submitted, saved, and indexed in the repository so that it can be fetched via queries.  When running tests from NUnit this code did not operate as expected, yet the console reported that the document was created:

session.Store(teamMember);

int posttestCount = session.Query<TeamMember>()
                .Count();

According to the documentation you can overcome this inconsistency by declaring that you are willing to wait until RavenDB has completed its current write operation.   This code will get you the expected results:

int posttestCount = session.Query<TeamMember>()
              .Customize(x => x.WaitForNonStaleResults())
              .Count();

Depending on the number of tests you write you may wish to run RavenDB in Embedded mode for faster results.  This might prove useful for automated testing and builds.  The source code provided in this post does NOT use embedded mode; rather, you need to have your server running, as this gives you the opportunity to inspect documents and acclimate yourself to the database.
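
If you do experiment with embedded mode, a minimal sketch looks like this (assuming a reference to the Raven.Client.Embedded assembly and a local data directory):

var documentStore = new EmbeddableDocumentStore { DataDirectory = "Data" };
documentStore.Initialize();

//  Sessions are opened exactly as with the server-based DocumentStore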

There is much more that you can do with RavenDB, such as creating indexes that span multiple documents and assigning security to individual documents.  This primer should be enough to get you started.  Next post we’ll see how RavenDB will fit into the ApprovaFlow framework.  Grab the source, play around and get ready for the next exciting episode.

 

Simple Workflows With ApprovaFlow and Stateless April 2, 2011

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Approvaflow, ASP.Net, C#, JSON.Net, New Techniques, Stateless.

This is the second in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net. Source code for this post is here.

Last time we laid out our goals for a simple workflow engine, ApprovaFlow, with the following objectives:
• Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties.
• Allow the state of a workflow to be persisted as an integer or string. Quickly fetch the state of a workflow.
• Create pre- and post-processing methods that can enforce rules or carry out actions when completing a workflow task.
• Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.
• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.
• Use one .aspx page to process user input for any type of workflow.
• Provide the ability to roll your own customizations to the front end or back end of your application.

The fulcrum point of all we have set out to do with ApprovaFlow is a state machine that will present a state and accept answers supplied by the users. One of Sensei’s misgivings about Windows Workflow is that it is such a behemoth when all you want to implement is a state machine.

Stateless, created by Nicholas Blumhardt, is a shining example of adhering to the rule of “necessary and sufficient”. By using generics, Stateless allows you to create a state machine where the State and Trigger can be represented by an integer, string, double, or enum – and this sounds like it fulfills our goal:

• Allow the state of a workflow to be persisted as an integer or string. Quickly fetch the state of a workflow.
Stateless constructs a state machine with the following syntax:

var statemachine =
       new StateMachine<TState, TTrigger>(TState currentState);

For our discussion we will create a state machine that will process a request for promotion workflow. We’ll use:

var statemachine =
       new StateMachine<string, string>(string currentState);

This could very easily take the form of <int, int> and will depend on your preferences. Regardless of your choice, if the current state is represented by a primitive like int or string, you can just fetch that from a database or a repository and now your state machine is loaded with the current state. Contrast that with WF, where you have multiple projects and confusing nomenclature to learn. Stateless just stays out of our way.
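
As a minimal sketch, hydrating the machine from persisted state might look like this (the repository and workflowId are hypothetical):

//  Fetch the persisted state, e.g. "ManagerReview", from your own store
string currentState = workflowRepository.GetState(workflowId);

//  The state machine resumes exactly where the workflow left off
var statemachine = new StateMachine<string, string>(currentState);
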
Let’s lay out our request for promotion workflow. Here is our state machine represented in English:

Step: Request Promotion Form
  Answer => Complete
  Next Step => Manager Review

Step: Manager Review
  Answer => Deny
  Next Step => Promotion Denied
  Answer => Request Info
  Next Step => Request Promotion Form
  Answer => Approve
  Next Step => Vice President Approve

Step: Vice President Approve
  Answer => Deny
  Next Step => Promotion Denied
  Answer => Manager Justify
  Next Step => Manager Review
  Answer => Approve
  Next Step => Promoted

Step: Promotion Denied
Step: Promoted

Remember the goal “Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties”? We are very close to achieving that goal. If we substitute “Step” with “State” and “Answer” with “Trigger”, then we have a model that matches how Stateless configures a state machine:

var statemachine = new StateMachine<string, string>(startState);

//  Request Promo form states
statemachine.Configure("RequestPromotionForm")
               .Permit("Complete", "ManagerReview");

//  Manager Review states
statemachine.Configure("ManagerReview")
               .Permit("RequestInfo", "RequestPromotionForm")
               .Permit("Deny", "PromotionDenied")
               .Permit("Approve", "VicePresidentApprove");

Clearly you will not show the code to your business partners or end users, but a simple chart like this should not make anyone’s eyes glaze over:

State: Request Promotion Form
  Trigger => Complete
  Target State => Manager Review

Before we move on you may want to study the test in the file SimpleStateless.cs. Here configuring the state machine and advancing from state to state is laid out for you:

//  Request Promo form states
statemachine.Configure("RequestPromotionForm")
                    .Permit("Complete", "ManagerReview");

//  Manager Review states
statemachine.Configure("ManagerReview")
                     .Permit("RequestInfo", "RequestPromotionForm")
                     .Permit("Deny", "PromotionDenied")
                     .Permit("Approve", "VicePresidentApprove");

//  Vice President state configuration
statemachine.Configure("VicePresidentApprove")
                      .Permit("ManagerJustify", "ManagerReview")
                      .Permit("Deny", "PromotionDenied")
                      .Permit("Approve", "Promoted");

//  Tests
Assert.AreEqual(startState, statemachine.State);

//  Move to next state
statemachine.Fire("Complete");
Assert.IsTrue(statemachine.IsInState("ManagerReview"));

statemachine.Fire("Deny");
Assert.IsTrue(statemachine.IsInState("PromotionDenied"));

The next question that comes to mind is how to represent the various States, Triggers and State configurations as data. Our mission on this project is to adhere to simplicity. One way to represent a Stateless state machine is with JSON:

{WorkflowType : "RequestPromotion",
  States : [{Name : "RequestPromotionForm", DisplayName : "Request Promotion Form"},
    {Name : "ManagerReview", DisplayName : "Manager Review"},
    {Name : "VicePresidentApprove", DisplayName : "Vice President Approve"},
    {Name : "PromotionDenied", DisplayName : "Promotion Denied"},
    {Name : "Promoted", DisplayName : "Promoted"}
    ],
  Triggers : [{Name : "Complete", DisplayName : "Complete"},
     {Name : "Approve", DisplayName : "Approve"},
     {Name : "RequestInfo", DisplayName : "Request Info"},
     {Name : "ManagerJustify", DisplayName : "Manager Justify"},
     {Name : "Deny", DisplayName : "Deny"}
  ],
StateConfigs : [{State : "RequestPromotionForm", Trigger : "Complete", TargetState : "ManagerReview"},
     {State : "ManagerReview", Trigger : "RequestInfo", TargetState : "RequestPromotionForm"},
     {State : "ManagerReview", Trigger : "Deny", TargetState : "PromotionDenied"},
     {State : "ManagerReview", Trigger : "Approve", TargetState : "VicePresidentApprove"},
     {State : "VicePresidentApprove", Trigger : "ManagerJustify", TargetState : "ManagerApprove"},
     {State : "VicePresidentApprove", Trigger : "Deny", TargetState : "PromotionDenied"},
     {State : "VicePresidentApprove", Trigger : "Approve", TargetState : "Promoted"}
  ]
}

As you can see we are storing all States and all Triggers with their display names. This will allow you some flexibility with UI screens and reports. Each rule for transitioning a state to another is stored in the StateConfigs node. Here we are simply representing our chart that we created above as JSON.

Since we have a standard way of representing a workflow with JSON, de-serializing this definition to objects is straightforward. Here are the corresponding classes that define a state machine:

public class WorkflowDefinition
{
        public string WorkflowType { get; set; }
        public List<State> States { get; set; }
        public List<Trigger> Triggers { get; set; }
        public List<StateConfig> StateConfigs { get; set; }

        public WorkflowDefinition() { }
}

public class State
{
        public string Name { get; set; }
        public string DisplayName { get; set; }
}

public class Trigger
{
        public string Name { get; set; }
        public string DisplayName { get; set; }

        public Trigger() { }
}
public class StateConfig
{
        public string State { get; set; }
        public string Trigger { get; set; }
        public string TargetState { get; set; }

        public StateConfig() { }
}

We’ll close out this post with an example that will de-serialize our state machine definition and allow us to respond to the triggers that we supply. Basically it will be a rudimentary workflow. RequestPromotion.cs will be the workflow processor. The method Configure is where we will perform the de-serialization, and the process is quite straightforward:

  1. Deserialize the States
  2. Deserialize the Triggers
  3. Deserialize the StateConfigs that contain the transitions from state to state
  4. For every StateConfig, configure the state machine.

Here’s the code:

public void Configure()
{
    Enforce.That((string.IsNullOrEmpty(source) == false),
                            "RequestPromotion.Configure - source is null");

    string json = GetJson(source);

    var workflowDefinition = JsonConvert.DeserializeObject<WorkflowDefinition>(json);

    Enforce.That((string.IsNullOrEmpty(startState) == false),
                            "RequestPromotion.Configure - startStep is null");

    this.stateMachine = new StateMachine<string, string>(startState);

    //  Get a distinct list of states with a trigger from state configuration
    //  "State => Trigger => TargetState
    var states = workflowDefinition.StateConfigs.AsQueryable()
                                    .Select(x => x.State)
                                    .Distinct()
                                    .ToList();

    //  Assign triggers to states
    states.ForEach(state =>
    {
        var triggers = workflowDefinition.StateConfigs.AsQueryable()
                                   .Where(config => config.State == state)
                                   .Select(config => new { Trigger = config.Trigger, TargetState = config.TargetState })
                                   .ToList();

        triggers.ForEach(trig =>
        {
            this.stateMachine.Configure(state).Permit(trig.Trigger, trig.TargetState);
        });
    });
}

And we advance the workflow with this method:

public void ProgressToNextState(string trigger)
{
    Enforce.That((string.IsNullOrEmpty(trigger) == false),
                            "RequestPromotion.ProgressToNextState - trigger is null");

    this.stateMachine.Fire(trigger);
}

The class RequestPromotionTests.cs illustrates how this works.
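
In spirit, the tests exercise the workflow like this (a sketch; the constructor arguments are assumptions):

//  Load the JSON workflow definition and configure the state machine
var workflow = new RequestPromotion("RequestPromotion.txt", "RequestPromotionForm");
workflow.Configure();

//  Submitting the form advances the workflow per the StateConfigs
workflow.ProgressToNextState("Complete");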

We have seen how we can fulfill the objectives laid out for ApprovaFlow and have covered a significant part of the functionality that Stateless will provide for our workflow engine.   Here is the source code.

ApprovaFlow – A Proof of Concept March 25, 2011

Posted by ActiveEngine Sensei in .Net, .Net Development, ActiveEngine, Approvaflow, ASP.Net, C#, JSON.Net, LINQ, New Techniques, Open Source.

Like Tolkien, Sensei wants to create the landscapes, cultures and languages before he writes his next epic. You can be the judge whether the work is a series of sketches and notes like the Silmarillion or a cohesive, compelling story that you want to read again and again. As a bonus Sensei will deliver working software that hopefully will be of use to you.  (Photo credit – utnapistim).

The epic will be called ApprovaFlow. ApprovaFlow is a framework / process / methodology that allows you to create workflow applications that are easy to deploy and are configurable. With ApprovaFlow Sensei hopes to demonstrate how to readily incorporate the inevitable changes that your users will ask of you. Deliver changes effortlessly and without groans. Cast off the chains of inconvenient builds and focus on creating solutions that stay out of the users’ way.

Ok. Management wants bullet points so here are our goals for ApprovaFlow:

• Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties.
• Allow the state of a workflow to be persisted as an integer or string. Quickly fetch the state of a workflow.
• Create pre- and post-processing methods that can enforce rules or carry out actions when completing a workflow task.
• Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.
• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.
• Use one .aspx page to process user input for any type of workflow.
• Provide the ability to roll your own customizations to the front end or back end of your application.

There it is. These goals will probably take us a good amount of time to review and implement. Is it worth it? Hell yeah. We’ll end up with one simple project instead of a bloated framework where it takes forever to find anything. A nice by-product will be that you can spend more time thinking about how to solve your users’ problems rather than trying to figure out a monstrous framework that requires a huge investment of energy and time learning how to get simple things done.

Dynamically Select Columns with Server-Side Paging and Datatables.Net January 14, 2011

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Ajax, ASP.Net, DataTables.Net, jQuery, JSON.Net, New Techniques, Problem Solving.

Source code has been yet again updated!! Read about the changes in DataTablePager Now Has Multi-Column Sort Capability For DataTables.Net.  If you are new to DataTables.Net and Sensei’s paging solution and want a detailed study of how it works, work through this post first, then get the latest edition.  Note, code links in this post are to the first version.

The last episode of server-side paging with DataTablePager for DataTables.Net reviewed the basics of a server-side solution that paged records and returned results in the multiples specified by DataTables.Net.  You will want to have read that post before proceeding here.  The older version of the source is included in that post as well, which will help get you acclimated.  The following capabilities were reviewed:

  • The solution used generics and could work with any IQueryable collection.  In short, any of the classes from your domain solution could be used.
  • Filtering capability across all properties was provided.  This included partial word matching, regardless of case.
  • Ordering of the result set was in response to the column clicked on the client’s DataTables grid.

DataTablePager Enhancements

This past month Sensei has added new capabilities to the DataTablePager class that makes it an even better fit for use with DataTables.Net.  The new features are:

  • Dynamically select the columns from the properties of your class based on the column definitions supplied by DataTables.Net!!!
  • Exclude columns from sort or search based on configuration by DataTables.Net
  • Mix columns from your class properties with client-side only column definitions; e.g. create a column with <a href>’s that do not interfere with filtering, sorting, or other processing.

Before we jump into the nitty-gritty details let’s review how DataTables.Net allows you to control a column’s interaction with a data grid.  Grab the new source code to best follow along.

DataTables.Net Column Definition

You would think that there would be quite a few steps to keep your server-side data paging solution in concert with a client-side implementation, and that would mean customization for each page.   DataTables.Net provides you with fine control over what your columns will do once displayed in a data grid.  Great, but does that mean a lot of configuration on the server side of the equation?  As we’ll soon see, no, it doesn’t.  The configuration you do on the client is all that you need to do.

The structure aoColumnDefs is the convention we use for column configuration.  From the DataTables.Net site:

aoColumnDefs: This array allows you to target a specific column, multiple columns, or all columns, using the aTargets property of each object in the array (please note that aoColumnDefs was introduced in DataTables 1.7). This allows great flexibility when creating tables, as the aoColumnDefs arrays can be of any length, targeting the columns you specifically want. The aTargets property is an array to target one of many columns and each element in it can be:

  • a string – class name will be matched on the TH for the column
  • 0 or a positive integer – column index counting from the left
  • a negative integer – column index counting from the right
  • the string “_all” – all columns (i.e. assign a default)

So in order to control whether a column is included in sorting, you configure it in this manner:

/* Using aoColumnDefs */
$(document).ready(function() {
	$('#example').dataTable( {
		"aoColumnDefs": [
			{ "bSortable": false, "aTargets": [ 0 ] }
		] } );
} );

In other words we are defining that the first column – column 0 – will not be included in the sorting operations.  When you review the column options you’ll see that you can apply css classes to multiple columns, include a column in filtering, supply custom rendering of a column, and much more.

In the example that we’ll use for the rest of the post we are going to provide the following capability for a data grid:

  1. The first column – column 0 – will be an action column with a hyperlink, and we will want to exclude it from sort and filtering functions.
  2. Only display a subset of the properties from a class.  Each of these columns should be sortable and filterable.
  3. Maintain the ability to chunk the result set in the multiples specified by DataTables.Net; that is, multiples of 10, 50, and 100.

Here is the configuration from the aspx page SpecifyColumns.aspx:

"aoColumnDefs" : [
   {"fnRender" : function(oObj){
      return "<a href="&quot;center.aspx?centerid=&quot;">Edit</a>";
   },
     "bSortable" : false,
     "aTargets" : [0]},
   {"sName" : "Name",
     "bSearchable" : true,
     "aTargets": [1]},
   {"sName" : "Agent",
    "bSearchable" : true,
    "bSortable" : true,
    "aTargets" : [2]
   },
   {"sName" : "Center", "aTargets": [3]},
   {"fnRender" : function(oObj){
            return "2nd Action List";
         },
     "bSortable" : false,
     "aTargets" : [4]},
   {"sName" : "CenterId", "bVisible" : false, "aTargets" : [5]},
   {"sName" : "DealAmount", "aTargets" : [6]}
]

  1. Column 0 is our custom column – do not sort or search on this content.  Look at oObj.aData[5] – this is the hidden CenterId column that we’ll return but not display.  It’s referred to by its position in the data array that DataTables.Net expects back from the server.
  2. Columns 1 – 3 are data and can be sorted.  Note the use of “sName”.  This will be included in a named column list that corresponds to the source property from our class.  This will be very important later on for us, as it allows us to query our data and return it in any order to DataTables.Net.  DataTables will figure out what to do with it before it renders.
  3. Threw in another custom column.  Again, no sort or search, but we’ll see how this affects the server side implementation later on.  Hint – there’s no sName used here.
  4. Another data column.

To recap, we want to be able to define what data we need to display and how we want to interact with that data by only instructing DataTables.Net what to do.  We’re going to be lazy, and not do anything else – the class DataTablePager will respond to the instructions that DataTables.Net supplies, and that’s it.  We’ll review how to do this next.  Sensei thinks you’ll really dig it.

DataTablePager Class Handles your Client Side Requests

If you recall, DataTables.Net communicates to the server via the structure aoData.  Here is the summary of the parameters.  One additional parameter that we’ll need to parse is the sColumns parameter, and it will contain the names and order of the columns that DataTables.Net is rendering.  For our example, we’ll get the following list of columns if we were to debug on the server:

,Name,Agent,Center,,CenterId,DealAmount

These are all the columns we named with sName, plus placeholders for those custom columns that are not found in our class.  This has several implications.  For one, it means that we can no longer simply use reflection to get at our properties, filter them and send them back down to the client.  The client is now expecting an array where each row will have 7 items, 5 of which are named and two of which are placeholders for items that the client wants to reserve for itself.  Hence the convention of passing an empty item in the delimited string as shown above.
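
A minimal sketch of parsing that parameter (variable names are assumptions):

//  Split sColumns, preserving the empty entries that mark client-only columns
string sColumns = ",Name,Agent,Center,,CenterId,DealAmount";
List<string> columns = sColumns.Split(',').ToList();   //  7 entries; indexes 0 and 4 are blank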

It will also mean that we’ll have to separate the columns that we can filter or sort.  Again this is the reason for leaving the custom column names blank.  In other words, we’ll have to keep track of the items that we can search and sort.  We’ll do this with a class called SearchAndSortable:

public class SearchAndSortable
{
    public string Name { get; set; }
    public int ColumnIndex { get; set; }
    public bool IsSearchable { get; set; }
    public bool IsSortable { get; set; }
    public PropertyInfo Property { get; set; }

    public SearchAndSortable(string name, int columnIndex, bool isSearchable, bool isSortable)
    {
        this.Name = name;
        this.ColumnIndex = columnIndex;
        this.IsSearchable = isSearchable;
        this.IsSortable = isSortable;
    }

    public SearchAndSortable() : this(string.Empty, 0, true, true) { }
}

This will summarize what we’re doing with our properties.   The property ColumnIndex will record the position in sColumns where our column occurs.  Since we’ll need access to the actual properties themselves we’ll store these in the SearchAndSortable as well so that we can reduce the number of calls that use reflection.  DataTablePager uses a List of SearchAndSortable objects to track what’s going on.  We fill this list in the method PrepAOData():

//  What column is searchable and / or sortable
//  What properties from T are identified by the columns
var properties = typeof(T).GetProperties();
int i = 0;

//  Search and store all properties from T
this.columns.ForEach(col =>
{
    if (string.IsNullOrEmpty(col) == false)
    {
        var searchable = new SearchAndSortable(col, i, false, false);
        var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
                                   .ToList();
        searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;
        searchable.Property = properties.Where(x => x.Name == col)
                                        .SingleOrDefault();

        searchAndSortables.Add(searchable);
    }

    i++;
});

//  Sort
searchAndSortables.ForEach(sortable =>
{
    var sort = aoDataList.Where(x => x.Name == BSORTABLE + sortable.ColumnIndex.ToString())
                         .ToList();
    sortable.IsSortable = (sort[0].Value == "False") ? false : true;
});

We’ll get the properties from our class. Next we’ll traverse the columns and match the property names with the names of the columns. When there is a match, we need to query aoData and get the column search and sort definitions based on the ordinal position of the column in the sColumns variable. The DataTables.Net convention for communicating this is of the form:

bSortable_ + column index => “bSortable_1” or “bSearchable_2”

We take care of that with this line of code:

var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
                           .ToList();
searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;

Now we go through the list of properties again, but this time determine if we should sort any of the columns. That happens in the section //Sort. In the end we have a list of properties that corresponds with the columns DataTables.Net has requested, and we have defined whether each property can be searched (filtered) or sorted.

For filtering, recall that DataTablePager uses the method GenericSearchFilter().  The only alteration here is that we will only add the properties to our query that are defined as searchable:

//  Create a list of searchable properties
var filterProperties = this.searchAndSortables.Where(x => x.IsSearchable)
                                              .Select(x => x.Property)
                                              .ToList();

The rest of the method is unaltered from the prior version. Pretty cool!! Again, we’ll only get the properties that we declared as legal for filtering. We’ve also eliminated any chance of mixing a custom column in with our properties because we did not supply an sName in our configuration.

The method ApplySort() required one change. On the initial load of DataTables.Net, the client will pass up a request to sort on column 0 even though you may have excluded it. When that is the case, we’ll just look for the first column that is sortable and order by that column.

//  Initial display will set order to first column - column 0
//  When column 0 is not sortable, find first column that is
var sortable = this.searchAndSortables.Where(x => x.ColumnIndex == firstColumn)
                                      .SingleOrDefault();
if(sortable == null)
{
   sortable = this.searchAndSortables.First(x => x.IsSortable);
}

return records.OrderBy(sortable.Name, sortDirection, true);

After we have filtered and sorted the data set we can finally select only those properties that we want to send to the client.  Recall that we have parsed the variable sColumns that tells us what columns are expected.  We’ll pass these names on to the extension method PropertiesToList().  This method will only serialize a property if the column is included, and since we have already pared down our data set as a result of our query and paging, there is very little performance impact.  Here is the new PropertiesToList method:

public static List<string> PropertiesToList<T>(this T obj, List<string> propertyNames)
{
   var propertyList = new List<string>();
   var properties = typeof(T).GetProperties();
   var props = new List<PropertyInfo>();

   //  Find all "" in propertyNames and insert empty value into list at
   //  corresponding position
   var blankIndexes = new List<NameValuePair>();
   int i = 0;

   //  Select and order filterProperties.  Record index position where there is
   //  no property
   propertyNames.ForEach(name =>
   {
      var property = properties.Where(prop => prop.Name == name.Trim())
         .SingleOrDefault();

      if(property == null)
      {
         blankIndexes.Add(new NameValuePair(name, i));
      }
      else
      {
         props.Add(properties.Where(prop => prop.Name == name.Trim())
                                    .SingleOrDefault());
      }
      i++;
   });

   propertyList = props.Select(prop => (prop.GetValue(obj, new object[0]) ?? string.Empty).ToString())
                                        .ToList();

   //  Add "" to List as client expects blank value in array
   blankIndexes.ForEach(index => {
      propertyList.Insert(index.Value, string.Empty);
   });

   return propertyList;
}

You might ask why not just pass in the list of SearchAndSortable objects and avoid using reflection again. You could, but remember at this point we have reduced the number of items to the page size of 10, 50 or 100 rows, so the reflection calls will not have that great an impact. Also you should consider whether you want to simply have a function that will select only those properties that you need. Using SearchAndSortable would narrow the scope of utility, as you can use this method in areas other than prepping data for DataTables.Net.

Now It’s Your Turn

That’s it.  Play with the page named SpecifyColumns.aspx.  You should be able to add and remove columns in the DataTables.Net configuration and they will just work.  This will mean, however, that you’ll always have to define your columns in your aspx page.  But since we worked really hard the first time around, DataTablePager will still be able to create paged data sets for any class in your domain.

Source code is here.  Enjoy.

Prototyping With AnonymousTypes, CS-Script and JSON.Net – DataBuilder Part 3 November 14, 2010

Posted by ActiveEngine Sensei in .Net, ASP.Net, C#, CS-Script, DataBuilder, JSON.Net, New Techniques, Open Source.

Sensei hit his head in the shower this morning and instead of the flux-capacitor he saw the following code that could be used for prototyping:

var jsonDataSet = AnonFactory.CreateListOfSize(10, new { FirstName = "",
                                 LastName = "",
                                 Age = 0,
                                 Salary = 45000})
                              .WhereAll()
                              .Have("FirstName", "Jim")
                              .BuildList()
                              .ToJSON();

If you have been following the series on DataBuilder, a utility that dynamically creates test data as JSON, you’ll recall that one of the highlights was avoiding the need to create an additional Visual Studio project just so you could create test data.  DataBuilder can import your assemblies and with the use of NBuilder it creates test data for you.  You may wish to read those posts for some background.

But consider the scenarios where you do not have an application domain or a .Net assembly to test.  Should you stop?  For prototyping do you really need to compile, change, compile, change, show-to-your-user, repeat ad nauseam?

Here’s a new concept:  build the data sets that you use in prototyping with Javascript before you break out Visual Studio.  If you can quickly build a tool set to communicate with your business users you’ll be much further ahead by avoiding the back-end / client-side impedance mismatch.  After all, many times the business user already has the concept hidden upstairs in their head, so why not help get that vision out in the open earlier and save yourself the rework?

So, to be able to write the statement that Sensei saw in his vision we need to achieve the following goals:

  • Create an anonymous type as template for building a list
  • Create a fluent interface for chaining clauses similar to NBuilder
  • Serialize the list of anonymous types to JSON
  • Run this process in DataBuilder – in other words, go dynamic!

You may want to download the source and follow along.

AnonymousType as a Template

It turns out that Sensei has a tool in his bag of tricks, namely the Persistent Anonymous type.  As you are aware, anonymous types in C# have a limited scope, but the Persistent Anonymous type that Sensei discussed allows you to create a structure that mimics a standard anonymous type while persisting that structure beyond the scope of its creation.  The AnonymousType’s creator, Hugo Benocci, created something really great.  Here’s what you can do from the get-go:

var anonType = new AnonymousType(new {FirstName = "James",
                                         Middle = "T",
                                         LastName="Kirk"
                                   });

If you recall the examples used in the prior posts on data generation with DataBuilder, NBuilder supplied default values for you when you built an object. You then had to alter those values should you want something different.

With an AnonymousType we use a factory to create a list of AnonymousTypes like so:

var anonTypeList = AnonFactory.CreateListOfSize(10,
                                  new {FirstName = "James",
                                         Middle = "T",
                                         LastName="Kirk"
                                   });

Nice and simple. If we didn’t have more goals to achieve, we could go home. As it is, on to the next step.

Creating a Fluent Interface for Chaining Clauses

We still need to provide variation in our data set.  NBuilder does a nice job of letting us use clauses like .WhereTheNext(5).Have() or WhereAll.Have() to introduce more structured variation in our data.  We want to achieve the same thing with the AnonymousTypes and can do the following:

var anonTypeList = AnonFactory.CreateListOfSize(10,
                                  new {FirstName ="",
                                         Middle = "",
                                         LastName=""
                                   })
                                   .WhereAll()
                                       .Have("FirstName", "William")
                                       .Have("LastName", "Riker")
                                   .BuildList();

Like NBuilder we want to specify that different segments get different values. Here’s how that is accomplished:

var anonTypeList = AnonFactory.CreateListOfSize(10,
                                  new {FirstName = "",
                                         Middle = "",
                                         LastName=""
                                   })
                                   .WhereTheFirst(5)
                                       .Have("FirstName", "William")
                                       .Have("LastName", "Riker")
                                   .AndTheNext(5)
                                       .Have("FirstName", "Jean-Luc")
                                       .Have("LastName","Picard")
                                   .BuildList();

There is also a WhereTheLast() clause for working with data at the end of a list.  All of these extension methods are contained in Extensions.cs.

Now let’s talk about how this works. For our purposes a fluent interface allows us to chain methods together, yielding a more readable syntax. This is achieved with methods that return a reference to their class, as in:

class RectangleCreator
{
    private decimal width;
    private decimal height;
    private string color;

    public RectangleCreator SetWidth(decimal width)
    {
        this.width = width;
        return this;
    }

    public RectangleCreator SetHeight(decimal height)
    {
        this.height = height;
        return this;
    }

    public RectangleCreator SetColor(string color)
    {
        this.color = color;
        return this;
    }
}

var rectangleCreator = new RectangleCreator()
                                         .SetHeight(4.5m)
                                         .SetWidth(5.75m)
                                         .SetColor("red");

With a collection or list you might be tempted to create code that simply passes on the list after performing some action. This is fine until you need to perform actions on specific segments of that list. In our case we need to know where an action has left off in order to achieve syntax like WhereTheNext(5).Have(…).AndTheNext(5).

The class AnonTypeRange accomplishes this and allows us to perform actions on a list of AnonymousTypes in the manner we desire. Here is the code:

public void SetRange(int start, int amount)
{
    Enforce.ArgumentGreaterThanZero(amount, "AnonTypeRange.Next - amount must be greater than start");
    Enforce.That((start >= 0), "AnonTypeRange.Next - start must be greater than or equal to 0");
    Enforce.That((amount + start <= this.limit), "AnonTypeRange.Next - amount can not be greater than limit");

    this.Start = start;
    this.End = (start + amount - 1) > limit ? limit : start + amount - 1;
}

The ranges themselves are set in the extension methods that comprise our syntax. Examine the WhereTheFirst() and AndTheNext() methods:

public static AnonTypeRange WhereTheFirst(this List<AnonymousType> anonTypes, int amount)
{
    Enforce.ArgumentGreaterThanZero(amount, "AnonTypeRange.WhereTheFirst - amount can not be less that 0");

    var anonTypeRange = new AnonTypeRange(anonTypes);
    anonTypeRange.SetRange(0, amount);

    return anonTypeRange;
}

public static AnonTypeRange AndTheNext(this AnonTypeRange anonTypeRange, int amount)
{
    Enforce.ArgumentGreaterThanZero(amount, "AnonTypeRange.AndTheNext - amount can not be less that 0");

    anonTypeRange.SetRange(anonTypeRange.End + 1, amount);

    return anonTypeRange;
}

The only drawback is that the actions are processed serially. In other words you do a group of 5, then another group of 5. If you need to go back you could add a WhereTheFirst() to reset the position of operations.

Before we move on take note of the WhereAll() method. This takes in a List<AnonymousType> and returns an AnonTypeRange with the range set to 0 spanning to the end:

public static AnonTypeRange WhereAll(this List<AnonymousType> anonTypes)
{
    var anonTypeRange = new AnonTypeRange(anonTypes);
    anonTypeRange.SetRange(0, anonTypes.Count);

    return anonTypeRange;
}

Our values are set with the Have clause. Again we try to mimic the great functionality of NBuilder, so you have two options.  You can set a single value on a property over a range:

//  Have
public static AnonTypeRange Have(this AnonTypeRange anonTypeRange, string property, object value)
{
    anonTypeRange.Apply(property, value);
    return anonTypeRange;
}

//  Have calls AnonTypeRange.Apply to save the properties
public void Apply(string property, object value)
{
    Enforce.ArgumentNotNull(property, "AnonTypeRange.Apply - property can not be null");
    Enforce.ArgumentNotNull<object>(value, "AnonTypeRange.Apply - value can not be null");

    int count = (this.End - Start) + 1;
    var range = this.internalAnonList.GetRange(this.Start, count);
    range.ForEach(x => x.Set(property, value));
}

Or you can create a list of values that will be selected at random and distributed across the range.  This takes advantage of the Pick functionality provided by NBuilder and is quite useful.

public void Apply<T>(string property, List<T> pickList)
{
    Enforce.ArgumentNotNull(property, "AnonTypeRange.Apply - property can not be null");

    int count = (this.End - Start) + 1;
    var range = this.internalAnonList.GetRange(this.Start, count);

    range.ForEach(x => x.Set(property, Pick.RandomItemFrom(pickList)));
}

Serializing the List of AnonymousTypes

AnonymousType has a method that will serialize the properties that it contains.  Since AnonymousType stores the properties and respective values in a Dictionary it’s fairly easy to serialize the Dictionary.  The method uses the JObject from JSON.Net, but you can come up with your own mechanisms if you like:

//  From AnonymousType
public string ToJSON(Func<Dictionary<string, object>, string, string> function,
                                                string jsonObjectName)
{
    return function(_Values, jsonObjectName);
}

//  Delegate method for serializing
public static string SerializeWithJObject(Dictionary<string, object> values, string name)
{
    var jsonObject = new JObject();

    foreach (KeyValuePair<string, object> property in values)
    {
        jsonObject.Add(new JProperty(property.Key, property.Value));
    }

    return jsonObject.ToString();
}

Serializing a list of AnonymousTypes is equally straightforward to accomplish.  You only need to traverse the list and call the ToJSON method on each AnonymousType object.  So easy it almost makes you feel guilty!
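
A minimal sketch of that traversal (assuming the list and the SerializeWithJObject delegate shown above are in scope):

var jsonItems = anonTypeList.Select(x => x.ToJSON(SerializeWithJObject, "item"))
                            .ToList();

//  Join the individual objects into a JSON array
string jsonDataSet = "[" + string.Join(",", jsonItems) + "]";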

Dynamically Generate Data Sets with DataBuilder

If you’ve made it this far, congratulations, it’s been a bit of a marathon.   What is striking is how very straightforward much of this has been.  And the icing on the cake is that you can use the Snippet section of DataBuilder to run the code.  This required a slight alteration to the ScriptHostBase file, as it was expecting a path to an assembly.  Since the goal is to generate data sets without assemblies it would be pretty silly if you had to supply something in the Assembly Path section.  All you need to do is supply something like the code below in the Code Snippet and hit “Build Data”:

var namesPickList = new List<string>();
namesPickList.Add("Geordi");
namesPickList.Add("Que");
namesPickList.Add("Data");
namesPickList.Add("Jean-Luc");

string json = AnonFactory.CreateListOfSize(10,
                     new { LastName = "Kirk",
                              FirstName = "James",
                              MiddleInitial = "T." })
                      .WhereAll()
                            .Have("FirstName", namesPickList)
                      .BuildList()
                      .ToJSON();

parameters["JsonDataSet"] = json;

Faster than a photon torpedo you get back JSON. Now you’re set to start your HTML / Javascript prototypes. Need to alter the data?  Update the snippet and run it again.  Here’s the new source code.  Prototype away!

How Embedded Scripting Makes Dynamically Generated Test Data Possible in ASP.Net – DataBuilder Part 2 November 6, 2010

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, ASP.Net, C#, CS-Script, DataBuilder, JSON.Net, NBuilder, Problem Solving.

Part 2 of a 3 part series.  For the latest DataBuilder capabilities, read this post or download the new source code from here.

Last episode Sensei unveiled a useful little tool called DataBuilder.  DataBuilder helps you to generate test data for your domain objects.  Just point DataBuilder to your assemblies, and with the magic of NBuilder and CS-Script you can create test data as JSON.  How is this possible?  This post will focus on the behind-the-scenes magic that makes DataBuilder so flexible.

The main problem that DataBuilder solves is that to create test data for your classes you normally need to fire up Visual Studio and a project, create code, compile, etc., and this can cause needless context switching and headache.  What if you simply wish to mock up a UI and need some data sets to work with?  DataBuilder helps in that you can create test data for any existing assembly.  You can also create different types of test data based on whatever criteria you need.  This is accomplished by taking the input supplied in the Snippet Editor screen, compiling it to an in-memory assembly and executing it.  No need to fire up Visual Studio and add a TestGeneration project to your .Net solution.

The “dynamic” nature of DataBuilder is implemented with CS-Script.  In short, CS-Script is an embedded scripting system that uses ECMA-compliant C#, with full access to the CLR and OS.  For an in-depth review see Oleg Shilo’s fantastic article on CodeProject where he describes his product.

As Oleg describes, CS-Script will compile your code into an assembly, load that assembly into a separate app domain, then execute that assembly.  There are two scenarios that can be used to host your script.  They are the Isolated Execution Pattern, where the host and script have no knowledge of each other, and the  Simplified Hosting Model for two way type sharing between the host and the script.  The Simplified Hosting Model allows the script file to access assemblies loaded in the host, as well as pass back data to the host.  DataBuilder uses the Simplified Host Model.

Before we get into the particular DataBuilder code, let’s review some samples that Oleg has provided.  The scenario presented is one where you wish to remotely load a script and execute it, and the recommendation is to use interface inheritance to avoid the task of using reflection to invoke the method.

// Host contains this interface:
public interface IWordProcessor
{
    void CreateDocument();
    void CloseDocument();
    void OpenDocument(string file);
    void SaveDocument(string file);
}

//  The script file implements the interface
public class WordProcessor: IWordProcessor
{
    public void CreateDocument() { ... }
    public void CloseDocument() { ... }
    public void OpenDocument(string file) { ... }
    public void SaveDocument(string file) { ... }
}

//  Host executes the script
AsmHelper helper = new AsmHelper(CSScript.Load("script.cs", null, true));

//the only reflection based call
IWordProcessor proc = (IWordProcessor)helper.CreateObject("WordProcessor");

//no reflection, just direct calls
proc.CreateDocument();
proc.SaveDocument("MyDocument.cs");

There are other methods for invoking methods and scripts. It’s well worth your time to read through the script hosting guidelines, as Oleg covers performance, reflection, interface alignment with duck typing and other facets that are important to CS-Script.

Now let’s focus on DataBuilder’s embedded scripting implementation.  DataBuilder uses the interface inheritance approach to execute the script that you supply.  Here’s the interface:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DataGenerator.ScriptHost
{
    public interface IScriptRunner
    {
        void RunScript();
        void RunScript(Dictionary<string, object> parameters);
    }
}

And here is an implementation of the interface:

//CSScript directives - DO NOT REMOVE THE css_ref SECTION!!!
//css_ref System.Core;
//css_ref System.Data.ComponentModel;
//css_ref System.Data.DataSetExtensions;
//css_ref System.Xml.Linq;

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using DataGenerator.Core;
using DataGenerator.ScriptHost;
using System.Linq.Expressions;
using System.Linq;
using Newtonsoft.Json;
using FizzWare.NBuilder;
//  Add a reference to your assemblies as well!!
using UnRelatedAssembly;

public class CreateTestFile : IScriptRunner
{
    public void  RunScript(Dictionary<string,object> parameters)
    {
        var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

        parameters["JsonDataSet"] = JsonConvert.SerializeObject(agents);
    }

    public void  RunScript()
    {
 	    throw new NotImplementedException();
    }
}

The script host is derived from ScriptHostBase.  ScriptHostBase will perform the compilation of your script with the method CompileScript(), as well as fetching any remote assemblies that you want to include.  This is a great point of flexibility as it allows you to point to any assembly that you have access to.  Assemblies can come from multiple locations, and as long as you know the namespaces you can include the classes from those assemblies in your scripts.

        /// <summary>
        /// Compile a script and store in a runner object for later
        /// execution
        /// </summary>
        protected void CompileScript()
        {
            if(string.IsNullOrEmpty(this.Script))
            {
                throw new ArgumentNullException("ScriptHostBase - CompileScript : Script can not be blank");
            }

            if (string.IsNullOrEmpty(this.TypeName))
            {
                throw new ArgumentNullException("ScriptHostBase - CompileScript : TypeName can not be blank");
            }

            //  Has an assembly already been loaded?
            string names = string.Empty;
            AppDomain appDomain = AppDomain.CurrentDomain;

            var assemblyPaths = appDomain.GetAssemblies()
                                    .ToList()
                                    .Select(x => x.FullName)
                                    .ToList();

            var fizzWare = assemblyPaths.Where(x => x.Contains("FizzWare.NBuilder"))
                                            .SingleOrDefault();

            var assemblyLoadList = new List<string>();
            assemblyLoadList = this.AssemblyPaths.ToList();

            //  Load if needed
            if (fizzWare != null)
            {
                string remove = assemblyLoadList
                                     .Where(x => x.Contains("FizzWare.NBuilder"))
                                     .SingleOrDefault();
                assemblyLoadList.Remove(remove);
            }
            else
            {
                string path = ConfigurationManager.AppSettings["FizzWarePath"].ToString();
                assemblyLoadList.Add(path);
            }

            Assembly compiler = CSScript.LoadCode(this.Script, assemblyLoadList.ToArray());
            AsmHelper asmHelper = new AsmHelper(compiler);
            this.runner = asmHelper.CreateObject(this.TypeName);
        }

You may be scratching your head at the lines of code that explicitly load FizzWare.NBuilder (lines 26-43 above).  When first constructing DataBuilder, Sensei struggled with getting NBuilder to compile with the new script.  CS-Script uses an algorithm to probe directories for assemblies as well as probing scripts to resolve namespaces.  In some cases, this probe will NOT locate a namespace based on the naming conventions of an assembly.  CS-Script has provisions for handling those scenarios, allowing you to specifically load an assembly.  The issue Sensei had at first was that the first execution of a script would complete successfully, as NBuilder would be loaded.  The problem lay with the second run of the script, as an exception would be thrown claiming that NBuilder was already loaded and hence there was no need to explicitly load it again!  The workaround is to query the loaded assemblies and, if NBuilder is loaded, remove the path to the FizzWare.NBuilder assembly from the AssemblyPaths list, preventing the script from reloading NBuilder.

Classes derived from ScriptHostBase are responsible for implementing ExecuteScript method.  In this implementation StringScriptHost derives from ScriptHostBase and has the following ExecuteScript method:

        /// <summary>
        /// Compile a script and invoke
        /// </summary>
        public override void ExecuteScript()
        {
            base.CompileScript();

            IScriptRunner scriptRunner = (IScriptRunner)this.runner;
            scriptRunner.RunScript(Parameters);
        }

Other script hosts can be created and handle scenarios where scripts stored in a document database, text fields in SQL Server or other scenarios.

The process of including your import statements, locating any scripts located on a share and passing parameters to scripts is all controlled by the ScriptController.  There are two constructors with one allowing you to specify the script location:

public ScriptController(string scriptShare){}

With the ScriptController you can execute snippets that you type free form with the method ExecuteSnippet.

public void ExecuteSnippet(string snippet, Dictionary<string, object> parameters)
        {
            Enforce.ArgumentNotNull<string>(snippet, "ScriptController.ExecuteAdHoc - snippet can not be null");

            //  Wrap snippet with class declaration and additional using statements
            snippet = snippetHeader + this.UsingFragment + snippetClassName +
                        snippet + snippetFooter;

            var scriptHost = new StringScriptHost();
            scriptHost.Script = snippet;
            scriptHost.TypeName = "AdHoc";
            scriptHost.Parameters = parameters;
            scriptHost.AssemblyPaths = this.assemblyPaths.ToArray();

            scriptHost.ExecuteScript();
        }

Another method ExecuteScript is used for executing script files that you have save on a share.  As you read through the ExecuteSnippet method, you’ll note that the controller will combine the required import and namespace methods.  It’s really just concatenating strings to build a complete script in the format displayed above in the CreateTestFile.cs code.

You create a Dictionary<string, object> called parameters and pass this to the ScriptController.Execute methods.  This allows you great flexibility as you can allow the scripts to instantiate different objects and return them to the host application for further use.  In the case of DataBuilder we are expecting a JsonDataSet object which is our serialized test data in the form of JSON.
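
A hypothetical host-side round trip, using the ExecuteSnippet method shown above (the snippet variable holds code like the CreateTestFile example):

var parameters = new Dictionary<string, object>();

var controller = new ScriptController();
controller.ExecuteSnippet(snippet, parameters);

//  The script placed its serialized test data into the dictionary
string jsonDataSet = parameters["JsonDataSet"].ToString();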

That’s it.  Hopefully you find DataBuilder and these posts useful.  CS-Script is quite powerful and allows you to execute operations without the need to constantly recompile your projects.  It also allows you to execute operations dynamically.  DataBuilder would not be possible without it.  When duty calls and fluent solutions are needed, CS-Script and embedded scripting are pretty incredible.  Source code is here.
