
More RavenDB Resources January 3, 2012

Posted by ActiveEngine Sensei in .Net Development, C#, New Techniques, Open Source, RavenDB.

Daniel Lang has a great post regarding how to handle relations in RavenDB.  He emphasizes that a document database is vastly different from a relational database and illustrates various do’s and don’ts.  Go read it now.

ApprovaFlow: A Quick Primer On RavenDB December 15, 2011

Posted by ActiveEngine Sensei in .Net, .Net Development, ActiveEngine, Approvaflow, ASP.Net, Business Processes, JSON.Net, Open Source, RavenDB, Workflow.

This post focuses on getting started with RavenDB, so we’ll set aside our focus on workflows for a bit.  It’s included in the ApprovaFlow series as it is an important part of the workflow framework we’re building.  To follow along you might want to get the source code.

RavenDB is a document database that provides a flexible means for storing object graphs.  As you’ll see, a document database presents a different set of challenges than those you normally face with a traditional relational database.  The storage “unit” in RavenDB is a schema-less JSON document.
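A hypothetical Person document, sketched here purely for illustration, might look like this:

{
    "FirstName": "Bob",
    "LastName": "Smith",
    "Id": "Person/1",
    "DepartmentId": 139
}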

Because you are working with documents you now have the flexibility to define documents differently; that is, you can support variations in your data without having to re-craft your data model each time you want to add a new property to a class.  You could adopt a “star” pattern in SQL as depicted here, but querying can become difficult.  Raven excels in this situation, and one such sweet spot is:

Dynamic Entities, such as user-customizable entities, entities with a large number of optional fields, etc. – Raven’s schema free nature means that you don’t have to fight a relational model to implement it.

Installing and Running RavenDB

The compiled binaries are easy to install.  Download the latest build and extract the files to a share.  Note that in order to run the management console you are required to install Silverlight.  To start the server, navigate to the extracted folder and double-click “Start.cmd”.  You will see a screen similar to this one once the server is up and running:

The console will launch itself and will resemble this:

How To Start Developing

In Visual Studio, add a reference to the Raven.Client.Lightweight assembly.  For CRUD operations and querying this will be all that you need.

First you will need to connect to the document store.  It is recommended that you do this once per application.  That is accomplished with:

var documentStore = new DocumentStore {Url = "http://localhost:8080"};
documentStore.Initialize();

Operations are carried out using the Unit of Work pattern, and in general you will be using blocks of this form:

using(var session = documentStore.OpenSession())
{
   //... Do some work
}

RavenDB works with Plain Old C# Objects and only requires an Id property of type string.  An identity key is generated for Id when a document is stored during a session.  If we were to create multiple documents we would see identities created in succession.  A full discussion of the alternatives to the Id property is here.

Creating a document from your POCOs’ object graphs is very straightforward:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Id { get; set; }
    public int DepartmentId { get; set; }
    // ...
}

var person = new Person();

using(var session = documentStore.OpenSession())
{
   session.Store(person);
   session.SaveChanges();
}
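Note that the client assigns the identity as soon as Store is called, before SaveChanges ever hits the server.  A quick sketch of this behavior (the key format is assumed from the examples later in this post):

using(var session = documentStore.OpenSession())
{
   var person = new Person { FirstName = "Bob", LastName = "Smith" };
   session.Store(person);        //  identity assigned here, e.g. "Person/1"
   Console.WriteLine(person.Id); //  the key is available immediately
   session.SaveChanges();        //  the document is written to the server
}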

Fetching a document can be accomplished in two manners:  by Id or with a LINQ query.  Here’s how to get a document by id:

string person = "Person/1";  //  Raven will have auto-generated a value for us.
using(var session = documentStore.OpenSession())
{
   var fetchedPerson = session.Load<Person>(personId);
   //Do some more work
}

You’ll note that there is no casting or conversion required, as Raven determines the object type and populates the properties for you.

There are naturally cases where you want to query for documents based on attributes other than the Id.  Best practice is to create static indexes on our documents, as these offer the best performance.  RavenDB also has a dynamic index feature that learns from the queries fired at the server, and over time frequently used dynamic indexes are made permanent.
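As a sketch of what a static index can look like in the client API (the index class and its registration below are illustrative, not taken from this post’s source code):

//  Define a static index in code by deriving from AbstractIndexCreationTask
public class People_ByDepartment : AbstractIndexCreationTask<Person>
{
    public People_ByDepartment()
    {
        Map = people => from person in people
                        select new { person.DepartmentId };
    }
}

//  Register all index classes found in the assembly against the document store
IndexCreation.CreateIndexes(typeof(People_ByDepartment).Assembly, documentStore);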

For your first bout with RavenDB you can simply query the documents with LINQ.  The test code takes advantage of the dynamic index feature.  Later you will want to create indexes based on how you will most likely retrieve the documents.  This is different from a traditional RDBMS solution, where the data is optimized for ad hoc querying; a document database is NOT.

Continuing with our example of Person documents we would use:

int departmentId = 139;

using(var session = documentStore.OpenSession())
{
   var people = session.Query<Person>()
                          .Where(x => x.DepartmentId == departmentId)
                          .ToList();
}

In the source code for this post there are more examples of querying.

Debugging, Troubleshooting and Dealing with Frustration

Given that this is something new and an open source project, you may find yourself searching for help and more guidelines.  One thing to avail yourself of while troubleshooting is the fact that RavenDB has a REST interface, so you can validate your assumptions – or worse, confirm your errors – by using curl from the command line.  For example, to create a document via HTTP you issue:

curl -X POST http://localhost:8080/docs -d "{ FirstName: 'Bob', LastName: 'Smith', Address: '5 Elm St' }"
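Fetching a document back over HTTP works the same way; for example, assuming the default endpoint layout and the auto-generated key from earlier:

curl http://localhost:8080/docs/Person/1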

Each action that takes place on the RavenDB server is displayed in a log on the server console app.  Sensei had to resort to this technique when troubleshooting some issues when he first started.  This StackOverflow question details the travails.

Another area that threw Sensei for a loop at first was the way RavenDB writes and maintains indexes.  In short, indexing is a background process, and Raven is designed to be “eventually consistent”.  That means there can be latency between when a change is saved and when it is indexed, and only once it is indexed can it be fetched via queries.  When running tests from NUnit this code did not behave as expected, even though the console reported that the document was created:

session.Store(teamMember);
session.SaveChanges();

//  The document is saved, but indexing is asynchronous, so this
//  count may be evaluated against a stale index
int posttestCount = session.Query<TeamMember>()
                .Count();

According to the documentation you can overcome this by declaring that you are willing to wait until RavenDB has finished indexing the pending writes.  This code will get you the expected results:

int posttestCount = session.Query<TeamMember>()
              .Customize(x => x.WaitForNonStaleResults())
              .Count();

Depending on the number of tests you write, you may wish to run RavenDB in embedded mode for faster results.  This might prove useful for automated testing and builds.  The source code provided in this post does NOT use embedded mode; rather, you need to have your server running, as this gives you the opportunity to inspect documents and acclimate yourself to the database.
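If you do want to experiment with embedded mode, a minimal sketch (assuming a reference to the Raven.Client.Embedded assembly) looks like:

//  An in-memory embedded store; handy for fast, isolated test runs
var documentStore = new EmbeddableDocumentStore { RunInMemory = true };
documentStore.Initialize();

The same session API shown above works against the embedded store, so tests can run without a separate server process.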

There is much more that you can do with RavenDB, such as creating indexes across documents and assigning security to individual documents.  This primer should be enough to get you started.  Next post we’ll see how RavenDB fits into the ApprovaFlow framework.  Grab the source, play around and get ready for the next exciting episode.

 

ApprovaFlow – A Proof of Concept March 25, 2011

Posted by ActiveEngine Sensei in .Net, .Net Development, ActiveEngine, Approvaflow, ASP.Net, C#, JSON.Net, LINQ, New Techniques, Open Source.

Like Tolkien, Sensei wants to create the landscapes, cultures and languages before he writes his next epic.  You can be the judge whether the work is a series of sketches and notes like the Silmarillion or a cohesive, compelling story that you want to read again and again.  As a bonus Sensei will deliver working software that hopefully will be of use to you.  (Photo credit – utnapistim).

The epic will be called ApprovaFlow.  ApprovaFlow is a framework / process / methodology that allows you to create workflow applications that are easy to deploy and are configurable.  With ApprovaFlow Sensei hopes to demonstrate how to readily incorporate the inevitable changes that your users will ask of you.  Deliver changes effortlessly and without groans.  Cast off the chains of inconvenient builds and focus on creating solutions that stay out of the users’ way.

Ok.  Management wants bullet points, so here are our goals for ApprovaFlow:

• Model a workflow in a clear format that is readable by both developer and business user.  One set of verbiage for all parties.
• Allow the state of a workflow to be persisted as an integer or string.  Quickly fetch the state of a workflow.
• Create pre- and post-processing methods that can enforce rules or carry out actions when completing a workflow task.
• Introduce new functionality while isolating the impact of the new changes.  New components should not break old ones.
• Communicate with the client using a standard set of objects.  In other words, your solution domain will not change how the user interface gathers data from the user.
• Use one .aspx page to process user input for any type of workflow.
• Provide the ability to roll your own customizations to the front end or back end of your application.

There it is.  These goals will probably take us a good amount of time to review and implement.  Is it worth it?  Hell yeah.  We’ll end up with one simple project instead of a bloated framework where it takes forever to find anything.  A nice by-product will be that you can spend more time thinking about how to solve your users’ problems rather than trying to figure out a monstrous framework that requires a huge investment of energy and time just to get simple things done.

How Embedded Scripting Makes Dynamically Generated Test Data Possible in ASP.Net – DataBuilder Part 2 November 6, 2010

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, ASP.Net, C#, CS-Script, DataBuilder, JSON.Net, NBuilder, Problem Solving.

Part 2 of a 3 part series.  For the latest DataBuilder capabilities, read this post or download the new source code from here.

Last episode Sensei unveiled a useful little tool called DataBuilder.  DataBuilder helps you generate test data for your domain objects.  Just point DataBuilder to your assemblies, and with the magic of NBuilder and CS-Script you can create test data as JSON.  How is this possible?  This post will focus on the behind-the-scenes magic that makes DataBuilder so flexible.

The main problem that DataBuilder solves is that to create test data for your classes you normally need to fire up Visual Studio and a project, create code, compile, etc., and this can cause needless context switching and headache.  What if you simply wish to mock up a UI and need some data sets to work with?  DataBuilder helps in that you can create test data for any existing assembly.  You can also create different types of test data based on whatever criteria you need.  This is accomplished by taking the input supplied in the Snippet Editor screen, compiling it to an in-memory assembly and executing it.  No need to fire up Visual Studio and add a TestGeneration project to your .Net solution.

The “dynamic” nature of DataBuilder is implemented with CS-Script.  In short, CS-Script is an embedded scripting system that uses ECMA-compliant C#, with full access to the CLR and OS.  For an in-depth review see Oleg Shilo’s fantastic article on CodeProject where he describes his product.

As Oleg describes, CS-Script will compile your code into an assembly, load that assembly into a separate app domain, then execute it.  There are two scenarios for hosting your script: the Isolated Execution Pattern, where the host and script have no knowledge of each other, and the Simplified Hosting Model, which allows two-way type sharing between the host and the script.  The Simplified Hosting Model lets the script file access assemblies loaded in the host, as well as pass data back to the host.  DataBuilder uses the Simplified Hosting Model.

Before we get into the particular DataBuilder code, let’s review some samples that Oleg has provided.  The scenario presented is one where you wish to remotely load a script and execute it, and the recommendation is to use interface inheritance to avoid having to invoke the method via reflection.

// Host contains this interface:
public interface IWordProcessor
{
    void CreateDocument();
    void CloseDocument();
    void OpenDocument(string file);
    void SaveDocument(string file);
}

//  The script file implements the interface
public class WordProcessor : IWordProcessor
{
    public void CreateDocument() { ... }
    public void CloseDocument() { ... }
    public void OpenDocument(string file) { ... }
    public void SaveDocument(string file) { ... }
}

//  Host executes the script
AsmHelper helper = new AsmHelper(CSScript.Load("script.cs", null, true));

//  the only reflection based call
IWordProcessor proc = (IWordProcessor)helper.CreateObject("WordProcessor");

//  no reflection, just direct calls
proc.CreateDocument();
proc.SaveDocument("MyDocument.cs");

There are other methods for invoking methods and scripts.  It’s well worth your time reading through the script hosting guidelines, as Oleg covers performance, reflection, interface alignment with duck typing and other facets that are important to CS-Script.
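As a quick taste of one alternative, CS-Script can also compile and invoke a single free-standing method through AsmHelper.  A minimal sketch in the spirit of Oleg’s samples (exact helper signatures may vary between CS-Script versions):

//  Compile a single method and invoke it by name through AsmHelper
var helper = new AsmHelper(CSScript.LoadMethod(
    @"public static void SayHello(string greeting)
      {
          Console.WriteLine(greeting);
      }"));

helper.Invoke("*.SayHello", "Hello from the host!");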

Now let’s focus on DataBuilder’s embedded scripting implementation.  DataBuilder uses the interface inheritance approach to execute the script that you supply.  Here’s the interface:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DataGenerator.ScriptHost
{
    public interface IScriptRunner
    {
        void RunScript();
        void RunScript(Dictionary<string, object> parameters);
    }
}

And here is an implementation of the interface:

//CSScript directives - DO NOT REMOVE THE css_ref SECTION!!!
//css_ref System.Core;
//css_ref System.Data.ComponentModel;
//css_ref System.Data.DataSetExtensions;
//css_ref System.Xml.Linq;

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using DataGenerator.Core;
using DataGenerator.ScriptHost;
using System.Linq.Expressions;
using System.Linq;
using Newtonsoft.Json;
using FizzWare.NBuilder;
//  Add a reference to your assemblies as well!!
using UnRelatedAssembly;

public class CreateTestFile : IScriptRunner
{
    public void RunScript(Dictionary<string, object> parameters)
    {
        var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

        parameters["JsonDataSet"] = JsonConvert.SerializeObject(agents);
    }

    public void RunScript()
    {
        throw new NotImplementedException();
    }
}

The script host is derived from ScriptHostBase.  ScriptHostBase will perform the compilation of your script with the method CompileScript(), as well as fetching any remote assemblies that you want to include.  This is a great point of flexibility as it allows you to point to any assembly that you have access to.  Assemblies can come from multiple locations, and as long as you know the namespaces you can include the classes from those assemblies in your scripts.

        /// <summary>
        /// Compile a script and store in a runner object for later
        /// execution
        /// </summary>
        protected void CompileScript()
        {
            if(string.IsNullOrEmpty(this.Script))
            {
                throw new ArgumentNullException("ScriptHostBase - CompileScript : Script can not be blank");
            }

            if (string.IsNullOrEmpty(this.TypeName))
            {
                throw new ArgumentNullException("ScriptHostBase - CompileScript : TypeName can not be blank");
            }

            //  Has an assembly already been loaded?
            string names = string.Empty;
            AppDomain appDomain = AppDomain.CurrentDomain;

            var assemblyPaths = appDomain.GetAssemblies()
                                    .ToList()
                                    .Select(x => x.FullName)
                                    .ToList();

            var fizzWare = assemblyPaths.Where(x => x.Contains("FizzWare.NBuilder"))
                                            .SingleOrDefault();

            var assemblyLoadList = new List<string>();
            assemblyLoadList = this.AssemblyPaths.ToList();

            //  NBuilder already loaded?  Remove it from the load list so the
            //  script does not attempt to load the assembly a second time
            if (fizzWare != null)
            {
                string remove = assemblyLoadList
                                     .Where(x => x.Contains("FizzWare.NBuilder"))
                                     .SingleOrDefault();
                assemblyLoadList.Remove(remove);
            }
            else
            {
                //  Not loaded yet - explicitly include the NBuilder assembly
                string path = ConfigurationManager.AppSettings["FizzWarePath"];
                assemblyLoadList.Add(path);
            }

            Assembly compiler = CSScript.LoadCode(this.Script, assemblyLoadList.ToArray());
            AsmHelper asmHelper = new AsmHelper(compiler);
            this.runner = asmHelper.CreateObject(this.TypeName);
        }

You may be scratching your head at the lines of code that explicitly handle loading FizzWare.NBuilder.  When first constructing DataBuilder, Sensei struggled with getting NBuilder to compile with the new script.  CS-Script uses an algorithm to probe directories for assemblies as well as probing scripts to resolve namespaces.  In some cases, this probe will NOT locate a namespace based on the naming conventions of an assembly.  CS-Script has provisions for handling those scenarios, allowing you to explicitly load an assembly.  The issue Sensei had at first was that the first execution of a script would complete successfully, as NBuilder would be loaded.  The problem lay with the second run of the script, as an exception would be thrown claiming that NBuilder was already loaded and hence there was no need to explicitly load it again!  The workaround is to query the loaded assemblies and, if NBuilder is already loaded, remove the path to the FizzWare.NBuilder assembly from the AssemblyPaths list, preventing the script from reloading NBuilder.

Classes derived from ScriptHostBase are responsible for implementing the ExecuteScript method.  In this implementation StringScriptHost derives from ScriptHostBase and has the following ExecuteScript method:

        /// <summary>
        /// Compile a script and invoke
        /// </summary>
        public override void ExecuteScript()
        {
            base.CompileScript();

            IScriptRunner scriptRunner = (IScriptRunner)this.runner;
            scriptRunner.RunScript(Parameters);
        }

Other script hosts can be created to handle scenarios where scripts are stored in a document database, in text fields in SQL Server, or elsewhere.

The process of including your import statements, locating any scripts on a share and passing parameters to scripts is all controlled by the ScriptController.  There are two constructors, with one allowing you to specify the script location:

public ScriptController(string scriptShare){}

With the ScriptController you can execute snippets that you type free form with the method ExecuteSnippet.

        public void ExecuteSnippet(string snippet, Dictionary<string, object> parameters)
        {
            Enforce.ArgumentNotNull<string>(snippet, "ScriptController.ExecuteAdHoc - snippet can not be null");

            //  Wrap snippet with class declaration and additional using ;
            snippet = snippetHeader + this.UsingFragment + snippetClassName +
                        snippet + snippetFooter;

            var scriptHost = new StringScriptHost();
            scriptHost.Script = snippet;
            scriptHost.TypeName = "AdHoc";
            scriptHost.Parameters = parameters;
            scriptHost.AssemblyPaths = this.assemblyPaths.ToArray();

            scriptHost.ExecuteScript();
        }

Another method, ExecuteScript, is used for executing script files that you have saved on a share.  As you read through the ExecuteSnippet method, you’ll note that the controller combines the required using statements and class declaration with your snippet.  It’s really just concatenating strings to build a complete script in the format displayed above in the CreateTestFile.cs code.

You create a Dictionary<string, object> called parameters and pass this to the ScriptController.Execute methods.  This allows you great flexibility, as the scripts can instantiate different objects and return them to the host application for further use.  In the case of DataBuilder we are expecting a JsonDataSet entry, which is our serialized test data in the form of JSON.
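Putting the pieces together, a hypothetical host-side call might look like the following.  The snippet body is written as the contents of RunScript, per the wrapping described above; the parameterless constructor and all names here are assumptions for illustration, not code from the DataBuilder source.

var controller = new ScriptController();
var parameters = new Dictionary<string, object>();

//  ExecuteSnippet wraps this fragment with the class declaration,
//  using statements and IScriptRunner plumbing before compiling it
string snippet = @"
    var agents = Builder<SalesAgent>.CreateListOfSize(3).Build().ToList();
    parameters[""JsonDataSet""] = JsonConvert.SerializeObject(agents);";

controller.ExecuteSnippet(snippet, parameters);

//  The script hands its results back through the shared dictionary
string json = (string)parameters["JsonDataSet"];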

That’s it.  Hopefully you find DataBuilder and these posts useful.  CS-Script is quite powerful and allows you to execute operations dynamically, without the need to constantly recompile your projects.  DataBuilder would not be possible without it.  When duty calls and fluent solutions are needed, CS-Script and embedded scripting are pretty incredible.  Source code is here.

Deserializing to Persistent AnonymousTypes with JSON.Net October 9, 2010

Posted by ActiveEngine Sensei in .Net, .Net Development, ActiveEngine, C#, Problem Solving.
Tags: , , ,
1 comment so far

A few weeks back Sensei unleashed a crazy idea regarding a class AnonymousType that could persist values from an anonymous object.  The AnonymousType, created by Hugo Benocci, models an individual object.  In a sense this is a hyper-charged Dictionary of properties that represent an object.  It’s metadata.  This is similar to a concept called the Adaptive Object Model, the theory that you create mechanisms to describe what your objects should do.  Instead of having a class for SalesAgent or Car you have classes that represent the classes, attributes, relationships and behavior in your domain.  In other words, you create a metadata modeler and feed it the criteria that would represent SalesAgent, Car, etc.

Having a “sound-of-one-hand-clapping” moment, Sensei realized that while “Persistent AnonymousTypes” was in the title of the post, no mechanism for serializing the AnonymousType was included!!  “What the …”.  Jeeezz!  “Hell, that should be easy”, Sensei says.  Grab JSON.Net and with elbow grease make it work, right?  Anybody?

One thing that should be immediately clear is that all the meta data is locked up in the AnonymousType object, so you can’t just write:

string json = JsonConvert.SerializeObject(anonymousType);

Instead we need a way to represent all the properties of our AnonymousType and preserve each property’s name, its type, and the underlying value.  Something like:

public class NameTypeValue
{
    public string Name { get; set; }
    public Type Type { get; set; }
    public object Value { get; set; }
}

And wouldn’t it be nice if we could take a serialized stream of an actual object and convert that into an AnonymousType?  Thinking further ahead, it would be rather easy to pass around a list of NameTypeValues, as you could easily send and receive this object from a web client or other front end, building yourself a modeling or code generation tool.

Serializing the object depicted above is pretty trivial.  Using a Func<Dictionary<string, object>, string, string> we can serialize any way we wish with two tiny methods:

public string ToJSON(Func<Dictionary<string, object>, string, string> function, string jsonObjectName)
{
    return function(_Values, jsonObjectName);
}

///  Method to serialize.  You can come up with your own!!
public string SerializeWithJObject(Dictionary<string, object> values, string name)
{
    var jsonObject = new JObject();

    foreach (KeyValuePair<string, object> property in values)
    {
        jsonObject.Add(new JProperty(property.Key, property.Value));
    }

    return jsonObject.ToString();
}

If there is another mechanism for serialization that you wish to use you are free to come up with your own.  For illustration here is the JSON output of an AnonymousType for a sales agent, and followed by the JSON for an actual Agent object:

Agent JSON ==> {"Name":"Sales Guy Rudy","Department":45}

AnonymousType JSON ==> {"Name": "Sales Guy Rudy", "Department": 45}

 

Now that we can simply serialize our AnonymousType with output matching that of an actual object, we just need a way to interpret a JSON stream and build an AnonymousType.  Along the way, Sensei will talk about the second “sound-of-one-hand-clapping” moment he had when working with JSON.Net.  As you may have already surmised, you need to describe the Type of each property in order for deserialization to happen.  Sensei didn’t, and took a trip to the valley of frustration.

Ok.  We have a stream of JSON with the Name, Value and Type of each property for an object.  AnonymousType has a Set method to set a new property:

        /// <summary>
        /// Sets the value of a property on an anonymous type
        /// </summary>
        /// <remarks>Anonymous types are read-only - this saves a value to another location</remarks>
        public void Set(string property, object value) {
            this.Set<object>(property, value);
        }

        /// <summary>
        /// Sets the value of a property on an anonymous type
        /// </summary>
        /// <remarks>Anonymous types are read-only - this saves a value to another location</remarks>
        public void Set<T>(string property, T value) {

            //check for the value
            if (!this.Has(property)) {
                this._Values.Add(property, value);

            }
            else {

                //try and set the value
                try {
                    this._Values[property] = value;
                }
                catch (Exception ex) {
                    throw new Exception(
                        string.Format(
                            AnonymousType.EXCEPTION_COULD_NOT_ACCESS_PROPERTY,
                            property,
                            (value == null ? "null" : value.GetType().Name),
                            ex.Message
                            ),
                            ex);
                }
            }

        }

It’s pretty straightforward to accept a NameTypeValue object and perform:

public void AddProperty(string objectName, NameTypeValue nameTypeValue)
{
    //  Object doesn't exist?  Add.
    if (objects.ContainsKey(objectName) == false)
    {
        objects.Add(objectName, new List<NameTypeValue>());
    }

    var properties = objects[objectName];

    //  All properties are unique
    var existingProperty = properties.Where(x => x.Name == nameTypeValue.Name)
                                     .SingleOrDefault();

    if (existingProperty == null)
    {
        properties.Add(nameTypeValue);
    }
}

and taking this a step further, a List<NameTypeValue> can supply all properties for an object:

properties.ForEach(x => { anonymousType.Set(x.Name, x.Value); });

Accepting a JSON stream of a List<NameTypeValue> should be easy-cheesey mac-n-peasey.  The first version looked like the following:

public AnonymousType DeserializeFromJSONProperties(string objectName, string json)
{
    Enforce.ArgumentNotNull(objectName, "AnonFactory.Deserialize - objectName can not be null");
    Enforce.ArgumentNotNull(json, "AnonFactory.Deserialize - json can not be null");

    List<NameTypeValue> propertyList = JsonConvert.DeserializeObject<List<NameTypeValue>>(json);

    //  Add properties.  Make sure int is not deserialized to a long since JSON.Net
    //  makes best guess
    propertyList.ForEach(x => AddProperty(objectName, x));

    return CreateAnonymousType(objectName);
}

But one-moooorrree-thing!  Sensei discovered that JSON.Net, when presented with an integer like 5, will deserialize to the largest possible type when not presented with a target.  In other words, when you have this JSON:

{"Department" : 45}

and deserialize to an object, it must accommodate the largest possible type in order to avoid truncating the data.  That means an int is deserialized as an Int64!!  The first round of testing was quite aggravating, as AnonymousType would accept the property into its schema, but when you went to fetch that value later on you would get an exception.  In other words, when you did this:

//  Found in JSONTests.MakeItFail()
var anonFactory = new AnonFactory();
var darrellDept = new NameTypeValue();
darrellDept.Name = "Department";
darrellDept.Value = 45;

var darrellName = new NameTypeValue();
darrellName.Name = "Name";
darrellName.Value = "Darrell";

var propertyList = new List<NameTypeValue>();
propertyList.Add(darrellDept);
propertyList.Add(darrellName);

//  Create JSON stream of properties
string darrellPropertyJSON = JsonConvert.SerializeObject(propertyList);

//  Try to deserialize and create an AnonymousType object
var otherDarrell = anonFactory.DeserializeFromJSONProperties("Agent", darrellPropertyJSON);
Assert.AreEqual(otherDarrell.Get("Department"), 45);

you got an InvalidCastException.

Luckily you have the Type so you can perform a conversion as you deserialize the property and add it to AnonymousType’s Dictionary<string, object>.  Here’s the new version:

propertyList.ForEach(x => AddProperty(objectName, ConvertTypeFromDefinition(x)));

private NameTypeValue ConvertTypeFromDefinition(NameTypeValue nameTypeValue)
{
  if (nameTypeValue.Type != nameTypeValue.Value.GetType())
  {
    nameTypeValue.Value = Convert.ChangeType(nameTypeValue.Value, nameTypeValue.Type);
  }

  return nameTypeValue;
}

When you look at the new version of the AnonymousType project you’ll note that serializing is handled by the AnonymousType itself, while a factory class is used for building an AnonymousType from NameTypeValues and for deserializing JSON as well.  Sensei struggled a bit with this: on the one hand, if AnonymousType is responsible for serializing itself, should it also be able to deserialize a stream?  On the other hand, a factory seemed logical, since you could have a registry of AnonymousType objects, thereby centralizing the creation and management of AnonymousTypes.  Don’t like it – create your own and share!  Regardless, looks like we can fit through the mini-Stargate now.  Here’s version 2.

The Economics of Protecting the Red Shirts July 29, 2009

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, Business Processes, Coaching, Design Patterns, Mythology, Personal Development, Problem Solving, software economics.

Recently I came across this post from a fellow lamenting the lack of interest on the part of .Net developers in architecture solutions such as IoC, Dependency Injection, ORMs, and the like.  This stood out in stark contrast to the Java developers this person interviewed, who either were conversant with the technology or were interested enough to pursue informing themselves further.  Some call this the result of Drag-n-Drop design as laid out in a post by Greg Young, a Microsoft MVP and .Net developer who has specialized in high performance applications.  Greg maintains in his post Java vs. .Net Developers that drag and drop is misapplied and there needs to be a greater effort to isolate the cases where it is misused.  This practice has arisen, he maintains, from poor training and lack of awareness of other development platforms. (more…)

TDD and Design Pattern Screencasts by Jean Paul Boodhoo December 21, 2007

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, Design Patterns, TDD, Test Driven Development.

If you have read this blog, you’ll have noted that Jean Paul Boodhoo is a great resource to learn from. Below is a compilation of screencasts that he completed for DNR TV with Carl Franklin. You should take the time to review these, as you will see clearly that JP has dedicated himself to refining the art of programming. He himself freely admits that he has had to struggle to learn and develop his skills to the point where they are today. Watching him apply TDD with Resharper is utterly amazing.

You should take the time to view each of these and share them with your team members, as it will jump start your interest in enhancing your development discipline. Work on your core toolkit and improve your ActiveEngine!

Enjoy:

Demystifying Design Patterns Part 1

Demystifying Design Patterns Part 2

Demystifying Design Patterns Part 3

Demystifying Design Patterns Part 4

Demystifying Design Patterns Part 5

Food For Thought … December 8, 2007

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, C#, Mythology, Problem Solving.

This post by Stevey discusses Java but could easily be applied to C#.  For any of you old-time procedural coders that the Alt.Net bullies like to kick around, here may lie some vindication:

Execution in the Kingdom of Nouns

Clarity of Thought Only Comes From Practice November 21, 2007

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, Business Processes, Coaching, Personal Development, Problem Solving.

All of you code jockeys need to start a new series of mental calisthenics.  Your thoughts are blocked, as your set of speech patterns is your worst enemy.  If you have used the phrase “I want it to be extensible” in front of your customers this week, shame on you!

The business community is looking to solve problems, but they are not looking to you for help; rather, they want you to just make things work based on what they tell you the software should do.  Because of that fact alone you may not be offering the value that will sustain your gigs, period.  “Open to extension, but closed to modification” – who cares, because your customer base can’t understand what you mean.

ActiveEngine Sensei says head over to ProjectManagement411.com, and learn what the business community wants to hear and how to best communicate with them.  You should read the series on value selling your projects, because you’ll notice that NONE OF IT MENTIONS THAT MODIFICATIONS TO YOUR BUSINESS LAYER .DLL’s WILL BE CHEAPER BECAUSE YOU USED THE DECORATOR PATTERN!!!!  That’s not on their minds, and hearing that from you may be why they want to ship your job offshore.  You need to learn how to measure value through their eyes.

ProjectManagement411 is focused on how the enterprise can become lean, and how agility is going to become their ActiveEngine.  Don’t let this synergy pass you by.  Concepts like Data Stewardship may make you snicker, but that is how your stay will be extended.

Drive-by Specifications – Part 1 November 16, 2007

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, Business Processes, C#, Problem Solving, TDD, Test Driven Development.

Here’s a concept that’s new for the business community.  A team will produce a product that meets your goals, but they must also write programs to test the new program.

Why? That must be more inefficient since you’re doing twice the amount of work. Why can’t you just do what we listed on the specification we gave you? What about all that talk about necessary and sufficient?

Test Driven Development is often misunderstood, and that’s unfortunate, as the business community can greatly benefit from this approach to developing software solutions.  Yes, there is additional programming required, but think of this effort as validation of a contract, and that contract is the set of criteria that you have given the developers.  By following the prescribed approach, the developer will have thought through each facet of the code that leads to the fulfillment of the specification.  No code is written other than that which will fulfill passing a test.  This greatly focuses effort on what is necessary and sufficient, as nothing is built beyond the boundaries of the set of tests.  The tests are like that piece of paper with stick figures Al and Wu used to communicate with each other.

It must be noted that an advantage here is that when changes are made later on, as they always will be, the tests can be run again and it can be quickly determined whether the alterations have broken other facets of the program.  Luckily the execution of these tests can be automated, for as you can imagine you quickly end up with lots of tests.

Only after all tests are passing can the developer perform enhancements, as the tests will gauge whether or not the program still functions as specified.  Most programmers forget this part, as the artisan within can often subdue pragmatism, and some, but not all, developers will continue to develop code intended for future use that never hits the targets of the original specification.  The tests will help re-enforce the boundaries.  Who?  Wu?  No who, Wu!  No Wu?  Right here on the paper you idiot!!  Yes, you should really yell like that at the programmers!

Here’s a set of criteria, the basis of our program (a sketch of a test for the first rule follows the list):

Process an increase in commission of sales for each customer:

  • For customers that have been with our product line UltraSlim for 5 years or less, increase by 4%. All other UltraSlim customers increase by 6%.
  • For customers that have been with our product line UltraSmart for 3 years or less, increase by 2.5%. All other UltraSmart customers increase by 3.5%.
  • Do not apply a commission for those customers who have been with either product line since 3rd quarter of the current sales year (calendar year based).
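To make the contract concrete, here is a sketch of a test for the first rule.  Customer and CommissionCalculator are hypothetical classes invented for illustration; they are not part of any existing codebase.

using NUnit.Framework;

[TestFixture]
public class CommissionIncreaseTests
{
    [Test]
    public void UltraSlim_customer_of_five_years_or_less_gets_4_percent()
    {
        //  Hypothetical domain objects standing in for the real specification
        var customer = new Customer { ProductLine = "UltraSlim", YearsWithProductLine = 4 };
        var calculator = new CommissionCalculator();

        decimal increase = calculator.CalculateIncrease(customer);

        Assert.AreEqual(0.04m, increase);
    }
}

A failing test here is a conversation starter: either the code is wrong or the criteria are, and both parties find out immediately.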

Seems like a lot?  Consider this: if this is the criteria and the means for communicating, you quickly begin moving in concert and hit those sweet spots of productivity.  You’ll find that you’ll discover contradictory criteria that you may have supplied, and you’ll both spot errors more quickly.  That flow of productivity and ideas will become your ActiveEngine for great software creation.

The next part in the series will discuss how these tests are validated.  It will be technical in nature but of benefit to the business community, as tests done properly communicate what the developers are doing with the code.