Friday, 24 October 2014

I'm done with story points, velocity and sprints

Ok, saying I'm done with it might be harsh, but there has to be a better way to run a project. I've been fairly committed to agile for quite a few years now, and story points, velocity and sprints are slowly becoming one of my main bugbears. I have multiple problems with all of them, but my main gripe is that on most projects they break agile. Let's take each in turn and analyse what it is supposed to give you versus what you actually witness on a project.

Story points

Story points are a method to size a piece of work without having to get bogged down in how long it will take. Let's say the story is "Filling a glass of Orange Juice".

  • Time based approach: how long will it take to pour a glass of orange juice?
  • Story points approach: "let's estimate what size the glass is". 


With both approaches you have inaccuracies. The first might incorrectly estimate the speed of the pour, while the second might not size the glass correctly. But the reason we moved away from a time based approach is developers' inability to provide accurate time estimates; we just seem to be better at sizing something than at giving it a time estimate. You've all heard the different equations for estimating a story or task length - take a developer's estimate and double it, or even multiply it by 2.5. We are notorious underestimators, even given this knowledge! Points based estimates are supposed to fix this because you estimate the size or complexity of the story instead. The theory is that it's easier to estimate roughly how big a glass of orange juice is than roughly how long it takes to fill.

Also, some developers are faster than others. If you quote filling a glass of orange juice at 2 days, it might take a junior developer 3 or 4 days to do the same story. Estimating in points solves that problem: 2 points can mean a different thing to a junior developer than to a senior developer, and we can account for that by giving the junior devs (as a resource) a lower sprint velocity. If we lose them from the sprint then it's easier to account for what the loss in points will be.

This sounds fantastic, but it doesn't work quite as well as you'd think. You've been in those planning meetings where the developers think a story through, prod the BAs for answers and eventually come to their conclusion. They all raise their cards and the points average out at 5. BAs and project managers shout "How on earth can that be a 5 point story?!?". This completely misses what the estimate is: a story size, as agreed by the developers. What exactly is 5 points anyway? It's just a number that expresses story complexity. But quite often you'll get argued down and the story ends up being 3 points. And suddenly you have your 50 point sprint full of underestimated stories and only complete 75% of it. It's almost the reverse of the stereotypical developer estimate: developers give their numbers and, rather than the old school "double it!", we get a "half it!" approach.

There are more problems than this, but they are better thought of in terms of velocity.

Velocity

As I said, what exactly are story points anyway? And what exactly is velocity? To give you an idea of the misconceptions I have witnessed, I will paraphrase a statement I heard on a recent project I was involved in:

'We should have a velocity of 50 points per sprint.'

This instantly raised my suspicion, as we hadn't even hit 3 sprints and were actually performing a little lower than that. It's also the kind of remark usually associated with a misunderstanding of what points and velocity mean. I poked about with a few questions and eventually discovered that the reason the sprints should have a 50 point velocity was that another similar-ish project in the company had a 50 point velocity.

So, let's look at the question again, what exactly are story points anyway? Let's go back to our story "Filling a glass of Orange Juice":

  • Team A estimate this as 5 points
  • Team B estimate this as 3 points
  • Team C estimate this as 8 points


Sprint starts:

  • Team A fill the glass of orange juice. It takes them 2 days (it was a very big glass!). 
  • Team B also start to fill a glass of orange juice. It also takes them 2 days, even though they pointed it lower! 
  • Team C fill their glass, add a dash of mango because they have a pretentious client, and then paint some fancy designs on the glass. They were probably right to point it at 8.


There are two things I'm trying to get across here.

  • Firstly, Team A and Team B quoted different points for what was effectively the same complexity of work. This really doesn't matter. I would expect Team B to also maintain a lower velocity, but that doesn't mean they actually produce any less work. They will have the same output; they just estimate and burn through points at a different rate.
  • Secondly, look at Team C versus the other two. Although a story might look like the same piece of work, sometimes it's not. It can depend on the technology, who your developers are, and so on.

Effectively, as far as different projects are concerned, velocity is only relevant to the project it came from. You shouldn't try to impose one project's velocity (or even its points) on another, independent project.

Up until this point I can actually work with all of these concepts. The issue I have with points and velocity is a misunderstanding of the concepts rather than something more fundamental. But I would add that I'm frustrated at having to re-educate people every time I join a new agile project.

Sprints

Sprints are something I have become less fond of as time has gone on. Taking a look at the definition of a sprint, on the face of it they seem like a good idea (taken from Wikipedia):

'A sprint (or iteration) is the basic unit of development in Scrum. The sprint is a "timeboxed" effort; that is, it is restricted to a specific duration. The duration is fixed in advance for each sprint and is normally between one week and one month, although two weeks is typical.

'Each sprint is started by a planning meeting, where the tasks for the sprint are identified and an estimated commitment for the sprint goal is made, and ended by a sprint review-and-retrospective meeting, where the progress is reviewed and lessons for the next sprint are identified.

'Scrum emphasizes working product at the end of the Sprint that is really "done"; in the case of software, this means a system that is integrated, fully tested, end-user documented, and potentially shippable.'

The problem I have is when the following 3 concepts are bundled into the same process:

  • Timeboxed
  • Story commitment
  • Shippable product

My first issue is this: if you're going to "timebox" the work, you are presuming that every single deliverable can be developed within your timeboxed period. In every project I have worked on to date, about 10% of the sprints have truly delivered something shippable. Most sprints finish with incomplete stories being moved into following sprints, or with partially delivered features, because the sprint periods are over-restrictive.

My second issue is with the commitment to sprint tasks/stories. Generally, once the sprint content has been defined and stories "accepted" into the sprint, you're not really supposed to change it. We all throw this concept out fairly early, because it's the first bit that really breaks agile. The reasoning behind not being allowed to change a sprint that has already started is that you'd never get any work done if people kept changing the stories and work from beneath your feet. But there has to be some kind of flexibility here. Priorities change frequently in fast moving projects, so you frequently have to move stuff in and out. This means you have to either:

  • Estimate not just stories that can fit in a sprint, but above and beyond that
  • Or estimate new stories as and when we need to push stuff in and out of sprints

Now, when new stories come in, do they "fit" with our deliverable? Maybe they do, maybe they don't. Either way, to be truly agile we really should allow priorities to change and allow work to get completed when it's needed. But we are dancing a fine line with our shippable product here!

Which takes me on to my final issue - the "potentially shippable product". Sprints don't help this, they hinder it. What you end up doing is shoehorning X points worth of stories into a sprint and calling it your "shippable" piece of work, when what you frequently end up delivering is partially complete features that no end user would really want to use (yet).

Alternative?

While writing this post I came across the blog post Stop using story points. It looks like I'm not the only one frustrated with this, and I'm a little late to the game! I have to say I love the concept of estimating in Apples and Oranges. I also love the idea of variable length iterations. I wish I could claim that idea, but alas I cannot!

Flexible iterations are probably the alternative I will seek from here on. One of the biggest advantages is that you define what you're going to ship rather than letting your iteration define it for you. Pick something small and manageable, like 2 or 3 features (depending on your feature sizes), that will complete a few pages or give you a functional service. This doesn't mean you have to stop using points. If they are something you're already comfortable with then keep using them. You can also still calculate velocity, but perhaps as a "weekly" velocity so that you can easily calculate your iteration lengths. For example, if your team averages 10 points a week and the shippable slice you've picked adds up to 25 points, plan for a two and a half to three week iteration. The point is: pick something shippable you will complete and then estimate the iteration around it.

Basically, what I'm suggesting is: let's get back to being agile. Let's not taint such a great methodology with unagile practices and, as a result, give it a bad name.


Friday, 3 October 2014

Web API - Adding Xml Doc Comments to help files

One of the beautiful features of Web API is that you get an out of the box set of help pages documenting your API and the calls that are available. You will see this "HelpPage" section under "Areas" of your project. One thing I wanted to do to extend this is configure the help pages to read the Xml documentation comments in my code. It's not awfully difficult to achieve - simply follow these steps:

  1. In the file HelpPage/App_Start/HelpPageConfig.cs uncomment the following line:
     config.SetDocumentationProvider(new XmlDocumentationProvider(HttpContext.Current.Server.MapPath("~/App_Data/XmlDocument.xml")));  
    

  2. Open up the project properties for your Web API project and under the Build tab enable the Xml documentation file (set the file path to App_Data\XmlDocument.XML). Behind the scenes this just sets a property in the .csproj, as shown in the sketch below.
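
If you prefer to edit the .csproj directly, that checkbox simply adds the DocumentationFile MSBuild property to the relevant build configuration; a minimal sketch of what the fragment might look like (the exact PropertyGroup condition will depend on your own configurations):

 <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">  
  <DocumentationFile>App_Data\XmlDocument.XML</DocumentationFile>  
 </PropertyGroup>  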

This gives you the basic setup to get the XML comments outputting to your Help Pages. But how about making it look sexier? For example, let's say I had the following XML comment I wanted to display:
 /// <summary>  
 /// Get a list of products within the given product kit.  
 /// </summary>  
 /// <param name="id">Id of the parent product kit</param>  
 /// <returns>This api method returns a list of products in JSON format. To get a list of products in XML format add the following HTTP header:<br />  
 /// <br />  
 /// Accept: application/xml<br />  
 /// </returns>  

This will strip out the HTML tags and output on a single line like this:



To allow breaks in our comments we need to make a couple of changes. Firstly, we need to change how the XmlDocumentationProvider retrieves the tag value from the XML comment. Find the GetTagValue function in that class and change it to return the inner XML of the node, as follows:
     private static string GetTagValue(XPathNavigator parentNode, string tagName)  
     {  
       if (parentNode != null)  
       {  
         XPathNavigator node = parentNode.SelectSingleNode(tagName);  
         if (node != null)  
         {  
           return node.InnerXml; // changed from node.Value so embedded tags survive  
         }  
       }  
       return null;  
     }  

Next, we need to modify the display templates that render these tags so that they show the raw value instead. The display templates you will need to modify are ApiGroup.cshtml and HelpPageApiModel.cshtml, located under HelpPage/Views/Help/DisplayTemplates. Find any of the following lines:
 <p>@controllerDocumentation</p>  

 <p>@api.Documentation</p>  

 <p>@description.ResponseDescription.Documentation</p>  

And replace them with the equivalent as per this code:
 <p>@Html.Raw(controllerDocumentation)</p>  

 <p>@Html.Raw(api.Documentation)</p>  

 <p>@Html.Raw(description.ResponseDescription.Documentation)</p>  

This should cover all the areas where you want to add breaks or HTML formatting. For example, the description in our example above should look much better now:

Thursday, 25 September 2014

Unit testing and Autofac

Today I'm going to write a little about unit testing while using a dependency injector. The particular DI used as part of this post is Autofac, but this applies to most DI/IOC solutions out there.


Autofac

One of the nice things about Autofac is how the modules are structured. If we look at a basic setup of Autofac within Web API it might look something like this:

 var builder = new ContainerBuilder();  
 builder.RegisterModule(new WebApiIocModule());  

 IocProxy.Container = builder.Build();  
 var resolver = new AutofacWebApiDependencyResolver(IocProxy.Container);  
 GlobalConfiguration.Configuration.DependencyResolver = resolver;  
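
As an aside, the IocProxy class referenced here isn't shown in this post; it's essentially just a static holder for the built container so that other code (and, later on, the test projects) can get at it. A minimal sketch of what it might look like - an assumption on my part, not the original code:

 using Autofac;  

 // Assumed shape: a simple static wrapper around the built container.
 public static class IocProxy  
 {  
   public static IContainer Container { get; set; }  
 }  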

The part I'd like to highlight is the "RegisterModule" function. This allows me to register a class in the project that is responsible for building up the IOC container with the correct types. For the sake of this example, it might look as simple as this:

 public class WebApiIocModule : Module  
 {  
   protected override void Load(ContainerBuilder builder)  
   {  
     if (builder == null)   
       throw new ArgumentNullException("builder");  
 
     builder.RegisterApiControllers(Assembly.GetExecutingAssembly());  
     //Cascade  
     builder.RegisterModule(new DomainIocModule());  
   }  
 }  

And our DomainIocModule might be a little more complex, like this:

 public class DomainIocModule : Module  
 {  
   protected override void Load(ContainerBuilder builder)  
   {  
     if (builder == null)  
       throw new ArgumentNullException("builder");  

     builder.RegisterAssemblyTypes(typeof(IProductService).Assembly)  
       .Where(t => t.Name.EndsWith("Service") || t.Name.EndsWith("Mapper"))  
       .AsImplementedInterfaces()  
       .PropertiesAutowired()  
       .InstancePerLifetimeScope();  

     //Cascade  
     builder.RegisterModule(new DataAccessIocModule());
   }  
 }  

The DomainIocModule lives in our domain layer, which gives us an easy way to maintain the separation between our data and web layers without the DI/IOC container breaking it.


Resolving Types by configuration using Autofac

The most common uses for resolving by configuration are plugins, or when you need to allow for "hot swapping" of a particular type. To give you an example, let's say your Web API will be deployed to different servers, each with a different back end system. The data sources for your API will vary from SQL Server to CRM systems. For this reason you have a generic "DataAccess" layer, but you need to plug in a different implementation depending on where the API is running. What you can do in Autofac is use an Xml configuration to register your modules, like this:

Web.Config

 <configuration>  
  <configSections>  
   <section name="autofac" type="Autofac.Configuration.SectionHandler, Autofac.Configuration" />  
  </configSections>  

  <snip...>  

  <autofac>  
   <modules>  
    <module type="DataAccess.Sql.Ioc.SqlIocModule, DataAccess.Sql" />  
    <module type="DataAccess.Crm.Ioc.CrmIocModule, DataAccess.Crm" />  
   </modules>  
  </autofac>  
 </configuration>  

Data Access Module

 public class DataAccessIocModule : Module  
 {  
   protected override void Load(ContainerBuilder builder)  
   {  
     if (builder == null)  
       throw new ArgumentNullException("builder");  
     builder.RegisterModule(new ConfigurationSettingsReader("autofac"));  
   }  
 }  

All you require for the above to work is the Autofac Configuration dll/package.
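
To make the hot swap concrete, here is a hedged sketch of a consumer. ProductsController, IProductRepository and GetAll are illustrative names of my own, not from the original project; the point is simply that the controller depends on an interface, and whichever module the <autofac> section lists (SQL or CRM) decides which implementation gets injected:

 using System.Collections.Generic;  
 using System.Web.Http;  

 // Hypothetical interface, implemented by both the SQL and CRM data access layers.
 public interface IProductRepository  
 {  
   IEnumerable<object> GetAll();  
 }  

 public class ProductsController : ApiController  
 {  
   private readonly IProductRepository _repository;  

   // Autofac injects whichever implementation the configured module registered.
   public ProductsController(IProductRepository repository)  
   {  
     _repository = repository;  
   }  

   public IEnumerable<object> Get()  
   {  
     return _repository.GetAll();  
   }  
 }  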


Resolving all of this in a Unit Test project

You'll start running into problems pretty quickly when testing if you don't do some form of configuration. This could be as complex as writing a specific testing module to override/mock up your classes, or it could be as simple as just enabling Autofac so that it creates all objects as per your code base. There are a few different ways to solve this problem, but what I usually do is create a base class for my tests so that they can all benefit from the Autofac setup. It will look something like this:

 public class TestBase  
 {  
   public TestBase()  
   {  
     IocProxy.Container = TestsIocBuilder.Build();  
   }  
 }  
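
A test fixture can then simply inherit from this base class and resolve whatever it needs straight out of the container. A hedged sketch, assuming NUnit and reusing the IProductService interface registered by the DomainIocModule earlier:

 using Autofac;  
 using NUnit.Framework;  

 [TestFixture]  
 public class ProductServiceResolutionTests : TestBase  
 {  
   [Test]  
   public void Can_resolve_the_product_service()  
   {  
     // The container was built by the TestBase constructor.
     var service = IocProxy.Container.Resolve<IProductService>();  
     Assert.IsNotNull(service);  
   }  
 }  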

My TestsIocBuilder looks pretty much identical to the code in the very first snippet, apart from the fact that it registers a class called "TestsIocModule" instead. My TestsIocModule looks something like this:
 public class TestsIocModule : Module  
 {  
   protected override void Load(ContainerBuilder builder)  
   {  
     if (builder == null) throw new ArgumentNullException("builder");  
     //Cascade  
     builder.RegisterModule(new DomainIocModule()); 
   }  
 }  

That's pretty much it really. The DomainIocModule will cascade down for you so everything will be resolved as per your standard Autofac configuration.
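
For completeness, the TestsIocBuilder itself isn't shown in this post. A minimal sketch of what it might look like, mirroring the very first snippet as described above - again an assumption, not the original code:

 using Autofac;  

 public static class TestsIocBuilder  
 {  
   public static IContainer Build()  
   {  
     var builder = new ContainerBuilder();  
     builder.RegisterModule(new TestsIocModule());  
     return builder.Build();  
   }  
 }  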


What about my Xml Configuration!

The last step we need to think about is the Xml configuration for the plugins. This doesn't get read by the testing project, so we need to add an application configuration (App.config) to our test project instead. It will contain exactly the same autofac configuration as the Web.Config, so you can pretty much copy it from above.


Some of my tests fail... but only if I run ALL my unit tests!!!

This can happen when you have dodgy Autofac config in one of your testing libraries. What happens is that the first test library to run registers all of your Autofac modules, and this may cause subsequent tests to fail. For example, let's say I had an integration test project and a unit test project, and by accident I mistyped a module in one configuration file like this:

 <configuration>  
  <configSections>  
   <section name="autofac" type="Autofac.Configuration.SectionHandler, Autofac.Configuration" />  
  </configSections>  

  <snip...>  

  <autofac>  
   <modules>  
    <module type="DataAccess.Sql.Ioc.SqlIocModule, DataAccess.Slq" />  
    <module type="DataAccess.Crm.Ioc.CrmIocModule, DataAccess.Crm" />  
   </modules>  
  </autofac>  
 </configuration>  

But in the other config you have correctly named the module. This can cause you major confusion, mainly because you might be happily writing and running unit tests that pass first time, but as soon as you run all the tests together they start to fail! Be mindful of this when setting up IOC/DI across multiple test projects.


But I want to mock out my classes!!!

Enabling Autofac doesn't change how you mock. It just saves you setting up the objects that Autofac would normally build for you. In fact, it gives you a very easy way to swap out your data layer without having to write mocks at all - you could write a full in-memory Sql data layer and attach it using an Xml config module instead. If you need to mock out your other classes you can still do so, as the constructors didn't suddenly disappear! Simply set up the class you want to test and pass in the mocked types as you normally would.
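
For example, a hedged sketch of a hand-rolled test that skips the container entirely. Moq and NUnit are assumed, and ProductService, its constructor and GetProducts are illustrative names rather than anything from the original code base (IProductRepository is the hypothetical interface sketched earlier):

 using System.Collections.Generic;  
 using Moq;  
 using NUnit.Framework;  

 [TestFixture]  
 public class ProductServiceMockTests  
 {  
   [Test]  
   public void GetProducts_calls_the_repository()  
   {  
     // The dependency is mocked by hand - no Autofac involved at all.
     var repository = new Mock<IProductRepository>();  
     repository.Setup(r => r.GetAll()).Returns(new List<object>());  

     var service = new ProductService(repository.Object);  
     var result = service.GetProducts();  

     Assert.IsNotNull(result);  
     repository.Verify(r => r.GetAll(), Times.Once());  
   }  
 }  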

Thursday, 4 September 2014

Dynamics CRM - Find all Forms using a JavaScript resource

Today I had an interesting task on my hands: finding all the forms that use particular JavaScript libraries within CRM. It got me thinking - there has to be a better way than opening up every form in the system to see what it references. The result was a SQL script that checks the Form XML in the CRM database:

 select ELV.Name EntityName, SF.*  
 from dbo.SystemForm SF  
 inner join [EntityLogicalView] ELV on ELV.ObjectTypeCode = SF.ObjectTypeCode  
 where formxml like '%libraryName="My JavaScript Library Name"%'  


Wednesday, 25 June 2014

EntityState must be set to null, Created (for Create message) or Changed (for Update message)

If you use Linq to retrieve entities from CRM quite a lot, you may have come across this problem. In short, what the above error means is that you got the entity from an OrganizationServiceContext (i.e. Linq), but tried to update it using an OrganizationServiceProxy instead. There are a few ways you can make this mistake; for example, take the following code snippet:

OrganizationServiceProxy _serviceProxy;
... <snipped connection code> ...
using (var svcContext = new OrganizationServiceContext(_serviceProxy))
{
    var query = from a in svcContext.AccountSet
                where a.Name.Contains("Contoso")
                select a;
    var account = query.First();
    account.SomeField = "SomeValue";
    _serviceProxy.Update(account);
}

As you can see, we use the service context when retrieving, but update using the service proxy. CRM doesn't like this and will throw the above error. There are a few ways to get around this, so let me explore each.

1. Update via the context.

using (var svcContext = new OrganizationServiceContext(_serviceProxy))
{
    var query = from a in svcContext.AccountSet
                where a.Name.Contains("Contoso")
                select a;
    var account = query.First();
    account.SomeField = "SomeValue";
    svcContext.UpdateObject(account);
    svcContext.SaveChanges();
}

Simple and straightforward. If you are reading via the context, just update via the context as well. This is the correct way to do it if you are staying within the scope of the service context.

2. Update the EntityState manually


There is a little nuance in how you must do this, but it might prove useful if you are outside the scope of the service context:
Account account;
using (var svcContext = new OrganizationServiceContext(_serviceProxy))
{
    var query = from a in svcContext.AccountSet
                where a.Name.Contains("Contoso")
                select a;
    account = query.First();
}
... <snipped other stuff that might happen> ...
account.SomeField = "SomeValue";
account.EntityState = null; // Or you can set it to EntityState.Changed
_serviceProxy.Update(account);

You'll notice that we set the "EntityState" to null (or Changed) outside the scope of the ServiceContext using block. If you try to do this within the using scope, like this:

Account account;
using (var svcContext = new OrganizationServiceContext(_serviceProxy))
{
    var query = from a in svcContext.AccountSet
                where a.Name.Contains("Contoso")
                select a;
    account = query.First();
    ... <snipped other stuff that might happen> ...
    account.SomeField = "SomeValue";
    account.EntityState = null; // Or you can set it to EntityState.Changed
    _serviceProxy.Update(account);
}

you'll get the following error:

The entity is read-only and the 'EntityState' property cannot be modified. Use the context to update the entity instead.

The reason you get this problem is that the ServiceContext adds the entity to a change tracking list, which flags the entity as read only. Once you move outside the scope of the using statement the ServiceContext is disposed and, as a result, it clears all tracked changes and the entity tracking list. This operation reverts the read only flag on the entity, thus allowing you to set the EntityState flag. The issue it leaves behind is that the EntityState flag is still set to "Unchanged", which makes it "un-update-able" - a side effect I consider a bug within CRM. (Note: I have not tested this in CRM 2013 yet to see if it was fixed, but it existed as a bug in 2011.)

3. Create a new object


This is my preferred method when updating any entity. Why send the entire object back to CRM when you can send just the attributes you want to update? This method looks like the following:
Account account;
using (var svcContext = new OrganizationServiceContext(_serviceProxy))
{
    var query = from a in svcContext.AccountSet
                where a.Name.Contains("Contoso")
                select a;
    account = query.First();
}
... <snipped other stuff that might happen> ...
var accountForUpdate = new Account {
        Id = account.Id,
        SomeField = "SomeValue"
    };
_serviceProxy.Update(accountForUpdate);

This is far more efficient and has the added benefit of not needlessly triggering plugins. If you send the entire entity back, any plugin with an attribute filter on a field you happen to include will still fire; setting only the attributes you actually need posts just those to CRM and also makes the update message smaller.

Happy CRM'ing!

Wednesday, 19 February 2014

Dynamics CRM - Enabling/Disabling Fields using Javascript - Common pitfalls

One of the great things about CRM is what you can achieve with some JavaScript. A very common use of this is making decisions on what fields should be disabled on a form when driven by more complex business rules. But with this power we need to take care! There are a few gotchas you need to be mindful of when using JavaScript to enable/disable fields.
  1. Avoid invalid states by employing an onload script.
  2. Be mindful of breaking workflow development
  3. This is UI side scripting, so can be bypassed by directly hitting the SDK.
Let's build an example to show and then fix these pitfalls.

Note that I am still working in the world of Dynamics CRM 2011 so most screen shots and comments will relate primarily to that environment.


The Requirement

We have 2 new fields on the contact form that need some JavaScript to help them function correctly:

  • First is an Option Set called "Occupation" with the choices 
    • 1 - Sales Manager
    • 2 - Computer Programmer
    • 3 - Administrative
    • 4 - Other
  • Second field is a text box called "Other Occupation" that should only be enabled when they select "Other" from the option set
  • "Other Occupation" is a required field if they select "Other" from the option set


The Code

Firstly, let's customize the form and disable the field by default (hint: this causes our first bug):



Now let's create a new script to run off the on change event of the new optionset. Our JavaScript function should look like this:


function setOtherEnabled() {
    // Option set value 4 = "Other"
    if (Xrm.Page.getAttribute("new_occupation").getValue() == 4) {
        Xrm.Page.getControl("new_otheroccupation").setDisabled(false);
        Xrm.Page.getAttribute("new_otheroccupation").setRequiredLevel("required");
    } else {
        Xrm.Page.getControl("new_otheroccupation").setDisabled(true);
        Xrm.Page.getAttribute("new_otheroccupation").setRequiredLevel("none");
    }
}


Testing

Once this has all been linked up to the on change event of our occupation option set we can give this a quick test. Changing to "Other" produces the correct result:


Looks good so far. Let's type in an occupation and save it. This looks fine, until we check how the form reloads it:


So there's a problem. When the form reloads it disables this field by default, but doesn't recognise that the Occupation has been changed to "Other".


Tip 1


When using JavaScript to enable/disable fields on a form, dependent on other field values, always use a form onload event to initialise fields to their correct state.


The Code (V2)


function onLoad() {
    setOtherEnabled();
}


function setOtherEnabled() {
    if (Xrm.Page.getAttribute("new_occupation").getValue() == 4) {
        Xrm.Page.getControl("new_otheroccupation").setDisabled(false);
        Xrm.Page.getAttribute("new_otheroccupation").setRequiredLevel("required");
    } else {
        Xrm.Page.getControl("new_otheroccupation").setDisabled(true);
        Xrm.Page.getAttribute("new_otheroccupation").setRequiredLevel("none");
    }
}

Hook up that onLoad function to the form onload event. Now refresh the contact page and it's looking much better.


New Requirement!

As with all projects, requirements will change and evolve with the business, so let's introduce something new. For the purpose of this example, let's say people keep typing in "HR" and we wish to make sure it is entered as "Human Resources". Our developers are busy, so we are going to write a workflow to update the Other Occupation field while we wait on a plugin. In CRM 2013 this could be handled differently, but for now let's presume we're stuck in the dark ages of CRM 2011!

Create a new workflow that fires on update of a contact and sets the Other Occupation field... and this is where we hit our next gotcha...



We cannot update the "Other Occupation" field because it appears as disabled in the workflow editor. This is because we disabled it directly on the form by default.


Tip 2

Never disable fields directly via a form customisation if you want to access them via workflows. Use a JavaScript onload event instead.


Fixing it up

Thankfully, we already put in an onload event to fix the first bug (to initialise the state of the new field), which means the JavaScript onload event is already firing for this. So we don't need to disable the field by default on the form - the onload will do it for us. All that is left to fix the issue is to undo the first customisation we performed, as per this screenshot:




Wrapping up

The last point we need to be aware of is that, because this is JavaScript, the validation of it being a required field only happens at the UI level. In general this is fine, because it is consistent with the standard CRM field level requirements (you can bypass all required fields if you create records directly via the SDK). So generally I don't bother implementing any server side validation to catch these types of fields.

In summary, always be aware of the most important gotchas when using JavaScript to control fields in this manner:
  1. Use onload events to initialise field behaviour.
  2. Don't default field behaviour by making customisations directly on the form if you might need to work with the fields in workflows.

Tuesday, 21 January 2014

CRM 2013 - Where has my advanced find gone?!?

One of my biggest bugbears with CRM 2013 is the disappearance of Advanced Find! From what I can fathom, CRM now only shows the Advanced Find button on certain views or on dashboards. So my tip of the day is how to find it.

If you're currently on a page that does not have the Advanced Find button (it doesn't seem to exist on the list of Competitors, Business Management, or in fact most settings pages), then the quickest way to launch Advanced Find is as follows:

1. If your home page is set to Dashboards click on home and you'll see the Advanced Find button

2. If your homepage is a grid (that has Advanced Find available) then simply click on the Home button and then click on the ellipsis button (three dots) to locate Advanced Find.

3. If all else fails, click on Microsoft Dynamics CRM -> Sales and then navigate to Dashboards. Here you will find a link to advanced find.

Can't believe I just wrote this post. Microsoft, will you please make the Advanced Find available on all screens again!

Monday, 6 January 2014

Repository patterns - Fighting unclean code

Every company you work for is different. Different developers, different styles, different techniques. But one thing I find consistent across almost every place I have worked is a constant battle to maintain clean code, and some places are worse than others. I have come across this particularly recently, and no matter how many times I offer up tips the unclean code seems to creep in. Because I'm working with some really decent guys who just have a lot of bad habits, I am struggling to find the right side of that fine line between sounding like a grumpy, picky developer and just putting up with it!

I'm not just talking about different styles here, so I'll give you an example. For our plugin development we use the repository pattern, so the following type of code is used quite often:

public class AccountRepository
{
    IOrganizationService _service;

    public AccountRepository(IOrganizationService service)
    {
        _service = service;
    }

    /*...*/
}



This is fine, more or less. The 1 to 1 mapping between entities and repository classes isn't ideal, but it has worked well for us so far. Unfortunately some of the developers don't fully understand what the repository pattern really is and what it should and shouldn't know about or be able to do. Recently I spotted that the following code had been checked in some time back:

public class AccountRepository
{
    IOrganizationService _service;
    Entity Account { get; set; }

    public AccountRepository(IOrganizationService service)
    {
        _service = service;
        Account = null;
    }

    public AccountRepository(IOrganizationService service, Entity Account)
    {
        _service = service;
        this.Account = Account;
    }
    /*...*/
}

The problem this introduces is that my repository is now aware of a specific account record/entity. This immediately points the finger at the code breaking SRP (the Single Responsibility Principle).

Let's refresh our memory on what the repository pattern is. Put simply, the repository should act as a data layer, a mediator between our business logic, our domain objects and our data source. It should know how to take a query and provide a resulting set of domain objects. It might also be able to add, remove or even update objects. To perform these operations it should be aware of how to connect to the data source and how to map the result of a query to domain objects, and it should make that list of objects available to the caller. In some instances you may move the "data mapping" logic out into its own classes, but in Dynamics CRM the Organization Service already provides you with Entity objects (or strongly typed objects if you prefer), so you may not need a separate data mapping abstraction.
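
To make that concrete, a clean query method on the repository might look something like the sketch below. The method name and criteria are illustrative, not from our actual code base; it assumes the early-bound Account type and the usual CRM SDK namespaces:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class AccountRepository
{
    IOrganizationService _service;

    public AccountRepository(IOrganizationService service)
    {
        _service = service;
    }

    // Criteria in, domain objects out - the repository knows nothing else.
    public List<Account> GetAccountsByName(string name)
    {
        var query = new QueryExpression(Account.EntityLogicalName)
        {
            ColumnSet = new ColumnSet(true)
        };
        query.Criteria.AddCondition("name", ConditionOperator.Equal, name);

        return _service.RetrieveMultiple(query).Entities
            .Select(e => e.ToEntity<Account>())
            .ToList();
    }
}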

Now, going back to the code above, introducing an existing record in the constructor adds an extra capability to the repository: it is now record aware before you even run a query. This breaks SRP and most likely some other principles too. You'll find OCP (the Open Closed Principle) hanging by a very fine thread as a result! I am guessing that further down in the code (here be demons...) you will most likely see references to this object and decisions being made based on the object and its attributes. Oh you naughty, naughty developers! What were you thinking!

So, what do I do? Refactor a load of methods I didn't develop and spend half a day fixing other people's code? Send out a tip (yet another one!) saying why this is bad and look like Mr Grumpy (yet again!)? Or do I ignore it and hope it goes away in time...

Decisions decisions!