Update a User’s Business Unit and retain their Security Roles

If you have ever had to change the Business Unit of a User then you will know that the change causes their security roles to be dropped, so you need to ensure the security roles are reassigned. This is mildly frustrating when you have one or two Users to update, but what happens when you have to update all Users in your organisation?

I recently encountered a scenario where the organisation needed to completely remodel their Business Unit structure and move the Users to the new Business Units; the prospect of doing this manually filled me with dread.

I’ve done a bit of research and seen methods that others have used (see CRM Tip of the Day 1134 for a good example), but I’m a massive North52 nerd so I felt sure there would be a way I could achieve this dynamically using this tool.

The Scenario

In the scenario, I had ~800 Users in 4 existing Business Units, while the new Business Unit structure comprised 7 Business Units. This was further complicated because the Users could have anywhere between 3 and 14 Security Roles (don’t ask…). We needed to be able to assign them to the new BU and ensure their Security Roles were reassigned when their BU was changed.

The Setup

The first step in setting up was to add an Option Set field to the User entity to hold the name of the Business Unit we would be moving the User to.

The next step was to create a FetchXML query to get the Security Roles assigned to a User, which would be called from within the North52 formula. The FetchXML expression I used is:

 <fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="true">
  <entity name="role">
    <attribute name="name" />
    <attribute name="businessunitid" />
    <attribute name="roleid" />
    <order attribute="name" descending="false" />
    <link-entity name="systemuserroles" from="roleid" to="roleid" visible="false" intersect="true">
      <link-entity name="systemuser" from="systemuserid" to="systemuserid" alias="ad">
        <filter type="and">
          <condition attribute="systemuserid" operator="eq" value="@systemuserid@" />
        </filter>
      </link-entity>
    </link-entity>
  </entity>
 </fetch>

The important thing to note in the FetchXML expression is the value in the condition searching for the systemuserid: @systemuserid@. North52 uses the @…@ tags to look up the value of the field whose schema name is enclosed in the tags, so it substitutes the GUID of the User record the formula is triggered on.
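As an illustration, the substitution behaves like a simple token replacement. The Python helper below is hypothetical (North52 does this internally); it just shows the effect:

```python
import re

def substitute_tokens(fetch_xml: str, record: dict) -> str:
    """Replace each @field@ token with the value of that field on the
    triggering record, mimicking North52's substitution behaviour."""
    return re.sub(r"@(\w+)@", lambda m: str(record[m.group(1)]), fetch_xml)

condition = '<condition attribute="systemuserid" operator="eq" value="@systemuserid@" />'
user = {"systemuserid": "0f9a0c1e-0000-0000-0000-000000000001"}

# The value attribute now contains the User's GUID
print(substitute_tokens(condition, user))
```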

The Formula

The North52 Formula is actually relatively straightforward:




      SetVar('BusUnitID',
        Case( ... ,
          When(150000000), Then(FindValueQuickId('businessunit','Actuarial & Benefits')),
          When(100000000), Then(FindValueQuickId('businessunit','Business Support Unit')),
          When(100000001), Then(FindValueQuickId('businessunit','Commercial Group')),
          When(100000002), Then(FindValueQuickId('businessunit','Insights & Analytics')),
          When(150000001), Then(FindValueQuickId('businessunit','Investment')),
          When(100000003), Then(FindValueQuickId('businessunit','Life & Financial Services')),
          When(150000003), Then(FindValueQuickId('businessunit','Third Party Administration')),
          Default(FindValueQuickId('businessunit','Business Support Unit')))),

      ForEachRecord(FindRecordsFD( ... ),
        SetVar('Role' + RecordIndex(), FindValue('role','roleid',currentrecord('roleid'),'name','?','true')),
        SetVar('Roles', RecordTotal())),

      SetAttributeLookup('businessunitid', 'businessunit', GetVar('BusUnitID')),

      DoLoop(GetVar('Roles'),
        AssociateEntities( ... ,
          SetFindAnd(GetVar('BusUnitID'), GetVar('Role' + DoLoopIndex())),
          ... ))



I’ll try to explain some of the key elements of this N52 Formula below:

Case: The Case function is used to check the value in the new Business Unit Option Set field, and it sets a variable to hold the ID of the new Business Unit (‘BusUnitID’) so we can use it later in the formula.

ForEachRecord: ForEachRecord is a looping function, and it iterates through the output of the FetchXML query we created above using the FindRecordsFD function and carries out the actions specified. In this instance I have two actions:

  1. Create a Variable to hold the Name of the Security Role (‘Role’). The RecordIndex() function outputs an integer with the number of the current loop, so I’ve used it to create iterative Variables (i.e. each loop of the ForEachRecord function will create a new Variable: Role1, Role2, Role3, etc.)
  2. Create a Variable (‘Roles’) to hold the total number of loops we’re running, which will be used later in the Formula

DoLoop: DoLoop is another looping function that allows us to build an iterative function to complete actions. We use the variable we created in the ForEachRecord step (‘Roles’), which holds the total number of loops, to specify how many iterations of the DoLoop function to carry out.

For each role we will carry out an AssociateEntities function to associate the User to the Security Role. To find the right Security Role(s) to associate we do a FindValue function and use SetFindAnd to enable us to specify multiple input parameters that must be met. In this case we want to find Security Roles using the following criteria:

  1. The businessunitid of the new Business Unit we’ve updated on the User Record, using the ‘BusUnitID’ variable we set at the start of the formula
  2. The Security Role name, which we retrieve by getting the Variable using the DoLoopIndex() function. This outputs an integer with the current loop number, just as the RecordIndex() function did when we set the Variable names in the ForEachRecord function.
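For anyone who thinks more naturally in code, the set-and-get pattern above maps onto a loop over indexed names. A rough Python analogue (the dictionary, names and sample roles are all illustrative, not North52 itself):

```python
# A dictionary stands in for North52's SetVar/GetVar variable store.
variables = {}

user_roles = ["Salesperson", "Scheduler", "Basic User"]  # sample FetchXML output

# ForEachRecord: store each role under an indexed name (Role1, Role2, ...)
for index, role in enumerate(user_roles, start=1):
    variables[f"Role{index}"] = role          # SetVar('Role' + RecordIndex(), ...)
variables["Roles"] = len(user_roles)          # SetVar('Roles', RecordTotal())

# DoLoop: iterate the stored count and read back each indexed variable
for index in range(1, variables["Roles"] + 1):
    role_name = variables[f"Role{index}"]     # GetVar('Role' + DoLoopIndex())
    print(f"Associate role: {role_name}")
```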


This relatively straightforward formula allowed me to dynamically update all of my Users to their new Business Units and to ensure their security roles were applied properly. It saved me a huge amount of time over a manual approach, and hopefully has demonstrated some of the capabilities of the North52 solution.

I am sure I will be able to reuse the functionality of setting and getting a dynamic number of variables using ForEachRecord and DoLoop functions in this way, but I’d love to hear from others if they can think of any other scenarios in which this could be applied, so please feel free to reach out!

Postcode Region Mapping via Workflow

I recently delivered a session at Dynamics 365 Saturday Scotland covering some advanced functionality you can implement in your Dynamics 365 environment using free custom workflow activities.

To read my thoughts on #D365SatSco and how amazing it was, see the article I posted on LinkedIn

One of the scenarios I covered in my session was looking at how we can carry out regional analysis of our Accounts using workflows, and I’ve outlined my solution below.

The Scenario

For this scenario I wanted to check if the postcode entered for the address on an Account was valid and, if so, to extract the outward code and use it to map the Account to its postcode area, locale, sub-region and region.

The Setup

For this scenario I added the following to my environment:

  • A new Entity called Region Mapping, containing
    • An Option Set with 4 options:
      1. Postcode Area
      2. Locale
      3. Sub-Region
      4. Region
    • A hierarchical Parent lookup field
  • Added fields to the Account entity
    • A single line of text field called “Extracted Postcode”
    • 4 lookup fields to the Region Mapping entity (one for each Option in the Option Set)

Once this is all created, I imported my dataset, which I derived from data sources from the Office for National Statistics. You can download a copy of my dataset below:

The Workflow

To create my workflow I used tools from two different custom workflow assemblies:

  1. Jason Lattimer’s String Workflow Utilities
    1. Regex Match
    2. Regex Replace with Space
  2. Alex Shlega’s TCS Tools
    1. Attribute Setter

Step 1 – Postcode Verification

In the UK, all postcodes follow standard formats, so it makes it relatively easy to determine if the postcode is valid or not. For my workflow I’m using the Regex Match step, so I need a Regex pattern to use. I wanted to be able to separate out the outward and inward sections of the postcode, so the expression I ended up with is:

((?:(?:gir)|(?:[a-pr-uwyz])(?:(?:[0-9](?:[a-hjkpstuw]|[0-9])?)|(?:[a-hk-y][0-9](?:[abehmnprv-y]|[0-9])?)))) ?([0-9][abd-hjlnp-uw-z]{2})

I am not an expert at Regex, but I am very good at googling! I added this pattern to Regex101, which does a great job of explaining the component parts if you’d like to understand it further.

The output from a Regex Match step will be True or False. If it returns False, you could use a Cancel step in your real-time workflow to display an error message informing the user that their Postcode is not valid.
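If you want to sanity-check the pattern outside the workflow, here is a short Python sketch. It uses the full form of the pattern, with the outward and inward codes in capturing groups 1 and 2, covering two-digit districts such as DT10 and A9A codes such as W1A; the input is lower-cased to match the character classes:

```python
import re

# UK postcode pattern: group 1 is the outward code, group 2 the inward code.
POSTCODE = re.compile(
    r"((?:(?:gir)|(?:[a-pr-uwyz])(?:(?:[0-9](?:[a-hjkpstuw]|[0-9])?)"
    r"|(?:[a-hk-y][0-9](?:[abehmnprv-y]|[0-9])?)))) ?([0-9][abd-hjlnp-uw-z]{2})"
)

def is_valid_postcode(value: str) -> bool:
    # The character classes are lower-case, so normalise the input first.
    return POSTCODE.fullmatch(value.lower()) is not None

print(is_valid_postcode("DT10 1NR"))  # True
print(is_valid_postcode("NOTACODE"))  # False
```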

Step 2 – Extract Postcode Area

As I mentioned above, all UK postcodes follow standard formats, and this is particularly true for the second part of the postcode, which is always one number followed by two letters. To carry out my region mapping I needed to extract the first part of the postcode, so I used the Regex Replace with Space step to replace the second part of the postcode with nothing (0 spaces), in effect deleting it.

From my Regex pattern in the previous step, I used the second capturing group to match the second part of the postcode.


The output from this step leaves us with the first part of the postcode, so we update the Extracted Postcode field on the Account entity with this, and we’ll use that in the next step.
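The same deletion can be sketched in Python. This is not the workflow step itself, just an equivalent substitution for illustration:

```python
import re

# The inward code is always one digit plus two letters at the end of a
# valid postcode, so stripping it (and any preceding space) leaves the
# outward code used for the region mapping.
INWARD = re.compile(r" ?[0-9][abd-hjlnp-uw-z]{2}$", re.IGNORECASE)

def extract_outward(postcode: str) -> str:
    return INWARD.sub("", postcode)

print(extract_outward("DT10 1NR"))  # DT10
```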

Step 3 – Run the Attribute Setter

I’ve previously discussed Alex Shlega’s Attribute Setter, and it’s one of my favourite custom workflow activities. It’s super easy to work with and allows you to dynamically set lookup fields from within your workflow.

The first thing to do is to create a Lookup Configuration with a FetchXML query to find the record you will be setting in the lookup field. For mine, I’ll be looking for the Region Mapping record that matches the extracted postcode. As I’ve discussed before, the magic in the Lookup Configuration is the ability to dynamically pass values to the FetchXML query by putting the schema name of the field that contains the value inside a pair of # marks.

The key part of the FetchXML query is the second condition:

<condition attribute="ryan_name" operator="eq" value="#ryan_extractedpostcode#" />

Because the schema name of my Extracted Postcode field is enclosed in the # marks, whatever value is in that field will be added to my query when it is run by the workflow. The Attribute Setter will output the GUID of the Region Mapping record (i.e. the Fetch Result Attribute) and will set it in the Postcode Area lookup field on the Account (i.e. the Entity Attribute).

Step 4 – Update Account

The final step, now that the Postcode Area has been updated, is to run a child workflow to update the Locale, Sub-Region and Region fields. For each of these fields, we’ll run an Update Record step and select the Parent of the predecessor (i.e. for the Locale we will find the Parent of the Postcode Area field value).


This is a relatively simple approach to allow you to carry out regional segmentation of your Accounts, which can be used for marketing purposes or for reporting.

If you’ve found it useful, or if you have any other ideas then please reach out to me on Twitter or LinkedIn

Custom Rollup fields

Microsoft introduced Rollup Fields to the Dynamics CRM platform in 2015, and this added significant positive functionality to the platform. Rollup fields contain aggregated values that have been calculated from child records for a parent record. Typical examples of this would be total value of Open Opportunities for an Account, or the total number of Emails received for a Contact.

Whilst there are benefits to Rollup Fields, there are also some “gotchas” to be aware of before you implement them in your environment. Jukka Niiranen has outlined some of these on his blog, and some of the other considerations are outlined on the Microsoft site. For me, the two main considerations are:

  1. Rollup fields are asynchronous, so they don’t update in real-time
  2. You can only have a maximum of 100 rollup fields across your organisation, and 10 per entity

With these drawbacks in mind, you may wish to explore alternatives to creating rollup fields.

The Scenario

In my environment, we define our Opportunities in two ways:

  1. Is the Opportunity from a New Client, or is it Account Development with an existing Client?
  2. Which Segment of our business is the Opportunity associated with?

In order to capture this information, we have a custom entity for the Segments with a lookup field on the Opportunity, and an Option Set field for the Opportunity Type.

For each Segment, we wanted to be able to see:

  1. What is the total value of the Estimated Revenue of All Open Opportunities?
  2. What is the total value of the Weighted Revenue of All Open Opportunities?
  3. How many Open Opportunities are there?
  4. What is the total value of the Estimated Revenue of all Open Account Development Opportunities ?
  5. What is the total value of the Weighted Revenue of all Open Account Development Opportunities?
  6. How many Account Development Open Opportunities are there?
  7. What is the total value of the Estimated Revenue of all New Client Open Opportunities?
  8. What is the total value of the Weighted Revenue of all New Client Open Opportunities?
  9. How many New Client Open Opportunities are there?

This is easily achievable using standard Rollup Fields, however it would require 9 rollup fields: 9 of the 10 allowed on a single entity, and a significant chunk of the 100 allowed across the organisation.

The Solution

One of my favourite things about the Dynamics community is the amount of free content that contributors have made available to help others solve problems. In this case, I’m going to be using the Dynamics 365 Workflow Tools created by Demian Raschkovan

Dynamics 365 Workflow Tools – List of Functions

In this set of Custom Workflow Activities there is an option called Rollup Functions that we will be using. It allows you to use a FetchXML query to define the records to aggregate.

Setting up the Entity

On the entity that you’d like to add the “rollup” fields to (in this case the Segment entity), create Simple fields to capture the data you’ll be summarising. In my case, I have added six Currency fields (for the Estimated Revenue and Weighted Revenue calculations) and three Whole Number fields (for the Quantity calculations)

Creating the Workflow

Create a real-time workflow on the entity you’ll be summarising the data from (in this case the Opportunity entity), and set the trigger to run on Create, or on Update of the Segment, Estimated Revenue or Probability fields.

The first step is to get the GUID of the referenced Segment on the Opportunity, and Demian has a function for that too – Get Record ID. We need the GUID to reference in the FetchXML we’ll be using.

Note: in the description for the Rollup Functions activity it suggests you can use a {PARENT_GUID} tag to pass in dynamic data, but I couldn’t get it to work consistently, so I use the Get Record ID activity to bypass that requirement.

The next step is to add a Rollup Functions step to your workflow, and then define the FetchXML you’ll be using. There are two methods to get your FetchXML, either using the Advanced Find, or using the FetchXML Builder by Jonas Rapp.

I want to find all Open Opportunities against a specific segment, so I add two conditions and ensure the field I will be aggregating is first in the list.

We’ll paste this query into the Workflow Step properties for the Fetch XML and replace the GUID with the output from the Get Record ID step.

We repeat this step for each FetchXML query we want to run. In my case, I’ve ended up with six Rollup Function steps in my workflow. The Rollup Functions can output the Average, Count, Max, Min or Sum of the query.
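The aggregates themselves are simple to picture. Here is a rough Python sketch of what the Rollup Function steps compute over the matching Open Opportunities (the sample data and field names are illustrative only):

```python
opportunities = [  # sample Open Opportunities for one Segment
    {"estimatedvalue": 10000.0, "probability": 50},
    {"estimatedvalue": 25000.0, "probability": 80},
    {"estimatedvalue": 5000.0,  "probability": 20},
]

estimated = [o["estimatedvalue"] for o in opportunities]
# Weighted Revenue = Estimated Revenue x Probability
weighted = [o["estimatedvalue"] * o["probability"] / 100 for o in opportunities]

rollups = {
    "Sum (Estimated Revenue)": sum(estimated),
    "Sum (Weighted Revenue)": sum(weighted),
    "Count": len(opportunities),
    "Max": max(estimated),
    "Min": min(estimated),
    "Average": sum(estimated) / len(estimated),
}
print(rollups)
```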

The final step of the Workflow is to update the Segment record, setting the fields with the Rollup Function outputs.

The workflow is now ready and will provide you with real-time rollup functions to count the values in your pipeline.

Updating the Previous Segment

Ah, what’s that I hear you say, we’re not done yet? Well, you’re right of course. The above is all well and good, but changing the Segment means the Opportunity is double-counted in the pipeline analysis, unless we change the previous segment too, so how do we do that?

You need to create a second lookup to the Segment entity on the Opportunity to hold the Previous Segment. Then create another real-time workflow on the Opportunity, triggered Before the Segment field changes, that copies the value in the Segment field to the Previous Segment and then triggers a Child Workflow to update the Previous Segment. The Child Workflow will be a copy of the one we created above, however it will need to be a background workflow to ensure it captures the data correctly.


This offers a viable alternative to the OOTB rollup fields. It should be noted that using real-time workflows can add performance overhead to your environment, so consider what is best for your particular deployment. As ever, using third-party tools is at your own risk, so ensure it has been tested thoroughly before you consider deploying it.

Excel Project Plan

This isn’t strictly a CRM/D365 post, but I think it could provide some assistance for planning CRM related projects, so I thought I’d share.

Any good CRM project, whether that be a new deployment or a small change, requires planning to ensure it is effective; the 5 P’s cliché “Proper Preparation Prevents Poor Performance” exists for a reason.  I am aware that plenty of people use Microsoft Project or use D365 Project Service Automation (if you want to learn more about this I’d highly recommend reading Antti Pajunen’s excellent blog posts about PSA), however I am also aware, from my experience of working in small companies, that the licence costs for these products can be prohibitive.

A Simple* Solution

Any company that utilises the Microsoft Office technology stack as part of their business will have access to Excel, and therefore they’ll be able to utilise the vast array of templates that Microsoft have made available to help them in their business.  I’ve used many of them in the past, and continue to do so today.

There are many Project related templates available for Excel, and I recently saw the Agile Gantt Chart template.  This template is great because it provides a decent foundation for a Gantt chart, but there are a number of areas in which I felt it was lacking, so I’ve modified it to try and make it more suitable for my purposes.

My Template

My concerns with the template available from Microsoft are:

  1. There is no ability to automatically schedule task completion dates
  2. There is no ability to include predecessors for tasks
  3. There is no ability to effectively resource manage tasks

With all of this in mind, I thought it would be a fun task to see if I could implement some improvements.

Project Plan Template

I’ve included a link below to download my version of the template.  The key features I’ve added are as follows:

Activities are added by:

  1. Selecting a Component from the drop-down selector in the Component column
    1. The Component drop-down is populated from the Component Column in the High-Level Summary Dates table on the Project Summary worksheet
  2. Manually inputting an Activity description in the Activity Column
  3. Selecting a Task from the drop-down selector in the Task column
    1. The Task drop-down is populated from the Task Column in the Mid-Level Summary Dates table on the Project Summary worksheet
  4. Selecting a Category from the drop-down selector in the Category Column
    1. Goal marks the Activity with a Goal marker on the Gantt chart
    2. Milestone marks the Activity with an Activity flag on the Gantt chart
    3. On-Track, Low Risk, Med Risk and High Risk format the cells on the Gantt chart in accordance with the format on the Legend at the top of the sheet


Start Dates are calculated as follows:

  1. Each Activity starts on the End Date of the preceding Activity in the list, unless:
    1. A Predecessor is selected by inputting the ID of the predecessor in the Predecessor column; and/or
    2. A number of “Lag Days” in working days is input in the Lag Days column
    3. An Actual End Date is entered for either the preceding task or the predecessor


End Dates are calculated as follows:

  1. The estimated effort in Working Days is input into the Effort (Working Days) column
  2. Responsibility for the task is allocated to a person using the drop-down selector in the Responsible column
    1. The Responsible drop-down is populated from Name column in the table on the Project Personnel worksheet
  3. The Task Duration is automatically calculated as the Estimated Effort / Effort Profile (from the Profile column in the Project Personnel Sheet), and is rounded up to the nearest ¼ day
    1. E.g. a task with an estimated effort of 1 day, allocated to a person with an Effort Profile of 50%, would have a Task Duration of 2 days
  4. Any holidays to be accounted for are documented in the Holidays table on the calcs worksheet
  5. The End Date is therefore Start Date + Task Duration (in working days), skipping weekends and any documented holidays
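These scheduling rules can be sanity-checked in code. Below is a rough Python sketch of the duration and end-date calculations (the function names are mine, not the spreadsheet's, and it simplifies by advancing whole days only):

```python
import math
from datetime import date, timedelta

def task_duration(effort_days: float, effort_profile: float) -> float:
    """Estimated Effort / Effort Profile, rounded up to the nearest quarter day."""
    return math.ceil(effort_days / effort_profile * 4) / 4

def end_date(start: date, duration_days: float, holidays=frozenset()) -> date:
    """Advance whole working days from the start, skipping weekends and holidays."""
    current = start
    remaining = math.ceil(duration_days)  # simplification: whole days only
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5 and current not in holidays:
            remaining -= 1
    return current

print(task_duration(1, 0.5))              # 1 day effort at 50% profile -> 2.0 days
print(end_date(date(2018, 8, 6), 2))      # Monday + 2 working days -> 2018-08-08
```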


If you want to use this template, you can download it here: Project Plan Template

Let me know if you find it useful!

Setting a Lookup from a Workflow

One of the limitations of the workflow engine that I have found frustrating for a long time is the inability to dynamically set lookup fields based on the output of a FetchXML query. However, I no longer have to worry, as Alex Shlega has provided the answer to my problems with his TCS Tools Solution.

I’ve used this tool for a few solutions in my environment and, after discussion with my good friend Megan Walker I realised it might be good to share a sample scenario.

The Scenario

There is a web form that is used by visitors to a website to submit queries.  The queries are added to CRM and are all FROM no-reply@company.com.  Within the body of the email is an email address for the submitter, and we need to extract the email address, find the related Contact and set a Lookup field (Regarding) to link the Contact to the Email.

The submitted email body has the following format:

[title] [Mr]
[first name] [Ben]
[last name] [Willer]
[email] [benwil@alliedholdingcompany.co.uk]
[phone] []
[address1] [Mounters]
[address2] [Marnhull]
[address3] [Sturminster Newton]
[address4] [Dorset]
[postcode] [DT10 1NR]
[how did you hear about us?] [Internet search]


The Solution

First things first, you will need to install the TCS Tools solution in your environment.  The link above will take you to Alex’s website to download the solution.  As ever, this is a free third-party tool, so install at your own risk.

Next, you will need to add a Single Line of Text field to your email entity to store the email address we’re going to extract from the body of the email above.  Rather imaginatively, I’ve named mine new_extractedemail.  We’ll need this schema name in the next step.

Create Lookup Configuration

Navigate to the TCS Lookup Configuration entity and create a new lookup configuration as follows:

TCS Lookup Configuration

The Entity Attribute should be the schema name of the lookup field you wish to set with your workflow.  In my case, I’m going to be setting the Regarding field on the Email, so I’ll be using regardingobjectid.

Next we need to create a Fetch XML expression to use in the Lookup Configuation.  The easiest way to do this is to create an advanced find, then download the Fetch XML.  For this one, I’m looking for a Contact where the Email Address equals the submitted email address, so my Advanced Find looks like this:

Create Fetch XML

Note: as you can see above, I’ve set the Email to equal #new_extractedemail#.  The hashtags are used by the TCS Tools solution to replace this value dynamically.

The Fetch XML expression will look as follows:

<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
  <entity name="contact">
    <attribute name="fullname" />
    <attribute name="telephone1" />
    <attribute name="contactid" />
    <order attribute="fullname" descending="false" />
    <filter type="and">
      <condition attribute="emailaddress1" operator="eq" value="#new_extractedemail#" />
    </filter>
  </entity>
</fetch>

Extract the Email Address

In order to be able to use the email address that was submitted above, we need to extract it from the body.  I use Jason Lattimer’s Regex Extract step from his String Workflow Utilities workflow solution.  In order to extract the email address we need to do two Regex Extract steps, as follows:

Step 1: Extract Email Address from Body

Regex 1

The Regex Pattern in this step is (?<=\[email\] )([\s\S]*)(?=\[phone\] )

The pattern essentially looks for any characters in between the [email] and [phone] sections in the email body, and therefore the output from the email above is [benwil@alliedholdingcompany.co.uk].

In order to be able to use this in my workflow, we need to remove the square brackets, so I do another Regex Extract on the output of this step.

Step 2: Extract Email Address from within Square Brackets:

Regex 2

The Regex for this step is (?<=\[)([\s\S]*)(?=\]).  This pattern looks for any content in between the opening square bracket and the closing square bracket, so the output now is benwil@alliedholdingcompany.co.uk.
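The two steps chain together neatly. Here is a Python sketch running the same two patterns against the sample email body from above:

```python
import re

body = """[title] [Mr]
[first name] [Ben]
[last name] [Willer]
[email] [benwil@alliedholdingcompany.co.uk]
[phone] []
"""

# Step 1: everything between the [email] and [phone] labels
step1 = re.search(r"(?<=\[email\] )([\s\S]*)(?=\[phone\] )", body).group(1)

# Step 2: strip the surrounding square brackets from that result
email = re.search(r"(?<=\[)([\s\S]*)(?=\])", step1).group(1)

print(email)  # benwil@alliedholdingcompany.co.uk
```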

Note: I am not a Regex expert, but I have found Regex 101 invaluable in learning and testing my expressions, because it lets you see how the expression works and explains what each element means.

Once we have carried out the Regex steps, we update the new_extractedemail field with the output of the second step:

Update Extracted Email Step

Run the Lookup Setter

Now that we have the email address extracted and available on the Email entity, the last step is to run the TCS Lookup Configuration we created above to set the lookup:

Set Lookup Configuration


The final workflow should look a bit like this:




This functionality is a really powerful addition to the workflow engine, and opens up a whole raft of advanced possibilities for CRM administrators to create workflows to solve complex problems.  I’ve used this internally to map Excluded Emails from ClickDimensions to error codes for the purposes of reporting, and I’m working on additional scenarios that we can use it for.


CRM Development – As easy as making a cup of tea?

Last weekend I attended the Dynamics 365 Saturday Summer Bootcamp in London; this was a great event full of opportunities to network, engage and learn, and I am very grateful to the organisers for putting on the event.  Whilst at the event I was speaking to someone who told me that he was currently working as a business analyst but was considering training to become a CRM Functional Consultant and it got me thinking about the importance of business analysis skills to me in my role.

Always ask “Why?”

Before I became a CRM Consultant I studied Law at university, and I’ve trained as an ISO9001 internal auditor in previous job roles.  This background has provided me with a strong ability to analyse and understand business process, though my biggest asset is probably my incessant asking “Why?”.

I’ve spent countless hours developing solutions for CRM that then go unused by the business after deployment, and I could’ve saved so much of this time if I’d just asked “Why” a bit more.

  • Why do you need this solution?
  • Why are you recording this information?
  • Why does the process work like this?
  • Why can’t we use existing functionality?

The key thing is to make sure you have enough detail so that you have a full understanding of the requirements before you even commence development.  Think of it as a modern version of “measure twice, cut once”.

Creating Process Flowcharts

I have a logical mind, so I like to document all of my processes in flowcharts before I get underway with development.  I find a flowchart helps me to keep my thoughts in check and guides me in my development of system updates.  A good flowchart should be comprehensive but clear; it should provide you with all of the steps in the process and should have a clear, logical path to follow.  Anyone who has used Visio in the past can probably attest to the simplicity of creating flowcharts, though I think there is a bit of an art to making a good flowchart.

In order to demonstrate the level of detail I look for in a flowchart, I thought it would be useful to think of it in terms of a process that everyone can understand – making a cup of tea!

How to make a cup of tea

I know what you’re thinking – everyone knows how to make a cup of tea, it’s a waste of time to document the process.  Fortunately, this is a really simple process to document:

Simple Process

So that’s us done right?  If only it were that easy…

I’ve experienced plenty of processes like the above, they are a weak attempt at documenting how a process works, and miss out quite a lot of the detail.  For a start, if you boil a kettle without adding water to it, you’re gonna have a bad time.  I’ve yet to make a cup of tea for a group of people without there being variables involved, some people want milk in their tea, and some people want sugar.  Some want both, some want neither.  So let’s start over and create a process that includes these variables:

Simple Process with variables

Ah, that’s better!

This is a much more detailed process, and accounts for the variables at different steps.  A little bit of extra thought, and asking a few more questions about the process has helped us to capture a lot more information and ensure the process accurately reflects the actual work being completed.

Unfortunately, I think this is still too simple.  There are lots of different types of tea, and they have different preparation processes.  In most businesses, their processes may have multiple divergent paths based on decisions taken at different parts of the process, and this can have a massive knock-on effect to your development if you don’t account for it at the planning stage.  I’ve lost so many hours to poor planning and a lack of understanding of the needs of the business when I’ve been creating my solutions.  Trying to unpick a solution after implementation can be time-consuming, painful and frustrating.

A comprehensive tea making flowchart might look like the below:

Comprehensive Process

This process accounts for multiple variables, divergent paths, and includes a lot of detail that would help me understand what I need to do to make sure my system can account for all of the steps.  It is possible to make this even more detailed if you wish, though you also have to know where to draw the line and not add complexity for no additional benefit.


As I’ve hopefully demonstrated above, it’s really easy to make a simple process, but there are risks involved in basing your system development on poor information.  Spending the time at the start to ensure that you fully understand the needs of the business and the process problem you’re creating a solution for will ensure that your development time will not be wasted.

At the very least, after reading this I hope you know how to make a cup of tea!