Filtered Lookup Field based on Linked Entity using North52

If you’ve ever had a requirement to filter lookup fields then you’ll no doubt be aware that this is possible in Dynamics 365, but that there are some limitations to the functionality.

Microsoft have done a great job of enabling out-of-the-box filtering for simple scenarios using the “Related Records Filtering” options or by limiting the records returned using specific view(s).

To read more about the options available out of the box I’d recommend referring to Carl de Souza’s blog post – https://carldesouza.com/filtering-lookup-fields-in-dynamics-365/

For the more developmentally minded amongst us there is also the option to use the addCustomFilter JavaScript function, more information on which can be found on the Microsoft Docs site – https://docs.microsoft.com/en-us/powerapps/developer/model-driven-apps/clientapi/reference/controls/addcustomfilter
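To give a flavour of the client-side approach, here is a minimal sketch, assuming a hypothetical lookup field called new_contactid that should only show active Contacts; the field name and filter are illustrative only and not part of the scenario later in this post.

function filterContactLookup(executionContext) {
    // Minimal sketch of the addCustomFilter approach, registered on the form OnLoad event.
    // The field name (new_contactid) and the filter criteria are hypothetical examples.
    var formContext = executionContext.getFormContext();
    var lookupControl = formContext.getControl("new_contactid");
    lookupControl.addPreSearch(function () {
        // Only return active Contacts (statecode = 0)
        var filterXml = "<filter type='and'>" +
                        "<condition attribute='statecode' operator='eq' value='0' />" +
                        "</filter>";
        lookupControl.addCustomFilter(filterXml, "contact");
    });
}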

For those who are comfortable with JavaScript I’d recommend reading Aileen Gusni’s posts about this for some tips and tricks – http://missdynamicscrm.blogspot.com/2016/09/utilize-custom-action-to-help-filtering-lookup-view.html

The Scenario

In my scenario we have a Peer Review entity to record the outputs of peer reviews carried out for activities related to an Account. The Peer Review entity has several Reviewer Roles which are lookups to the User entity. The lookups need to be filtered to only show Users who are in the Account Team. I’ve mapped the relationships between the entities below:

I tried to get this to work with the OOTB options, but found that I couldn’t quite get them to work for this scenario. I also looked at the JavaScript options but again ran into issues, primarily because I need the filtering criteria to be dynamic on each Peer Review record depending on the selected Account, whereas the JavaScript was a bit prescriptive for me. (Note: I’m not a coder, so someone cleverer than me could probably get it to do what they needed).

However, in exploring the JavaScript I stumbled upon a potential solution. You can use an “in” operator in a condition in your FetchXML to specify the list of values to be returned, like so:

<filter type='and'> 
        <condition attribute='YOUR_FIELD_HERE' operator='in'>
          <value>{YOUR_GUID_HERE1}</value>
          <value>{YOUR_GUID_HERE2}</value>
          <value>{YOUR_GUID_HERE3}</value>
        </condition>
</filter>

If I could figure out a way to make this list of values dynamic then that would solve my problem!

The Solution

To solve this issue I turned to my trusty old friend North52. I’ve written previously about using looping functions and I’ll be doing something similar here.

The first step is to create the FetchXML that retrieves the Users from the Team, which I’ve done using Advanced Find to output:

<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="true">
    <entity name="systemuser">
        <attribute name="systemuserid" />
        <link-entity name="teammembership" from="systemuserid" to="systemuserid" visible="false" intersect="true">
            <link-entity name="team" from="teamid" to="teamid" alias="ab">
                <filter type="and">
                    <condition attribute="teamid" operator="eq" value="{0}" />
                </filter>
            </link-entity>
        </link-entity>
    </entity>
</fetch>

As you can see above, the value in the teamid condition is set to {0}, and we’ll set this dynamically in the ClientSide – Perform Action formula, which is below:

SmartFlow(
  ForEachRecord(
    FindRecordsFD('TeamMembers', true, SetParams([ryan_peerreview.ryan_accountid.ryan_accountteam.teamid.?])),
    Case(RecordIndex(),
      When(0), Then(SetVar('teammembers', StringFormat('<filter type="and"><condition attribute="systemuserid" operator="in"> <value>{0}</value>', CurrentRecord('systemuserid')))),
      When(RecordTotal()-1), Then(SetVarConcat('teammembers', StringFormat('<value>{0}</value></condition></filter>', CurrentRecord('systemuserid')))),
      Default(SetVarConcat('teammembers', StringFormat('<value>{0}</value>', CurrentRecord('systemuserid'))))
    )
  ),
  AddPreFilterLookup('ryan_primaryreviewerid',
    'q1z',
    GetVar('teammembers'),
    'systemuser')
)

I’ll explain the key elements of this formula below:

SmartFlow: SmartFlow allows you to run multiple actions in one Formula

ForEachRecord: ForEachRecord is a looping function, and it iterates through the output of the FetchXML query we created above using the FindRecordsFD function and carries out the actions specified. As mentioned above, I set the value of the TeamID to be {0}, and now I use the SetParams function to define the value that will be put in here.

As we’re using ForEachRecord to loop through the records returned by the FetchXML, I use the Case function, together with the SetVar/SetVarConcat functions, to build a variable containing the filter that will be applied to the lookup field.

The Case function works by splitting the Filter FetchXML into 3 parts:

  1. The Opening section, which includes the open tags for the Filter and Condition, and the first value returned from the FindRecordsFD function
  2. The Looping section, for all the values between the first and last values returned from the FindRecordsFD function
  3. The Closing section, which includes the closing tags for the Filter and Condition, and the last value returned from the FindRecordsFD function

To make this work with the Case function, we use the RecordIndex function, which returns an integer containing the current (zero-based) index of the loop, so the Case function can be described in plain English as:

WHEN we are on the first loop, THEN create a variable with the opening section of the Filter FetchXML;
WHEN we are on the last loop, THEN concatenate the variable with the closing section of the Filter FetchXML;
OTHERWISE if we are not on the First or Last loops, THEN concatenate the variable with another value

When we have created the Filter FetchXML we use the AddPreFilterLookup function to add the filter to the selected field.

Once we’ve done all of this, the field will show only the people who are in the Team related to the Account on the Peer Review record:

Conclusions

I think this is a good method of dynamically altering the available options in a lookup field, and I can envision a number of useful scenarios for this functionality. Please leave a comment below or reach out to me on social media with your thoughts.

Postcode Region Mapping via Workflow

I recently delivered a session at Dynamics 365 Saturday Scotland covering some advanced functionality you can implement in your Dynamics 365 environment using free custom workflow activities.

To read my thoughts on #D365SatSco and how amazing it was, see the article I posted on LinkedIn

One of the scenarios I covered in my session was looking at how we can carry out regional analysis of our account using workflows, and I’ve outlined my solution below.

The Scenario

For this scenario I wanted to be able to check if the postcode that had been entered for the address on an Account was valid, and if so I wanted to be able to extract the outward code and use this to map the Account to its postcode area, locale, sub-region and region.

The Setup

For this scenario I added the following to my environment:

  • A new Entity called Region Mapping, containing
    • An Option Set with 4 options:
      1. Postcode Area
      2. Locale
      3. Sub-Region
      4. Region
    • A hierarchical Parent lookup field
  • New fields added to the Account entity
    • A single line of text field called “Extracted Postcode”
    • 4 lookup fields to the Region Mapping entity (one for each Option in the Option Set)

Once this was all created, I imported my dataset, which I derived from data sources from the Office for National Statistics. You can download a copy of my dataset below:

The Workflow

To create my workflow I used tools from two different custom workflow assemblies:

  1. Jason Lattimer – String Workflow Utilities
    1. Regex Match
    2. Regex Replace with Space
  2. Alex Shlega – TCS Tools
    1. Attribute Setter

Step 1 – Postcode Verification

In the UK, all postcodes follow standard formats, which makes it relatively easy to determine whether a postcode is valid or not. For my workflow I’m using the Regex Match step, so I need a Regex pattern to use. I wanted to be able to separate out the outward and inward sections of the postcode, so the expression I ended up with is:

((?:(?:gir)|(?:[a-pr-uwyz])(?:(?:[0-9]?)|(?:[a-hk-y][0-9]?)))) ?([0-9][abd-hjlnp-uw-z]{2})

I am not an expert at Regex, but I am very good at googling! I added this pattern to Regex101, which does a great job of explaining the component parts if you’d like to understand it further

The output from a Regex Match step will be True or False. If it returns False, you could use a Cancel step in your real-time workflow to display an error message informing the user that their Postcode was not valid.
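If you’d like to sanity-check the pattern outside of the workflow first, a quick JavaScript sketch like the one below (case-insensitive, purely illustrative) shows the same True/False behaviour:

// Quick test of the postcode pattern outside the workflow (illustrative only)
var postcodePattern = /((?:(?:gir)|(?:[a-pr-uwyz])(?:(?:[0-9]?)|(?:[a-hk-y][0-9]?)))) ?([0-9][abd-hjlnp-uw-z]{2})/i;
console.log(postcodePattern.test("SW1 1AA"));        // true - matches the expected format
console.log(postcodePattern.test("NOT A POSTCODE")); // false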

Step 2 – Extract Postcode Area

As I mentioned above, all UK Postcodes follow standard formats, and this is particularly true for the second part of the postcode, which is always one number followed by two letters. To carry out my region mapping I needed to be able to extract the first part of the postcode, so I used the Regex Replace with Space step to replace the second part of the postcode with 0 spaces, in effect just deleting it.

From my Regex pattern in the previous step, I used the second capturing group to match with the second part of the postcode:

 ?([0-9][abd-hjlnp-uw-z]{2})

The output from this step leaves us with the first part of the postcode, so we update the Extracted Postcode field on the Account entity with this, and we’ll use that in the next step.
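Outside of the workflow, the equivalent substitution can be sketched in JavaScript as below (again case-insensitive and purely illustrative); the Regex Replace with Space step achieves the same result within the workflow:

// Remove the inward code (and the preceding space, if any), leaving just the outward code
var inwardCodePattern = / ?([0-9][abd-hjlnp-uw-z]{2})/i;
var outwardCode = "SW1 1AA".replace(inwardCodePattern, "");
console.log(outwardCode); // "SW1"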

Step 3 – Run the Attribute Setter

I’ve previously discussed Alex Shlega’s Attribute Setter, and it’s one of my favourite custom workflow activities. It’s super easy to work with and allows you to dynamically set lookup fields from within your workflow.

The first thing to do is to create a Lookup Configuration with a FetchXML query to find the record you will be setting in the lookup field. For mine, I’ll be looking for the Region Mapping record that matches the extracted postcode. As I’ve discussed before, the magic in the Lookup Configuration is the ability to dynamically pass values to the FetchXML query by putting the schema name of the field that contains the value inside a pair of # marks.

The key part of the FetchXML query above is the second condition:

<condition attribute="ryan_name" operator="eq" value="#ryan_extractedpostcode#" />

Because the condition references the schema name of my Extracted Postcode field, whatever value is in that field will be inserted into the query when it is run by the workflow. The Attribute Setter will output the GUID of the matching Region Mapping record (i.e. the Fetch Result Attribute) and set it in the Postcode Area lookup field on the Account (i.e. the Entity Attribute).
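Conceptually, the token substitution works something like the sketch below; this is purely an illustration of the idea, not the actual code used by TCS Tools:

// Purely conceptual illustration of the #field# token substitution: each
// #schema_name# marker is swapped for that field's value on the record
// before the FetchXML query is executed.
var fetchXmlTemplate = '<condition attribute="ryan_name" operator="eq" value="#ryan_extractedpostcode#" />';
var account = { ryan_extractedpostcode: "SW1" };
var resolvedFetchXml = fetchXmlTemplate.replace(/#([a-z0-9_]+)#/g, function (match, fieldName) {
    return account[fieldName];
});
console.log(resolvedFetchXml); // <condition attribute="ryan_name" operator="eq" value="SW1" />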

Step 4 – Update Account

The final step, now that the Postcode Area has been updated, is to run a child workflow to update the Locale, Sub-Region and Region fields. For each of these fields, we’ll run an Update Record step and select the Parent of the predecessor (i.e. for the Locale we will find the Parent of the Postcode Area field value).

Conclusion

This is a relatively simple approach to allow you to carry out regional segmentation of your Accounts, which can be used for marketing purposes or for reporting.

If you’ve found it useful, or if you have any other ideas then please reach out to me on Twitter or LinkedIn

Excel Project Plan

This isn’t strictly a CRM/D365 post, but I think it could provide some assistance for planning CRM related projects, so I thought I’d share.

Any good CRM project, whether that be a new deployment or a small change, requires planning to ensure it is effective; the 5 P’s cliché “Proper Preparation Prevents Poor Performance” exists for a reason.  I am aware that plenty of people use Microsoft Project or use D365 Project Service Automation (if you want to learn more about this I’d highly recommend reading Antti Pajunen‘s excellent blog posts about PSA), however I am also aware, from my experience of working in small companies, that the licence costs for these products can be prohibitive.

A Simple* Solution

Any company that utilises the Microsoft Office technology stack as part of their business will have access to Excel, and therefore they’ll be able to utilise the vast array of templates that Microsoft have made available to help them in their business.  I’ve used many of them in the past, and continue to do so today.

There are many Project related templates available for Excel, and I recently saw the Agile Gantt Chart template.  This template is great because it provides a decent foundation for a Gantt chart, but there are a number of areas where I felt it was lacking, so I’ve modified it to try and make it more suitable for my purposes.

My Template

My concerns with the template available from Microsoft are:

  1. There is no ability to automatically schedule task completion dates
  2. There is no ability to include predecessors for tasks
  3. There is no ability to effectively resource manage tasks

With all of this in mind, I thought it would be a fun task to see if I could implement some improvements.

Project Plan Template

I’ve included a link below to download my version of the template.  The key features I’ve added are as follows:

Activities are added by:

  1. Selecting a Component from the drop-down selector in the Component column
    1. The Component drop-down is populated from the Component Column in the High-Level Summary Dates table on the Project Summary worksheet
  2. Manually inputting an Activity description in the Activity Column
  3. Selecting a Task from the drop-down selector in the Task column
    1. The Task drop-down is populated from the Task Column in the Mid-Level Summary Dates table on the Project Summary worksheet
  4. Selecting a Category from the drop-down selector in the Category Column
    1. Goal marks the Activity with a Goal marker on the Gantt chart
    2. Milestone marks the Activity with an Activity flag on the Gantt chart
    3. On-Track, Low Risk, Med Risk and High Risk format the cells on the Gantt chart in accordance with the format on the Legend at the top of the sheet

 

Start Dates are calculated as follows:

  1. Each Activity starts on the End Date of the preceding Activity in the list, unless:
    1. A Predecessor is selected by inputting the ID of the predecessor in the Predecessor column; and/or
    2. A number of “Lag Days” in working days is input in the Lag Days column
    3. An Actual End Date is entered for either the preceding task or the predecessor

 

End Dates are calculated as follows:

  1. The estimated effort in Working Days is input into the Effort (Working Days) column
  2. Responsibility for the task is allocated to a person using the drop-down selector in the Responsible column
    1. The Responsible drop-down is populated from Name column in the table on the Project Personnel worksheet
  3. The Task Duration is automatically calculated as the Estimated Effort / Effort Profile (from the Profile column in the Project Personnel Sheet), and is rounded up to the nearest ¼ day
    1. E.g. a task with an estimated effort of 1 day, allocated to a person with an Effort Profile of 50%, would have a Task Duration of 2 days
  4. Any holidays to be accounted for are documented in the Holidays table on the calcs worksheet
  5. The End Date is therefore the Start Date plus the Task Duration (in working days), skipping weekends and any holidays listed in the Holidays table (see the sketch after this list)
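To make the scheduling rules above concrete, here is a small sketch of the calculation in code form. The spreadsheet itself uses Excel formulas, so this is only an illustration of the arithmetic, and the function names are my own:

// Duration = Estimated Effort / Effort Profile, rounded up to the nearest 1/4 day
function calcDuration(effortDays, effortProfile) {
    return Math.ceil((effortDays / effortProfile) * 4) / 4;
}

// Walk forward one calendar day at a time, only counting days that are neither
// weekends nor listed holidays; fractions of a day finish on the final working day
function calcEndDate(startDate, durationDays, holidays) {
    var date = new Date(startDate.getTime());
    var remaining = durationDays;
    while (remaining > 0) {
        date.setDate(date.getDate() + 1);
        var isWeekend = date.getDay() === 0 || date.getDay() === 6;
        var isHoliday = holidays.some(function (h) {
            return h.getFullYear() === date.getFullYear() &&
                   h.getMonth() === date.getMonth() &&
                   h.getDate() === date.getDate();
        });
        if (!isWeekend && !isHoliday) {
            remaining -= 1;
        }
    }
    return date;
}

// Example from the text: 1 day of effort at a 50% profile gives a 2 day duration
var duration = calcDuration(1, 0.5); // 2
var endDate = calcEndDate(new Date(2024, 0, 5), duration, [new Date(2024, 0, 8)]);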

 

If you want to use this template you can download it here: Project Plan Template

Let me know if you find it useful!

Setting a Lookup from a Workflow

One of the limitations of the workflow engine that I have found frustrating for a long time is the inability to dynamically set lookup fields based on the output of a FetchXML query. However, I no longer have to worry, as Alex Shlega has provided the answer to my problems with his TCS Tools solution.

I’ve used this tool for a few solutions in my environment and, after discussion with my good friend Megan Walker I realised it might be good to share a sample scenario.

The Scenario

There is a web form that is used by visitors to a website to submit queries.  The queries are added to CRM and are all FROM no-reply@company.com.  Within the body of the email is an email address for the submitter, and we need to extract the email address, find the related Contact and set a Lookup field (Regarding) to link the Contact to the Email.

The submitted email body has the following format:

[title] [Mr]
[first name] [Ben]
[last name] [Willer]
[email] [benwil@alliedholdingcompany.co.uk]
[phone] []
[address1] [Mounters]
[address2] [Marnhull]
[address3] [Sturminster Newton]
[address4] [Dorset]
[postcode] [DT10 1NR]
[how did you hear about us?] [Internet search]

 

The Solution

First things first, you will need to install the TCS Tools solution in your environment.  The link above will take you to Alex’s website to download the solution.  As ever, this is a free third-party tool, so install at your own risk.

Next, you will need to add a Single Line of Text field to your email entity to store the email address we’re going to extract from the body of the email above.  Rather imaginatively, I’ve named mine new_extractedemail.  We’ll need this schema name in the next step.

Create Lookup Configuration

Navigate to the TCS Lookup Configuration entity and create a new lookup configuration as follows:

TCS Lookup Configuration

The Entity Attribute should be the schema name of the lookup field you wish to set with your workflow.  In my case, I’m going to be setting the Regarding field on the Email, so I’ll be using regardingobjectid.

Next we need to create a Fetch XML expression to use in the Lookup Configuration.  The easiest way to do this is to create an advanced find, then download the Fetch XML.  For this one, I’m looking for a Contact where the Email Address equals the submitted email address, so my Advanced Find looks like this:

Create Fetch XML

Note: as you can see above, I’ve set the Email to equal #new_extractedemail#.  The hash symbols are used by the TCS Tools solution to replace this value dynamically.

The Fetch XML expression will look as follows:

<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
  <entity name="contact">
    <attribute name="fullname" />
    <attribute name="telephone1" />
    <attribute name="contactid" />
    <order attribute="fullname" descending="false" />
    <filter type="and">
      <condition attribute="emailaddress1" operator="eq" value="#new_extractedemail#" />
    </filter>
  </entity>
</fetch>

Extract the Email Address

In order to be able to use the email address that was submitted above, we need to extract it from the body.  I use Jason Lattimer’s Regex Extract step from his String Workflow Utilities workflow solution.  In order to extract the email address we need to do two Regex Extract steps, as follows:

Step 1: Extract Email Address from Body

Regex 1

The Regex Pattern in this step is (?<=\[email\] )([\s\S]*)(?=\[phone\] )

The pattern essentially looks for any characters in between the [email] and [phone] sections in the email body, and therefore the output from the email above is [benwil@alliedholdingcompany.co.uk].

In order to be able to use this in my workflow, we need to remove the square brackets, so I do another Regex Extract on the output of this step.

Step 2: Extract Email Address from within Square Brackets:

Regex 2

The Regex for this step is (?<=\[)([\s\S]*)(?=\]).  This pattern looks for any content in between the opening square bracket and the closing square bracket, so the output now is benwil@alliedholdingcompany.co.uk.
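Chained together outside of the workflow, the two extractions look something like the JavaScript sketch below (purely illustrative; note that the lookbehind syntax needs a modern JavaScript engine):

// Illustration of the two extraction steps chained together (not the workflow itself)
var emailBody = "[email] [benwil@alliedholdingcompany.co.uk]\n[phone] []";
// Step 1: everything between the [email] and [phone] labels
var bracketedEmail = emailBody.match(/(?<=\[email\] )([\s\S]*)(?=\[phone\] )/)[0]; // "[benwil@alliedholdingcompany.co.uk]" plus the line break
// Step 2: everything between the opening and closing square brackets
var extractedEmail = bracketedEmail.match(/(?<=\[)([\s\S]*)(?=\])/)[0];
console.log(extractedEmail); // benwil@alliedholdingcompany.co.uk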

Note: I am not a Regex expert, but I have found Regex 101 invaluable in learning and testing my expressions, because it lets you see how the expression works and explains what each element means

Once we have carried out the Regex steps, we update the new_extractedemail field with the output of the second step:

Update Extracted Email Step

Run the Lookup Setter

Now that we have the email address extracted and available on the Email entity, the last step is to run the TCS Lookup Configuration we created above to set the lookup:

Set Lookup Configuration

 

The final workflow should look a bit like this:

Workflow

 

Conclusion

This functionality is a really powerful addition to the workflow engine, and opens up a whole raft of advanced possibilities for CRM administrators to create workflows to solve complex problems.  I’ve used this internally to map Excluded Emails from ClickDimensions to error codes for the purposes of reporting, and I’m working on additional scenarios that we can use it for.

 

CRM Development – As easy as making a cup of tea?

Last weekend I attended the Dynamics 365 Saturday Summer Bootcamp in London; this was a great event full of opportunities to network, engage and learn, and I am very grateful to the organisers for putting on the event.  Whilst at the event I was speaking to someone who told me that he was currently working as a business analyst but was considering training to become a CRM Functional Consultant and it got me thinking about the importance of business analysis skills to me in my role.

Always ask “Why?”

Before I became a CRM Consultant I studied Law at university, and I’ve trained as an ISO9001 internal auditor in previous job roles.  This background has provided me with a strong ability to analyse and understand business processes, though my biggest asset is probably my incessant habit of asking “Why?”.

I’ve spent countless hours developing solutions for CRM that then go unused by the business after deployment, and I could’ve saved so much of this time if I’d just asked “Why” a bit more.

  • Why do you need this solution?
  • Why are you recording this information?
  • Why does the process work like this?
  • Why can’t we use existing functionality?

The key thing is to make sure you have enough detail so that you have a full understanding of the requirements before you even commence development.  Think of it as a modern version of “measure twice, cut once”.

Creating Process Flowcharts

I have a logical mind, so I like to document all of my processes in flowcharts before I get underway with development.  I find a flowchart helps me to keep my thoughts in check and guides me in my development of system updates.  A good flowchart should be comprehensive but clear; it should provide you with all of the steps in the process and should have a clear, logical path to follow.  Anyone who has used Visio in the past can probably attest to the simplicity of creating flowcharts, though I think there is a bit of an art to making a good flowchart.

In order to demonstrate the level of detail I look for in a flowchart, I thought it would be useful to think of it in terms of a process that everyone can understand – making a cup of tea!

How to make a cup of tea

I know what you’re thinking – everyone knows how to make a cup of tea, it’s a waste of time to document the process.  Fortunately, this is a really simple process to document:

Simple Process

So that’s us done right?  If only it were that easy…

I’ve experienced plenty of processes like the above, they are a weak attempt at documenting how a process works, and miss out quite a lot of the detail.  For a start, if you boil a kettle without adding water to it, you’re gonna have a bad time.  I’ve yet to make a cup of tea for a group of people without there being variables involved, some people want milk in their tea, and some people want sugar.  Some want both, some want neither.  So let’s start over and create a process that includes these variables:

Simple Process with variables (1)

Ah, that’s better!

This is a much more detailed process, and accounts for the variables at different steps.  A little bit of extra thought, and asking a few more questions about the process has helped us to capture a lot more information and ensure the process accurately reflects the actual work being completed.

Unfortunately, I think this is still too simple.  There are lots of different types of tea, and they have different preparation processes.  In most businesses, their processes may have multiple divergent paths based on decisions taken at different parts of the process, and this can have a massive knock-on effect to your development if you don’t account for it at the planning stage.  I’ve lost so many hours to poor planning and a lack of understanding of the needs of the business when I’ve been creating my solutions.  Trying to unpick a solution after implementation can be time-consuming, painful and frustrating.

A comprehensive tea making flowchart might look like the below:

Comprehensive Process

This process accounts for multiple variables, divergent paths, and includes a lot of detail that would help me understand what I need to do to make sure my system can account for all of the steps.  It is possible to make this even more detailed if you wish, though you also have to know where to draw the line and not add complexity for no additional benefit.

Conclusions

As I’ve hopefully demonstrated above, it’s really easy to produce a simple process map, but there are risks involved in basing your system development on poor information.  Spending the time at the start to ensure that you fully understand the needs of the business and the process problem you’re creating a solution for will ensure that your development time is not wasted.

At the very least, after reading this I hope you know how to make a cup of tea!

 

Designing CRM Forms for User Experience

Introduction

In order to ensure User Adoption of CRM is successful and that the system is delivering results for Users, it is important to ensure that their experience of using the system is as painless as possible. In practice, this means removing unnecessary obstacles to enable them to find the information they need, when they need it. Whilst CRM is, by its nature, a data input system, it is important to focus on the outputs the system can deliver, and the actionable insights it can generate. A smooth and efficient user experience will help ensure that CRM becomes the point of reference for users when they are looking for data, and will drive them to embrace the system rather than relying on Excel spreadsheets and other workarounds that they may have used in the past.

Why Focus on UX?

Microsoft produced a set of User Experience guidelines in 2013, and the diagram below shows how designing an efficient User Experience benefits the business:

UX Principles
Aligning the purpose of CRM towards the end user’s needs leads to better user perception, as the users consider CRM to be giving them value by making them more efficient.  If the CRM system can be seen to be delivering value to the users it leads to wider user adoption, which naturally generates a greater quantity of higher quality data.

A CRM system that has a lot of quality data can be used to generate meaningful insights and actionable intelligence, and this can drive business decisions. If the business is able to see a valuable impact from the system, then they will be more likely to sustain and increase their investment in the system development as they will recognise the return on investment they are receiving.

Microsoft have summed this up in the paper as follows:

The key take away here is this: It’s very important to focus on the value CRM adds to the end users’ day-to-day activities and how it helps them achieve their goals and objectives. If this shared purpose isn’t established early and the focus isn’t enforced through design and implementation decisions, poor adoption and overall project failure will likely follow.

In order to demonstrate these principles in practice, I’ll use some sample issues I’ve encountered in my environment in the Accounts entity, though the conclusions reached can be equally applied across any system entity.

Note: the screenshots below are from a CRM 2016 (v8.1) system, and some of the issues encountered in this version of CRM can be addressed with the upgrade to Dynamics 365.  Nevertheless, I hope it still provides valuable food for thought.

Sample Issues

For the purposes of this section, I’ll be using the screenshot below as a reference

AccountFormSample.PNG

Long lists of fields

According to Miller’s Law, humans can only hold and process around seven “chunks” of information at a time. In practice in CRM this means that, where possible, we should try to avoid long lists of fields, as they become difficult and confusing to process unless they can be broken into smaller sections and groups that help the users to consume the data.

On the Account form above, you can see that the Account Information section on the left is comprised of a long list of over 20 fields.  The list of fields contains a number of different field types, there is not a logical flow to the fields, and they’re not grouped in a way that new and existing System Users will be able to understand easily.

Having a large list of unrelated fields also creates issues when you take into consideration the likelihood of the fields being filled in.  In the example above, different fields relate to different Account Types (i.e. clients, prospects, etc.).  If there are large lists of empty fields this can be visually jarring for users, and it can reinforce negative behaviours e.g. not filling in fields when information is known.

Visual Clutter

The sample Account form is also quite cluttered visually. For example, a large amount of screen real-estate is dedicated to the Notes & Activities feed; the information in this section, whilst useful, is not accessed regularly and is usually of interest only in certain circumstances.

Similarly, though it is not visible on the screenshot, underneath the “Address” section there is a map that highlights the location of the selected address. This map is rarely used, often incorrect, and therefore consumes more space on the screen that could be utilised for more relevant information.

Utilisation of Sub-Grids vs. Associated Grids

There are a number of Subgrids on the Account form, however there is limited screen space to ensure all relevant data from the related entities can be displayed. For example, the Contacts subgrid in the screenshot above is situated on the right hand side of the screen, but there are a few potential issues with it:

  1. It only displays six contacts at a time. If there are a lot of contacts on the account, Users would need to navigate multiple pages to find the specific contact they wanted.
  2. The list is organised alphabetically; whilst this makes logical sense, it is typically not the most useful form of sorting. For example, if the contacts were organised by Job Role, or by the number of times contacted, etc., this would probably be more useful for Users
  3. The subgrid only displays two columns, and is therefore missing other potentially useful information that would assist users. They will have to open the Associated Grid or the individual Contacts to find the information they need

There is nothing inherently wrong with using subgrids, however if the information is used infrequently, or the amount of information that needs to be conveyed is more than can be displayed in a small grid, then it is recommended to use the Associated Grids.

Field Positions

Whilst recognising that there is a need to make space on forms for fields that are used infrequently, the position of these fields could be considered to create logical flow through the form. In an ideal scenario, the most useful fields will be closer to the top of the form, and the lesser used fields will be relocated further down. This ensures the User Experience can be optimised to minimise excessive scrolling or searching for information.

For example, the Company Profile section contains fields for Number of Employees and Annual Revenue, however they are not completed on many Accounts. The SIC code fields are also too small to display all of the information. The Primary Contact field is similarly not filled in on a lot of the records.

 

Multiple Forms with Similar Data

One of the great flexibilities of the CRM system is the ability to create multiple forms for entities, however lazy development can quickly lead to issues with this.  When creating a new Form, Microsoft “helpfully” pre-populate the form with tabs, sections and fields for you.  In order to ensure the Forms serve a purpose you should only add fields to the form if they’re required, and remove everything else.  The Account entity in this example had 7 forms available to all system users.  Whilst each form was intended to serve a purpose, the form design meant that there were a lot of repeated fields across all the forms, leading to confusion for Users about which Form they should be using.

Related Records Navigation

related Records Navigation.PNG

An oft-overlooked aspect of CRM Form design is the related records navigation section.  It’s easy to forget to update this section, and there is a limit to the customisation capabilities for this section.  In the example above the list of Related Records is not optimised to deliver best results for Users.

To reduce navigational clutter, any related records that are not likely to be used should be removed, e.g. Import Logs, Feedback, etc. The relationships should be specifically named to ensure there is no confusion (i.e. there are two “Opportunities” relationships currently visualised, but they don’t have specific descriptions).  You should also group the related records into common sections to make it easier to navigate for Users.

Recommendations

In order to address the issues highlighted above, I would recommend designing your Forms to follow, where relevant, the Microsoft guidelines for user experience design.

Some of the recommendations I would implement are:

  1. Field Label Width – set to ensure the full label of the fields are visible and not cut off
  2. Tab and Section based grouping – using Form Tabs to group related fields, and sub-grouping the data into sections within the tabs
  3. Smart Form Design – where necessary, using business rules and JavaScript functions to hide/show fields as required (see the sketch after this list), and using Field Security to restrict access to fields to specific security roles
  4. Removing Duplication – on occasions where additional forms are required, ensuring they are not simply a duplicate of other forms, and are designed for a specific business need.
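As an example of point 3, a simple hide/show rule driven by the Account type might look something like the sketch below. The field names and option set value are hypothetical, and a business rule can often achieve the same result without code:

function toggleFieldsByAccountType(executionContext) {
    // Hypothetical example: only show the "Client Since" field for Accounts of type Client
    var formContext = executionContext.getFormContext();
    var accountType = formContext.getAttribute("new_accounttype").getValue();
    var isClient = accountType === 100000001; // option set value for "Client" (hypothetical)
    formContext.getControl("new_clientsince").setVisible(isClient);
}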

An example of how these recommendations have been implemented can be seen below:

NewAccountForm

For the example above, I’ve implemented the following changes:

  1. Grouped fields into common sections to make it easier to find relevant information
  2. Used the Advanced Multiselect solution to add multiselect options
  3. Removed the Address composite field.  I’m really not a fan of the composite fields, though I appreciate they can serve a purpose.
  4. Changed the “Country” field from Single Line of Text to an Optionset to ensure consistent data input
  5. Used JavaScript and business rules to show/hide fields based on selected account type
  6. Moved the Notes and Activities feed to a dedicated tab to enable it to be collapsed to save screen real estate
  7. Removed sub-grids where they were not serving a useful purpose
  8. Removed unnecessary duplicate forms
  9. Cleaned up the related records navigation (see below)

New Related Records Navigation.PNG

 

I think this approach has made the Form more usable, and it’s much easier to digest from a visual perspective.  Taking the time to understand the User’s needs helps to ensure they appreciate the system and feel that it is working for them, rather than against them.

If you’ve got any thoughts on how to improve the User Experience please add your comments below, or get in touch with me