Data-Driven Design, or DDD for short, is a design methodology that, in essence, aims to eliminate most of the subjective elements of design from the designer's point of view. In this post I am going to talk about my personal view of how to implement this approach for web-based products.
As the name implies, at the centre of the methodology is the idea of informing design decisions with data, to avoid falling into counterintuitive pitfalls that pull our products away from their optimal state. From this idea, a process of continuous improvement emerges.
The key: iteration and continuous improvement
For this process to succeed, it must be executed iteratively and continuously. The next diagram shows a general outline of the process.
The first step in the planning stage is to define the UX goals that we want to achieve. The main result of this plan should be a set of metrics that correlate with the data that will be collected later. Of course, different aspects of UX will influence each other. Analyzing those cross effects is part of the final decision on which design alternative becomes part of the final product.
One tool among many that can be used for the analysis of the goals is Google's HEART UX measurement framework.
Isolating those cross effects to the greatest possible extent should be the guiding principle when producing the designs to be tested: focus on a single element and create alternatives that modify only that one component.
While creating the alternatives, it is useful to define user personas, which represent the different groups of users, and to think about how each alternative could affect the behaviour of each group.
There are several ways to test the different design alternatives, but the most common one is A/B testing. To carry out the test, we show the different versions of the page to different users, recording data about their behaviour, until we have a statistically meaningful sample for each design alternative. The distribution of the personas that we identified in the previous step should be the same in all the groups.
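As a minimal sketch of how users can be split into groups, a deterministic hash of the user ID keeps returning visitors in the same group across sessions. The hash function (FNV-1a) and the variant names here are illustrative choices, not tied to any particular testing tool:

```typescript
// Deterministically assign a user to a variant so that repeat visits
// always see the same version of the page.
function fnv1a(str: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return hash >>> 0;
}

function assignVariant(userId: string, variants: string[]): string {
  return variants[fnv1a(userId) % variants.length];
}
```

Because the assignment depends only on the user ID, no per-user state needs to be stored to keep the experience consistent.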
There are several services we can use to study the behaviour of our users. The most obvious one is Google Analytics, which provides both behavioural and demographic information, though in limited detail. Google Tag Manager is a more fine-grained tool to track all sorts of events on a page.
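Google Tag Manager picks up events pushed onto the global `dataLayer` array. The event and parameter names below are hypothetical examples, but the push mechanism is GTM's standard one:

```typescript
// GTM reads events pushed onto the global `dataLayer` array.
// Event and parameter names here are hypothetical examples.
type DataLayerEvent = { event: string; [key: string]: unknown };

const dataLayer: DataLayerEvent[] = ((globalThis as any).dataLayer ??= []);

function trackEvent(name: string, params: Record<string, unknown> = {}): void {
  dataLayer.push({ event: name, ...params });
}
```

Tagging each event with the variant the user saw is what later lets us break the metrics down per design alternative.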
At a different level, there is another set of tools, such as Hotjar or Inspectlet, that let you record and play back a whole user session, showing every movement the user made while on the page. This data, though more accurate, can be more difficult to process; it can also raise ethical concerns and affect the overall speed and responsiveness of the page.
That being said, those tools only provide insight into the objective behaviour of the user. To make a fully informed decision, the subjective experience of the user should be measured as well. This can be done through services like Usabilla, for example, by using "exit surveys", which are presented to users as they are about to leave the page.
The process of collecting data should fit the needs of the team in charge of making the decision. Also, due to the iterative nature of the process, the usage of these tools can get expensive, so it is wise to keep it to the minimum necessary while still covering both objective and subjective aspects of the user experience. Remember that eliminating subjective aspects for the designer does not mean eliminating them for the user.
Once we have statistically meaningful data, we can proceed to the analysis. In this stage, we use all the information gathered in the previous step to understand how the change implemented in each alternative affected the metric we set out to measure.
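As a sketch of what "statistically meaningful" can mean for a conversion metric, a standard two-proportion z-test compares the conversion rates of two variants. The numbers in the usage comment are made up for illustration:

```typescript
// Two-proportion z-test: is the difference in conversion rate between
// variants A and B statistically significant?
// convA/convB: number of conversions; nA/nB: sample size per variant.
function twoProportionZ(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPooled = (convA + convB) / (nA + nB);
  const stdError = Math.sqrt(pPooled * (1 - pPooled) * (1 / nA + 1 / nB));
  // |z| > 1.96 corresponds to significance at the 5% level (two-sided).
  return (pB - pA) / stdError;
}
```

If the absolute z-score stays below the threshold, the observed difference may just be noise, and the test should either continue or be declared inconclusive.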
At this point, we can use the data from previous iterations to think about the cross effects. Since we go through this process iteratively, we can test whether individual changes affect the different metrics in different ways.
Finally, we can make a design decision backed by data. Sometimes the results will be counterintuitive, which can make the design team or management reluctant to adopt the change that has been proven more effective. This methodology can sometimes prove human intuition wrong, and that is difficult to deal with.
If a good analysis of the data has been done, including possible cross effects, the solution should improve the final product.
Rinse and repeat
As previously stated, iteration is key for this process to be successful. Never settle on a design. Instead, create a culture of continuous improvement for the product. Get everyone involved, from the design to the management team. This process can be time-consuming and expensive, but it is a good way to back design with something more than intuition.
SEO and the testing process
There are some considerations to keep in mind when testing the different design alternatives, since the testing needs to be done on live systems in order to gather data from real users.
Cloaking, not even once
Search engine bots should always see exactly what the users see, whatever that is. Not respecting this is a great way to get severely penalized.
Limit the duration of the tests
Tests should be as short as possible: limit the data-collection phase to the minimum needed to reach statistical significance.
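One way to bound the duration in advance is to estimate, before launching, the minimum sample size per variant needed to detect the effect we care about; dividing that by daily traffic gives an upper bound on how long the test must run. This sketch uses the standard formula for comparing two proportions, with significance and power levels hardcoded as assumptions:

```typescript
// Minimum sample size per variant to detect a lift from baseline rate p1
// to target rate p2, assuming 5% significance (two-sided) and 80% power.
function sampleSizePerVariant(p1: number, p2: number): number {
  const zAlpha = 1.96; // z-value for 5% significance, two-sided
  const zBeta = 0.84;  // z-value for 80% power
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}
```

For example, detecting a lift from a 10% to a 12% conversion rate needs a few thousand users per variant; smaller expected lifts need substantially more.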
Use proper redirections
In order to redirect users to the different design alternatives, it is best to use the HTTP 302 (Found) status code, which indicates a temporary redirect, instead of 301 (Moved Permanently). This lets all agents know that the situation is only temporary.
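A minimal sketch of such a redirect in a Node.js request handler is shown below. The paths are illustrative, and in a real test the variant URL would depend on the user's assigned group:

```typescript
import type { IncomingMessage, ServerResponse } from "node:http";

// Serve a design alternative behind a temporary redirect. Using 302 (Found)
// instead of 301 (Moved Permanently) tells crawlers and caches that the
// canonical URL is still /landing. Paths here are illustrative.
function abRedirectHandler(req: IncomingMessage, res: ServerResponse): void {
  if (req.url === "/landing") {
    // In a real test the target would depend on the user's bucket.
    res.writeHead(302, { Location: "/landing-variant-b" });
    res.end();
    return;
  }
  res.writeHead(404);
  res.end();
}

// Wire it up with: createServer(abRedirectHandler).listen(port)
```

Once the test ends and a winning design is chosen, the temporary redirect should be removed so the canonical URL serves the final version directly.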