Weeknotes – series 03 episode 04

This week I have been mostly thinking about user-centred design and how to apply it when designing, measuring and improving data services (again).

DWP's presentation at Data Bites

On Monday I watched a bit of the Data Bites meet-up featuring Aleks Bobrowska from the DWP Data Science Newcastle Hub. The talk was about bringing skills demand data to local government via a digital service.

It was clear from the talk that Aleks had assembled a multi-disciplinary team (interaction designer, content designer and user researcher), and they were designing to meet user needs.

Even though there was clever data science stuff going on in the background, the user interface of the service presented the information in a clear and simple way, using plain language. I know from experience this is really hard to do with data.

Looking forward to seeing what Aleks, Pete and Ryan do next.

Selecting multiple things from a long list

This blog post by Andy Sellick, a frontend developer at GDS (Government Digital Service), is a nice, honest write-up about the difficulties of creating accessible components for search interfaces.

It's relevant to this post I wrote whilst at ONS (Office for National Statistics) called picking things from a long list, and it feels like a problem I'll come up against again when designing data interfaces in future.

Challenges with searching, finding and filtering seem to crop up in every conversation I have about data.

Common problems facing users of data services

I've been looking over some past projects at Swirrl to see what I can learn from previous user research work. The MHCLG (Ministry of Housing, Communities and Local Government) user engagement review was summarised in this blog post.

It's interesting for me to see the similarities and overlaps in user needs and usability problems that seem to crop up repeatedly in data catalogues.

I put together a list and shared it on Twitter…


I'm collecting all this thinking together into a slide deck as I go, and hopefully can share it more widely (perhaps as a blog post) when it's more coherent.

GOV.UK Registers have done a good job of solving common tasks

Obviously I'm not a true user of GOV.UK Registers, but I think they've done a great job at creating a usable interface for exploring datasets.

They meet many of the user needs I've observed elsewhere, for example…

  • A simple search with filters
  • Enough important metadata (description, row count, field descriptions) without being overwhelming
  • Clear links to get the data (via download or API; there's a rough sketch after this list)
  • A history of changes
  • An option to subscribe to be notified of updates
  • A preview of the data in tabular format
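To make that "get the data via the API" point concrete, here's a rough sketch of fetching a register's records as JSON. I'm writing the endpoint shape ({register}.register.gov.uk/records.json with a page-size parameter) from memory of the Registers documentation, so treat the exact URL and parameters as assumptions rather than a reference.

    # Hypothetical sketch: pull one page of records from a GOV.UK register.
    # The endpoint shape and the "page-size" parameter are assumptions based
    # on my recollection of the Registers API docs.
    import requests

    def fetch_register_records(register, page_size=100):
        """Fetch one page of records from the named register as JSON."""
        url = f"https://{register}.register.gov.uk/records.json"
        response = requests.get(url, params={"page-size": page_size})
        response.raise_for_status()
        return response.json()

    # e.g. the 'country' register
    records = fetch_register_records("country")
    print(f"{len(records)} records fetched")

The nice thing is that the CSV download links cover the same "get the data" need for people who would rather not write code at all.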

They also have a nice poster that describes the characteristics of a register (I love a good poster) in this article by Ade Adewunmi.

How do you define success for a data catalogue?

I've been thinking a lot about Giuseppe Sollazzo’s tweet about how we can define success for a data catalogue.

I think it's fair to say that my thinking around KPIs (key performance indicators), OKRs (Objectives and Key Results) and outcomes versus outputs is pretty immature. But I hope to spend some serious time digging into how I can link these kinds of things back to user needs in future.

Luckily Steve Messer is blogging good stuff about OKRs right now.

Data inclusion scale

I also had a chat with user researcher Louise Petre about her work on data.gov.uk and the data inclusion scale. You can read a bit about the data inclusion scale in this post.

It's a way to plot users based on their understanding of data and their ability to manipulate it, ranging from "data indifferent" to "data expert". I think it will be a useful tool to try out with stakeholders and users in future.

It reminded me that Matt Knight worked on something similar whilst he was at ONS. Here's his blog post about it: A data literacy scale?

Other things that I'm reading