Brainstorming session – trying out 3 new design methods
Today I facilitated the first brainstorming session after participating in the HCD workshop. The goal of the session was to generate as many ideas as possible and then narrow them down to just a couple we want to focus on. We went from about 130 ideas to 4 in less than 3 hours with a team of 13 people.
Given a rather broad problem statement around the topic of collaboration, and a handful of existing customer issues/needs, we kicked things off with the Rose, Bud and Thorn exercise. In our case, this is what they map to:
- ROSES: things that we're doing today and that we want to continue doing in the future because they work well
- BUDS: things we're not hearing explicitly from customers, but that might be an opportunity to improve their experience
- THORNS: existing pain points and customer issues that we need to address
We followed 3 simple rules:
- Do not worry about implementation or feasibility issues (I’ll throw a juggling ball at you if you bring implementation issues up). When in doubt, think about the problem statement and, if you need help, focus on customer issues
- You have 10 minutes to generate as many Roses, Buds and Thorns as you can
- Everybody has to write at least 2 of each
We ended up with about 130 stickies, about 1 per minute per person:
- Roses: 32
- Buds: 61
- Thorns: 37
This makes sense: we had some past and current experience with collaboration features, so there were some polarized opinions and feelings about them. However, the team mostly saw this as an opportunity to come up with new ideas (buds).
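As a quick sanity check on those numbers, the counts below are from the session and the script itself is just illustrative arithmetic:

```python
# Sticky counts from the Rose, Bud and Thorn exercise.
roses, buds, thorns = 32, 61, 37
people, minutes = 13, 10

total = roses + buds + thorns
rate = total / (people * minutes)  # stickies per person per minute

print(total)  # 130
print(rate)   # 1.0
```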
Next, it was time for some Affinity Clustering. We gathered around a large whiteboard and started by looking at a few roses. After a couple of minutes, people spontaneously started to put their stickies on the board, sometimes reading them out loud, sometimes just placing them next to a related sticky.
When all the stickies were placed somewhere on the board, the task of creating clusters of related stickies looked quite daunting, but little by little we started drawing circles around the related ones, adding a cluster label, and moving stickies into the same cluster if related and further apart otherwise. This was a highly collaborative effort: it was not just me drawing and labeling clusters (see image below).
We took turns at reading stickies in each cluster to make sure everybody agreed with the clusters on the board and the labels. It was hard to keep everybody’s attention throughout this process, due to the large number of stickies, but we did it, and changed quite a few clusters on the way. The picture below shows the end result.
So now what? How do we narrow it down to just a couple of ideas from here?
Next, we wrote cluster labels onto feature-like stickies (with a few exceptions like security, which we want to make sure we implement regardless of the solution) and ended up with 13 stickies. At this point, we were ready for our last design method: the Importance/Difficulty matrix.
We started ranking ideas from least important to most important on the X-axis. In our case, I translated importance into relative customer impact. Namely: if we were to address the collection of issues in a cluster, how big an impact would it have on the customer experience?
For this exercise, the team needed to agree as a whole on relative impact. We went for an approximation rather than an exact ordering, since we didn't yet have all the details to be precise; but with people from many different departments of the company represented, we could make some calls.
We could have left it here, since the product owner (the one who asked for this brainstorming session) did not want to get into implementation details, but she also wanted to narrow the possibilities down to just a couple, so I asked if people wanted to stay a bit longer and finish the exercise (we had been working on this non-stop for 2 and a half hours). Nobody moved, so I unveiled difficulty as the Y-axis.
Finally, the time had come to worry about feasibility and implementation. For our purposes, think of difficulty as the cost, the complexity, the skills and resources required, or the amount of time it would take to implement a specific feature.
We moved each sticky one by one along the Y-axis so that they were relatively ranked according to difficulty. I then divided the matrix into 4 quadrants (see image below, which does not correspond to our matrix) and got the following distribution:
- Most important, easier to implement (aka target) with just one sticky (hurray!)
- Most important, harder to implement (strategic) with 4 and a half stickies
- Least important, easier to implement (low-hanging fruit) with 2
- Least important, harder to implement (luxury) with 5 and a half
Generic example of Importance/Difficulty matrix
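To make the quadrant logic concrete, here is a minimal sketch of how the matrix divides stickies into the four labels above. The function, the numeric scores, and the sticky names are all hypothetical: in the session we used relative positions on a whiteboard, not numbers.

```python
def quadrant(importance, difficulty, mid=0.5):
    """Map normalized (0-1) importance/difficulty scores to a quadrant label.

    Hypothetical helper for illustration only; thresholds are arbitrary.
    """
    if importance >= mid:
        return "target" if difficulty < mid else "strategic"
    return "low-hanging fruit" if difficulty < mid else "luxury"

# Made-up stickies with (importance, difficulty) scores.
stickies = {
    "share a workspace": (0.9, 0.3),   # most important, easier
    "real-time editing": (0.8, 0.8),   # most important, harder
    "quick polish item": (0.2, 0.2),   # least important, easier
    "custom theming":    (0.3, 0.9),   # least important, harder
}

for name, (imp, diff) in stickies.items():
    print(f"{name}: {quadrant(imp, diff)}")
```

Placing each sticky one axis at a time, as we did, amounts to determining these two coordinates independently before reading off the quadrant.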
There were really only 4 stickies that were most important, which fulfills the session's goal and will lead nicely into our next design session, where we'll flesh them out and come up with the best solution we can implement for September.