Overwhelmed or nervous about digging into your first large design effort? Have no fear. In this blog post series, I’ll cover my approach to design efforts with examples from the new datasource management administration page. My process varies slightly depending on the hat(s) I’m wearing, but for the most part I’ve found that the key elements to a successful design are the same whether I’m acting as a product owner or as a software engineer. So here we go!
A quick caveat before we get started – there are many successful ways to attack a design effort. You may find that my approach doesn’t quite match your style. Experiment with different strategies and find a process that works for you!
PHASE ONE: GATHERING INFORMATION
I always start by gathering all the assumptions, requirements, and opinions from people who have insights. Here’s what that process looks like for me:
- Do some background research ahead of time. This includes reading the story, associated Confluence pages, associated tickets/support cases, and linked Slack threads. As I go, I jot down the names of anyone who's expressed an opinion on the topic (customer or developer) for later. If there's an existing solution or workaround, I put on my customer hat and walk through the entire workflow from start to finish. I don't worry if I get stuck or frustrated – I just take notes on what's confusing, tedious, or unintuitive. I time-box this step to somewhere between half an hour and a few hours depending on the size of the effort. This limit is important: spending too much time here can lead to forming assumptions or attachments to a design direction too early, biasing later conversations with stakeholders or customers. Lengthy background research is also usually inefficient compared to diving in and exploring as you go.
- Make a fluid list of individuals to talk to. There may already be a list of stakeholders in the ticket itself, but I try to include people who aren't listed yet are heavily involved in conversations online or working with customers in the problem space. I also try to make sure I have voices from different backgrounds – e.g. customers who depend on a workflow, customers who casually use a workflow, customers who rarely use a workflow, support staff and trainers who see the most common issues, etc. I usually start with 5-10 individuals and end up talking to 7-15 as the list grows over the course of these conversations. I roughly order the list by role [product owner, customer-facing representative, support/training, architects, developers] to "start with" a customer-centric perspective and "blend in" architectural or development considerations later. However, people's varying availability to meet often shakes up the order.
As an example, my list for the datasource management design included:
- Product owners with a stake in the effort (representing the squad owning the administration component, the squad focusing on scaling, and the squad focusing on infrastructure updates in a related area)
- Members of the Seeq support team, who spend time troubleshooting and configuring datasource connectivity (customer persona 1)
- A partner responsible for writing, installing, and troubleshooting custom connectors at customers (customer persona 2)
- Analytics engineers who help configure and troubleshoot common issues (subset of customer persona 1)
- An admin of Seeq software (not a datasource/server admin) who uses the administration panel (customer persona 3)
- A solution architect who works with customers to configure special datasource cases (customer persona 4)
- Developers/architects who previously expressed opinions about the direction of the effort
For this effort, I had my technical product owner hat on. As such, my design goals were to plan a good user interface and reach a “big picture” cross-squad architectural consensus. So my list skews toward customer personas (about 70%), but your list may lean in the other direction for architecture-focused design efforts.
- Prepare questions to ask. These questions are mostly open-ended, leading individuals to respond with freeform answers instead of a yes or no. I often have separate (but overlapping) lists of questions for different audiences. Here are some common questions I ask:
- What are your frustrations with the existing workflow?
- Where do you spend most of your time within this workflow? How much time do you spend in this workflow overall, and when?
- What do you like most and least about your current solution? Are there features that you don't understand? Are there options that you never use or that provide minimal value?
- What are important requirements or technical considerations for you?
- Do you already have a strong preference for the direction of the new workflow, implementation, or architectural choices?
- What concerns do you have about the future of this effort?
- Do you know of any customers who are heavily involved or vocal in this space?
- Is there data that I can have access to?
- Are you aware of {conflicting opinion}? What are your thoughts?
- Who else would be a good person to talk to?
- Is there anything you want to mention that I haven’t asked?
I also ask design-specific questions. For example, here are some datasource management questions I asked:
- What problems are you trying to solve (or what information are you looking for) when you use the current datasources page?
- Can you tell me what these fields mean? (Note: these were fields that I found confusing when I explored the page in step one)
- Within the realm of datasources, what type of problems take the most time? What are the most common?
- How do you currently troubleshoot datasource connectivity problems?
- What is your mental model of agents, datasources, connectors, and connections? What is your perception of customers’ mental models?
- What are your hopes and concerns in relation to datasources as we transition to SaaS/horizontal scaling?
- Talk to the individuals on your list, preferably in one-on-ones. I find that group meetings and asynchronous communication (although great options for later stages of the design process) can suffer early on from meeting conflicts, loud voices, limited or sidetracked conversations, and muted honesty. During these conversations, I:
- Start each conversation by making sure that this is a good time for the person to talk and that they have at least 20 minutes to chat. I usually reschedule conversations if someone is rushed or stressed.
- Come ready to listen and understand with an open mind. I only ask questions during this time – this means no opinions, persuasion, leading questions, or checking to see if a design approach is acceptable. I don’t need to agree with what’s said – my goals are to a) understand where the other person is coming from and b) do so actively, so that my conversation partner feels confident and content that I understand and will consider their perspective. Active listening means that I’m approaching conversations openly and genuinely, and responding with clarifying questions and restatements (“Okay great. So to make sure I understand, your biggest concern is x because of …, and you’d also like to see y and z for … ” ).
This is important! Later, even if I put together the best possible design, it probably won't satisfy every ask from every person who cares about this effort. But it's easier to accept a design that excludes aspects you cared deeply about if you feel your concerns and opinions were given respectful consideration. Establishing that trust right now is critical to building confidence in your future design, reducing friction (and unhappiness) in later phases, and moving the design quickly through the review process.
- Take notes, ideally in a persistent format (sometimes the raw notes are helpful months or years later – you never know!). Occasionally I ask to record conversations, but I find people are often less likely to be blunt in their commentary when recorded.
Now what? Find out in Phase Two: Transforming Information into an MVD (Minimum Viable Design).