How to liven up retrospectives when they've gotten uneventful / unhelpful?

My current team runs weekly retrospectives using the Lean Coffee format. More and more, I find that the items people are bringing up aren’t really important or could just be a question in Slack.

For example, someone recently raised a topic about how we can test credit card payments. Another topic was navel-gazing about how we use Jira; multiple team members asked “what’s the problem you’re hoping to solve?”, to which the only answer was “That’s not what I’ve seen elsewhere”.

I’m beginning to think that there’s something wrong with our format or prompts, in that we aren’t identifying important issues for discussion. Perhaps the format is stale, or perhaps there are no serious issues lingering each week?

Any advice on alternative formats, how to get better feedback, etc. would be greatly appreciated.

Kissaki,

This is a perfect topic for retrospectives.

How do the others see your retrospectives? Are the retrospectives productive? Do you identify and resolve issues? Do you have issues you do not identify and resolve in this format? Is the form stale? What needs to change?

Talk about the goals and focus of your retrospectives. Is it a place for “open ended” questions and discussion? Or should it be more focused on factual or felt issues and irritations? (That doesn’t mean it can’t also be a place to get others’ views on personal irritations.)

In my team we have bi-weekly sprint meetings and retrospectives. Our agile master (basically a scrum master) prepares a themed board. Themes may liven and lighten up the process. Personally, I don’t like them much, and feel like the prepared, structured format often doesn’t offer room to raise the questions and issues that came up, but I’m not letting that stop me from raising them.

Every week feels quite often to me. Consider whether and how spacing them out further might change them, possibly for the better.

Consider making “nothing to discuss” a fine occurrence.

Consider whether other formats separate from retrospectives could focus retrospectives into something more specific.

kevin_alt2,

In my experience the Lean Coffee format, or something like a three-column “keep doing” / “stop doing” / “start doing” style retro, is just not enough to understand the interactions between items or to come up with a small number of high-quality experiments.

I fully agree with the other poster who mentioned starting to gather metrics so that you can measure experiment success. Just make sure that the metrics you’re gathering aren’t vanity metrics (or at least be aware of how they can be gamed, and keep an eye out for that when designing experiments).

When I run retrospectives they generally follow a four-step process: brainstorm items to discuss, filter those items to the ones with the greatest chance of creating an impact, brainstorm ways to create an impact around those items, then filter those ideas and form them into one or two experiments.

To put this in more concrete terms:

1. Brainstorm items to discuss

This might at first look a lot like the first step of your Lean Coffee format: spend no more than 5 minutes throwing out any ideas you have. Start with a timer of 2-3 minutes, adding 1 minute as you near the end of the timer if there is still activity. Then spend no more than about 30 seconds per item making sure the team understands what was meant by it, and do an initial filtering round of affinity mapping or grouping related items.

2. Filter those items to the ones with the greatest chance of creating an impact

Now we spend a few minutes discussing each of the groups of items, maybe 2-5 minutes per group exploring the impact of these items, interactions between them, and maybe touching on potential solutions. Finally, we either vote or otherwise filter down to one or two groups to bring to the next step.

3. Brainstorm ways to create an impact around those items

This is another brainstorming step, so we move fast again to reduce internal filters: spend no more than 5 minutes (again starting with a 2-3 minute timer) proposing experiments. These don’t need to be well formed, but should roughly follow the format of “if we do X, we expect Y”. As in the previous step, spend no more than 30 seconds per item making sure it’s understood and grouping related experiments.

4. Filter those items and form them into one or two experiments

Finally, we discuss each of the groups, possibly combining the proposed experiments or coming up with new ones during the discussion. Each group will likely yield one proposed experiment. If at the end of this process you have only one experiment, great! Get an active thumbs-up from the group to try it, and make sure it’s visible in your team space (on your stand-up board maybe?) so that it doesn’t get lost. If you have more proposed experiments, do a round of voting to get down to at most two that the group agrees to try. Limiting Experiments In Progress is very similar to limiting Work In Progress: we do it to improve the chances that any one experiment gets implemented in a relatively short timeframe.

For each of these steps there are all kinds of activities you could do to keep things fresh, check out Retromat and Liberating Structures for ideas.

steventrouble,

My last team at Google had a policy to cancel recurring meetings if they weren’t useful, and it helped us get a lot more done. It was probably the most productive team I’ve been on.

sbv,

Is it fair to say that your process has stabilized in a good place? It sounds like your team is happy with the current way of doing things.

Rather than a reactive retrospective, you could try running experiments. The ol’ “what if we stop doing X” or “do more of Y”.

Whenever I read about kanban, the author eventually talks about gathering metrics so the team can run experiments and see what happens to the metrics.

flumph,

Is it fair to say that your process has stabilized in a good place? It sounds like your team is happy with the current way of doing things.

I think that’s largely true. Not trying to assume intention, but I do think a few of the more junior folks on the team read something in a book or blog and think we have to do it that way. Or that it’ll work for us because it worked for Etsy/Netflix/ThoughtWorks/etc.

Whenever I read about kanban, the author eventually talks about gathering metrics so the team can run experiments and see what happens to the metrics.

You know, metrics might be a great way to discuss some of these concerns/ideas that are coming up. For example, the topic on Jira was related to Cycle Time. If we were concerned about Cycle Time, we could change how we use Jira for a few weeks and see what happens to the metric.
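
Just to sketch what I mean (the field names below are hypothetical, not our actual Jira export): per-issue Cycle Time is basically the “done” timestamp minus the “started” timestamp, so even a tiny script could track the median week over week.

```python
from datetime import datetime
from statistics import median

# Hypothetical export: one dict per issue with ISO timestamps for the
# "work started" and "work finished" transitions (field names are made up).
issues = [
    {"key": "TEAM-101", "started": "2024-05-01T09:00:00", "done": "2024-05-03T15:30:00"},
    {"key": "TEAM-102", "started": "2024-05-02T10:00:00", "done": "2024-05-09T11:00:00"},
]

def cycle_time_days(issue):
    """Elapsed days between starting and finishing an issue."""
    started = datetime.fromisoformat(issue["started"])
    done = datetime.fromisoformat(issue["done"])
    return (done - started).total_seconds() / 86400

times = [cycle_time_days(issue) for issue in issues]
print(f"median cycle time: {median(times):.1f} days over {len(times)} issues")
```

Track that number before and after changing how we use Jira, and the retro has something concrete to react to instead of “that’s not what I’ve seen elsewhere”.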

otl,

Hm. Interesting. This is something that gets me too…

Taking a step back: what are you hoping to achieve with the retrospective?

I find that the items people are bringing up aren’t really important or could just be a question in Slack.

What criteria would make something important? Conversely: what makes something minor?

Once we nail these it might be easier to focus the discussion.

flumph,

Thanks for helping me reframe my thoughts. I definitely don’t want to call anyone’s concerns unimportant.

More specifically, I view retros as a time to talk about improving the team, either through experimentation or doubling down on good practices. To that end, I’d want topics to be problems or shout outs.

Something like “how do we test credit cards” might be a sign of “our documentation isn’t great and it slows me down”, but it’s talked about as a discrete item. Similarly, “I haven’t seen Jira used this way before” isn’t a problem; maybe the underlying issue is “I don’t understand how we use Jira” or “what we’re doing causes a lot of paperwork”.

otl,

Thanks for helping me reframe my thoughts.

Haha don’t worry it’s for framing my thoughts too! ;)

To that end, I’d want topics to be problems or shout outs. Something like “how do we test credit cards” might be a sign of “our documentation isn’t great and it slows me down” but it’s talked about as a discrete item.

Devil’s advocate: what are those good practices? What constitutes improvement? One way to focus discussion is to try and pick some specific team values/objectives. Let’s go with the example about Jira.

Similarly “I haven’t seen Jira used this way before” isn’t a problem; maybe the underlying issue is “I don’t understand how we use Jira” or “what we’re doing causes a lot of paperwork”

The statement “I haven’t seen Jira used this way before” is not an ideal starting point for discussion, agreed! But with a value in mind I think we can work with it. Let’s say, for argument’s sake, the goal is a stronger shared understanding of project management.

You mentioned other team members asked “what’s the problem you’re hoping to solve?”. I think that’s a very pragmatic, specific question (I’m a software engineer, too, I get it!) but it’s not really in the service of the goal of the discussion: a stronger understanding of project management.

So how can discussion help with that? What about a Q&A session? The interactive, conversational exchange is a natural way for people to learn (I hear ChatGPT is pretty popular!), and it’s likely others will learn a bunch of stuff too about why things are done the way they are.


Another technique is to use the free-flowing discussion format for what it’s best at: exploration of ideas, not necessarily solving problems. Solving problems usually takes code, data, testing, experimentation… things that require time spent at the keyboard.

Taking the credit card example:

Something like “how do we test credit cards” might be a sign of “our documentation isn’t great and it slows me down” but it’s talked about as a discrete item.

Use the conversational format to its advantage. Respond to the question with another question: “why do you ask how we test credit cards?” From there they might reply with something about documentation, or maybe the tests aren’t clear, or they’re not run often enough. Maybe they want ways to run credit card tests on their own workstation as unit tests? From that we can identify a whole bunch of ways to improve the code, the project, and the workflow, or to better align with best practices.
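
To make that last option concrete (every name here is hypothetical, not your codebase): if the payment logic talks to a small gateway interface, a fake gateway lets anyone run card tests locally as plain unit tests, which might be exactly the kind of answer they’re fishing for.

```python
import unittest
from dataclasses import dataclass

# Illustrative sketch only: the point is that payment code depends on a
# gateway interface, so tests can swap in a fake instead of a real processor.

@dataclass
class ChargeResult:
    approved: bool
    reason: str = ""

class FakeGateway:
    """Stands in for the real payment processor during local test runs."""
    def charge(self, card_number: str, amount_cents: int) -> ChargeResult:
        # Pretend cards ending in 0002 always decline, mimicking a sandbox card.
        if card_number.endswith("0002"):
            return ChargeResult(False, "card_declined")
        return ChargeResult(True)

def take_payment(gateway, card_number: str, amount_cents: int) -> ChargeResult:
    """The team's (hypothetical) payment logic, written against the gateway interface."""
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(card_number, amount_cents)

class TestPayments(unittest.TestCase):
    def test_successful_charge(self):
        result = take_payment(FakeGateway(), "4242424242424242", 1999)
        self.assertTrue(result.approved)

    def test_declined_card(self):
        result = take_payment(FakeGateway(), "4000000000000002", 1999)
        self.assertFalse(result.approved)
        self.assertEqual(result.reason, "card_declined")

if __name__ == "__main__":
    unittest.main()
```

Whether the real answer ends up being better docs, a sandbox account, or a fake like this, the follow-up question is what surfaces it.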

Anyway, I’m not a manager :) I’m just thinking out loud, so that next time I’m talking with a team I might join, maybe I’ll understand the dynamics a bit more.
