Implementing a continuous listening approach at your organization can be both exciting and risky. On one hand, the opportunity afforded by getting frequent feedback about the health of the organization is enormous. On the other hand, if you ask employees how they feel more frequently and no one acts on the results, you may end up both hurting engagement and diminishing trust. 

Buy-in from every level of the organization is critical to long-term success.

However, getting everyone on board all at once — convincing employees to take surveys, managers to take action, and leadership to support and act as role models several times per year — can feel like too daunting a task for some organizations to attempt.

Organizations like the Broad Institute of MIT and Harvard have recognized that a phased approach to implementing continuous listening—rolling out to a few groups at a time—can be a more promising path to success.

The Broad Institute’s Journey Begins

The Broad Institute is an “experiment” in a new way of doing science. The institute launched in 2004 to improve human health, using genomics to advance our understanding of the biology and treatment of human disease, and to help lay the groundwork for a new generation of therapies. 

The Broad community is made up of more than 3,000 people, including physicians, biologists, chemists, computer scientists, engineers, and administrative staff, who work together in nimble teams to help solve some of the world's most high-stakes medical challenges.

In early 2016, Kate O’Brien, Associate Director of People Insights at Broad, and her team made the decision to start measuring employee engagement several times per year, enabling managers to take quick action to continually make Broad a better place to work.

Broad has an unusual mix of full-time employees and partners from Harvard, MIT, and Harvard-affiliated hospitals, who make up an academically inclined, data-driven workforce. Because of this unique composition, Kate, who has a background in change management, knew that the whole organization might not be ready to embrace the progressive approach of continuously listening to employees.

“We needed to make sure we rolled this out in a way that wouldn’t stop the whole organization,” says Kate. “Sometimes, data paralyzes an organization when it comes to action. We wanted to be sure we could quickly move to authentic action based on what we learned.”

To ensure success, Kate and her team implemented a phased approach to launching the program using four critical steps that built on the organization’s strengths and anticipated the barriers to adoption. This framework helped Kate choose the right pilot groups, build buzz and legitimacy for the new program, introduce the new process to stakeholders in a low-pressure climate, and drive organic demand across the rest of the organization.

Once parts of the organization became comfortable with the new cadence, Kate's team would roll the program out to everyone with a "Big Bang" launch. This approach, Kate felt, would make leaders feel like they were "opting in," carrying the organization toward the end goal of improving the employee experience.

Solicit Volunteers — Strategically

The first step was to identify groups that would help the organization rally around the new approach, building buzz and identifying organizational quirks and nuances that had the potential to sidetrack new users in the future.

The initial “pilot” groups were based on a few criteria:

  1. Making sure the whole organization was represented, with a group from each category in the workforce. This way, Kate says, they avoided a later debate about whom the approach would work for and whom it wouldn't: it worked for every type of employee.
  2. Finding what marketers refer to as “influencers,” or groups who were curious about the data and likely to encourage participation later on. “We needed people who would rally others,” Kate says.
  3. Choosing a few teams or groups that were skeptical, but willing to go along. These groups were a valuable source of feedback: their honest concerns, and their willingness to share them, helped the launch team iterate and eliminate barriers from the beginning.

Finding the right pilot groups helped build the foundation for the rest of the rollout.

Drive Participation Early On Through a Variety of Incentives

When it comes to surveys, participation is critical in order to have valid, representative results and to be able to release reports to as many managers as possible. Kate bolstered participation through a combination of competition and team-based incentives.

A little competition can go a long way, according to Kate, which is why she encouraged teams and leaders to go head-to-head when it came to participation levels. “Let teams compete on participation rate,” she says. “Foster competition among leaders by sending out response rates” in public channels.

Further, Kate appealed to people’s motivations with a well-timed reminder cadence and raffles for teams that hit certain participation thresholds.

“You need incentives,” she says. “They don’t have to be big. If you think that through up front, it acts as a nudge.”

Let Managers Own Their Results

For Kate and her team, the end goal of gathering more frequent feedback was to have managers take action to improve the employee experience on their own, independent of the People Insights team's intervention. From Day One, they provided all the pilot managers with their teams' data, and let them run their own analyses and interventions (or not).

When Kate's team met with managers after the results were released, they didn't tell the managers what to focus on; rather, they asked thought-provoking questions to get them curious about and engaged with the data. This was a critical signal that the People Insights team didn't own the engagement data or the resulting actions; the managers did.

Watching these groups make use of the data on their own gave Kate’s team a good sense of how the rest of the organization would react down the line—and allowed them to make necessary adjustments to the “Big Bang” launch plan.

Find and Tell Success Stories to Leave Them Hungry for More

Once managers were able to identify what to work on with their teams using the data, many took action to make improvements, providing Kate’s team with a rich repository of potential success stories. To pinpoint these case studies, all they had to do was look at trends on teams across the organization to see whose scores had changed the most.

“It’s very fun when we look through the results and see 5- or 10-point score differences in groups, and we weren’t involved in the intervention,” she says. “Then we go in and say, ‘What happened here?’ and they’re able to tell us. Then we know that the change isn’t dependent on us leading the whole way. When they start to realize that those things made a difference, it causes positive energy, engagement, and creativity.”

Once the team learned the stories, they shared them with leadership and the wider company. For leadership stakeholders, these stories demonstrated the value of the new process, mitigated likely objections, and drove excitement about the possibilities the program afforded.

Kate says: “[We were able to show that] these small interventions that are happening do statistically make a difference. Now we have different vignettes we can talk about and spread around — it gave us even further validation that filling out the survey is valuable, and also what actions make an impact, so you don’t have an excuse to say, ‘It won’t work for us.’”

Once Kate's stakeholders saw the proof in the data, and how taking action was affecting teams across the organization, they began to ask when their teams would be included. When leaders made the connection between their teams' outcomes and how committed and engaged their people were, they tended to opt in, and fast.

Embedding Frequent Feedback into the Heart of the Organization

One key lesson for Kate? Make the process iterative. Set out with an expectation or goal of how quickly and easily you’ll be able to roll out the program to the organization, but be open to adjusting based on the feedback you receive. Most important, remember that the overall goal of implementing more frequent feedback is to better understand the health of the organization — and to continuously improve so that employees can do their best work.

“I’ve heard employees say, ‘I think it’s so great that Broad actually cares what we think.’ You can see it on their faces when we talk about the different things we’re doing as a result of the feedback,” says Kate. “As an institution, we are working to understand the biology behind disease, with the goal of helping drive treatments and cures. People really think [about their work] in terms of how important it is, and they take it to heart when they know that the organization truly cares about the wellbeing of everyone who works here.”

The rest of the Broad story is still being written, but the phased approach to implementing continuous feedback has energized leaders at the complex organization around engagement, and has set the stage for future success. While Broad has always cared about its people, now the organization has the data to know where to focus and the environment to make continuous improvements faster than ever.