August 31, 2022 · 7 min read
In the fourth part of our series on successfully scaling digitization projects, we look at value contribution. If we want to replicate the success of our pilot, we have to define and monitor what we want from the pilot project—and look out for everything else our solution delivers.
This is the fourth part of our series on how to escape pilot purgatory and successfully scale digitization projects. You can find the introduction here, and the previous article here.
In the previous installments of this series, we looked at the importance of clear use cases, innovation strategies, and involving stakeholders. This time, we assess how value contribution drives rollout success. Ultimately, we want to replicate the success we find in our pilot. That means our digitization project has to demonstrate added value, and we have to document it.
This step builds on the preceding ones to define and demonstrate added value. Our use case, strategy, and stakeholders set the terms under which our project's value contribution becomes clearly visible and directly recognizable to every stakeholder. When we launch a pilot project with clear objectives and preferred outcomes defined—ideally in measurable terms—we not only know what it takes to succeed, we know exactly when the solution has delivered. Even better: this information allows us to replicate that success in rollout.
This is especially important when rolling out digitization projects. While many organizations slack on monitoring value contribution, digitization projects not only benefit from gathering that information, they can redefine the value we look for, helping us unlock new value we have yet to monitor. According to this study from MIT Sloan Management Review, “...most companies do not deploy KPIs rigorously for review or as drivers of change.” However:
As next-generation predictive algorithms are incorporated into business process planning and design, they seem destined to inspire next-generation digital dashboards. KPIs will consequently offer predictive and prescriptive indicators, not just rearview-mirror reviews. Data-driven companies that leverage these advances by reconceiving their KPIs will enjoy distinct competitive advantages. (MIT Sloan Management Review)
The road to rollout is not always straight and easy. But we should know where it goes. And if we look, we can find surprising rewards along the way.

Cut Glitch some Slack
In September 2011, Tiny Speck launched their new product, a browser-based, massively multiplayer online game called Glitch. This was founder Stewart Butterfield’s baby—he’d previously cofounded Flickr; now he could pursue his passion project. Glitch was an ambitious game: players would be able to design buildings, create locations, develop economies, and form organizations with other players.
Tiny Speck had clear targets for success and tracked the game's launch scrupulously. They had to.
Massively multiplayer games provide persistent worlds, each a host to thousands of players. Each player constantly causes myriad small changes in the world; the game has to be able to save those changes and communicate them to every other player. This is a resource-intensive process—maintaining the necessary infrastructure requires a team of engineers and technicians. To keep all of this running, MMO games have to sell subscriptions.
Glitch hit its launch targets. The back end scaled. They were able to add new features. Most importantly, the number of subscribers matched Tiny Speck's projections.
But only two months later, Tiny Speck unlaunched Glitch. Gameplay issues made Glitch a slow burn. Players were subscribing, but without quick gratification, they weren't compelled to stick around. The game never recovered. For Glitch's development team, this was a heartbreaking failure. But for Tiny Speck, this was a blessing in disguise.
The Glitch team was sprinkled across North America—coast to coast in the United States and spilling over into Canada. To keep the team connected, Tiny Speck developed an in-house messaging system. The Glitch development process provided an organic testing ground for this messaging system to grow based on the team's needs. When they introduced a new feature, they did so based on confirmed demand, and gained instant feedback on what worked and what didn't.
Inadvertently, Tiny Speck was running a pilot project for their real killer app: a Searchable Log of All Conversation and Knowledge. Stewart Butterfield called it Slack.
Slack offered enterprises clear added value. True to the name, Slack users had searchable access to all conversation and knowledge within their organization. That eased collaboration through inter-departmental transparency and reduced communication costs by streamlining and centralizing communication.
When it came to rolling Slack out, Tiny Speck already had a successful model based on their experience developing Slack. Taking scaling literally, they reached out to larger businesses, launching further pilot projects for larger teams. Each time, they collected information on Slack's value contribution to the team, and made adjustments to their product accordingly.
Glitch had taught Tiny Speck the importance of customer retention for SaaS companies. Getting enterprises to use Slack was nice, but what they really needed was for enterprises to keep using Slack. Looking for a metric to define this success, they found the number that marked the tipping point from user to keeper. Once a team had sent 2,000 messages in Slack, they were hooked—invested in using the service. This was the win-win scenario: Slack realized added value for their users, and users knew it.
Tiny Speck continued to take Slack's launch slowly, starting with a preview release, collecting feedback, and always looking out for that magic number. By the time they launched Slack publicly in 2014, they'd seen team after team cross the 2,000-message threshold. They knew they could scale aggressively because they knew that Slack delivered value for their users.
Today, Slack delivers added value to roughly 750,000 organizations. How’s that for a measure of success?
Defining value
The story of Slack provides an illustrative contrast: two projects with much in common, where critical differences led to opposite outcomes.
While Tiny Speck tracked Glitch's launch meticulously, it was all for nothing—they weren't looking for what they needed to see until it was too late. Slack, on the other hand, followed a deliberate launch pattern, marked by a radical awareness of its value proposition that Tiny Speck continually assessed and redefined. The result was a solution that delivered value—both for their users and for themselves.
On a superficial level, calling value a condition of success seems trivial. Look deeper, though, and we see that ‘value’ is a vague term, open to the interpretation of whoever assesses it. If we want to create, add, or multiply value, we first have to decide what value is, based on what we want. Do we want to cut costs? Increase productivity? Make a better product? We also need to keep in mind that any value accrued has to be balanced against the cost of the solution. From this perspective, things like convenience are valuable too.
The earlier steps to rollout success provide us with a groundwork for understanding and defining value. Our use case should pose a problem that our solution can realistically solve. Our strategy includes goals—achieving those goals should deliver value. Stakeholders provide valuable input on what they want and need to meet their goals.
Every AiSight validation phase begins with the use case—here, we’re already thinking about maximizing value contribution. We consider technical feasibility, how critical each asset is, and scalability—there's limited value in monitoring one-off assets. Then we consider the costs we’re preventing: direct costs such as internal maintenance, material, energy, and preparation; and indirect costs such as machine costs, losses through downtime, quality issues, and repair costs. Finally, we look for opportunities for improvement in areas such as overall equipment effectiveness (OEE).
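To make the cost side concrete, here is a minimal sketch of how prevented costs could be tallied per incident. The cost categories mirror the ones above, but the structure, names, and figures are our own illustration, not AiSight's internal model.

```python
from dataclasses import dataclass

@dataclass
class PreventedStoppage:
    """One stoppage the pilot helped avoid, with hypothetical cost inputs."""
    hours: float                    # estimated duration of the avoided stoppage
    hourly_production_loss: float   # indirect cost: lost output per hour (EUR)
    repair_cost: float              # direct cost: parts, labour, external service (EUR)
    scrap_cost: float = 0.0         # direct cost: material lost to quality issues (EUR)

def avoided_cost(stoppages: list[PreventedStoppage]) -> float:
    """Sum the direct and indirect costs of every stoppage the pilot prevented."""
    return sum(
        s.hours * s.hourly_production_loss + s.repair_cost + s.scrap_cost
        for s in stoppages
    )

# Illustrative figures only: two avoided stoppages during a pilot.
pilot = [
    PreventedStoppage(hours=6, hourly_production_loss=1_200, repair_cost=3_500),
    PreventedStoppage(hours=2, hourly_production_loss=1_200, repair_cost=800, scrap_cost=400),
]
print(f"Estimated avoided cost: EUR {avoided_cost(pilot):,.0f}")  # EUR 14,300
```

In practice, the hourly loss and repair figures would come from the plant's own controlling and maintenance records rather than assumptions like these.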
AiSight’s value proposition is clear: unlimited machine uptime. The goal is unambiguous, but measuring something without limits is obviously impossible. We therefore approach validation phases looking for measurable evidence of removed limits. Within the context of digital manufacturing, we have a wealth of targets from which to generate KPIs: efficiency, costs, safety, asset performance, downtime, work order management, inventory management. But these aren’t the whole story.
We also want to find long-term benefits that aren't immediately tangible. Using our solution can deliver increased knowledge of machines and process changes. These emergent insights might not be measurable now, but value addition can manifest in the long run. Measurement is an ongoing process.
Measuring value
Back in part one of this series, The Use Case, we stressed how important it is that our use case be able to deliver measurable value. If it can't, it can't be a good use case. At this point, we're past measurement being a theoretical concern—it's time to measure our value contribution.
When measuring the value contribution of our digitization project, we have many metrics to choose from: return on investment, inventory turnover ratio, OEE, planned maintenance percentage, the list goes on. The metrics we use will depend on how we've defined value.
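To ground a few of those metrics, here are their textbook formulas as a short Python sketch. These are standard, generic definitions with made-up numbers, not AiSight-specific calculations.

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall equipment effectiveness: the product of its three standard factors."""
    return availability * performance * quality

def planned_maintenance_percentage(planned_hours: float, total_maintenance_hours: float) -> float:
    """Share of total maintenance time that was planned rather than reactive."""
    return planned_hours / total_maintenance_hours

def simple_roi(value_delivered: float, cost_of_solution: float) -> float:
    """Return on investment: net gain relative to what the solution cost."""
    return (value_delivered - cost_of_solution) / cost_of_solution

# Made-up figures for illustration.
print(f"OEE: {oee(0.92, 0.88, 0.97):.1%}")                                          # 78.5%
print(f"Planned maintenance: {planned_maintenance_percentage(120, 150):.0%}")        # 80%
print(f"ROI: {simple_roi(value_delivered=140_000, cost_of_solution=50_000):.0%}")    # 180%
```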
Regularly measuring and reporting on our digitization project's value contribution not only informs us of what's working, it keeps stakeholders engaged. Not every stakeholder will have regular contact with the solution, or first-hand knowledge of its value contribution. Metrics are a great way to reliably communicate how the pilot project is going.
There’s a famous quote—usually attributed to Peter Drucker: “What gets measured gets managed.” According to the Drucker Institute, Drucker never said this, and even emphasized the opposite: gathering qualitative information through human interaction is both important and an act of good management. It's important, therefore, to consider feedback from stakeholders who are directly involved with the pilot project—who have first-hand experience. And, ultimately, qualitative feedback from customers and end users is essential.
During AiSight validation phases, we measure our solution's value contribution. We want to know how much downtime we're preventing, how much more efficient maintenance practices become, and how much money that all translates into. But we also recognize the limits of measurement alone. The challenge is clear: we prevent machine breakdowns. So, how do you measure something that isn’t there?
This challenge leads us to always work closely with maintenance teams. First, they provide qualitative feedback. We know our solution works because they tell us. Second, when maintenance teams follow up on our alerts, they confirm our solution's findings. This not only provides the information we need to start measuring value contribution, it cultivates trust, opens conversation with maintenance teams, and deepens our understanding of the entire process. From that position, we can find additional value beyond what we originally defined, and replicate it too.
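One rough, complementary way to put a number on breakdowns that never happened is a counterfactual baseline: compare the downtime a line historically accumulated with the downtime actually recorded during the pilot. The sketch below is our own simplification with hypothetical figures, not AiSight's method.

```python
def estimated_prevented_downtime(
    baseline_hours_per_month: float,  # historical average downtime before the pilot
    observed_hours: float,            # downtime actually recorded during the pilot
    pilot_months: float,
) -> float:
    """Rough counterfactual: expected downtime minus what actually occurred."""
    expected = baseline_hours_per_month * pilot_months
    return max(expected - observed_hours, 0.0)

# Hypothetical example: a line that historically lost 10 hours/month
# records only 9 hours of downtime over a 3-month pilot.
print(estimated_prevented_downtime(10, 9, 3))  # 21.0 hours
```

A gap like that is only a starting point; the confirmed alerts and the maintenance team's judgment tell us how much of it the solution can legitimately claim.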
Repeat success
Pilot projects are experiments—when we document them, we should be able to replicate them. When we know the result of the experiment is good, we should follow through and replicate it, ideally on a larger scale!
Replicating our success on a larger scale—that's rollout. The truth is, scaling comes with the same commitments as a pilot project. The pilot teaches us what to expect from rollout. In each phase of rollout, we confirm our use case, act within our strategy, involve stakeholders, and keep looking for added value.
As long as we follow those steps to rollout success, there's only one more thing to look out for, and that's the subject of our next, and final, article in this series: Integration.
Want to get started with AiSight’s solution?