Kanban University

Evangelizing A Product Concept By Validating A Design

January 17, 2016 by David Anderson

The following article was first published June 7th, 2000. It describes a project to create a design for a set of wireless data applications for Nokia at their Americas office in Las Colinas, TX. This edition has been slightly edited for style, and in content to keep it relevant for an audience 16 years later. The original did not contain the name of the client, the team members or the narrative of the project, in order to protect client confidentiality. Given that almost 16 years have passed and the client is no longer in business, I feel it is reasonable to elaborate on the original article with the narrative detail.

At the time, I thought of what we were doing as Agile UX, because the design approach is iterative and rapid. Actually, we didn’t have the “Agile” word until 2001. The style of this process felt like eXtreme Programming for UX. It didn’t occur to me to think of it as Lean UX. The client was Nokia, at the time, the world’s largest mobile phone company. They were extremely secretive about their future plans. This wasn’t a startup even though the project was a “startup” within Nokia and the domain – wireless data applications – was bleeding edge at the time. The technology was WAP and our purpose was not only to create a business Nokia could exploit but to enable Nokia to use our work as an archetype to encourage network operators and 3rd parties to develop usable applications on WAP. [For those not familiar with WAP, it was a 2G wireless data technology which used a text-based screen format, via the hard key interface of what were known as “candy bar” phones.]


In larger technology companies it can often be difficult to develop an understanding of the advantages of doing good product design early. This is even more true of UX design processes that should happen close to the beginning of a project, while the product is being defined and the requirements written. At this stage there is often no funding to build prototypes or to do testing, and with recognizable brands it often isn’t acceptable to test something in the market. It is not unusual to find a number of very skeptical people around who question the time, budget and effort which must go into a UX design for an innovative new product or line of products, when there are easy, low-risk alternatives for enhancing existing products.

So how do you overcome this skepticism? How do you sell early UX design to a skeptical audience? Working at Nokia’s Americas HQ [in 2000] I discovered that we can win people over by using usability testing to give influencers and decision makers experiential immersion with a working prototype.

In human factors and usability engineering, employees of the firm are considered “invalid” participants in tests because they might have foresight into how the product operates; the idea is that employees spoil the scientific nature of the test results. [In 2000 human factors and usability testing was very much a science conducted by professionally trained people; in 2016 that is much less so.] However, by carefully selecting a set of “invalid” test participants, you can sow the seeds for future success with the product and gain buy-in to fund the project to completion.

This strategy is not without its risks. If your design isn’t good, the project may get killed a lot earlier. Visibility can be a double-edged sword. This article seeks to advise you on how to select the candidate evangelists and how to manage the risk of negative reactions by ramping up the introduction of participants in product testing as you refine the level of fidelity and usability. The goal is to gain an influential band of company evangelists for your UX design. These people should become the ones who go forth and spread the word, enabling you to get the budget and schedule you need to create a production product.

Creating WAP Applications for Nokia

At Nokia’s Americas HQ in Las Colinas, TX, a young business school graduate who held a director-level position in the business development and product strategy unit was tasked with proving the value of, and providing a demonstration platform for, the new wireless data technology known as WAP. He hired the consulting firm I worked with to develop this with them. We put together a small cross-functional team of 4 people and deployed to cubicles on the same floor of the building in Las Colinas, TX. We were surrounded by people who did sales, marketing and strategic planning for Nokia mobile phone sales all across Latin America. For many people on the floor, Spanish was their native language. Our team consisted of: me, officially as business analyst (but later UX designer); Carly, a technical writer; Scott, a web developer; and Terry, a usability engineer.

The goal was to develop a suite of location-based services and to include a transactional or billing component. The purpose was to demonstrate to wireless network operators that money could be made from wireless data applications and that consumers would find the services valuable.

Exploring the Market and Product Opportunity

In the beginning we sat down with 10-12 product managers, strategic planning and marketing people. We didn’t make much headway using our firm’s playbook, which was based on requirements capture techniques described by people such as Gause, Weinberg and Gilb. The problem was that the field was too nascent and the clients had little concept of what they wanted save for the high-level statement about location-based services and billing. The breakthrough came when I introduced them to the technique of writing personas. We developed a persona for each market segment. One we focused on a lot in the early days of the project was the “soccer mom” market segment.

It was on this project that I developed my Lifestyle Snapshots technique which was later to be featured in Tamara Adlin and John Pruitt’s book, The Essential Persona Lifecycle. The scenario I wrote up for the book was different from those that I described on my own blog in April 2000 [to be reposted at this site soon].

Once we had a set of lifestyle snapshots for each persona, we began to look at opportunities for the technology to intervene in the lives of the personas – to do things for them that we thought would be valuable. We prioritized this list of opportunities and developed usage scenarios for them.

This whole process took less than 2 weeks as we bootstrapped the project. It was easier to block out chunks of time to pick the brains of the strategic planning, marketing and product management people and gain some consensus and agreement on the specifics – an outline of the product – which personas, which opportunities for intervention with technology, which usage scenarios.

Then the project moved into a new phase.

Rapidly Developing and Validating a Product Prototype

In phase two, we switched to a mode where Terry and I would work together to design the screens for a usage scenario. While I was working on the designs, Terry would devise how to test them. We would collaborate for perhaps 30 to 60 minutes, work alone for maybe 90 minutes then reconvene to compare our work and refine it with small iterative changes. By lunch time each day, we had a testable design, and a set of tests. Scott had been asked to work a “late shift” and he didn’t start until after lunch. We handed off the design to him and asked him to code it up. When we returned the next morning the code was working. It took a few days to reach a critical mass and get a little bit of a pipeline going. At that point we started to schedule usability tests for several hours each morning. Over lunch Terry and I would analyze the results and iterate on the designs. This could take until 4pm in the afternoon in some cases. We’d hand off modified designs to Scott and collect the working code the next morning. We iterated a version of the product every day and tested it with a fresh set of users, every day. All of the participants in the tests were Nokia employees or their relatives and they all signed NDAs. Throughout the process, Carly kept pace with us with user documentation, a formal specification (we were an outsource consulting firm after all), and content for the screen/applications.

To give you an idea of the scenarios we were capable of delivering: “my kids are hungry. I just picked them up from soccer. Give me directions to the closest McDonald’s with a jungle gym facility.” This scenario was well within our range and capability to deliver – although the data and directions were mocked up for our prototype, the business development people had sources for such data even in 2000.

Test as early as possible

We didn’t have the word “Agile” in 2000. However, the process we were developing was partly inspired by eXtreme Programming. Where I had used an FDD-inspired UX process on other line-of-business apps, for this exploratory (“startup”) environment I needed something iterative rather than incremental. We got that from the cross-functional team and the daily work cycle of design-code-test.

Terry and I realized we wanted to test as soon as possible – just days into the 2nd phase of the project and every day from then on. We wanted to be seen to provide frequent, tangible deliverables of real business value [yes, I used that phrase in 2000.] The sponsor was spending budget on this exploratory work and we wanted to show him something quickly. We wanted to build trust with results.

Guideline: Test as early as possible and never later than 3 months into a very large project.

More than an MVP – Enough Functionality to Create Evangelists

The size and scope of the project will determine exactly what you are testing within a given time period. For smaller projects you may be able to test the real product in the production environment. For a big project, you are testing a prototype, possibly in a mockup environment. It may be a high-fidelity prototype or a low-fidelity one – in 1998 I was using cardboard cut-out mockups at a bank in Singapore, but without any formal usability testing to provide quantitative feedback. The prototype is likely to feature only a limited set of functionality. Ideally this would represent sufficient functionality that you could launch it for a single market segment. [In 2016, the trendy term for this is MVP – minimum viable product – and often it is only sufficient to give insight into the market, not sufficiently complete to launch to a segment.]

If our goal is to turn the testers into evangelists then it is essential that the prototype you test has been developed from a thorough user-centered design perspective and has some functional integrity. For example, a home banking web application should allow you to perform functions such as an account balance check, inter-account funds transfer, and bill payment, completely end-to-end. It must look to the user like a system which has real potential and true business value. The prototype must deliver on one or more goals for each persona being tested. Achieving goals is a means of delivering true value. Goal achievement is something tangible for the user. It turns them into believers. We aren’t just validating small aspects of functionality, we are creating political capital.

Why Are We Wasting Money On Something Fake?

Some of you may meet resistance to the notion of a prototype, which by implication is a ‘throw-away’. It is worthwhile remembering the wisdom of Fred Brooks: “Plan to throw one away; you will, anyhow.” You can throw away an early prototype and get the design correct, or you can wait and take the high risk of having to throw away the production system because you didn’t get the design correct. Writing in the early 1970s, Brooks was advising us to validate our product designs early or waste lots of time and money reworking a production system later.

When uncertainty is high, options on future products have significant value. Hence, you ought to be willing to spend more to purchase the option to deliver the right thing, to the right market at the right time. The cost of the prototype is the cost of purchasing the option. It is real option theory in action.
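As a rough illustration of the option arithmetic, compare the expected cost of skipping early validation with the cost of buying the option. All figures below are invented for the sketch; they are not taken from the Nokia project:

```python
# Illustrative (invented) numbers: treat the prototype cost as the
# price of an option on the full product.
p_wrong_design  = 0.5          # chance the unvalidated design is wrong
cost_prototype  = 100_000      # price of the option (throw-away prototype)
cost_production = 5_000_000    # cost of building the production system
cost_rework     = 3_000_000    # rework if the design is wrong in production

# Expected cost without early validation: build, then maybe rework.
without_option = cost_production + p_wrong_design * cost_rework

# With the prototype, assume design errors are caught and fixed cheaply
# before production, so the rework term (largely) disappears.
with_option = cost_prototype + cost_production

print(without_option)  # 6500000.0
print(with_option)     # 5100000
```

Under these assumed numbers the option is clearly worth buying; the higher the uncertainty (the probability of a wrong design), the more the option is worth.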

Test in at least 3 phases

So you have a system or prototype ready for usability testing. You have met the criteria to be frequent, tangible and of true business value. Now it is time to test. It is time to consider how to minimize the risk of showing an early product to skeptical influencers and decision makers at your firm.

You are trying to achieve a political win for design and design processes. You risk showing a poor design and a badly written piece of software to a skeptical audience. You must consider that bugs in the code will reflect badly on the design and on the whole principle of design. To manage this risk, we took a 3-phase approach.

Guideline: You must deliver a complete design for at least one user goal before testing.

Test with “friendly” users first

We started with the clients – the marketing, product management and individual contributors from strategic planning at Nokia.

Initially select a few members from your own team or closely related people: members of your QA team; technical writers; analysts; people who supplied requirements; marketing people; sales engineers; anyone, as long as they are closely related to your project or product and have some skin in the game. They should feel that they have had a personal input into what has been produced so far.

You need at least 5 and ideally 8 of these friendly test participants. If you are to get 8 sets of test results in half a day, you need small test scenarios that you can complete in 30 minutes, or you need multiple test labs and staff to run them in parallel.
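The capacity arithmetic behind that guideline can be sketched in a few lines of Python. The function name and the four-hour half day are my assumptions:

```python
def half_day_capacity(scenario_minutes, labs=1, half_day_minutes=240):
    """How many test participants fit into one half day of testing."""
    return (half_day_minutes // scenario_minutes) * labs

# One lab, 30-minute scenarios: 8 participants fit in a half day.
print(half_day_capacity(30))          # 8
# One lab, 60-minute scenarios: only 4 fit...
print(half_day_capacity(60))          # 4
# ...so you would need a second lab running in parallel.
print(half_day_capacity(60, labs=2))  # 8
```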

Run the usability test in the normal manner. Typically we found 3 or 4 really stupid design errors, or poor choices, which needed fixing. We delivered those to Scott when the testing session ended. With a formal usability lab with a 1-way mirror, it is possible to prepare the top-priority results from the test while the tests are happening. You don’t need to wait for formal analysis of the results post hoc.

Phase 1 doesn’t have a time period. Instead it has exit criteria. The primary exit criterion is that you have a stable working prototype without functional defects and with all the obvious face-palm design flaws removed. When you have this: end of phase 1.

Test with “real” users next

Now it is time to run the proper user testing. You will have selected a number of users, perhaps several sets, based on target market, demographics, known user groups, professions, job specifications, etc. Bring these people in and run the same tests on your new design.

At Nokia, we did this with family members of staff from the business development unit and had them sign NDAs. Although they were family members, they met the market segment criteria to represent the personas we were testing.

We wanted to refine our design every day. We felt that 5 to 8 data points was enough. [At the time, this flew in the face of established usability engineering as a science, where much larger sample sets of test results were expected to draw reasonable conclusions.] In other words, each time you have enough data, make an improvement to your product; iterate the design quickly. Naturally, you will have to prioritize the changes; some major design ideas may need to be dropped. You will also need to select your developers carefully. Not every developer likes to make rapid changes like this. Select an individual who has a “hacker”-like mentality and just loves fast lifecycle iterations. [Note: these 2 sentences were written in 2000, before we had Agile and when this style of working was less well accepted in the software development community.]

A worthwhile tip is to leave 1 day free in your test schedule for every 5 to 8 test participants. This will give your programming team an extra day to make and test any changes that you ask of them. In other words, develop a bit of a pipeline. Every other day, have a different area of the functionality to test, to extend the lead time for bigger changes. [Note: this idea of pipelining changes and extending the lead time beyond the test cycle cadence is a natural fit for Kanban. We didn’t have the notion of kanban systems for product development in 2000.]

Guideline: Iterate quickly; make several well-advised design improvements during testing with real users.

Guideline: Select developers who are suited to the nature of prototyping and rapid design changes.

The result of this phase should be some solid usability test results and a much-improved design which has been shown through the testing to deliver on the established user goals for the design. End of phase 2.

Phase 3: Selecting the Evangelists

The third phase is where we re-run the tests but this time using company employees who are not directly involved in the project or with the product. They have no skin in the game. Our goal is to turn these people into evangelists. We will do this by demonstrating the true business value of the design and by emphasizing how you got to such an elegant design. As we gain in confidence with our results, we will seek to invite higher and higher level managers to test the product design prototype.

Evangelists must have influence

As an initial strategy for selecting evangelists, you may like to consider the question, “Who do we need to influence?” It might be the third-line manager, your boss’s boss, or maybe the VP of Finance or the Director of Sales, or maybe the technical support team who are tired of supporting terrible earlier products.

So the first approach might simply be to invite them to the tests. Have a senior official send them an email or a letter, saying that they have been selected to participate in the usability testing of the next generation product and that this is their opportunity to get an early “heads up” on what is coming through the development pipe. That usually works.

If these people don’t fit your target demographic then consider inviting their kids, or parents, or spouses, or golf partners, or whoever will match your demographic but is closely related to the people you need to influence.

Using the Law of the Few

There is a second, more scientific approach, to selecting the evangelists. We can use The Law of the Few described in “The Tipping Point” by Malcolm Gladwell. Gladwell describes three key personality types which we can use to communicate our message. These are Connectors, Mavens and Salesmen. Just a few of these people ought to be enough to tip the skeptics and see that the design message is evangelized throughout the organization. Connectors, Mavens and Salesmen are the people with influence in a community. These are exactly the type of people to whom we need to sell our new design and its benefits.

Connectors are the type of people who know everyone. They are the people you go to, to get the latest gossip in the office. The people at cocktail parties who introduce you to someone that “you really should meet”. Everyone knows one or two connectors because the connector makes it his/her business to know you.

Mavens are people who know lots of stuff rather than lots of people. A company maven may well be a product manager. Someone who is employed to know all about all the competitors and their products. Or it might be someone in a development role, who knows lots about technology or lots about the computer network. The kind of guy that you call when your computer is broken. These kinds of people get around the company and they get to know lots of people. The network guy fixes your computer but he also fixes the CEO’s computer!

Salesmen are the proverbial bullshit merchants who just don’t take “No” for an answer. They could “sell sand to Arabs” and “ice to Eskimos.” They will tend to latch on to one single thing which they see as the advantage and they will go out and sell that advantage. Finding these people is easy. They probably sold you something recently like a lottery ticket for a charity or a share of a syndicated race horse. Influencing them isn’t hard either. Just make sure that they can see that one key advantage and let them go after it.

Usability Testing with Evangelists

So now you have selected the final set of test participants. You have arranged for a senior figure to invite them to the tests. You have refined the tests you will be asking them to complete and the design that you will be showing them. You are ready for phase 3.

It is important that potential evangelists are tested with goal directed questions. Ask them to solve problems of tangible business value. Something which they can see offers value to the user and either profit potential or improved service, to the business.

Run the test much like any other test. Offer some play time at the beginning, go through suitable introductions and make the test participant feel at ease. Present the test questions as normal.

If you care about usability engineering as a science then you will want to keep these test results separate from the phase 2 test results. These phase 3 testers are potentially invalid participants and would spoil your scientific data from phase 2. However, any negative results obtained from this third group should still be considered valid, and you should still act to fix problems uncovered by the evangelist group.

After the questions are complete, ask the participant to provide feedback. Let them talk, let them say what they think. This is your chance to turn them into an evangelist.

It is key that you take the opportunity to sell the user centered design process which led to your product design. Hopefully they will give you an invitation by complimenting the design, or saying something like, “this is much better than KML Corp’s competitive offering”. It is key that you sell the science of the usability testing. The test participant must not leave with the notion that they just participated in a marketing focus group. Emphasize that the design team derive important data from formal testing and that successive rounds of earlier testing have provided numerous improvements already.

If you have done a good job, then your participants ought to leave the test with a warm feeling. They realize that they achieved a number of goals with the new product and that those goals were achieved as easily as might be expected given the constraints of the technology [which with WAP and 2G candy bar phones, were considerable.] Hopefully, they may consider that the design is superior to previous products and better than the competitors’ products. If this has happened then they will go forth among their colleagues and they will tell them, “the new product – I saw it – excellent. Can’t wait to see the sales figures”.

With a message like that circulating, you will have no problems in the next budgetary round when you need to ask for renewed funding for Interaction Design and Usability Engineering.

Epilogue

So what happened at Nokia? Given what has happened at Nokia in the intervening 15 years, this may come as no surprise.

So we had a working prototype – a set of WAP applications with stubbed-out back-end functionality but nonetheless working. To convert this prototype into a production system was going to take some time and some money: perhaps a department of 15 to 20 people for some period of months. With ongoing support and subsequent versions, we were looking at a few million dollars per year over 2 to 3 years, and hence a total budget of $10-20 million. [High-tech workers in Dallas are not cheap and in the tech bubble of 2000 were hard to come by.]

Permission was needed. It was a big portfolio-level decision. Senior managers were flown in from Finland. They were shown the prototype and given the full pitch on market segmentation, go-to-market strategy and so forth. It became clear during the meeting that they had not seen anything like it at HQ in Finland. No one had been able to demonstrate real consumer value in WAP 2G data. Doors were closed. When they reopened, the decision had been made not to fund the project. We the consultants went home – job done. [In fact, I left to join Sprint PCS, to design wireless data applications and take an influential role in their 3G rollout in 2002.] The business development people dusted themselves off and started to work on another idea. In real option theory terms, Nokia had chosen not to exercise this option but to discard it. Life goes on.

[The original of this article appeared on June 7, 2000 at http://www.uidesign.net/papers/2000/evangelize.html and can be found via Internet archival services]

Filed Under: Foundations Tagged With: Lean UX, Product Design, Product Management, Real Options, Usability Engineering, User Experience, UX

Defining KPIs in Enterprise Services Planning

January 15, 2016 by David Anderson

All KPIs should be fitness criteria metrics. All KPIs should be recognizable to your customers and address aspects of how they evaluate the fitness of your product or service. If your customer doesn’t recognize or care about your KPIs then they aren’t “key” performance indicators; they may indicate something else, but they aren’t predictors of how well your business is performing or is likely to perform in future.

This post follows my recent posts on Market Segmentation and Fitness for Purpose Score, explaining how we define Fitness Criteria Metrics. These metrics enable us to evaluate whether our product, service or service delivery is “fit for purpose” in the eyes of a customer from a given market segment. They are effectively the Key Performance Indicators (KPIs) for each market segment. All other metrics should either be guiding an improvement initiative or indicating the general health of your business, business or product unit, or service delivery capability. If you can’t place a metric in one of these categories then you don’t need it.

[Image: Project Manager and Mom]

Fitness Criteria Metrics

We left our story of Neeta, the busy project manager and mother of 4, having established that she represents a member of two market segments – the “working late, ordering food for the team in the office” cluster, and the “feed my children, it’s an emergency!” cluster. We also determined that the main metrics of concern are: delivery time; quality – both functional quality (the menu and order accuracy) and non-functional quality (hot, tasty, artisan, gourmet, or maybe not); predictability (of delivery time, and perhaps of quality too); and safety or regulatory concerns, perhaps including trust that organic ingredients were used. In our example, these 4 main metrics apply whether Neeta is ordering pizza for the team or for her family. However, the satisfaction thresholds vary significantly based on the context.

When Neeta orders for the team, they are happy to wait an hour to 90 minutes for delivery. If minor errors are made in order accuracy, it is unlikely to matter too much. However, there is a threshold: if some of the team are vegetarian then there must be some vegetarian pizzas delivered, or we’d consider the order a failure. The menu is important in the sense that the geeks want a more exotic set of choices. They are fussy about their non-functional requirements: they want the pizza hot, tasty, artisan and gourmet. They aren’t too particular about predictability of delivery time or order accuracy. A 30 to 45 minute window for delivery is probably acceptable, and a few minor errors in the order are also acceptable. They do care about health and safety in the restaurant, but they only care about the traffic safety of the delivery boy insofar as it doesn’t endanger the quality of the pizza on arrival.

When Neeta orders for her family, the threshold levels are significantly different. The kids are really hungry and impatient, and now that they know pizza is coming, they are super-excited about it. Fast delivery is essential. Predictable delivery is essential: the 6-year-old is now running his countdown timer on his iPad. The menu is important, but only in so far as it offers simple plain cheese pizza with tomato sauce. The kids are so excited that they won’t mind if the pizza is a little cold on arrival, nor will they mind if it got shaken up a bit during transportation. Order accuracy is important to the kids: if it isn’t a plain cheese pizza they will be extremely upset and unlikely to eat it at all. Meanwhile, mommy worries about the safety and regulatory concerns; she may have a preference for a restaurant that promises to use organic ingredients. The kids have no concept of whether to trust this assertion. The restaurant said it was organic – mommy can worry about whether that is true or not.

So in summary…

Kids

  • Fast delivery
  • 100% order accuracy
  • Not too concerned about non-functional quality
  • Predictable delivery
  • Not too concerned about safety or regulatory concerns

Geeks

  • Longer delivery acceptable
  • Some errors in order accuracy acceptable
  • Extremely fussy about non-functional quality
  • Wider tolerance for unpredictable delivery
  • Not too concerned about safety or regulatory issues

In 4 of these 5 categories we have significantly different fitness evaluation thresholds for these two segments.

If we are to successfully serve both segments, driving improvements so that we have high levels of customer satisfaction in both, we must use the higher threshold for each metric as our benchmark. Alternatively, we need to segregate our service delivery by segment. We can do this by introducing two classes of service, one for each segment. This might work through pricing: for example, would Neeta pay a premium for guaranteed fast delivery for the kids? Or it might work through capacity allocation and demand shaping based on it. We might, for example, refuse to take large commercial destination orders during peak times for domestic orders. In this example, we trade off educating our corporate clients to order earlier in the day against the risk that they will go elsewhere. We do this because we value the domestic market.
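As a sketch, the segment-specific thresholds could be captured as simple data and each delivery evaluated against them. The metric names and threshold values below are illustrative assumptions based on the narrative, not figures from the original:

```python
# Hypothetical fitness criteria thresholds per market segment.
THRESHOLDS = {
    "kids": {
        "max_delivery_minutes": 30,    # fast delivery is essential
        "min_order_accuracy": 1.00,    # plain cheese or nothing
        "delivery_window_minutes": 5,  # the 6-year-old is counting down
    },
    "geeks": {
        "max_delivery_minutes": 90,    # happy to wait an hour or more
        "min_order_accuracy": 0.80,    # a few minor errors acceptable
        "delivery_window_minutes": 45, # wide tolerance for unpredictability
    },
}

def fit_for_purpose(segment, delivery_minutes, order_accuracy, window_minutes):
    """Evaluate one delivery against a segment's fitness thresholds."""
    t = THRESHOLDS[segment]
    return (delivery_minutes <= t["max_delivery_minutes"]
            and order_accuracy >= t["min_order_accuracy"]
            and window_minutes <= t["delivery_window_minutes"])

# The same delivery can be fit for one segment and unfit for another.
print(fit_for_purpose("geeks", 75, 0.9, 40))  # True
print(fit_for_purpose("kids", 75, 0.9, 40))   # False
```

Serving both segments from one undifferentiated service would mean satisfying the stricter of each pair of thresholds; two classes of service let each segment be measured, and priced, against its own.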

There is no formula for this. No right or wrong. We make choices for our business in terms of which segments we wish to serve and how well we wish to serve them. Choices come with consequences. We need to be prepared to live with these consequences and be willing to be accountable for them.

The metrics and threshold values we’ve developed for each of our segments should become the KPIs for our business and specifically in this case, the pizza delivery service. We should put in place the mechanisms, instrumentation and customer feedback, to measure these metrics. We can use the results at Service Delivery Reviews, Operations Reviews and Strategy Reviews to determine how well we are serving our markets, where we need to make improvements and which segments we wish to serve.

In my next post in this series I will take a look at the other types of metrics: those which guide improvements; and general health indicators. My experience working with clients in 2015 is that most existing KPIs are in fact merely general health indicators. As a consequence these businesses are optimizing for the wrong things and customer satisfaction and their ability to survive and thrive in the market is impaired. All KPIs should be based on threshold values for fitness criteria metrics derived from analysis of the market segments you choose to serve.

Read Part 2: Your KPIs Probably Aren’t! But What Are They?

Filed Under: ESP Tagged With: Enterprise Services Planning, ESP, Fitness Criteria Metrics, Fitness for Purpose, Kanban, Key Performance Indicators, KPIs, Marketing, Strategic Planning

Market Segmentation for Enterprise Services Planning

January 14, 2016 by David Anderson

I realized after posting my article on Fitness For Purpose Score that it isn’t reasonable to expect readers to know the background and context that stimulated it. It isn’t reasonable to assume readers are up-to-date with speeches I’ve given over the last two years covering Evolutionary Change, Fitness for Purpose and Enterprise Services Planning. So I felt some explanation of how we do market segmentation for ESP was in order, to provide better context for Fitness For Purpose Score.

How do we know whether a change in our service delivery capability represents an improvement? This is the fair and reasonable question that should drive our decision making about how we manage, how we make decisions, and which changes we choose to invest in, consolidate and amplify. In evolutionary theory, a mutation survives and thrives if it is “fitter” for its environment [this is actually a gross simplification but it will do for an introductory paragraph on the related but different topic of market segmentation.] So how do we know whether or not a change to our service delivery capability makes it fitter for its environment? What do we mean by “environment” in this context? “Environment” is the market that we deliver into. So “fitness” is determined by whether the market feels our product or service, and the way we deliver it, is “fit for purpose.” To understand “fitness,” and so enable and drive evolutionary improvements, we first need to understand our market and what defines “fitness for purpose.” To do this we segment the market by customer purpose and the criteria with which customers evaluate our “fitness for [that] purpose.” …

At Lean Kanban Inc we create our market segmentation by clustering narratives about our customers. We do this by telling stories about them. The technique is a direct application of Dave Snowden’s technique from his Cynefin Framework. To explain this in our training and in the speeches I linked above, I tell the tale of Neeta, a fictional project manager and mother of 4. Neeta is based on a real woman who works in the Canadian public sector and has considerable Kanban expertise. Neeta needs to order pizza for delivery to her office to feed her team who are working late against a deadline. On another evening the same week, she needs to order pizza for delivery to her home to feed her children who are hungry because she came home late. Neeta doesn’t represent one market segment, she represents two! The reason for this is that the purpose, context and fitness selection criteria are different in each of the two contexts.

Project Manager and Mom

When Neeta orders pizza for her children she needs:

  • fast delivery – ideally within 20 minutes;
  • order accuracy – the kids only like plain cheese pizza;
  • not much in the way of non-functional quality – the kids will eat cold pizza so long as it is cheese pizza;
  • a simple menu and predictable service – she wants delivery when promised, because the kids need their expectations set and they are unforgiving;
  • a restaurant that is clean and can be trusted to follow health and safety regulations;
  • possibly organic ingredients, because she is feeding her family.

When Neeta orders pizza for her office her needs are similar, but some of the criteria vary and the threshold values are different:

  • delivery within 90 minutes is acceptable;
  • order accuracy is important, but one or two mistakes won’t make a big difference;
  • non-functional quality matters – hot, tasty pizza with gourmet flavors and exotic ingredients is required for these discerning geeks;
  • predictability matters less, so long as the pizzas show up eventually – the team is busy;
  • she still cares whether the restaurant meets health and safety legislation standards, but organic ingredients probably aren’t so much of a concern.

In other words, Neeta decides whether she likes the pizza service and whether she will use it again, based on two different sets of criteria, depending on her context. This may lead her to use different service providers for each purpose, if one provider can’t meet both sets of her needs. As a result Neeta represents two segments, not one.
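One way to picture this is as two sets of fitness criteria thresholds for the same customer. A minimal sketch: the 20- and 90-minute delivery thresholds come from the narrative above, while the accuracy thresholds and the dictionary shape are my own illustrative assumptions.

```python
# Two segments for one customer: each segment is a set of fitness criteria
# with its own threshold values. Accuracy thresholds are assumed for the sketch.
segments = {
    "domestic (feeding the kids)":  {"max_delivery_min": 20, "min_order_accuracy": 1.0},
    "commercial (feeding the team)": {"max_delivery_min": 90, "min_order_accuracy": 0.9},
}

def fit_for_purpose(segment: str, delivery_min: float, accuracy: float) -> bool:
    """A delivery is 'fit' only if it meets every threshold for that segment."""
    t = segments[segment]
    return delivery_min <= t["max_delivery_min"] and accuracy >= t["min_order_accuracy"]

# The same 45-minute, perfectly accurate delivery is fit for the office
# but unfit for the kids: one customer, two segments.
print(fit_for_purpose("domestic (feeding the kids)", 45, 1.0))    # False
print(fit_for_purpose("commercial (feeding the team)", 45, 1.0))  # True
```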

Pizza Boy

How would you know that Neeta represents two segments and not just one? Traditional demographic profiling wouldn’t give you this insight! Well perhaps she uses different credit cards or payment mechanisms depending on context? And the delivery address is different. So there are some obvious clues. However, the people in the business who know Neeta’s story are the person who took her telephone order, and the delivery boy who delivered the pizzas. It is these frontline staff who understand the customers best.

If you are to cluster customer narratives to determine segmentation, you need to bring frontline staff into the storytelling sessions. You need to listen for context, purpose and selection criteria, and create segments based on the affinity of these aspects of the market. Give each cluster a nickname. Recognize that an individual customer can appear in multiple segments depending on their context on a specific day and time.

The challenge of this for many companies is that the people who best understand the customer’s context, purpose and selection criteria are often the lowest paid, shortest tenured, highest turnover staff in the business. Foolishly, many companies undervalue their customer-facing staff. Traditional 20th Century service delivery businesses take a transactional view of customer interaction rather than a relationship view. If you value repeat business, and you value the insights that will enable your business to evolve and survive in a rapidly changing market, then you need to value customer-facing people and involve them in your strategic planning.

Once you have the clustered narratives defining your segments, select the segments you want to serve. This is a key piece of strategic planning. Which businesses do you want to be in? Which don’t you care about? Which do you want to actively discourage? Based on this you will develop the Fitness Criteria Metrics to drive your management decision making and evolutionary improvement.

Designing Fitness Criteria Metrics, choosing their threshold values, and making them your KPIs (Key Performance Indicators) will be the subject of my next post.

Filed Under: ESP Tagged With: Enterprise Services Planning, ESP, Kanban, Marketing, Strategic Planning

Fitness For Purpose Score and Net Fitness Score

January 11, 2016 by David Anderson

Regular followers of my work will know that I have expressed dissatisfaction with Net Promoter Score (NPS). Steve Denning in his book Radical Management suggested NPS was “the only metric you’ll ever need.” Steve is a writer for Forbes, an investment magazine. High NPS scores correlate with high stock prices and hence from an investor’s point of view NPS is an important metric. If you are a CEO of a public company, who receives a large portion of your salary as bonuses based on changes in the stock price then NPS is an important metric. However, many of my clients who collect NPS data report to me that it isn’t an actionable metric. NPS merely tells you whether you are winning or losing. It doesn’t tell you what to do!

There are some antidotes to NPS’ failings. The second question, which asks respondents to “tell us why you gave the rating in the previous question?”, provides the opportunity for short narratives. These micro-narratives can be clustered using a tool such as Sensemaker, and useful information can be extracted; there may be actionable information hidden in the clustering of narratives. This advanced use of NPS information is still very much in its infancy and not readily available to most businesses.

I’ve decided to introduce a new metric into our own surveys. I call this Fitness For Purpose Score. I am hopeful this will become a key strategic planning tool in Enterprise Services Planning.

Fitness For Purpose Score

It is often true that businesses do not know the purpose for which a customer consumes their product or service. A product or service designed for a specific purpose may get used for something else. One of the more famous examples is washing machines used to make lassi yoghurt drinks for Indian restaurants. In evolutionary science this is known as an exaptation: something designed for one purpose is adapted for another. To have actionable metrics for product or service delivery improvement, you need to understand the customer’s purpose in consuming your offering. When you understand this purpose, you can create the appropriate fitness criteria metrics. With Enterprise Services Planning (and Kanban) we use fitness criteria metrics to drive improvements. Fitness criteria metrics are used at all levels to compare capability with expectations. Fitness For Purpose Score is intended to help us understand purpose and whether or not our current capability meets expectations. If it doesn’t, we can probe for thresholds to establish new fitness criteria metrics.

This is how our sales and marketing team will be using Fitness For Purpose Score in our own surveys in 2016.

Question 1: What was your purpose [in attending our training class? What did you hope to learn, take away, or do differently after the class?]

Question 2: Please indicate how “fit for purpose” you found [this class]?

  5. Extremely – I got everything I needed and more
  4. Highly – I got everything I needed
  3. Mostly – I got most of what I needed but some of my needs were not met
  2. Partially – some of my purpose was met but significant & important elements were missing
  1. Slightly – I took some value from it but most of what I was looking for was missing
  0. Not at all – I got nothing useful

Question 3: Please state specifically why you gave your rating for question 2

Questions 1 and 3 specifically ask for short narrative answers. These micro-narratives can be clustered. Question 1 will provide clusters of purpose which can be validated against our existing market segmentation and may reveal new segments, while question 3 will provide clusters of actionable information for improvements, and possibly new fitness criteria metrics or threshold values for existing metrics. We can decide whether or not to pursue specific clusters, and whether we are likely to be able to achieve adequate fitness levels to satisfy our customers, during our Strategy Review meeting.

For example, our own product is management training, though we also have an event planning and publishing business. We position and sell our intellectual property as management training and we deliver it as training classes and mentoring. We know that a significant segment exists for software process improvement and for process engineers and coaches who consume our products and services in order to help them in their coaching practice. We know this segment exists but we specifically and intentionally don’t cater to it. We feel it would be a strategic distraction and undermine our overall message that managers need to be accountable, to take responsibility, to make better decisions and to take action where and when necessary to improve service delivery. The return-on-investment in our products and services is realized when existing managers change their behavior as a result of our training. And hence, while we appreciate the patronage of process engineers and coaches, we do not specifically cater for their needs.

Net Fitness Score

I purposefully moved away from the NPS use of an eleven-point numerical scale [0 through 10]. My background in human factors, psychology and user experience design taught me that humans have problems with categorization beyond 6 categories without a specific taxonomy to guide them. This isn’t a result of Miller’s “Magic Number 7” but rather of the work on clustering by Bousfield, W.A. & A.K., and Cohen, B.H. between 1952 and 1966. For example, if you ask humans to rate something 1 to 10 they will struggle to create 10 distinct categories in their mind. When asked to devise their own taxonomy, or clusters, as lay people to the domain, they will tend to create no more than 6 categories. Hence, a scale of 0 through 5 is most appropriate for general consumption.

I believe the NPS people tried this but discovered that in some cultures, such as Finland, people never give the top score on principle; they always choose one below the best. The NPS reaction was to double the scale to 0 through 10, so that people could give a 9 when they are really giving a 4.5. My feeling is that this highlights the issues with numerical scales and undeclared taxonomies. Doubling the scale, however, introduces randomness into the system and generates noise in the data, reducing the signal strength, because of the general human difficulty of modeling categories against the scale. Fixing one problem, the cultural propensity never to give top ranking, creates another: a cognitive struggle in the general population with more than 6 undeclared categories. Hence, to avoid both problems, I am declaring the categories with narrative.

Scores of 4 and 5 are intended to indicate that someone is satisfied and the product or service was fit for their purpose.

A score of 3 is intended to indicate a neutral respondent. They didn’t get everything they needed to be delighted with the service, but they got something acceptable for their investment of time and money.

Scores of 2 or below are intended to show dissatisfied customers who felt their purpose was unfulfilled by the product or service. This may be because the product is poor, or because the purpose was previously unknown, or because it represents a segment that the business has strategically decided to ignore. Not all dissatisfied customers need to be fully serviced and satisfied: some customers you simply don’t want – they represent segments you aren’t interested in pursuing.

Net Fitness Score [NFS] = % satisfied customers – % dissatisfied customers

NFS can be improved through better marketing communications that direct the right audience to your business and dissuade the wrong audience. So NFS can be used to drive excellence in marketing as well as used to explore new segments and the fitness criteria metrics that light them up as viable and profitable businesses.
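The NFS formula above is straightforward to compute from a batch of survey responses. A minimal sketch, assuming scores on the declared 0–5 scale with satisfied = 4 or 5 and dissatisfied = 2 or below, exactly as defined in the preceding paragraphs; the sample data is hypothetical.

```python
# Net Fitness Score = % satisfied customers - % dissatisfied customers.
# Scores are on the declared 0-5 scale: 4-5 satisfied, 3 neutral, <=2 dissatisfied.
def net_fitness_score(scores):
    n = len(scores)
    satisfied = sum(1 for s in scores if s >= 4)
    dissatisfied = sum(1 for s in scores if s <= 2)
    return 100.0 * (satisfied - dissatisfied) / n

sample = [5, 4, 4, 3, 3, 2, 1, 5, 4, 0]  # hypothetical survey responses
print(net_fitness_score(sample))  # 5 satisfied - 3 dissatisfied of 10 -> 20.0
```

Note that the neutral 3s count toward neither side; they dilute both percentages, which is the intent of the formula.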

Filed Under: Foundations Tagged With: Enterprise Services Planning, Kanban, Kanban Cadences, Marketing, Strategic Planning, Strategy Review

Scrumsplaining #1: Kanban is Scrum Without Sprints

January 11, 2016 by David Anderson

“Scrumsplaining” is the phenomenon whereby a Scrum practitioner tries to explain why you can’t use some alternative approach, without actually making any attempt to understand the other approach or a different point of view or paradigm. In “scrumsplaining,” everything is explained using the paradigm of the Scrum framework.

Scrumsplainers are generally people who’ve been using Scrum for a while but whose performance has plateaued: no further improvements have been occurring and they are looking for something fresh to inject innovative thinking and further improvement. Alternatively, Scrumsplainers are consultants and advisors who are afraid that they’ll lose status as an authority figure if a client or practitioner who looks up to them decides to adopt something that would erode the use of Scrum practices in their workplace.

This is the first of a series of Scrumsplaining Kanban blogs.

Scrumsplaining #1: Kanban is just Scrum without Sprints

If you are already doing Scrum and you stop using Sprints and drop the regular cadence and just use a single continuous Sprint then you are doing Kanban.

Except, dropping sprints would be bad. People get exhausted from just working, working, working, all the time. And then you don’t have a cadence, and there would be no retrospectives, so things would never improve.

Hmmm. I see. So, you need sprints so that people take a rest?

Yes!

Really? Why don’t you just stop for a rest when you need one?

People forget to do that. Without sprints the pace isn’t sustainable.

I see. So what happens if something is too big to complete within a sprint?

Well, we break it up into smaller pieces, 2 or more.

So you start working on something, and before it is finished you stop for a rest when the sprint is finished?

Yes.

Wouldn’t it be better to stop once you’ve finished and before you start something else?

No. It’s better to stop on a regular cadence.

Isn’t that disruptive? Isn’t there a cost when you re-immerse yourself in the next part of the work?

Well yes, but that is better than not stopping for a rest and maintaining a sustainable pace.

Hmmm. So tell me again why you don’t just stop for a rest when you need one?

That would be bad.

Doesn’t Kanban suggest we limit our WIP to keep the pace sustainable?

But some things could be big and take a long time, that wouldn’t be sustainable.

Hmmm. Isn’t Kanban an approach to evolutionary change? Aren’t things meant to improve as you use it? Can’t you just hold regular retrospectives without sprints?

That would be weird. Why would you do that? It isn’t efficient to hold a retrospective on its own without a demo or a planning meeting.

Kanban isn’t Scrum without Sprints. Kanban is born out of a different paradigm and a different philosophy. Kanban uses the paradigm of limiting work items in progress and the concept of a system capacity for a single service. New work is pulled into the system for service delivery when capacity is available. Limiting WIP creates a stress that stimulates improvement in the flow of work through the system, improving service delivery lead time and predictability. Service Level Expectations (SLEs) or formal Service Level Agreements (SLAs) are compared to Service Level Capabilities (SLCs) at regularly scheduled retrospectives called Service Delivery Reviews.

Kanban takes a service-oriented and evolutionary approach to improvement. You start with what you do now and design a kanban system to wrap over it. Kanban adapts to your existing context and helps you improve from there. Kanban is not prescriptive; instead it is adaptive and designed in context. You limit WIP and you pull work when capacity is freed as other work is completed. Kanban systems model entire service delivery workflows and span multiple teams. Kanban system workflows have been known to involve up to 100 people, but 20 to 30 is more typical. Commitment to customers is made using a probabilistic approach of service level expectations/agreements. In every respect this is different from Scrum.
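The pull mechanic described here can be sketched in a few lines. This is an illustrative toy, not an implementation of any particular tool: the WIP limit and item names are assumptions made for the sketch.

```python
from collections import deque

# A minimal WIP-limited pull system: work is pulled from the backlog only
# when completing an item frees capacity. WIP_LIMIT is an assumed value.
WIP_LIMIT = 3
backlog = deque(["A", "B", "C", "D", "E"])
in_progress, done = [], []

def pull():
    """Pull work only while capacity is free; never push past the WIP limit."""
    while backlog and len(in_progress) < WIP_LIMIT:
        in_progress.append(backlog.popleft())

def complete(item):
    in_progress.remove(item)
    done.append(item)
    pull()  # completing work frees capacity, which triggers the next pull

pull()
print(in_progress)  # ['A', 'B', 'C'] -- capped at the WIP limit
complete("A")
print(in_progress)  # ['B', 'C', 'D'] -- 'D' pulled only as capacity freed
```

The design point is that work enters the system as a consequence of work leaving it, rather than on a time-boxed cadence.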

Scrum expects you to change your context so that Scrum as a prescriptive and defined process will work. Scrum expects you to make grand revolutionary changes to your organization, workflow, planning and interaction with customers. Scrum uses daily and sprint (usually two-weekly) planning commitments as the catalyst for improvements. Scrum limits time into periods called Sprints, it does not explicitly limit work items and pull work as capacity is free. Scrum uses batch-transfers at each Sprint. Scrum uses deterministic planning and specific deterministic commitments. It does not use probabilistic planning. Scrum is designed to work at a team level where a team is 3 to 12 people and most typically around 6.

While the mechanics of a kanban system could be loosely described as “Scrum without sprints” – and the popular Scrum tracking tool, Rally, implements its shallow Kanban functionality using a single, continuous Sprint – describing Kanban as “Scrum without sprints” is so wrong-headed that it prevents people from seeing the applicability, the benefits, and the opportunity.

When you view Kanban through the lens of the Scrum paradigm, you get “Scrumsplaining!”

Filed Under: Agile Tagged With: Kanban, Scrum, Scrumsplaining

Scrumsplaining #2: No Sense Of Urgency With Kanban

January 11, 2016 by David Anderson

“Scrumsplaining” is the phenomenon whereby a Scrum practitioner tries to explain why you can’t use some alternative approach, without actually making any attempt to understand the other approach or a different point of view or paradigm. In “scrumsplaining,” everything is explained using the paradigm of the Scrum framework.

Scrumsplainers are generally people who’ve been using Scrum for a while but whose performance has plateaued: no further improvements have been occurring and they are looking for something fresh to inject innovative thinking and further improvement. Alternatively, Scrumsplainers are consultants and advisors who are afraid that they’ll lose status as an authority figure if a client or practitioner who looks up to them decides to adopt something that would erode the use of Scrum practices in their workplace.

In this 2nd post in the series we look at the argument that you need the peer pressure elements designed into Scrum in order to maintain a sense of urgency and get things done.

Scrumsplaining #2: If we adopt Kanban we’ll lose our sense of urgency

[The following conversation paraphrases a VP of a business unit at a major Internet/telecom equipment manufacturer. I’ve allowed 12+ months to elapse before posting this, and it remains anonymous. The company is American. The BU is based in New England. The Scrumsplainer was in charge of a 600 person BU partly staffed in India.]

Without the daily Scrum and regular Sprint commitments there will be no sense of urgency!

I see. Why do you think that is?

Well we need people making commitments to their peers everyday or they’ll just slack off and play ping pong all the time.

I see. Why is that?

Well, they’re inherently lazy. We need to keep them under pressure or we’ll get nothing done.

You really believe that?

Yes. Our people are too comfortable!

How many people in this BU?

About 600!

How many are new since you arrived?

About 300!

So you’ve hired 300 lazy people since you started? Do you select lazy people when hiring?

No. No, of course not.

So they become lazy after you hire them?

Well, they are all very comfortable. There is no sense of urgency.

Every Scrum team in your BU missed its sprint commitment last sprint. Every team in the building has missed its sprint commitment for the last 22 sprints. On average they are missing each commitment by 50%. At today’s retrospective not a single Scrummaster reported this. No one is willing to own the missed commitments.

Exactly, there is no sense of urgency!

So you are currently using Scrum and yet there is no sense of urgency. So tell me again why you would lose the sense of urgency if you adopt Kanban?

Our people are too comfortable and we need to keep them under pressure.

Would you say that you believe in McGregor’s Theory X and people need to be extrinsically motivated?

No. I believe in Theory Y. People are intrinsically motivated.

And yet, you need to keep them under pressure with daily and bi-weekly commitments or they’ll just slack off? In what way are they intrinsically motivated?

….

Do you really believe your people, hundreds of them are inherently lazy?

Our people are too comfortable and need to be kept under pressure.

Have you considered that if your people aren’t intrinsically motivated and have no sense of urgency, there may be a failure of leadership?

Kanban has a (usually) daily Kanban meeting around the kanban board. It isn’t, however, a Scrum meeting. In a Kanban meeting you iterate over the tickets from right to left: from those closest to completion to those most recently started. The focus is on the work, not on managing the workers or peer-pressuring them to complete tasks.

Kanban has no time-boxed batch planning mechanism such as the Sprint in Scrum. Instead items are pulled individually and delivery is measured against a service level expectation or agreement on lead time.
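A service level expectation on lead time is typically probabilistic, e.g. “85% of items are delivered within N days.” One way to derive such an SLE from historic lead times is sketched below; the data, the 85% target, and the particular percentile convention are all illustrative assumptions rather than anything prescribed by Kanban.

```python
import math

def sle(lead_times_days, percentile=0.85):
    """Smallest lead time such that at least `percentile` of items finish within it."""
    ordered = sorted(lead_times_days)
    idx = math.ceil(percentile * len(ordered)) - 1
    return ordered[idx]

history = [3, 5, 2, 8, 4, 6, 7, 3, 9, 4]  # hypothetical lead times, in days
print(sle(history))  # 8 -> "85% of items complete within 8 days or less"
```

Comparing such a derived capability against a promised SLE or SLA is the kind of check a Service Delivery Review would make.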

So the elements of Scrum which are designed to create a sense of urgency using peer pressure are indeed missing.

Kanban simply uses a different paradigm. Kanban is service-oriented and models service delivery workflows from request to delivery. Kanban makes the customer/service requestor and the business need or business risks visible and transparent throughout the workflow. Workers in a kanban system know who the work is for and why they are doing it. They know which business risks are associated with an item, and they understand why it is being given a specific class of service.

There isn’t a sense of urgency, there is a sense of service. There is a collective pride in service delivery excellence. A sense of pride in service delivery creates urgency where and when it is appropriate. Transparency and visibility, collaborative working and a sense of pride are what make Kanban work. Kanban is designed around the assumption that knowledge workers indeed conform to Theory Y: They are intrinsically motivated.

To make Kanban work there needs to be leadership. That leadership has to value customer service and instill a sense of collective pride in good service. When you hear a scrumsplainer arguing that Kanban lacks a sense of urgency, you are actually listening to misdirection. The scrumsplainer lacks courage and fails to show leadership!

Filed Under: Agile Tagged With: Kanban, Scrum, Scrumsplaining


Footer

Subscribe to our newsletter

Privacy Policy

Address

Kanban University
1570 W Armory Way Ste 101,
#188, Seattle, WA 98119
USA

Contact Us

info@kanban.university
© 2022 Kanban University. All rights reserved. Accredited Kanban Trainer and Kanban Coaching Professional are registered trademarks of Kanban University.