28 May 2010

Before the advent of the web, and more specifically Web 2.0 technologies, effective innovation was a slow process, usually limited to large organizations that had the resources to filter through thousands of ideas.

According to data from the American Management Association (AMA), most companies lack a formal process for selecting ideas. The AMA surveyed 1,356 global managers and found that:
  • 48% “don’t have a standard policy for evaluating ideas.”
  • 17% use an “independent review and evaluation process.”
  • 15% said “ideas were evaluated by the unit manager where the idea was proposed.”
As Robert Tucker states, “An effective selection process connects your ‘idea funnel’ to your ‘idea pipeline.’ Without it, this winnowing is haphazard, hierarchical, and discouraging to would-be innovators.”

One of the best examples of idea management connecting the idea funnel to the idea pipeline comes from Disney. According to Peter Schneider, former president of Disney’s feature animation division, many of Disney’s best ideas came from their home-grown “Gong Show”. Several times a year, employees had an opportunity to pitch their ideas to Michael Eisner (Disney’s CEO) and several other top managers. Up to 40 of them would perform or present their ideas until a ‘gong’ sounded. Eisner and his team would then discuss each one and decide which would fit strategically with their vision. The Disney Stores, as well as many of the company’s animated features, were born through this process.

Eisner and the Disney team clearly understood the value of listening to their employees and applying strategic criteria to each idea. Still, a process in which top management had to sit, sift, and decide based on input from 40 employees was highly inefficient. For one, most employees would faint at the thought of standing directly in front of Eisner to pitch their idea, so only the bravest stood a chance of presenting. Second, those 40 ideas probably came through some resource-intensive pre-vetting; otherwise Eisner and his team (and their expensive salaries) would have had to sit through some pretty bad ideas. Finally, the show took place only a few times a year, so it was never a continuous, day-to-day process.

In a company with tens of thousands of employees, ideas lie everywhere, but finding the best ones has always been a problem because vetting them has traditionally been a resource-intensive task. The numbers reported by the AMA reflect management teams unwilling to dedicate the time, effort, and resources to that task.

Idea management software is changing the game. Most players in the market, including ourselves (INCENT), have developed effective algorithms and ‘game-like’ mechanics to help ideas flow into the funnel at an unprecedented rate, reduce the pre-vetting strain on management resources, and ultimately yield a higher number of high-quality ideas for management to review.

So how do these systems work? In a nutshell (and I won’t go into the pros and cons of the different algorithms), ideas are posted via the web to a common platform accessible globally by all users. Users can review, comment on, and vote for the ideas they feel passionate about. The highest-ranked ideas are then reviewed by an expert committee and rigorously analyzed against a predetermined scorecard of criteria (the better idea management systems include this feature; with the most basic ones, this step goes into ‘manual’ mode).
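In code, the crowd-ranking stage described above can be sketched roughly as follows. The class name, the one-vote-per-user rule, and the fixed review quota are my own illustrative assumptions, not any vendor’s actual algorithm:

```python
# Minimal sketch of the crowd-ranking stage of an idea management
# platform: users vote on posted ideas, and the top-ranked ideas are
# flagged for expert review. The ranking rule (raw vote counts) is a
# deliberately simple stand-in for real scoring algorithms.

from collections import defaultdict

class IdeaFunnel:
    def __init__(self, review_quota=3):
        self.votes = defaultdict(set)     # idea -> set of voter ids
        self.review_quota = review_quota  # how many ideas experts review

    def post(self, idea):
        self.votes[idea]                  # register the idea with zero votes

    def vote(self, user, idea):
        self.votes[idea].add(user)        # one vote per user per idea

    def shortlist(self):
        # The highest vote counts go to the expert committee.
        ranked = sorted(self.votes, key=lambda i: len(self.votes[i]), reverse=True)
        return ranked[:self.review_quota]

funnel = IdeaFunnel(review_quota=2)
for idea in ["self-serve kiosk", "paperless invoices", "carpool app"]:
    funnel.post(idea)
for user in ["ann", "bob", "cho"]:
    funnel.vote(user, "paperless invoices")
funnel.vote("ann", "carpool app")
print(funnel.shortlist())  # ['paperless invoices', 'carpool app']
```

The set-per-idea structure is what enforces “one vote per user”: voting twice simply re-adds the same element.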

Criteria are usually defined around the company’s culture and values. For example, Bank of America’s scorecard reflects the following: ease of implementation, associate impact, customer delight, and revenue potential. Once the criteria are applied, the ideas with the highest ‘expert’ scores will likely be introduced to the idea pipeline.
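Here is a rough sketch of how such a scorecard might be applied, using the four Bank of America criteria named above. The weights, the 1–5 scale, and the 3.5 cut-off are illustrative assumptions on my part, not Bank of America’s actual values:

```python
# Sketch of an expert scorecard: each idea is rated 1-5 on each
# criterion, and a weighted average decides whether it enters the
# idea pipeline. Weights and threshold are illustrative assumptions.

CRITERIA = {
    "ease of implementation": 0.2,
    "associate impact":       0.2,
    "customer delight":       0.3,
    "revenue potential":      0.3,
}

def expert_score(ratings):
    """ratings maps each criterion to a score on a 1-5 scale."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

ratings = {
    "ease of implementation": 4,
    "associate impact":       3,
    "customer delight":       5,
    "revenue potential":      4,
}
score = expert_score(ratings)
print(round(score, 2), "-> pipeline" if score >= 3.5 else "-> shelved")
```

The point of the weights is that the scorecard encodes company values: a firm that prizes customer experience over quick wins simply shifts weight from “ease of implementation” to “customer delight”.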

However, the greatest value of these systems is not their ability to filter ideas and help organizations identify the best ones more efficiently, but rather the transparency of the process, which keeps users engaged, and the centralization of the ideas, which makes them searchable by the entire organization. The ‘lessons learned’ database built through use of the system is what systematically improves the quality of the ideas and the rate at which problems are solved throughout the organization. Especially with intra-innovation, or continuous improvement, individuals looking for ways to solve problems may find that similar or identical problems have already been addressed in other parts of the organization. In lean parlance, these systems enable “yokoten”, which in Japanese means “across everywhere”.

Posted on Friday, May 28, 2010 by George R.

No comments

11 May 2010

Taiichi Ohno used to say, “Where there is no standard there can be no kaizen,” which is close kin to the saying “you can’t improve what you don’t measure.” Both, however, are missing a critical word: “accurate.” Many organizations fall victim to poor data interpretation, and instead of improving their processes they end up harming them.

Good data analysis is an integral part of good idea management systems. Breaking down the raw data and identifying trends in idea quality, user participation, and the aging of ideas can help program administrators improve the process. The wrong slicing and interpretation of the metrics, however, can quickly hinder it.
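To make one of those metrics concrete, here is a sketch of an “aging” report: how long ideas sit without a decision. The field names and the 30-day staleness threshold are illustrative, not taken from any specific product:

```python
# Sketch of an idea-aging metric: flag ideas that have sat undecided
# longer than a threshold. Field names and the 30-day threshold are
# illustrative assumptions.

from datetime import date

ideas = [
    {"title": "kiosk",    "submitted": date(2010, 3, 1),  "decided": date(2010, 3, 20)},
    {"title": "invoices", "submitted": date(2010, 2, 10), "decided": None},
    {"title": "carpool",  "submitted": date(2010, 4, 15), "decided": None},
]

def aging_report(ideas, today, stale_after=30):
    """Return titles of undecided ideas older than `stale_after` days."""
    return [i["title"] for i in ideas
            if i["decided"] is None
            and (today - i["submitted"]).days > stale_after]

print(aging_report(ideas, today=date(2010, 5, 11)))  # ['invoices']
```

A rising count of stale ideas is exactly the kind of trend an administrator can act on, provided the metric is sliced correctly, which is the point of what follows.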

This concept always brings me back to my love of golf and in particular one of my pet peeves… the “Putting Average Leaderboard”.

It has always struck me as odd that one of the statistics sports analysts use most to measure a pro golfer’s performance is putting average. When a golfer has a bad year following a good one, analysts will usually point to putting average as the culprit for the fall from grace.

Granted, putting accounts for close to 50% of strokes on a golf course, but putting averages are mostly the result of a golfer’s ability to get the ball close to the hole with the other 13 clubs before using the putter to finesse the ball in. When only 0.10 putts per hole separate the top putter from the 80th on the list, and when you realize that 2010’s top two money leaders, Ernie Els and Phil Mickelson, are ranked 54th and 53rd respectively on the average-putts list, it is time analysts realized that this list does not come close to predicting how good a golfer is.

As a matter of fact, the top golfers find themselves putting for birdie more often than the golfers atop the putting list. The golfers with the lower averages are usually the ones having to chip and putt for par, and these are precisely the ones we seldom see winning a green jacket, and the first in line to join the Nationwide Tour.

If Phil and Ernie were subject to ‘management decisions’ made by interpreting this data, they would probably be sent to a ‘putting re-certification’ class. Unfortunately, this would take time away from sustaining and developing their other skills, and would likely cause them to fall off the top of the money list.

The irony is that they would likely climb up the putting charts, giving ‘management’ the impression that the re-certification classes were effective, while failing to realize that the classes hindered their ability to be top performers by leaving them to sink more ‘par’ putts.

Thus, in the analytical sense, the putting-average list has no value; the data is a red herring that, if followed as most analysts interpret it, would lead good golfers to lose their winning ways.

… and as the old saying goes… “There are two things that don’t last very long: dogs chasing cars and pros putting for pars”

Posted on Tuesday, May 11, 2010 by George R.

No comments

03 May 2010

Seeing that my natural vacation sanctuary, where I normally go to break from life’s stresses and enjoy time with my family, is about to be permanently destroyed, I decided to break with the idea management and lean tone of this blog to reflect a little on quality management. For many years I helped Mercedes-Benz suppliers improve their quality through lean tools, but also through the use of statistics. Even though I was never formally trained as a Six Sigma ‘grasshopper’, much less a sensei, I did use many of the statistical tools found in the Six Sigma toolbox.

The FMEA (Failure Mode and Effects Analysis) has always been a key tool in the auto industry for identifying areas of product-quality risk and planning how to mitigate them. Components that could play a role in a potential catastrophic failure, loss of life, or loss of property get treated with extra care. To generalize, these components are ‘serialized’, and data is recorded along the entire manufacturing chain. Every critical process is monitored, equipment is designed to “inspect” its own quality, and in the case of critical characteristics, it is designed to check the quality of preceding processes as well. Redundancy is built in so that if one inspection process fails, the next one will catch the defect. These redundant checks are designed in layers, and their ultimate goal is to ensure no bad parts exit the manufacturing process.

With that said, to put Six Sigma quality in perspective, aircraft are a good example of redundant systems at work. Critical systems in aircraft are designed with multiple backups. Keeping the math simple, and not using real-life numbers: if a hydraulic system has a natural tendency to fail once in 100,000 uses, adding an independent backup system ensures that, in the worst case, a simultaneous failure of both will occur only once in 10 billion uses.
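The redundancy arithmetic above, made explicit: when the primary and backup systems are assumed to fail independently, their failure probabilities multiply.

```python
# The redundancy arithmetic from the paragraph above: assuming the
# primary and backup systems fail independently, their failure
# probabilities multiply. (Real systems are never perfectly
# independent; this is the simplified, not-real-life math.)

single_mtbf = 100_000               # one failure per 100,000 uses
both_fail_mtbf = single_mtbf ** 2   # 100,000 x 100,000 = 10 billion
print(f"simultaneous failure: once in {both_fail_mtbf:,} uses")
```

Each added layer multiplies in the same way, which is why a few layered inspection checks can drive the escape rate of bad parts down so dramatically.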

In general there are two major reasons for quality failures. The first is failing to identify a potential failure mode, and thus not guarding against it; this is normally due to a lack of historical reference or a lesson learned. The second is by far the worse: the failure of people to follow established procedures. This is critical because it is not a reflection on the actual workers, but rather a reflection on management.

Bunji Tozawa said, “Blame the process and not the person.” What he alludes to is that management is responsible for the processes, and thus a failure is essentially management’s fault.

Like the auto industry, big oil relies on suppliers, and it’s extremely critical to ensure these suppliers manage and maintain their internal procedures. In the auto industry we don’t only measure and rate suppliers by their ability to supply good parts, but we also audit their adherence to their quality systems and have different means of flagging potential problems before they occur. The proactive approach is taken to ensure that human lives are not lost driving cars.

Having a deep understanding of quality systems and redundancy, and personal ties inside the oil industry, I find it almost impossible to believe that the sinking of the BP platform, given BP’s overall safety and environmental record (the 2006 pipeline leak, the 2005 refinery explosion), happened because of ‘bad luck’ or failed equipment. This wasn’t a failure as in case one, not identifying a potential failure mode, but rather a failure to follow procedures and adhere to best practices.

One reason the global oil industry’s safety record is generally good is its strict processes and procedures (I have also written about managing safety and how Schlumberger uses idea management systems to manage identified safety risks). Keep in mind that there are more oil wells in operation than there are aircraft in the air on any given day, and the number of catastrophic incidents pales in comparison to the aeronautical industry. (Here’s an old CNN article showing the worst accidents through 2001.)

The bottom line: when the dust clears, a thorough investigation will likely reveal a lack of self-auditing and supplier-auditing practices inside BP, and management’s inability or unwillingness to ensure that the entire corporate culture is driven by adherence to established procedures. A good indicator will be BP’s response. As they start to blame Transocean (the operator of the rig), a faulty blow-out control system, a missed maintenance step, or operator error, what they will really be saying is that management has been incompetent, unable to drive a corporate culture that adheres to strict safety and environmental procedures.

Posted on Monday, May 03, 2010 by George R.

No comments