Evaluating the KQV Productivity Triad

The following passage is the start of a series of vignettes related to the business world, business decisions, and omnichannel strategies. Some of them will be adapted into book chapters at some point. For now, the vignettes are essentially a form of intellectual sandboxing that allows me to write about complex situations.

Tentative Title: The first steps in fixing a team: Evaluating the KQV productivity triad concept

Introduction

Imagine for a moment that you work for an organization as a fixer. Some people take on this role as a profession. Most people end up in it for a short time out of necessity. Your core function, whether as a permanent role or a temporary assignment, is to parachute into a situation and fix something. For the most part, you know that you are going to be deployed to fix a significant problem, and where you are going will be clearly defined. How you are supposed to achieve the desired outcomes, however, will be and will remain highly mysterious. It will be a real challenge, and it has to be a challenge that is worth overcoming. That is the stage, and it has to be well understood.

Most of the time the fix will involve working with a group. The following paragraphs relate specifically to working with groups or organizations that have team members. Perhaps at some point a general theory will be proposed, but at this time please accept this evaluation of a special case scenario that involves evaluating a group. Building a general theory can, at this point, be considered a topic tabled pending future research. The rest of this intellectual inquiry focuses on introducing a framework that can be used for evaluation. Making sure that people do the right things at the right times for the right reasons is even harder than it sounds. This framework goes a layer deeper than my general philosophy of letting leaders lead, managers manage, and employees succeed.

Evaluation

Evaluating groups is about understanding how knowledge, quality, and velocity drive meaningful productivity. In the end, making major changes to the productivity of a group requires a combination of planning and opportunity. I call that path to evaluating productivity the evaluation of the knowledge, quality, and velocity (KQV) productivity triad. Understanding KQV requires a great depth of understanding about the group being evaluated. It is very rare that you will start an evaluation of an organization that is mature enough to have a key performance indicator (KPI) compendium and performance dashboards. Those types of artifacts are usually evidence of organizational maturity.

Before breaking down the components of the KQV it would be prudent to talk about why a fixer has to have a solid exit strategy. I ask myself this core question every day during the course of evaluations: “If I walked away today without any warning or preparation, then what things would people keep doing and why would they keep doing them?” People steal great ideas. You can accept that as an almost absolute fact. Another thing that should just be accepted is that good ideas are sticky. People keep doing things when they believe in them to the point of taking ownership of them. Taking ownership of something is a very powerful motivational factor. Both great and good ideas tend to be very sticky. They benefit from ownership and from sustained interest.

The concept of stickiness really matters when you are evaluating an organization. Any and all recommendations or corrective actions have to be the right suggestions. They have to prove to be sticky. They have to be great. People have to want to steal them and move forward with full ownership of them. If you want to drive any one element of organizational knowledge, quality, or velocity, then all of those changes have to be things that people would steal, and they have to be sticky. I always try to channel Stephen Covey’s 7 Habits of Highly Effective People philosophy and begin the process with the end in mind (1989). That is why I stay focused on ensuring that all recommended changes are the right changes and will end up being incredibly sticky.

KQV

Each topic involved in the productivity evaluation needs to be explored in more detail. Specifically, we are about to examine the knowledge, quality, and velocity (KQV) components of the productivity triad.

On Knowledge – The knowledge part of the KQV triad describes what the group needs to know to be successful within the organization. It is all about the collective understanding housed within the team. Organizational knowledge is easier to talk about than it is to define. An organization includes a number of people whose unique sets of knowledge, skills, and abilities make up the people capacity of the group as a whole. The knowledge part of people capacity is defined by what people know. It is about how the combination of information and experience is translated into action by a team of people. When you parachute into an organization and start mapping out what is going on, it is pretty easy to start evaluating tribal knowledge, daily work instructions, and new hire training materials. It takes practice to truly map what is happening. That mapping has to include what people need to know along the path to do what they are doing.

On Quality – The quality part of the KQV triad is always a little more elusive than the knowledge part of the equation. The opportunity for exceptional quality exists when the steps in a process are well documented and repeatable. If a situation exists where team members are following a well-defined set of repeatable steps to achieve an outcome, then adherence to those steps can be measured in terms of quality. Any high quality recipe for success includes things that are repeatable. A team member working on the same set of repeatable tasks might do things flawlessly for the first five hours of a shift. During the sixth hour the team member might get distracted for a moment and miss a step. That lack of adherence to the well-defined and repeatable steps could generate a problem down the line. That problem could be a serious gap in quality. Most of the time quality is not that easily defined. It is something that has to be observed via some form of sampling. A number of quality related items will probably end up in the KPI compendium. They will also probably be featured in any major departmental dashboard.
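As a rough illustration of what sampling-based quality measurement can look like, the sketch below scores a handful of hypothetical work units against a hypothetical list of required steps. Nothing about the step names or data is real; the point is only that a sampled adherence rate turns quality into a number that can be tracked.

```python
# A minimal sketch using made-up step names and sample data: quality measured
# as adherence to a well-defined, repeatable set of steps via sampling.
REQUIRED_STEPS = ["receive", "validate", "process", "review", "close"]

# Each sampled unit of work records the steps that were actually performed.
sampled_units = [
    ["receive", "validate", "process", "review", "close"],
    ["receive", "process", "review", "close"],  # "validate" was skipped
    ["receive", "validate", "process", "review", "close"],
]

adherent = sum(1 for steps in sampled_units if steps == REQUIRED_STEPS)
adherence_rate = adherent / len(sampled_units)
print(f"Sampled adherence rate: {adherence_rate:.0%}")  # roughly 67% here
```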

On Velocity – The velocity part of the KQV triad is usually very easy to understand and describe, but incredibly hard to measure in a detailed way. Velocity in this case can be operationally defined as the speed between completing elements in the well-defined set of repeatable steps mentioned above. In general, figuring out ways to measure velocity is difficult. Most of the time team members are performing work without any real tracking system. Very few work streams include a defined time stamp at each step along the way. Systems that do are easier to study via numerical analysis. Most of the time datasets have to be built out via sampling or other methods of observation. Introducing velocity tracking into an organization requires an extreme amount of planning and a defined process that includes traceable steps.
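To make that operational definition concrete, here is a minimal sketch using made-up timestamps. Once each step carries a time stamp, velocity is simply the elapsed time between completing consecutive steps.

```python
# A minimal sketch with hypothetical timestamps: velocity expressed as the
# elapsed time between completing each step for a single unit of work.
from datetime import datetime

step_completions = [
    ("received",  datetime(2016, 3, 1, 9, 0)),
    ("validated", datetime(2016, 3, 1, 9, 40)),
    ("processed", datetime(2016, 3, 1, 11, 15)),
    ("closed",    datetime(2016, 3, 1, 11, 45)),
]

for (prev_step, prev_time), (step, time) in zip(step_completions, step_completions[1:]):
    minutes = (time - prev_time).total_seconds() / 60
    print(f"{prev_step} -> {step}: {minutes:.0f} minutes")
```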

Conclusion

Parachuting into a situation to help fix something creates some interesting dynamics. Outside of those dynamics a certain set of objectives have to be achieved. The prime objective usually includes fixing something. That fix typically includes rolling out a plan to improve the knowledge, quality, and velocity of a group. Throughout the course of building out a plan an evaluation usually occurs. All of the steps in that plan have to ultimately lead to the creation of a KPI compendium and dashboards. Getting that built out is the key to ensuring ongoing oversight and accountability.

If a clear line of sight exists into the organization, then it will be easier for leaders to understand where the group has been and where it is going in terms of trends and results. That level of analysis forms the foundation of solid evidence-based decision making. The KQV triad sets the foundation for evaluating an organization. Understanding all three elements makes it easier to understand how the organization functions and what is necessary to sustain it.

The bottom line on KQV is pretty straightforward. At some point you are going to be asked to help a department increase productivity. Those increases are going to be expected to occur alongside increases in quality. That is only going to happen with a balanced KQV improvement strategy. Helping increase productivity in the right ways means a high degree of quality and velocity. High quality velocity is normally paired with an increase in overall departmental knowledge. People have to know what to do and what they are doing before they can transition to the next level of productivity.

The next five topics include writing about 1) my stop doing list, 2) the power of investing in people, 3) the importance of pracademics, 4) my Disneyworld experience, and 5) omnichannel contact strategies.

On Vendor Management

I was going to write down a few thoughts about vendor management this morning. The results of that endeavor were mixed. This writing exercise may need to happen again in the form of an academic literature review…

Contracting services from another company seems to be happening more and more. Most modern companies work with a number of vendors. Some of those contracts are larger than others. Some of the relationships require a little more work than others. A few of the contracts are so large that a company might designate an employee to engage in vendor management. I have tried to put together a few thoughts on this subject for some time. Most of my attempts have ended up as false starts. They have ended up as the digital equivalent of a crumpled up piece of paper. I might as well have just pressed the delete key.

The hardest part about engaging in vendor management usually involves getting the right evidence to make solid evidence-based decisions. Most of the relationships end up being all about the data and the outcomes. The process of how those data and outcomes are produced becomes a secondary consideration. I really want to spend some time writing about vendor recovery and how to help a relationship turn the corner. That may be a topic that I commit some time to writing about toward the end of the month.

Right now my thoughts are a little fragmented. Yesterday wore me out. I ran my miles and am doing well physically, but emotionally I am tired. Yes — I’m still wearing the Fitbit Surge super fitness watch every day. Only a few people have asked about the watch throughout the last two weeks. Most of them thought it was a Samsung product. They wanted to know how I liked wearing a smartwatch. Throughout the last two weeks nobody has recognized the watch as a Fitbit product.

On Crisis Management

Crisis management has been at the forefront of my considerations. It has been something that I have been thinking about for several weeks…

Project managers are often brought in to manage various forms of crisis. Most project managers are not experts in risk management or crisis management. They get brought in based on availability. Building out a project plan to resolve a crisis might seem like a great idea. That plan has to fundamentally address the root cause of the problem or the plan will be predestined for failure. A crisis or two is bound to occur within the workplace. Some of them change the very nature of the business by putting it at risk and some of them are relatively harmless.

People sometimes try to turn an escalation into a full-blown crisis. Some of those actions can be very self-serving. Escalations can be a very healthy part of how an organization handles business. Unfortunately, some work streams can become so inundated with escalations that the original method of doing business no longer works. That breakdown in process can have devastating effects on the employees and the quality of the work being done. Environments where everything is an escalation take on different types of cadences.

Working with an escalation manager is often a very interesting process. They typically work from a hear-listen-do (HLD) framework. They have to figure out the problem. It could be a process that got very far off the tracks or a fundamentally broken part of an application.

Building out a plan to address a crisis presents an interesting challenge. The timeframes are usually compressed. Results are usually measured in interesting ways.

On Rebuilding Without Stopping

Rebuilding and rebranding are mainstays within the business world. These things happen. Some departments within an organization do not have a down period. They run on a tight monthly schedule. That schedule keeps the cadence of events regular. However, at all times things are in motion. That motion won’t be stopping anytime soon. Production cannot be halted. Consider for a moment the very real challenge of rebuilding or retraining personnel while production is still occurring. Any action taken to build or train has to occur without breaking the production workflow.

I have been pondering that scenario for a couple of weeks. I have been pondering the reality of rebuilding a software group from the ground up without stopping production. The group has to continue working. They cannot stop. The group in question has deliverables due every day. Every one of the deliverables is highly visible within the organization. That makes it hard to change directions. Any change of direction has to enhance quality and increase velocity without creating any real disruption.

It comes down to a few simple considerations. Some of them will become apparent within the following unrelated example. I’m not sure a player from the audience could seamlessly join the orchestra on stage. They would have to know the material and join in rhythm. They would have to know that their addition would add value. An orchestra is a very well refined system with a clear path forward and a conductor keeping time. Having that level of instruction changes the focus to execution. An orchestra is judged on its execution. The entire product is heard. Anything out of place truly does become apparent. The audience can literally hear the problems.

At this point in our software team scenario, quality issues are not as readily apparent. The members of the team feel like every moment of their day is occupied with tasks. Over the course of the last couple of quarters the team has become fatigued.

Even a quick review of the group would turn up some basic things to examine. It looks like the group would benefit from additional on-the-job training, clear work instructions, and a work product tracking system. Before implementing that type of measure, a number of alternatives have to be considered. It is during that consideration that the scenario becomes even more interesting. It becomes interesting because the process of mapping the changes to the current process means balancing objectives within the current workflow.

On Quality Velocity

The Love Field airport in Dallas was fairly quiet this morning at 5:00 AM. Starbucks had a huge line of people waiting to order and a large group of people waiting for coffee. I always forget what it is like to travel this early in the morning. My preference for evening flights home is well founded. I’m not sure why that preference gets ignored so frequently.

From time to time it may be necessary to manage a work queue within a defined workflow. That workflow defines the path along which work is going to be completed. The process is well defined. Whatever unit of work is being done builds up in the queue. Managing that queue has come to the forefront of my thoughts today. Perhaps later a treatise on workflows will be forthcoming. Today is not that day. Today is a day to think about completing the work. Achieving a degree of quality velocity during the course of managing a work queue requires planning and the right framework of accountability. Setting up that framework of accountability ensures that work is done quickly with a high degree of quality. It also means that if speed or quality metrics are not being met, the data is available to explore the causes of that imbalance.

In this example, velocity is a measure of speed to resolution along the workflow and quality is a measure of accuracy based on a calculation of error rates. Depending on the work being done, the measure of quality could be defect density or some other calculation. Quality does have to be measured. It should be measured. Mature and well-defined workflows should have a well-defined mechanism for tracking quality velocity over time. Ensuring that mechanism is set up and running is the part of the equation that has caught my attention today.
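As a small sketch of what that mechanism might compute, the example below uses made-up queue records to calculate an average speed to resolution and an error rate. The field names are assumptions chosen for illustration, not a prescription for any particular tracking system.

```python
# A minimal sketch with hypothetical queue records: velocity as average speed
# to resolution and quality as one minus the error rate across completed items.
from datetime import datetime
from statistics import mean

completed_items = [
    {"opened": datetime(2016, 3, 1, 8, 0),  "resolved": datetime(2016, 3, 1, 10, 0),  "defect": False},
    {"opened": datetime(2016, 3, 1, 8, 30), "resolved": datetime(2016, 3, 1, 12, 30), "defect": True},
    {"opened": datetime(2016, 3, 1, 9, 0),  "resolved": datetime(2016, 3, 1, 10, 30), "defect": False},
]

avg_hours_to_resolution = mean(
    (item["resolved"] - item["opened"]).total_seconds() / 3600
    for item in completed_items
)
error_rate = sum(item["defect"] for item in completed_items) / len(completed_items)

print(f"Average speed to resolution: {avg_hours_to_resolution:.1f} hours")
print(f"Error rate: {error_rate:.0%} (quality: {1 - error_rate:.0%})")
```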

Within any workflow a defined beginning and end exist. All of the points in the path from the beginning to the end have to be well understood. Mapping those points into some type of workflow is usually either very straightforward or a real adventure. It could be as easy as defining the unit of work and tracing the route of one unit through the system. Thinking of a workflow as a living system can change your view of things. Complexity within living systems can change your evaluation path. Truly complex systems can be incredibly hard to map. Brevity will always be the heart of wit. Taking something that is truly complex and presenting a simple to understand version requires a keen understanding.

After the workflow has been mapped, the next step in the process would be to figure out how to track quality and velocity. It is very possible that, as the workflow crystallized from formational chaos, no mechanism for tracking quality or velocity was built into the system. Adding those layers of tracking may be easy or it could be incredibly challenging. Setting up a mechanism for collecting that information without having adverse effects on the process will introduce an interesting planning challenge to the agenda.
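One way that collection mechanism could avoid disrupting the process is a thin wrapper around the existing steps. The sketch below is only an assumption about how that might look in code; the step and function names are hypothetical.

```python
# A minimal sketch of a lightweight tracking layer: each existing workflow step
# is wrapped so its completion time and duration are logged, without changing
# what the step itself does. The step and function names are hypothetical.
import functools
import time

step_log = []  # entries of (step name, completion time, duration in seconds)

def tracked(step_name):
    """Wrap an existing workflow step so that completing it is recorded."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            finished = time.time()
            step_log.append((step_name, finished, finished - start))
            return result
        return wrapper
    return decorator

@tracked("validate")
def validate(unit):
    # The existing step logic stays exactly as it was.
    return unit

validate({"id": 42})
print(step_log)
```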

Introducing that framework of quality velocity related accountability opens the door to advanced methodologies. Data mining or something even more advanced like process mining could be introduced. A variety of advanced analytic techniques are becoming more and more mainstream and accessible. People seem to really be accepting the idea that analytic engines can separate the signal from the noise. New methodologies are making it easy to evaluate which signals are the most meaningful. Linking a well-defined course of action to a signal from an analytic engine seems to be a relatively recent phenomenon.