Agile Team Composition: Generalists versus Specialists

In a previous post, I described several of the shortcomings of planning poker, particularly when the tool is used in a context that includes more than just the developer’s shop. A close-knit group of individuals well qualified to complete a set of tasks can estimate the levels of effort for those tasks efficiently and reliably with a tool like planning poker. Once the set of tasks starts to include items that fall outside the expertise of the group, or the group begins to include cross-functional teammates, or both, the method becomes increasingly less reliable.

The issue is a mismatch between relative scales when the team includes an increasingly diverse set of project specialties. A content editor is likely to have very little insight into the effort required to modify a production database schema, and will therefore offer an effort estimate that is little more than a guess based on what they think it “should” be. The same is true of a coder faced with estimating the effort needed to translate 5,000 words of text from English to Latvian. The answer is to add a second dimension to the estimation: a weight factor tied to the estimator’s level of expertise relative to the nature of the card being considered.

A similar effect is in play when organizing teams according to Agile principles. When implementing Agile methodologies at the organizational level, it is important to consider the impact of forming teams of generalists versus specialists. Scrum, for example, prefers a good measure of skill overlap – that is, generalized skills that allow any one teammate to work on a variety of tasks during a sprint. This is very efficient for highly technical and coherent domains. Differences in preferred coding language among teammates are less of an issue when all teammates understand advanced coding practices and the underlying architecture of the solution. In this case, it is easier to measure sprint velocity when a set of complementary technical skills allows for cross-functional participation and re-balancing when needed (vacations, illness, uncommon feature requests, etc.). As with planning poker, however, efficiency degrades as domain expertise on the team becomes more incoherent.

In cases where the knowledge and solution domains have a great deal of overlap, generalization allows for a lot of high-quality collaboration. However, when an Agile team is formed to address larger organizational problems, specialization rather than generalization has a greater influence on team velocity. The risk is that with very little overlap in specialized expertise, the team can produce either shallow solutions or wasteful speculation – waste that isn’t discovered until much later. Moreover, re-balancing the team becomes problematic and most often results in delays and missed commitments due to the limited ability of teammates to participate across functions.

The challenge, then, for cases where knowledge and solution domains have minimal overlap is to manage the specialized expertise domains in a way that is useful (reliable, predictable, actionable). Success becomes increasingly dependent on how good an organization is at estimating levels of effort when the team is composed of specialists. The answer, as with planning poker, is to add a second dimension to the estimation: a weight factor tied to the estimator’s level of expertise relative to the nature of the card being considered.

With a weighted expertise factor calibrated to the problem and solution contexts, a more reliable velocity emerges over time. With this enhanced velocity reliability come increased reliability in level-of-effort estimates, greater accuracy in resource planning, and less waste.
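As a concrete illustration, a weighted estimate might be computed as a minimal sketch like the one below. The roles, weights, and point values are purely illustrative assumptions on my part, not a prescribed calibration – the right weights would emerge from each organization’s own calibration over time.

```python
# Sketch of an expertise-weighted planning-poker estimate.
# Roles, weights, and votes here are illustrative assumptions.

def weighted_estimate(votes):
    """votes: list of (points, expertise_weight) pairs.
    The weight expresses the estimator's relevance to this card, 0.0 to 1.0."""
    total_weight = sum(w for _, w in votes)
    if total_weight == 0:
        return None  # nobody on the team is qualified to estimate this card
    return sum(points * w for points, w in votes) / total_weight

# A database-schema card: the database developer's vote dominates,
# while the content editor's guess barely moves the result.
votes = [(13, 1.0),   # database developer
         (8, 0.6),    # backend coder
         (3, 0.1)]    # content editor
print(round(weighted_estimate(votes), 1))  # → 10.6
```

Note how the unweighted average of the same votes would be 8.0; the weighting pulls the estimate toward the people who actually know the work.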

 

Creativity Under Pressure

[In the fall of 2013, I completed a course on Coursera titled "Creativity, Innovation, and Change" presented by Drs. Jack V. Matson, Kathryn W. Jablokow, and Darrell Velegol at Pennsylvania State University. It was an excellent class. At the end, they invited the class (how many tens of thousands of us?) to submit short essays about our experience. The plan was to select the best of these essays and roll them into a free Kindle book. Today, they sent out this update:

We are sorry to inform you that we have decided not to proceed with the publication of a CIC eBook. The submissions were read and commented on by four reviewers.  The consensus was that the manuscripts for the most part would take efforts far beyond our capabilities and means to edit and upgrade to meet the standards for publishing.  We are very sorry that the plan did not work out.

What follows is a slightly edited version of the essay I submitted for consideration.]

Creativity Under Pressure – When necessity drives innovation.

Background

When you hear someone speak of an individual they know as being creative, what images come to mind? Often, they spring from stereotypes and assumptions about such an individual being an artist of some sort. Someone unconstrained by time or attachment to career, family, or a mortgage. My personal favorite is the image of a haggard individual wearing a beret, a thin cigarette balanced on their lower lip, busy being inspired by things we mortals cannot see. A foreign accent adds the final touch to firmly set the speaker’s creative individual in the “That’s not me” category. Our preconceived notions and assumptions assure us we are not creative.

The truth is all of us are creative. The artist’s Muses are not the only source of inspiration. Chance can inspire creative ideas by the convergence of seemingly unrelated circumstances and events. An activity as passive as sleep can lead to creative ideas. Moving away from beauty toward the other end of the inspiration spectrum, the source may not necessarily be pleasant. Frustration and irritation may inspire us to find a creative solution as we spend an inordinate amount of time searching for a way to scratch a just-out-of-reach itch. Crisis, too, can be a source of inspiration, often of the very intense variety. Chance surrounds all of us. We all sleep. And alas, we are all subject to frustrating or crisis situations from time to time in our lives. We are immersed in opportunities for creative inspiration.

Perhaps the least obvious or explored opportunities for applying creativity and experimenting with innovative ideas happen when we are under pressure to perform. At first glance, the tendency is to think that such situations require extensive knowledge, abundant prior practice, and scenario rehearsals in order to navigate them successfully. It’s fair to say that the chances for successfully responding to a crisis situation are greatly enhanced by deep knowledge and experience related to the situation.

The Apollo 13 mission to the Moon is a familiar example of crisis-driven creativity by a team of experts. Survival of the astronauts following the explosion of an oxygen tank depended on NASA engineers finding a way to literally fit a square peg into a round hole. The toxic build-up of CO2 could only be prevented by finding a way to fit a cube-shaped CO2 filter into a cylinder-shaped socket using nothing but the materials the astronauts had with them. Of course, we know from history that the team of engineers succeeded in this exercise of creative improvisation.

Individual experts have also succeeded in devising creative solutions in crisis situations, and in doing so introduced critical changes in protocols and procedures that have saved lives. For example, smokejumper Wagner Dodge’s actions in the Mann Gulch forest fire on August 5, 1949, introduced the practice of setting escape fires as a way to protect firefighters caught in “blow-ups.”

What I’ve always found interesting about these and similar examples is that, although the creative and innovative solutions were found while “on the job” and using established expertise, the solutions were counter to what the individuals and teams were expected to do. The NASA engineers were paid to design and build an extremely high-tech solution for sending three men to the Moon and bringing them safely back to Earth. Their job descriptions likely didn’t call for the ability to “build a fully functional CO2 scrubber from a pile of junk.” Wagner Dodge was expected to put out fires, not start them.

The Course

What else is important in preparing us to respond creatively in high pressure situations? It’s a question that’s dogged me for years. The “Creativity, Innovation, and Change” (CIC) course offered by Pennsylvania State University and taught by Professors Jack Matson, Kathryn Jablokow, and Darrell Velegol offered an opportunity to explore this question.

The first insight from the course was the importance of the “adjacent possible” to creative and innovative problem solving, even in crisis situations. The phrase was originally suggested by the theoretical biologist Stuart Kauffman to describe evolutionary complexity. The idea is that innovative or creative ideas occur incrementally. While they may appear as substantial leaps forward, they are in fact derived from a collection of adjacent ideas that coalesced to make a single idea possible. As individuals explore deeper and farther into an idea space, they extend the boundaries around which adjacent ideas collect, increasing the potential for new idea combinations – in other words, increasing the likelihood of creative or innovative ideas.

In the case of Apollo 13, the deep experience and knowledge of the engineers allowed them to consider a wide spectrum of possibilities for combining an extremely limited number of objects in a way that could remove CO2 from a spacecraft. In the case of Wagner Dodge, his extensive experience with fighting forest fires allowed him to spontaneously combine a variety of “adjacent possibilities” in a way that led to the idea of lighting an escape fire.

That’s the theory, anyway. There’s a difference, though, between theory and practice. As the saying often attributed to Albert Einstein goes, “In theory they are the same. In practice, they are not.” The CIC course offered an abundance of techniques and methods that facilitated the transfer of learning and strengthened the connection between theory and practice. In particular, there were two important reframes that opened the door to deliberately improving how I approach creativity and innovation in stressful situations:
  1. “Successful” failures are those that are strategic. That is, not random guesses about what will work, but deliberate experiments designed to succeed. Yet if they fail, the design of the experiments also reveals weaknesses that are preventing the eventual success. Unconsciously, I had already become reasonably good at doing this. But there was significant room for improvement. Using many of the methods and techniques offered during the CIC course, I deliberately unpacked my unconscious competence in this area, consciously explored how I could practice becoming even more competent with this skill, and am now exploring ways to integrate the new capabilities back into unconscious competence.

  2. “Failure” is a necessary, even desired, process for finding success. This ran counter to my get-it-right perfectionist approach to success. Likely the result of having to work in too many crisis situations where failure was not an option, it was nonetheless a poor strategy for finding success in day-to-day business. In concert with point number one, these failures should be strategic.

Each of these insights is encapsulated in the “Intelligent Fast Failure” (IFF) principle presented in the CIC course and further described in Prof. Matson’s book, “Innovate or Die!”:

The “Intelligent” part refers to gaining as much knowledge as possible from each failure. The “Fast” part means speeding up the trials to quickly map the unknown thereby minimizing frustration and resources spent.

The “fast” part also increases the pool of “adjacent possibilities” and raises the potential for successful innovations to emerge from the process.

The Test

Has all this experimentation and thought practice made a difference in my ability to respond more creatively in stressful situations? A full-on test in crisis mode hasn’t happened yet. And frankly, I’d count myself fortunate if I never had to face such a test again. But there are indications the changes are having a positive effect.

These days, work typically offers the most abundant opportunities for stressful situations. Most recently, I was tasked with coordinating a significant change in how an organization went about completing projects. The prevailing process had deep roots in the company’s culture and was incapable of scaling to meet the organization’s growth goals. With so much personal investment in the old way of doing things, implementing a more agile and scalable process was going to require as much mediation and negotiation as process definition and skill development.

Using the techniques and methods that support the IFF principle, I have been successfully introducing a wide range of new ideas and process improvements into the organization in a way that makes them appear less as a threat and more as a value to each of the stakeholders.

Most Workers Come to Work When Feeling Sick

The staffing firm OfficeTeam has an interesting infographic showing how often people go into work when they’re ill. It isn’t stated, but the implication is that “go into work” means “go into the office to work.” My subjective experience matches what the small OfficeTeam study revealed. Far too many people come into the office to work when, in the interest of their co-workers’ health, it would be better if they stayed home.

I’ve heard many excuses for why people do this: “It’s just a head cold.” “I’m past the contagious phase.” “There’s too much important work to get done this week.” “I don’t want to get any further behind than I already am.” The list is endless. When my career advanced to the point that I needed to manage people, I learned more about what was motivating some of these excuses. They included things like a home life so miserable that the employee would rather come into the office sick than stay at home. There was also a perverse incentive in play at work environments that allotted employees a bulk quantity of paid time off for use as both vacation and sick time. Want a longer vacation? Work sick rather than take a “vacation” day to recuperate and prevent the spread of illness to co-workers. As a manager, I frequently used my authority to send people home when they came in sick.

Viewed from another perspective, the OfficeTeam study reveals an often unstated value of telecommuting. When one at least has the option to work remotely, and their productivity is not compromised by that option, they can organize their work around such things as seasonal illness much more effectively and prevent the spread of illness to co-workers. Personally, when working in an office I took paid time off for sick days. I found it much easier to “catch up” when I felt better than to try to remain productive while sick. In fact, the work that was done while ill was usually of poor quality and needed to be reworked later. When feeling ill while working remotely, however, I would put in a day or two of light work and make up the slack later when I felt better, even if that make-up time was on the weekend.

In this type of scenario and given the long view, productivity is likely to be consistent and of high quality. Furthermore, containing the illness and preventing the spread to other members of the work team helps maintain organizational productivity. A topic for another day is: How do you measure productivity in a remote work force so that it can be evaluated for consistency, quality, and other such metrics?

Essential Graphics #2

Big Data Edition

Those on the Big Data Bandwagon say you can never have too much data. But if you do have too much to manage and are overloaded, you can always jump off the wagon and push.

[Image: overload]

Parkinson’s Law of Perfection

C. Northcote Parkinson is best known for, not surprisingly, Parkinson’s Law:

Work expands so as to fill the time available for its completion.

But there are many more gems in “Parkinson’s Law and Other Studies in Administration.” On a re-read this past week, I discovered this:

It is now known that a perfection of planned layout is achieved only by institutions on the point of collapse. This apparently paradoxical conclusion is based upon a wealth of archaeological and historical research, with the more esoteric details of which we need not concern ourselves. In general principle, however, the method pursued has been to select and date the buildings which appear to have been perfectly designed for their purpose. A study and comparison of these has tended to prove that perfection of planning is a symptom of decay. During a period of exciting discovery or progress there is no time to plan the perfect headquarters. The time for that comes later, when all the important work has been done. Perfection, we know, is finality; and finality is death.

The value of re-reading classics is that what was missed on a prior read becomes apparent given the current context. My focus for much of 2013 was on mapping out a software design process for a group of largely non-technical instructional designers. If managing software developers is akin to herding cats, finding a way to shepherd non-technical creative types such as instructional designers (particularly old-school designers) can be likened to herding a flock of canaries – all over the place in three dimensions.

What made this effort successful was framing the design process as a set of guidelines that were easy to track and monitor. The design standards and best practices, for example, consist of five bullet points. These are far from perfect, but they allow for the dynamic vitality suggested by Parkinson. If the design standards and best practices document ever grew past something that could fit on one page, it would suggest the company is overly specialized and providing services to a narrow slice of the potential client pie. In the rapidly changing world of adult education, this level of perfection would most certainly suggest decay and risk collapse as client needs change.

Collaboration and Code Authorship Credits

In their book “Team Geek”, Brian Fitzpatrick and Ben Collins-Sussman make the following valid point:

The tradition of putting your name at the top of your source code is an old one (heck, both of us have done it in the past [GE: As have I.]), and may have been appropriate in an age where programs were written by individuals and not teams. Today, however, many people may touch a particular piece of code, and the issue of name attribution in a file is the cause of much discussion, wasted time, and hurt feelings.

They don’t elaborate on this much beyond describing the complexity of fairly assigning credit when multiple individuals touch the code over time. Their solution is to assign credit at the project level rather than at the code level; code-level authorship can be derived from the version control system if needed. This fixes the attribution problem and many other issues besides.

In my own experience, I haven’t observed any “hurt feelings,” but then I haven’t managed teams the size of those managed by Fitzpatrick and Collins-Sussman. What I have observed is a reluctance to modify code when an author has been identified in it. There are many sources for this reluctance. A few are:

  • “It’s not my code, so [insert author's name here] should be making any changes.” This is, basically, a shifting of burden response.
  • “[Insert author's name here] is [belligerent/moody/possessive/the alpha geek] and I don’t want to deal with that.”
  • “[Insert author's name here] wrote this?! What a [hack/newbie]. Why do I have to clean this up?” Responses like this seek to establish blame for any current or future problems with the code.

When authors are not explicitly identified in the code, reluctance to make code changes based on responses like those above is minimized, and the current coder can better focus on the task at hand.

Trust and Managing People

I’ve frequently heard managers express the need to “trust” their employees with the work they hired them to do before giving them full control over their responsibilities. On one level, this makes sense. But it is a very basic level, usually involving detailed tasks. Hiring an individual into a help desk position, for example, might require several weeks of detailed supervision to ensure the new hire understands the ticket system, proper phone etiquette, and the systems they will be required to support. When I hear the “trust” criteria come from C-level executives, however, it’s usually a reliable red flag for control issues likely to limit growth opportunities for the new hire or perhaps even the organization.

The trust I’m referring to is not of the moral or ethical variety. What one holds in their heart can only be revealed by circumstance and behavior. Rather, it is the trust in a fellow employee’s competency to perform the tasks for which they were hired.

If you’ve just hired an individual with deep experience and agreed to pay them a hefty salary to fulfill that role, does it make sense for a senior executive to take on the burden of micro-monitoring and micro-managing the new hire’s performance until some magical threshold of “trust” is reached regarding the new hire’s ability to actually fulfill the role? When challenged, I’ve yet to hear an executive clearly articulate the criteria for trust. It’s usually some variant of “I’ll know it when I see it.”

Better to follow the old axiom: trust, but verify. Before you even begin to interview for the position, know the criteria for success in the role. It is much easier to monitor the individual’s fit – and, more importantly, to make corrective interventions (which are likely to be very minor adjustments) – on an infrequent basis than to attempt to do two jobs at once (yours and that of the position for which you just hired someone).

Agile Planning Poker has a Tell

As an exercise, planning poker can be quite useful in instances where no prior method or process existed for estimating levels of effort. Problems arise when organizations don’t modify the process to suit the project, the composition of the team, or the organization.

The most common team composition for these types of sizing efforts has involved technical areas – developers and UX designers – with less influence from strategists, instructional designers, quality assurance, and content developers. With a high degree of functional overlap, consensus on an estimated level of effort was easier to achieve.

As the estimating team begins to include more functional groups, the overlap decreases. This tends to increase the frequency of back-and-forth between functional groups pressing for a different size (usually larger) based on their domain of expertise. This is good for group awareness of overall project scope; however, it can extend the time needed for consensus, as individuals may feel the need to press for a larger size so as not to paint themselves into a commitment corner.

Additionally, when a more diverse set of functional groups is included in the estimation exercise, it becomes important to capture the size votes from the individual functional domains while driving the overall exercise based on the group consensus. Doing so means the organization can collect a more granular set of data useful for future sizing estimates by more accurately matching and comparing, for example, the technical vs. support material vs. media development efforts between projects. This may also minimize the desire by some participants to press harder for their estimate, knowing that it will be captured somewhere.
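Capturing per-domain votes alongside the consensus could be as simple as the following sketch; the class, field names, and sample votes are my own illustrative assumptions, not a tool the post prescribes.

```python
# Sketch of recording per-domain size votes alongside the group consensus,
# so future sizing can compare, e.g., development vs. content effort across
# projects. Names and sample data are illustrative assumptions.

from collections import defaultdict

class SizingRecord:
    def __init__(self, card):
        self.card = card
        self.domain_votes = defaultdict(list)  # domain -> list of size votes
        self.consensus = None

    def vote(self, domain, size):
        self.domain_votes[domain].append(size)

    def close(self, consensus):
        # Record the group's agreed size once discussion ends.
        self.consensus = consensus

    def domain_medians(self):
        # Upper median per domain, for simplicity with even-sized vote lists.
        return {d: sorted(v)[len(v) // 2] for d, v in self.domain_votes.items()}

record = SizingRecord("Build course player")
record.vote("development", 95)
record.vote("development", 80)
record.vote("content", 40)
record.close(consensus=85)
print(record.domain_medians())  # → {'development': 95, 'content': 40}
```

With records like these accumulated over several projects, the gap between a domain’s typical vote and the eventual consensus becomes measurable rather than anecdotal.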

Finally, when communicating estimates to clients, or after the project has moved into active development, project managers can better unpack why a particular estimate came out at a particular size. While the overall project (or a component of it) may have been given a score of 95 on a scale of 100, a manager can look back on the vote and see that the development effort dominated, whereas the content editors may have voted a size of 40. This might also influence how managers negotiate timelines for (internal and external) resource requirements.

Accountability as a Corporate Value

My experience, and observation with clients, is that accountability doesn’t work particularly well as a corporate value. The principal reason is that it is an attribute of accusation. If I were to sit you down and open our conversation with, “There is something for which you are going to be held accountable,” would your internal response be positive or negative? Similarly, if you were to observe a superior clearly engaged in a behavior that was contrary to the interests of the business, but not illegal, how likely are you to confront them directly and hold them accountable for the transgression? In many cases, that’s likely to be a career-limiting move.

There is a reason no one gives awards for accountability. Human nature is such that most people don’t want to be held accountable. People do, however, want others to be held accountable. It’s a badge worn by scapegoats and fall guys. Consequently, accountability as a corporate value tends to elicit blame behavior and, in several extreme cases I’ve observed, outright vindictiveness. The feet of others are held to the accountability fire with impunity in the name of upholding the enshrined corporate value.

Another limitation to accountability as a corporate value is that it implies a finality to prior events and a reckoning of behaviors that somehow need to balance. What’s done is done. Time now to visit the bottom line, determine winners and losers, good and bad. Human performance within any business isn’t so easily measured. And this is certainly no way to inspire improvement.

So overall, then, a corporate value of accountability is a negative value, like the Sword of Damocles, something to make sure never hangs over your own head.

Yet, in virtually every case, I can recognize the positive intention behind the listed value. What I think most organizations are after is more in line with the ability to recognize when something could have been done better. To that end, a value of “response ability” would serve better: the complete package of being able to recognize a failure, learn from the experience, and respond in a way that builds toward success. On the occasions I’ve observed individuals behaving this way repeatedly and consistently, the idea of “accountability” is nearly meaningless. The inevitable successes have as their foundation all the previous failures. That’s how the math of superior human performance is calculated.

Agile 2.0

The IT radar is showing increased traffic related to The-Next-Big-Thing-After-Agile. The hype suggests it’s “Agile 2.0” or perhaps “Ultra Light Agile.” This also suggests the world is ready for something I’ve been working on for quite some time: Ultimate Ultra Extreme Lean-To-The-Bone Hyper Flexible Agile Software Development Methodology. The essence of all previous methodologies distilled to one easy-to-remember step:

  1. Get it done.

You read it here first.

And, you’re welcome.