C. Northcote Parkinson is best known for, not surprisingly, Parkinson’s Law:
Work expands so as to fill the time available for its completion.
But there are many more gems in “Parkinson’s Law and Other Studies in Administration.” On a re-read this past week, I discovered this:
It is now known that a perfection of planned layout is achieved only by institutions on the point of collapse. This apparently paradoxical conclusion is based upon a wealth of archaeological and historical research, with the more esoteric details of which we need not concern ourselves. In general principle, however, the method pursued has been to select and date the buildings which appear to have been perfectly designed for their purpose. A study and comparison of these has tended to prove that perfection of planning is a symptom of decay. During a period of exciting discovery or progress there is no time to plan the perfect headquarters. The time for that comes later, when all the important work has been done. Perfection, we know, is finality; and finality is death.
The value of re-reading classics is that what was missed on a prior read becomes apparent given the current context. My focus for much of 2013 was on mapping out a software design process for a group of largely non-technical instructional designers. If managing software developers is akin to herding cats, finding a way to shepherd non-technical creative types such as instructional designers (particularly old-school designers) can be likened to herding a flock of canaries – all over the place in three dimensions.
What made this effort successful was framing the design process as a set of guidelines that were easy to track and monitor. The design standards and best practices, for example, consist of five bullet points. These are far from perfect, but they allow for the dynamic vitality suggested by Parkinson. If the design standards and best practices document ever grew past something that could fit on one page, it would suggest the company is overly specialized and providing services to a narrow slice of the potential client pie. In the rapidly changing world of adult education, this level of perfection would most certainly suggest decay and risk collapse as client needs change.
In their book “Team Geek”, Brian Fitzpatrick and Ben Collins-Sussman make the following valid point:
The tradition of putting your name at the top of your source code is an old one (heck, both of us have done it in the past [GE: As have I.]), and may have been appropriate in an age where programs were written by individuals and not teams. Today, however, many people may touch a particular piece of code, and the issue of name attribution in a file is the cause of much discussion, wasted time, and hurt feelings.
They don’t elaborate on this much other than to describe the complexity of fairly assigning credit when multiple individuals touch the code over time. Their solution is to assign credit at the project level rather than the code level. Code level authorship can be derived from the version control system if needed. The solution fixes this and many other issues.
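Deriving code-level authorship from version control can be demonstrated with git itself. A minimal sketch (the file name and author identities are hypothetical, and the throwaway repository exists only to make the example self-contained):

```shell
set -e
# Build a scratch repository with commits from two different authors.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.name "Alice"; git config user.email "alice@example.com"
echo "v1" > widget.c
git add widget.c && git commit -qm "initial version"
git config user.name "Bob"; git config user.email "bob@example.com"
echo "v2" >> widget.c
git commit -qam "revise widget"
# Per-author commit counts for this one file -- no name needed in a header:
git shortlog -sn HEAD -- widget.c
```

`git shortlog -sn` collapses the file’s history into per-author commit counts, so attribution lives in the history rather than in a comment that goes stale.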
In my own experience, I haven’t observed any “hurt feelings,” but I haven’t managed teams the size of those managed by Fitzpatrick and Collins-Sussman. What I have observed is a reluctance to modify code if an author has been identified in the code. There are many sources for this reluctance. A few are:
- “It’s not my code, so [insert author’s name here] should be making any changes.” This is, basically, a shifting of burden response.
- “[Insert author’s name here] is [belligerent/moody/possessive/the alpha geek] and I don’t want to deal with that.”
- “[Insert author’s name here] wrote this?! What a [hack/newbie]. Why do I have to clean this up?” Responses like this seek to establish blame for any current or future problems with the code.
When authors are not explicitly identified in the code, reluctance to make changes based on responses like those above is minimized and the current coder can better focus on the task at hand.
I’ve frequently heard managers express the need to “trust” their employees with the work they hired them to do before giving them full control over their responsibilities. On one level, this makes sense. But it is a very basic level, usually involving detailed tasks. Hiring an individual into a help desk position, for example, might require several weeks of detailed supervision to ensure the new hire understands the ticket system, proper phone etiquette, and the systems they will be required to support. When I hear the “trust” criteria come from C-level executives, however, it’s usually a reliable red flag for control issues likely to limit growth opportunities for the new hire or perhaps even the organization.
The trust I’m referring to is not of the moral or ethical variety. What one holds in their heart can only be revealed by circumstance and behavior. Rather, it is the trust in a fellow employee’s competency to perform the tasks for which they were hired.
If you’ve just hired an individual with deep experience and agreed to pay them a hefty salary to fulfill that role, does it make sense for a senior executive to burden himself with the task of micro-monitoring and micro-managing the new hire’s performance until some magical threshold of “trust” is reached regarding the new hire’s ability to actually fulfill the role? When challenged, I’ve yet to hear an executive clearly articulate the criteria for trust. It’s usually some variant of “I’ll know it when I see it.”
Better to follow the military axiom: Trust, but verify. Before you even begin to interview for the position, know the criteria of success for fulfilling the role. Much easier to monitor the individual’s fit – and more importantly make corrective interventions (which are likely to be very minor adjustments) – on an infrequent basis rather than attempt to do two jobs at once (yours and that of the position for which you just hired someone).
As an exercise, planning poker can be quite useful in instances where no prior method or process existed for estimating levels of effort. Problems arise when organizations don’t modify the process to suit the project, the composition of the team, or the organization.
The most common team composition for these types of sizing efforts has involved technical areas – developers and UX designers – with less influence from strategists, instructional designers, quality assurance, and content developers. With a high degree of functional overlap, consensus on an estimated level of effort was easier to achieve.
As the estimating team begins to include more functional groups, the overlap decreases. This tends to increase the frequency of back-and-forth between functional groups pressing for a different size (usually larger) based on their domain of expertise. This is good for group awareness of overall project scope; however, it can extend the time needed for consensus as individuals may feel the need to press for a larger size so as not to paint themselves into a commitment corner.
Additionally, when a more diverse set of functional groups is included in the estimation exercise, it becomes important to capture the size votes from the individual functional domains while driving the overall exercise based on the group consensus. Doing so means the organization can collect a more granular set of data useful for future sizing estimates by more accurately matching and comparing, for example, the technical vs. support material vs. media development efforts between projects. This may also minimize the desire by some participants to press harder for their estimate, knowing that it will be captured somewhere.
Finally, when communicating estimates to clients or after the project has moved into active development, project managers can better unpack why a particular estimate came out at a particular size. While the overall project (or a component of the project) may have been given a score of 95 on a scale of 100, a manager can look back on the vote and see that the development effort dominated the vote whereas content editors may have voted a size estimate of 40. This might also influence how managers negotiate timelines for (internal and external) resource requirements.
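The bookkeeping described above amounts to recording per-domain votes alongside whatever consensus drives the exercise. A minimal sketch (the domain names, sizes, and the use of a median as the consensus rule are all illustrative assumptions, not a prescribed method):

```python
from statistics import median

def record_round(votes):
    """Record one sizing round.

    votes: mapping of functional domain -> that domain's size vote.
    Returns the group consensus (a median here, purely as an example)
    plus the per-domain record kept for later analysis.
    """
    consensus = median(votes.values())
    return consensus, dict(votes)

# Hypothetical round: development dominates while content votes much lower.
votes = {"development": 95, "content": 40, "ux": 60}
consensus, record = record_round(votes)
# The consensus drives the exercise; `record` preserves the per-domain
# votes a project manager can unpack when negotiating timelines later.
```

The point of the design is that nothing is discarded: the single consensus number is what the team commits to, while the retained record supplies the more granular data for comparing future projects.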
My experience, and observation with clients, is that accountability doesn’t work particularly well as a corporate value. The principal reason is that it is an attribute of accusation. If I were to sit you down and open our conversation with “There is something for which you are going to be held accountable,” would your internal response be positive or negative? Similarly, if you were to observe a superior clearly engaged in a behavior that was contrary to the interests of the business, but not illegal, how likely are you to confront them directly and hold them accountable for the transgression? In many cases, that’s likely to be a career limiting move.
There is a reason no one gives awards for accountability. Human nature is such that most people don’t want to be held accountable. People do, however, want others to be held accountable. It’s a badge worn by scapegoats and fall guys. Consequently, accountability as a corporate value tends to elicit blame behavior and, in several extreme cases I’ve observed, outright vindictiveness. The feet of others are held to the accountability fire with impunity in the name of upholding the enshrined corporate value.
Another limitation to accountability as a corporate value is that it implies a finality to prior events and a reckoning of behaviors that somehow need to balance. What’s done is done. Time now to visit the bottom line, determine winners and losers, good and bad. Human performance within any business isn’t so easily measured. And this is certainly no way to inspire improvement.
So overall, then, a corporate value of accountability is a negative value, like the Sword of Damocles, something to make sure never hangs over your own head.
Yet, in virtually every case, I can recognize the positive intention behind the listed value. What I think most organizations are going after is more in line with the ability to recognize when something could have been done better. To that end, a value of “response ability” would serve better: the complete package of being able to recognize a failure, learn from the experience, and respond in a way that builds toward success. On the occasions I’ve observed individuals behaving in this manner repeatedly and consistently, the idea of “accountability” is nearly meaningless. The inevitable successes have as their foundation all the previous failures. That’s how the math of superior human performance is calculated.
The IT radar is showing increased traffic related to The-Next-Big-Thing-After-Agile. The hype suggests it’s “Agile 2.0” or perhaps “Ultra Light Agile.” This also suggests the world is ready for something I’ve been working on for quite some time: Ultimate Ultra Extreme Lean-To-The-Bone Hyper Flexible Agile Software Development Methodology. The essence of all previous methodologies distilled to one, easy to remember step:
- Get it done.
You read it here first.
And, you’re welcome.
In a basic growth model, some finite resource is consumed at a rate such that the resource is eventually depleted. When that happens, the growth that was dependent on that resource stops and the system begins to collapse. If the resource happens to be renewable, the rate of consumption may eventually match the rate of renewal and the system enters a state of equilibrium (no growth). This is illustrated by the black line in Figure 1. In this second scenario, if the rate of consumption exceeds the rate of renewal, the system will again collapse.
In the Solow model of growth (neoclassical growth model) a new element is introduced: the effect of technology or innovation on the growth curve. Without innovation, in systems where technology stays fixed, growth will eventually stop. The introduction of innovative solutions to resource problems, however, has the effect of raising the upper bound to growth limits. This is illustrated by the red line in Figure 1.
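The two curves can be sketched numerically with a simple logistic model in which innovation periodically raises the carrying capacity. This is only an illustration of the shape of the argument, not Solow’s actual formulation, and every parameter below is an arbitrary choice:

```python
def simulate(steps, rate=0.1, capacity=100.0, innovate=False):
    """Resource-limited (logistic) growth toward a ceiling.

    With innovate=True the ceiling is raised every 50 steps, standing in
    for an innovation that relaxes the binding resource limit.
    """
    x = 1.0
    series = []
    for t in range(steps):
        if innovate and t > 0 and t % 50 == 0:
            capacity *= 1.5  # innovation raises the upper bound to growth
        x += rate * x * (1 - x / capacity)  # one logistic growth step
        series.append(x)
    return series

fixed = simulate(200)                   # technology fixed: growth stalls
raised = simulate(200, innovate=True)   # ceiling keeps moving: growth continues
```

With a fixed ceiling the series plateaus just below its limit of 100; when the ceiling is raised periodically, growth continues well past it, which is the qualitative difference between the black and red lines in Figure 1.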
A prevailing assumption with innovation is that it is necessarily synonymous with invention. To be innovative is to create something that has not previously existed. This is an erroneous assumption. History is filled with accounts of dominant societies furthering their success by adopting innovative discoveries made by smaller societies. The adoption of Arabic numerals by countries that had previously used Roman numerals is a striking example of a dominant society integrating an innovation from a smaller society.
The challenge for an organization, then, isn’t so much how to be innovative, rather, how to better recognize and adopt innovations discovered elsewhere. More succinctly, how to better seek out and distinguish innovative solutions aligned with the organization’s strategy from those that simply rate high on the coolness scale.
This is interesting: Getting Crafty: Why Coders Should Try Quilting and Origami
I’ve never done any quilting (I’ve a sister who’s excellent at that), but I’ve done origami since forever. In fact, origami was a way to explain to other people what I did for a living. I’d start with a 6” x 6” piece of paper that was white on one side and black on the other (digital!). Then I’d fold a boat, or a frog, or a crane. That’s what software developers work with – they begin with ones and zeros, bits and bytes. From there, they can build anything.
Not sure the explanation always worked, but at least it was entertaining and instilled a small measure of appreciation for what I and other software developers did for a living. Certainly better than describing the challenges of buffer overflow issues or SQL injection attack counter measures. That approach gets one uninvited from parties.
There is an interesting conversation thread on Slashdot asking “What practices impede developers’ productivity?” The conversation is in response to an excellent post by Steve McConnell from 2008 addressing productivity variations among software developers and teams and the origin of “10x” – that is, the observation noted in the wild of “10-fold differences in productivity and quality between different programmers with the same levels of experience and also between different teams working within the same industries.”
The Slashdot conversation has two main themes. The first centers on communication: “good” meetings, “bad” meetings, the time of day meetings are held, status reports by email – good, status reports by email – bad, interruptions for status reports, perceptions of productivity among non-technical coworkers and managers, unclear development goals, unclear development assignments, unclear deliverables, too much documentation, too little documentation, poor requirements.
A second theme in the conversation is reflected in what systems dynamics calls “shifting the burden”: individuals or departments that do not need to shoulder the financial burden of holding repetitively unproductive meetings involving developers, arrogant developers who believe they are beholden to none, the failure to run high quality meetings, code fast and leave thorough testing for QA, reliance on tools to track and enhance productivity (and then blaming them when they fail), and, again, poor requirements.
These are all legitimate problems. And considered as a whole, they defy strategic interventions to resolve. The better resolutions are more tactical in nature and rely on the quality of leadership experience in the management ranks. How good are they at 1) assessing the various levels of skill among their developers and 2) combining those skills to achieve a particular outcome? There is a strong tendency, particularly among managers with little or no development experience, to consider developers as a single complete package. That is, every developer should be able to write new code, maintain existing code (theirs and others), debug any code, test, and document. And as a consequence, developers should be interchangeable.
This is rarely the case. I can recall an instance where a developer, I’ll call him Dan, was transferred into a group for which I was the technical lead. The principal product for this group had reached maturity and as a consequence was beginning to become the dumping ground for developers who were not performing well on projects requiring new code solutions. Dan was one of these. He could barely write new code that ran consistently and reliably on his own development box. But what I discovered is that he had a tenacity and technical acuity for debugging existing code.
Dan excelled at this and thrived when this became the sole area of his involvement in the project. His confidence and respect among his peers grew as he developed a reputation for being able to ferret out particularly nasty bugs. Then management moved him back into code development where he began to slide backward. I don’t know what happened to him after that.
Most developers I’ve known have had the experience of working with a 10x developer, someone with a level of technical expertise and productivity that is undeniable, a complete package. I certainly have. I’ve also had the pleasure of managing several. Yet how many 10x specialists have gone underutilized because management was unable to correctly assess their skills and assign them tasks that match their skills?
Most of the communication issues and shifting the burden behaviors identified in the Slashdot conversation are symptomatic of management’s unrealistic expectations of relative skill levels among developers and the inability to assess and leverage the skills that exist within their teams.