Collaboration and Code Authorship Credits

In their book “Team Geek”, Brian Fitzpatrick and Ben Collins-Sussman make the following valid point:

The tradition of putting your name at the top of your source code is an old one (heck, both of us have done it in the past [GE: As have I.]), and may have been appropriate in an age where programs were written by individuals and not teams. Today, however, many people may touch a particular piece of code, and the issue of name attribution in a file is the cause of much discussion, wasted time, and hurt feelings.

They don’t elaborate on this much other than to describe the complexity of fairly assigning credit when multiple individuals touch the code over time. Their solution is to assign credit at the project level rather than the code level; code-level authorship can be derived from the version control system if needed. This approach resolves the attribution problem along with a number of related issues.
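For example, here is a minimal sketch, assuming a Git repository and Python 3, of how per-file authorship can be pulled from version control rather than from a header comment (the file path is hypothetical, and git blame would give the same answer at the line level):

    import subprocess

    def file_authors(path):
        """Return (commit_count, author) pairs for one file, most active author first.

        Relies on "git shortlog -sn HEAD -- <path>" and assumes it runs inside a Git work tree.
        """
        out = subprocess.run(
            ["git", "shortlog", "-sn", "HEAD", "--", path],
            capture_output=True, text=True, check=True,
        ).stdout
        pairs = []
        for line in out.splitlines():
            count, author = line.strip().split("\t", 1)  # shortlog emits "<count>\t<author>"
            pairs.append((int(count), author))
        return pairs

    if __name__ == "__main__":
        for count, author in file_authors("src/billing.py"):  # hypothetical path
            print(f"{author}: {count} commits")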

In my own experience, I haven’t observed any “hurt feelings,” but I haven’t managed teams the size of those managed by Fitzpatrick and Collins-Sussman. What I have observed is a reluctance to modify code when an author has been identified in it. This reluctance has many sources. A few are:

  • “It’s not my code, so [insert author’s name here] should be making any changes.” This is, basically, a shifting-of-the-burden response.
  • “[Insert author’s name here] is [belligerent/moody/possessive/the alpha geek] and I don’t want to deal with that.”
  • “[Insert author’s name here] wrote this?! What a [hack/newbie]. Why do I have to clean this up?” Responses like this seek to establish blame for any current or future problems with the code.

When authors are not explicitly identified in the code, reluctance to make changes based on responses like those above is minimized and the current coder can better focus on the task at hand.

Trust and Managing People

I’ve frequently heard managers express the need to “trust” their employees with the work they hired them to do before giving them full control over their responsibilities. On one level, this makes sense. But it is a very basic level, usually involving detailed tasks. Hiring an individual into a help desk position, for example, might require several weeks of detailed supervision to ensure the new hire understands the ticket system, proper phone etiquette, and the systems they will be required to support. When I hear the “trust” criteria come from C-level executives, however, it’s usually a reliable red flag for control issues likely to limit growth opportunities for the new hire or perhaps even the organization.

The trust I’m referring to is not of the moral or ethical variety. What one holds in their heart can only be revealed by circumstance and behavior. Rather, it is the trust in a fellow employee’s competency to perform the tasks for which they were hired.

If you’ve just hired an individual with deep experience and agreed to pay them a hefty salary to fulfill that role, does it make sense for a senior executive to burden himself with the task of micro-monitoring and micro-managing the new hire’s performance until some magical threshold of “trust” is reached regarding the new hire’s ability to actually fulfill the role? When challenged, I’ve yet to hear an executive clearly articulate the criteria for trust. It’s usually some variant of “I’ll know it when I see it.”

Better to follow the military axiom: Trust, but verify. Before you even begin to interview for the position, know the criteria for success in the role. It is much easier to monitor the individual’s fit – and, more importantly, to make corrective interventions (which are likely to be very minor adjustments) – on an infrequent basis than to attempt to do two jobs at once (yours and that of the position for which you just hired someone).

Agile Planning Poker has a Tell

As an exercise, planning poker can be quite useful in instances where no prior method or process existed for estimating levels of effort. Problems arise when organizations don’t modify the process to suit the project, the composition of the team, or the organization.

The most common team composition for these types of sizing efforts has involved technical roles – developers and UX designers – with less influence from strategists, instructional designers, quality assurance, and content developers. With a high degree of functional overlap, consensus on an estimated level of effort was easier to achieve.

As the estimating team begins to include more functional groups, the overlap decreases. This tends to increase the frequency of back-and-forth between functional groups pressing for a different size (usually larger) based on their domain of expertise. This is good for group awareness of overall project scope; however, it can extend the time needed for consensus, as individuals may feel the need to press for a larger size so as not to paint themselves into a commitment corner.

Additionally, when a more diverse set of functional groups is included in the estimation exercise, it becomes important to capture the size votes from the individual functional domains while still driving the overall exercise from the group consensus. Doing so means the organization can collect a more granular set of data useful for future sizing estimates by more accurately matching and comparing, for example, the technical vs. support-material vs. media development efforts between projects. It may also reduce the urge by some participants to press harder for their estimate, since they know it will be captured somewhere.
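As a concrete illustration, here is a minimal sketch of one way to record per-domain votes alongside the consensus size; the story names, domains, and numbers are all hypothetical:

    from statistics import median

    # One consensus size per backlog item, plus the per-domain votes that produced it,
    # so the domain-level data survives for later comparison across projects.
    votes = {
        "course-module-3": {
            "consensus": 95,
            "by_domain": {"development": 95, "ux": 70, "content": 40, "qa": 55},
        },
        "assessment-engine": {
            "consensus": 60,
            "by_domain": {"development": 65, "ux": 40, "content": 30, "qa": 60},
        },
    }

    def domain_history(domain):
        """All recorded votes for one functional domain, for use in future sizing sessions."""
        return [item["by_domain"][domain]
                for item in votes.values()
                if domain in item["by_domain"]]

    # Compare, for example, how development effort has stacked up against content effort.
    print(median(domain_history("development")), median(domain_history("content")))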

Finally, when communicating estimates to clients, or after the project has moved into active development, project managers can better unpack why a particular item was estimated at a particular size. While the overall project (or a component of the project) may have been given a score of 95 on a scale of 100, a manager can look back on the vote and see that the development effort dominated the result, whereas the content editors may have voted a size of 40. This might also influence how managers negotiate timelines for (internal and external) resource requirements.

Accountability as a Corporate Value

My experience, and my observation with clients, is that accountability doesn’t work particularly well as a corporate value. The principal reason is that it is an attribute of accusation. If I were to sit you down and open our conversation with “There is something for which you are going to be held accountable,” would your internal response be positive or negative? Similarly, if you were to observe a superior clearly engaged in behavior that was contrary to the interests of the business, but not illegal, how likely are you to confront them directly and hold them accountable for the transgression? In many cases, that’s likely to be a career-limiting move.

There is a reason no one gives awards for accountability. Human nature is such that most people don’t want to be held accountable. People do, however, want others to be held accountable. It’s a badge worn by scapegoats and fall guys. Consequently, accountability as a corporate value tends to elicit blame behavior and, in several extreme cases I’ve observed, outright vindictiveness. The feet of others are held to the accountability fire with impunity in the name of upholding the enshrined corporate value.

Another limitation to accountability as a corporate value is that it implies a finality to prior events and a reckoning of behaviors that somehow need to balance. What’s done is done. Time now to visit the bottom line, determine winners and losers, good and bad. Human performance within any business isn’t so easily measured. And this is certainly no way to inspire improvement.

So, overall, a corporate value of accountability is a negative value: like the Sword of Damocles, it is something to make sure never hangs over your own head.

Yet, in virtually every case, I can recognize the positive intention behind the listed value. What I think most organizations are going after is more in line with the ability to recognize when something could have been done better. To that end, a value of “response-ability” would serve better: the complete package of being able to recognize a failure, learn from the experience, and respond in a way that builds toward success. On the occasions I’ve observed individuals behaving this way repeatedly and consistently, the idea of “accountability” is nearly meaningless. The inevitable successes have as their foundation all the previous failures. That’s how the math of superior human performance is calculated.

Agile 2.0

The IT radar is showing increased traffic related to The-Next-Big-Thing-After-Agile. The hype suggests it’s “Agile 2.0” or perhaps “Ultra Light Agile.” This also suggests the world is ready for something I’ve been working on for quite some time: Ultimate Ultra Extreme Lean-To-The-Bone Hyper Flexible Agile Software Development Methodology. The essence of all previous methodologies distilled to one easy-to-remember step:

  1. Get it done.

You read it here first.

And, you’re welcome.

Innovation and Limits to Growth

In a basic growth model, some finite resource is consumed at a rate such that the resource is eventually depleted. When that happens, the growth that depended on the resource stops and the system begins to collapse. If the resource is renewable, the rate of consumption eventually matches the rate of renewal and the system enters a state of equilibrium (no growth). This is illustrated by the black line in Figure 1. In this second scenario, if the rate of consumption exceeds the rate of renewal, the system will again collapse.

In the Solow model of growth (the neoclassical growth model), a new element is introduced: the effect of technology, or innovation, on the growth curve. Without innovation, in systems where technology stays fixed, growth will eventually stop. The introduction of innovative solutions to resource problems, however, has the effect of raising the upper bound on growth. This is illustrated by the red line in Figure 1.
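The shape of the two curves can be sketched numerically. The following is a minimal illustration, not the Solow equations themselves: simple logistic growth toward a resource ceiling, with an assumed one-time innovation that raises the ceiling partway through (all parameter values are made up):

    def simulate(steps=200, r=0.1, ceiling=100.0, t_innovate=None, boost=1.5):
        """Logistic growth toward a resource ceiling; an optional innovation raises the ceiling."""
        x, k, series = 1.0, ceiling, []
        for t in range(steps):
            if t == t_innovate:
                k *= boost            # innovation raises the upper bound on growth
            x += r * x * (1 - x / k)  # growth slows to zero as x approaches the ceiling
            series.append(x)
        return series

    baseline = simulate()               # plateaus near the original ceiling (black line)
    boosted = simulate(t_innovate=100)  # resumes growth toward the raised ceiling (red line)
    print(round(baseline[-1]), round(boosted[-1]))  # roughly 100 vs. 150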

Figure 1 – Innovation Boost

A prevailing assumption with innovation is that it is necessarily synonymous with invention. To be innovative is to create something that has not previously existed. This is an erroneous assumption. History is filled with accounts of dominant societies furthering their success by adopting innovative discoveries made by smaller societies. The adoption of Arabic numerals by countries that had previously used Roman numerals is a striking example of a dominant society integrating an innovation from a smaller society.

The challenge for an organization, then, isn’t so much how to be innovative, rather, how to better recognize and adopt innovations discovered elsewhere. More succinctly, how to better seek out and distinguish innovative solutions aligned with the organization’s strategy from those that simply rate high on the coolness scale.

Using Origami to Explain What I Do

This is interesting: Getting Crafty: Why Coders Should Try Quilting and Origami

I’ve never done any quilting (I’ve a sister who’s excellent at that), but I’ve done origami since forever. In fact, origami was a way to explain to other people what I did for a living. I’d start with a 6” x 6” piece of paper that was white on one side and black on the other (digital!). Then I’d fold a boat, or a frog, or a crane. That’s what software developers work with – they begin with ones and zeros, bits and bytes. From there, they can build anything.

Not sure the explanation always worked, but at least it was entertaining and instilled a small measure of appreciation for what I and other software developers did for a living. Certainly better than describing the challenges of buffer overflow issues or SQL injection countermeasures. That approach gets one uninvited from parties.

 

Achieving 10x

There is an interesting conversation thread on Slashdot asking “What practices impede developers’ productivity?” The conversation is in response to an excellent post by Steve McConnell from 2008 addressing productivity variations among software developers and teams and the origin of “10x” – that is, the observation noted in the wild of “10-fold differences in productivity and quality between different programmers with the same levels of experience and also between different teams working within the same industries.”

The Slashdot conversation has two main themes. One focuses fundamentally on communication: “good” meetings, “bad” meetings, the time of day meetings are held, status reports by email – good, status reports by email – bad, interruptions for status reports, perceptions of productivity among non-technical coworkers and managers, unclear development goals, unclear development assignments, unclear deliverables, too much documentation, too little documentation, and poor requirements.

A second theme in the conversation is reflected in what systems dynamics calls “shifting the burden”: individuals or departments that do not need to shoulder the financial burden of holding repeatedly unproductive meetings involving developers, arrogant developers who believe they are beholden to none, the failure to run high-quality meetings, coding fast and leaving thorough testing to QA, reliance on tools to track and enhance productivity (and then blaming them when they fail), and, again, poor requirements.

These are all legitimate problems. And considered as a whole, they defy strategic interventions to resolve. The better resolutions are more tactical in nature and rely on the quality of leadership experience in the management ranks. How good are they at 1) assessing the various levels of skill among their developers and 2) combining those skills to achieve a particular outcome? There is a strong tendency, particularly among managers with little or no development experience, to consider developers as a single complete package. That is, every developer should be able to write new code, maintain existing code (theirs and others), debug any code, test, and document. And as a consequence, developers should be interchangeable.

This is rarely the case. I can recall an instance where a developer, I’ll call him Dan, was transferred into a group for which I was the technical lead. The principal product for this group had reached maturity and as a consequence was becoming the dumping ground for developers who were not performing well on projects requiring new code solutions. Dan was one of these. He could barely write new code that ran consistently and reliably on his own development box. But what I discovered is that he had a tenacity and a technical acuity for debugging existing code.

Dan excelled at this and thrived when this became the sole area of his involvement in the project. His confidence and respect among his peers grew as he developed a reputation for being able to ferret out particularly nasty bugs. Then management moved him back into code development where he began to slide backward. I don’t know what happened to him after that.

Most developers I’ve known have had the experience of working with a 10x developer, someone with a level of technical expertise and productivity that is undeniable, a complete package. I certainly have. I’ve also had the pleasure of managing several. Yet how many 10x specialists have gone underutilized because management was unable to correctly assess their skills and assign them tasks that match their skills?

Most of the communication issues and shifting the burden behaviors identified in the Slashdot conversation are symptomatic of management’s unrealistic expectations of relative skill levels among developers and the inability to assess and leverage the skills that exist within their teams.

From Gamification to Simulation: Enhancing the Transfer of Learning

Each year brings to the business world a new swarm of buzzwords. Many are last year’s buzzwords, humming the same tune at a different pitch, fighting to find new life in the buzzword-eat-buzzword business world. Others are new arrivals from beyond the information horizon. I caught one of the new ones in my net earlier this year: “Gamification.”

The most succinct definition of gamification comes from research led by Deterding et al.1

“Gamification” is the use of game design elements in non-game contexts.

Sounds pretty straightforward, simple even. So why is gamification buzzing in everyone’s business ear?

To begin with, it’s the result of millennials growing up with games and extending their early experiences into the adult world. According to a 2008 Pew Research Center survey, “Fully 97% of teens ages 12-17 play computer, web, portable, or console games.”2 With this level of participation achieved by 2008, it isn’t a stretch to consider participation in games by the same age group over the prior five or more years to have been similarly high. The “gamification” experience, then, would include many, if not most, of the rising young professionals in a wide variety of industries. Indeed, as Deloitte reports, “The average game player today is 37 years old, and 42% of game players are women.”3

That this would be the case isn’t unusual. It happens with every generation. I remember commenting to a friend in the early ’00s about how all the ugly, boxy, chunky car styles prevalent on dealer lots were the consequence of a generation raised on Transformers cartoons ascending to automotive engineering positions.

Secondly, the technology has evolved such that designing and introducing game elements into business environments is a much more straightforward process. Toolkits, development libraries, APIs, and design practices have become more robust and standardized. Gamification design principles are being applied to a variety of contexts in part because it is much easier to do so now than it was ten or even five years ago.

However, the ascendant application of gamification in business should not presuppose an intrinsic value. There is plenty of room to question its value. In fact, several professional game designers, such as Amy Jo Kim, CEO of Shufflebrain, foresee the word “gamification” eventually disappearing from the lexicon of business. Rather, gamification “will become part of the toolkit of many different types of design. In a similar way ‘AI’ went away. We don’t think of Amazon as an AI system, though it does have what used to be called artificial intelligence in it with its collaborative filtering mechanism.”4 In other words, the word will pass, the buzz will die. What will be left is a trail of design techniques that will have folded into a larger set of existing techniques for enhancing user experiences.

Perhaps most importantly, in the context of business and professional development, there is little evidence to support the idea that gamification achieves anything deeper than entertainment and basic operant conditioning of simple behavioral changes. While the strategic application of gamification design principles can engage a learner and engender motivation, thereby enhancing an individual’s learning experience, gamification alone probably cannot drive deep learning. It’s a case of short-term factors producing short-term benefits. When the task is to acquire a deeper and broader understanding of a particular subject, a more immersive approach, such as is possible with simulations, is more effective.

Therein lies the challenge for those of us interested in preparing the rising generation of professionals with the skills needed to become the next generation of business leaders. To this end, even the most robust application of gamification principles will fail. The thinking skills needed to create competent leaders can’t be learned on the level of gaming. Amassing the greatest number of leadership points, the largest collection of “leadership” badges, and the longest run on a leader board have about as much in common with quality leadership as collecting cookbooks has with becoming a master chef.

It is highly probable that the skillful application of game elements can enhance the effects of a simulation. If they are over- or misapplied, however, they risk becoming the latest manifestation of the “dancing bologna” so prevalent in the early days of the world wide web. To this end, the insights from Werbach and Hunter offer guidance:

To figure out where gamification might fit your needs, consider the following four core questions:

  • Motivation: Where would you derive value from encouraging behavior?
  • Meaningful Choices: Are your target activities sufficiently interesting?
  • Structure: Can the desired behaviors be modeled through a set of algorithms?
  • Potential Conflicts: Can the game avoid conflicts with existing motivational structures?5

 

References

1 Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: Defining ‘Gamification’. MindTrek ’11, September 28-30, 2011, Tampere, Finland.

2 “Teens, Video Games and Civics”, Pew Internet & American Life Project, Pew Research Center, September 16, 2008. Retrieved from http://pewresearch.org/pubs/953/

3 “Tech Trends 2012”, Deloitte. Retrieved from http://www.deloitte.com/view/en_US/us/Services/consulting/technology-consulting/technology-2012/index.htm

4 Lecture 8.5 – Amy Jo Kim interview with Kevin Werbach, in the Gamification course offered by The Wharton School, University of Pennsylvania, through Coursera.org.

5 Werbach, K., & Hunter, D. (2012). For the Win: How Game Thinking Can Revolutionize Your Business (Kindle locations 556-563). Perseus Books Group. Kindle edition.

A Modest Proposal: 2.1

From the genius of David Burge comes an enhancement to my modest proposal for gently deflating the higher education bubble:

In the name of Consumer Protection, recent college graduates should have the ability to return the diploma and not make any reference to receiving education from the college in exchange for a 100% refund of college tuition. This may be extended with a graduated (ha, get it?) reduction for the last four years, with a red line at January 20, 2008.

Genius.