Practical Solutions for your Software Development Challenges


Monthly Tidbits

 

TINY TACTICS TO SAVE TIME AND MONEY


Every week we visit clients, help solve their problems and share information. Since many of the challenges we see are common across organizations, we share one or two of them each month with example solutions.

Tidbit #1: Writing Clear Requirements - One Requirement Per Sentence

Tidbit #2: Agile, Just Code and Ship, Merging with CMMI

Tidbit #3: Improving Your Service Delivery with CMMI® for Services - Quick Look

Tidbit #4: Measuring a Process

Tidbit #5: Where do you Spend Your Time – Fulfillment or Demand?

 

Tidbit #6: Work Planning and Monitoring & Control for a Services Organization

Tidbit #7: Lessons Learned When Appraising a Maturity Level 2 Services Organization

Tidbit #8: Using Checklists to Define Best Practices and Improve Performance

Tidbit #9: Reducing Project Friction - Making Work Routinely Faster

Tidbit #10: Using Scrum Wisely - Where Does Design Fit?

Tidbit #11: Using Medical Checklists to Simplify CMMI Process Development - Keeping it Very Simple

Tidbit #12: Synchronizing Scrum and Waterfall

Tidbit #13: Kick-starting a Service Delivery Team - Pragmatically Using CMMI for Services

Tidbit #14: When Scrum Uncovers Stinky Issues and Then Gets Blamed

Tidbit #15: Improving Product/System/IT Development with CMMI® for Development - Quick Look

Tidbit #16: Management Perspective: I Just Want to Deliver My System On Time

Tidbit #17: Everyone Gives Me Estimates and Commitments, but Few are Reliable

Tidbit #18: People Rise to the Standard Around Them - In Your Organization Too!

Tidbit #19: My Organization Wants to be Agile! What is a Good Life Cycle and What Should We Consider?

Tidbit #20: You're a Leader - Don't Put Up With the Status Quo, Lead the Way Forward!



Tidbit #1: Writing Clear Requirements - One Requirement Per Sentence

Having requirements for a project is essential to clarify the desired end goal and eliminate work that does not serve the end user. However, the majority of the requirements documents we review are pages of long narratives that obscure the intended end result. An example is:

Pay for product: The system should allow entry of data items relevant for customer payment, or allow a user ID to be entered to charge the amount to. If an ID is used, and there are insufficient customer account funds, then either request a credit increase or notify that the account is overdrawn and flag the account as 'credit-line'. Email a receipt of the transaction.

The problems with a paragraph-style requirement are that:

  • It takes an engineer a long time to thoroughly understand the requirement.
  • The multiple "and" and "or" words add ambiguity, since the requirement, "Do A and B or C," might actually mean only one of several possible combinations. If these are misinterpreted, then the misinterpretations are implemented as defects that are expensive to detect and correct later.
  • It contains multiple requirements, some of which will be lost and forgotten by the implementer and have to be re-discovered later in test. For example, "Email a receipt of the transaction" might get lost.
  • It is difficult to generate test cases that fully validate each requirement.

In this requirement, the options offered to the user are confusing since it is unclear whether the account should be automatically overdrawn or whether the user should be given other choices first.

Alternatively, one could rewrite the paragraph, specifying one requirement per sentence. For example:

1. Pay for product
1.1 User selects a) pay with credit card or b) debit ID account
1.2 User enters a) credit card or b) ID and password
1.3 If b) and insufficient funds in the ID account then offer user a credit increase option
1.4 If credit increase option selected then notify that the account is overdrawn
1.5 If the account is overdrawn then flag the account as 'credit-line'
1.6 Email a receipt of the transaction

In this style, we use one line per requirement and limit each requirement to one "and" or one "or" to avoid ambiguity. This makes the requirement quicker to read and understand. Test cases can be generated and mapped to each line of the requirement.
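The "limit each requirement to one 'and' or one 'or'" rule lends itself to a simple automated check. Below is a minimal sketch in Python that flags requirement lines containing more than one connective; the sample requirement text is invented for illustration:

```python
import re

def flag_ambiguous(requirements):
    """Flag requirement lines containing more than one 'and'/'or',
    which the one-requirement-per-sentence style tries to avoid."""
    flagged = []
    for req_id, text in requirements:
        # Count whole-word 'and'/'or' occurrences, case-insensitively
        connectives = re.findall(r"\b(and|or)\b", text.lower())
        if len(connectives) > 1:
            flagged.append((req_id, len(connectives)))
    return flagged

reqs = [
    ("R1", "Allow entry of payment data or a user ID, and if funds "
           "are insufficient then request a credit increase or notify overdrawn"),
    ("1.6", "Email a receipt of the transaction"),
]
print(flag_ambiguous(reqs))  # [('R1', 3)] -- R1 needs splitting
```

A check like this will not catch every ambiguity, but it gives reviewers a quick list of candidate sentences to split before inspection.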

[Top of page]


Tidbit #2: Agile, Just Code and Ship, Merging with CMMI

The interest in Agile software development practices continues to grow as companies seek more efficient methods of developing software while meeting market demands for delivery.

Scrum is a software development methodology based on Agile principles. Agile methodologies promote a project management process that encourages frequent inspection and adaptation; a leadership philosophy based on teamwork, self-organization and accountability; and strong customer involvement*.

We have seen companies improve their performance using Agile, CMMI, and other frameworks. In this tidbit, we discuss two common questions that we hear when visiting clients and prospects:

  1. "Agile in my company means code and ship, avoiding requirements and design; what shall I do?"
  2. "My company is trying to use CMMI, and now it wants to consider Agile; what shall I do?"

 

1. Agile in my company means code and ship, avoiding requirements and design; what shall I do?

If the only development activities are coding and some test, then we call an organization "Agile declared", not Agile! Although Agile is intended to speed up a project's progress, it still includes basic engineering and management steps.

Whether your organization chooses an Agile, Waterfall or incremental life cycle, activities such as requirements and design are performed to a) clarify the project at a time when rework is less expensive, and b) reduce the risk of failure. Abandoning these practices increases your risk of budget, deadline or quality problems.

If you want to be Agile, a good place to start is to take basic life cycle phases (such as requirements, prototype, design and test) and apply them to a small amount of work that takes between two and four weeks to complete. If you are contemplating becoming fully Agile, add the remaining practices that build communication and tracking into the project. Agile does not mean skipping all known best practices; it means adopting smaller versions of existing practices, plus the ones defined by Agile.

2. My company is trying to use CMMI, and now it wants to consider Agile; what shall I do?

Almost all the practices in Agile map to CMMI practices, but Agile provides more implementation details.

For example:

  • Project status reviews in CMMI can be implemented by the daily stand-up meetings in Agile.
  • Measuring project progress can be implemented by the sprint and backlog burndown charts.
  • Basic requirements definition and ownership can be implemented by user stories and the role of the product owner.
  • Effort and size estimation can be implemented by ideal time and story points.
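The sprint and backlog burndown charts mentioned above can be computed directly from the story points completed each day. A minimal sketch in Python (the sprint size and daily figures are invented for illustration):

```python
def burndown(total_points, completed_per_day):
    """Remaining story points at the end of each day of a sprint.
    The first entry is the committed total before work starts."""
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

# 10-day sprint, 40 story points committed
print(burndown(40, [3, 5, 0, 6, 4, 4, 5, 3, 6, 4]))
# [40, 37, 32, 32, 26, 22, 18, 13, 10, 4, 0]
```

Plotting this series against the ideal straight line from 40 to 0 shows at a glance whether the sprint is ahead of or behind plan, which is exactly the project-progress insight CMMI asks for.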

However, not everything in CMMI is in Agile. Some of these additional practices can be implemented in an agile way. For example, simple version control can be implemented by adding a version number to an artifact and taking a picture of it. Other CMMI practices (such as configuration management, risk management, supplier selection, process auditing and skills assessment / training) are additional steps that can be taken to mature an organization.


For a detailed comparison of CMMI and Scrum, see
http://www.processgroup.com/pgpostmar09.pdf

* Wikipedia.

Neil is a Certified Scrum Master, Certified High-maturity CMMI Lead Appraiser and Six Sigma Green Belt. Mary is a Certified Scrum Master and Certified High-maturity CMMI Lead Appraiser.

[Top of page]


 

Tidbit #3: Improving Your Service Delivery with CMMI® for Services* - Quick Look

The CMMI-SVC model is a collection of practices for service providers. Examples of service organizations that can use CMMI-SVC are:

  • Maintenance (of products, systems, facilities)
  • Human resources
  • Health care
  • IT services
  • Logistics

When and why to use CMMI-SVC?
The purpose of using any framework is to improve the performance of an organization. For a group that provides services, this can include: reducing the number of errors in the services provided, reducing the labor needed to provide each service, improving coordination and consistency of service delivery, reducing the risk of mistakes and surprises, and maintaining improvement gains over time.

A summary of the CMMI-SVC model is provided below. The items marked "(svc)" are the major differences compared to the CMMI for Development model (CMMI-DEV). The other Process Areas are common to the Development model [see Tidbit #15].

 

The benefits from using CMMI Level 2 and 3 practices are to:

  • Clarify customer service requirements early
  • Scope, plan, estimate and negotiate service work to manage expectations and achieve commitments
  • Track progress to know work status at any time
  • Maintain defined quality standards throughout the organization and report strengths and problems to management
  • Manage versions of documents, consumables and components so that time is not wasted using incorrect versions or recreating lost versions
  • Manage and coordinate multiple teams that have cross dependencies
  • Employ practices for capacity planning, incident resolution, service creation and service continuity across the organization
  • Look for new service opportunities and create services to satisfy those needs
  • Collect lessons learned and team data to systematically improve future organizational performance

Summary of CMMI-SVC (Staged Representation)

[Figure: Summary of CMMI-SVC Version 1.3 Process Areas, by maturity level]

CMMI® for Services, Version 1.3

The Maturity Level 2 Process Areas are summarized below.

Service Delivery: Deliver services in accordance with service agreements. Prepare, execute and improve.

Requirements Management: a) Define the services of the group, b) trace defined services to team activities, c) verify that resources, service definition and actual work done match.

Work Planning: Establish and maintain plans (major tasks, estimates, risks and resources) for service work.

Work Monitoring and Control: Understand the group's progress so that appropriate corrective actions can be taken when performance deviates significantly from the plan.

Supplier Agreement Management: Manage the acquisition of products and services from suppliers. This Process Area can be declared Not Applicable (after discussion with the appraiser) if there are no custom, risky, or integrated suppliers.

Measurement and Analysis: Develop and sustain a measurement capability that is used to support management information needs.

Process and Product Quality Assurance: Provide staff and management with objective insight into processes and associated work products.

Configuration Management: Establish and maintain the integrity of work products using configuration identification (labelling), configuration control (known modifications and permission to modify), configuration status accounting (final status of work products), and configuration audits (checks to verify changes).

The service-specific Process Areas are summarized below.

Capacity and Availability Management: Ensure effective service system performance and ensure that resources are provided and used effectively to support service requirements.

Incident Resolution and Prevention: Ensure timely and effective resolution of service incidents and prevention of service incidents as appropriate.

Service System Transition: Deploy new or significantly changed service system components while managing their effect on ongoing service delivery.

Service Continuity: Establish and maintain plans to ensure continuity of services during and following any significant disruption of normal operations.

Service System Development: Analyze, design, develop, integrate, verify, and validate service systems, including service system components, to satisfy existing or anticipated service agreements. [This is an optional Process Area.]

Strategic Service Management: Establish and maintain standard services in concert with strategic needs and plans.

The remaining Level 3 Process Areas are summarized below.

Organizational Process Focus: Coordinate improvements. Take what is learned at the team level and organize and deploy this information across the organization. The result is that all teams improve faster from the positive and negative lessons of others.

Organizational Process Definition: Organize best practices and historical data into a useful and usable library.

Organizational Training: Assess, prioritize and deploy training across the organization, including domain-specific, technology and process skills needed to reduce errors and improve team efficiency.

Integrated Project Management: Perform work planning using company-defined best practices and tailoring guidelines. Use organizational historical data for estimation. Identify dependencies and stakeholders for coordination, and incorporate this information into a master schedule or overall work plan.

As work progresses, coordinate all key stakeholders. Use thresholds to trigger corrective action (such as schedule and effort deviation metrics).

Risk Management: Assess and prioritize all types of risks in a project and develop mitigation actions for the highest priority ones. Start by considering a predefined list of common risks and use a method for setting priorities.

Decision Analysis and Resolution: Systematically select from alternative options using criteria, prioritization and an evaluation method.

* Information source = CMMI® for Services, Version 1.3

Condensed list of Level 2 + 3 practices: http://www.processgroup.com/condensed-cmmi1p3-svc-v1.pdf
Full model text: http://cmmiinstitute.com/assets/reports/10tr034.pdf


® CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.


[Top of page]




Tidbit #4: Measuring a Process

If your organization is using defined processes for project and organizational tasks, you might be ready for some measures to objectively see how well they are being implemented. Measurement data can help an organization identify strengths, weaknesses and provide historical data for future planning. In this article, we give example measures for some common processes used within a project.

If you are using CMMI and implementing Generic Practice 3.2*, the ideas listed below are examples of what could be measured to obtain insight into specific process implementations. Tailor these examples to fit your needs or use them as a starting point to generate your own measurements.

In the examples, the term Earned Value is used. Some organizations use Earned Value Management (EVM) to track project work. Each project task is assigned a value, and when that task is complete, the project records that value to indicate how much of the total project is complete. When process steps such as design, verification, risk and project tracking are included in a project's schedule, the completion of these steps is reflected in the total Earned Value for the project. When these process steps are skipped, or become stalled, the EVM system reflects these problems.
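The Cost Performance Index and Schedule Performance Index referenced below follow the classic EVM ratios. A minimal sketch, with invented budget figures:

```python
def earned_value_metrics(bcws, bcwp, acwp):
    """Compute CPI and SPI from planned value (BCWS),
    earned value (BCWP) and actual cost (ACWP)."""
    cpi = bcwp / acwp   # cost efficiency: > 1 means under budget
    spi = bcwp / bcws   # schedule efficiency: > 1 means ahead of plan
    return cpi, spi

# Planned $100k of work by now, earned $80k of it, spent $90k doing so
cpi, spi = earned_value_metrics(bcws=100, bcwp=80, acwp=90)
print(round(cpi, 2), round(spi, 2))  # 0.89 0.8 -- over budget and behind
```

A CPI or SPI drifting below 1.0 is the quantitative signal that the skipped or stalled process steps described above are accumulating.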

[Definition: # = Number of]

Requirements Management
Planned/actual effort/schedule to perform the process
# Requirements, growth, volatility over time
# Non-compliances for this process

Project Planning and Management
Planned/actual effort/schedule to perform the process
Final Earned Value number (Cost Performance Index, Schedule Performance Index) or planned vs. actual data showing how well planning and tracking went
# Non-compliances for this process

Risk Management
Planned/actual effort/schedule to perform the process
# Risks over time (increasing or decreasing)
# Risks realized or averted due to risk management
# Risks open vs. closed
# Non-compliances for this process

Configuration Management
Planned/actual effort/schedule to perform the process
# CM-related defects
Percentage of bad builds
# Non-compliances for this process

Process and Product Assurance
Planned/actual effort/schedule to perform the process
Earned Value of the process assurance activities (i.e., whether all assurance activities were performed, and whether they were under/over budget)
# Requests for help vs. # unrequested (or scheduled) audits (indicating how sought after the Quality Assurance group is)
# Non-compliances for this process

Measurement and Analysis
Planned/actual effort/schedule to perform the process
# Measures collected but unused
# Non-compliances for this process

Requirements Development
Planned/actual effort/schedule to perform the process
# Derived requirements (indicating the amount of initial ambiguity/completeness)
# Requirements, growth, volatility over time
# Defects in requirements documents, or defect density
# Non-compliances for this process

Design and Implementation
Planned/actual effort/schedule to perform the process
# Defects in design and code/drawings, or defect density
# Output over time (e.g., code/changes/design per engineer-week)
# Non-compliances for this process

Product Integration
Planned/actual effort/schedule to perform the process
# Defects found or defect density
Build audit results (e.g., #bad builds)
# Non-compliances for this process

Verification
Planned/actual effort/schedule to perform the process
# Defects found in verification or defect density
# Non-compliances for this process

Validation
Planned/actual effort/schedule to perform the process
# Test cases
# Defects found in validation, defect density, or test pass/fail percentage
# Actual test cycles vs. planned
# Rate of defect find vs. defect resolution
# Non-compliances for this process

Decision Analysis and Resolution
Planned/actual effort to perform the process
Earned Value for DAR tasks
# DARs performed per project
# Non-compliances for this process

Process Improvement
Planned/actual effort/schedule to perform the process
Earned Value for improvement tasks
Adoption of processes over time (e.g., #processes or practices used, measured via a mini-assessment)
# Defects found in process assets
# Non-compliances for this process

Training
Planned/actual effort/schedule to perform the process
# Training hours planned vs. received
Training evaluation scores
# Non-compliances for this process
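Several of the measures above reduce to simple ratios once the counts are collected. A minimal sketch of two of them, requirements volatility and defect density, using formulas commonly applied for these measures (the numbers are illustrative, and CMMI does not prescribe any particular formula):

```python
def requirements_volatility(added, changed, deleted, baseline_total):
    """Percentage of the baselined requirements affected in a period."""
    return 100.0 * (added + changed + deleted) / baseline_total

def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (or per page/drawing)."""
    return defects_found / size_kloc

print(requirements_volatility(added=4, changed=6, deleted=2,
                              baseline_total=120))  # 10.0 (% per period)
print(defect_density(defects_found=18, size_kloc=12.0))  # 1.5 defects/KLOC
```

Tracking these per iteration or release turns the raw counts in the lists above into trends that management can act on.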

* CMMI® for Development, v1.3, p. 115, http://cmmiinstitute.com/assets/reports/10tr033.pdf

[Thanks to Ed, Bruce, Pat, Christof, Michael and John for the review.]


[Top of page]




Tidbit #5: Where do you Spend Your Time – Fulfillment or Demand?

Over the years, numerous time management models have been developed to help categorize and improve time allocation. Usually these models group activities by separating urgent and important initiatives from less pressing obligations.

An example of a time management model is provided in Figure 1. With this framework, all activities are categorized based on how important and urgent they are. The goal is to spend the majority of one’s time in quadrants I or II, and spend as little as possible on less important activities in quadrants III and IV.

 

Figure 1. Adapted from “The 7 Habits of Highly Effective People,” Stephen R. Covey, Free Press


The difference between the activities in Quadrant I and Quadrant II is that the activities in Quadrant I have urgent deadlines. The same work can be performed in Quadrant II, but it is planned to avoid extreme and chronic urgency. Quadrant II also includes improvement and preventative actions that reduce the overall volume of problems encountered in Quadrant I.

Individuals and teams that over-commit or make numerous errors tend to spend a lot of time in Quadrant I trying to catch up. While activities in this quadrant can produce growth and success, too much time spent here will only increase the volume of work in Quadrant I. For example, a deliverable that is rushed out full of mistakes will lead to a list of urgent repairs. If those repairs are rushed, more urgent repairs will result.

A good target is for 40-70% of activities to be in Quadrant II. If you are spending more than 75% of your time on urgent items, you add excessive stress to your efforts and only increase the volume of Quadrant I work.

You might recognize that certain aspects of your life occupy each of the quadrants. Still, how much time do you spend in Quadrant II? The premise is that the more time you spend on these activities, the more you can achieve with less stress. Quadrant I should not be empty, since this is where the demand on you can cause growth.

Here are some examples of Quadrant II activities. When reading through them, identify some actions you can take.

  • Perform tasks well in advance of deadlines.
  • Estimate and plan work before committing in order to avoid being chronically over-committed.
  • For each crisis (Quadrant I), take steps to prevent future problems:
    • Identify similar errors (or trend) when a single significant error is found in a piece of work.
    • Plan ahead for the next major event, especially if a similar event does not go well.
    • Create and update a checklist so that when an important task is forgotten there is a visible reminder.
  • Schedule improvement activities:
    • Attend classes and seek a mentor.
    • Conduct team-building activities.
    • Assess and implement lessons learned.
    • Complete one small part of the project from beginning to end and apply the lessons to the remaining work.
  • For important and recurring activities, determine whether such efforts can be accomplished faster or more effectively:
    • Eliminate steps that do not impact the desired end result or add risk to the success of the activity.
    • Automate common tasks (e.g., collection, storing, reporting and sharing of project data).
    • Use a common organizational structure and format for project data when individuals frequently move between projects.
    • Avoid manual note-taking which requires later transcription. Always use a PC to collect minutes and actions with a common sharing mechanism (shared web page or database).
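The quadrant model above amounts to a two-axis classification plus a time tally, which can be sketched in a few lines. This is a minimal illustration; the week of logged hours is invented:

```python
def quadrant(important, urgent):
    """Map an activity to its quadrant in the importance/urgency model."""
    if important:
        return "I" if urgent else "II"
    return "III" if urgent else "IV"

def time_share(activities):
    """Percentage of logged hours per quadrant."""
    totals = {"I": 0.0, "II": 0.0, "III": 0.0, "IV": 0.0}
    for hours, important, urgent in activities:
        totals[quadrant(important, urgent)] += hours
    grand = sum(totals.values())
    return {q: round(100 * h / grand, 1) for q, h in totals.items()}

week = [
    (10, True, True),    # crisis fixes
    (20, True, False),   # planned project work
    (6,  False, True),   # interruptions
    (4,  False, False),  # time wasters
]
print(time_share(week))  # {'I': 25.0, 'II': 50.0, 'III': 15.0, 'IV': 10.0}
```

Logging a week this way gives you the baseline the article asks for: where your time goes now, and how far you are from the 40-70% Quadrant II target.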

Figure 2 provides a similar model to Figure 1; however, here the categories include additional names and are grouped in the graphical layout of an archery target to remind us where to focus.


Figure 2 – “Time Targets,” adapted from The Time of Your Life, by Anthony Robbins

Whichever representation you prefer, it is important that it be used to identify where time is spent now and to determine which actions you can take to shift your focus. In the end, one needs to be patient and make improvements incrementally and consistently.

[Top of page]


Tidbit #6: Work Planning and Monitoring & Control for a Services Organization

The CMMI services model consists of Process Areas (PAs) to help service organizations improve their performance and consistency. Much of the discussion in the CMMI community, and in available training, focuses on the new service-specific PAs rather than the core PAs pulled in from the development model (such as Requirements Management, Work Planning and Work Monitoring & Control). In this and later tidbits, we discuss some of these core PAs and how they can be used in a services group. This article focuses on Work Planning (WP) and Work Monitoring and Control (WMC).

The first thing to note is that WP and WMC are applied to the operation of a services group, not each service request. The Service Delivery PA handles individual service requests, leaving WP and WMC to handle the overall planning of the group, in addition to unusual special requests. This is a departure from the development model, where WP is applied to each development request (or group of requests).

To explain the goals of WP and WMC, we will take an example of a financial services group. The group:

  • consists of 15 people.
  • tracks the costs of 5-6 large projects at any one time.
  • has been providing project cost and budget tracking services for their division for many years.
  • is not planning on adding any new services in the next calendar year.
  • has been requested to upgrade to a new tool and new financial process for tracking the cost of all company projects.

 

In Table 1 we list the Specific Goals (SG) and Generic Goals (GG) of the process areas and the group’s implementation of them.

Process Area Goal

Example Implementation

WP SG1: Estimates of work planning parameters are established and maintained.

A roles and responsibilities document states the overall strategy and typical tasks performed when providing financial services on each project. Additional common tasks are defined by government regulatory financial procedures.

The effort needed to support all projects for the fiscal year is estimated, based on the number of projects, the complexity of each project and the financial services required.

The tasks for non-standard work are developed, budgeted and tracked. Examples of non-standard work are: moving data to a new tool, incorporating new procedures into the department, and adapting services to deal with new types of projects. 

WP SG2: A work plan is established and maintained as the basis for managing the work.

A specific schedule of financial reports is created for each project. The schedule consists of milestones for financial reporting, task assignments and the budget (effort and costs) required.

Risks are assessed for the department as a whole, looking at new tool, reporting and staffing risks.

All of the artifacts generated by the group (e.g., agreements and financial reports) are stored in pre-defined directories with pre-defined names and read/write access, backed up each day on an off-site server.

A plan is created for the department at the beginning of the fiscal year describing the overall workload, resources, stakeholders and budget. For each project to which the department provides financial services, there is an approved service level agreement for the duration of the project.

Each staff member has a training plan that includes any new skills for financial reporting (e.g., learning new tools, new financial reporting requirements and overall career development).

WP SG3: Commitments to the work plan are established and maintained.

The department plan is approved annually and adjusted monthly and covers all department activities. Specific service commitments are defined in service agreements with each project. These are approved before financial tracking starts.

WP GG2: The process is institutionalized as a managed process.

There is a corporate policy that states what annual fiscal planning and tracking activities are needed for the department and the financial planning activities needed for each project. These activities are tracked in the annual fiscal planning review and when each project starts up.

Senior managers receive an orientation on fiscal planning activities for the department.  All staff members receive planning training related to the activities they perform.

WMC SG1: Actual progress and performance are monitored against the work plan.

The hours expended across the department to provide services are tracked monthly.

When the complexity and size of a project changes, effort estimates to provide the financial services to the project are revised.

The commitments in the financial reporting schedule are tracked, and any changes in stakeholders are incorporated.

Weekly team meetings verify overall status of financial tracking activities across the department.

There are specific milestones on each project where financial reporting and progress are evaluated.

WMC SG2: Corrective actions are managed to closure when the work performance or results deviate significantly from the plan.

All action items and issues are tracked in meeting minutes and closed in subsequent reviews.

WMC GG2: The process is institutionalized as a managed process.

There is a corporate policy that states what annual fiscal tracking activities are needed for the department. This covers resource and cost expenditures, and approvals for changes.

Senior managers receive an orientation on fiscal planning/tracking activities for the department.  All staff members receive training in tracking the hours expended on the activities they perform.

Table 1 – Example Implementation of WP and WMC Goals
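The WP SG1 row above (effort estimated from the number of projects, their complexity and the services required) can be sketched as a simple model. The complexity weights below are purely hypothetical, not from the example group:

```python
# Hypothetical weights: hours of financial-services support
# per project per month, by project complexity.
WEIGHTS = {"low": 20, "medium": 40, "high": 80}

def annual_effort_hours(project_complexities, months=12):
    """Estimate yearly support effort for the services group
    from the complexity of each project in the portfolio."""
    return sum(WEIGHTS[c] for c in project_complexities) * months

# The example group tracks 5-6 large projects at any one time
portfolio = ["high", "medium", "medium", "low", "high", "low"]
print(annual_effort_hours(portfolio))  # 280 hours/month * 12 = 3360
```

Even a rough model like this satisfies the intent of WP SG1: the estimate is written down, based on stated parameters, and can be revised when project complexity changes (as WMC SG1 requires).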

Summary

These two process areas are not worded conveniently for service organizations, so some translation is required. However, after translation, WP and WMC are useful process areas to verify that a services group estimates and tracks operational items such as effort, tasks, milestones, risks and budget. Performing the practices of WP and WMC ensures that a group can consistently meet its service delivery expectations.

[Top of page]


Tidbit #7: Lessons Learned When Appraising a Maturity Level 2 Services Organization

The services model has been out for a while and version 1.3 will be released November 1st. Having conducted a Maturity Level 2 services appraisal*, we offer some lessons learned:

Strengths

  • The model is conceptually appropriate for a services organization. When the PAs are performed, the organization saves time and money by reducing mistakes and communication problems.
  • Requirements Management (REQM) helps a services group clearly define the services it is providing. When the services change, action is taken to manage the impact of the change.
  • Service Delivery (SD) is good at making sure service agreements are established and that the group is trained and ready to provide those services.
  • Project (Work) Planning (PP) is good at estimating and managing the amount of work a group has to do, assessing risk and defining work schedules.
  • Project (Work) Monitoring and Control (PMC) is good at tracking the actual workload and making adjustments.
  • Process and Product Quality Assurance (PPQA) is good at finding errors in service delivery early and making sure mistakes are captured and repaired.
  • Configuration Management (CM) is good at ensuring that the documents and data that the organization cares about are identified, labeled, protected and backed up.
  • Measurement and Analysis (MA) is effective at defining a set of objectives and measurements and ensuring that the data collected are used.

Lessons

  • An appraisal team should plan on rewriting some of the model text so that an audience unfamiliar with CMMI can understand what to do. For example, a typical services group will not readily understand the following sentence (taken from v1.3):

    "The purpose of Requirements Management (REQM) is to manage requirements of products and product components and ensure alignment between those requirements and the work plans and work products."

The interpretation our appraisal team had was:

"The purpose of Requirements Management (REQM) is to a) define the services of the group, b) trace defined services to team activities, and c) verify that resources, service definition and actual work done are aligned."

The need for translation applies to approximately half of the Maturity Level 2 practices, specifically in REQM, PP and PMC.  Examples in PP are:

PP SP 1.2 -- Establish a top-level work breakdown structure (WBS) to estimate the scope of the work

PP SP 1.3 -- Establish and maintain estimates of work product and task attributes

  • There is considerable overlap in the practices. These overlaps need to be untangled before an audience unfamiliar with CMMI will be able to understand what is expected.

Practices such as, "Establish and maintain the service strategy," and "Establish and maintain the approach to be used for service delivery and service system operations," could easily be interpreted as the same. Another example is, "Establish and maintain the overall work plan," and "Establish and maintain the plan for performing the [Service Delivery] process."

These similarities can be glossed over in a CMMI class, but cannot be glossed over during an appraisal. In our appraisal I wrote a translation guide that reworded many of the confusing practices and mapped together practices that had similar interpretations. 

CMMI v1.3 is slightly better for PP and PMC than v1.2 since the word "project" is replaced by "work."  However, REQM still uses the words "project" and "product/component requirements," even though what it means is "work" and "service requirements."  CM, PPQA and MA have only minor wording changes in v1.3.

  • Clarify terms. In our appraisal, the services group was performing bids, proposals, and financial tracking activities for very large construction projects. The construction projects consisted of requirements, products and components. Reading the CMMI text can lead the services teams to think that the practices referred to the projects they were supporting, not the services they were providing to the projects.
  • Train your team in the interpretation of the model before you appraise, otherwise interpretation issues will suck up your appraisal time.
  • Run an informal appraisal so that interviewees have some idea of what your model interpretation is, and you can obtain experience asking interview questions and understanding the responses.

Summary

The services model has many of the components needed to run a services organization. When an appraisal is performed, the wording and overlap in the practices can present challenges that need to be overcome by the team prior to the appraisal.

* Using CMMI 1.2.

[Top of page]


Tidbit #8: Using Checklists to Define Best Practices and Improve Performance

One of the underlying motives to document best practices within an organization is to reduce the mistakes made by project team members and managers. The resulting document can be used to train and remind people on expected practices.

When an organization commits to define best practices, it has to decide how much detail to include in the guidelines and templates. Two common failure modes are to write several tomes, hoping that each will be read and used, or to make the document so abstract that it contains no guidance at all.

This tidbit is a brief look into the use of checklists as a way to concisely document practices and find mistakes in an organization’s work.

Background
In 2009, Atul Gawande, associate professor of surgery at Harvard Medical School, wrote a book titled The Checklist Manifesto: How to Get Things Right [1]. The book details many stories on the development and use of checklists in healthcare, aviation, and other industries.

The basic premise of the book is that a simple checklist can ensure that critical steps have not been overlooked, whether due to haste, forgetfulness or inexperience. In the book, measurements were collected from surgeries performed around the world before and after the checklist [2] was employed. The results were:

  • Major complications down by 36%
  • Infections down by approximately 50%
  • The number of patients returned to surgery because of problems declined by 25%
  • 150 fewer patients than normal suffered harm from surgery (measured over 4,000 patients)
  • 27 fewer deaths (47% drop) caused from surgical complications

The surgery checklist was used at three pause points in a surgical procedure: before induction of anesthesia, before skin incision, and before the patient left the operating room. The checklist was described on one page, took one or two minutes to conduct, and contained 22 steps organized into 3 groups.

Guidelines for creating checklists

  • Select from two main styles of checklists: “Do-Confirm,” where critical steps (that should have already been performed) are verified; and “Read-Do” checklists that state what steps to perform given specific situations.
  • Select pause points in your team’s workflow where the completion of critical steps can be verified.
  • Condense the checklist onto one page and use single bullet point sentences.
  • Ensure items on the checklist are critical (high-risk) and are not already covered by other mechanisms.
  • Label the checklist with a title that reflects its objective, such as “Before project start checklist,” “After requirements gathered checklist,” or “Handoff of product to final shipment checklist.”
  • Run the checklist verbally with the team so that anyone who has an issue can speak up. Assign someone who can remain objective and undistracted to read the checklist aloud.
  • Plan to revise the checklist content and implementation numerous times until it is able to quickly detect serious problems.

A one-page summary set of guidelines for creating checklists is at: http://www.projectcheck.org/checklist-for-checklists.html

The examples provided by Gawande [3] are mostly medical, so here is a brief example of what a team might include in a checklist used to enter the architecture phase of a project. If the answer to any question is “no,” the team would stop and develop a corrective action or risk mitigation plan.

Before Architecture Starts Checklist

Are requirements defined for the architecture section being developed?

Have requirements been peer-reviewed for defects and omissions?

Have the requirements been baselined (version numbered)?

If the architecture will evolve over time, are there specific plans to assess and communicate changes to stakeholders?

Have all external interfaces to be addressed by the architecture been defined?

If modeling or a benchmark is needed to demonstrate that performance requirements (data traffic, throughput, response times) are feasible, has it been planned?
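To show how mechanical a pause-point check can be, here is a minimal Python sketch of a "Do-Confirm" checklist runner. The item names are illustrative (adapted from the example above), and the function name is mine, not from any particular tool:

```python
# Minimal "Do-Confirm" pause-point checklist sketch (hypothetical items).
# Any "no" answer stops the team until a corrective action is recorded.

BEFORE_ARCHITECTURE = [
    "Requirements defined for the architecture section being developed?",
    "Requirements peer-reviewed for defects and omissions?",
    "Requirements baselined (version numbered)?",
    "External interfaces to be addressed by the architecture defined?",
]

def run_checklist(items, answers):
    """Return the items answered 'no' -- each needs a corrective action."""
    return [item for item, ok in zip(items, answers) if not ok]

failed = run_checklist(BEFORE_ARCHITECTURE, [True, True, False, True])
for item in failed:
    print("STOP - corrective action needed:", item)
```

In practice the "answers" would come from the team reading the list aloud at the pause point; the value is in the conversation, not the script.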

How you could use checklists

  • Select the areas in your project or organization where you have pain points and turn existing ignored process documents and guidelines into checklists that your team can use.
  • Follow the guidelines above for creating checklists.
  • Put any excessive information not needed on the checklist into training material; reserve the checklist for high-risk and critical steps.

For more advice on making process steps more visible, see the article, Getting New Practices Used and Keeping Them Visible.

References

1. Gawande, A., The Checklist Manifesto: How to Get Things Right, Metropolitan Books, 2009. (Thanks to Jay S. for insisting that I read the book.)

2. Surgical checklist: http://www.projectcheck.org/uploads/1/0/9/0/1090835/surgical_safety_checklist_production.pdf

3. Website for further checklist examples. (Look at the style, not content): http://www.projectcheck.org/checklists.html

[Top of page]


 

Tidbit #9: Reducing Project Friction - Making Work Routinely Faster

The majority of organizations and teams around the globe want to work faster. However, not all groups have a systematic way to improve their speed, or maintain the speed they have. This tidbit summarizes steps that you can take to assess where your organization has friction and lists practices to improve and maintain your gains.

This article refers to previously written tidbit and newsletter articles that can be utilized to reduce your organization's friction.

1. Consider where your friction points are and select practices to reduce them

Friction in this article refers to the issues that slow an organization or project down, such as delays, approvals, problems, surprises and the lack of skills. Friction cannot be eliminated, but it can be reduced.

The first article, "Do More for Less," describes many example practices that can be used to speed up your teams. You won't need all of them; use this initial list to evaluate potential areas of improvement for your group.

Article: Do More for Less - http://www.processgroup.com/pgmininewsjan08.pdf

2. Perform team-level risk management

Any predefined list of issues you use to appraise yourself against will not be a perfect match for your organization. You can determine additional and project-specific areas of friction by performing risk management. Risk management is good at routinely assessing, sorting and prioritizing potential problems before they are realized.

Article: Coping With Risk - http://www.processgroup.com/pgp_v5_n2_1998_08.html#CS

45-min webinar*: Straightforward Risk Management for Projects That You Can Do Now - http://www.itmpi.org/webinars/default.aspx?pageid=841&category=14&speaker=58
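To make the "assess, sort and prioritize" step concrete, here is a minimal sketch of a team-level risk register in Python. The risk entries and scales are hypothetical examples; many teams keep exactly this in a spreadsheet:

```python
# Minimal risk-register sketch: rank risks by exposure = probability x impact.
# Risk entries and scales (probability 0-1, impact 1-10) are hypothetical.

risks = [
    {"risk": "Key developer leaves mid-project",   "probability": 0.2, "impact": 9},
    {"risk": "Supplier delivers component late",   "probability": 0.6, "impact": 7},
    {"risk": "Requirements change after baseline", "probability": 0.8, "impact": 4},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]   # simple exposure score

# Highest exposure first -- these get mitigation plans at the next team review.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in ranked:
    print(f'{r["exposure"]:.1f}  {r["risk"]}')
```

The point of the routine is the re-sort: risks are reassessed at a regular cadence so that mitigation effort follows the current top of the list.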

3. Capture the essence of your solution in a checklist so that new knowledge can be quickly accessed and routinely used

One way to kill any new practice is to overdocument it. Not only does this bury the essence of the practice, it also slows the organization down, since people have to plow through numerous pages of material to find out what to do.

This article describes ways to capture essential practices in a short checklist.

Article: Using Checklists to Define Best Practices and Improve Performance - http://www.processgroup.com/monthlytidbits.html#tidbit8

4. Sell new practices to your organization and educate them

When you solve an issue and make something better, you need to sell it to others. Just telling people to use the new tool, estimate in a new way or use a new template, will not cause them to adopt it. Education will also be needed to obtain the gains from your audience.

This next article discusses how to sell practices to your audience.

Article: Selling Quality to Your Organization - http://www.processgroup.com/pgpostsept09.pdf

The true value of any new practice is not realized until the whole population uses it. That requires people to be educated in the new practice so that it becomes second nature. Education does not have to be expensive; it just has to be done. This next article explains some choices to consider.

Article: Do Engineers Need to be Trained? (Training Your Staff on the Cheap) - http://www.processgroup.com/pgmininewsjuly08.pdf

5. Keep the new practice visible

Life has a way of taking over and burying new initiatives, causing any new practice to be lost after a few weeks. This article discusses ways to keep new practices visible.

Short article: Getting New Practices Used and Keeping Them Visible - http://www.processgroup.com/pgpostfeb10.pdf

6. Maintain the gains over time

If you have invested time in fixing an issue, you will want to maintain the gains long-term so that the organization can continue to benefit from it. The simplest way to do this is to use the checklists discussed above at the appropriate time, similar to how the surgeons use them in the checklist article.

Another, more comprehensive, way is to establish a process and quality audit program to provide a thorough overall picture of which practices are being done and where the gaps are. Audits have to be performed well to be valuable; you can't launch an audit program and expect it to immediately give you 20/20 vision into how the organization operates.

The last article explains one organization's implementation of a quality program. If a full-blown quality program is too much, stick with the checklists for now.

Article: Implementing Product and Process Quality Assurance (download PDF file Vol. 13, No. 1, October 2006) - http://www.processgroup.com/pgpostoct06.pdf

* The webinar is recorded by Compaid, Inc. There is a $29.99 fee payable to Compaid. Both webinars listed on their website are separate events of the same risk material. One event is longer because of the Q&A at the end.

[Top of page]


 

Tidbit #10: Using Scrum Wisely - Where Does Design Fit?

The Scrum development method is a useful set of guidelines to help teams scope, plan, status and demonstrate their work. When done well, it allows teams to get early feedback on the product they are building and provides a reasonably disciplined method of managing changes in scope.

Scrum includes some guidance for planning, estimation, requirements definition, testing and project statusing. For the subject of design, Scrum advocates “design by discovery,” or "emergent design." That is, as each series of functions is developed, the design emerges. The code is refactored (cleaned up) to stay consistent with current design ideas. There is no specific up front design or analysis phase in Scrum and no specific design guidelines. Agile refers to "just enough design,” but that has many definitions.

In many engineering projects, it is difficult to fully specify a design since the team might not know what will and won't work until it tries it. In less complex, smaller and self-contained projects, the risk of not having a design before coding might not kill the project since problems that are encountered can be fixed with acceptable cost. However, when more people and project locations are involved, and the cost of fixing design errors in the code is greater than fixing design errors before coding, then consideration should be given to some design activity up front. An emerging design is not necessarily a good one.

In the Agile world, many people debate "traditional" big up-front design versus allowing the design to emerge through coding. It is commonly seen as an all-or-nothing choice. A common rationalization is that coding involves thinking, thinking is design, and therefore coding will suffice for design.

What is missing is the understanding of what design is, why it is done, and how to do it in a way that is useful to the team. Fixing the design process is hard; skipping it is easier.

Mature teams that have developed design practices:

  • don't have debates on whether they are "traditional," "agile" or "fill-in-the-blank" -- they don't care about the label
  • focus their design effort on architecture-level design, higher-risk components, and areas that would be expensive to fix if they had design flaws
  • don't do all of the design up front; they know they can't
  • do design iterations to help them make design decisions that can be made now and identify areas of technical risk that need research
  • use prototypes and mockups to feed into the next iteration of the design
  • don't abandon design just because it is hard or less fun than coding
  • capture the design so that it can be reviewed for mistakes and referred to later

In this tidbit, we describe some ideas for hitting a balance*.

One of the characteristics of Scrum is that functionality is developed during every sprint (which is typically a 1-4 week period). This focus on developing functionality from the outset of the project can lead the team to skip (or minimize) non-development activities such as requirements analysis, design and system testing. When they are skipped, the result can be the creation of many parts that don't fit together, or the development of a system that works well on the first release, but cannot easily be changed.

Design is hard to do. When it has historically been done poorly (e.g., the design document is too large to manage, unreadable, or drowned out by non-design information), the baby is often thrown out with the bath water and replaced by declarations such as, "Our code is the design, and any novice can understand our code base of 100,000 non-commented lines of brilliant code." When Scrum is adopted, which has no explicit design phase, the jettisoned baby and bath water are promptly forgotten, until the team hits a large roadblock caused by the lack of design. The pendulum then swings back to previous bad design practices and the cycle repeats. This is by no means an agile-only issue; it has been around for decades.

If you research the topic of "design and Agile" or "design and Scrum," you will see a very large range of ideas of what design might mean. We found definitions ranging from, "Developing the features in order of risk," to "Do iteration modeling for a few minutes at the beginning of an iteration,” and advice such as, "Minimizing, and in some cases eliminating, up-front analysis and design activities saves both time and money."

What is design?
Here is a short summary of "design." It isn't perfect since it is an ambiguous word and there are numerous opinions to choose from.

Design is a map of the system that can be used to:

  • clarify and communicate concepts and definitions to other people involved in the project; this can cover architecture and detailed design issues
  • identify potential sticky areas that need to be investigated and researched to learn what works and what does not
  • find many (but not all) of the errors in a project earlier at less cost than developing the final product
  • provide a common reference point for design decisions
  • assess the impact of changes

The information that can result from design includes:

  • an architecture showing how the main components relate to each other
  • interfaces to the user, to other systems and between components
  • data definitions stating what data will be stored and where
  • how constraints will be handled, such as: design for testing, expansion, security, portability and technology (e.g., PC-based, cloud-based)

Design information can be captured:

  • in textual design notes (e.g., a series of one-sentence statements such as, "The server responds every 3 seconds with X output," "Component Y will handle all system errors," or “3 databases will be used and synced every 5 minutes”)
  • in graphical form (e.g., modeling languages, event tables, flow/timing diagrams)
  • as pseudo code – a textual description of what the code will do so that issues and defects can be found and resolved before time is spent developing code
  • in one document or split up based on the area being defined (e.g., placed in the header of each code file)

Design documentation is usually a hassle to keep up-to-date when design changes are made in the coding phase. You might consider developing architecture and design notes up front, reviewing them for defects and issues, and then updating changes to the design when the code is complete. The level of design documentation to keep, and where it is kept (e.g., separate document and/or code file header section) can be decided based on maintenance needs, ease-of-learning for new teams and future enhancement plans.

Incorporating design activities
Incorporating design activities into Scrum while maintaining the benefits of Scrum is not difficult. The intent of Scrum is to deliver working code every sprint. This benefit can be maintained by changing the percentage of coding and non-coding activities in each sprint. Instead of each sprint being 100% coding and feature testing, start earlier sprints with a small allocation of coding time and front-load these sprints with other tasks, such as architecture design. For example:

Sprint 1: X% architecture design, 100-X% other sprint activities
Sprint 2: Y% architecture design, 100-Y% other sprint activities
Sprint 3: Z% architecture design, 100-Z% other sprint activities

For example, X = 80, Y = 40, Z = 0.

The earlier sprints can use the coding time for high-risk features, core product components and features that will be needed to support later functionality. You can also add more time to later sprints for system test, performance testing and reliability testing when the time needed for coding decreases.

You can select the percentage allocation for the sprints based on the need for that activity and the risk of not doing it. You might decide that design takes zero percent for low risk projects or features, or that design has a large percentage of time allocated for the first 3 sprints to work on architecture, and much less from sprint 4 onwards.
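A front-loaded allocation like this can be written down as a simple schedule. Here is a minimal Python sketch (the percentages are illustrative, matching the X/Y/Z example above; the function name is mine):

```python
# Front-loaded design allocation per sprint (illustrative percentages).
# Whatever time is not spent on design goes to other sprint activities.

design_pct = {1: 80, 2: 40, 3: 0}   # sprint number -> % spent on architecture design

def allocation(sprint):
    """Return (design %, other-activities %); design tapers to zero."""
    d = design_pct.get(sprint, 0)    # sprints beyond the schedule are 100% other work
    return d, 100 - d

for s in range(1, 5):
    d, other = allocation(s)
    print(f"Sprint {s}: {d}% architecture design, {other}% other sprint activities")
```

A low-risk project might set every entry to zero; a high-risk one might keep design time in the schedule well past sprint 3.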

Summary
Don't abandon design because it is not called out specifically in Scrum. Don't get sucked into the near-worthless discussion of "traditional" versus "new." Determine for yourself what design activities you need and when they will get done. It’s your product and your money!

*Thanks to Venkat, Brian and Alan for their valuable inputs.

For Scrum assistance, see Scrum services

For all Scrum-related articles, see Scrum articles

[Top of page]


 

Tidbit #11: Using Medical Checklists to Simplify CMMI Process Development - Keeping it Very Simple

The time needed to write a process is usually a lot less than the time spent using it. For example, a project planning process of 1-2 pages may take a couple of days to develop and then be used numerous times. The benefit of using it outweighs the cost of developing it. For most process areas (PAs) of the CMMI, this is also the case. For example, a simple spreadsheet for risk management is a small effort to pay for managing numerous risks over months and years. However, in the CMMI for Services model, some PAs can take more effort to implement than their usage might warrant.

Requirements Management (REQM) in the Development model is a good example of where the effort in setting up the process is small in proportion to its benefit. A few days to determine how to manage requirements changes is small compared to the numerous requirements changes that are typically managed over the life of a project.

In the services model, REQM is used to define the basic services of a group. In many organizations, these service requirements don't change much (or at all) over time. Any PA, however, still expects a process to be defined (at some level); a policy to be developed; a plan for defining the service requirements (which might be less than a 1-day event); resources to be defined; training in the process to be provided; and process monitoring and auditing.

These are, of course, the generic practices of the PA. They are reasonable expectations but need to be done in proportion to the benefit and usage of REQM. For example, if the event of defining the process takes a few days, and it is used one day per year, then that is a lot of process for low usage.

To reduce this overhead, consider using a checklist. A previous article (Tidbit #8), described two kinds of checklists: "Read-Do" checklists that state what steps to perform given specific situations; and "Do-Confirm" checklists where critical steps (that should have already been performed) are verified. Below is an example of a checklist used for REQM in the services model. A mapping to the practices has been added to show how the PA has been implemented.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Service Requirements Management Process

Purpose: A checklist used to understand, confirm and manage changes to service requirements.

Policy: All service changes are managed using this checklist [gp2.1]

Do

Plan the requirements definition/review event [gp2.2]:

  • Date: _____
  • Time / resources needed [gp2.3]: _____
  • Responsibility [gp2.4]: _____
  • Stakeholders [sp1.2, gp2.7]:
    • 1: Role = Agree to services: <Name>. Commitment: _____
    • 2: Role = Agree to services: <Name>. Commitment: _____
    • 3: Role = Provide expertise: <Name>. Commitment: _____
    • 4: Role = Fund service: <Name>. Commitment: _____
    • 5: Role = Team member 1: <Name>. Commitment: _____
    • 6: Role = Team member 2: <Name>. Commitment: _____
    • 7: Role = Senior manager approval [gp2.10]: <Name>. Commitment: _____

Discuss new and changed service requirements with stakeholders to clarify understanding: [sp1.1, 1.3, 1.5]

  • Review current service requirements
  • Review proposed changes to service requirements
  • Human resources needed to implement change: _________________
  • New materials/consumables/computers needed to implement change: _________________
  • Current commitments and deadlines impacted: _________________
  • Added risks and mitigation actions: _________________
  • Record stakeholder commitments next to name [sp1.2]

Record major issues/actions

Update traceability mapping [sp1.4]

  • Label service requirements 1 through N
  • List impacted deliverables and documents for each requirement
  • State test method (e.g., peer review, test case, pilot) for each service requirement

Save this document as service-roles-vN.doc on X drive with change history comments [gp2.6]

Check

Has training been provided to perform the steps above? [gp2.5]

  • If not, training date / time / who _____

Check that all process steps above are performed [gp2.8]

  • Team check done?
  • Corrective actions needed/taken? _______________

Objective/independent check done [gp2.9]:

  • Auditor name: _______________
  • Audit date: _______________
  • Pass/fail?: _______________
  • If fail, corrective actions needed: _______________

Is senior management aware of this service requirements event, its results, and issues? [gp2.10]

  • Comments: _______________

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you are providing the services of an IT system (such as Amazon.com) or a college admissions service with 50,000 students, then your requirements management process might be more comprehensive than a simple checklist. However, in a small organization, or one where the services are straightforward and don't change much, a checklist might be adequate.

REQM Practice definition from CMMI

  • SP 1.1 Develop an understanding with the requirements providers on the meaning of the requirements.
  • SP 1.2 Obtain commitment to requirements from project participants.
  • SP 1.3 Manage changes to requirements as they evolve during the project.
  • SP 1.4 Maintain bidirectional traceability among requirements and work products.
  • SP 1.5 Ensure that project plans and work products remain aligned with the requirements.
  • GP 2.1 Establish and maintain an organizational policy for planning and performing the process.
  • GP 2.2 Establish and maintain the plan for performing the process.
  • GP 2.3 Provide adequate resources for performing the process, developing the work products, and providing the services of the process.
  • GP 2.4 Assign responsibility and authority for performing the process, developing the work products, and providing the services of the process.
  • GP 2.5 Train the people performing or supporting the process as needed.
  • GP 2.6 Place selected work products of the process under appropriate levels of control.
  • GP 2.7 Identify and involve the relevant stakeholders of the process as planned.
  • GP 2.8 Monitor and control the process against the plan for performing the process and take appropriate corrective action.
  • GP 2.9 Objectively evaluate adherence of the process and selected work products against the process description, standards, and procedures, and address noncompliance.
  • GP 2.10 Review the activities, status, and results of the process with higher level management and resolve issues.

[Top of page]

 

Tidbit #12: Synchronizing Scrum and Waterfall

Many organizations manage large projects using the Waterfall life cycle (See Figure 1). The life cycle (when used correctly) helps manage scope, time, cost and risk. It also provides visible milestones to coordinate team members and managers involved in the project. Like all life cycles, Waterfall has potential downsides, such as finding technical challenges late in the project and getting end-user feedback after many decisions have already been made. All of these risks can be managed by early prototyping, which is common in process-mature organizations.

Figure 1 - Waterfall and Scrum, and an Example Coordination

Over the past 10-15 years, Scrum* has become a popular approach to develop software, and by nature does not have phases. This means that if a software development team wants to use Scrum, while another group or company wants to use Waterfall, coordination is needed.

In Figure 1, Scrum breaks down all work into Sprints (typically 2- to 4-week increments). The intent is that code is developed and demonstrated during each sprint to seek early feedback. This feedback helps identify and resolve technical challenges and obtain user clarifications early in the project.

Incorporating Analysis Activities into Each Sprint
If a team wants to use Scrum but not lose the analysis activities of Waterfall, it can incorporate those practices into each sprint. The intent of Scrum is to deliver working code at the end of every sprint. This benefit can be maintained by changing the percentage of coding and non-coding activities for each sprint. For example, each sprint could be composed of:

  • 70% Requirements
  • 30% Other (design, code, test)

or

  • 70% Design (for the components the team has requirements for)
  • 30% Other (process new requirements, code, test)

or

  • 70% Code (for the pieces the team has design for)
  • 30% Other (process new requirements, code, design, test)

Synchronizing Waterfall and Scrum Teams
Let us suppose that a large project (using Waterfall) has a software component and the software team wants to use Scrum. The large project expects a software requirements document at the end of the requirements phase, after 8 weeks (see Figure 1).

The Scrum team:

  • Negotiates that draft 1 of the requirements document will be available for review by week 4 (end of sprint 1).
  • Plans to develop draft 2 of the requirements by week 6 based on feedback and investigation (mid sprint 2).
  • Plans to develop draft 3 by week 8 based on feedback and investigation (end of sprint 2).

Draft 3 is baselined as the "requirements document" until further changes are needed. Changes are then looked at every 2 weeks at the beginning of each sprint.

The team uses 4-week sprints as follows:

  • Sprint 1: 70% requirements elicitation and analysis, 30% other activities (design, code, test of high-risk or core components). Deliver requirements draft 1 after sprint 1.
  • Sprint 2: 70% requirements elicitation and analysis, 30% other activities (design, code, test of high-risk or core components). Deliver requirements draft 2 at week 6. Deliver baselined requirements after week 8 (end of sprint 2).
  • Sprint 3: 70% Design, 30% other activities (requirements, code, test of high-risk or core components).
  • Sprint 4: 70% Design, 30% other activities (requirements, code, test of high-risk or core components).

The 30% allocation of each sprint is to develop code that is known to be needed or to investigate risky areas early. The component is demonstrated at the end of each sprint to obtain feedback. Depending on the item built, this could be feedback from the development team, product owner, end user, systems engineer or test engineer.

The other phases and analysis expectations of Waterfall can be met and synchronized in a similar way. The remaining sprints will have a time allocation of X% analysis activity, 100-X% build real code and get feedback, where X could be 80, 60, 40, 20, 0, and will change over the course of the project.
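A synchronization plan like this can be captured as a simple table that both the Scrum team and the Waterfall project can read. Here is a minimal Python sketch; the percentages follow the example above, while the design-phase deliverable names are hypothetical placeholders:

```python
# Sketch of a Scrum/Waterfall synchronization plan (4-week sprints).
# Percentages match the example in the text; the design deliverable
# names are illustrative placeholders, not prescribed by either method.

plan = [
    # (sprint, analysis activity, % analysis, deliverable at sprint end)
    (1, "requirements elicitation and analysis", 70, "requirements draft 1"),
    (2, "requirements elicitation and analysis", 70, "baselined requirements"),
    (3, "design", 70, "design draft 1"),
    (4, "design", 70, "design draft 2"),
]

def coding_pct(analysis_pct):
    """Time not spent on analysis goes to code/test of high-risk or core parts."""
    return 100 - analysis_pct

for sprint, activity, pct, deliverable in plan:
    print(f"Sprint {sprint}: {pct}% {activity}, "
          f"{coding_pct(pct)}% other; deliver {deliverable}")
```

The deliverable column is what the Waterfall side sees; the percentage columns are what the Scrum team plans each sprint against.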

The approach of incrementally performing the analysis and developing the documents expected by Waterfall is made easier by breaking each document into early drafts, such as:

  • XX document draft 1
  • XX document draft 2
  • XX document draft N
  • XX document baseline

The benefit to the Scrum team is that they might have a lot more working code completed than typical before the Waterfall coding phase starts. Assuming the team has the discipline to develop components for which there are already requirements and test cases defined, then code will not be wasted. If the team is using all of their 30% coding time allocation to guess and develop speculative features, then all bets are off.

* See processgroup.com/pgpostoct2012.pdf (page 2) for summary of Scrum.

Questions, comments? Contact us.

[Top of page]


Tidbit #13: Kick-starting a Service Delivery Team - Pragmatically Using CMMI for Services

You are probably more than aware of the problems facing your service delivery organization. The list of problems usually starts with an overwhelming string of commitments and optimistic deadlines.

Your staff members are working progressively longer hours, and new customers are anxiously waiting for your team to start working with them. Meanwhile, the supplier you are depending on is not delivering the quality you expect, and changes in department priorities are causing havoc with your plans for creating a new service.

On top of all this, your group has been signed up to be “certified at CMMI Level 3.” At best, this sounds like just another documentation exercise with little or no positive impact on your group.

When an improvement program is focused on a process framework such as ISO or CMMI, it is common for it to be treated as nonessential—a luxury that is affordable only when the business climate is rosy. Even when the business climate blossoms, it can be difficult to fit in additional activities.

Figure 1 - The goal-problem approach to improvement

An alternative approach is to focus on the organization’s business goals and problems and to tie improvement activities directly to current service-specific work. With this approach, improvement focuses on the real issues of the organization, and each change is driven by a specific need. The scope of the improvement program is not defined by an improvement framework, but rather by the organization’s goals and problems (Figure 1).

The goal-problem approach summarized in this article keeps the organization focused on compelling issues that people want to fix now. The improvement plan centers on the organization’s challenges, with small actions continuously taken to move the organization toward its goals. Improvement frameworks are adopted fully, but in small pieces, and each piece is fitted to a service-related problem or goal. Progress is measured by improved organizational results.

The steps below illustrate the goal-problem approach for planning an improvement program. In this tidbit I cover just steps 2 and 7, with examples from a typical service organization. The remaining steps are described in the first reference [*].

Scope the Improvement

  1. Establish plan ownership.
  2. State the major goals and problems.
  3. Group the problems related to each goal.
  4. Ensure that the goals and problems are crystal clear and compelling.
  5. Set goal priorities.
  6. Derive metrics for the goals.

Develop an Action Plan

  7. Enumerate actions using brainstorming and a process framework.
  8. Organize the action plan based on the goals and problems.
  9. Add placeholders for checking progress and taking corrective action.


Step 2. State the major goals and problems

The goals and issues your organization faces today can define the scope of the improvement program. Example goals (desired state) and related problems (current state) are shown in figure 2.

Some organizations have a need to become appraised at CMMI Level 3 sometime during their contract. If this is your situation, the magic is to realize that the practices of the CMMI (or the framework of your choice) can be immediately used to address many of the challenges you have now. That is, CMMI implementation and work are treated as related activities; work creates goals and challenges, and CMMI practices are fed in the order needed (and to the right level of depth) to address these issues. Implementing CMMI is not about creating paperwork; it is about smoothing out the bumps in your business.

If your organization is not using a published framework, then the practices in CMMI mentioned here can be treated as free optional advice.


Step 7. Enumerate actions using brainstorming and a process framework

Figure 2 also shows example improvement actions that can be used to address the problems. These actions are simple versions of practices from CMMI, each followed by a reference to the full definition of the practice in the CMMI document**.

The improvement actions shown are not exhaustive. Organizations are encouraged and expected to add their own expertise to build upon the core practices in the framework (such as ISO or CMMI).

Goal / Problem / Improvement Action (Framework Reference [**]):

Goal 1: Reduce customer complaints, rework and errors by 20% for existing services

  • Problem: Proposal errors
    • Perform work product checks for errors (PPQA SP 1.2) [See page 8]

  • Problem: Service delivery errors
    • Perform service delivery preparation, readiness check and implement a request tracking system (SD SG 2) [See page 5]
    • Perform service delivery maintenance and repair (improvement) activities (SD SP 3.3) [See page 5]
    • Perform service delivery process checks (SD GP 2.9) [See page 5]

  • Problem: Poor handling of complaints from customers (losing complaints, not returning calls)
    • Establish and maintain an incident management system for processing and tracking incidents (IRP SG 1) [See page 11]
    • Identify, control, and address individual incidents (IRP SG 2) [See page 11]

  • Problem: Billing errors
    • Perform work product checks for errors (PPQA SP 1.2) [See page 8]

Goal 2: Prepare to execute a newly awarded contract

  • Problem: Lack of staffing for the new contract
    • Look at previous work to see what effort was needed (SD SP 1.1) [See page 5]
    • Estimate effort and cost for current work (WP SP 1.5) [See page 3]
    • Plan for resources to perform the work, e.g., people, materials, space (WP SP 2.4) [See page 3]
    • Reconcile available and estimated resources (WP SP 3.2) [See page 3]

  • Problem: Inadequate skills for new staff
    • Plan for knowledge and skills needed to perform the work (WP SP 2.5) [See page 3]

  • Problem: No document management system for new customer records
    • Plan for the management of data, e.g., customer requests and records (WP SP 2.3) [See page 3]
    • Identify what documents and data need to be organized, versioned and backed up (CM SP 1.1) [See page 6]
    • Set up a system (manual or computer) for managing documents and data (CM SP 1.2) [See page 6]

  • Problem: Services scope not well defined
    • Establish and review the scope of services with the customer and obtain commitment (REQM SP 1.1, 1.2; SD SP 1.2) [See pages 2, 5]

  • Problem: Potential problems with suppliers (unreliable and poor quality)
    • Assess and mitigate supplier risks, i.e., what could go wrong (WP SP 2.2; SAM SP 1.2) [See pages 3, 9]
    • Establish and track supplier agreements (SAM SP 1.3, 2.1, 2.2) [See page 9]
    • Escalate issues to senior management for resolution (SAM GP 2.10) [See page 9]

  • Problem: Inadequate physical space for new staff and computer equipment
    • Plan for resources to perform the work (WP SP 2.4) [See page 3]
    • Establish a work environment based on organization standards, or develop an organization standard (IWM SP 1.3) [See page 15]

  • Problem: Unprepared for spikes in customer demand over the duration of the contract
    • Track actual performance, e.g., effort, deliverables, cost, and take corrective actions (WMC SP 1.1, 2.2) [See page 4]
    • Analyze capacity and availability to ensure that demand can be met and resources are utilized (CAM SG 1, SG 2) [See page 12]

  • Problem: No backup plan to maintain the service in the event of staff and weather disruptions
    • Determine which services are essential and need continuity planning; establish continuity plans; pilot and evaluate the backup system before it is needed (SCON SG 1, 2, 3) [See page 13]

Goal 3: Move existing services to the cloud and provide customers with mobile phone/tablet access

  • Problem: No expertise in mobile platforms
    • Plan for knowledge and skills needed to perform the current work (WP SP 2.5) [See page 3]
    • Systematically assess longer-term training needs and plan for longer-term skill improvement (OT SG 1, SG 2) [See page 20]

  • Problem: No plan to manage the synchronization of customer cloud data and existing data
    • Elicit the requirements for cloud data synchronization; design the system (or delegate the design); verify and validate the system (SSD, all goals) [See page 21]

  • Problem: Potential quality problems with the cloud software contractor
    • Assess and mitigate supplier risks, i.e., what could go wrong (WP SP 2.2) [See page 3]
    • Establish and track supplier agreements (SAM SP 1.3, 2.1, 2.2) [See page 9]
    • Escalate issues to senior management for resolution (SAM GP 2.10) [See page 9]

Figure 2 – An organization's goals, problems, and improvement actions

 

References and Notes

* Making Process Improvement Work for Service Organizations, Neil Potter and Mary Sakry (http://www.processgroup.com/book.html)

[Note: CMMI for Services is targeted at organizations such as sales, medical records, finance, contracts, IT development, and support.]

** A full list of the practices referenced above is defined at: http://www.processgroup.com/condensed-cmmi1p3-svc-v1.pdf

Further complimentary resources are at:
http://www.processgroup.com/services19cmmiw.html


 


Tidbit #14: When Scrum Uncovers Stinky Issues and Then Gets Blamed

Introduction
Scrum is a simple and useful approach for managing software development projects. When performed correctly, it breaks work into manageable pieces and assesses technical risk. Some teams, however, run into trouble very quickly because Scrum is blamed when it uncovers stinky issues. In many cases Scrum is highlighting an existing problem, not causing it.

Example problems that are uncovered are:

  • Conflict among different divisions in the company regarding the product vision and target customer
  • The product being developed is turning out to be much harder than expected
  • The use of a new technology is not economically feasible or reliable
  • The team lacks domain knowledge
  • The project team’s rate of progress is not enough to complete the project on time
  • Suppliers are unqualified and cannot deliver their commitments
  • Work is declared “done” whether it is done or not
  • The Scrum team is not trained or allowed to implement Scrum correctly

Scrum can find many of these issues early because real work is required to be performed in each sprint. Real work performed in early sprints can uncover issues at a point in the project when there is nothing else yet to blame. When these issues are embarrassing, or no one wants to address them, blaming Scrum can be the easiest option.

All of the example problems above are ones that a “traditional” (non-Scrum) team would likely have found if they were performing risk management and prototyping to mitigate the highest risk areas. Scrum finds similar issues early because each sprint includes some development activity where issues become apparent.

In life cycles that are not well implemented, or ones where no risk management or prototyping are performed, stinky issues can be hidden for long periods of time or conveniently mixed with other issues that might have arisen.

For example, a lack of domain expertise among developers can be nicely mixed in with “no time for writing test cases,” “too many meetings caused by Scrum to learn the domain,” “rushed design reviews,” and “the requirements changed too many times.” The longer a problem is left unaddressed, the more project issues there are to blame. No direct or logical link is needed between the item blamed and the presumed cause.

Not every Scrum implementation is perfect, and it cannot be assumed that Scrum is always innocent. There are many Scrum teams that have not learned Scrum very well or are missing some of the additional skills required to make Scrum successful. Examples are release planning, requirements elicitation, requirements writing, design, test planning, test-case writing, configuration management, and domain knowledge. Just performing Scrum without these practices highlights another stinky issue.

What to do about it
In the case where Scrum finds a stinky issue, be gentle!  Whoever has been cornered didn't expect it, and this level of accountability is very uncomfortable. Consider the following options:

  • Determine if your Scrum implementation helped cause this problem or just found it.

Example 1: The selected supplier has been asked to develop a design and provide an early version of a simple database. Usually suppliers are given six months to design and code the database. This time the Scrum team asks to see a simple version to run data migration tests in the first iteration. The supplier provides data and interfaces in the wrong format and does not understand the requirement. After further investigation, it is found that the supplier was the absolute cheapest of all the ones considered. In this example, Scrum found a stinky issue very early but did not cause it.

The team develops screen shots of a new system and shows them to the customer. The customer states that all the information is incorrect and that the team has no understanding of the domain. Management gives the task to another team to “demonstrate” the company’s expertise. The customer makes similar comments. The Scrum process is blamed for having too many meetings and focusing on academic sprint goals rather than technical customer issues.

In this case there are two stinky issues. First, the Scrum team could have recognized a skill problem when working on the requirements backlog. Second, since other teams had the same problem, the larger stinky issue is how management hires, trains and maintains the domain knowledge of the staff for any project.

Example 2: The team generates a list of user stories. A few of them are clear; the remaining user stories are at best placeholders for investigation. All of them are considered high-priority and are allocated to the release with an established deadline. The Scrum team assumed that it would be able to de-scope functionality at any time or move the deadline back. Marketing and management assumed that a commitment was a commitment and that Scrum had magical powers to deliver “fast” and on time. In this case the implementation of Scrum was blamed, and this was probably a good call.

There are many things the team could have done to assess and mitigate their risks, for example:

  1. Only allowed cleaned-up user stories to enter the release
  2. Included domain experts and testers in backlog and sprint reviews to check the accuracy of implementation
  3. Revisited priorities – it is unlikely that everything has exactly the same (high) priority
  4. Determined a possible incremental delivery schedule to deliver lower-priority functionality later
  5. Provided most-likely and least-likely release dates, updated at each sprint using velocity data
  6. Assessed and mitigated risks, and communicated them to management until the message was recognized
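Item 5 in the list above, deriving most-likely and least-likely release dates from velocity data, can be sketched as follows. This is a hypothetical illustration; the backlog size and velocity numbers are invented:

```python
# Hypothetical sketch: project a release-date range from sprint velocity.
# The backlog size and velocities below are invented example numbers.
import math

remaining_points = 120          # story points left in the release backlog
velocities = [18, 22, 15, 20]   # points completed in recent sprints
sprint_length_weeks = 2

best_velocity = max(velocities)   # optimistic assumption
worst_velocity = min(velocities)  # pessimistic assumption

# Most-likely and least-likely sprint counts, rounded up to whole sprints.
sprints_best = math.ceil(remaining_points / best_velocity)
sprints_worst = math.ceil(remaining_points / worst_velocity)

print(f"Best case:  {sprints_best} sprints "
      f"({sprints_best * sprint_length_weeks} weeks)")
print(f"Worst case: {sprints_worst} sprints "
      f"({sprints_worst * sprint_length_weeks} weeks)")
```

Recomputing this at the end of every sprint turns the deadline conversation into a data conversation, which is exactly the visibility item 5 asks for.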

There is no guarantee as to what exactly management will respond to, but almost anything is better than asking for a huge delay just before the deadline.

A second stinky issue is that management assumed Scrum would fix their software delivery problem and that there was no need to pay attention to the clarity of incoming requirements.

  • Have the Scrum Master fix it

The Scrum Master’s job is to address impediments.  This might involve the help of other teams and managers in the organization. If an issue is particularly stinky, the Scrum Master might escalate the issue to get additional visibility and assistance. Tread carefully; you might be seen as a troublemaker (a messenger to be shot). Bring solutions, not just problems.

  • Suggest a solution to the identified problem

If Scrum is highlighting the problem, the responsible party might need help. Stinky issues such as the coordination of multiple project teams around the globe, large numbers of defects entering final test, and the lack of domain knowledge within teams might have gone unsolved for years. This might imply that management is open to constructive ideas.

  • Escalate to senior management

If it is a long-term, systemic or critical problem, it might be time to communicate the issue to management. Sometimes management simply has no idea that the issue exists and is capable of solving it. Sometimes they are aware of the problem, but it was treated as background noise until now. Sometimes they don't know what to do and have decided to ignore it. Mention the problem, don't give names if that is safer (let them investigate for themselves), and let them take the level of action they think appropriate.

  • Work around the issue

Some organizations never address stinky issues. Be prepared for no action and develop a work-around plan. That might mean that your team has to track program dependencies, work with customers and marketing on commitments, or train suppliers in how they should do their job.

Summary
Look out for situations where Scrum (or any process) is blamed because it found a stinky issue. Break down the problem into the parts the team can address, and the parts that are underlying or systemic. Lead the way and take action rather than engage in lengthy blame battles.

Further complimentary Scrum articles are at processgroup.com/newsletter.html#scrum
Further education in Scrum is at processgroup.com/services18asdcs.html

Questions, comments? Contact us.


 

Tidbit #15: Improving Product/System/IT Development with CMMI® for Development* - Quick Look

The CMMI-DEV model is a collection of practices aimed at organizations that develop products, systems and IT solutions. The model is organized into five levels, each level defining more advanced practices to improve schedule, budget, risk and quality performance. The levels provide a road map for sustained incremental improvement.

Example development organizations that use CMMI-DEV are:

  • Software application / embedded
  • IT solutions
  • Hardware (electrical, mechanical, optical, electronic)
  • Systems engineering (large software or hardware system or a software/hardware combination)

When and why to use CMMI-DEV?
The purpose of using any framework is to improve the performance of an organization. For a group that develops products, this can include: improving schedule and budget accuracy, managing risks, eliciting requirements, defining designs, implementation, finding defects early, reducing rework and sharing best practices across the organization.

 

CMMI practices can be implemented within any life cycle or methodology (such as Scrum, Waterfall, Spiral or Incremental).

 

The benefits of using CMMI Level 2 and 3 practices are to:

  • Clarify customer requirements early
  • Scope, plan, estimate and negotiate work to manage expectations and achieve commitments
  • Track progress to know project status at any time
  • Maintain defined quality standards throughout the organization and report strengths and problems to management
  • Manage versions of documents and code so that time is not wasted using incorrect versions or recreating lost versions
  • Manage and coordinate multiple teams that have cross dependencies
  • Employ engineering practices for design, implementation, verification and testing to reduce defects
  • Use defect data to understand and manage work quality throughout the project
  • Collect lessons learned and project data to systematically improve future organizational performance


A summary of the CMMI-DEV model is provided below. The items marked "(dev)" are the Process Areas that are unique to the Development model. The other Process Areas are common to the Services model [see Tidbit #3].


Summary of CMMI-DEV (Staged Representation)

CMMI® for Development, Version 1.3

The Maturity Level 2 Process Areas are summarized below.

Configuration Management: Establish and maintain the integrity of work products using configuration identification (labelling), configuration control (known modifications and permission to modify), configuration status accounting (final status of work products), and configuration audits (checks to verify changes).

Measurement and Analysis: Develop and sustain a measurement capability that is used to support management information needs.

Project Planning: Establish and maintain plans (major tasks, estimates, stakeholders, risks and resources) for project work.

Project Monitoring and Control: Understand the group's progress so that appropriate corrective actions can be taken when performance deviates significantly from the plan.

Process and Product Quality Assurance: Provide staff and management with objective insight into processes and associated work products.

Requirements Management: a) Define requirements baselines for a project, b) manage changes so that technical and resource impact is assessed, c) trace requirements to related downstream work products so that test coverage of requirements can be performed and the impact of requirements changes assessed with more accuracy.

Supplier Agreement Management: Manage the acquisition of products and services from suppliers. This Process Area can be declared Not Applicable (after discussion with the appraiser) if there are no custom, risky, or integrated suppliers.

The Level 3 Process Areas are summarized below.

Requirements Development (dev): Elicit requirements, develop requirements from the information gathered, and analyze requirements for ambiguities and errors.

Technical Solution (dev): Select among design alternatives, perform design activities and implement the design.

Product Integration (dev): Plan and execute integration testing of components as they are completed, or when all components are complete. Check that interfaces are correct before spending time in system testing. Communicate interface changes to impacted areas.

Verification (dev): Perform peer reviews on selected documents and code to find errors early and quickly. Plan and execute component-level testing and analyze the results (e.g., defect density, defect pass rate, defect escape rate and root cause).

Validation (dev): Plan and execute testing focused on the end user's environment and needs. Analyze the results (e.g., defect density, pass rate, escape rate and root cause).

Organizational Process Focus: Coordinate all improvements. Take what is learned at the team level and organize and deploy this information across the organization. The result is that all teams improve faster from the positive and negative lessons of others.

Organizational Process Definition: Organize best practices and historical data into a useful and usable library.

Organizational Training: Assess, prioritize and deploy training across the organization, including domain-specific, technology and process skills needed to reduce errors and improve team efficiency.

Integrated Project Management: Perform project planning using company-defined best practices and tailoring guidelines. Use organizational historical data for estimation. Identify dependencies and stakeholders for coordination, and incorporate this information into a master schedule or overall project plan.

As project work progresses, coordinate all key stakeholders. Use thresholds to trigger corrective action (such as schedule and effort deviation metrics).

Risk Management: Assess and prioritize all types of risks in a project and develop mitigation actions for the highest priority ones. Start by considering a predefined list of common risks and use a method for setting priorities.

Decision Analysis and Resolution: Systematically select from alternative options using criteria, prioritization and an evaluation method.

* Information source = CMMI® for Development, Version 1.3

Condensed list of Level 2 and 3 practices: http://www.processgroup.com/condensed-cmmi1p3-dev-v1.pdf

Full model text: http://cmmiinstitute.com/assets/reports/10tr033.pdf


® CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.

 

 


 


Management Perspective

Tidbit #16

I Just Want to Deliver My System On Time -
I Don't Know Why We are Always Late and I Can't Pinpoint the Cause

I'm Drowning
If you are a manager of an organization developing products or IT solutions, you might feel like you are drowning in a sea of missed deadlines and emergency meetings.

How Did You Get to This Point?
Here is an example of how managers often arrive at this situation.

David is a senior manager who runs an organization developing custom systems of software and hardware. The CEO told David two years ago that revenue needed to increase 30% and that three new sales people were being added. David nodded to the CEO and immediately felt a bonus looming.

The sales group did indeed sell more systems, and initially for each sale they asked David, "Can you deliver this by date X?" David said, "Yes" because at that time his staff was glad for the work (and of course there was a CEO sales directive).

After 12 months, the sales group didn't see a need to ask David any more about commitments since he always nodded when asked, implying that his team could meet the schedule.

Sales volume increased, and David was given more deadlines. After some sloppy coding and testing, rework and customer questions increased, causing a 10% support tax on the engineering group.

For the past 6 months every sales request was labeled "Urgent." In response, David established daily sessions with the CEO and sales group to discuss the priorities for that day.

David and the sales group noticed that testing took 30% of the schedule, so they jointly decided to limit testing to 15%. More systems were delivered, but the support tax on the group increased from 10 to 20%.

Now, all projects are late by at least 6 months, and each month, fewer systems are delivered compared to the previous month.

What are the likely causes driving the deadline problem?

  • There is no mechanism in place to check that commitments can be met before they are made. When David says, "Sure, I can do that," the sales group hears, "It's free." Now, David does not insist on being asked.
  • David does not develop a plan, estimate or schedule for the project work on his team's plate. David and his team have no reliable data to know their capacity or the effort needed to fulfill each request.
  • As soon as quality is compromised, the support tax increases. Since there is less time for work, there is less time for quality. This is a downward spiral.

What to do next?
The situation can be fixed (and stay fixed) by some careful detective work to pinpoint the causes, and some carefully planned improvements.

Typical solutions include:

  • The installation of a simple system to continually align requests to capacity.
  • A reduction in defects from beginning to end of a project to reduce the support tax.
  • A reduction of the transition time lost when staff members move between numerous "urgent" projects in the same week. The "urgent" label indicates that priorities have not really been decided.
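The first solution, aligning requests to capacity, can start very simply: before each new "yes," compare committed effort plus the new request against available effort after the support tax. A hypothetical sketch, with invented numbers:

```python
# Hypothetical sketch: check a new request against remaining team capacity
# before committing to it. All numbers are invented for illustration.

def can_commit(request_hours, capacity_hours, committed_hours,
               support_tax=0.20):
    """Return True if the request fits after subtracting the support tax."""
    usable = capacity_hours * (1 - support_tax)  # hours left after support work
    return committed_hours + request_hours <= usable

# Team of 5, 4 weeks x 40 h each = 800 h capacity; 600 h already committed.
print(can_commit(100, 800, 600))  # 700 needed > 640 usable -> False
print(can_commit(30, 800, 600))   # 630 needed <= 640 usable -> True
```

Even a spreadsheet version of this check gives David a defensible answer other than "Sure," and makes the support tax visible as shrinking capacity rather than invisible overtime.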

If you have questions or comments about this article, or would like help with your deadline challenges, please Contact us.



Tidbit #17

Everyone Gives Me Estimates and Commitments, but Few are Reliable

Why Estimate?

Deadlines drive most decisions in business, either because they drive revenue or because they are one domino in a chain of events that drives revenue. Given that deadlines are important, adequate attention needs to be paid to estimation quality.

Fundamental steps can be followed to make estimates reliable, whether teams use techniques such as guessing, Planning Poker, Delphi, historical data, or a mathematical model.

Possible Causes of Unreliable Estimates

Here are four (of many) to consider:

  • People are unsure of how to estimate: Instead of robust estimates based on analysis, estimates and commitments are rough calendar predictions without factoring in resource availability, current commitments, dependencies or variations in scope. They are doing their best with the estimation knowledge they have, but it's not what you need to run a business.
  • People tell you what you want to hear: When you ask for an estimate (while reminding the project manager of the existing hard deadline), you might be just hearing back, "Sure, we can do that." This is not an estimate. It is a way to avoid confrontation or avoid looking weak in the moment.
  • A single estimate with no assumptions is provided even when project scope is variable: If the scope of the project has not been nailed down yet (and it may never be), then any single estimate, with no accompanying assumptions, is unreliable from the outset. For example, "The (undefined) data translation system will be finished June 15th at 4.05 PM."
  • Risks that might impact the estimate are not assessed or mitigated: Assuming the project will proceed with no problems immediately makes an estimate unreliable.

Fundamental Steps

There are some fundamental steps upon which all estimates can be based.

  • Educate people how to estimate so that they are able to provide good data. An estimate should at least include:
    • What is in scope (and what is out of scope)
    • Uninterrupted time needed
    • (Optionally) size estimates such as the number of story points or reports
    • The definition of the units
    • Project assumptions (conditions that must remain true for the estimate to be valid)
    • Estimate and date options, based on scope options, actual resource availability, and risks.
  • Remind people that when you ask for an estimate, you want more than a "Next month is possible" response. You want an estimate that is reliable and that contains effort, calendar time, and options.
  • Ask for an estimate range with assumptions when the project scope is clearly ambiguous:
    • The ambiguity of the project can be used to your advantage to generate options, e.g., scope A, B, C is 100 days; A, B is 50 days; A alone is 25 days; D is unknown and will be estimated after A is complete.
    • There are two choices with item D: work with the customer to define D the way they want it and build credibility, or provide a date, pretend it will work out, and lose credibility.
  • Ask for risks (potential problems) that might make the estimate unreliable.
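The scope options above (A, B, C is 100 days; A, B is 50; A is 25; D unknown) can be kept in a simple structure alongside the estimate, so that every commitment names the scope it covers. A minimal sketch using the article's example numbers:

```python
# Sketch: estimate options keyed by scope, using the example numbers
# from the text. D is deliberately left unknown until A is complete.
estimate_options = {
    ("A", "B", "C"): 100,  # days of uninterrupted effort
    ("A", "B"): 50,
    ("A",): 25,
    ("D",): None,          # unknown: estimate after A is complete
}

for scope, days in estimate_options.items():
    label = f"{days} days" if days is not None else "to be estimated"
    print(f"Scope {'+'.join(scope)}: {label}")
```

Presenting a table like this instead of a single date forces the scope conversation to happen before the commitment, not after.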

Your staff can estimate given some guidance and support, and they can use the data to manage their own work and priorities. When they do so, you will have less stress and fewer surprises, and you will be able to meet commitments.

If you have questions or comments about this article, or would like to discuss your estimation or other challenges, please contact us.

 



Tidbit #18

People Rise to the Standard Around Them - In Your Organization Too!

Introduction

Have you long wanted to address specific challenges in your group and make, or sustain, a change? The way your group operates each day depends largely on the standards they see around them. As a leader (or even a member) of a group, there is a lot you can do to raise these norms. Consider the following points:

1: Changing the standard changes expectations and performance

Some questions to consider:

  • Do you know why there is no gum on the pavements of Singapore?
  • Do you know why some teams release almost no defects, and others release hundreds?
  • Do you know why you are quiet in a library?

The answer, common to all three questions, is that the standard and expectations have been clearly set by the environment, and behaving otherwise would stand out as unusual or unacceptable.

Sure, there are other factors such as posted rules and fines, but these rules are followed because breaking them would be personally embarrassing. The standard you set as the leader of your group is the standard everyone else sees and one they will eventually rise or fall to.

2: What standards are you setting for your group?

Here are some examples to chew on:

  • If you start your meetings late, people notice and might care less about starting their own meetings late. They will also notice that they don't have to be at your meeting on time.
  • If you permit poor quality work to go to the customer, your staff members will notice and might lower their standard to yours. Of course, they won't tell you that more poor quality work went out; that will be your surprise later.
  • If you set deadlines without any thought to scope, estimates and risks, then that is the standard you are communicating to others to follow. Any deadline will do.
  • If you keep organizational goals a secret, or hide the status of a project because it is in chaos, then that is the standard that is now acceptable. The standard is, "Don't be crystal-clear about how the project is going, because no one else does that." Well, at least until it is so late that a huge slip is unavoidable.

The opposite is also true. Good things that you are doing are probably reflected by good things your people are doing. Either take some credit for that, or at least realize that you can maintain this strength by maintaining your leadership behavior.

3: Design the organization you want, by example

Slowly but surely you can address the norms of the group without too much fanfare or drama. For example:

  • If you want teams to collaborate and cooperate with each other, then raise the standard and demonstrate that.
  • If you want predictable deadlines, then state your expectations for the thinking (planning) and data that goes into a predictable deadline and don't accept anything else.
  • If you are tired of getting calls from the customer about poor quality and slow responsiveness, then first demonstrate that you are responsive and quality-focused in your own work. Second, explain to your team what you want them to do, otherwise you might get the standard they see, or the standard they think you want because they don't see anything else.

The bottom line:

  • You are getting what you expect and tolerate.
  • The people around you are, with few exceptions, following the standard you set.

Your next steps:

  • Pick one thing that you are frustrated with.
  • Determine what you might be doing to demonstrate that standard.
  • Change the way you behave; try the change on a small scale first.
  • Explain clearly what you expect from others and why (be nice). The why is important so people know your reasoning and context.
  • Demonstrate the new behavior for 60 days. Remember, you want to be seen as serious, otherwise you set a new standard of "flavor of the week," and you don't want that.

If you have questions or comments about this article, or would like to discuss your norm challenges, please contact us.

[Forward this email to your boss! Subject: Here's a cool trick for you] - quick link




Tidbit #19

My Organization Wants to be Agile! What is a Good Life Cycle and What Should We Consider?

Introduction

The discussion over which development life cycle works best has gone on for decades. Any life cycle can be tailored to give the results you want, or implemented poorly and generate inadequate results. The success of any life cycle or methodology comes down to the effort invested in refining it to meet the needs of your organization. This is not difficult and is well within your grasp.


Some life cycles are better out-of-the-box at managing project risk, technical risk, design risk, approvals or changes. Others are better at getting early user feedback, tracking project progress or coordinating teams.

The out-of-the-box features are unlikely to meet all of your challenges, hence the need for refinement. For example, Agile/Scrum is very simple and provides a way to chunk work into small increments, obtain end-user feedback early and manage commitments visibly. It needs to be modified to analyze requirements, develop designs, and perform system and end-to-end testing.


Here are three points to consider when leading your organization to select and use a life cycle:

1. Determine what you want out of a development life cycle

The purpose of any life cycle is to manage customer needs, time, money, risk and quality. These are the variables that you want to define when selecting or adjusting your life cycle. For example:

  • If you want to manage technical risk, add risk management and prototypes in the beginning
  • If you want to manage requirements and usability risks, add more customer interaction sessions
  • If you want to manage cost and schedule risks, add robust planning and estimation and an objective way to monitor effort expended and work complete
  • If you want speed, focus on information handoffs, creating short documents, and finding defects and risks early


2. Refine the life cycle -- don't settle for chronic issues

A life cycle is working when you can achieve project goals and not suffer severe problems along the way. Problems will always occur, but the life cycle should help your teams identify and avoid many of them, allowing the organization to focus on the harder problems that can't be foreseen.

If you don't pay attention, you might see:

  • Mounds of unused documentation
  • Numerous additional requirements and technical issues discovered late in the project
  • Wildly inaccurate estimates
  • Teams treading on each other's toes with little coordination

3. Keep it concise

Whatever life cycle you have now, or the one you create, describe it concisely. If your life cycle has 10 significant steps, start with five pages as the maximum size of the description: one half-page per step. Anything longer will probably not be read, so consider not writing it!

If you have questions or comments about this article, or would like to discuss your life cycle selection and refinement challenges, please contact us.

[Forward this email to your boss! Subject: Here's a cool tip for you] Quick Link




Tidbit #20

You're a Leader — Don't Put Up With Status Quo, Lead the Way Forward!

Introduction

Do you repeatedly have any of these challenges in your organization?

  • Rework and bug fixes consume your resources
  • Customers complain about your products
  • Projects are chronically late
  • There are major surprises at the 11th hour

If your organization is not performing the way you want, why do you put up with it?

Is it because:

  • you are unsure of what to do to fix the situation long-term?
  • you are afraid of looking bad if the change doesn't work out?
  • you are trying to fix the problems yourself, but really have no time?

If you answered "Yes" to at least one of these questions, read on; you can do it!

1. You are unsure of what to do

All you need to do is recognize that there are problems and be able to describe them. Then your job is to lead the organization to the goal, not necessarily to come up with all the ideas for how to get there.

In fact, it is to your benefit to stay on the sideline so that your people come up with workable solutions they can live with. They just need you to lead them. So the great news is, you never have to admit that you don't know what to do!

Steps you can take

a. Enumerate the challenges you care about
b. Set priorities; if you can only pick three things to fix in the next three months, what are they?
c. Poll the organization to determine who:

  • has an interest in working on one of the items
  • might have solved it elsewhere before (not necessary, just desirable)
  • can stay on task and not build the most complicated solution ever seen by mankind

If no one bites, look externally. Your problems can be fixed.

When you identify your small team, tell them the challenge and ask them for ideas to solve it. If the ideas look promising, give them the job.

2. You are afraid of looking bad if the change doesn't work out

If you are nervous about a bunch of people changing your organization in a way that you don't like, then add a few controls on the project to keep you in the loop. For example:

  • Only give the team 2-4 weeks to come up with 2-3 solution options so they don't go off the deep end
  • Check in with them every week to see where they are -- remind them of the goal and original challenge
  • As a team, select one of the solutions, try it for 2-4 weeks on one project and monitor the result

If it succeeds, great, collect lessons learned and plan on the next deployment. If it fails, you have not lost much time or money. Collect lessons learned and try again, and again, and again. Leaders lead and they don't give up.

Whatever happens, realize that you earn respect from your people because you acknowledged the problem, you were willing to take action, you were humble enough to admit that the first attempt did not work, and you were willing to support their next try. This is the hard work necessary to earn respect.

3. You are trying to fix the problems yourself, but really have no time

You will never have enough time to lead the organization, manage day-to-day events, and participate in fixing systemic problems. You do have time to set and explain the goals, delegate work, and check that progress is being made.

When you delegate and communicate, be very careful to communicate something that people can get motivated about, something in which they will personally see benefit. Confirm understanding by having the team members repeat back to you what they think you said.

Summary

Assume that the challenges you face are solvable, and that the people in your organization can solve them with a little guidance. Don't wait – you need to get on a different path now to arrive at a new destination soon.

If you would like help in taking the next step, please contact me for a complimentary 45-minute chat or just send an email.

[Forward this email to your boss! Subject: Here's a cool tip for you] Quick Link


© The Process Group