In today's world, data analytics is everywhere: helping us recognize patterns and trends, providing insights, and driving business decisions. Software engineers, cost analysts, and project and portfolio managers depend on that data, so ensuring it is valid and relevant is crucial.
Episode 2: Why Quantify Software Size? is part of our video series, Software Size Matters! Why Quantifying Software Size is Critical for Project Estimating, Tracking, and Performance Benchmarking, which explores how you not only can but should use software size measures at every stage of software project, program, or portfolio management. In this episode, we'll talk about how to keep your measurement process simple and useful, and show key software project performance insights gained from software size.
Watch Episode 2: Why Quantify Software Size?
In Episode 1 we said that not only do software project time, effort, and defects increase with software size, but that they increase in a nonlinear fashion. Understanding these relationships helps us accurately predict software development costs, duration, and defects.
Gathering and analyzing software project data isn't cost-free, so we need to make sure that measurement is simple, consistently defined, and useful.
For example, which team roles are included in the effort, planned or actual? Which activities are included in the planned or elapsed schedule or time to market?
Sticking to just five core metrics makes measurement efficient. Think of them as five dimensions of a software project:
Focus on high-level metrics shared by all software projects. Larry Putnam, Sr., the founder of QSM, was so passionate about this that he wrote a book, Five Core Metrics: The Intelligence Behind Successful Software Management. Check it out.
Do we really need to collect all this data? Most software organizations collect only a subset of the five core performance metrics. But without size, these metrics can't be meaningfully compared. Is time to market five months good or bad? How big a team do we need to deliver on schedule? Both depend on how much work is required to design, build, test, and deliver.
If your organization only records cost, staffing, effort, time, and maybe defects, how will you meaningfully compare performance or calculate capacity? Consider two projects:
| Metric  | Project A    | Project B    |
| ------- | ------------ | ------------ |
| Time    | 6 months     | 6 months     |
| Effort  | 10,000 hours | 10,000 hours |
| Defects | 500          | 500          |
On the surface, these two projects look the same. What can we reasonably deduce about them? Not much, because we have no way of quantifying the value delivered or the work required to create that value. It might be better to have one project take longer and use less effort, or to use more effort to reduce the time but likely produce more defects. Remember: software project staffing and effort alone don't tell us how much work was accomplished, only how many staff hours were assigned or charged to the project. If we add software size, some quantity of features or value constructed and delivered, we can calculate productivity and defect density, an important quality indicator.
* 50 Features
* 300 Features
* Assume “features” are roughly equivalent in “size” (same # of low-level programming/configuration steps required to implement an average feature)
For productivity and throughput, a ratio-based productivity using either time or effort per feature, we get 8.3 versus 50 features per month and 200 versus 33 hours per feature. For defect density, we get 10 versus 1.7 defects per feature. QSM's productivity measure, the Productivity Index (13.9 for Project A versus 19.5 for Project B), measures overall project efficiency and includes time in the calculation. Because it is a unitless number, it is comparable across many project and application types and organizations. Click here to learn more. An important point: every productivity measure relates resources (time, effort, or cost) to product units. If you ignore how much product you plan to build, it is impossible to calculate productivity.
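The ratio-based numbers above can be reproduced directly from the two sample projects' core metrics. A minimal sketch (using only the time, effort, defect, and size figures given in the text; the Productivity Index itself requires QSM's proprietary calculation and is not computed here):

```python
# Ratio-based productivity and defect density for the two sample projects.
# Values come straight from the article's comparison table and feature counts.
projects = {
    "Project A": {"features": 50, "months": 6, "hours": 10_000, "defects": 500},
    "Project B": {"features": 300, "months": 6, "hours": 10_000, "defects": 500},
}

for name, p in projects.items():
    features_per_month = p["features"] / p["months"]    # throughput
    hours_per_feature = p["hours"] / p["features"]      # effort-based productivity
    defects_per_feature = p["defects"] / p["features"]  # defect density
    print(f"{name}: {features_per_month:.1f} features/month, "
          f"{hours_per_feature:.0f} hours/feature, "
          f"{defects_per_feature:.1f} defects/feature")
```

Running this yields 8.3 versus 50.0 features per month, 200 versus 33 hours per feature, and 10.0 versus 1.7 defects per feature, matching the figures quoted above.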
Once we add size to the major management metrics, predictable and robust patterns emerge: as size increases on the X axis, left to right, the average value of each major management metric goes up. But these relationships aren't linear, which means simple rules of thumb can't be used to model this behavior. Variation at each size is greater for metrics like effort and defects, which are often influenced more by management strategy or time pressure than metrics like schedule.
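To see why linear rules of thumb fail, consider a simple power-law trend of the form metric = a * size^b. The coefficients below are purely illustrative assumptions for this sketch, not QSM's actual trend lines:

```python
# Illustrative power-law trends (made-up coefficients, NOT QSM trend data).
# An exponent b > 1 means the metric grows faster than size (e.g., effort);
# b < 1 means it grows more slowly (e.g., schedule).
def trend(size, a, b):
    return a * size ** b

for size in (50, 100, 200, 400):
    effort = trend(size, a=2.0, b=1.2)    # hypothetical superlinear growth
    schedule = trend(size, a=1.5, b=0.4)  # hypothetical sublinear growth
    print(f"size={size:3d}  effort={effort:7.1f}  schedule={schedule:.1f}")
```

Doubling size multiplies the hypothetical effort trend by 2^1.2 (about 2.3x) but the schedule trend by only 2^0.4 (about 1.3x), so a single "per-feature" multiplier can't describe both, and no linear rule fits either one.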
Knowing the average values for schedule, effort, defects, and productivity at various project sizes helps organizations validate estimates and benchmark completed projects.
To see how size supports estimate validation and post-delivery benchmarking, let's compare our two sample projects. Taking project size into account enables meaningful project comparisons and provides context. The shaded region indicates greater efficiency. Comparing our two software development projects against industry trend data, we can see that the 50-feature project used more effort and took more time than the industry averages for a project of the same size. Because both time and effort were above the industry average, productivity was lower than average.
Now let's look at the 300-feature project. It used less effort and delivered in less time than the industry average for a project of the same size. As a result, productivity was higher than average and defects were lower.
Takeaways: