Oct 30, 2004 (01:10 AM EDT)
Why It's Time for FP++

Read the Original Article at InformationWeek

If I can write 100 lines of code per hour but my coworker can only write 50, it seems obvious that I'm the more productive developer, right? Similarly, if I complete a project that was scoped at 250 man hours in only 200, am I just incredibly talented, or was the scoping conservative? These questions are hypothetical, but effective software development management requires confronting the ambiguities that make them hard to answer. Function points put us on the right track, but they still fall short in our current age of enterprise computing.

Everyone understands that effective management requires effective measurement. Just as CEOs require financial measurements to run their organizations, IT leaders need development measurements to run theirs; without them, our ability to identify and address issues is greatly diminished. With IT development projects failing at such a rate that they're either killed outright or kill the budget and timeline, the need for these measures has never been more profound.

Specifically, we need an effective means of quantifying software applications in a consistent set of units based on the development effort required to produce the intended functionality. This concept can then be extended to improve your organization's software development capabilities by examining the following information:

  • Size. An application's size is the base-level measurement, allowing comparison against other applications both within a particular organization and across the industry. Size information then drives the other metrics listed here.
  • Productivity. The number of work units completed divided by time. Productivity data answers questions such as: Which developers complete the most work in a fixed amount of time? What's the optimal project team configuration (with regard to skill set and experience) for projects of a certain size? Should consultants or in-house staff be used? Has a certain tool, framework, or methodology helped complete projects faster? (A sketch of this and the following ratios appears after this list.)
  • Quality. Quality can be determined by looking at ratios such as the number of defects per unit of application size, or application uptime by size. The capability of a quality assurance group can be examined by comparing the ratio of defects found before production to the application's size against the same ratio for defects found after production. (You obviously want the former to be the much larger ratio.)
  • Scoping and budgeting. Having a consistent means of measuring an application's size will greatly improve scoping and project estimation: an estimator can base a project's timeframe on past projects of similar size, or fall back on calculated productivity values if no such data is available. I would envision that organizations with a culture where both the business and the IT organization are accountable for the success of a software development initiative would allocate budgeted funds based on project size (and understand that additional funds will need to be allocated if the project's size changes).
  • Maintenance effectiveness and requirements. In addition to better understanding the scope of developing applications of a certain size, you can also measure an application's maintainability by looking at metrics such as maintenance hours (or dollars) divided by project size. (This assumes that the nature of the maintenance work is relatively consistent across commonly skilled resources: activities such as installation, log archiving, backups, auditing, manual processes, and so forth.) Furthermore, armed with data such as the annual maintenance cost per unit of application size, based on organization or industry metrics, you can accurately estimate what maintaining an application will require.
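
To make these ratios concrete, here's a minimal sketch in Python, assuming an application's size has already been quantified in some consistent unit (function points, say). The function names and figures are illustrative assumptions, not a prescribed implementation.

    # Minimal sketch of the metrics above. Assumes application size has
    # already been quantified in a consistent unit (e.g., function points).
    # All names and figures are illustrative.

    def productivity(size_units: float, hours: float) -> float:
        """Work units completed per hour."""
        return size_units / hours

    def estimate_hours(size_units: float, past_rate: float) -> float:
        """Scope a new project from its size and a historical productivity rate."""
        return size_units / past_rate

    def defect_density(defects: int, size_units: float) -> float:
        """Defects per unit of application size."""
        return defects / size_units

    def qa_effectiveness(pre_defects: int, post_defects: int,
                         size_units: float) -> float:
        """Preproduction defect ratio divided by the post-production ratio.
        A high value means QA catches most problems before release."""
        return (defect_density(pre_defects, size_units) /
                defect_density(post_defects, size_units))

    # Example: a 400-unit application built in 2,000 hours, with 60 defects
    # found before production and 12 found after go-live.
    rate = productivity(400, 2000)        # 0.2 units per hour
    print(estimate_hours(600, rate))      # a 600-unit project: 3,000 hours
    print(defect_density(12, 400))        # 0.03 post-production defects/unit
    print(qa_effectiveness(60, 12, 400))  # 5.0

Maintenance cost per unit of size follows the same pattern of division by size; the value of all these numbers comes from comparing them across projects and organizations, not from any single figure.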

The Challenges

However, unlike business financial measurements such as revenue or profit, devising a measure that effectively sizes applications is quite difficult. This is because of the myriad approaches you may take to develop and implement an application, general ambiguities in the requirements, and external influences such as partner systems, the organization's architecture, politics, and so on.

The commonplace measurements that attempt to address this need fall short for the following reasons:

  • Lines of code. In my opinion, this measurement tops the list of pointless items that IT managers track (with the possible exception of measuring the disk space requirements of your version-control system). Its flawed assumptions are that a piece of functionality will be programmed consistently regardless of the programmer or the program's complexity, and that each line of code is just as complex as every other. Even neophytes can see that these assumptions are ridiculous just by reading the first chapter of a "Teach Yourself How to Program" book for a few different programming languages. (The first sketch after this list illustrates the point.)
  • Man hours. Man hours is a very close second on the list of metrics with little value: it's difficult to accurately estimate the number of man hours for a software development project up front; the figure depends on the individual programmer, project team, and tools being used; and it assumes that tasks can be subdivided among teams of varying sizes. (Fred Brooks explains clearly why this isn't the case; see Resources.) Furthermore, the man hours statistic provides no way to derive the measurements listed previously: What does it mean to finish a 100-man-hour project in 80 man hours?
  • Relative hours. Because of the difficulty of quantifying an application, many IT managers simply measure the relative time allocation of their development teams: how many hours developers spend coding vs. designing vs. sitting in meetings. Although measuring relative hours has some merit for identifying and solving issues in an IT department (such as too many meetings), the data is rarely accurate (developers track their own time and are probably hesitant to admit they spent eight hours in one week reorganizing the contents of their hard drives), and it only looks at time, after the fact, which makes it impossible to derive many of the measurements useful in project scoping and estimation.
  • Code branches/code-level metrics. Some organizations will conduct automated analysis on their software for the purpose of collecting metrics. Items such as the number of branches through the code or code complexity values are calculated based on heuristics. This information is better than the man hours or lines of code measurements as it separates the size of an application's functionality from space (code) and time (man hours). However, in most cases, these metrics can only be collected after the application has been completed and aren't consistent across different technologies, especially technologies that can assist in building applications that aren't code based.
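
To see how shaky the lines-of-code assumptions are, consider a contrived Python illustration: two functionally identical routines whose line counts differ by a factor of three. The function names are hypothetical; the point is only that identical functionality yields wildly different counts.

    # Two functionally identical routines. By a lines-of-code measure,
    # the second programmer appears three times as "productive" as the
    # first, despite delivering exactly the same functionality.

    def flatten(lists):
        return [item for sub in lists for item in sub]

    def flatten_verbose(lists):
        result = []
        for sub in lists:
            for item in sub:
                result.append(item)
        return result

    print(flatten([[1, 2], [3]]))          # [1, 2, 3]
    print(flatten_verbose([[1, 2], [3]]))  # [1, 2, 3]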
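
As for code-level metrics, here's a rough idea of what such automated analysis computes: a simplified branch count over Python source using the standard ast module. This is only an approximation in the spirit of cyclomatic complexity, not a faithful implementation of any published metric.

    import ast
    import textwrap

    # Node types that introduce an additional path through the code.
    # This list is a simplifying assumption, not an exhaustive one.
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp)

    def branch_count(source: str) -> int:
        """Approximate the number of paths through a piece of Python source:
        one straight-line path, plus one per branching construct found."""
        tree = ast.parse(textwrap.dedent(source))
        return 1 + sum(isinstance(node, BRANCH_NODES)
                       for node in ast.walk(tree))

    example = '''
    def classify(n):
        if n < 0:
            return "negative"
        for i in range(n):
            if i % 2 == 0 and i > 2:
                print(i)
        return "done"
    '''
    print(branch_count(example))  # 1 + if + for + if + and = 5

Note that a count like this only works on source in one language; a report builder or workflow tool that generates applications without conventional code gives it nothing to measure, which is exactly the consistency problem described above.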