This is the first of a series of articles in which I am going to dig deep into the discipline of software engineering, and more specifically software measurement: why small entities can benefit from it, how these methods can be applied in practice in a cost-effective way to increase the profitability of software projects, and how managers can use them to make better offers to their clients.
Why effort and not cost?
The answer to this question is both semantic and practical. While cost is a monetary measure, specified in currency units, effort is a measure of the working time a project needs to be completed. Converting between the two introduces variables like the cost of labor, which includes salaries and other contributions the company must make on behalf of the worker, such as taxes. These are better left out of the estimation to increase its accuracy, as they can vary greatly depending on many factors outside the complexity of the project itself.
Effort is usually measured in man-time units, for example man-months, which represent the amount of work that one worker can do in one month. I would rather change the term to person-time units, as in person-months, but man-time is what is usually found in the literature.
The conversion between these two is not hard if done in an approximate way. To obtain that approximation, the average cost of labor of the people involved in the project is calculated and then multiplied by the effort. The average must be calculated over the same time period as the effort units. For example, if the effort estimation is given in person-hours, the average cost of labor for the project must likewise be expressed in currency per hour (dollars per hour, euros per hour, etc.).
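The conversion described above can be sketched in a few lines. This is a minimal illustration, not a prescribed method; the hourly rates and the effort figure are invented example values.

```python
# Sketch of the effort-to-cost conversion: average labor rate times effort,
# with both figures expressed over the same time unit (here: hours).

def average_hourly_rate(hourly_rates):
    """Average cost of labor across the people on the project."""
    return sum(hourly_rates) / len(hourly_rates)

def effort_to_cost(effort_person_hours, hourly_rates):
    """Cost = effort (person-hours) * average rate (currency per hour)."""
    return effort_person_hours * average_hourly_rate(hourly_rates)

rates = [30.0, 45.0, 60.0]            # euros per hour, three team members
effort = 480.0                        # estimated person-hours
print(effort_to_cost(effort, rates))  # 480 * 45.0 = 21600.0 euros
```

Note that the time unit is carried implicitly: if the estimate were in person-months, the rates would have to be monthly as well.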
Why function points?
Effort estimation methods that existed before function points used Lines of Code (LOC) as the way to measure the size of a software project. This abstraction posed several problems: it depends on the programming language chosen to implement the software, "meaningful lines" are difficult to count, and comment lines have no functionality in terms of execution yet are necessary and consume time. Function points were created as a way to obtain comparable measurements between different projects and avoid these problems.
The idea behind the function points
Function Point Analysis (FPA) was developed by Allan Albrecht of IBM, and first published in 1979. In 1984, the International Function Point Users Group (IFPUG) was formed in order to create and standardize the rules and promote the system usage.
The system is analyzed from two sides, always from the user's point of view: what is requested from the application, and what is received in return.
The Functional Size
The product of this analysis is the Unadjusted Function Point (UFP) count. It includes both data and transactional functions. Their exact definitions vary according to the version of the standard in use for the estimation, but we will define them loosely here for the sake of clarity.
Data functions are usually classified into Internal Logical Files (ILF), the tables or other data files that are modified by the application, and External Interface Files (EIF), which are only referenced by the application and never modified by it. In web applications these usually translate to database tables, but in other types of software they can be other kinds of files.
There are traditionally three categories of transactional functions. The first is External Inputs (EI), processes that incorporate data or control commands from outside the application into it. This data may alter an ILF or modify the state or behaviour of the system; an EI has meaning in terms of business logic and leaves the system in a consistent state when it finishes. The second is External Outputs (EO), which present data after calculating a derived value or updating an ILF. The last, External Inquiries (EQ), present data directly from an ILF or EIF without any calculation.
Calculation of the UFP
The method provides a matrix that assigns a number of Function Points (FP) to each type of function at different levels of complexity (traditionally three: low, average and high). For each function, the complexity is estimated and the corresponding value is read from the table. The sum of these values over all the functions is the UFP, which represents the functional size of the project.
The Value Adjustment Factor (VAF)
In order to adjust the functional size for the global complexity of the project, the VAF is calculated. Some versions of this methodology skip this adjustment step completely. The traditional version provides 14 General System Characteristics (GSCs), like "data communication" or "performance", each rated from 0 (low influence) to 5 (high influence). The ratings are added to obtain the Total Degree of Influence (TDI), which is then incorporated into the formula VAF = 0.65 + 0.01 × TDI.
The VAF is therefore a value that can range from 0.65 to 1.35, and it is multiplied by the UFP to obtain the Adjusted Function Points (AFP).
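The adjustment step is small enough to sketch directly: sum the 14 GSC ratings into the TDI, apply the standard formula VAF = 0.65 + 0.01 × TDI, and multiply by the UFP. The ratings below are invented for illustration.

```python
# Sketch of the VAF adjustment. With 14 ratings of 0-5 each, the TDI ranges
# from 0 to 70, so the VAF ranges from 0.65 to 1.35.

def value_adjustment_factor(gsc_ratings):
    """gsc_ratings: 14 integers in 0..5, one per General System Characteristic."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    tdi = sum(gsc_ratings)     # Total Degree of Influence
    return 0.65 + 0.01 * tdi

def adjusted_function_points(ufp, gsc_ratings):
    """AFP = UFP * VAF."""
    return ufp * value_adjustment_factor(gsc_ratings)

ratings = [3, 2, 4, 1, 0, 3, 2, 5, 1, 2, 3, 0, 2, 2]  # TDI = 30, VAF = 0.95
print(adjusted_function_points(26, ratings))           # 26 * 0.95 = 24.7
```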
From Function Points to Effort
There are different approaches to translating the AFP into effort. One of the most widely used was described by Capers Jones in 1996 and is called the First-Order Estimation Practice. Jones provides an empirically derived matrix that gives an exponent, called the First-Order Estimate Exponent, based on two factors: the general capability of the development team and the kind of software. Depending on the revision, these values fall in a range, usually between 0.33 and 0.44. They are then input into a formula that, given the AFP and said exponent, yields the effort in man-months.
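Following the description above, this last step can be sketched as raising the AFP to the chosen exponent. The exponent used here is an illustrative pick from the 0.33–0.44 range mentioned in the text, not a value taken from Jones's actual matrix.

```python
# Sketch of the first-order estimation step: AFP raised to an empirically
# derived exponent gives an estimate in months.

def first_order_estimate(afp, exponent):
    """Estimate obtained by raising the function point count to the exponent."""
    return afp ** exponent

estimate = first_order_estimate(24.7, 0.40)  # exponent chosen for illustration
print(round(estimate, 1))
```

A useful sanity check on the formula's shape: because the exponent is well below 1, doubling the functional size much less than doubles the estimate, reflecting the empirical origin of the matrix rather than a linear cost model.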
In the next articles of the series, these concepts will be explained individually and in more depth, covering the different variations that exist in this process and where to find deeper information.