
Software Process Assessments

No matter what your process looks like in detail, there should be a way to evaluate it and compare it to other processes. These may be previous versions of your own process, e.g. to measure improvements, or the processes of other companies, which can serve as a criterion to select suppliers. Especially big companies and institutions which have to rely heavily on external software suppliers and sub-contractors felt the need to set up methods to evaluate their suppliers. On the one hand they wanted good and reliable products, but on the other hand the technical quality of a product was only one aspect. It was also important for them to receive a "clean output". This clean output is characterized by a good project organization with realistic planning and scheduling, good documentation of the intermediate work steps, etc. In other words, a good process, including its intermediate work products and documents, was seen as at least as important as the product itself and eventually became a main criterion for a reliable product.

Therefore, over the past years attempts were made to define models and methods to measure the quality of software development processes. The first successful definition of such a model was the Capability Maturity Model (CMM), defined by the Software Engineering Institute (SEI) of Carnegie Mellon University. It was followed by the extended version CMMI, where the "I" stands for "Integrated". This was simply an attempt to set a wider focus, i.e. to look not only at software engineering but at complete system engineering. The basic idea is to define certain activities and key process areas which need to be present for a process to reach a certain maturity grade. They are the following:

[Figure: The CMM maturity levels with their key process areas (KPAs), grouped into Management, Organizational, and Engineering categories. Recoverable labels: Level 1 "Ad Hoc Processes"; Level 2 includes Requirements Management and Tracking & Oversight; Level 3 includes Training Program and Peer Reviews; Level 5 includes Defect Prevention.]

If you want to know which activities are required to pass an assessment for a certain Key Process Area (KPA), I recommend looking at the Self Evaluation Questionnaire on this site. Of course this will only give you a rough impression of what is involved in complying with the CMMI standard. Level 1 on the scale is an undefined "ad hoc" process. To reach the next level you need to fulfill at least the requirements for the named key process areas. The KPAs of each higher level are an add-on to those of the previous level. The evaluation of your processes is very strict: if you fail one KPA of level 2, for example, you are automatically downgraded to level 1. There are no intermediate grades, although I have seen company-internal assessments which allow for them. The background of the CMMI is that the US Department of Defense wanted a way to evaluate potential software suppliers for their ability to deliver software of good quality. Although the CMMI is widely used and was the only assessment model for many years, it was never adopted as an official standard.
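This staged rating logic can be sketched in a few lines of Python. The KPA names and the data layout below are illustrative choices of mine, not taken from the official CMMI appraisal method:

```python
# Illustrative sketch of the staged rating: an organization holds the
# highest maturity level for which it satisfies ALL key process areas
# of that level and of every level below it.
KPAS = {  # hypothetical, abbreviated selection of KPAs per level
    2: {"Requirements Management", "Project Tracking & Oversight"},
    3: {"Training Program", "Peer Reviews"},
    4: {"Quantitative Process Management"},
    5: {"Defect Prevention"},
}

def maturity_level(satisfied_kpas: set[str]) -> int:
    level = 1  # level 1 ("ad hoc") needs no KPAs
    for lvl in sorted(KPAS):
        if KPAS[lvl] <= satisfied_kpas:  # every KPA of this level passed
            level = lvl
        else:
            break  # one failed KPA caps the rating here
    return level
```

Note how the strictness described above falls out of the loop: satisfying only one of the two level-2 KPAs leaves you at level 1, and satisfying level-3 KPAs while missing a level-2 one does not help either.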

This is different with the so-called SPICE assessment model. It existed for quite some time as a technical report and was regarded as almost a standard; at the beginning of 2006 it became the official standard ISO/IEC 15504. This assessment model has a different setup and finer granularity. First of all, it differentiates between the grades (process attributes) and the various process areas, and different process areas can receive different grades. The capability level rating is as follows:

Level   | Process Maturity    | Process Attributes
--------|---------------------|------------------------------------------------
Level 5 | Optimizing Process  | Process innovation; Process optimization
Level 4 | Predictable Process | Process measurement; Process control
Level 3 | Established Process | Process definition; Process deployment
Level 2 | Managed Process     | Performance management; Work product management
Level 1 | Performed Process   | Process performance
Level 0 | Incomplete Process  | -

In order to reach a higher level, all attributes of the lower levels have to be fully fulfilled, and the attributes of the level you want to reach have to be at least largely fulfilled. The terms "largely", "fully" etc. are defined as follows:

  • N = Not achieved 0 to 15 % achievement
  • P = Partially achieved > 15 % to 50 % achievement
  • L = Largely achieved > 50 % to 85 % achievement
  • F = Fully achieved > 85 % to 100 % achievement
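These thresholds translate directly into code. The following Python sketch (the function name is my own, not part of the standard) maps an achievement percentage to the rating scale:

```python
def spice_rating(achievement: float) -> str:
    """Map a process attribute achievement percentage (0-100)
    to the ISO/IEC 15504 rating scale N/P/L/F."""
    if not 0 <= achievement <= 100:
        raise ValueError("achievement must be between 0 and 100")
    if achievement <= 15:
        return "N"   # Not achieved: 0 to 15 %
    if achievement <= 50:
        return "P"   # Partially achieved: > 15 % to 50 %
    if achievement <= 85:
        return "L"   # Largely achieved: > 50 % to 85 %
    return "F"       # Fully achieved: > 85 % to 100 %
```

Note the boundary handling: exactly 15 % is still "N", exactly 85 % is still "L", since the next band starts strictly above each threshold.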

Further, it has to be observed that for each of the process attributes, e.g. "Performance management", "Work product management", etc. (see table above), a number of so-called "generic practices" has to be present. Only for level 1 are generic practices not defined. To reach level 1 you have to fulfill the base practices defined for each sub-process and generate the related work products (see the example of the "Supply Process Group" below). That is, you have to perform the process by doing the related activities and generating the related outputs. This is the entry point and prerequisite for reaching the higher levels. A higher level additionally requires you to achieve a "fully" or "largely" rating on the generic practices. For each process attribute there are 3 to 6 generic practices. A generic practice for level 2 (managed) is, for example:

Plan and monitor the performance of the process to fulfill the identified objectives.
  • Plan(s) for the performance of the process are developed.
  • The process performance cycle is defined.
  • Key milestones for the performance of the process are established.
  • Estimates for process performance attributes are determined and maintained.
  • Process activities and tasks are defined.
  • Schedule is defined and aligned with the approach to performing the process.
  • Process work product reviews are planned.
  • The process is performed according to the plan(s).
  • Process performance is monitored to ensure planned results are achieved.

This grading system is then applied across all defined process categories, such as Engineering, Support, Management, etc. This eventually leads to a process profile and gives a very detailed picture of the areas where improvement is still needed. The mapping is visualized in the following picture:
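Putting the two rules together, the capability level of a single process can be derived from its attribute ratings: all attributes below the target level must be rated "F", and the attributes of the target level at least "L". A Python sketch, with the attribute grouping taken from the capability level table above (the function and variable names are my own):

```python
# Process attributes grouped by the capability level they belong to.
ATTRIBUTES = {
    1: ["Process performance"],
    2: ["Performance management", "Work product management"],
    3: ["Process definition", "Process deployment"],
    4: ["Process measurement", "Process control"],
    5: ["Process innovation", "Process optimization"],
}

def capability_level(ratings: dict[str, str]) -> int:
    """ratings maps an attribute name to its grade 'N'/'P'/'L'/'F'."""
    level = 0
    for lvl in sorted(ATTRIBUTES):
        attrs = ATTRIBUTES[lvl]
        at_least_largely = all(ratings.get(a) in ("L", "F") for a in attrs)
        fully = all(ratings.get(a) == "F" for a in attrs)
        if not at_least_largely:
            break          # this level is not reached at all
        level = lvl
        if not fully:
            break          # level credited, but cannot climb further
    return level

# A process profile is then simply this evaluation applied per process:
profile = {
    "ENG.5": capability_level({"Process performance": "F",
                               "Performance management": "F",
                               "Work product management": "L"}),
}
```

In the example profile, "ENG.5" reaches level 2: level 1 is fully achieved, and both level-2 attributes are at least largely achieved, but the "L" on work product management blocks any rating above 2.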

The ISO standard does not define which processes have to be present. The set of processes to be assessed is called a Process Reference Model (PRM), and it is up to the organization which performs the system or software development to select its own process reference model. Some organizations may be forced by legal constraints to follow certain process models; others have more freedom to choose. Definitions already exist for some industries: there is the "SPICE for SPACE" PRM, the Medical SPICE, and for general application the ISO/IEC 12207 standard. For the automotive industry, the "Automotive SPICE" PRM is defined by the Automotive SPICE User Group on behalf of the automotive industry. Their current main process category groups are:

  • The Acquisition process group
  • The Supply process group
  • The Engineering process group
  • The Supporting process group
  • The Management process group
  • The Process improvement process group
  • The Reuse process group

Within these groups you have further detailed sub-processes. The following selection of processes of the engineering process group may serve as an example:

  • ENG.1 Requirements elicitation
  • ENG.2 System requirements analysis
  • ENG.3 System architectural design
  • ENG.4 Software requirements analysis
  • ENG.5 Software design
  • ENG.6 Software construction
  • ENG.7 Software integration test
  • ENG.8 Software testing
  • ENG.9 System integration test
  • ENG.10 System testing

Of course these sub-processes have further details and exact descriptions of the purpose of the process and the expected outcome. Then there is a Process Assessment Model (PAM) which goes hand in hand with the described PRM. Usually it contains the items of the PRM but adds further details which are needed to evaluate processes. These details are mainly the so-called base practices and a list of expected work products, which spell out what a certain process should perform. The following screenshot from the Process Assessment Model document of the Automotive SPICE User Group may serve as an example:

Summarizing the subject, I would say that the CMMI and ISO/IEC 15504 assessment models look very different: the ISO/IEC 15504 is structured differently and sets different emphases. The main difference is that the CMMI is a so-called staged model, which allows classifying a project or organization by a single number. The ISO/IEC 15504 is a continuous model which does not give one "grade" for an organization, but grades for the individual processes. However, a good overall process will reach high grades in both assessment models, since both cover the main aspects of software and system development.

Which model matters for you will depend a lot on your main customers and which assessment model they prefer. In Europe and Australia this will most likely be the ISO/IEC 15504; for historical reasons, in North America and the rest of the world it will most likely be the CMMI model. Some big customers will send their own assessment teams to their suppliers to measure their process performance. Other customers will be content if you have an independent department in your organization which performs assessments, or if you employ external assessors to evaluate your process performance. Interestingly enough, I have observed that some big companies which enforce high assessment grades on their suppliers do not care too much about their own in-house processes, and would never reach an acceptable grade themselves.

In case you need any consulting to align your processes to these standards feel free to contact me. I can also do assessments for you, since I am an official ISO/IEC 15504 assessor.