Rapid Application Development: Project Risk Management

When planning a project you hope for the best, but there is always a chance that something unexpected will prevent this. Project risk management is about having the confidence of knowing what to do if the worst occurs, and what this will cost.

Project risk management consists of two key aspects: determining the risk, and designing counter measures.

Determining Risk

When determining risk there are three key aspects to consider:

  • The event causing the risk.
  • The likelihood of the event happening.
  • The impact on the plan if the event occurs.

Risk Events

When planning, it is impossible to determine every risk-causing event that may occur. Many things can happen on a project, and you could spend forever trying to anticipate them all. However, as will become clear, it is not important to consider them all: you need to consider a representative selection of events that could affect a project.

Risk Likelihood

Even when we have a set of risk events, we do not generally have accurate statistics about their occurrence, so it is unlikely that we can determine their exact likelihood. This is why a subjective scale of high, medium, or low is often used: it focuses attention on those events you feel are most likely to occur.

Risk Impact

The next step is to determine what impact a risk event may have on a project. This means determining what effect it will have on the scope, time, resource, or quality aspects of the project plan. The impact can vary from a minor annoyance through to a total catastrophe. Various scales can be used, but a similar three-point scale of high, medium, or low is often adopted.
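
To make the two three-point scales concrete, here is a minimal sketch of a risk register in Python. The events, ratings, and the multiplicative scoring rule are invented for illustration; they are not part of any formal method.

```python
# A minimal sketch of a risk register using the subjective three-point
# scales described above. The events and ratings are invented examples.

SCALE = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Combine the two three-point ratings into a single ranking score."""
    return SCALE[likelihood] * SCALE[impact]

risks = [
    {"event": "Key developer leaves", "likelihood": "medium", "impact": "high"},
    {"event": "Requirements change late", "likelihood": "high", "impact": "high"},
    {"event": "Office coffee machine fails", "likelihood": "high", "impact": "low"},
]

# Rank the register so the most serious events are considered first.
risks.sort(key=lambda r: risk_score(r["likelihood"], r["impact"]), reverse=True)
```

Any scoring rule would do; the point is simply that even a crude subjective scale gives a defensible ordering in which to tackle the risks.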

Why Do It?

The process described so far appears very subjective, liable to error, and will certainly not be complete. So why do it? This is a valid question for the many project plans that just determine risk and go no further. However, what we are really interested in is delivering the project in a way that satisfies the customer. This is why the counter measures part matters: it states what you will do, if a risk event occurs, to mitigate the damage.

Designing Counter Measures

Project Risk Mitigation

When designing counter measures, you will notice that the same counter measure can be employed to cover several different types of risk. What is more, those same counter measures will most likely prove useful against risks you have not anticipated. This is the power of risk planning: by planning how to deal with the worst that you can think of, you are also providing insurance against events you have not considered. You have therefore raised the likelihood that the project will be delivered successfully.

Generic Risk Counter Measures

Risk counter measures reduce the likelihood of a risk event happening, its impact, or both. There are various generic counter measures that can be used. As an example, suppose you wanted to cross a busy road in order to buy some fish and chips. Your options could be:

  • Eliminate the risk: Buy fish and frozen chips from a supermarket and cook them at home.
  • Cease the activity: Decide not to have fish and chips.
  • Reduce the likelihood: Carefully check the road and only cross when there is no traffic.
  • Reduce the impact: Wear body armour so that if you are hit then it will have less effect.
  • Early warning: Check the road to see if there is any traffic before crossing.
  • Avoid: Decide to have a pizza instead from a shop on your side of the road.
  • Share or transfer: Ring for a taxi to go and collect your fish and chips.
  • Accept risk: Just cross the road anyway.

Selecting Counter Measures

To select appropriate counter measures, work through your list of risk events and consider for each what you could do to mitigate the effect of it happening. Start with the high likelihood, high impact events and work down to the low likelihood, low impact events. For each, consider the generic counter measures list and see if any apply to the risk event in question. For example, in a test plan, if there is a risk of software being delivered to testing that does not even run, you may cease the testing activity; by documenting this in advance in the test plan, all stakeholders are aware of the possibility of this extreme action. Another event may be the loss of one of the testing team. This could be covered by:

  • Reducing the likelihood, by ensuring the team are well paid and have had a successful health check.
  • Reducing the impact, by having spare team members, or a call-off arrangement with a contract agency.
  • Providing early warning, by having a strong system of staff appraisal, and other measures to see if staff are likely to become unavailable.
  • Accepting the risk, by notifying the customer that if the event occurs then the scope, time scale, or quality standards will have to be adjusted and a meeting convened to discuss this.

There are other possibilities, but there are costs associated with all of them.
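
The selection step above can be recorded in a simple structure that ties each risk event to its chosen counter measures. The sketch below uses the two events from the text; the data layout and the check on it are an invented convention, not part of any formal method.

```python
# An illustrative sketch of recording chosen counter measures against
# risk events, using the generic categories from the fish-and-chips list.

GENERIC_COUNTER_MEASURES = {
    "eliminate the risk", "cease the activity", "reduce the likelihood",
    "reduce the impact", "early warning", "avoid", "share or transfer",
    "accept risk",
}

risk_events = [
    # (event, likelihood, impact, chosen counter measures)
    ("Delivered software does not run in testing", "medium", "high",
     ["cease the activity"]),
    ("Loss of a testing team member", "medium", "medium",
     ["reduce the likelihood", "reduce the impact",
      "early warning", "accept risk"]),
]

def undefined_measures(events):
    """Return any chosen measure that is not on the generic list."""
    return [m for *_, measures in events for m in measures
            if m not in GENERIC_COUNTER_MEASURES]
```

Keeping the plan in a checkable form like this makes it easy to confirm that every counter measure traces back to one of the generic categories.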

A Balanced Risk Management Plan

A balance must be struck between no planning and planning to the point of project paralysis. The key is to balance the cost of the planning process, along with the likely costs of counter measures, against the benefits to be delivered by the project. This can only be done in conjunction with the various stakeholders. If it is done at the start of a project it might even lead to the conclusion that the project is not worth pursuing. If the project is pursued, the risk plan enables a more accurate judgment to be made about the likely overall costs. A risk management plan makes you think about what it is you want to achieve, and what you are willing to pay for it. As a result you stand a far higher chance of achieving success.

Rapid Application Development: MoSCoW Prioritisation

To be successful, projects need to be properly prioritised, both in their requirements and in their main objectives. One mechanism is to use a number system, but this is flawed as it results in all elements becoming number one. A more useful method is to use a set of words that have meaning, such as the MoSCoW method. However, to be effective, prioritisation requires hard choices to be made.

Prioritisation of Requirements

An important factor for the success of any project is ensuring that the requirements are prioritised. In many cases this is not done, all but guaranteeing project failure. Sometimes it is the customer's fault, for wanting the entire system to be delivered now. Other times it is the project manager's fault, for not discussing the project with the customer. In either case, prayers for miracles are often required if the project is to have any chance of success. In my experience miracles rarely happen on projects.

However, prioritising is not an easy process, and even less so when done using a number system. The trouble with number systems is that it appears logical to give the features a priority of 1, 2, 3, etc. However, who wants their requirement to be a “2” or even a “3”? As a result all requirements become a “1”, which is useless. This can lead to having to resort to additional devices, such as “1*” and “1**” ratings, to try to sort out what is really important. Even these are subject to prioritisation drift – upwards.

Even more damaging with number systems is that features that will not be developed this time are left off the list, and are ultimately lost. This means that designers and developers are unaware of these future needs, and therefore cannot select solutions which will make it easier to accommodate them at a later date.

So effective prioritisation is important but how can it be done if number systems are not effective?

MoSCoW

A more successful method is to prioritise requirements by using words that have meaning. Several schemes exist but a method popularised by the DSDM community is the acronym MoSCoW. This stands for:
  • M – MUST have this.
  • S – SHOULD have this if at all possible.
  • C – COULD have this if it does not affect anything else.
  • W – WON'T have this time, but would like in the future.

The two lower-case “o”s are there just to make the acronym work. The importance of this method is that, when prioritising, the words mean something and can be used to discuss what is important.

The “Must” requirements are non-negotiable: if they are not delivered, the project is a failure. It is therefore in everybody's interest to agree what can be delivered and will be useful. Nice-to-have features are classified in the other categories of “Should” and “Could”.

“Must” requirements must form a coherent set. They cannot just be “cherry picked” from all the others. If they are, then by default all the other requirements automatically become “Must”, and the entire exercise is wasted.

Requirements marked as “Won’t” are potentially as important as the “Must” category. It is not immediately obvious why this is so, but it is one of the characteristics that makes MoSCoW such a powerful technique. Classifying something as “Won’t” acknowledges that it is important, but can be left for a future release. In fact a great deal of time might be spent in trying to produce a good “Won’t” list. This has three important effects:

  • Users do not have to fight to get something onto a requirements list.
  • Thinking about what will be required later affects what is asked for now.
  • The designers, seeing the future trend, can produce solutions that can accommodate these requirements in a future release.
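
The classification above can be expressed directly in code. This is a minimal, illustrative sketch; the requirement names are invented examples, not taken from any real project.

```python
# An illustrative sketch of MoSCoW-tagged requirements.
from enum import Enum

class MoSCoW(Enum):
    MUST = "Must have this"
    SHOULD = "Should have this if at all possible"
    COULD = "Could have this if it does not affect anything else"
    WONT = "Won't have this time, but would like in the future"

requirements = [
    ("User login", MoSCoW.MUST),
    ("Password reset by email", MoSCoW.SHOULD),
    ("Custom colour themes", MoSCoW.COULD),
    ("Mobile app", MoSCoW.WONT),
]

# The "Must" set is the non-negotiable minimum usable subset.
minimum_usable = [name for name, p in requirements if p is MoSCoW.MUST]

# "Won't" items are recorded rather than lost, so designers can
# choose solutions that accommodate them in a future release.
future_release = [name for name, p in requirements if p is MoSCoW.WONT]
```

Note that the “Won't” list is kept alongside everything else: it is part of the record, which is exactly the point made above.
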
Prioritising the Project Objectives

When a set of requirements has been prioritised, it can be compared against the other planning aspects of the project (scope, quality, timescale and resources), and a risk statement produced.

There is a general wish among managers to be able to decide when a project will be delivered, how much it will cost and what it will do. They then think they have removed all the degrees of freedom, and as they have made an assertion, reality will follow their thinking.

Reality will not, because they have left out two significant factors. The first is quality: the system may be delivered on time, but with appalling quality. It does what the requirements say, but it is not robust enough to be used by anybody, as one mistake will make it crash. The other factor is risk, which may be so high that project failure is guaranteed before the project starts.

One suggestion is to prioritise the four main factors of scope, quality, timescale and resources, and thus prioritise the key project objectives: decide which of them “Must” be delivered, which has the maximum flexibility and is defined as “Could”, with the other two factors between these as “Should”. This means that at least one factor can be allowed to slip, providing flexibility for setting a proper risk plan to ensure the essential factor is met. This is not losing control; it is acknowledging that building a piece of software is a trip into the unknown, and precautions need to be taken.

Implications of Prioritising the Project

Selecting the right prioritisation order is not easy. Any choice that is made has trade-offs.

If nearly all the requirements are prioritised as “Must”, then there is not much flexibility in the scope of the project. By definition the scope is the “Must” factor of the project, and a decision has to be taken either about which of the other factors will have flexibility, or which of the requirements will be downgraded from a “Must”.

However, many studies have shown that it is better if a project is delivered on time with few features than late with a full set of features. This can be likened to asking when the best time is to deliver Christmas crackers to shops: before or after Christmas? Timescale therefore competes to be the most important factor.

If quality is sacrificed then faults will occur in the software. One way around this is to train the users of the new system, so that they only use it in the proper fashion and know how to work around any bugs that are discovered. However, if it is an Internet system intended to be used by customers, then this cannot be done, and the faulty system may damage the organisation's reputation.

Finally all systems must be produced to a budget, and a business does not have unlimited resources to put into a project. Moreover the business case normally assumes a rate of return, which will be considerably reduced if the resources are increased significantly on a project. Therefore resources have a strong case for being the most important factor.

Regardless, you cannot “have it all and have it now”: a balanced and planned prioritisation of the factors must take place if a project is to have a chance of delivering business value. If it does not, the fifth factor of risk goes sky high, and failure ceases to be a risk and becomes inevitable.

DSDM: Technology Support

The need for technology support

The technology used to visualize what the developers are thinking, and to gain feedback from that visualization, is the basis of much of agile development. However, it is not the total answer. The technology support for a controlled process does not lie solely in the easy generation of analysis and design models, screens, and code. If the process is to be controlled, then strong emphasis should also be placed on automated support for the necessary controls. Controls are an overhead on productive work, albeit a necessary one. Savings in effort can be made by automating the control of the status of, and access to, work products, and by ensuring that they have been created correctly.

Agile developers would much rather spend their time creating the solution than controlling it, so it is the control activities that are likely to be squeezed out of their schedules when they are under pressure to deliver. Another area that does not enthrall developers is configuration management. However, configuration management is one of the elements of an agile support environment where more things are being produced and changed at a faster rate than in traditional methods, so the need for support in this area is fundamental. It should be easy for the developers to place their work under configuration management as soon as possible and as often as they should, without slowing down their development activities. Testing also looms large as something which developers see as a necessary evil, but which would be a much more productive activity with tool support. The list goes on.

DSDM Support Environments

DSDM has defined an agile tool ‘nirvana’: an environment which will support the whole process from feasibility to implementation (including aspects such as reverse engineering for legacy code) with all the necessary controls as automated as possible. It does not exist, and it is unlikely that any one tool vendor will offer the fully integrated set. Indeed, it is yet another cry for an IPSE (integrated project support environment), but one which is designed for DSDM projects. Such an environment requires integration at a number of levels:

  • presentation to provide a common ‘look and feel’ across all tools;
  • data so that all tools share the same repository;
  • control so that one tool can notify and/or initiate actions in other tools;
  • platform, in order to port the toolset from one platform to another.

Maybe such an environment will exist in the future, but in the meantime we have to be more realistic and look for tools that will make savings in time and effort without being too costly. If we focus on the cost side, several low-cost tools have been found to have a beneficial impact on effort. Low-cost tools for code and schema generation are available, as are tools for prototype generation. Both of these speed up development markedly compared with coding by hand. Another area where inexpensive tools can help is in the perennial headache of documentation. Automated support for creating documentation is readily available. Fortunately, many tools are self-documenting.

Testing Tools

One of the components of the DSDM support environment is testing tools. There are many varieties of testing tool available on the market and DSDM strongly advocates the use of tools in this area. Producing a tested system in a very short time can only be made easier with effective tools.

A very useful class of tools is capture and replay tools. These can lessen the need for documented test scripts: the quickest way to document tests is to record them as they are performed, and a great deal of developer time can be saved through this route. Not only does this eliminate the need for producing ‘paper’ scripts before testing takes place, but the recordings can be archived as evidence of what tests have taken place. Capture and replay tools are also extremely beneficial in building up regression test suites which can be left to run overnight while the developers have a well-earned rest.
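
The capture-and-replay idea can be boiled down to a toy sketch: interactions are recorded as (input, observed output) pairs against a known-good build, then replayed against a new build. The system and the recorded session below are invented stand-ins, not a real capture/replay tool's API.

```python
# A toy illustration of capture and replay as a regression test.

def system_under_test(x):
    return x * 2  # stand-in for the real application

# "Captured" during a manual test session against a known-good build.
recorded_session = [(1, 2), (5, 10), (0, 0)]

def replay(recording, system):
    """Re-run each recorded input; report any output that no longer matches."""
    return [(inp, expected, system(inp))
            for inp, expected in recording
            if system(inp) != expected]
```

An empty result means the new build still behaves as recorded; any entries are candidate regressions to investigate, which is exactly what an overnight regression run reports.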

Static code analyzers can relieve the effort in code inspection and lessen the need for third-party verification that the code is to the required standard.

If the testing toolset is to be really complete, then dynamic analysis tools will perform tests in the background while demonstrations of a part of the software are taking place. Dynamic analysis includes checking array bounds and detecting memory leakage, etc.; things that should be tested, but which may be difficult to fit into the tight schedule of a project.

Configuration Management Tools

DSDM asks a lot of configuration management. Everything that is produced (analysis models, designs, data, software, tests, test results, etc.) should be kept in step with everything else, so that it is relatively easy to move back to a known ‘build’ state whenever the development has gone down a blind alley. This requirement means that automated support is essential, and that the automated support should allow for the many versions in use; but this is rarely the case. In any case, given the diversity of products that are under development at any one time, it is probably asking too much to expect all the relevant tools being used in projects to be sufficiently integrated to keep every product in step. This means that a specialist configuration management tool should definitely be on the shopping list of an organization that is planning to take DSDM seriously. The ability to baseline everything on a daily basis is the ideal, so the tool should not incur too much of an overhead in its use. There are several excellent configuration management tools available which will do the job perfectly satisfactorily if the procedures governing their use are firmly in place.

Effective Tool Usage

Although there are excellent tools on the market, any tool is only as good as its users. Tools should not be relied upon as the whole answer: the developers should be confident that they know how to use them properly and that the tools are an asset rather than otherwise. Anyone purchasing a tool environment for agile development should think carefully before buying. It is possible in early DSDM projects to live with what you already have; indeed, it is probably preferable not to introduce too many new things at the same time. Once the developers are used to the process, they will soon see where tool support would be particularly beneficial in their working environment. If tool support is to be bought, the purchaser should read the chapter in the online manual that gives very strong guidance on the characteristics of tools for DSDM. Not least of these is usability. For some reason, software tools are often less usable than their counterparts in the business environment – maybe we just like to make things hard for ourselves.

Source

  • DSDM Consortium, edited by Jennifer Stapleton, 2003. DSDM: Business Focused Development. 2nd ed. Addison-Wesley. ISBN 0-321-11224-5.

Short Description: Nine (9) Principles of Dynamic Systems Development

Summary of Nine Principles of DSDM
  1. Active User Involvement is Imperative
    • DSDM – a user-centred approach
    • Active participation through lifecycle
  2. DSDM Teams must be Empowered to Make Decisions
    • DSDM team comprises developers and users
    • Decisions made as requirements refined or changed
    • No need for recourse to higher management
    • Rapid and informed decision-making
  3. The Focus is on Frequent Delivery of Products
    • Team produces agreed products throughout lifecycle
    • Team chooses best approach to achieve objectives
    • Ensures focus on delivery, not just activity
  4. Fitness for Business Purpose is the Essential Criterion for Acceptance of Deliverables
    • Build the right product before you build it right
    • Meeting business need is more important than technical perfection
  5. An Iterative and Incremental Approach is Necessary to Converge on an Accurate Business Solution
    • DSDM allows solutions to emerge incrementally
    • Developers make full use of user feedback
    • Partial solutions can be delivered to meet immediate needs
  6. All Changes During Development are Reversible
    • All products should be in a known state at all times
    • It should be possible to step backwards, where an approach does not work
    • The team should be willing to embrace change and not be defensive
  7. Requirements are Baselined at a High Level
    • Freezing and agreeing purpose and scope of system
    • Baseline at a level which allows detailed investigation of requirements at a later stage
  8. Testing is Integrated Throughout the Lifecycle
    • Not a separate activity at the end
    • System is tested and reviewed incrementally by developers and users
    • Testing evolves as prototypes mature
    • Aim is to find and fix errors as early as possible
  9. A Collaborative and Co-operative Approach between all Stakeholders is Essential
    • Everyone working together as a team
    • Shared goal of achieving the business objectives
    • Give and take on all sides
    • Involves all parties, not just core team

Source

  • University of Greenwich, London, UK

Nine (9) Principles of Dynamic Systems Development Method (DSDM)

Principle 1: Active user involvement is imperative.

DSDM’s strong focus on the business purpose of the system being developed requires that the ultimate users of the system be involved throughout the development project. This is because the system attributes that will make it fit for its purpose cannot be understood well enough in the project’s early stages to commit them to a detailed specification. Therefore, the only way to make appropriate detailed decisions and know that the evolving system is converging on the ideal of “fitness” is to fully involve the users throughout the project.

Principle 2: DSDM teams must be empowered to make decisions.

This principle does not give the team free rein to do whatever they wish. Rather, it advocates that the team be delegated the authority to make most of the day-to-day decisions as the project progresses. With active user involvement, such delegation can result in the team being able to move quickly and steadily toward system delivery. However, when a decision that must be made falls outside the team's authority (e.g., cost overruns), DSDM recognizes the importance of raising such decisions to the appropriate authority.

Principle 3: The focus is on frequent delivery of products.

This principle means that the project’s progress should be measured by the production of tangible products, rather than by mere activity. The phrase “delivery of products” does not refer only to the incremental delivery of a working system to an end user. Products in this sense include any sort of work product that may be produced as the project moves forward (e.g., a specification, a throwaway prototype, a design document); and delivery could be simply within the project team. DSDM requires that as the project moves forward, it must produce artifacts that prove that progress is being made.

Principle 4: Fitness for business purpose is the essential criterion for acceptance of deliverables.

This principle is the practical manifestation of DSDM’s belief that specifying detailed requirements upfront is not helpful. By placing fitness for purpose above satisfaction of requirements, and by involving users consistently, DSDM zeroes in on the end user as the only one who can say whether or not the system as it is evolving is acceptable.

Principle 5: Iterative and incremental development is necessary to converge on an accurate business solution.

In an environment where it is assumed that the project’s end result cannot be foreseen in great detail, incremental development is the best insurance against the project going terribly awry. Incremental development is essentially an exercise in trial and error, where each new increment is presented to the user who validates (or invalidates) the direction the team has taken. “Converge” is the key word in the principle. It is assumed that the most direct path to the end product is not likely to be known, so DSDM engages in constant checking and correcting of the path to bring the project to a satisfactory end as quickly as is reasonably possible.

Principle 6: All changes during development are reversible.

If we agree that the project is practicing trial and error, then we must expect that there will indeed be errors from time to time. This principle gives us permission to discard erroneous work when necessary. Surely, we will try to salvage the good from a mistake, but we must recognize that there will be times when the most efficient path is to discard some work and try again.

Principle 7: Requirements are baselined at a high level.

The first three words of this principle, “Requirements are baselined,” represent a departure of DSDM from some other Agile methods. DSDM recognizes the importance to the project of stability in scope and direction. By baselining (freezing) the requirements at some level, stakeholders establish a stable basis for the team's work. This does not mean that this baseline will not change; rather, it requires that serious deliberation precede any such change so that all stakeholders understand and agree to what would become the new requirements baseline. The last four words, “at a high level,” are the part of this principle that makes it agile. They leave the details of what the requirements mean to be worked out between the team and the users.

Principle 8: Testing is integrated throughout the life cycle.

Testing does not show up as a step in the DSDM life cycle because, like other Agile methods, DSDM promotes a strong quality-consciousness by all team members. Every task should include an appropriate verification or validation step like a review or test by a team member or user. This principle works together with principles 1, 4, and 5 to continually check the project’s progress toward its goal of a system fit for its business purpose.

Principle 9: A collaborative and cooperative approach between all stakeholders is essential.

This last principle is little more than the sum of the first eight. The only way that principles 1–8 can be applied successfully on a project is if all stakeholders accept DSDM and their roles as DSDM defines them. If any stakeholder does not agree (especially an influential stakeholder), then DSDM cannot work in that environment.

Overview of DSDM: Phases of the DSDM Life-cycle

Figure: the five phases of the DSDM life-cycle

Pre-project

The pre-project phase ensures that only the right projects are started and that they are set up correctly. Once it has been determined that a project is to go ahead, funding is available, etc., the initial project planning for the feasibility study is done. Then the project proper begins with the feasibility study.

The feasibility and business studies are done sequentially. They set the ground rules for the rest of the development, which is iterative and incremental, and therefore they must be completed before any further work is carried out on a given project.

Feasibility study

In this phase the problem is defined and the technical feasibility of the desired application is verified. Apart from these routine tasks, it is also checked whether the application is suitable for a Rapid Application Development (RAD) approach. Only if RAD is judged a suitable approach for the desired system does development continue.

Business study

In this phase the overall business study of the desired system is done. The business requirements are specified at a high level and the information requirements of the system are identified. Once this is done, the basic architectural framework of the desired system is prepared. The team researches the business aspects of the project:

  • Does it make good business sense?
  • Who are the participants and interested parties?
  • What is the best work plan? What is needed to:
    1. Build it
    2. Test it
    3. Deploy it
    4. Support it?
  • What technologies will we be using to build and deploy it?

The systems designed using Rapid Application Development (RAD) should be highly maintainable, as they are based on the incremental development process. The maintainability level of the system is also identified here so as to set the standards for quality control activities throughout the development process.

Functional Model Iteration

This is one of the two iterative phases of the life cycle. The main focus in this phase is on building the prototype iteratively and getting it reviewed by the users to bring out the requirements of the desired system. The prototype is improved through demonstration to the users, taking their feedback, and incorporating the changes. Prototyping follows these steps:

  • Investigate
  • Refine
  • Consolidate

This cycle is generally repeated two or three times until a part of the functional model is agreed upon. The end product of this phase is a functional model consisting of an analysis model and some software components containing the major functionality.

Design and Build Iteration

This phase stresses ensuring that the prototypes are satisfactorily and properly engineered to suit their operational environment. The software components designed during functional modelling are further refined until they achieve a satisfactory standard. The product of this phase is a tested system ready for implementation.

There is no clear line between these two phases, and there may be cases where some components have moved from functional modelling into design and build while others have not yet been started. As a result, the two phases may continue simultaneously.

Implementation

Implementation is the final development stage in this methodology. In this phase the users are trained and the system is actually put into the operational environment. At the end of this phase there are four possibilities, as depicted by the figure:

  • Everything was delivered as per the user demand, so no further development is required.
  • A new functional area was discovered, so development returns to the business study phase and the whole process is repeated.
  • A less essential part of the project was missed out due to time constraints, so development returns to the functional model iteration.
  • Some non-functional requirement was not satisfied, so development returns to the design and build iteration phase.

Dynamic System Development Method (DSDM) assumes that all previous steps may be revisited as part of its iterative approach. Therefore, the current step need be completed only enough to move to the next step, since it can be finished in a later iteration. The premise is that the business requirements will probably change anyway as understanding increases, so any further work would have been wasted.

Post-Project – Maintenance

After the product is created, maintenance will inevitably need to be performed. This maintenance is generally done in a cycle similar to the one used to develop the product.

In this approach, time is taken as a constraint: time and resources are fixed, while the requirements are allowed to change. This does not follow the fundamental assumption of making a perfect system the first time, but provides a usable and useful 80% of the desired system in 20% of the total development time. This approach has proved very useful under time constraints and varying requirements.


Dynamic System Development Method (DSDM)

What is DSDM?

DSDM is a framework based on best practice and lessons learnt since the early 1990s by DSDM Consortium members. It is another approach to system development which, as the name suggests, develops the system dynamically. It is tool-independent, in that it can be used with either a structured analysis and design approach or an object-oriented approach.

The Dynamic System Development Method (DSDM) is dynamic in that it is a Rapid Application Development method that uses incremental prototyping. DSDM is particularly useful for systems that must be developed in a short time span and where the requirements cannot be frozen at the start of application building. Whatever requirements are known at a given time are designed for, and the design is developed and incorporated into the system. In DSDM, the analysis, design and development phases can overlap: at any one time some people may be working on new requirements while others are developing parts of the system. In DSDM, requirements evolve with time.

DSDM focuses on delivery of the business solution, rather than just team activity. It takes steps to ensure the feasibility and business sense of a project before it is created. It stresses cooperation and collaboration between all interested parties. DSDM makes heavy use of prototyping to make sure interested parties have a clear picture of all aspects of the system.

Background of DSDM

Businesses are putting increasing pressure on their IT suppliers to deliver better systems, faster and cheaper. In today’s rapidly changing world, they are no longer able to wait years for a system to be provided: the business may have changed radically during the years of development. It is imperative to find a different way of building IT systems. The technology now available to developers allows for speedier production of systems, but the answer lies not only in the use of tools: the whole process needs to change. The classical waterfall life-cycle does not take full advantage of modern technology and does not facilitate the change that is inherent in all systems development. It has been around for about 40 years and is basically the solution to an old problem: that of not having a full understanding of the problem to be solved, and not having a coherent approach to solving it, before starting to code a solution.

The waterfall approach of a strict sequence of stages has been seen to be flawed for many years now. Several attempts have been made to move away from it, including Barry Boehm’s iterative style of development using a spiral model of planning, risk analysis, engineering, and customer evaluation. Though excellent, the spiral model did not achieve the penetration into IT practice that it deserved. The emergence in recent years of many ‘Agile’ methods proves the need for a different approach. While some, such as Extreme Programming, have gained wide acceptance, they do not cover all aspects of a project and can leave an organization confused as to how to integrate the many solutions on offer. This provides one explanation for the less than optimal acceptance of agility as the way forward by the majority of IT solution providers. Another could be that, until recently, there has not been sufficient pressure from their customers.

In the early 1990s, the IT industry had become increasingly aware of Rapid Application Development following James Martin’s book Rapid Application Development, which gave some excellent pointers as to how to make the concept work but did not provide the total solution. There were many tools on the market, but using them often meant buying the vendor’s process as well. The founding members of the DSDM Consortium saw this as a block to the growth of successful and fast solution delivery.

The Consortium was inaugurated in January 1994 with the aim of producing a public domain, commonly agreed method that would be tool-independent. Ed Holt, who chaired the Consortium for its first two years, said that every organization that bought a RAD tool really needed a new process. DSDM aims to provide that process for building and maintaining systems that meet tight time constraints in a controlled project environment. The Consortium had 17 founders who represented a mix of organizations that remains today: large IT vendors, smaller tool vendors, and user organizations of all sizes. The Consortium now has hundreds of members, with established regional consortia in North America, Benelux, Sweden, France and Denmark, and with interest growing in other countries such as Australia, India and China.

During 1994, the Consortium’s Technical Work Group put together the process and produced guidance material based on the experience and best practices of Consortium members. A few components of the framework were original ideas from experts in particular areas, but most of them were tried and tested; they had simply never been brought together as a cohesive approach.

Why is DSDM more rapid than the waterfall?

DSDM produces industrial-strength systems that meet users’ needs and are fully extendible and maintainable over long periods of time; they are not one-off or throwaway. In business terms, they are the exact peer of a good system developed by the waterfall approach, but take a lot less time to develop.

There are two main reasons. The first is that less is actually done. Much less time is spent briefing people and bringing them repeatedly up to speed, and little time is lost through task-switching by users or developers. Most importantly of all, only the features that are needed are actually developed.

The second reason is that problems, misunderstandings and false directions are identified and corrected early, avoiding the massive rewrites often required late in waterfall projects. This has a further benefit: the code developed under DSDM is consistent and of a piece, whereas waterfall code, by the end of the project, is often already patched and out of synchrony with its documentation. The result is that DSDM-delivered code is also easier to maintain.

When to use DSDM?

DSDM is not the panacea for all project ills that developers are often promised. There are classes of system to which the framework is most easily applied, and these are the areas on which an organization less experienced in agile development should focus to begin with, unless of course the pressure to deliver is so great that an ‘unsuitable’ project must be tackled before the organization is mature in its use of an agile approach, and of DSDM in particular. The framework has been used in a wide variety of projects in a diverse set of organizations, and it is difficult to say that it should never be used for a particular sort of application.

The main questions to ask when deciding on the appropriateness of a proposed system to DSDM development are:

  1. Is the functionality going to be reasonably visible at the user interface? The user interface includes reports as well as screens, or indeed any other way of showing the end-user what is happening inside the system. If users are to be involved throughout the development process, they must be able to verify through prototyping that the software is performing the right actions, without having to understand the technicalities of what is happening behind the user interface.
  2. Can you clearly identify all classes of end-users? It is essential to involve users within the project who can represent all of the potential end-user population. This has caused some concern when developing systems for widely disparate or geographically dispersed populations. The important thing is to ensure that you can obtain complete coverage of all relevant user views within the development team; otherwise there is a danger of driving the development in a skewed direction. Moreover, the review process of sending out documents to a wide user group for a matter of weeks is very often not feasible on a DSDM project, since such reviews limit the chances of delivering on time.
  3. Is the application computationally complex? This is possibly one of the hardest questions to answer: what is complex for one organization is simple for another, and a lot will depend on the building blocks available to the development team. The important thing is not to develop too much complex functionality from scratch. This question is closely linked to the first question about the visibility of functionality. For instance, if the system is performing complex actuarial calculations, this could render the project difficult for DSDM. On the other hand, if the calculation processes have been used in previous systems and are tried and tested, the actuaries will trust what is presented to them.
  4. Is the application potentially large? DSDM has been used, and is being used, to produce very large systems, but in every case it has been possible to develop the functionality in fairly discrete chunks. There are several DSDM projects in progress at the time of writing that have a development period of two to three years. This could be viewed as not being rapid development, but increments will be delivered at regular intervals rather than waiting until everything is complete before the system is put into operation. If the system is large and there is no possibility of incremental delivery, i.e. everything has to be delivered in a matter of months for the system to be useful, then it must be possible to break down the work for development by parallel teams.
  5. Is the project really time-constrained? It is all too easy for business management to say that a system must be delivered by a certain date when they don’t really mean it. This is potentially disastrous for the project: it means that while the developers are geared up to follow the DSDM guidance, end-user participation at all levels is not as forthcoming as it should be. At best this is frustrating. At worst, the project goes in the wrong direction because the drive from users is not there and developers start making assumptions about what is needed in order to keep active.
  6. Are the requirements flexible and only specified at a high level? This could be reworded as ‘Do you have a complete understanding of everything that must be delivered?’ Whatever the project, the answer is just about always ‘No!’, but for DSDM to work successfully the level of detailed understanding at the outset of the project should be lower than the norm. The use of prototyping with knowledgeable users to elicit requirements as you go is fundamental to the approach. If everything is understood and all the detailed requirements have been agreed and fixed before the software builders come on the scene, major benefits of DSDM will not be achieved, such as building the right system rather than what was originally requested. Also, if the requirements are inflexible, it will not be easy to negotiate what can be left out if the project deadline is approaching and a great deal of work remains to be done.


Advantages and Disadvantages of Rapid Application Development (RAD)

Advantages of RAD Software Development
  • The time required to develop the software is drastically reduced due to a reduced requirements analysis (business requirements documentation and software requirements specification) and planning stage.
  • All the software prototypes produced can be kept in a repository for future use. The re-usability of the components also enhances the speediness of the process of software development.
  • It is much easier for a project manager to be accurate in estimating project costs, which of course means that project cost controls are easier to implement and manage as well.
  • It is a big saver in terms of project budget and project time, due to the re-usability of the prototypes.
  • If a component is picked from the repository, it has already been tested and hence need not be tested again. This helps in saving the time required for testing.
  • The project requirements are collected in a dynamic manner. Every time a prototype is ready, requirements are studied and matched against it. Any additional requirements are then included in the next prototype built.
  • There is a strong and continuous participation of the project sponsor who keeps giving feedback in the whole process. Hence the end user satisfaction level is higher when the end result is produced.
  • It promotes better documentation through written test cases.
Disadvantages of RAD Software Development
  • This method may not be useful for large, unique or highly complex projects.
  • This method cannot be a success if the team is not sufficiently motivated or is unable to work cohesively together.
  • Success depends on the extremely high technical skills of the developers.
  • There are times when the team ignores necessary quality parameters such as consistency, reliability and standardization. This can make project quality management hard to implement during the project management life cycle.


General Characteristics of Rapid Application Development (RAD)

Rapid Application Development Methodology

Incremental development

An important element of the philosophy of RAD is the belief that not all of a system’s requirements can necessarily be identified and specified in advance. Some requirements will only emerge when the users see and experience the system in use; others, particularly complex ones, may not emerge even then. Requirements are never seen as complete but evolve and change over time with changing circumstances. Therefore, trying to specify a system fully in advance is not only a waste of time but often impossible, so why attempt it? RAD starts with a high-level, rather imprecise list of requirements, which are refined and changed during the process, typically using toolsets. RAD identifies the easy, obvious requirements and, in conjunction with the 80/20 rule, uses just these as the starting point for a development, recognizing that future iterations and timeboxes (see below) will be able to handle the evolving requirements over time. Hough (1993) suggests using the technique of functional decomposition, with each function identified and its requirements listed, but, he says,

the precise design specifications, technical issues, and other concerns should be deferred until the function is actually to be developed

Time-boxing

The system to be developed is divided up into a number of components or timeboxes that are developed separately. The most important requirements, and those with the largest potential benefit, are developed first and delivered as quickly as possible in the first timebox. Some argue that no single component should take more than 90 days to develop, while others suggest a maximum of six months. Whichever timebox period is chosen, the point is that it is quick compared with the more traditional systems development timescale.

Systems development is sometimes argued to have three key elements: time, resources, and functionality. In traditional development two of these are typically variable: time and resources (see Figure 1.1). When traditional projects are in difficulty, either the delivery time is extended, or more resources are allocated, or both, but the functionality is treated as fixed. In RAD the opposite applies: resources and time are regarded as fixed (allocating more resources is viewed as counterproductive, although this does sometimes happen), which leaves only functionality as a variable. So, under pressure and when projects are in difficulty, time and resources remain constant but the functionality is reduced.
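This trade-off can be sketched in code: when the estimated effort exceeds the fixed timebox, functionality is dropped (lowest priority first) rather than extending time or adding resources. The feature names and numbers below are illustrative assumptions.

```python
# Sketch of RAD's fixed time/resources, variable functionality trade-off.
# When the work does not fit the timebox, lower-priority features are cut.

def fit_to_timebox(features, capacity_days):
    """Keep the highest-priority features that fit within the fixed timebox.

    features: list of (name, priority, effort_days); a lower priority number
    means more important. Returns (kept, dropped) lists of names."""
    kept, dropped, used = [], [], 0
    for name, priority, effort in sorted(features, key=lambda f: f[1]):
        if used + effort <= capacity_days:
            kept.append(name)
            used += effort
        else:
            dropped.append(name)  # functionality, not time, gives way
    return kept, dropped

features = [("order entry", 1, 10), ("reporting", 2, 25), ("audit trail", 3, 30)]
kept, dropped = fit_to_timebox(features, capacity_days=40)
print(kept, dropped)  # ['order entry', 'reporting'] ['audit trail']
```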

Figure 1.1: Traditional development -time and resources

RAD compartmentalizes the development and delivers quickly and often. This provides the business and the users with a quick but, it is hoped, useful part of the system in a refreshingly short timescale. The system at this stage is probably quite limited in relation to the total requirements, but at least something has been delivered. This rapid delivery of the most important requirements also helps to build credibility and enthusiasm among the users and the business; often, for the first time, they experience a system that is delivered on time. This is radically different from the conventional delivery mode of most methodologies, which is a long development period of often two to three years followed by the implementation of the complete system. A benefit of RAD development is that users trade off unnecessary (or at least initially unnecessary) requirements and wish lists (i.e. features that would be ‘nice to have’ in an ideal world) for speed of development. This also means that, if requirements change over time, the total system has not been completed and each timebox can accommodate the changes that become necessary as requirements change and evolve during the previous timebox. It also has the advantage that the users become experienced with using and working with the system and learn what they really require from the early features that are implemented.

Figure 1.2: Comparison of timebox development and traditional development

Figure 1.2 illustrates three chunks of development. Although the overall time to achieve the full implementation could be the same as with a traditional development, the likelihood is that the system actually developed at the end of the three timeboxes will be radically different from that developed at the end of one large chunk, as a result of the learning and evolving processes that lead to changes being made to each specification at the beginning of each timebox.

Some RAD proponents argue that, if the system cannot be divided into 90 day timeboxes, then it should not be undertaken at all. Obviously such an approach requires a radically different development culture from that required for traditional or formalized methodologies. The focus is on speed of delivery, the identification of the absolutely essential requirements, implementation as a learning vehicle, and the expectation that the requirements will change in the next timebox. Clearly such radical changes are unlikely to be achieved using conventional techniques.

Once the duration of the timebox has been decided, it is imperative that the system is delivered at the end of it without slippage, as timeboxes are fixed. So how is this achieved? First, by hard work and long hours; secondly, by the use of the other RAD techniques discussed below. But if slippage is experienced during development of a timebox, then the requirements are reduced still further (i.e. some of the things that the system was going to do will be jettisoned).

The Pareto principle

This is essentially the 80/20 rule, and it is thought to apply to requirements. The belief of RAD proponents is that around 80 per cent of a system’s functionality can be delivered with around 20 per cent of the effort needed to complete 100 per cent of the requirements. This means that it is the last, and probably most complex, 20 per cent of requirements that takes most of the effort and time. So why do it? Just choose as much of the 80 per cent to deliver as possible in the timebox, or at least the first timebox. The rest, if it proves necessary, can be delivered in subsequent timeboxes.
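The arithmetic behind this claim can be made concrete. The effort figures below are assumptions chosen purely to illustrate the rule, not measurements.

```python
# Illustrative 80/20 arithmetic: if 80% of the functionality costs only 20%
# of the total effort, delivering just that portion is a 5x effort saving.

total_effort = 100.0                # assumed effort for 100% of requirements
easy_effort = 0.20 * total_effort   # effort for the "easy" 80% of functionality

saving = total_effort / easy_effort
print(f"Delivering the easy 80% costs {easy_effort:.0f} effort units, "
      f"a {saving:.0f}x reduction versus full delivery.")
```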

MoSCoW rules

In RAD the requirements of a project are prioritized using what is termed the MoSCoW Rules:

  • M = ‘the Must Haves’. Without these features the project is not viable (i.e. these are the minimum critical success factors fundamental to the project’s success).
  • S = ‘the Should Haves’. To gain maximum benefit these features will be delivered, but the project’s success does not rely on them.
  • C = ‘the Could Haves’. If time and resources allow, these features will be delivered, but they can easily be left out without impacting the project.
  • W = ‘the Won’t Haves’. These features will not be delivered. They can be left out and possibly, although not necessarily, be done in a later timebox.

The MoSCoW rules ensure that a critical examination is made of requirements and that no ‘wish lists’ are made by users. All requirements have to be justified and categorized. Normally in a timebox all the ‘must haves’ and at least some of the ‘should haves’ and a few of the ‘could haves’ would be included. Of course, as has been mentioned, under pressure during the development the ‘could haves’ may well be dropped and even possibly the ‘should haves’ as well.
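The MoSCoW categorization lends itself to a simple planning filter. The sketch below (requirement names invented for illustration) includes the Musts, Shoulds and Coulds in a timebox, and drops the Coulds when the timebox comes under pressure:

```python
# Sketch of applying the MoSCoW rules when planning a timebox. Under
# pressure the Could Haves are dropped first; Must Haves are never dropped.
# Requirement names are invented for illustration.

RANK = {"M": 0, "S": 1, "C": 2, "W": 3}  # Must, Should, Could, Won't

def plan_timebox(requirements, under_pressure=False):
    """Return the names of requirements included in the timebox.

    requirements: list of (name, category) with category one of M/S/C/W.
    Won't Haves are always excluded; Could Haves go when under pressure."""
    cutoff = RANK["S"] if under_pressure else RANK["C"]
    return [name for name, cat in requirements if RANK[cat] <= cutoff]

reqs = [("process order", "M"), ("email receipt", "S"),
        ("theme picker", "C"), ("voice input", "W")]
print(plan_timebox(reqs))                       # M, S and C items included
print(plan_timebox(reqs, under_pressure=True))  # only M and S items survive
```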

JAD workshops

RAD requires high levels of participation from all stakeholders in a project as a point of principle, and achieves this partly through the JAD workshop. JAD (Joint Application Development) is a facilitated meeting designed to overcome the problems of traditional requirements gathering (see Section 16.2), in particular interviewing users. It avoids the long time a cycle of interviews takes by getting all the relevant people together for a short period to hammer out decisions. Normally in the context of RAD, a JAD workshop will occur early in the development process to help establish and agree the initial requirements, the length of the timebox, what should be included in and excluded from the timebox, and, most importantly, to manage expectations and gain commitment from the stakeholders. Sometimes a subsequent JAD workshop is used to firm up the details of the initial requirements. In some RAD approaches the whole process is driven by a series of JAD meetings that occur throughout the timebox.

A key element is the presence of an executive sponsor. This is the person who wants the system (or whatever the focus of the meeting is), is committed to achieving it, and is prepared to fund it. This person is usually a senior executive who understands and believes in the JAD approach and who can overcome the bureaucracy and politics that tend to get in the way of the fast decision making that usually bedevils traditional meetings.

Prototyping

Prototyping is an important part of RAD and is used to help establish the user requirements; in some cases the prototype evolves to become the system itself. Prototyping helps speed up the process of eliciting requirements, and speed is obviously important in RAD, but it also fits the RAD view of evolving requirements and of users not knowing exactly what they want until they see or experience the system. The prototype is obviously very helpful in this respect.

Sponsor and Champion

Having a committed sponsor and a champion of the system is an important requirement for RAD and for its success. We have discussed the sponsor above. A champion is someone, often at a lower level of seniority, who is also committed to the project, who understands and believes in RAD, and who is prepared to drive the project forward and overcome some of the bureaucracy and politics.

Toolsets

RAD usually, although not necessarily, adopts toolsets to help speed up the process and improve productivity. It is usually argued that the routine and time-consuming tasks can be automated, and that available tools can be used for change control, configuration management, and even code reuse. Reuse of code is another way that RAD speeds the development process. However, it is not just about speed but also quality, because existing code or modules have usually already been well tested, not just in development but in real use. RAD searches for shortcuts: it reuses code, perhaps clones existing code and modifies it, or utilizes commercial packages where applicable. This may be code within the organization or bought from outside. Sometimes more than just a little piece of code is reused; for example, complete applications may be taken as the basis of the new system, with the interface changed to produce the desired results. Many ‘new’ e-commerce or Internet applications have been developed in this way, using the legacy systems and then providing a new ‘umbrella’ set of applications and user interfaces on top. The idea is to leverage existing code, systems, experience, etc.

Specific RAD productivity tools have been around for some time and are developing fast, and existing tools and languages are being enhanced for RAD, particularly for rapid development of Internet and e-business based applications.

Source

  • David Avison & Guy Fitzgerald, 2006. Rapid application development (RAD). In: David Avison & Guy Fitzgerald, Information Systems Development. 4th ed. Pearson Education Limited. Ch. 7, pp. 128–132. ISBN-13 978-0-07-711417-6.

Rapid Application Development Methodology

A method of developing information systems which uses prototyping to achieve user involvement and faster development compared to traditional methodologies such as SSADM.
Professor Clifford Kettemborough of Whitehead College, University of Redlands, defines Rapid Application Development as an approach to building computer systems which combines Computer-Assisted Software Engineering (CASE) tools and techniques, user-driven prototyping, and stringent project delivery time limits into a potent, tested, reliable formula for top-notch quality and productivity. RAD drastically raises the quality of finished systems while reducing the time it takes to build them.

Online Knowledge defines Rapid Application Development as a methodology that enables organizations to develop strategically important systems faster while reducing development costs and maintaining quality. This is achieved by using a series of proven application development techniques, within a well-defined methodology.

History

Rapid application development is a term originally used to describe a software development process introduced by James Martin in 1991. Martin’s methodology involves iterative development and the construction of prototypes. More recently, the term and its acronym have come to be used in a broader, general sense that encompasses a variety of methods aimed at speeding application development, such as the use of software frameworks of varied types, including web application frameworks.

RAD vs. Waterfall

In the 1970s and 1980s, several sequential software engineering processes were developed that regarded application development as flowing steadily downward through the different phases of the development process.

Traditional development vs. Rapid Application Development

The first description of such a waterfall model is often cited as an article published in 1970 by Winston W. Royce. However, in this article Royce presented the model as an example of a flawed, non-working approach. ‘Waterfall’ has in later years become a common term to criticize traditional software development practices. The problem with most waterfall models is that they rely on a methodical requirements analysis phase alone to identify all critical requirements for the application. There is ample evidence that projects using a waterfall approach often fail to provide a usable end product: projects often exceed their deadlines, and the product either fails to meet all requirements or the requirements change during the protracted development phase.

There have been many responses to this problem from the 1980s to today, but almost all of those responses focus on the idea of developing systems in a short timeframe with small teams of highly skilled and motivated staff. It is the iteration step that solves the problems inherent to the inflexible approach of traditional software development.

Why was needed to invent RAD?

The evidence from project failures in the 1980s and 1990s, as shown for example by the Standish Group Chaos report (1995), implies that traditional structured methodologies have a tendency to deliver systems that arrive too late and therefore no longer meet their original requirements. Traditional methods can fail in a number of ways.

  • A gap of understanding between users and developers: Users tend to know less about what is possible and practical from a technology perspective, while developers may be less aware of the underlying business decision-making issues which lie behind the systems development requirement.
  • Tendency of developers to isolate themselves from users: Historically, systems developers have been able to hide behind a wall of jargon, placing the user community at an immediate disadvantage when discussing IS/IT issues. While some jargon may be necessary if points are to be made succinctly, it is often used to obscure poor progress with a particular development project. The tendency towards isolation is enhanced by the physical separation of some computer staff in their air-conditioned computer rooms. Developers might argue in their defence that users also have their own domain-specific jargon, which adds to the problem of deciphering requirements.
  • Quality measured by closeness of product to specification: This is a fundamental difficulty: the observation that ‘the system does exactly what the specification said it would do’ hides the fact that the system may still not deliver the information that the users need for decision-making purposes. The real focus should be on a comparison of the deliverables with the requirements, rather than of the deliverables with a specification that was a reflection of a perceived need at a particular point in time.
  • Long development times: SSADM and the waterfall model show that the process of analysis and design can be very laborious and time-consuming. Development times are not helped by the fact that an organization may be facing rapidly changing business conditions, so requirements may similarly be changing. There is a real risk of the ‘moving goal-post’ syndrome causing havoc with a traditional approach to systems development.
  • Business needs change during the development process: This is alluded to above. A method is needed where successive iterations in the development process are possible so that the latest requirements can be incorporated.
  • What users get isn’t necessarily what they want: The first a user may see of a new information system is at the testing or training stage. Only at this point will it be seen whether the system as delivered by the IS/IT professionals is what the user actually needs. An appropriate analogy here is the purchase of a house or car simply on the basis of discussions with an estate agent or a garage, rather than by actually visiting the house or driving the car. It is unlikely that something purchased in this way will result in a satisfied customer, and there is no reason to suppose that information systems developed in a similar way will be any more successful.
Rapid Application Development Model

Not only is there pressure from end-user management for faster systems development; IS/IT departments themselves increasingly recognize the need to make more effective use of the limited human resources within their departments while quickly delivering systems that confer business benefits. All this is in a climate of rapid business change and, therefore, rapidly changing information needs. Rapid Application Development (RAD) is a possible solution to these problems and pressures.

Is RAD appropriate for all projects?

The Rapid Application Development methodology was developed to respond to the need to deliver systems very fast. The RAD approach is not appropriate to all projects – an air traffic control system based on RAD would not instill much confidence. Project scope, size and circumstances all determine the success of a RAD approach. The following categories indicate suitability for a RAD approach:

Project Scope
Suitable for RAD – Focused scope where the business objectives are well defined and narrow.
Unsuitable for RAD – Broad scope where the business objectives are obscure or ill-defined.

Project Data
Suitable for RAD – Data for the project already exists (completely or in part). The project largely comprises analysis or reporting of the data.
Unsuitable for RAD – Complex and voluminous data must be analyzed, designed and created within the scope of the project.

Project Decisions
Suitable for RAD – Decisions can be made by a small number of people who are available and preferably co-located.
Unsuitable for RAD – Many people must be involved in the decisions on the project, the decision makers are not available on a timely basis or they are geographically dispersed.

Project Team
Suitable for RAD – The project team is small (preferably six people or fewer).
Unsuitable for RAD – The project team is large or there are multiple teams whose work needs to be coordinated.

Project Technical Architecture
Suitable for RAD – The technical architecture is defined and clear and the key technology components are in place and tested.
Unsuitable for RAD – The technical architecture is unclear and much of the technology will be used for the first time within the project.

Project Technical Requirements
Suitable for RAD – Technical requirements (response times, throughput, database sizes, etc.) are reasonable and well within the capabilities of the technology being used. In fact, targeted performance should be less than 70% of the published limits of the technologies.
Unsuitable for RAD – Technical requirements are tight for the equipment to be used.
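The six suitability criteria above can be read as a simple checklist. The sketch below is purely illustrative and not part of any published RAD method: the function name, parameters, and the all-or-nothing verdict rule are assumptions made for the example. It does, however, encode the 70%-of-published-limits rule for technical requirements stated in the text.

```python
def rad_suitability(focused_scope, data_exists, few_decision_makers,
                    small_team, proven_architecture,
                    targeted_load, published_limit):
    """Count how many of the six RAD suitability criteria are met.

    The technical-requirements check applies the 70% rule from the
    text: targeted performance should stay below 70% of the
    technology's published limits.
    """
    headroom_ok = targeted_load < 0.70 * published_limit
    checks = [focused_scope, data_exists, few_decision_makers,
              small_team, proven_architecture, headroom_ok]
    score = sum(checks)
    # Hypothetical rule: recommend RAD only when every criterion holds.
    verdict = "suitable" if score == len(checks) else "reconsider"
    return score, verdict

# Example: a small, focused project on existing data, but one that
# would push the database to 90% of its published capacity.
score, verdict = rad_suitability(True, True, True, True, True,
                                 targeted_load=900, published_limit=1000)
print(score, verdict)  # 5 "reconsider" – the 70% headroom rule fails
```

In practice a team would weigh the criteria rather than demand all six, but the checklist form makes the assessment explicit and repeatable.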

General Characteristics of Rapid Application Development (RAD)

   Sources