DSDM: Technology Support

The need for technology support

The technology used to visualize what the developers are thinking and to gain feedback from that visualization is the basis of much of agile development. However, it is not the total answer. The technology support for a controlled process does not lie solely in the easy generation of analysis and design models, screens, and code. If the process is to be controlled, then strong emphasis should also be placed on automated support for the necessary controls. Controls are an overhead on productive work, albeit a necessary one. Savings in effort can be made by automating the control of the status of, and access to, work products, and by ensuring that they have been created correctly.

Agile developers would much rather spend their time creating the solution than controlling it, so it is the control activities that are likely to be squeezed out of their schedules when they are under pressure to deliver. Another area that does not enthrall developers is configuration management. However, configuration management is one of the elements of an agile support environment where more things are being produced and changed at a faster rate than in a traditional method, so the need for support in this area is fundamental. It should be easy for the developers to place their work under configuration management as soon and as often as they should, without slowing down their development activities. Testing also looms large as something which developers see as a necessary evil, but which would be a much more productive activity with tool support. The list goes on.

DSDM Support Environments

DSDM has defined an agile tool ‘nirvana’: an environment which will support the whole process from feasibility to implementation (including aspects such as reverse engineering for legacy code) with all the necessary controls as automated as possible. It does not exist, and it is unlikely that any one tool vendor will offer the fully integrated set. Indeed, it is yet another cry for an IPSE (integrated project support environment), but one which is designed for DSDM projects. Such an environment requires integration at a number of levels:

  • presentation to provide a common ‘look and feel’ across all tools;
  • data so that all tools share the same repository;
  • control so that one tool can notify and/or initiate actions in other tools;
  • platform, in order to port the toolset from one platform to another.

Maybe such an environment will exist in the future, but in the meantime we have to be more realistic and look for tools that will save time and effort without being too costly. If we focus on the cost side, several low-cost tools have been found to have a beneficial impact on effort. Low-cost tools for code and schema generation are available, as are tools for prototype generation; both speed up development markedly compared with coding by hand. Another area where inexpensive tools can help is the perennial headache of documentation. Automated support for creating documentation is readily available and, fortunately, many tools are self-documenting.
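To make the idea of code and schema generation concrete, the short sketch below (written in Python purely for illustration) turns a simple table description into both a SQL schema and a matching class skeleton. The schema format, table name and column names are invented for the example; real generators work from much richer models held in a repository.

    # Minimal illustration of schema-driven generation: one table description,
    # two generated artefacts. All names here are invented for the example.
    schema = {
        "table": "customer",
        "columns": {
            "id": "INTEGER PRIMARY KEY",
            "name": "TEXT NOT NULL",
            "credit_limit": "REAL",
        },
    }

    def generate_sql(s):
        cols = ",\n    ".join(f"{name} {ctype}" for name, ctype in s["columns"].items())
        return f"CREATE TABLE {s['table']} (\n    {cols}\n);"

    def generate_class(s):
        lines = [f"class {s['table'].capitalize()}:"]
        lines.append("    def __init__(self, " + ", ".join(s["columns"]) + "):")
        lines += [f"        self.{c} = {c}" for c in s["columns"]]
        return "\n".join(lines)

    print(generate_sql(schema))
    print(generate_class(schema))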

Testing Tools

One of the components of the DSDM support environment is testing tools. There are many varieties of testing tool available on the market and DSDM strongly advocates the use of tools in this area. Producing a tested system in a very short time can only be made easier with effective tools.

A very useful class of tools is capture and replay tools. These can lessen the need for documented test scripts: the quickest way to document tests is to record them as they are performed, and a great deal of developer time can be saved through this route. Not only does this eliminate the need for producing ‘paper’ scripts before testing takes place, but the recordings can be kept as evidence of what tests have taken place. Capture and replay tools are also extremely beneficial in building up regression test suites which can be left to run overnight while the developers have a well-earned rest.
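The principle behind capture and replay can be illustrated with a small sketch. Commercial tools record interactions at the user-interface level; the Python fragment below shows the same idea at the level of a single function, replaying a recording of inputs and previously observed outputs as a regression check. The function and file names (apply_discount, recording.json) are invented for the example.

    import json

    def apply_discount(order_total, discount_pct):
        # the behaviour being protected against regression (example only)
        return round(order_total * (1 - discount_pct / 100), 2)

    def replay(recording_file):
        # a "recording" is a list of steps, each with inputs and the output
        # observed when the test was first performed and accepted
        with open(recording_file) as f:
            steps = json.load(f)
        return [(s["inputs"], s["expected"], apply_discount(*s["inputs"]))
                for s in steps
                if apply_discount(*s["inputs"]) != s["expected"]]

    # recording.json might contain:
    # [{"inputs": [100.0, 10], "expected": 90.0}, {"inputs": [59.99, 0], "expected": 59.99}]
    # replay("recording.json") returns an empty list if nothing has regressed.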

Static code analyzers can relieve the effort in code inspection and lessen the need for third-party verification that the code is to the required standard.
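As an illustration of the kind of rule a static analyzer automates, the toy check below flags any function longer than a chosen limit, using Python's standard ast module. Real analyzers apply hundreds of such rules; the 25-line limit and the module name are arbitrary examples.

    import ast

    def long_functions(source, max_lines=25):
        """Return (name, length) for each function definition longer than max_lines."""
        offenders = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                length = node.end_lineno - node.lineno + 1
                if length > max_lines:
                    offenders.append((node.name, length))
        return offenders

    if __name__ == "__main__":
        with open("example_module.py") as f:   # any module under inspection
            for name, length in long_functions(f.read()):
                print(f"{name}: {length} lines exceeds the coding standard")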

If the testing toolset is to be really complete, then dynamic analysis tools will perform tests in the background while demonstrations of a part of the software are taking place. Dynamic analysis includes checking array bounds and detecting memory leakage, etc.; things that should be tested, but which may be difficult to fit into the tight schedule of a project.
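One flavour of dynamic analysis – watching for memory growth while part of the software is exercised – can be sketched with Python's standard tracemalloc module. This is only an illustration of the principle; dedicated tools perform the equivalent checks (array bounds, leak detection) for compiled languages.

    import tracemalloc

    def run_with_leak_check(action, repetitions=1000):
        # exercise one piece of the software repeatedly and report the
        # allocation sites that grew the most between two snapshots
        tracemalloc.start()
        before = tracemalloc.take_snapshot()
        for _ in range(repetitions):
            action()
        after = tracemalloc.take_snapshot()
        tracemalloc.stop()
        for stat in after.compare_to(before, "lineno")[:5]:
            print(stat)

    # e.g. run_with_leak_check(lambda: my_cache.add(object()))  # my_cache is hypothetical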

Configuration Management Tools

DSDM asks a lot of configuration management. Everything that is produced (analysis models, designs, data, software, tests, test results, etc.) should be kept in step with everything else, so that it is relatively easy to move back to a known ‘build’ state whenever the development has gone down a blind alley. This requirement means that automated support is essential and that it should cope with the many versions of products that are created, but this is rarely the case. In any case, given the diversity of products that are under development at any one time, it is probably asking too much to expect all the relevant tools used on a project to be sufficiently integrated to keep every product in step. This means that a specialist configuration management tool should definitely be on the shopping list of an organization that is planning to take DSDM seriously. The ability to baseline everything on a daily basis is the ideal, so the tool should not incur too much of an overhead in its use. There are several excellent configuration management tools available which will do the job perfectly satisfactorily if the procedures governing their use are firmly in place.
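The daily baseline idea can be sketched very simply on top of an ordinary version control tool. The fragment below assumes a Git repository: it commits whatever is in the working tree and labels it with the date, so the team can return to a known build state. Any tool with cheap labels or tags would serve equally well, and the procedures around the tool matter more than the script itself.

    import subprocess
    from datetime import date

    def daily_baseline(repo_path="."):
        # commit the current state of every work product and tag it with the date
        label = f"baseline-{date.today().isoformat()}"
        subprocess.run(["git", "add", "-A"], cwd=repo_path, check=True)
        subprocess.run(["git", "commit", "--allow-empty", "-m", f"Daily baseline {label}"],
                       cwd=repo_path, check=True)
        subprocess.run(["git", "tag", label], cwd=repo_path, check=True)
        return label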

Effective Tool Usage

Although there are excellent tools on the market, any tool is only as good as its users, and tools should not be relied upon as the whole answer. The developers should be confident that they know how to use them properly and that the tools are an asset rather than otherwise. An organization considering the purchase of a tool environment for agile development should think carefully before buying. It is possible in early DSDM projects to live with what you already have; indeed, it is probably preferable not to introduce too many new things at the same time. Once the developers are used to the process, they will soon see where tool support would be particularly beneficial in their working environment. If tool support is to be bought, the purchaser should read the chapter in the online manual that gives very strong guidance on the characteristics of tools for DSDM. Not least of these is usability. For some reason, software tools are often less usable than their counterparts in the business environment – maybe we just like to make things hard for ourselves.

Source

  • DSDM Consortium, edited by Jennifer Stapleton, 2003. DSDM: Business Focused Development. 2nd ed. The DSDM Consortium. ISBN 0-321-11224-5

Short Description: Nine (9) Principles of Dynamic Systems Development Method (DSDM)

Summary of Nine Principles of DSDM
  1. Active User Involvement is Imperative
    • DSDM – a user-centred approach
    • Active participation through lifecycle
  2. DSDM Teams must be Empowered to Make Decisions
    • DSDM team comprises developers and users
    • Decisions made as requirements refined or changed
    • No need for recourse to higher management
    • Rapid and informed decision-making
  3. The Focus is on Frequent Delivery of Products
    • Team produces agreed products throughout lifecycle
    • Team chooses best approach to achieve objectives
    • Ensures focus on delivery, not just activity
  4. Fitness for Business Purpose is the Essential Criterion for Acceptance of Deliverables
    • Build the right product before you build it right
    • Meeting business need is more important than technical perfection
  5. An Iterative and Incremental Approach is Necessary  to Converge on an Accurate Business Solution
    • DSDM allows solutions to emerge incrementally
    • Developers make full use of user feedback
    • Partial solutions can be delivered to meet immediate needs
  6. All Changes During Development are Reversible
    • All products should be in a known state at all times
    • It should be possible to step backwards, where an approach does not work
    • The team should be willing to embrace change and not be defensive
  7. Requirements are Baselined at a High Level
    • Freezing and agreeing purpose and scope of system
    • Baseline at a level which allows detailed investigation of requirements at a later stage
  8. Testing is Integrated Throughout the Lifecycle
    • Not a separate activity at the end
    • System is tested and reviewed incrementally by developers and users
    • Testing evolves as prototypes mature
    • Aim is to find and fix errors as early as possible
  9. A Collaborative and Co-operative Approach between all Stakeholders is Essential
    • Everyone working together as a team
    • Shared goal of achieving the business objectives
    • Give and take on all sides
    • Involves all parties, not just core team

Nine Principles of DSDM -Detail

Source

  • University of Greenwich,  London, UK

Rapid Application Development Methodology

A method of developing information systems which uses prototyping to achieve user involvement and faster development compared to traditional methodologies such as SSADM.
Professor Clifford Kettemborough of Whitehead College, University of Redlands, defines Rapid Application Development as an approach to building computer systems which combines Computer-Assisted Software Engineering (CASE) tools and techniques, user-driven prototyping, and stringent project delivery time limits into a potent, tested, reliable formula for top-notch quality and productivity. RAD drastically raises the quality of finished systems while reducing the time it takes to build them.

Online Knowledge defines Rapid Application Development as a methodology that enables organizations to develop strategically important systems faster while reducing development costs and maintaining quality. This is achieved by using a series of proven application development techniques, within a well-defined methodology.

History

Rapid application development is a term originally used to describe a software development process introduced by James Martin in 1991. Martin’s methodology involves iterative development and the construction of prototypes. More recently, the term and its acronym have come to be used in a broader, general sense that encompasses a variety of methods aimed at speeding application development, for example the use of software frameworks of various kinds, such as web application frameworks.

RAD vs. Waterfall

In the 1970s and 1980s, several sequential software engineering processes were developed that regarded application development as flowing steadily downward through the different phases of the development process.

Traditional development vs. Rapid Application Development

The first description of such a waterfall model is often credited to an article published in 1970 by Winston W. Royce. However, in this article, Royce presented the model as an example of a flawed, non-working approach. “Waterfall” has in later years become a common term used to criticize traditional software development practices. The problem with most waterfall models is that they rely on a methodical requirements analysis phase alone to identify all critical requirements for the application. There is ample evidence that projects using a waterfall approach often fail to provide a usable end product: projects often exceed their deadlines, and the product either fails to meet all requirements or the requirements change during the protracted development phase.

There have been many responses to this problem from the 1980s to today, but almost all of those responses focus on the idea of developing systems in a short timeframe with small teams of highly skilled and motivated staff. It is the iteration step that solves the problems inherent to the inflexible approach of traditional software development.

Why was RAD needed?

The evidence from project failures in the 1980s and 1990s, for example the Standish Group CHAOS report (1995), implies that traditional structured methodologies have a tendency to deliver systems that arrive too late and therefore no longer meet their original requirements. Traditional methods can fail in a number of ways.

  • A gap of understanding between users and developers: Users tend to know less about what is possible and practical from a technology perspective, while developers may be less aware of the underlying business decision-making issues which lie behind the systems development requirement.
  • Tendency of developers to isolate themselves from users: Historically, systems developers have been able to hide behind a wall of jargon, rendering the user community at an immediate disadvantage when discussing IS/IT issues. While some jargon may be necessary if points are to be made succinctly, it is often used to obscure poor progress with a particular development project. The tendency towards isolation is reinforced by the physical separation of some computer staff in their air-conditioned computer rooms. Developers might argue in their defence that users also have their own domain-specific jargon, which adds to the problem of deciphering requirements.
  • Quality measured by closeness of product to specification: This is a fundamental difficulty – the observation that ‘the system does exactly what the specification said it would do’ hides the fact that the system may still not deliver the information that the users need for decision-making purposes. The real focus should be on a comparison of the deliverables with the requirements, rather than of the deliverables with a specification that was a reflection of a perceived need at a particular point in time.
  • Long development times: Experience with SSADM and the waterfall model shows that the process of analysis and design can be very laborious and time-consuming. Development times are not helped by the fact that an organization may be facing rapidly changing business conditions, so that requirements may similarly be changing. There is a real risk of the ‘moving goal-posts’ syndrome causing havoc with a traditional approach to systems development.
  • Business needs change during the development process: This is alluded to above. A method is needed where successive iterations in the development process are possible so that the latest requirements can be incorporated.
  • What users get isn’t necessarily what they want: The first sight a user may have of a new information system is often at the testing or training stage. Only at this point will it be seen whether the system as delivered by the IS/IT professionals is what the user actually needs. An appropriate analogy here is the purchase of a house or car simply on the basis of discussions with an estate agent or a garage, rather than by actually visiting the house or driving the car. It is unlikely that something purchased in this way will result in a satisfied customer, and there is no reason to suppose that information systems developed in a similar way will be any more successful.

Rapid Application Development Model

Not only is there pressure from end-user management for faster systems development, but IS/IT departments themselves increasingly recognize the need to make more effective use of the limited human resources within their departments while at the same time quickly delivering systems that confer business benefits. All this is in a climate of rapid business change and, therefore, rapidly changing information needs. Rapid Application Development (RAD) is one possible solution to these problems and pressures.

Is RAD  appropriate for all projects?

The Rapid Application Development methodology was developed to respond to the need to deliver systems very fast. The RAD approach is not appropriate to all projects – an air traffic control system based on RAD would not instill much confidence. Project scope, size and circumstances all determine the success of a RAD approach. The following categories indicate suitability for a RAD approach:

Project Scope
Suitable for RAD – Focused scope where the business objectives are well defined and narrow.
Unsuitable for RAD – Broad scope where the business objectives are obscure or broad.

Project Data
Suitable for RAD – Data for the project already exists (completely or in part). The project largely comprises analysis or reporting of the data.
Unsuitable for RAD – Complex and voluminous data must be analyzed, designed and created within the scope of the project.

Project Decisions
Suitable for RAD – Decisions can be made by a small number of people who are available and preferably co-located.
Unsuitable for RAD – Many people must be involved in the decisions on the project, the decision makers are not available on a timely basis or they are geographically dispersed.

Project Team
Suitable for RAD – The project team is small (preferably six people or less).
Unsuitable for RAD – The project team is large or there are multiple teams whose work needs to be coordinated.

Project Technical Architecture
Suitable for RAD – The technical architecture is defined and clear and the key technology components are in place and tested.
Unsuitable for RAD – The technical architecture is unclear and much of the technology will be used for the first time within the project.

Project Technical Requirements
Suitable for RAD – Technical requirements (response times, throughput, database sizes, etc.) are reasonable and well within the capabilities of the technology being used. In fact targeted performance should be less than 70% of the published limits of the technologies.
Unsuitable for RAD – Technical requirements are tight for the equipment to be used.
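The 70% guideline amounts to a simple headroom check, shown below with invented figures: a database engine rated at 1,000 transactions per second leaves acceptable headroom for a RAD project targeting up to 700 transactions per second.

    def within_rad_headroom(target, published_limit, headroom=0.7):
        # suitable for RAD only if the target sits comfortably below the published limit
        return target <= headroom * published_limit

    print(within_rad_headroom(target=650, published_limit=1000))   # True  - suitable
    print(within_rad_headroom(target=850, published_limit=1000))   # False - too tight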

General Characteristics of Rapid Application Development (RAD)


Running prototype demonstrations in DSDM

In order to get the best value out of prototype demonstrations, the guidelines below should be followed.

The demonstrators should prepare the audience for any prototype demonstration session. The objectives of the session should be clearly stated together with the known limitations of what will be seen. It can be too easy to assume that, since one or two users have been involved in the development of a prototype, the rest of the user community are as knowledgeable about what is going on. Indeed, the Ambassador Users will talk to their colleagues but this communication channel should not be totally relied upon until working software has been demonstrated to the wider user population, as it is often difficult to explain what has not been seen – the reason why DSDM is the way it is.

During the session, discussion should be encouraged. Putting the prototype through its paces at such a speed that the users cannot see what may be expected of them in the future is worse than useless. The development team may come away from the session feeling reassured that they are following the right track, whereas all that has happened is that any comments have been stifled.

The Scribe should record all comments made during the demonstration. Otherwise, since the focus of the developers  and the users will be on the behaviour of the prototype, some feedback may be forgotten.


Prototyping cycles in DSDM

Each of the development phases (the Functional Model and the Design and Build Iterations) contains iteration through prototyping. There are basic controls that must be implemented in order to ensure the success of these activities. These controls are built into a prototyping cycle and are aligned to the cycles within the timebox process.

A prototyping cycle passes through four stages:

  •  identify prototype
  • agree plan
  • create prototype
  • review prototype.

Identify prototype

Before embarking on building a prototype, what is to be prototyped must be clearly identified. This decision will be based on the relative priorities given to the functional and non-functional requirements. Having partitioned the application into possible prototypes, it should be made clear which are the essential parts of each prototype and which are “extras” through applying the MoSCoW rules. This will limit the scope of activity in each prototyping cycle and determine the priorities of the developers and users who are building the prototype.
The prototypes can be selected by various criteria: business area, basic processing required, the user groups, information accessed and the criticality of the processing to the final system.
The results of reviews from previous prototyping cycles provide valuable information when identifying the prototypes to be developed.
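As an illustration of applying the MoSCoW rules when scoping a prototype, the sketch below gives each candidate requirement a priority and an effort estimate, takes the Must Haves first and then adds lower-priority items only while the timebox allows. The requirement names, priorities and effort figures are invented for the example.

    PRIORITY_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

    requirements = [                              # (name, MoSCoW priority, effort in days)
        ("record customer order", "Must", 3),
        ("print delivery note", "Should", 2),
        ("email order confirmation", "Could", 2),
        ("multi-currency pricing", "Won't", 5),
    ]

    def prototype_scope(reqs, timebox_days):
        scope, used = [], 0
        for name, priority, effort in sorted(reqs, key=lambda r: PRIORITY_ORDER[r[1]]):
            if priority == "Won't":
                continue
            if priority == "Must" or used + effort <= timebox_days:
                scope.append(name)
                used += effort
        return scope

    print(prototype_scope(requirements, timebox_days=5))
    # -> ['record customer order', 'print delivery note']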

Acceptance criteria for the prototype should be defined in outline before any development takes place in order to lead the prototyping activity in the most useful direction.

Agree plan

The Development Plan in its schedule of timeboxes sets a limit on the time to be spent on each prototype. The team must agree the detailed plan for the current prototyping cycle. This includes prototyping the Must Haves first. Lesser parts will be dealt with if time is available.

The plan should not be allowed to slip unless significant problems arise, e.g. unexpected and dramatic changes in scope. However, such problems will probably require halting all activity temporarily while the project direction is rethought.

It is important that the reasons for the time limit are clearly understood by the users. It is their priorities that will identify the essential components of a prototype. They must be made fully aware that asking for in-depth investigation of a particular area may mean that they have to decide what other area they can do without.

Create prototype

The prototypes are usually developed collaboratively with users. However, later in development, where issues such as performance are being addressed, the users will take a back seat.

Prototypes are not necessarily automated. For instance, early in development, a paper-based storyboard may be more cost-effective and flexible than a fully automated user interface to unexplored functionality. Both business and technical imperatives will drive the choice of prototyping medium.

Review prototype

Each prototype should be reviewed by the prototyping team (both developers and users) and, where appropriate, other interested parties (e.g. business analysts and senior user management) to ascertain:

  • which objectives it has met successfully
  • which areas will have to be included in later development
  • which missed areas can be safely postponed (or possibly incorporated in another project) in order to achieve the project’s timescales
  • the acceptability of what has been produced.

As well as verifying that the relevant quality criteria have been met, there are two major aims of the review:

  • to ensure that the development team are following the right track
  • to get the users at all levels to buy in to the completed and future work.

Categories of prototypes as recommended by DSDM

Dynamic Systems Development Method (DSDM) is a framework for delivering business solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001 approved. DSDM uses the word ‘prototyping’ because that is the industry ‘standard’, but these are not truly prototypes: they are partial system components. A DSDM prototype is not ‘all done by mirrors’, but is built using the platform on which all the development work is done, and meets all the required standards. In other words, the prototypes are intended to be evolutionary (they may be evolved horizontally – breadth first, then depth) rather than throwaway (where each section is built in detail, with additional iterations detailing subsequent sections): they will evolve into the delivered system. Of course, there will be occasions when it is better to throw something away and start again, but the aim at all times should be to build on what is there already.

Four categories of prototype are recommended by DSDM. They are used at different stages of development and have very different purposes. They are:

  • Business prototypes
  • Usability prototypes
  • Performance and capacity prototypes
  • Capability/technique prototypes

While the purpose of each prototype category is different, it will often be the case that some combination of them will be used. For instance, a common combination is the business and usability prototype, but this approach should not be taken as a matter of course. If the functionality is at all complex, it may be better to get it right before worrying about the presentation aspects. Conversely, if there is no standard for user interface design, it is a good idea to get some usability prototyping done first. The categories of prototype to be built in a timebox should be decided at its outset, based on the aims of the timebox.

Business prototypes

  • Purpose:
    A business prototype demonstrates the developers’ understanding of the functional requirements. The developers can use this prototype to demonstrate to the users how the final system could work. This will enable the users to better formulate their real business requirements.
  • Description:
    A business prototype is designed only to demonstrate how the business processes are supported by the computer system. It is not designed to look good or to be particularly user-friendly: nor will secondary functionality, such as error checking, necessarily be implemented. It is very important that the users understand the purpose of demonstrating a business prototype.
  • How used:
    A business prototype enables the developers to demonstrate to the users their understanding of the key system requirements early in the project. This early prototype can then be evolved to cover more of the functionality and to incorporate non-functional aspects. Typically, the developer will use a scripted demonstration to ensure that the functionality is demonstrated to best effect. Any discrepancies between the developers’ understanding and the business requirements are noted.
  • Position in the project lifecycle :
    The first business prototype will be demonstrated as early as possible in the project lifecycle (possibly as early as the Feasibility Study, but no later than early in the Functional Model Iteration). This will confirm that the right system is being built. First, the fundamental functionality of the system will be demonstrated. Later on, more detailed functional requirements can be demonstrated and/or other areas can be business prototyped. The final business prototype will clearly demonstrate to the users how the system will work. It may not look very pleasing and may have lots of missing functionality, but it will ensure that the right system is being built.

Usability prototypes

  • Purpose: A usability prototype ensures the computer system will be as easy and intuitive to use as possible. Users should enjoy using the system and it should be obvious how the system can be used. If this is done well, end-users will need less training before they can use the system effectively, while still having a good understanding of the full capabilities of the system.
  • Description:
    Well-designed computer systems are simple and straightforward, easy to learn and understand, and fun to use. It is impossible to achieve usability without trying the system out on the users. A usability prototype demonstrates how the user interacts with the system. It may not actually automate the business process. For example, a form is displayed and the user adds data, but nothing is written to disk (a minimal sketch of this idea follows this list). The user can understand how to move around the system and how the interface works.
  • How used:
    A user carries out a number of tasks using the prototype. Any difficulties encountered by the user in achieving those tasks are noted so that the usability of the system can be improved. Contrast this with the business prototype, where the developer will typically demonstrate the system to show the functionality only. A business prototype may be rather user-unfriendly, so it is better if the developer removes the need for a user to learn the interface. There is a danger of building a usability prototype that cannot be developed into the final system. Developers must take care not to create any unrealistic expectations.
  • Position in the project lifecycle:
    A usability prototype may be built during either the Functional Model Iteration or the Design and Build Iteration, but there are many benefits in building it as early as possible. For a computer system to be easy to use, it must have a simple conceptual model that the users can easily understand to help them move around the system.
    A usability prototype confirms that a good conceptual model has been chosen. If this is not done early on in the project and the model is wrong, it could adversely affect the design of the system. To reduce the risk of this, a high-level usability prototype should be built in the Functional Model Iteration or early on in the Design and Build Iteration. More detailed usability, such as the buttons in a window, can be designed later. If standards for the user interface are chosen early on, then all programs can be built to the agreed standard, reducing the need for rework. Having a stable installation Style Guide in place before prototyping commences on any project would help even more. In practice there are many benefits in combining the business and usability prototypes.
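A minimal sketch of the ‘form with no persistence’ idea mentioned above is shown below, using Python’s standard tkinter toolkit purely for illustration. The form captures data and confirms what would be saved, but deliberately writes nothing to disk, because the point of the prototype is the interaction rather than the function behind it.

    import tkinter as tk
    from tkinter import messagebox

    root = tk.Tk()
    root.title("Order entry - usability prototype")

    tk.Label(root, text="Customer name").grid(row=0, column=0, padx=5, pady=5)
    name_entry = tk.Entry(root, width=30)
    name_entry.grid(row=0, column=1, padx=5, pady=5)

    def on_save():
        # deliberately no persistence: the prototype only exercises the interaction
        messagebox.showinfo("Prototype", f"'{name_entry.get()}' would be saved here")

    tk.Button(root, text="Save", command=on_save).grid(row=1, column=1, sticky="e", padx=5, pady=5)

    root.mainloop()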

Performance and capacity prototypes

  • Purpose:
    A performance and/or capacity prototype ensures that the final computer system will be able to handle the full peak time workload required. While users may be involved, this category of prototype is typically for the benefit of the developers.
  • Description:
    This prototype deals with the non-functional aspects of the system, such as concurrent transaction volumes, data loading, overnight batch reports, and month end batch runs, as well as on-line screen performance. Checks will be made that the target system has enough resources to function adequately in the live environment, even when other systems are competing for machine resources.
  • How used:
    Performance and capacity prototypes are usually developed for the benefit of the developers to ensure the computer system can meet its desired performance requirements. A test scenario is set up and repeated while changing different aspects of the system to see how it performs. This process is best automated so that it is easily repeated, but it can be done manually where a group of users follow a script. The system is monitored to see where any “bottlenecks” occur (a minimal sketch of such a repeatable scenario follows this list).
  • Position in the project lifecycle:
    This category of prototype is typically used during the Design and Build Iteration, after the required functionality has been determined. Often existing business prototypes will be used for performance testing. Sometimes early in the project design, the developers may be concerned as to whether a certain piece of functionality can be provided within the constraints of the machine/network environment. In this case, a performance and capacity prototype can be specially built to check where the limits might be. Such a prototype might look very different from the final system.
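The repeatable test scenario described above can be sketched as a small script that fires a fixed number of concurrent requests at one operation, records the response times and reports the slowest cases, so that bottlenecks show up run after run. The operation and figures below are invented stand-ins; in practice the calls would exercise the real system.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def operation_under_test():
        time.sleep(0.01)                     # stand-in for a real transaction

    def run_scenario(users=20, requests_per_user=10):
        timings = []

        def one_user():
            for _ in range(requests_per_user):
                start = time.perf_counter()
                operation_under_test()
                timings.append(time.perf_counter() - start)

        with ThreadPoolExecutor(max_workers=users) as pool:
            for _ in range(users):
                pool.submit(one_user)
        timings.sort()
        print(f"{len(timings)} requests; worst 5 response times:",
              [round(t, 3) for t in timings[-5:]])

    run_scenario()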

Capability/technique prototypes

  • Purpose:
    Developers often have a range of design options and sometimes a choice of tools to use. A capability/technique prototype tries out a particular design approach or tool to help in choosing between these options.
  • Description:
    This category of prototype is typically limited in functionality and is for the benefit of the developers only. The prototype demonstrates the capabilities and limitations of an approach, a technique or a tool to the developers.
  • How used:
    The developer builds a number of prototypes and weighs up the benefits of each technical approach.
  • Position in the project lifecycle:
    Although selection of a tool is usually made outside any one particular project, a tool capability prototype can be developed during the Feasibility Study to ensure that the potential technical approach will be soundly based. Later, design capability/technique prototypes are created during the design phase of a project. Different designs or tools are used to build prototypes that represent the options available to the developer. Each prototype can then be assessed and the best approach, technique or tool selected.