
Challenges of testing when using Scrum

Recently I dived into the merits and challenges of software development (or, for that matter, the development of IT products in general) in Agile-based settings, in particular using Scrum. Scrum is part of a wave of Agile-based methodologies (Lean and Extreme Programming among others) that emerged in the 1990s and united in 2001 under the so-called Agile Manifesto. Although Scrum has been around for some time, it seems that in recent years it has taken its place in countless organisations as the preferred method of developing software. While studying the Scrum approach I kept a focus on its implications for the testing process. I will not go to great lengths to give a full description of the Scrum framework in this article, but will instead focus on possible weaknesses and strengths in relation to the testing process.

A clearly positive aspect of Scrum is that it is not hard to understand. Scrum is defined as a process framework: it does not prescribe an in-depth approach to every aspect of software development. Scrum describes a comprehensible workflow, and its own taxonomy (with concepts like the Sprint, the Increment and the Definition of Done) is limited. The essence of the Scrum approach could be defined as: transparent software development based on a just-in-time workload, the effort to ensure that all involved have a clear picture of the products that will be delivered, and the benefit of a team-based operation in which the group effort is more important than the individual success of a team member.

Because the scope of a development iteration (the Sprint) is limited (usually 2 weeks and at most 4 weeks), a large deviation between the mutual understanding of the work to be done and the work actually performed should, in theory, not come as too big a surprise. Along the path of the Sprint there are mechanisms in place (the Daily Scrum, the Sprint Review and the Sprint Retrospective) to reduce risks even further. As testing activities in general revolve around identifying and addressing potential risks, the Scrum approach is, from a testing perspective, a step in the right direction towards delivering quality products.

Yet besides these positive aspects, Scrum also introduces some trade-offs. Before diving into them, some perspective is needed on how to judge whether something should be considered a weakness or a strength when we look at the Scrum process and its intertwining with testing. Over the past decades the clearest counterpoint to the Scrum approach to software development has been the waterfall method. The waterfall method implies a strict sequential order of activities in the process: it is, for example, uncommon to start coding before the functional and technical requirements are completely set and approved by the stakeholders. An implication of this method is that bringing a product to market may take (too) long. By the time the analysis phase of a waterfall project has been carried out, internal or external (market) circumstances may have changed, and with them the requirements: new requirements could be added, old ones may no longer be applicable. In an ever more competitive business world, Agile-based methods arose as new, interesting ways to overcome the waterfall method’s shortcomings. The less than positive track record of waterfall-based projects being delivered on time also contributed to the rise in interest in Agile methodologies, and Scrum in particular.

Yet for testing purposes the waterfall method unmistakably has its advantages. The well-known V-model for testing illustrates this.

A test strategy can be plotted against the deliverables (or project phases) of a waterfall-based project. This gives the test manager the time and opportunity to carefully plan and carry out the test strategy; changes rarely come too abruptly, and when they do they can easily be absorbed into a refined or updated test strategy.

In the Scrum world things work differently. The Sprint is a short, cyclic iteration with a predefined outcome, usually defined as ‘working software’. This approach poses challenges for the testing process. First of all, the testing effort has a tendency to become inefficient compared to a waterfall-based approach. A test process needs to be in place within each Sprint, which leads to some inherent overhead (such as writing test scripts, setting up a test environment and inviting testers for the User Acceptance Test). It is still easier and less time-consuming to perform a test process once for a larger number of tasks than numerous times, once per Sprint, for a specific or small number of tasks. Furthermore, the members of the Development Team (usually 3 to 9 members) have no specific roles within the team and should be able to take on the tasks of another team member. So a team member who develops software should also be able to test the software. This can lead to tunnel vision (approving one’s own work) and a lack of segregation of duties.

The outcome of a Sprint does not necessarily have to be a piece of software that is implemented in the live environment. Preferably it is, but the Definition of Done can also state that the Scrum Development Team has no role in implementing the software in a live environment. This might be the case when that specific task is to be performed by the IT Operations department. From a test manager’s perspective this caveat needs attention, because it leads to unaddressed risks in the period between the Development Team’s delivery and the actual implementation in a live environment.

To overcome these potential troubles with the testing process, different ideas have been introduced to deal with the shortcomings. We will have a look at two of them: the Agile Testing Quadrants and Test-Driven Development.

Agile Testing Quadrants

Developed by Brian Marick, this model places different tests in different quadrants based on their orientation (technology-driven versus customer-driven, supporting the team versus critiquing the product). Roughly speaking, Quadrant 1 covers unit and component tests, Quadrant 2 functional and story tests, Quadrant 3 exploratory and user acceptance tests, and Quadrant 4 performance, load and security tests. An advantage of this visual representation is that the Development Team is aware of the tests that support the quality of their products.

Test automation is an important means of reducing the testing time that would otherwise be claimed from the Development Team or the stakeholders. We need to be aware, though, that test automation is not the ultimate cure for time-related issues. Tests have to be coded, so it takes time to complete them before test automation can support the products that emerge in Quadrant 2. Testware needs to be maintained as well: external factors, such as the organisation upgrading to a new version of a browser or a security leak being fixed, need to be taken into account when keeping the testware up to date.
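
To make this concrete, below is a minimal sketch of what such coded, maintainable testware can look like, using Python with pytest and the requests library; the URL and the expected page wording are hypothetical placeholders, not taken from any real project.

```python
# Minimal automated regression check (pytest + requests).
# BASE_URL and the expected wording are hypothetical examples.
import requests

BASE_URL = "https://shop.example.com"  # system under test (placeholder)


def test_homepage_is_reachable():
    # Basic smoke test: the site answers with HTTP 200.
    response = requests.get(BASE_URL, timeout=10)
    assert response.status_code == 200


def test_homepage_mentions_catalogue():
    # A change in wording or layout breaks this assertion --
    # exactly the kind of maintenance effort discussed above.
    response = requests.get(BASE_URL, timeout=10)
    assert "catalogue" in response.text.lower()
```

Running pytest executes both checks; every increment that touches the homepage may require these assertions to be updated, which is the maintenance effort described above.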

Quadrant 4 contains tests such as performance, load and security testing. These types of tests require special expertise that is not necessarily available by default within the Development Team. The same applies here as for test automation: the testware for performance, load and security tests needs to be produced initially and actively maintained afterwards in order to keep it relevant, or simply working, in future Sprint iterations and increments. With security tests you might even ask whether it is desirable to run these tests from within the team, given their delicate nature and from a segregation-of-duties perspective.
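
As an illustration of the specialised testware Quadrant 4 calls for, here is a minimal load-test sketch using Locust, a Python load-testing tool; the simulated user behaviour, endpoints and host are assumptions made for the example only.

```python
# locustfile.py -- minimal load-test sketch (endpoints are placeholders).
from locust import HttpUser, task, between


class ShopVisitor(HttpUser):
    # Each simulated user waits 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task(3)
    def browse_catalogue(self):
        # Weighted 3:1 against viewing the basket.
        self.client.get("/catalogue")

    @task(1)
    def view_basket(self):
        self.client.get("/basket")
```

It would be started with something like `locust -f locustfile.py --host https://shop.example.com`; like the regression checks above, this script has to be kept in step with every increment that changes the endpoints it exercises.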

Test-Driven Development

Test-driven development (TDD) has its origins in Extreme Programming, another Agile method. With TDD the developer starts his work by writing the actual test before writing the software code for the product to be delivered. Although much can be said about TDD that goes beyond the scope of this article, its clear advantage is that it addresses the issues raised in the previous paragraphs: the creation and maintenance of testware become an integral part of the Development Team’s work.
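
A minimal sketch of the TDD rhythm, assuming pytest and a hypothetical discount calculation (the function name and behaviour are illustrative, not taken from the article): the test is written first and fails, and only then is just enough code written to make it pass.

```python
# test_discount.py -- written FIRST; it fails while discount.py does not yet exist.
import pytest

from discount import apply_discount


def test_ten_percent_discount():
    assert apply_discount(price=100.0, percentage=10) == pytest.approx(90.0)


def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(price=50.0, percentage=0) == pytest.approx(50.0)
```

```python
# discount.py -- written AFTERWARDS, just enough to make the tests pass.
def apply_discount(price: float, percentage: float) -> float:
    """Return the price after subtracting the given percentage."""
    return price * (1 - percentage / 100)
```

In a strict red-green-refactor cycle the test file is run (and fails) before discount.py exists; only then is the implementation added and the suite rerun until it passes.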

Conclusion
Scrum needs an organisational change of mentality to achieve the best results. If software development alone is isolated and uses Scrum, while other parts of the organisation (Marketing, Finance, Purchasing, HR) do not participate, Scrum will be less relevant or may even fail to work. But even within the IT department itself, integrating the Development Team with the Operations organisation becomes unavoidable and mandatory in order to tackle the possible gap between the Scrum product according to the Definition of Done and the ‘real’ live implementation date.

Test automation is necessary to keep control over the time aspects of testing. But as test automation needs its own level of maintenance, it is certainly not the answer to all the challenges that the Development Team faces. Especially the tests in Quadrant 4 (performance, load, security) require special expert knowledge that will not always be present by default within the Development Team. If it is not, it will need to be planned in with the required resources, which may lead to further planning or product-related issues that are difficult to reconcile with the short, cyclic iteration of the Sprints.

The future of the test manager in IT projects

Now that 2016 is coming to an end, I would like to share some observations I made in the past year concerning testing in IT projects. At first glance these observations seem disconnected, but I came to realise that they are harbingers of general developments in the IT world and will also have their impact on IT testing and the role of the test manager. Connecting the observations also shed some light for me on how a test manager can stay afloat in a rapidly changing IT landscape.

The first observation was that during meetings participants very often mentioned that they were busy testing; measured by the number of times they mentioned it, these testing activities seemed to take a considerable amount of their time and effort and could be considered an integral part of their daily job. Yet these testing activities were not (always) part of their job description or project role. Often they were not mentioned or recognised in a test plan, let alone known in advance to the test manager.

The second observation was that a lot of the focus in IT testing is geared towards aligning testing activities with developments around continuous delivery and iterative development. The result of these developments is that more and more energy within companies goes into working ‘agile’, and consequently test automation has become a hot and interesting subject, partly because a lot of the testing work now lands directly in the hands of the developers.

The test manager should have, or develop, a fine radar for picking up signals that project members are running their own tests. Not in a negative manner, as if the test manager missed something in the planning phase, but from the understanding that testing is a very natural part of how humans make progress and how the human mind works. Testing is a basic human quality that enables us to move forward with IT projects, and the test manager will never be able (nor needs) to catch all testing activities during the IT project, or in advance during the planning phase.

The tester, on the other hand, might not always be completely aware of the significance of the testing activities. He or she may think that the result of the test is not essential for the progress of the project (other stakeholders might hold a different opinion). There are two further effects the test manager needs to be aware of. First, if the test was performed to see whether the idea (read: service, product, piece of code) works and the result is negative, the tester might be reluctant to communicate the failure. It becomes personal, and therefore not an experience to share broadly or be proud of.

Second, if the test result was positive, the tester may boast about it, while the test might have little relevance for the final outcome of the IT project. This is where the importance of the test manager comes in. He is one of the few project members with the objectivity needed to evaluate these ‘under the radar’ tests within the project and to determine the attention they require, as well as the level and range at which the results should be communicated.

Gone are the days when the test manager could write his impressive test plan (which only a few would read from beginning to end), and gone are the days when the test manager could claim time slots to execute, with his team, the carefully designed and well-prepared test cases.

So what is the connection between the first and the second observation? With the current developments in IT projects, the test manager will no longer be able to completely influence or comprehend all test activities within an IT project before the project starts. For a test manager to be successful within a project, he will have to develop a sensitivity to the signals about ongoing and intended test activities.

The many implicit tests within IT projects appeal to elementary human qualities that are essential for progress in general and, in our case, for progress within the IT project. They stimulate the participation of the tester, yet there needs to be an independent party who can eventually judge, evaluate and report on these tests. The outcome of this process determines how the test manager can support the testers (with the necessary facilities and resources) and how information about the test results should be distributed among the stakeholders. These will be important capabilities for the test manager to survive in an IT industry dominated by rapid changes in methodologies and views concerning the delivery of IT projects. He can rely less and less on his own plans and methods, and will instead have to bring the information from test results within the projects to the surface.