How to Fix Dysfunctional RPA User Acceptance Testing

This article was written when I was involved in the RPA industry as an IT manager. However, driving efficiency with RPA is something that I no longer undertake on a full-time basis. The full context for this message is detailed on the following page.

Traditional User Acceptance Testing (UAT) requires that the end-user, or a representative as close to the user as possible, completes testing to make sure that software meets requirements.

Testing for Robotic Process Automation (RPA) is different since what’s being tested is a robot’s execution of tasks and not the experience of an actual user.

With an attended robot, testing is more approachable because a user can start the bot and watch it run to completion on their machine. The user can also pause the robot, feed it more scenarios, and restart it more easily than with an unattended or hybrid bot.

Testing is still possible with bots that are not attended, but it requires additional consideration regarding environments to be used by the automation, data staging, and entry/exit criteria.

This article dives into the various types of RPA errors and ways to identify them before automated processes move to production. Although it is not comprehensive, it should serve as a good starting point for additional research and action on your part.

Simply relying on UAT won’t address errors that may be lurking under the surface due to gaps in requirements, programming, or bot design.

Types of Software Development Errors

The author of The Art of Agile Development categorizes software development defects as follows (James Shore: Alternatives to Acceptance Testing, 2010):

Programmer errors — those that are caused by programmers as they turn requirements into code.

Design errors — errors that are caused by a design gap or emerge because of growing complexity that isn't accounted for in the design.

Requirements errors — those that are caused by improperly documented and approved requirements (at the business level).

Systemic errors — those that are caused by the environment of software implementation and the processes used to deliver working software.

We’ll refer to these categories as we define the way RPA errors can be mitigated or removed.

Minimizing RPA programmer errors

Prevent programmer errors through the completion of unit tests, integration tests, and end-to-end integration tests.

An RPA unit test involves testing something in the smallest 'unit' possible, such as opening Outlook or Excel, or loading the last fifty emails into memory for processing. James Shore further defines a unit test as one that doesn't cross process boundaries and doesn't involve input/output.
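To make that concrete, here is a minimal sketch in Python of what such a test looks like. The names (`load_recent_emails`, `FakeMailClient`) are hypothetical; in a UiPath project the same idea would be expressed as a workflow test case, but the principle is identical: no process boundary is crossed, because the mail client is replaced with an in-memory fake.

```python
# Illustrative unit test: loading the last 50 emails into memory.
# No process boundary is crossed; the real mail client is replaced by a fake.

def load_recent_emails(client, count=50):
    """Return the most recent `count` emails from an already-open client."""
    return client.fetch_all()[-count:]

class FakeMailClient:
    """Stands in for Outlook so the test stays inside one process."""
    def __init__(self, emails):
        self._emails = emails

    def fetch_all(self):
        return list(self._emails)

def test_load_recent_emails_caps_at_fifty():
    client = FakeMailClient([f"email-{i}" for i in range(200)])
    recent = load_recent_emails(client)
    assert len(recent) == 50
    assert recent[-1] == "email-199"

test_load_recent_emails_caps_at_fifty()
```

Because the fake lives in the same process as the test, this runs in milliseconds and can be executed on every change.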


An example integration test would be confirming that emails from memory are properly saved into Excel (this crosses a process boundary and inputs information into a new application).

An integration test can usually be completed with 'dummy' data if that makes the development process easier and there's no inherent risk in doing so. Some development workflows are set up to test even the smallest change in a 'development environment', with changes promoted to higher stages (such as user acceptance testing) upon successful test results.

End-to-end testing would involve testing the entire bot, from the beginning to the end of the process. Such testing uses realistic data in a development or UAT environment. With end-to-end testing, feeding the bot ‘test’ values directly is discouraged except for when there's no better alternative.

When focusing on minimizing RPA programmer errors, we want to start by adopting test-driven development (TDD). TDD is a recommendation from James Shore as well, but it's also a leading development practice that improves quality and cycle time (Wikipedia contributors, 2020).

TDD accomplishes high quality by recommending the following:

  1. First, writing a unit test that expresses a requirement (it fails until the code exists)
  2. Second, writing the code to meet the requirement
  3. Validating the implementation by re-running the unit test from step one
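The three-step cycle above can be sketched in a few lines of Python. This is an illustration only, with a made-up `normalize_invoice_id` function standing in for whatever small piece of logic your bot needs; the point is the order of operations, not the function itself.

```python
# TDD in miniature: the test below was written first (step 1) and failed,
# then normalize_invoice_id was implemented (step 2) until the test passed.

def test_normalize_invoice_id():
    assert normalize_invoice_id(" inv-0042 ") == "INV-0042"
    assert normalize_invoice_id("INV-7") == "INV-7"

def normalize_invoice_id(raw: str) -> str:
    """Strip whitespace and upper-case an invoice ID."""
    return raw.strip().upper()

test_normalize_invoice_id()  # step 3: validate with the test from step 1
```

Writing the test first forces the requirement (" IDs are trimmed and upper-cased ") to be pinned down before any code exists.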

Writing tests first saves time that would otherwise go toward developing code that isn't required, and it keeps development simple while improving developer morale.

Unit tests can be written through custom programming in UiPath, but there are also testing frameworks which come ready-made for this purpose including UiPath's Test Manager.

RPA consultants often bring their own unit testing frameworks as well, but it's important to note that testing is still an evolving capability in this space. 'Automated automation tests' present their own challenges and don't remove the need for pair programming.

After unit tests pass, and pair programming reviews are complete, integration tests are run. At this stage you might validate test cases such as:

  1. Is the bot capable of connecting to the database?
  2. Can the automation access the necessary APIs and can it run transactions that update the database as required?
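Those two test cases can be sketched as follows. This example uses Python's built-in SQLite driver and an in-memory database so it is self-contained; in a real project the same checks would target the automation's staging database, and the `orders` table and `run_update` helper are assumptions for illustration.

```python
import sqlite3

# Integration-style checks: can we connect, and does the bot's transaction
# actually update the database as required?

def can_connect(db_path: str) -> bool:
    """Test case 1: confirm a connection to the database can be opened."""
    try:
        sqlite3.connect(db_path).close()
        return True
    except sqlite3.Error:
        return False

def run_update(conn, order_id: int, status: str) -> int:
    """Test case 2: run the transaction the bot would perform and
    report how many rows were changed."""
    with conn:  # commits on success, rolls back on error
        cur = conn.execute(
            "UPDATE orders SET status = ? WHERE id = ?", (status, order_id)
        )
    return cur.rowcount

# Stage a throwaway database, then exercise both test cases.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'new')")

assert can_connect(":memory:")
assert run_update(conn, 1, "processed") == 1
```

Checking the returned row count (rather than just the absence of an exception) catches the common failure mode where the transaction runs but matches zero rows.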

There are testing frameworks which can handle integration test cases, but it’s important to note that peer code reviews are still critical to quality at this stage.

Some developers choose to correct issues themselves after completing unit testing and running preliminary integration tests, saving peer review for last. While such an approach might work for seasoned developers, junior developers benefit from peer coding early in the delivery of an automation, as do senior developers when programming something completely new to them. If business users often discover issues that could have been identified in unit or integration tests, that's a sign the developers' process could be improved, and it is worth raising for discussion with the right parties.

End-to-end testing is completed to verify all aspects of a bot's operation, including automation of the user interface, the logic used, exception handling, database and API connections, and more. Whereas business users are sometimes involved in integration testing, they're always present for end-to-end tests.

Eliminating RPA design errors

RPA is an abstraction of programs (automated at the user interface layer) and is often developed using a drag-and-drop user interface. Because of this light approach, it's fitting that design remains simple to build, test, and expand upon.

James Shore offers various resources on simple design.

Design errors often happen when design decisions are delegated to entry- or mid-level RPA developers. Those decisions ought to be made with the necessary business stakeholders and solution architects instead. If a solution architect isn't available, a senior developer may make such decisions with the necessary stakeholders.


Senior developers can usually handle the design of automations that save a few hours' worth of transactions a day and run on one machine. For larger projects that require the orchestration of multiple unattended bots, or dynamic machine provisioning for scaled processes, a solution architect is a must.

Additional resources from general software development practice can also be applied in the context of RPA.

Minimizing RPA requirements errors

Requirements are best documented by working with multiple teams (not just the subject matter experts on one team) so that all levels of requirements are factored in:

  1. Customers
  2. Information Security and Compliance
  3. Management
  4. Employees
  5. Leadership
  6. Business partners

For companies starting out with automation, it's likely that some stakeholders, such as InfoSec and Compliance, will be more present in the first automation projects. After that, documentation should be established that defines how to address future projects within the necessary guardrails.

InfoSec and Compliance would still review proposed automation projects to ensure there are no inherent risks before development starts, but chances are that they'll become more hands-off as time goes on and the work becomes more routine.

Business analysis is a well-documented field and a critical component of successful RPA implementations. Extensive resources such as the Business Analysis (third edition) book by Debra Paul, James Cadle, and Donald Yeates go into more detail on the matter.

Don't delegate RPA business analysis to RPA developers.

Doing so blurs the line between those responsible for understanding the business (and communicating its requirements) and those who deliver working automations with RPA technology.

Combining developers with business analysts will create a nightmare fueled by contradictory intents.

RPA is a type of software development that can be just as complex as traditional development, and it carries the same consequences as poorly written applications. A lower barrier to entry doesn't mean business analysis best practices can be thrown out. After all, if an RPA developer misses a requirement in their analysis and doesn't develop for it, they could well miss it in integration tests too (since they created the test cases), and by the time the end-user notices the problem it will take that much more effort to resolve and re-test.

Eliminating RPA systemic errors

If the previous types of errors were properly accounted for, you'd end up with virtually no systemic errors. Errors that occur by this point indicate that something is wrong with the processes used to deliver and test software.

Additional testing can be undertaken in the form of exploratory testing, in which testers use experience to judge where bugs may exist and create cases to catch them. James Shore recommends introducing exploratory testing only when it's clear that bugs have escaped all other processes and additional oversight is required (and then dialing it back as necessary).

Whether errors are discovered before production or after, a root cause analysis should take place to determine what breakdown allowed the problem to manifest. Errors should also be prioritized for correction according to their severity and frequency.
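One simple way to express that prioritization is to rank defects by severity first and frequency second. The severity scale and sample defects below are assumptions for illustration, not a prescribed taxonomy.

```python
# Rank discovered defects: higher severity first, then higher frequency.
# The severity weights and sample data are illustrative assumptions.

SEVERITY = {"critical": 3, "major": 2, "minor": 1}

defects = [
    {"id": "D-1", "severity": "minor", "occurrences": 40},
    {"id": "D-2", "severity": "critical", "occurrences": 2},
    {"id": "D-3", "severity": "major", "occurrences": 15},
]

ranked = sorted(
    defects,
    key=lambda d: (SEVERITY[d["severity"]], d["occurrences"]),
    reverse=True,
)
print([d["id"] for d in ranked])  # critical first, regardless of frequency
```

A rare critical defect outranks a frequent minor one under this scheme; teams that prefer a blended score could multiply the two factors instead.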

Footnote:

The term 'Robotic Process Automation' is used here due to its popularity in the industry. 'Process automation' is a clearer term that will ideally replace 'RPA.'

References:

James Shore: Alternatives to Acceptance Testing. (2010, February 28). http://www.jamesshore.com/v2/blog/2010/alternatives-to-acceptance-testing

Wikipedia contributors. (2020, February 7). Test-driven development. Wikipedia. https://en.wikipedia.org/wiki/Test-driven_development