Mobile banking users will double and reach a quarter of the world's population by 2019, says a KPMG report. Gartner expects 50 percent of consumers in mature markets to use smartphones or wearables for mobile payments by 2018. With the emergence of web and mobile banking services, a large percentage of everyday transactions has already shifted away from older channels such as bank branches and ATMs.
Banks today understand the need to create a consistent cross-channel experience in order to hold their customers' attention. While developing applications that deliver strong performance and usability is key, growing customer demand for mobile applications brings a corresponding need for robust mobile testing.
In most testing scenarios for banking applications, QA teams are inclined to focus on app security. This is a bigger concern in retail banking apps, which operate on a public platform where users are more susceptible to cyber-theft. However, beyond ensuring security with standard app testing methods (functional, security, and performance testing), there are four critical areas that even experienced testers tend to overlook.
Generating domain specific test data
A customer's personal information, such as her phone number, address, date of birth, or bank account details, is confidential data that can easily be misused if exposed in the public domain. Therefore, most countries have regulatory requirements to protect personal data. However, this often makes production-like data unavailable, posing a challenge for testers who may not have access to the right kind of test data.
A good case for generating the right test data can be found in retail banking apps, which warrant testing across devices with different screen resolutions. For example, one customer might have a long name and $1 billion ($1,000,000,000) in her account, while another might have a balance of only a few thousand dollars. In both cases, testing the app with production-like data is critical to ensure that the screen layout stays intact across different data sets.
This problem can be handled with an approach that combines data masking and synthetic data creation. Through data masking, account-related information is masked by automated scripts, and the masked data is then inserted into a cut-down copy of the database for testing purposes. Similarly, synthetic test data is created in the test environment once the business flow is understood end to end.
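As an illustrative sketch (the function names and record fields here are hypothetical, not taken from any particular banking system), a masking script might deterministically scramble account numbers while preserving their length and last four digits, and a synthetic-data generator might produce production-like customers covering edge cases such as very long names and billion-dollar balances:

```python
import hashlib
import random

def mask_account_number(account_number: str) -> str:
    """Replace all but the last four digits with a deterministic fake prefix.

    The prefix is derived from a hash, so the same real account always maps
    to the same masked value, preserving referential integrity across tables.
    """
    digest = hashlib.sha256(account_number.encode()).hexdigest()
    fake_prefix = "".join(str(int(c, 16) % 10)
                          for c in digest[:len(account_number) - 4])
    return fake_prefix + account_number[-4:]

def synthetic_customer(seed: int) -> dict:
    """Generate a production-like customer record from scratch (synthetic data)."""
    rng = random.Random(seed)  # seeded, so test data is reproducible
    first = rng.choice(["Priya", "Alexandra", "Bartholomew"])
    last = rng.choice(["Ng", "Subramaniam", "Featherstonehaugh"])
    return {
        "name": f"{first} {last}",
        # Include layout edge cases: small balances and a $1B account
        "balance_cents": rng.choice([125_00, 4_999_99, 1_000_000_000_00]),
    }
```

Deterministic masking matters in practice: if the same account appears in several tables of the cut-down database, every occurrence is masked to the same value, so joins still work.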
Highlighting differences clearly in test plans
Test plans should clearly outline the functional, data, or technical variances between production and test environments so that the associated risks are mitigated well in advance. In most banking applications, data transfer (file transfer) across different systems in production is automated through FTP, whereas in most test environments this file transfer is done manually. Let's say the monthly statement for your bank account is transferred automatically in production to a data warehouse (DWH) to maintain the transaction history, but is transferred manually in the test environment (i.e., testers put the file in a designated directory and the DWH consumes it). The automated transfer in production warrants testing of scenarios such as a delay in transfer, the same file being transferred more than once, or a new file overriding an existing one. In mobile apps, a delay in transfer might lead to significant customer dissatisfaction, as smartphone users expect a quick response. Such scenarios need to be taken into account, and the risks should be handled or highlighted clearly in test plans.
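As one example, the duplicate-transfer scenario can be exercised in the manual test environment by making the consuming side idempotent and asserting that a repeated drop of the same statement file is ignored. This is a minimal sketch; the directory layout and function name are assumptions, not part of any specific DWH:

```python
import hashlib
from pathlib import Path

def consume_statement_files(inbox: Path, seen_hashes: set) -> list:
    """Load each statement file from the drop directory exactly once.

    Files are identified by a content hash, so the same file transferred
    twice (a scenario that can occur with automated FTP in production)
    is silently skipped on the second pass.
    """
    loaded = []
    for path in sorted(inbox.glob("*.csv")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen_hashes:
            continue  # duplicate transfer of the same statement: ignore
        seen_hashes.add(digest)
        loaded.append(path.name)
    return loaded
```

A tester can then drop the same file twice and verify that only the first drop is consumed, mirroring the production behaviour the test plan should call out.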
Handling high volumes
Another testing challenge with retail banking apps is handling the high volume of data that banks maintain. Even a small operation can generate substantial data. For instance, a single action by an online user may result in capturing the login date, time, user location, and the entire sequence of steps she performed. Banks may need to maintain sufficient resources to store such details, which can help them prevent and solve cyber-crimes; in fact, data capture is often an auditing requirement and part of regulatory compliance. However, mobile devices have limited hardware resources, such as RAM and processor speed. It is critical to test device performance with high volumes of data to ensure that the end user's experience remains pleasant while using the app. End users will not keep using an app with compromised performance even if the underlying functionality works as expected.
In this case, the test approach must advocate creating distinct data sets for every interface, thereby isolating the key areas of impact for a particular feature across interfaces. Separate test data sets should also be created for each environment, because the back-end database available in one environment may hold a different cut-down version of production data than another. While doing so, device performance should be measured with tools such as Little Eye or Xcode.
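A simple way to exercise the app under volume is to synthesize a large, reproducible audit trail of the kind described above (login time, location, and the sequence of steps performed). The record fields below are illustrative assumptions:

```python
import random

def generate_audit_log(n_events: int, seed: int = 0) -> list:
    """Synthesise a high-volume audit trail for performance testing.

    Each record mimics what a bank might capture per user action:
    who, when, where, and which sequence of steps was performed.
    The seed makes the data set reproducible across test runs.
    """
    rng = random.Random(seed)
    steps = ["login", "view_balance", "transfer", "pay_bill", "logout"]
    return [
        {
            "user_id": rng.randrange(1, 10_000),
            "epoch_s": 1_500_000_000 + rng.randrange(0, 86_400),
            "location": rng.choice(["NYC", "SFO", "LHR"]),
            # each session performs between 2 and 5 distinct steps
            "steps": rng.sample(steps, k=rng.randrange(2, len(steps) + 1)),
        }
        for _ in range(n_events)
    ]
```

Loading such a data set into the test back end before profiling the app on-device (with Little Eye, Xcode, or similar) lets the team observe memory and responsiveness under realistic volume rather than near-empty tables.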
Understanding usage patterns
High fragmentation of devices and platforms is another challenge that QA teams normally face. Attempting to test all combinations is often a costly and futile exercise on mobile, as new combinations are introduced regularly. Testing across platforms such as iOS, Android, and Windows, covering operating system versions like iOS 5.x, 6.x, and 7.x or Android 4.4 and 5.0, and a range of devices from manufacturers such as Nokia, Samsung, HTC, or Apple is virtually impossible.
Instead of taking on the monumental task of testing every combination, adopt a selective approach. Test only those platforms and devices that have the highest penetration in a given geography. For instance, in the US and Canada the focus may be on high-end iOS and Android phones, while for Brazil and other Latin American countries the application may be tested on low-end Android phones. For optimal coverage, gain similar insights by mining data from tools such as Google Analytics or Dynatrace. Get inputs from your market research teams and learn what users are actually doing with the application. For instance, 70-80% of customers often access a retail banking app only to check their account balance and make small transactions such as bill payments or money transfers. Understand the customer's mindset and the app usage patterns, and prioritize testing efforts accordingly.
Finally, app testing teams need to be well informed and follow a strategic, technically creative direction. When working on specialized domains like mobile banking applications, going the extra mile beyond standard testing practices is an essential part of a tester's learning and growth.