Compatibility testing: a process whose complexity depends on the needs of the project
Compatibility tests can mean different things depending on the organization or context, but if we had to define them in a single sentence, it would be this one:
Those tests whose primary objective is to ensure that the application or product displays and works as specified across as many combinations of hardware and software (operating system, browser, etc.) as possible, so as to successfully reach the maximum number of users, guaranteeing its functionality and visual presentation regardless of the execution context.
In the world of testing, and depending on the project at hand, the most common compatibility tests focus on the following four areas:
Hardware

Think, for example, of games (mobile, PC/Mac and even web): we must ensure that our product works on at least a set of target platforms (in terms of supported processors, screen size and resolution, amount of memory, and graphics chip capabilities), which we must test properly.
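As a minimal sketch (the platform names and spec thresholds below are invented for illustration), a list of target platforms can be checked against a minimum hardware specification like this:

```python
# Illustrative minimum spec for a hypothetical game; the values are made up.
MIN_SPEC = {"ram_gb": 4, "width": 1280, "height": 720}

def meets_min_spec(device: dict) -> bool:
    """True if the device reaches every threshold in MIN_SPEC."""
    return all(device.get(key, 0) >= value for key, value in MIN_SPEC.items())

# Hypothetical target platforms to consider for the test plan.
targets = [
    {"name": "Budget phone", "ram_gb": 2, "width": 1280, "height": 720},
    {"name": "Mid-range phone", "ram_gb": 6, "width": 2400, "height": 1080},
    {"name": "Gaming PC", "ram_gb": 16, "width": 1920, "height": 1080},
]

for device in targets:
    verdict = "in scope" if meets_min_spec(device) else "below minimum spec"
    print(f"{device['name']}: {verdict}")
```

Devices below the minimum spec are excluded up front, so the testing effort concentrates on platforms the product actually claims to support.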
Operating system

In this case, we focus mainly on ensuring the compatibility of our product with previous versions of the operating system (backward compatibility) as well as current ones, once again using the target platforms defined for the tests.
Browser

If the application under test is a web application, we consult the latest usage statistics available on the internet for the most widely used browsers on the market. With the four most common (Chrome, Firefox, Safari and Edge) we can cover more than 90% of the potential users of our web applications.
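To make the coverage claim concrete, here is a quick calculation with illustrative (not current) market-share figures; in practice they would be replaced with the statistics actually consulted:

```python
# Illustrative browser market shares in percent; NOT real, current statistics.
browser_share = {
    "Chrome": 65.0,
    "Safari": 18.0,
    "Edge": 5.0,
    "Firefox": 3.0,
    "Opera": 2.5,
    "Other": 6.5,
}

targets = ["Chrome", "Firefox", "Safari", "Edge"]
coverage = sum(browser_share[b] for b in targets)
print(f"Testing on {', '.join(targets)} covers {coverage:.1f}% of users")
```

With these sample figures, the four target browsers add up to 91% of users, which is exactly the kind of check that justifies the browser list chosen for the test plan.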
However, we must also consider that each of these browsers has multiple versions, and visual or functional defects can appear because of how each browser (and version) interprets the HTML and CSS code, for example. These tests are known as Cross Browser Testing. There are also tools on the market that, given a set of browsers, automatically compare the screenshots obtained using an image-comparison algorithm, to guarantee that each page is built and displayed consistently in all browsers.
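Commercial tools use more sophisticated perceptual algorithms, but the core idea behind image-based cross-browser validation can be sketched with a naive pixel-mismatch ratio (the tiny 2x2 "screenshots" below are invented for the example):

```python
def pixel_mismatch_ratio(img_a, img_b):
    """Fraction of pixels that differ between two same-sized screenshots,
    each represented as a list of rows of (R, G, B) tuples."""
    total = len(img_a) * len(img_a[0])
    mismatches = sum(
        1
        for row_a, row_b in zip(img_a, img_b)
        for px_a, px_b in zip(row_a, row_b)
        if px_a != px_b
    )
    return mismatches / total

# The same page captured in two browsers (invented 2x2 data):
# one pixel renders with a slightly different green value.
render_a = [[(255, 255, 255), (0, 0, 0)], [(255, 0, 0), (0, 255, 0)]]
render_b = [[(255, 255, 255), (0, 0, 0)], [(255, 0, 0), (0, 250, 0)]]

ratio = pixel_mismatch_ratio(render_a, render_b)
print(f"{ratio:.0%} of pixels differ")
# A real tool would flag the page once the ratio exceeds some tolerance.
```

In production tools the comparison is tolerant of anti-aliasing and dynamic content; the principle of "capture, compare, flag differences" is the same.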
Mobile devices

With the rise of smartphones and native apps, and considering the wide range of Android devices, these tests are now mandatory in any mobile development project. It could hardly be otherwise: given the rating systems in both platforms' stores (Google Play on Android and the App Store on iOS), and how easy it is to uninstall an app or switch to a competitor, ensuring compatibility must be a key element of the product's testing process.
When the defined QA strategy prescribes these tests, the first step is a detailed analysis of our users: which devices is the app most commonly installed on, and which platform versions do they run? The objective of this analysis is to filter and focus the testing effort, seeking the highest return on investment, since it is impossible to cover every possible combination across all existing devices. We must try to cover the highest possible percentage of users with the smallest set of devices. It is also very useful to cross-reference this data with the functionalities we most need to guarantee (for example, users with an iPhone 12 make more in-app purchases than those with an iPhone 8) in order to prioritize some devices over others if necessary.
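The idea of "the smallest set of devices for the largest share of users" can be sketched as a simple greedy selection over analytics data (the device names and shares below are hypothetical):

```python
# Hypothetical share of our user base per device model, taken from analytics.
device_share = {
    "iPhone 12": 0.22,
    "Galaxy S21": 0.18,
    "Galaxy A12": 0.15,
    "Redmi Note 9": 0.12,
    "iPhone 8": 0.08,
    "Pixel 5": 0.07,
}

def pick_devices(shares, target_coverage):
    """Greedily pick devices by descending user share until the target is met."""
    chosen, covered = [], 0.0
    for device, share in sorted(shares.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= target_coverage:
            break
        chosen.append(device)
        covered += share
    return chosen, covered

devices, coverage = pick_devices(device_share, 0.60)
print(f"{len(devices)} devices cover {coverage:.0%} of users: {devices}")
```

With these sample figures, four devices already cover about two thirds of the user base. Business priorities (such as which devices generate the most purchases) can then reorder or extend this list.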
The largest volume of compatibility tests is carried out on mobile devices
At present, we can say that the largest volume of compatibility tests is carried out on mobile devices. Given how frequently devices are updated and how quickly new models appear on the market, maintaining an in-house farm of physical devices is rarely the best decision in terms of cost, mainly because of:
- The need to periodically acquire new devices to keep up with the market and our end users, and the depreciation of the devices we already own. We must also bear in mind that the obsolescence of iOS and Android versions will eventually make those devices useless for this type of test.
- The time cost of maintenance (factory resets), manually adjusting the device's OS version in each test cycle, setting up a reservation system for testers, possible breakdowns, spare parts, etc.
- Location constraints, especially with the pandemic and remote work, where each team member works from a different place. Continuously shipping the phones around means lost usage time as well as the cost of the shipments themselves.
For all these reasons, the most common and practical recommendation is to use a provider of physical mobile devices in the cloud, such as SauceLabs, a reference partner of our firm and a leading provider of this type of solution. With these licensed tools, we can test on almost any device, with any version of its operating system, simply by uploading the application (APK or IPA) to the cloud, selecting the device we want to test on, and connecting to it remotely to run the test. We can also perform cross-browser testing (both desktop and mobile) in the same transparent way, with the same advantages as for mobile devices (instant access to a machine with the operating system and browser we want to validate). Finally, these services offer a set of very practical testing utilities, such as screen capture, video recording, location simulation, use of the device's camera, etc.
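As an illustration of what "upload the app and select a device" looks like from an automated test, here is a sketch of a W3C/Appium-style capability set. The capability names follow Appium conventions, but treat the concrete values, the app reference and the endpoint as placeholders to be taken from your provider's documentation:

```python
def remote_capabilities(platform, device, os_version, app_ref):
    """Build a W3C/Appium-style capability set for a remote cloud-device session.
    The values used here are illustrative, not provider-specific."""
    return {
        "platformName": platform,
        "appium:deviceName": device,
        "appium:platformVersion": os_version,
        "appium:app": app_ref,  # reference to the uploaded APK/IPA
    }

caps = remote_capabilities("Android", "Samsung Galaxy S21", "12", "my-app.apk")
print(caps["platformName"], "-", caps["appium:deviceName"])

# A real session would then be opened against the provider's remote endpoint,
# e.g. with the Appium Python client (not executed here):
# driver = webdriver.Remote(command_executor=PROVIDER_HUB_URL, options=options)
```

Because the capability set is just data, the same test script can be re-run against any device in the farm by swapping these values.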
Finally, to plan, execute, monitor and report on compatibility tests, there are many solutions on the market, such as Xray (a Jira plugin), which supports the concept of a test environment: a single test is defined and then executed in N "environments". A test environment can be the operating system or platform, the browser, the device model, etc., and these elements can be combined with each other. Some benefits of using test environments are:
- No duplicated tests, since the same test is executed in different environments.
- Visibility of the latest status of the tests in each environment.
- Traceability of the coverage per environment, and of the overall coverage considering the results from every environment.
- Report generation, including traceability, per environment or globally.
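The environment concept can be illustrated with a small matrix: one test definition, many (browser, operating system) executions. The test names and combinations below are invented for the example:

```python
from itertools import product

tests = ["Login", "Checkout"]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
systems = ["Windows 11", "macOS 14"]

# Build the environment list, skipping combinations that cannot exist.
environments = [
    f"{browser} / {system}"
    for browser, system in product(browsers, systems)
    if not (browser == "Safari" and system.startswith("Windows"))
]

# One status slot per (test, environment): each test is defined once,
# but executed and reported once per environment.
results = {(test, env): "TODO" for test in tests for env in environments}
results[("Login", "Chrome / Windows 11")] = "PASS"

executed = sum(1 for status in results.values() if status != "TODO")
print(f"{len(tests)} tests x {len(environments)} environments "
      f"= {len(results)} executions ({executed} done)")
```

This is exactly what the tool tracks for you: two test definitions expand into fourteen tracked executions, each with its own latest status and its own contribution to coverage.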
In the end, something that may seem simple at first glance, such as validating that an application works correctly on multiple systems or devices, can become very complicated depending on the needs of the project. Trust our QA services and our experience with SauceLabs solutions to perform your compatibility tests. Whether manual or automated, on browsers or on mobile devices, try Izertis.