
Automation in mobile QA testing


Developing a game takes a lot of time and effort. Finding and fixing errors before release is one of the most crucial stages of the whole process, and the bigger the project, the more people are usually involved in testing. Even the simplest games require a proper and thorough examination by QA specialists. To keep project maintenance at a high level, parts of the process are automated: this increases testing speed and reduces the influence of the human factor.

Automated testing is done with the help of dedicated tools such as Selenoid and Appium (although such frameworks are rarely used in games).

However, the chances of successful automation depend primarily on the genre. Besides, automation doesn't cover all stages: for example, while analytical checks can be automated, the visual aspect and gameplay are still tested manually (or are they really? We'll get back to this later). Below, we'll look at the kinds of automated testing we use most often.

QA automation in AppQuantum

AppQuantum usually resorts to this kind of automation. Say, testers need to verify that the integrations function correctly: whether the analytical events are collected properly and whether they match the description of their internal parameters. Because there are a lot of such events, checking this information manually after each test required too much time and effort.

To speed up the process and avoid errors caused by the human factor, our specialists created a special script. It is run manually during the game and collects the analytical events from the game's database, where the event's benchmark is stored. "Event's benchmark" is our internal term for the way the event should be represented in the database.

After the test is complete, a specialist ends up with two JSON files: the reference sample and the one built from the test results. Then we examine whether the events were collected correctly. We do not check their actual contents (for example, which specific user provided the data), as that is of minor importance to us; when we do need this information, we still check it manually. Sometimes it can be hard for a tester to tell a test payment from a real one, so the analytics may be inaccurate. However, tracking test payments is a problem the developers solve on their side.
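As an illustration, here is a minimal sketch of such a comparison in Python, assuming both files map event names to their parameter sets. The real script and our benchmark format are internal, so the structure below is hypothetical:

```python
import json

def compare_events(benchmark_path: str, results_path: str) -> None:
    """Compare collected analytical events against the reference sample."""
    with open(benchmark_path) as f:
        benchmark = json.load(f)   # reference: {event_name: {param: description, ...}}
    with open(results_path) as f:
        collected = json.load(f)   # test run:  {event_name: {param: value, ...}}

    missing = set(benchmark) - set(collected)
    unexpected = set(collected) - set(benchmark)

    for name in sorted(set(benchmark) & set(collected)):
        # Check that the event carries exactly the parameters the benchmark
        # describes; parameter *values* (e.g. which user sent them) are
        # deliberately ignored, just as in the article.
        expected_params = set(benchmark[name])
        actual_params = set(collected[name])
        if expected_params != actual_params:
            print(f"{name}: parameter mismatch "
                  f"(missing {expected_params - actual_params}, "
                  f"extra {actual_params - expected_params})")

    print("Missing events:", sorted(missing) or "none")
    print("Unexpected events:", sorted(unexpected) or "none")

compare_events("benchmark.json", "test_results.json")
```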

As a result, this script has significantly reduced the time required to check analytical events while making the check more precise. However, even though writing the script took just one week, maintaining the autotest requires considerable time and additional expenses.

All project integrations a developer needs in order to work with AppQuantum are included in a so-called "dev-list". Whenever it changes, the script must be modified immediately; otherwise, it won't function properly.

Tip: small projects are easier to test manually. Automated testing is likely to take longer and cost more. Automation does not always equal speed and profit.

Each time we receive a new project, we must make sure our internal analytics are integrated correctly. This is critically important, as it directly affects how the game functions.

Of course, our portfolio also includes regular game tests where we use automation.

For example, not long ago we ran an experiment with AirTest, a Python framework. How does it work? With its help, we script the first game session, which necessarily includes completing the tutorial and purchasing all of the project's in-apps. Automation cuts the time of the regression test and lets us quickly check the state of all essential elements of the game: the tutorial with all its steps, in-app purchases, and ad monetization points. Right now we save 15-20 minutes on average on each iteration of regression testing. The fully operational automation was created in a week and a half: assembling the autotest took only 10 minutes, but we spent the next 10 days making it work right :)
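A first-session script of this kind might look roughly like the sketch below, built on AirTest's image-based API. The package name and the template screenshots are placeholders, not our actual project assets:

```python
from airtest.core.api import (auto_setup, start_app, touch, wait,
                              assert_exists, Template)

# Connect to the first available Android device or emulator.
auto_setup(__file__, devices=["Android:///"])

start_app("com.example.game")  # placeholder package name

# Walk through the tutorial by tapping each prompt it highlights.
for step in ["tpl_tutorial_step1.png", "tpl_tutorial_step2.png",
             "tpl_tutorial_done.png"]:
    touch(wait(Template(step), timeout=30))

# Buy every in-app product and confirm the purchase dialog each time.
for product in ["tpl_iap_small.png", "tpl_iap_medium.png", "tpl_iap_large.png"]:
    touch(wait(Template("tpl_shop_button.png")))
    touch(wait(Template(product)))
    touch(wait(Template("tpl_confirm_purchase.png")))

# Finally, make sure an ad monetization point is reachable.
assert_exists(Template("tpl_rewarded_ad_button.png"),
              "ad monetization point is present")
```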

Next, we plan to integrate AirTest into our device farm (or test farm; more on it a little later) to run tests on the whole pool of devices at once. This way, we can catch critical crashes in any element on every device. In theory, AirTest will also let us verify checksums and check the balance: for example, we purchase all in-apps and then compare the resulting amount of crystals across all devices.
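Conceptually, that balance check reduces to comparing one number across the whole pool. Here is a hypothetical sketch: read_crystal_balance stands in for whatever per-device query ends up being used, so nothing below is our actual implementation:

```python
def read_crystal_balance(device_id: str) -> int:
    """Placeholder: query the build on one device for its crystal balance
    after all in-apps have been purchased (e.g. via a debug endpoint)."""
    raise NotImplementedError

def check_balances(device_ids: list[str], expected: int) -> None:
    """Verify every device in the pool ended up with the same balance."""
    balances = {dev: read_crystal_balance(dev) for dev in device_ids}
    mismatched = {dev: bal for dev, bal in balances.items() if bal != expected}
    if mismatched:
        print("Balance mismatch on:", mismatched)
    else:
        print(f"All {len(device_ids)} devices report {expected} crystals.")
```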

The first test showed that integration with the device farm is feasible, so we will definitely keep working on that option and make our automation even cooler.

So, we can't avoid the elephant in the room any longer: what is a test farm? It is our own fleet of devices. With its help, QA specialists can quickly check how a build functions on various devices in search of errors. Sometimes, however, that is not enough, and a specialist has to figure out on their own what went wrong based on the data obtained. That is how we discovered that, after a security system (SS) update, the number of crashes during ad playback increased on Huawei P20 and P30 smartphones. A more detailed analysis later showed that the problem lay in a conflict between the SS update and a certain version of Unity. The wide variety of devices on our farm helped us catch the bug. But what does this have to do with automation?

The thing is, our fleet of devices has a certain trigger set up. Every time an ERROR indicator appears in the logs, the farm automatically takes a screenshot of the user's screen at that moment. The screenshot and the log are then placed together in a separate slot in the database. This way, we can determine at which point in the gameplay an error occurred and what actions the player took. The testers are currently working on integrating video recording and its transmission to the farm through the server without extra load on the device. This should speed up checks and make the analysis even more accurate.
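In spirit, the trigger behaves like the following simplified sketch: tail the device log and, whenever the ERROR marker appears, capture a screenshot and file both artifacts together. This is a stand-in using plain adb, not the farm's actual code (which writes into a database rather than local files):

```python
import subprocess
import time

def watch_device(device_id: str) -> None:
    """Tail logcat and capture a screenshot whenever the ERROR marker appears."""
    proc = subprocess.Popen(
        ["adb", "-s", device_id, "logcat"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        if "ERROR" in line:  # the marker the farm's trigger reacts to
            stamp = int(time.time())
            # Capture the screen at the moment the error was logged.
            with open(f"{device_id}_{stamp}.png", "wb") as screenshot:
                subprocess.run(
                    ["adb", "-s", device_id, "exec-out", "screencap", "-p"],
                    stdout=screenshot, check=True,
                )
            # Keep the log line next to the screenshot, mirroring the
            # "separate slot in the database" described above.
            with open(f"{device_id}_{stamp}.log", "w") as log_file:
                log_file.write(line)

watch_device("emulator-5554")  # placeholder device id
```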

We strive to minimise the number of steps required for testing, so we can dedicate more time to serious tasks where automation alone is not enough or cannot be applied correctly because of a specific genre. That is why we plan to implement analytics checking in the farm's interface.

First, a particular project is selected from the list, and its reference development list is loaded. Once the analytical events are collected, the check shows the results: which events came in and which are missing. Not only QA specialists but also business development managers can run this check. Such an approach to automation spares us from involving QA specialists early in product testing.

Is it possible to automate gameplay testing?

Generally speaking, it is, but there is a catch.

For instance, you can easily write a macro for an idler with 12 buttons that must be pressed at the right time (see the sketch below). In most other genres (casual games, hyper-casual games, match-3 and others), things are much more complex. Because of their structure, automated testing can't provide accurate and reliable results there. And if a developer changes the interface (moves a button, for example), the autotest has to be altered too.
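For an idler like that, the macro really can be a handful of timed taps. Here is a toy sketch driving a device over adb; the coordinates and delays are invented:

```python
import subprocess
import time

# Hypothetical layout: (x, y) of each button and the delay before pressing it.
BUTTONS = [
    ((540, 1600), 2.0),   # collect income
    ((540, 1400), 1.5),   # upgrade generator
    ((540, 1200), 3.0),   # claim bonus
]

def run_macro(device_id: str, iterations: int) -> None:
    """Press each button 'on time' for a fixed number of loops."""
    for _ in range(iterations):
        for (x, y), delay in BUTTONS:
            time.sleep(delay)
            subprocess.run(
                ["adb", "-s", device_id, "shell", "input", "tap", str(x), str(y)],
                check=True,
            )

run_macro("emulator-5554", iterations=10)  # placeholder device id
```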

Let's take merge games as an example. Creating a macro for unit testing in such projects is nearly impossible: objects appear randomly and are then moved and merged, so macros won't function properly since the spawn points can't be identified (though in some games such things can be predicted). This kind of testing can be performed inside Unity at the development stage, but it won't affect the subsequent manual QA testing in any way.

Of course, there are even more gameplay aspects that can only be tested correctly by hand. For instance, a game designer changes some shotgun characteristics: they set the total damage but miss the fact that it is now the damage of a single pellet. Having completed several levels with this weapon, you catch yourself thinking that something is definitely off, while the autotest won't detect anything: the gun shoots and deals damage, so formally it functions properly.

The situation is different in games where players must survive and collect resources on top of simple shooting. Here it is possible to write a macro that guides a character and collects loot from certain points, letting you compare the claimed amount of loot with the actual amount. This automation works only if you can say exactly where the loot spawns. And yet again, the slightest change to the map or loot positions forces an update of the script, which might take longer than creating a new one (itself already quite time-consuming).
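The comparison at the end of such a run is then trivial. A sketch, assuming the spawn points and their claimed loot amounts are known in advance (all names and numbers here are invented):

```python
# Hypothetical map data: the loot each spawn point is declared to yield.
CLAIMED_LOOT = {"spawn_a": 50, "spawn_b": 120, "spawn_c": 30}

def verify_loot(collected: dict[str, int]) -> bool:
    """Compare the loot the macro actually picked up with the claimed amounts."""
    ok = True
    for point, claimed in CLAIMED_LOOT.items():
        actual = collected.get(point, 0)
        if actual != claimed:
            print(f"{point}: claimed {claimed}, got {actual}")
            ok = False
    return ok

# Example: amounts reported by the macro after one run.
print(verify_loot({"spawn_a": 50, "spawn_b": 115, "spawn_c": 30}))
```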

No autotest, whether in analytics or in gameplay testing, can function properly without the constant support of a QA specialist. What is the point, then? No automation of any process can fully substitute for a human. Its main aim is to save you time by collecting information (or finding a missing comma when analysing events). Our automation examples confirm this in the case of analytics: while automated processes can significantly ease a specialist's life, drawing conclusions from the collected data is still the responsibility of a real person.

So will machine intelligence ever be able to replace a human?

In some software, machine intelligence (MI) analyses what exactly happens after a certain button is pressed. Specialists then use this as a foundation for automating simple processes, though the data collected this way can't be called fully reliable.

No MI can play a game and share its impressions afterwards: which locations lack ammo, or whether the interface seems too complex at times. Here automation may prove useless, since it's far easier for a real person to evaluate the visuals, music and lighting and decide whether playing the game and using specific mechanics is enjoyable.

After all, games are created for people and not for robots. 

Impressions of gameplay are vital information that QA specialists must convey to business development managers. It is their feedback that helps make projects better.

Bonus: Hidden advantages of automated testing

There are cases when mobile game macros have turned into full-fledged mechanics over time. The author of this article previously worked on a survival game, where he had to test the loot weights. Checking a large number of locations with different probabilities of finding resources is a tedious task. So, with the help of programmers, a macro was developed that guided a character to collect loot at spawn points. Several emulators were launched with a prepared build, and after just 10 minutes we confirmed that everything was functioning correctly (in a split test, or A/B test: a marketing test where users are divided according to various parameters and the resulting data is collected).

It took half an hour to create one-handed character control: press a single button to collect loot. This mechanic has since become typical for the genre, but at the time, the ability to attack and loot simultaneously was something wholly new and fascinating.

Automation in mobile games has always lagged behind, since running autotests in apps to compare numerical values is far less challenging than analysing a whole game. Our QA engineers try to automate everything that is possible (and profitable), keeping up with the latest trends and technologies in the field.

If a new framework suitable for automation appears, we will definitely try it out to find new solutions and test your games even better.
