Mobile App Automation: Running Dream11 Seamlessly On iOS & Android
Playing fantasy sports on your Android or iOS device, building teams and hustling like the players on the ground, is exciting. At Dream11, this is exactly what we intend to deliver to our 100M+ users: a wholesome experience where they can use their skills to build and own their fantasy sports teams and engage with their favourite sports like never before. But have you ever wondered how the Dream11 app keeps running smoothly on your device, whether Android or iOS? The process behind it is detailed and meticulous, and we have built just the solution for it.
The Challenges:
Several teams at Dream11 work to deliver the best possible user experience. But even a small change in any service or app often takes a lot of time and effort, because everything has to be tested on both platforms, Android and iOS. This can result in longer regression cycles.
Every new feature also requires regression testing around the core flows, which results in longer release cycles as well.
Our Solution:
A Mobile App Automation Framework that helps us in the following ways:
- Increase efficiency and reduce execution time, helping teams perform feature regression and release regression.
- Reuse test code across platforms, making it unnecessary to engineer new code for separate frameworks.
- Run on multiple environments to speed up the process and gain confidence in the builds.
- Minimise maintenance cost; previously, different teams maintained different frameworks to automate the user journeys owned by their respective teams.
Tools and Technologies Used:
The tools and technologies we used to build this framework are:
- Java: the core programming language
- Maven: a dependency and build management tool
- TestNG: for test case management
- Appium: for mobile automation
- Jenkins: a continuous integration and continuous testing (CI/CT) tool
- Android Studio and Xcode: for interacting with emulators and real devices
The Framework Lifecycle:
A combination of all the components depicted in the above diagram gives us the entire test framework. Let us take a look at each framework component in detail:
- The TestNG runner is the entry point that triggers all the test classes or packages that need to run. Before execution starts, it runs a backend health check to verify that all servers are healthy; only if the check passes does execution proceed, otherwise it stops (a minimal sketch of this gate follows this list).
- The device manager provides the list of iOS and Android devices connected to the host machine.
- The config file manager provides the predefined set of properties or environment variables, used when we want to run on a specific platform, Android or iOS, or against a particular staging environment.
- The device allocation manager is responsible for detecting, distributing and running the test cases across all the connected devices in a queued fashion (see the device-pool sketch after this list).
For instance, if there are three connected devices and four test cases to run, the first three cases are distributed to the connected devices. The fourth test then starts executing on whichever device completes its assigned test case first. In the background, the allocation manager allocates and deallocates devices as each test execution completes.
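To make the flow concrete, here is a minimal sketch of such a health-check gate as a TestNG @BeforeSuite hook. The class name and endpoint URL are hypothetical, not the framework’s actual code; in practice the URL would come from the config file manager.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.testng.SkipException;
import org.testng.annotations.BeforeSuite;

public class BackendHealthCheck {

    // Hypothetical health endpoint; the real URL would be read from the
    // config file manager described above.
    private static final String HEALTH_URL = "https://staging.example.com/health";

    @BeforeSuite(alwaysRun = true)
    public void verifyBackendIsHealthy() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(HEALTH_URL)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Stop the whole suite if any backend server is unhealthy.
        if (response.statusCode() != 200) {
            throw new SkipException("Backend unhealthy (HTTP "
                    + response.statusCode() + "); stopping execution.");
        }
    }
}
```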
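Similarly, the queued allocation behaviour can be sketched with a blocking queue acting as the device pool; the class names below are illustrative assumptions, not the actual implementation:

```java
import java.util.Collection;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Illustrative device pool: tests block on acquire() until a device is free,
 * so with three devices and four tests, the fourth test starts on whichever
 * device finishes its assigned test case first.
 */
public class DeviceAllocationManager {

    private final BlockingQueue<Device> pool = new LinkedBlockingQueue<>();

    public DeviceAllocationManager(Collection<Device> connectedDevices) {
        pool.addAll(connectedDevices);
    }

    /** Blocks until a connected device becomes available. */
    public Device acquire() throws InterruptedException {
        return pool.take();
    }

    /** Returns a device to the pool once its test finishes. */
    public void release(Device device) {
        pool.offer(device);
    }
}

// Hypothetical holder for a device's UDID and platform.
record Device(String udid, String platform) { }
```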
The Framework Level Utility:
Several utilities were created at the framework level:
- The Appium server manager utility is responsible for starting and stopping the Appium server and its driver service (see the sketch after this list)
- The data creation utility is a centralised helper for creating test data for user journeys
- The TestRail helper utility integrates with TestRail, our test case management tool. Executing a regression automatically creates a test run and updates the test results, attaching the necessary logs and screenshots.
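As an illustration of the first utility: the Appium java-client ships service APIs for managing a local server programmatically, and a minimal wrapper (a sketch, not the actual utility) could look like this:

```java
import io.appium.java_client.service.local.AppiumDriverLocalService;
import io.appium.java_client.service.local.AppiumServiceBuilder;

/** Illustrative wrapper around Appium's programmatic server-management APIs. */
public class AppiumServerManager {

    private AppiumDriverLocalService service;

    /** Starts a local Appium server on any free port. */
    public void start() {
        service = AppiumDriverLocalService.buildService(
                new AppiumServiceBuilder().usingAnyFreePort());
        service.start();
    }

    /** Base URL the drivers should connect to, e.g. http://127.0.0.1:4723/. */
    public String getServerUrl() {
        return service.getUrl().toString();
    }

    /** Stops the server once the run is complete. */
    public void stop() {
        if (service != null && service.isRunning()) {
            service.stop();
        }
    }
}
```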
While an execution is in progress, we capture information that helps us analyse the results in real time:
- The logger is responsible for capturing custom logs and logcat output (for Android)
- ReportPortal is an AI-powered third-party dashboard that helps us analyse and visualise the automation test reports. Its AI capability helps assess the flakiness of a test based on that test’s historical results
- Screenshots & videos: the entire execution cycle is recorded on video. In case of a test failure, the video is attached to the test case in the report, along with a screenshot taken at the point of failure (a listener sketch follows this list)
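A minimal sketch of the failure capture, written as a TestNG listener around Selenium’s TakesScreenshot API; the ThreadLocal driver registry is an assumption for illustration, not the framework’s actual wiring:

```java
import java.io.File;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

/** Illustrative TestNG listener that captures a screenshot at the point of failure. */
public class FailureCaptureListener extends TestListenerAdapter {

    // Hypothetical: each test thread registers its driver here after allocation.
    public static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    @Override
    public void onTestFailure(ITestResult result) {
        WebDriver driver = DRIVER.get();
        if (driver == null) {
            return; // Nothing to capture if the driver never started.
        }
        File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        // In the real framework this file (and the recorded video) would be
        // attached to the report and the TestRail result.
        System.out.println("Failure screenshot for " + result.getName()
                + " saved at " + screenshot.getAbsolutePath());
    }
}
```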
Adding Test Cases:
Now, let’s understand how test cases are added, organised, and fed to the TestNG runner. Below is an illustration of a test scenario in which the user performs registration in the application and navigates to the home page after a successful registration.
We use the Page Object Model design pattern, which segregates pages and locators in an organised way: a change in any locator only requires a change in one page object file. Each test class has its test pages and their corresponding page objects, as sketched below.
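Here is a hedged sketch of what such a page object might look like using Appium’s PageFactory support; the locator IDs, RegistrationPage, and HomePage are made-up examples rather than Dream11’s actual pages:

```java
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.pagefactory.AndroidFindBy;
import io.appium.java_client.pagefactory.AppiumFieldDecorator;
import io.appium.java_client.pagefactory.iOSXCUITFindBy;

import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.PageFactory;

/** Illustrative page object for the registration screen; locator IDs are made up. */
public class RegistrationPage {

    @AndroidFindBy(id = "com.example.app:id/email_input")
    @iOSXCUITFindBy(accessibility = "email_input")
    private WebElement emailInput;

    @AndroidFindBy(id = "com.example.app:id/register_button")
    @iOSXCUITFindBy(accessibility = "register_button")
    private WebElement registerButton;

    public RegistrationPage(AppiumDriver driver) {
        // AppiumFieldDecorator resolves the platform-specific locators at runtime,
        // so the same page object serves both Android and iOS tests.
        PageFactory.initElements(new AppiumFieldDecorator(driver), this);
    }

    /** Fills the form, submits, and returns the next page in the flow. */
    public HomePage register(String email) {
        emailInput.sendKeys(email);
        registerButton.click();
        return new HomePage();
    }
}

// Hypothetical page object for the screen shown after successful registration.
class HomePage { }
```

The paired @AndroidFindBy/@iOSXCUITFindBy annotations are what let a single page object, and therefore a single test, serve both platforms.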
CI/CT:
We use Jenkins as our CI/CT tool for checking out code from GitHub and executing it. Once a Jenkins job is triggered, the tests run on an in-house automation lab: a set of physical devices connected through a USB hub, plus emulators. We can also run the tests on various cloud-hosted devices. The CI/CT setup primarily consists of two jobs: Build and Test.
- Build: this job generates an Android Package (APK) from a user-defined branch, pointing to any environment, and triggers the downstream Test job after the APK is successfully generated. Apart from manual executions, the job also runs nightly with an APK generated from the latest master branch, pointing to a dedicated automation environment.
- Test: the APK is copied from the Build job’s workspace and installed on all the connected devices. The tests are then triggered on the devices in a parallel or distributed manner, based on user input. We also send alerts on Slack to notify the team when the suite starts and finishes, along with links to the reports (a notifier sketch follows below).
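The Slack alerts can be as simple as a POST to an incoming-webhook URL; the helper below is an illustrative sketch with a placeholder webhook, not our actual integration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Illustrative Slack alert helper using an incoming-webhook URL. */
public class SlackNotifier {

    // Placeholder webhook; the real one would be injected from CI credentials.
    private static final String WEBHOOK_URL =
            "https://hooks.slack.com/services/T000/B000/XXXX";

    /** Posts a plain-text message to the channel (naive JSON escaping omitted). */
    public static void post(String message) throws Exception {
        String payload = "{\"text\": \"" + message + "\"}";
        HttpRequest request = HttpRequest.newBuilder(URI.create(WEBHOOK_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());
    }
}

// e.g. SlackNotifier.post("Regression suite started: <report link>");
```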
The Future Roadmap:
The focus ahead will be on building the capability to capture per-screen application programming interface (API) calls to monitor and analyse performance impact, enhancing the event-capturing mechanism, and tracking performance-related metrics like memory, CPU, network and battery.
If you want to build solutions offering the best possible fantasy sports experience to over 100 million users, join us by applying here!