In this article, I’d love to share some of what I’ve learned from creating Testing Strategies for mobile teams.
Creating those strategies turned out to be a game changer and truly a discovery. It helps teams improve their testing experience, makes tests easier and more straightforward to write, improves coverage, and addresses common testing issues.
Writing tests has always been an important part of mobile development. It allows us to automatically verify user experiences or validate business logic that we ship.
However, oftentimes testing gets confusing as the project grows since:
- We keep adding new tests, new ways of testing, new approaches, new best practices and so on, making tests more advanced and complex over time.
- In this fast-paced environment, teams often find themselves writing some tests, but there is no alignment on how, where and what tests should actually be added.
- Testing lacks consistency in terms of style, organization, coverage and even vision – everyone tests in slightly different ways, missing the full picture and leaking bugs into production.

Without basic structure and specifications, testing becomes chaotic, insufficient, and inconsistent.
Testing must be easy, otherwise it becomes a burden
While we all strive to be great engineers, everyone makes mistakes, and bugs leak into production. To prevent this, we write tests. But what happens if tests are hard to add?
When writing tests becomes cumbersome and unpleasant, fewer and fewer people care about the essence of those tests. We soon end up with just a handful of enthusiasts who still try to make sense of testing in the team. Adding tests must be easy, straightforward and meaningful. The easier it is, the more tests will be added, simply because it is not a big deal anymore.
What do we even test?
Do we just test that our code works? Or do we verify that the entire feature functions as per requirements?
The thing is – a feature can be tested in various ways, starting from unit tests for its business logic to E2E scenarios where it’s just a part of a bigger user flow.

A feature has multiple facets, all of them matter, and meaningful coverage should address each of them. So if your feature XYZ has a UI, we should strive to test the UI appearance, UI behavior, UI states, navigation/transitions, etc.
Rather than just throwing all kinds of tests at your project, creating a Testing Strategy first helps teams focus on the aspects that matter most to them and their product.
How to create a Testing Strategy
The goal of a testing strategy is to build a strong foundation that will help the team verify the features we ship by providing clear guidance on what, how, where and why tests should be added.
It simply makes testing easy, straightforward and meaningful by addressing testing issues, introducing improvements and setting the stage for productive testing.
I summarized my process of creating a testing strategy into just 3 steps.
Step 1: Create a foundation
First and foremost, start by understanding the current state of things along with the needs of your team and product. The foundation you create should be aligned with your project first, before any generic industry-wide recommendations.
Here are some questions that can help your team explore its current testing experience and practices:
| Test-specific | Outcome-centric | Team-oriented |
|---|---|---|
| What do you test at the moment? How do you test it? What types of testing are available? What types of testing are used? Are there problems with the current tests? (e.g. stability) Are all features tested equally? | Are there still many bugs in production? What kind of bugs are those? Would certain tests be able to prevent those bugs from happening? What is sufficient test coverage in the team’s opinion? Does the team have sufficient coverage? | Does everyone in the team add tests? Does everyone test in the same way? (e.g. snapshot test for 1 state or all possible screen states) Does the team use the same patterns/approaches when testing? (e.g. usage of the Robot pattern, unified ViewModel setup, etc.) |
Answering those questions (and some more) should help the team align on what matters and what gaps exist at the moment. Once we have the team’s desires and needs confirmed, we can proceed to mapping out our first foundational pieces – the testing pillars.
Highlight what matters with testing pillars
Testing pillars represent what matters most to the team in terms of testing. It can be “Efficient test execution” if it takes a long time to run suites, “Comprehensive coverage” if some aspects of the app are not covered, or even “Ease of writing tests” if it is simply hard to add new tests.
The team defines its own pillars based on the struggles and issues it encounters when testing. The goal is to recognize and emphasize the things the team should focus on.
Example Team “Aurora”
Team “Aurora” set out to figure out their testing, brainstormed the questions and collected the feedback.
The team found out that no one actually likes writing UI tests because they use a framework (let’s call it “Appiun”) that takes a lot of effort and time to just write a single test. The team adds those tests only when it is absolutely necessary.
Most of the team just writes unit tests. Their unit test coverage is sufficient and business logic errors are rare. However, there are 3 ways of testing ViewModels and 3 mocking libraries actively used, which means everyone tests in a slightly different way.
From time to time, they see UI-related bugs in production. The few UI tests available focus on basic happy-path E2E scenarios and are generally stable, but take a long time to execute, so they only run in nightly CI jobs to keep PR builds fast.
Having this input, the team agreed upon 3 core testing pillars:
- Low-cost testing
  In terms of developer time and effort, tests must be easy to write, update and maintain over time (why: the “Appiun” tests made adding UI tests very time-consuming)
- Comprehensive coverage
  We want to extend our coverage of UI and E2E scenarios (why: we have very few E2E tests and don’t cover other main user flows, and the team has seen bugs originating from those flows)
- Unified approach
  As a team, we should align on how we test our features. We should unify our stack, approaches and structure to ensure consistency across tests.
Once the pillars take shape, you might already see how to approach your goals and what you can do to address the known issues!
- We can start using Snapshot tests for UI as they are easy and fast to add.
- We can replace the heavy framework with native UI testing libraries (maybe even just partially in the beginning)
- And so on.
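To make the first suggestion a bit more tangible, here is a minimal sketch of the snapshot-testing idea in plain Java. Everything here is hypothetical – the `Button` component, its text rendering and the in-memory baselines; real snapshot tools render actual UI and store baselines as files or images – but the mechanics are the same: render once, record, then compare on every run.

```java
import java.util.Map;
import java.util.Objects;

public class SnapshotSketch {

    // Hypothetical component model: a button with a label and an enabled flag.
    static class Button {
        final String label;
        final boolean enabled;

        Button(String label, boolean enabled) {
            this.label = label;
            this.enabled = enabled;
        }

        // Render the component to a stable text representation.
        String render() {
            return "Button(label=" + label + ", enabled=" + enabled + ")";
        }
    }

    // "Recorded" baselines; real snapshot frameworks store these as files/images.
    static final Map<String, String> BASELINES = Map.of(
            "checkout_button", "Button(label=Buy now, enabled=true)"
    );

    // A snapshot test passes when the current rendering matches the baseline.
    static boolean matchesSnapshot(String name, String rendered) {
        return Objects.equals(BASELINES.get(name), rendered);
    }

    public static void main(String[] args) {
        Button button = new Button("Buy now", true);
        System.out.println(matchesSnapshot("checkout_button", button.render())); // true
    }
}
```

The appeal for a team like “Aurora” is exactly the low cost: once the rendering and comparison plumbing exists, each new snapshot test is just one more component and one more recorded baseline.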
Keep all those suggestions and enhancements for the next step where you will make the testing plan.
Step 2: Write the strategy
Once the team knows what they want, need or struggle with, they can chart the path to those goals.
To write the strategy, we should use the inputs gathered during the discussion and while shaping the testing pillars. A simple strategy should:
- Define what we should do to succeed
  Based on what matters to the team (the pillars), we can now map out ideas and next steps that will help us achieve the desired results and address current issues.
- Specify how we should do it
  For example, define a testing pyramid, add simple rules like “When adding a Snapshot test for a component, verify its loading, success and error states”, or create full guidelines for each type of testing, like “This is how we test UI…”.
- List the dependencies, tooling and components we require
  E.g. an admin panel for managing test users, a separate dev environment, backend-side mocking, etc.
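A simple rule like “verify the loading, success and error states” can even be encoded so it is hard to break. The sketch below (all names are hypothetical, and the `render` function stands in for real UI rendering) enumerates the screen states once and runs the same check for each of them:

```java
public class StateCoverage {

    enum ScreenState { LOADING, SUCCESS, ERROR }

    // Hypothetical render function mapping each state to what the user sees.
    static String render(ScreenState state) {
        switch (state) {
            case LOADING: return "Spinner";
            case SUCCESS: return "Content";
            case ERROR:   return "Retry button";
            default: throw new IllegalStateException("Unhandled state: " + state);
        }
    }

    // One loop covers every state, so adding a new enum value automatically
    // extends the test – no state can be silently forgotten.
    static boolean allStatesRender() {
        for (ScreenState state : ScreenState.values()) {
            if (render(state).isEmpty()) {
                return false;
            }
        }
        return true;
    }
}
```

The design choice here is to drive the test from the state enum itself rather than from hand-picked cases, so the rule is enforced structurally instead of relying on reviewer memory.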

Defining what we should do
This is the section where you add all the ideas for enhancements and the resolutions for known issues. The actions should focus on the pillars the team defined earlier.
Actions do not have to focus solely on writing tests. A team can address much more than just that. Here are some examples of how actions can target different aspects of testing:
| Process-specific | Code-specific | Infrastructure-specific | Etc. |
|---|---|---|---|
| Should we define how we will test or define core test scenarios when planning the feature development? Do we want to run all our automated E2E tests on the PR CI job? In addition to automated tests, do we run a manual testing run for new features? Would the entire team participate? | Can we have the same universal structure for our unit tests? Can we use better organization patterns for our UI tests? (e.g. Robot pattern) Should we align on the same libraries we use for mocking/assertions/etc.? | Should we update our CI setup to run E2E UI tests faster? Do we need a better test abstraction over ViewModels to have an easier time testing state management? Could we run our Unit Tests with parallelization to have faster suites? | Should we introduce Snapshot testing for verifying UI appearance? Should we migrate to a better testing framework? Can we ask Backend to create basic test setup endpoints for us? What is our test ratio? 20% for UI tests and 80% unit tests? Do we want to improve/update that? |
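The Robot pattern mentioned in the table can be sketched in a few lines. This is a plain-Java toy, assuming a hypothetical login screen – a real robot would drive Espresso, XCUITest or a similar framework – but it shows the idea: the test reads as a sequence of user intentions, while the robot hides how each step is performed.

```java
public class RobotPatternSketch {

    // Hypothetical screen under test; a stand-in for a real UI driver.
    static class LoginScreen {
        String user = "";
        String pass = "";
        boolean loggedIn = false;

        void submit() { loggedIn = !user.isEmpty() && !pass.isEmpty(); }
    }

    // The robot exposes domain-level actions and returns itself for chaining.
    static class LoginRobot {
        private final LoginScreen screen;

        LoginRobot(LoginScreen screen) { this.screen = screen; }

        LoginRobot typeUsername(String name) { screen.user = name; return this; }
        LoginRobot typePassword(String pwd)  { screen.pass = pwd;  return this; }
        LoginRobot tapLogin()                { screen.submit();    return this; }
        boolean isLoggedIn()                 { return screen.loggedIn; }
    }

    // The test itself stays short and readable:
    static boolean loginFlowSucceeds() {
        return new LoginRobot(new LoginScreen())
                .typeUsername("alice")
                .typePassword("secret")
                .tapLogin()
                .isLoggedIn();
    }
}
```

When every screen gets a robot with the same shape, UI tests across the team start to look alike – which is exactly the kind of organizational consistency the “Code-specific” column is after.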
Example Team “Aurora”
Getting back to our team “Aurora”, their suggestions included:
- Low-cost testing
  1. Introduce Snapshot tests for verifying UI
  2. Migrate from the old framework to a better native UI testing library
  3. Create test infrastructure/utilities that will help everyone in the team test ViewModels and state management in the same way
- Comprehensive coverage
  1. Start testing UI more with Snapshot tests
  2. Discuss Contract Testing with the Backend team
  3. Extend E2E tests to cover more main user flows
- Unified approach
  1. Use the same patterns for E2E testing
  2. Remove redundant mocking libraries and only use library A
  3. Add a simple abstraction to enforce the same test structure for XYZ
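To illustrate what such a shared abstraction could look like, here is a minimal sketch. The `CounterViewModel` and `StateRecorder` names are hypothetical stand-ins, not a real ViewModel API – the point is that every test captures emitted states through the same recorder instead of inventing its own capture logic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ViewModelTestHarness {

    // A toy ViewModel: pushes every state change to a single observer.
    static class CounterViewModel {
        private int count = 0;
        private final Consumer<Integer> observer;

        CounterViewModel(Consumer<Integer> observer) {
            this.observer = observer;
            observer.accept(count); // emit the initial state
        }

        void increment() { observer.accept(++count); }
    }

    // The shared harness: records every emitted state so all tests assert
    // on the same history, giving every ViewModel test the same shape.
    static class StateRecorder<S> implements Consumer<S> {
        final List<S> states = new ArrayList<>();

        @Override
        public void accept(S state) { states.add(state); }

        S latest() { return states.get(states.size() - 1); }
    }

    // Usage: every ViewModel test follows the same arrange/act/assert shape.
    static int latestCountAfterTwoIncrements() {
        StateRecorder<Integer> recorder = new StateRecorder<>();
        CounterViewModel viewModel = new CounterViewModel(recorder);
        viewModel.increment();
        viewModel.increment();
        return recorder.latest();
    }
}
```

With one recorder shared by the whole team, the “3 ways of testing ViewModels” problem from the earlier findings collapses into a single, predictable structure.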
Specifying how we proceed
Congratulations, you are past the hardest part of the process – figuring out what the team needs in the first place. The next step is to specify how the team can approach their goals and tasks.
This section of the strategy simply states “We want to do X in Y way with Z to achieve Goal A” for each of the goals the team defined. In addition, it mentions other docs, artifacts and pictures that are necessary or helpful to streamline the testing process.
Artifact examples:
| Process-specific | Code-specific | Infrastructure-specific | Dependencies, tooling |
|---|---|---|---|
| Test pyramid Testing process timeline Template for a feature test plan | Guidelines on how to write snapshot tests Guide to how to test ViewModel states | Instructions on how to create a QA user using backend endpoints A diagram showing how to parallelize UI tests execution | A document that links all our testing resources and admin panels in one place Etc. |
Example Team “Aurora”
Team Aurora started mapping out their tasks to achieve the goals.
During the process, they decided to define their desired test pyramid. They want to clearly specify that unit tests are the foundation of the validation process. Then they want to have more Snapshot tests and Integration tests. Only 20% of their tests should be E2E UI tests since, the team says, they are harder to introduce and maintain.
The pyramid is supposed to give them a clearer picture and show where the newly introduced Snapshot tests fit.

After that, the team decided to clarify what the overall testing process should look like by drawing a simple timeline diagram:

For other objectives, they introduced improvement tasks such as “Improve linting for test files” and noted which new documents and guidelines they would need:

And so on.
The exact documents and procedures will be unique to each team and project, but overall they should draw the path that will help the team achieve better testing.
Defining dependencies, tooling, additional requirements
The strategy should include any constraints, additional resources and specific details needed for a full picture.
It is worth noting down all the constraints that developers experience. Once they are recognized, the team can try to address them even if some blocking part is owned by a different team/org.
Not only can this help define interconnections with other teams, it also serves as a memo of what tools/resources/channels the team currently uses for testing.
Step 3: Revisit, review, refine
The codebase continues to grow, and even after making initial improvements, the team will likely encounter new challenges after some time.
Whether it is scaling E2E tests effectively or adding better test performance monitoring, the strategy should reflect those new struggles, needs and desires.
Thus, the testing strategy is supposed to be revisited and reviewed once in a while to ensure that it reflects the latest priorities and addresses new challenges.
It is very important to remember that the team runs the document – dry specifications should not dictate how the team tests. Feel comfortable making changes to it every now and then, and try scheduling half-year syncs with the team to update it.
During the review, the team can simply go through the same process/questions to identify what needs to be updated, whether it is the testing pillars, objectives, tasks or artifacts.
Here is an example structure for the sync:
- Progress made towards the existing goals and objectives;
- What went well, what didn’t go well;
- Review of the pillar questions, identifying any new challenges;
- Review of the strategy execution plan.
The end.
Thank you very much for reading this post. Please share your feedback in case you feel something is missing or can be improved!