AUTOMATED TESTING BEST PRACTICES | WEBDRIVERIO WITH JAVASCRIPT

Building a Robust Test Automation Framework with WebdriverIO: Best Practices

BY QATEAM

Efficient test automation is crucial for reliable software testing, and WebdriverIO provides a robust framework to achieve this. This blog will highlight best practices that enhance the performance and maintainability of your automation efforts. We'll cover key topics such as setting up your test environment, adopting the Page Object Model (POM) for better test organization, and leveraging WebdriverIO commands effectively.

Additionally, we'll explore strategies for parallel test execution to reduce runtime, best practices for locating elements to avoid flakiness, and optimizing test reliability with custom waits. We'll also address common pitfalls, cross-browser testing integration with platforms like BrowserStack, and maintaining test stability with retry mechanisms.
By the end of this blog, you'll have practical insights to enhance your WebdriverIO automation strategy, ensuring a smoother and more efficient testing process.
Table of Contents
Setting Up Your Test Environment Efficiently
Integrating the Page Object Model (POM) for Better Test Organization
🎯 Benefits of POM in Large-Scale Projects
Example: Refactoring Tests Using POM
🎯 Benefits of Refactoring with POM
Efficient Use of WebdriverIO Commands
🎗️ Best Practices for Locating Elements in WebdriverIO
Strategies to Avoid Flaky Element Selectors
Using Custom Locators Effectively in Complex Applications
Optimizing Test Reliability with Custom Waits
Crafting Custom Wait Utilities for Flaky Scenarios
Example: Custom waitForElementText Utility
Implementing waitForShadowDom for Shadow DOM Elements
Example: Custom waitForShadowDom Utility
Avoiding Common Pitfalls in WDIO Tests
Caching Elements—Why It's a Bad Practice
Example of Stale Element Issue
Optimizing Cross-Browser Testing with WDIO
Best Practices for Handling Browser Compatibility
Example: Integrating BrowserStack and Sauce Labs
Maintaining Test Stability with Retry Mechanisms
Configuring Retries in WebdriverIO (WDIO) Configuration
Strategies for Reducing Flakiness in CI Pipelines
Stabilize Test Data and Environments
Optimise Test Execution
Use Better Synchronization Techniques
Monitor CI Infrastructure Performance
Improve Test Quality
Employ Smart Retries with CI Tools
Utilize Hooks for Setup Tasks in WebdriverIO
What Are Hooks in WebdriverIO?
Types of Hooks in WebdriverIO
Benefits of Using Hooks in WebdriverIO
Best Practices for Using Hooks in WebdriverIO
Use Custom Commands for Reusability
What Are Custom Commands in WebdriverIO?
How to Create Custom Commands in WebdriverIO
Basic Syntax of a Custom Command
Using the Custom Command in Tests
Adding Commands to Specific Elements
Best Practices for Custom Commands
Benefits of Custom Commands
Example: Login Command with Error Handling
Custom Commands and Parallel Execution
Where to Define Custom Commands?
Enhance Debugging with Screenshots in WebdriverIO
How to Capture Screenshots with WebdriverIO
Use Descriptive File Names
Example: Parallel Tests with Screenshots
Optimise Test Structure for Readability in WebdriverIO
Key Practices for Improving Test Readability
Organize Tests Using Describe Blocks
Keep Tests Short and Focused
Implement the Page Object Model (POM)
Use Hooks for Setup and Cleanup
Modularize Repetitive Logic Using Custom Commands
Avoid Hardcoding Test Data
Consistent Naming Conventions
Add Meaningful Assertions
Benefits of an Optimized Test Structure
Conclusion
Setting Up Your Test Environment Efficiently
To harness the full potential of WDIO, you need a solid test environment setup. For a step-by-step walkthrough of setting up WebdriverIO, you can refer to our WebdriverIO Setup guide.
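At a glance, the setup can be sketched with the official scaffolding commands (this assumes a recent Node.js installation; the project path is a placeholder):

```shell
# Scaffold a new WebdriverIO project with the official wizard
# (prompts for runner, framework, reporter, and services)
npm init wdio@latest ./my-wdio-project

# Or add WebdriverIO to an existing project
npm install --save-dev @wdio/cli
npx wdio config

# Run tests with the generated configuration
npx wdio run ./wdio.conf.js
```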
Integrating the Page Object Model (POM) for Better Test Organization
The Page Object Model (POM) is a design approach that improves test automation by organizing page elements and their methods in separate files, away from test files. This makes your tests more manageable, especially when working with applications that have multiple pages or complex workflows.
🎯 Benefits of POM in Large-Scale Projects
Centralized Maintenance
UI changes are only updated in the relevant page object file, reducing effort.
Code Reusability
Page methods (like login or search) can be reused across multiple test cases.
Improved Readability
Test scripts become concise, focusing only on business logic and assertions.
Better Scalability
Adding new pages or features becomes easier by extending existing page objects.
Reduced Flakiness
Encapsulating waits or interactions inside page objects makes tests more stable and reliable.
Example: Refactoring Tests Using POM
Without POM (Direct Test Logic in Tests)

describe('Login Test', () => {
    it('should log in successfully', async () => {
        await browser.url('https://www.saucedemo.com/v1/');
        const username = await $('#user-name');
        const password = await $('#password');
        const loginButton = await $('#login-button');
        await username.setValue('standard_user');
        await password.setValue('secret_sauce');
        await loginButton.click();
        const message = await $('//div[@class="product_label"]').getText();
        expect(message).toBe('Products');
    });
});
With POM (Refactored Approach)
Login Page Object (login.page.js):

class LoginPage {
    get username() { return $('#user-name'); }
    get password() { return $('#password'); }
    get loginButton() { return $('#login-button'); }
    get welcomeMessage() { return $('//div[@class="product_label"]'); }

    async open() {
        await browser.url('https://www.saucedemo.com/v1/');
    }

    async login(user, pass) {
        await this.username.setValue(user);
        await this.password.setValue(pass);
        await this.loginButton.click();
    }

    async getWelcomeMessage() {
        return this.welcomeMessage.getText();
    }
}

export default new LoginPage();
Refactored Test Script (login.test.js)
import login from '../../PageObjects/SauceLabPo/login.page.js';
describe('Login Test using POM', () => {
it('should log in successfully', async () => {
await login.open();
await login.login('standard_user', 'secret_sauce');
const message = await login.getWelcomeMessage();
expect(message).toBe('Products');
});
});
🎯 Benefits of Refactoring with POM
Centralised Maintenance: Any change to the login form only requires updates in login.page.js.
Clean Test Scripts: Tests now focus on validation rather than page interactions.
Scalability: Adding new tests becomes easier by reusing the Login Page methods.
Efficient Use of WebdriverIO Commands
Let's explore some best practices for using WebdriverIO commands to enhance test efficiency and reliability.

1. When to Avoid Protocol Methods in WebdriverIO
Protocol methods (e.g., browser.elementClick(), browser.executeScript()) communicate directly with the WebDriver protocol, bypassing WDIO's built-in error handling, retries, and implicit waits. Using these methods can lead to flaky tests, especially if elements aren't available due to dynamic content or latency issues.
When to Avoid:
UI is not fully loaded: Use WDIO commands like element.click(), which automatically retry until the element is ready.
Inconsistent behaviour: Use methods like .waitForDisplayed() to ensure stability before performing actions.
const button = await $('#login-button');
// Avoid this
await browser.elementClick(button.elementId);
// Use this instead
await button.click(); // WDIO retries on failure
2. Use of .waitFor and Dynamic Waits over Static Timeouts
Static waits (browser.pause()) halt execution for a fixed duration (e.g., pause(5000)), which slows tests unnecessarily and makes them fragile: even if an element becomes available earlier, the test still waits for the full duration. Dynamic waits, such as .waitForDisplayed(), pause only until the element is ready, improving both stability and speed.
Example:

const button = await $('#login-button');
await button.waitForDisplayed({ timeout: 5000 }); // Waits up to 5 seconds

// Avoid this
await browser.pause(5000); // Always waits 5 seconds, even if unnecessary
3. Configuring WDIO for Parallel Execution
Running tests in parallel shortens test execution time, especially for large suites. Configure parallel execution in the wdio.conf.js file using the maxInstances setting.
Parallel execution enables WebdriverIO to run multiple test cases or browser instances simultaneously. Instead of running tests sequentially (one after another), it divides the workload across available instances (browsers or devices), significantly reducing overall test time.
For example:
If you have 10 test files and set maxInstances: 5, WDIO will launch 5 tests at once, then start the next 5 when the first batch completes.
In cloud platforms like BrowserStack, parallel execution spreads tests across multiple devices or browsers, ensuring faster coverage and scalability.
exports.config = {
maxInstances: 5, // Run 5 tests in parallel
capabilities: [{ browserName: 'chrome' }],
};
4. Reducing Test Runtime in CI Environments with Parallel Workers
When running tests in CI/CD, use parallel workers to distribute tests across multiple runners. Tools like Jenkins, GitHub Actions, or GitLab CI allow splitting test suites by tags or groups.
Example: Split tests using tags or groups.
npx wdio run wdio.conf.js --suite login
npx wdio run wdio.conf.js --suite checkout
In your CI pipeline, assign suites to parallel workers:
jobs:
test-login:
runs-on: ubuntu-latest
steps:
- run: npx wdio run wdio.conf.js --suite login
test-checkout:
runs-on: ubuntu-latest
steps:
- run: npx wdio run wdio.conf.js --suite checkout
Using parallel workers reduces execution time by distributing tests across multiple agents, which is essential for fast feedback in CI pipelines.
🎗️ Best Practices for Locating Elements in WebdriverIO
Strategies to Avoid Flaky Element Selectors
Flaky selectors can break tests when UI elements change. Here are key strategies to make selectors reliable:
Use Unique Attributes: Prefer id, data-testid, or custom attributes (e.g., data-test) over CSS classes, which may change during UI updates.
Avoid Absolute XPaths: Instead, use relative XPath (e.g., //button[text()='Submit']).
Wait for Element States: Use dynamic waits like .waitForDisplayed() to ensure elements are ready.
Use CSS over XPath: CSS selectors are often faster and more readable.

// Good: Reliable CSS selector with custom attribute
const submitButton = await $('[data-test="submit-button"]');

// Avoid: Brittle XPath tied to the page hierarchy
// const submitButton = await $('//div[2]/form/button[1]');
Using Custom Locators Effectively in Complex Applications
In complex UIs, elements may not have unique attributes. You can define custom locators to improve test reliability.
Example of Custom Locator:

// Define custom selector logic. The callback runs in the browser
// context, so it uses DOM APIs to find matching elements.
browser.addLocatorStrategy('partialText', (text) => {
    return Array.from(document.querySelectorAll('*')).filter(
        (el) => el.textContent.includes(text)
    );
});

// Use the custom locator strategy in a test
const element = await browser.custom$('partialText', 'Welcome');
await element.click();

This approach makes interacting with tricky elements simpler and more maintainable over time.
Optimizing Test Reliability with Custom Waits
Custom wait utilities improve test stability, especially in scenarios where standard wait methods (like .waitForDisplayed()) aren't sufficient.

Crafting Custom Wait Utilities for Flaky Scenarios
Sometimes, elements may take longer to appear or change state due to dynamic content, animations, or network delays. A custom wait utility ensures your tests only proceed when specific conditions are met, reducing flakiness.
Example: Custom waitForElementText Utility

async function waitForElementText(selector, expectedText, timeout = 5000) {
    await browser.waitUntil(
        async () => (await $(selector).getText()) === expectedText,
        { timeout, timeoutMsg: `Text not found: ${expectedText}` }
    );
}

Usage:
await waitForElementText('#status', 'Success', 3000);
Implementing waitForShadowDom for Shadow DOM Elements
Shadow DOM elements are encapsulated and require special handling. A custom wait method ensures you can reliably interact with them.
Example: Custom waitForShadowDom Utility

async function waitForShadowDom(selector, timeout = 5000) {
    await browser.waitUntil(
        async () => {
            const host = await $(selector);
            // Check inside the browser whether the host has an attached shadow root
            return browser.execute((el) => el.shadowRoot !== null, host);
        },
        { timeout, timeoutMsg: `Shadow DOM not found for ${selector}` }
    );
}

Usage:
await waitForShadowDom('#shadow-host');
Avoiding Common Pitfalls in WDIO Tests
Caching Elements—Why It's a Bad Practice
Caching elements means storing references to them (e.g., const button = $('#btn');) and reusing them throughout the test. This practice is problematic because DOM elements may change between interactions (due to re-renders or state changes), causing stale element exceptions.
Example of Stale Element Issue:

// Cache element reference
const button = await $('#btn');

// If the DOM updates, this button reference becomes stale
await button.click(); // Might throw an error

Solution: Always fetch elements fresh right before interacting with them.

// Get element fresh before each interaction
await $('#btn').click();

Avoiding these pitfalls helps keep tests fast, maintainable, and stable, reducing flakiness in WebdriverIO automation.
Optimizing Cross-Browser Testing with WDIO
Best Practices for Handling Browser Compatibility
Use Standard Web Locators: Avoid browser-specific selectors that may behave differently across browsers.
Incorporate Dynamic Waits: Different browsers may render elements at different speeds. Use .waitForDisplayed() instead of pause().
Set Browser-Specific Capabilities: Define capabilities for browsers (like Chrome, Firefox) to handle known differences.
Enable Headless Testing: Use headless mode in CI pipelines to speed up cross-browser tests.
Example: Integrating BrowserStack and Sauce Labs
BrowserStack Configuration in wdio.conf.js:

exports.config = {
    user: process.env.BROWSERSTACK_USERNAME,
    key: process.env.BROWSERSTACK_ACCESS_KEY,
    services: ['browserstack'],
    capabilities: [
        { browserName: 'chrome', os: 'Windows', os_version: '10' },
        { browserName: 'firefox', os: 'OS X', os_version: 'Monterey' },
    ],
};

With services like BrowserStack or Sauce Labs, you can run tests across multiple browsers and platforms without managing local environments. This ensures better compatibility coverage and faster feedback in CI/CD pipelines.
Maintaining Test Stability with Retry Mechanisms
Configuring Retries in WebdriverIO (WDIO) Configuration
You can implement retries in the WDIO configuration to rerun failed tests. Here's how to do it:
Example: wdio.conf.js

exports.config = {
    // Retry failed tests (Mocha-level retries)
    mochaOpts: {
        retries: 2, // Retries each failing test up to 2 times
    },

    // Retry failed spec files
    specFileRetries: 2, // Retries individual spec files
    specFileRetriesDelay: 5, // Delay (in seconds) before a retry
    specFileRetriesDeferred: true, // Defers retries to the end of the run

    // Other configurations
    runner: 'local',
    framework: 'mocha', // or 'cucumber', 'jasmine'
    capabilities: [{
        maxInstances: 5,
        browserName: 'chrome',
    }],
    reporters: ['spec'],
};

Explanation:
mochaOpts.retries: Retries a failing test within a spec file.
specFileRetries: Retries a particular test/spec file.
specFileRetriesDelay: Introduces a delay before retrying the spec file.
specFileRetriesDeferred: If true, retries are deferred until all other tests have run.

This helps maintain test stability by reducing transient failures, which is especially useful in CI pipelines.
Strategies for Reducing Flakiness in CI Pipelines
Here are some strategies to reduce flakiness in CI:

Stabilize Test Data and Environments
Use Mock Data: Avoid relying on external systems by mocking APIs and databases.
Isolate Test Environments: Run tests on fresh, isolated environments (e.g., Docker containers).
Set Timeouts Carefully: Adjust timeouts based on expected response times and network variability.
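One simple way to keep test data isolated is to generate unique records per run rather than sharing fixed accounts between tests. A minimal sketch (the field names are hypothetical):

```javascript
// Generate a unique, throwaway test user per run so parallel tests
// and reruns never collide on shared data.
let userCounter = 0;

function makeTestUser(prefix = 'user') {
    const suffix = `${Date.now()}_${userCounter++}`;
    return {
        username: `${prefix}_${suffix}`,
        email: `${prefix}_${suffix}@example.test`,
    };
}
```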
Optimise Test Execution
Parallel Execution: Run tests in parallel to minimise dependencies.
Rerun Failed Tests: Use retries for intermittent issues, as configured above.
Queue Management: Limit parallel jobs if your CI/CD infrastructure faces bottlenecks.

Use Better Synchronization Techniques
Avoid Hard Waits: Replace static waits with WebDriver waits (e.g., waitForDisplayed).
Poll for State Changes: Use retries or polling for state-dependent elements.
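The polling idea can be sketched as a plain-Promise helper, analogous to WebdriverIO's built-in browser.waitUntil (names and defaults here are illustrative):

```javascript
// Poll an async condition until it returns truthy or the timeout expires.
async function pollUntil(condition, { timeout = 5000, interval = 100 } = {}) {
    const deadline = Date.now() + timeout;
    // Re-check the condition periodically instead of sleeping a fixed duration
    while (Date.now() < deadline) {
        if (await condition()) return true;
        await new Promise((resolve) => setTimeout(resolve, interval));
    }
    throw new Error(`Condition not met within ${timeout}ms`);
}
```

In real WDIO tests, prefer browser.waitUntil, which provides the same behavior with framework-aware error reporting.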
Monitor CI Infrastructure Performance
Reduce Browser Resource Usage: Use headless browsers or adjust resolution to save resources.
Detect Bottlenecks: Analyze job durations and resource utilization to identify bottlenecks.

Improve Test Quality
Modularize Tests: Break down large tests into smaller, independent ones.
Handle External Dependencies Gracefully: Add fallbacks for API rate limits or timeouts.
Log and Debug Better: Enable logs, screenshots, or video capture for failed tests to make debugging easier.

Employ Smart Retries with CI Tools
Use tools like Jenkins, GitHub Actions, or CircleCI to configure test reruns based on exit codes. Example: use retry plugins in Jenkins or workflows with retry steps in GitHub Actions.
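As one concrete sketch, a GitHub Actions workflow can wrap the test step in a retry action (this assumes the community nick-fields/retry action; the suite name is illustrative):

```yaml
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run WDIO suite with retries
        uses: nick-fields/retry@v3
        with:
          max_attempts: 3
          timeout_minutes: 15
          command: npx wdio run wdio.conf.js --suite login
```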
Utilize Hooks for Setup Tasks in WebdriverIO
What Are Hooks in WebdriverIO?
Hooks are lifecycle methods that run before or after specific events in a test's execution, like test suites, test cases, or session initialization. They help streamline repetitive tasks like test environment setup, logging, and resource cleanup.

Types of Hooks in WebdriverIO
beforeSession and afterSession
Use case: Setup and teardown tasks that need to run once
per session.
Example: Configuring environment variables or clearing
logs.
beforeSession: function (config, capabilities, specs) {
console.log("Starting a new test session.");
// Set environment-specific variables
process.env.TEST_ENV = 'staging';
},
afterSession: function (config, capabilities, specs) {
console.log("Test session ended.");
// Perform cleanup tasks
}
before and after Hooks
Use case: Run setup/cleanup logic before or after all tests in a worker.
Example: Database connection or API token generation.

before: async function (capabilities, specs) {
    console.log("Running setup before tests.");
    // e.g., connect to the database or generate an API token
},
after: async function (result, capabilities, specs) {
    console.log("Running cleanup after tests.");
    // e.g., close the database connection
}
beforeSuite and afterSuite
Use case: Manage pre-test preparations for a specific suite.
Example: Seeding test data or resetting a particular app state.
beforeSuite: function (suite) {
console.log(`Preparing suite: ${suite.title}`);
// Seed database with mock data
seedDatabase();
},
afterSuite: function (suite) {
console.log(`Finished suite: ${suite.title}`);
// Clear any suite-specific data
}
beforeTest and afterTest
Use case: Handle setup/cleanup at the individual test level.
Example: Resetting app state before each test or capturing a screenshot after a failure.

beforeTest: function (test) {
    console.log(`Starting test: ${test.title}`);
    // Reset app state before test
    browser.reloadSession();
},
afterTest: function (test, context, { error, result, duration, passed }) {
    if (!passed) {
        console.error(`Test failed: ${test.title}`);
        // Capture screenshot on failure
        browser.saveScreenshot(`./screenshots/${test.title}.png`);
    }
}
onComplete Hook
Use case: Actions after all test executions, such as
generating reports.
onComplete: function (exitCode, config, capabilities) {
console.log("All tests completed.");
// Generate test report
generateTestReport();
}
Benefits of Using Hooks in WebdriverIO
Code Reusability: Centralise common setup tasks, reducing duplication.
Improved Test Reliability: Ensure the environment is ready before tests run.
Clean Up Resources: Free up memory and avoid state issues by running teardown logic.
Consistency: Reduce human error by automating initialization and teardown across all tests.
Simplified CI/CD Pipelines: Automatically generate reports and manage logs at the session or test level.
Best Practices for Using Hooks in WebdriverIO
Avoid using browser.pause() inside hooks to maintain test speed.
Use conditional logic for environment-specific setups (e.g., different actions for staging vs. production).
Modularize reusable functions (e.g., seedDatabase()) to keep hook logic concise and maintainable.
Capture relevant test data (like screenshots or logs) in afterTest hooks for failed tests.
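For environment-specific setups, one approach is a small helper that hooks can call instead of branching inline; a sketch (the environment names and URLs are placeholders):

```javascript
// Resolve the base URL for the current environment; a hook such as
// `before` can call this instead of hardcoding URLs (values below
// are placeholders for illustration).
function resolveBaseUrl(env = process.env.TEST_ENV || 'staging') {
    const baseUrls = {
        staging: 'https://staging.example.com',
        production: 'https://www.example.com',
    };
    if (!baseUrls[env]) {
        throw new Error(`Unknown test environment: ${env}`);
    }
    return baseUrls[env];
}

// In wdio.conf.js:
// before: async () => { await browser.url(resolveBaseUrl()); }
```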
Use Custom Commands for Reusability
What Are Custom Commands in WebdriverIO?
Custom commands are user-defined functions that extend WebdriverIO's default command set with reusable logic tailored to your specific testing needs. Instead of repeating the same code for common tasks (like login flows, form submissions, or complex assertions) across multiple tests, you can encapsulate the logic into a command and call it throughout your test suite, improving maintainability and readability.
How to Create Custom Commands in WebdriverIO
Basic Syntax of a Custom Command
You can define custom commands inside the WebdriverIO configuration or in a separate file.
Syntax for Registering a Custom Command:

browser.addCommand('login', async (username, password) => {
    await $('#username').setValue(username);
    await $('#password').setValue(password);
    await $('#login-button').click();
});

In this example:
The custom login command accepts username and password as parameters.
It interacts with the username and password fields, then clicks the login button.
Using the Custom Command in Tests
Once added, you can use the login command as part of your test scripts:

it('should login with valid credentials', async () => {
    await browser.url('https://example.com/login');
    await browser.login('testuser', 'securepassword');
});
Adding Commands to Specific Elements
You can also define commands for specific WebdriverIO elements:
browser.addCommand('waitAndClick', async function () {
await this.waitForDisplayed();
await this.click();
}, true); // Pass `true` to make it an element-level command
// Usage in test
it('should wait and click on the button', async () => {
const button = await $('#submit-button');
await button.waitAndClick();
});
Best Practices for Custom Commands
Encapsulate complex logic: Commands should handle intricate or repetitive flows like login, navigation, or data setup.
Promote test readability: Use descriptive names for commands to make tests more intuitive.
Keep commands modular: Create commands that handle small, discrete tasks to avoid bloated logic.
Use error handling: Ensure commands account for potential issues, such as missing elements or timeouts.

Benefits of Custom Commands
Improves Reusability: You can reuse custom commands across multiple test files, reducing code duplication.
Increases Test Readability: By abstracting complex flows into commands, test cases become easier to understand.
Centralised Maintenance: Any changes in the logic (e.g., element locators) need to be updated only once within the command.
Supports Complex Scenarios: Commands allow the combination of multiple WebdriverIO commands for more sophisticated test flows.
Example: Login Command with Error Handling
browser.addCommand('safeLogin', async (username, password) => {
await $('#username').setValue(username);
await $('#password').setValue(password);
await $('#login-button').click();
const errorMessage = await $('#error-message');
if (await errorMessage.isDisplayed()) {
throw new Error('Login failed: Invalid credentials');
}
});
// Usage
it('should login safely', async () => {
await browser.safeLogin('invalidUser', 'wrongPassword');
});
Custom Commands and Parallel Execution
Custom commands are especially useful when running tests in parallel. Encapsulating logic into commands helps ensure consistency across different threads and simplifies debugging.

Where to Define Custom Commands?
1. In the Configuration File (wdio.conf.js): Good for project-wide custom commands.
2. In Helper Files or Page Objects: Useful for project-specific flows. For example, define a login command in the LoginPage object.
Enhance Debugging with Screenshots in WebdriverIO
Taking screenshots during test execution is a critical strategy to improve debugging by capturing the application state at key moments. Screenshots provide visual feedback that helps identify issues like UI changes, element loading failures, or test flakiness. Here's how you can use screenshots effectively with WebdriverIO.

How to Capture Screenshots with WebdriverIO
Taking Full Page Screenshots
You can use browser.saveScreenshot() to capture the entire visible part of the page.

it('should take a full-page screenshot', async () => {
    await browser.url('https://example.com');
    await browser.saveScreenshot('./screenshots/fullPage.png');
});
Capturing Element-Level Screenshots
You can also capture a specific element’s screenshot, which is useful
for debugging element-specific issues.
it('should take a screenshot of a specific element', async () => {
const logo = await $('#logo');
await logo.saveScreenshot('./screenshots/logo.png');
});
Saving Screenshots on Test Failures (Using Hooks)
To automatically capture screenshots when a test fails, you can use
WebdriverIO’s hooks like afterTest in the configuration file.
afterTest: async function (test, context, { passed }) {
if (!passed) {
const screenshotPath = `./screenshots/${test.title}.png`;
await browser.saveScreenshot(screenshotPath);
console.log(`Saved screenshot for failed test: ${test.title}`);
}
}
Best Practices for Using Screenshots for Debugging
Use Descriptive File Names
Save screenshots with dynamic names based on the test title or timestamp to easily identify them in large test suites.

// Strip characters that are invalid in file names on some platforms
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
await browser.saveScreenshot(`./screenshots/test_${timestamp}.png`);
Capture Screenshots During Critical Flows
Take screenshots at important checkpoints in your tests, such as after page navigation or form submission, to trace where failures occur.

it('should navigate and verify screenshot', async () => {
    await browser.url('https://example.com');
    await browser.saveScreenshot('./screenshots/page_loaded.png');
    await $('#submit').click();
    await browser.saveScreenshot('./screenshots/after_click.png');
});
Integrate Screenshots with CI/CD Pipelines
Store screenshots in your CI/CD reports (e.g., Jenkins or GitHub Actions) for easier debugging.
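For example, in GitHub Actions the screenshots directory can be published as a build artifact when a job fails (the path here is an assumption matching the examples above):

```yaml
- name: Upload screenshots on failure
  if: failure()
  uses: actions/upload-artifact@v4
  with:
    name: wdio-screenshots
    path: ./screenshots/
```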
Combine Screenshots with Logs and Videos
Use cloud platforms like BrowserStack or LambdaTest to capture screenshots alongside video recordings for enhanced debugging. These services also provide automatic screenshots for failed tests, network logs, and browser console logs.
Example: Parallel Tests with Screenshots
If you're running tests in parallel (e.g., on BrowserStack), it's important to handle screenshots carefully to avoid name collisions between threads. Use a unique ID or timestamp in the file names.

const workerID = browser.capabilities['bstack:options']?.sessionName || 'default';
const screenshotPath = `./screenshots/${workerID}_${Date.now()}.png`;
await browser.saveScreenshot(screenshotPath);
Benefits of Debugging with Screenshots
Visual Context: Provides a clear view of what the user interface looked like during the test failure.
Faster Issue Resolution: Allows developers and testers to quickly spot UI issues without reproducing the test manually.
Reduces Flakiness: Helps identify subtle UI changes or timing issues that could cause flaky tests.
Seamless CI Integration: Automated screenshots provide immediate insights in CI reports.
Optimise Test Structure for Readability in WebdriverIO
Creating a well-structured test suite is essential for improving readability, making your code easier to maintain, debug, and scale. An optimised test structure ensures that tests remain intuitive for both current and future team members.

Key Practices for Improving Test Readability
Use Descriptive Test Names
Test names should clearly describe the purpose and expected behaviour of the test.
Example:
it('should display an error message for invalid login', async () => {
    // Test logic here
});

Organize Tests Using Describe Blocks
Group related tests logically using describe blocks to improve clarity.
Example:
describe('Login Page Tests', () => {
    it('should load the login page successfully', async () => { /*...*/ });
    it('should display an error for invalid credentials', async () => { /*...*/ });
});
Keep Tests Short and Focused
Each test should ideally verify one behaviour or feature to keep it focused. Long tests are harder to read and maintain.
Implement the Page Object Model (POM)
Use the Page Object Model to separate UI elements and logic
from test scripts, improving code readability and maintainability.
Example:
class LoginPage {
get username() { return $('#username'); }
get password() { return $('#password'); }
get loginButton() { return $('#login-button'); }
async login(user, pass) {
await this.username.setValue(user);
await this.password.setValue(pass);
await this.loginButton.click();
}
}
const loginPage = new LoginPage();
Use Hooks for Setup and Cleanup
Use before and after hooks to handle test setup and teardown logic, keeping tests focused only on the actual behaviour they validate.
Example:
before(async () => {
    await browser.url('https://example.com');
});

after(async () => {
    await browser.deleteSession();
});
Modularize Repetitive Logic Using
Custom Commands
Use WebdriverIO’s custom commands to encapsulate frequently
used logic and make tests cleaner.
Example:
browser.addCommand('loginAsAdmin', async () => {
await browser.url('/login');
await $('#username').setValue('admin');
await $('#password').setValue('admin123');
await $('#login-button').click();
});
it('should allow admin to login', async () => {
await browser.loginAsAdmin();
expect(await browser.getTitle()).toBe('Admin Dashboard');
});
Avoid Hardcoding Test Data
Use external data files or configuration files to manage test data, reducing duplication and increasing flexibility.
Example:
const credentials = require('./data/credentials.json');

it('should login with valid credentials', async () => {
    await loginPage.login(credentials.username, credentials.password);
});
Consistent Naming Conventions
Follow consistent naming patterns for test files, functions, variables, and Page Objects. This makes code easier to read and understand.

Add Meaningful Assertions
Ensure your test assertions reflect the intent of the test, so anyone reading the code can understand what's being validated.
Example:
expect(await $('.error-message').getText()).toBe('Invalid username or password');
Benefits of an Optimized Test Structure
Easier Maintenance: A clearer structure allows easier updates and bug fixes.
Reduced Duplication: Using Page Objects, custom commands, and data files minimises redundant code.
Faster Onboarding: New team members can quickly understand and contribute to the test suite.
Improved Debugging: Cleaner, focused tests make it easier to identify the root cause of issues.
Conclusion
In this guide, we've explored what makes WebdriverIO (WDIO) a standout choice for automation testing. We've covered essential practices such as setting up your testing environment and utilising the Page Object Model (POM), which significantly enhance both the reliability and maintainability of your tests.

By applying techniques like custom waits and smart element locators, you can effectively address flakiness and improve test stability. Additionally, leveraging cloud platforms for cross-browser testing ensures that your application performs smoothly across various environments.

Overall, adopting these best practices will help you streamline your automation efforts and deliver high-quality software. By staying informed about these strategies, your team will be well-equipped to maximize the benefits of WDIO in your testing endeavors.
Witness how our meticulous approach and cutting-edge
solutions elevated quality and performance to new heights.
Begin your journey into the world of software testing excellence.
To know more, refer to Tools & Technologies & QA Services.
If you would like to learn more about the awesome services we
provide, be sure to reach out.
Happy Testing 🙂
TAGS:
MaximizingTest E…

PREVIOUS POST
 Git Commands for… 
NEXT POST
Related Blogs

More Related Content

PDF
Advanced Techniques to Build an Efficient Selenium Framework
PDF
WebdriverIO & JavaScript: The Perfect Duo for Web Automation
PPT
Test Automation Framework Online Training by QuontraSolutions
PDF
WebDriverIO Tutorial for Selenium Automation.pdf
PPTX
Automated Testing Of EPiServer CMS Sites
PPT
Designing a Test Automation Framework By Quontra solutions
PPTX
Qa process
PPTX
How to make a Load Testing with Visual Studio 2012
Advanced Techniques to Build an Efficient Selenium Framework
WebdriverIO & JavaScript: The Perfect Duo for Web Automation
Test Automation Framework Online Training by QuontraSolutions
WebDriverIO Tutorial for Selenium Automation.pdf
Automated Testing Of EPiServer CMS Sites
Designing a Test Automation Framework By Quontra solutions
Qa process
How to make a Load Testing with Visual Studio 2012

Similar to Building a Robust WebDriverIO Test Automation Framework (20)

PPTX
Qa process
PDF
Streamline Testing: Transition from Manual to Automation with Selenium & C#
PDF
Streamline Testing: Transition from Manual to Automation with Selenium & C#
PDF
What is a Test Automation framework.pdf
PPT
Unit Testing
PPTX
Deep Dive Modern Apps Lifecycle with Visual Studio 2012: How to create cross ...
PPTX
Mastering Test Automation: How To Use Selenium Successfully
PDF
Test Automation Framework Design | www.idexcel.com
PDF
Best Practices for Selenium Test Automation in Python
PPTX
Automation, Selenium Webdriver and Page Objects
PDF
Effective testing of rich internet applications
PPTX
Selenium Tutorial for Beginners | Automation framework Basics
PDF
Test Automation Frameworks- The Complete Guide.pdf
DOC
New features in qtp11
PPT
Unit Testing Documentum Foundation Classes Code
PPT
ASP.NET OVERVIEW
PPT
Test Automation Framework Development Introduction
PPT
Selenium-Webdriver With PHPUnit Automation test for Joomla CMS!
PPT
Unit Testing DFC
PPT
XML2Selenium Technical Presentation
Qa process
Streamline Testing: Transition from Manual to Automation with Selenium & C#
Streamline Testing: Transition from Manual to Automation with Selenium & C#
What is a Test Automation framework.pdf
Unit Testing
Deep Dive Modern Apps Lifecycle with Visual Studio 2012: How to create cross ...
Mastering Test Automation: How To Use Selenium Successfully
Test Automation Framework Design | www.idexcel.com
Best Practices for Selenium Test Automation in Python
Automation, Selenium Webdriver and Page Objects
Effective testing of rich internet applications
Selenium Tutorial for Beginners | Automation framework Basics
Test Automation Frameworks- The Complete Guide.pdf
New features in qtp11
Unit Testing Documentum Foundation Classes Code
ASP.NET OVERVIEW
Test Automation Framework Development Introduction
Selenium-Webdriver With PHPUnit Automation test for Joomla CMS!
Unit Testing DFC
XML2Selenium Technical Presentation
Ad

More from digitaljignect (20)

PDF
Examples of SOLID Principles in Test Automation
PDF
Rest Assured Basics: A Beginner's Guide to API Testing in Java
PDF
A Beginner's Guide to API Testing in Postman
PDF
Boosting QA Efficiency: Benefits of Cypress for API Automation
PDF
Everything You Need to Know About Functional Testing: A Guide
PDF
Git Commands for Test Automation: Best Practices & Techniques
PDF
Cypress Automation : Increase Reusability with Custom Commands
PDF
Top CI/CD Tools Every QA Automation Engineer Should Use
PDF
Effortless Test Reporting in Selenium Automation
PDF
Cypress Test Automation: Managing Complex Interactions
PDF
Advanced Mobile Automation with Appium & WebdriverIO
PDF
Optimizing Cypress Automation: Fix Flaky Tests & Timeouts
PDF
Advanced Test Automation: WDIO with BDD Cucumber
PDF
Advanced Selenium Automation with Actions & Robot Class
PDF
Visual Regression Testing Using Selenium AShot: A Step-by-Step Approach
PDF
Mastering BDD with Cucumber & Java for Test Automation
PDF
Automated Visual Testing with Selenium & Applitools
PDF
AI in Modern Software Testing: Smarter QA Today
PDF
Appium in Action: Automating Flutter & React Native Apps
PDF
Web Application Security Testing Guide | Secure Web Apps
Examples of SOLID Principles in Test Automation
Rest Assured Basics: A Beginner's Guide to API Testing in Java
A Beginner's Guide to API Testing in Postman
Boosting QA Efficiency: Benefits of Cypress for API Automation
Everything You Need to Know About Functional Testing: A Guide
Git Commands for Test Automation: Best Practices & Techniques
Cypress Automation : Increase Reusability with Custom Commands
Top CI/CD Tools Every QA Automation Engineer Should Use
Effortless Test Reporting in Selenium Automation
Cypress Test Automation: Managing Complex Interactions
Advanced Mobile Automation with Appium & WebdriverIO
Optimizing Cypress Automation: Fix Flaky Tests & Timeouts
Advanced Test Automation: WDIO with BDD Cucumber
Advanced Selenium Automation with Actions & Robot Class
Visual Regression Testing Using Selenium AShot: A Step-by-Step Approach
Mastering BDD with Cucumber & Java for Test Automation
Automated Visual Testing with Selenium & Applitools
AI in Modern Software Testing: Smarter QA Today
Appium in Action: Automating Flutter & React Native Apps
Web Application Security Testing Guide | Secure Web Apps
Ad

Recently uploaded (20)

PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PDF
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
Spectral efficient network and resource selection model in 5G networks
PPTX
Big Data Technologies - Introduction.pptx
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PDF
Encapsulation_ Review paper, used for researhc scholars
PPTX
Programs and apps: productivity, graphics, security and other tools
PDF
MIND Revenue Release Quarter 2 2025 Press Release
PPTX
sap open course for s4hana steps from ECC to s4
PPT
Teaching material agriculture food technology
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PDF
cuic standard and advanced reporting.pdf
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PPTX
Cloud computing and distributed systems.
PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
Mobile App Security Testing_ A Comprehensive Guide.pdf
Spectral efficient network and resource selection model in 5G networks
Big Data Technologies - Introduction.pptx
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
Encapsulation_ Review paper, used for researhc scholars
Programs and apps: productivity, graphics, security and other tools
MIND Revenue Release Quarter 2 2025 Press Release
sap open course for s4hana steps from ECC to s4
Teaching material agriculture food technology
Unlocking AI with Model Context Protocol (MCP)
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
cuic standard and advanced reporting.pdf
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Cloud computing and distributed systems.
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
Understanding_Digital_Forensics_Presentation.pptx
Digital-Transformation-Roadmap-for-Companies.pptx
Reach Out and Touch Someone: Haptics and Empathic Computing

Building a Robust WebDriverIO Test Automation Framework

  • 1. Efficient test automation is crucial for reliable software testing, and WebdriverIO provides a robust framework to achieve this. This blog will highlight best practices that enhance the performance and maintainability ofyour automation efforts. We’ll cover keytopics such as setting up yourtest environment, adopting the Page Object Model (POM) for bettertest organization, and leveraging WebdriverIO commands effectively. Additionally, we’ll explore strategies for parallel test execution to reduce runtime, best practices for locating elements to avoid flakiness, and optimizing test reliabilitywith custom waits. We’ll also address common pitfalls, cross-browsertesting integration with platforms like BrowserStack, and maintaining test stabilitywith retry AUTOMATED TESTING BEST PRACTICES WEBDRIVERIO WITH JAVASCRIPT BuildingaRobustTestAutomation FrameworkwithWebdriverIO:Best Practices • • BY QATEAM
  • 2. mechanisms. Bythe end ofthis blog, you’ll have practical insights to enhance your WebdriverIO automation strategy, ensuring a smoother and more efficient testing process. Table ofContents Setting Up YourTest Environment Efficiently Integrating the Page Object Model (POM) for BetterTest Organization 🎯Benefits of POM in Large-Scale Projects Example: Refactoring Tests Using POM 🎯Benefits of Refactoring with POM Efficient Use ofWebDriverIO Commands 🎗️ Best Practices for Locating Elements in WebDriverIO Strategies to Avoid Flaky Element Selectors Using Custom Locators Effectively in Complex Applications Optimizing Test Reliabilitywith Custom Waits Crafting Custom Wait Utilities for Flaky Scenarios Example: Custom waitForElementText Utility Implementing waitForShadowDom for Shadow DOM Elements Example: Custom waitForShadowDom Utility Avoiding Common Pitfalls in WDIO Tests Caching Elements—Why It’s a Bad Practice Example of Stale Element Issue: Optimizing Cross-BrowserTesting with WDIO Best Practices for Handling Browser Compatibility Example: Integrating BrowserStack and Sauce Labs Maintaining Test Stabilitywith Retry Mechanisms Configuring Retries in WebdriverIO (WDIO) Configuration
  • 3. Strategies for Reducing Flakiness in CI Pipelines Stabilize Test Data and Environments Optimise Test Execution Use Better Synchronization Techniques Monitor CI Infrastructure Performance Improve Test Quality Employ Smart Retries with CI Tools Utilize Hooks for Setup Tasks in WebdriverIO What Are Hooks in WebdriverIO? Types of Hooks in WebdriverIO Benefits of Using Hooks in WebdriverIO Best Practices for Using Hooks in WebdriverIO Use Custom Commands for Reusability What Are Custom Commands in WebdriverIO? What Are Custom Commands? Howto Create Custom Commands in WebdriverIO Basic Syntax of a Custom Command Using the Custom Command in Tests Adding Commands to Specific Elements Best Practices for Custom Commands Benefits of Custom Commands Example: Login Command with Error Handling Custom Commands and Parallel Execution Where to Define Custom Commands? Enhance Debugging with Screenshots in WebdriverIO Howto Capture Screenshots with WebdriverIO Use Descriptive File Names Example: Parallel Tests with Screenshots Optimise Test Structure for Readability in WebdriverIO Key Practices for Improving Test Readability Organize Tests Using Describe Blocks Keep Tests Short and Focused Implement the Page Object Model (POM) Use Hooks for Setup and Cleanup Modularize Repetitive Logic Using Custom Commands
  • 4. Avoid Hardcoding Test Data Consistent Naming Conventions Add Meaningful Assertions Benefits of an Optimized Test Structure Conclusion Setting UpYourTest Environment Efficiently To harness the full potential ofWDIO, you need a solid test environment setup. Here’s a step-by-step guide for setting it up. For setting up WebdriverIO you can refer our WebdriverIO Setup Integrating the Page Object Model (POM) forBetterTest Organization The Page Object Model (POM) is a design approach that improves test automation by organizing page elements and their Methods in separate files, awayfrom test files. This makes yourtests more manageable, especiallywhen working with applications that have multiple pages or complex workflows. 🎯Benefits ofPOM in Large-Scale Projects Centralized Maintenance UI changes are only updated in the relevant page object file, reducing effort. Code Reusability Page methods (like login or search) can be reused across multiple test cases.
  • 5. Improved Readability Test scripts become concise, focusing only on business logic and assertions. BetterScalability Adding new pages orfeatures becomes easier by extending existing page objects. Reduced Flakiness Encapsulating waits or interactions inside page objects makes tests more stable and reliable. Example: RefactoringTests Using POM Without POM (DirectTest Logic inTests) describe('Login Test', () => { it('should log in successfully', async () => { await browser.url('https://www.saucedemo.com/v1/'); const username = await $('#user-name'); const password = await $('#password'); const loginButton = await $('#login-button'); await username.setValue('standard_user'); await password.setValue('secret_sauce'); await loginButton.click(); const message = await $('//div[@class="product_label"]').getText(); expect(message).toBe('Products'); }); }); With POM (RefactoredApproach) Login Page Object (login.page.js): class LoginPage { get username() { return $('##user-name'); }
  • 6. get password() { return $('#password'); } get loginButton() { return $('#login-button'); } get welcomeMessage() { return $('//div[@class="product_label"]'); } async open() { await browser.url('https://www.saucedemo.com/v1/'); } async login(user, pass) { await this.username.setValue(user); await this.password.setValue(pass); await this.loginButton.click(); } async getWelcomeMessage() { return this.welcomeMessage.getText(); } } export default new LoginPage(); RefactoredTest Script (login.test.js) import login from '../../PageObjects/SauceLabPo/login.page.js'; describe('Login Test using POM', () => { it('should log in successfully', async () => { await login.open(); await login.login('standard_user', 'secret_sauce'); const message = await login.getWelcomeMessage(); expect(message).toBe('Products'); }); }); 🎯Benefits ofRefactoringwith POM Centralised Maintenance: Any change to the login form only
  • 7. requires updates in login.page.js. CleanTest Scripts: Tests nowfocus on validation ratherthan page interactions. Scalability: Adding newtests becomes easier by reusing the Login Page methods. Efficient Use ofWebDriverIO Commands Let’s explore some best practices for using WebDriverIO commands to enhance test efficiency and reliability. 1.WhentoAvoid Protocol Methods inWebDriverIO Protocol methods (e.g., browser.elementClick(), browser.executeScript()) communicate directlywith the WebDriver, bypassing WDIO’s built-in error handling, retries, and implicit waits. Using these methods can lead to flakytests, especially if elements aren’t available due to dynamic content or latency issues. WhentoAvoid: UI is notfullyloaded: Use WDIO commands like element.click() which automatically retry until the element is ready. Inconsistent behaviour: Use methods like .waitForDisplayed() to ensure stability before performing actions. const button = await $('#login-button'); // Avoid this await browser.elementClick(button.elementId); // Use this instead await button.click(); // WDIO retries on failure
  • 8. 2. Use of.waitForand DynamicWaits overStaticTimeouts Static waits (browser.pause()) halt execution for a fixed duration, slowing tests unnecessarily. Dynamic waits, such as .waitForDisplayed(), only pause until an element is ready, improving both stability and speed. Using browser.pause() introduces a fixedwaittime (e.g., pause(5000)), which slows down tests unnecessarily and makes them fragile. Even if an element becomes available earlier, the test will still wait forthe full duration, increasing execution time. BetterAlternative: Use dynamic waits such as .waitForDisplayed() to pause only as long as needed. Example: const message = await $('#login-button'); await message.waitForDisplayed({ timeout: 5000 }); // Waits up to 5 seconds // Avoid this await browser.pause(5000); // Always waits 5 seconds, even if unnecessary 3. ConfiguringWDIOforParallel Execution Running tests in parallel shortens test execution time, especiallyfor large suites. Configure parallel execution in the wdio.conf.js file using the maxInstances setting. Parallel execution enables WebDriverIO to run multiple test cases or browser instances simultaneously. Instead of running tests
  • 9. sequentially (one after another), it divides the workload across available instances (browsers or devices), significantly reducing overall test time. Forexample: Ifyou have 10 test files and set maxInstances: 5, WDIO will launch 5 tests at once, then start the next 5 when the first batch completes. In cloud platforms like BrowserStack, parallel execution spreads tests across multiple devices or browsers, ensuring faster coverage and scalability. exports.config = { maxInstances: 5, // Run 5 tests in parallel capabilities: [{ browserName: 'chrome' }], }; 4. ReducingTest Runtime in CI Environmentswith Parallel Workers When running tests in CI/CD, use parallelworkers to distribute tests across multiple runners. Tools like Jenkins, GitHub Actions, or GitLab CI allow splitting test suites bytags or groups. Example: Split tests using tags or groups. npx wdio run wdio.conf.js --suite login npx wdio run wdio.conf.js --suite checkout In your CI pipeline, assign suites to parallel workers:
  • 10. jobs: test-login: runs-on: ubuntu-latest steps: - run: npx wdio run wdio.conf.js --suite login test-checkout: runs-on: ubuntu-latest steps: - run: npx wdio run wdio.conf.js --suite checkout Using parallel workers reduces the execution time by distributing tests across multiple agents, which is essential forfast feedback in CI pipelines. 🎗️ Best Practices forLocating Elements in WebDriverIO Strategies toAvoid FlakyElement Selectors Flaky selectors can break tests when UI elements change. Here are key strategies to make selectors reliable: Use UniqueAttributes: Prefer id, data-testid, or custom attributes (e.g., data-test) over CSS classes, which may change during UI updates. AvoidAbsolute XPaths: Instead, use relative XPath (e.g., //button[text()=’Submit’]). WaitforElement States: Use dynamic waits like .waitForDisplayed() to ensure elements are ready. Use CSS overXPath: CSS selectors are often faster and more readable.
  • 11. // Good: Reliable CSS selector with custom attribute const submitButton = await $('[data-test="submit-button"]'); // Avoid: Unreliable XPath with complex hierarchy const submitButton = await $('//div[2]/form/button[1]'); Using Custom Locators Effectivelyin ComplexApplications In complex UIs, elements may not have unique attributes. You can define custom locators to improve test reliability. Example ofCustom Locator: // Define custom selector logic (e.g., locating element by partial text) browser.addLocatorStrategy('partialText', async (text) => { const elements = await $$('*'); // Select all elements return elements.filter(async (el) => (await el.getText()).includes(text)); }); // Use custom locator in test const element = await browser.$('partialText=Welcome'); await element.click(); This approach makes interacting with tricky elements simpler and more maintainable overtime. OptimizingTest Reliabilitywith Custom
  • 12. Waits Custom wait utilities improve test stability, especially in scenarios where standard wait methods (like .waitForDisplayed()) aren’t sufficient. Crafting Custom Wait Utilities forFlaky Scenarios Sometimes, elements maytake longerto appear or change state due to dynamic content, animations, ornetworkdelays. A custom wait utility ensures yourtests only proceed when specific conditions are met, reducing flakiness. Example: CustomwaitForElementText Utility async function waitForElementText(selector, expectedText, timeout = 5000) { await browser.waitUntil( async () => (await $(selector).getText()) === expectedText, { timeout, timeoutMsg: `Text not found: ${expectedText}` } ); } Usage : await waitForElementText('#status', 'Success', 3000); ImplementingwaitForShadowDom for ShadowDOM Elements
  • 13. Shadow DOM elements are encapsulated and require special handling. A custom wait method ensures you can reliably interact with them Example: CustomwaitForShadowDom Utility async function waitForShadowDom(selector, timeout = 5000) { await browser.waitUntil( async () => { const shadowRoot = await browser.execute((el) => el.shadowRoot, $(selector)); return shadowRoot !== null; }, { timeout, timeoutMsg: `Shadow DOM not found for ${selector}` } ); } Usage: await waitForShadowDom('#shadow-host'); Avoiding Common Pitfalls in WDIOTests Caching Elements—WhyIt’s a Bad Practice Caching elements means storing references to them (e.g., const button = $(‘#btn’);) and reusing them throughout the test. This practice is problematic because DOM elements maychange between interactions (due to re-renders or state changes), causing stale element exceptions.
  • 14. Example ofStale Element Issue: // Cache element reference const button = await $('#btn'); // If the DOM updates, this button reference becomes stale await button.click(); // Might throw an error Solution: Always fetch elementsfresh right before interacting with them. // Get element fresh before each interaction await $('#btn').click(); Avoiding these pitfalls helps keep tests fast, maintainable, and stable, reducing flakiness in WebDriverIO automation. Optimizing Cross-BrowserTestingwith WDIO Best Practices forHandling Browser Compatibility Use StandardWeb Locators: Avoid browser-specific selectors that may behave differently across browsers. Incorporate DynamicWaits: Different browsers may render elements at different speeds. Use .waitForDisplayed() instead of pause(). Set Browser-Specific Capabilities: Define capabilities for browsers (like Chrome, Firefox) to handle known differences. Enable HeadlessTesting: Use headless mode in CI pipelines to
  • 15. speed up cross-browsertests. Example: Integrating BrowserStack and Sauce Labs BrowserStackConfiguration inwdio.conf.js: exports.config = { user: process.env.BROWSERSTACK_USERNAME, key: process.env.BROWSERSTACK_ACCESS_KEY, services: ['browserstack'], capabilities: [ { browserName: 'chrome', os: 'Windows', os_version: '10' }, { browserName: 'firefox', os: 'OS X', os_version: 'Monterey' }, ], }; With services like BrowserStack or Sauce Labs, you can run tests across multiple browsers and platforms without managing local environments. This ensures better compatibilitycoverage and fasterfeedback in CI/CD pipelines. MaintainingTest Stabilitywith Retry Mechanisms Configuring Retries in WebdriverIO (WDIO) Configuration You can implement retries in the WDIO configuration to rerun failed tests. Here’s howto do it: Example: wdio.conf.js
  • 16. exports.config = { // Retry failed specs at the suite level mochaOpts: { retries: 2, // Retries the entire suite 2 times }, // Retry failed tests at the spec level specFileRetries: 2, // Retries individual spec files specFileRetriesDelay: 5, // Time delay (in seconds) before a retry // Retry tests based on worker level (optional) specFileRetriesDeferred: true, // Defers retries to the end of the run // Other configurations runner: 'local', framework: 'mocha', // or 'cucumber', 'jasmine' capabilities: [{ maxInstances: 5, browserName: 'chrome', } ], reporters: ['spec'], }; Explanation: mochaOpts.retries: Retries the entire test suite if anytest fails. specFileRetries: Retries a particulartest/spec file. specFileRetriesDelay: Introduces a delay before retrying the spec file. specFileRetriesDeferred: Iftrue, retries are deferred until all othertests have run. This helps maintain test stability by reducing transient failures, especially useful in CI pipelines. Strategies forReducing Flakiness in CI
  • 17. Pipelines Here are some strategies to reduce flakiness in CI: StabilizeTest Data and Environments Use MockData: Avoid relying on external systems by mocking APIs and databases. IsolateTest Environments: Run tests on fresh, isolated environments (e.g., Docker containers). SetTimeouts Carefully: Adjust timeouts based on expected response times and network variability. OptimiseTest Execution Parallel Execution: Run tests in parallel to minimise dependencies. Rerun FailedTests: Use retries for intermittent issues, as configured above. Queue Management: Limit parallel jobs ifyour CI/CD infrastructure faces bottlenecks. Use BetterSynchronizationTechniques Avoid HardWaits: Replace static waits with WebDriverwaits (e.g., waitForDisplayed). PollforState Changes: Use retries or polling for state- dependent elements. MonitorCI Infrastructure Performance Reduce BrowserResource Usage: Use headless browsers or adjust resolution to save resources. Detect Bottlenecks: Analyze job durations and resource utilization to identify bottlenecks.
  • 18. ImproveTest Quality ModularizeTests: Break down large tests into smaller, independent ones. Handle External Dependencies Gracefully: Add fallbacks for API rate limits ortimeouts. Log and Debug Better: Enable logs, screenshots, orvideo capture forfailed tests to make debugging easier. EmploySmart Retrieswith CITools Use tools like Jenkins, GitHubActions, or CircleCI to configure test reruns based on exit codes. Example: Use retry plugins in Jenkins orworkflows with retry steps in GitHub Actions. Utilize Hooks forSetupTasks in WebdriverIO WhatAre Hooks in WebdriverIO? Hooks are lifecycle methods that run before or after specific events in a test’s execution, like test suites, test cases, or session initialization. They help streamline repetitive tasks like test environment setup, logging, and resource cleanup. Types ofHooks in WebdriverIO beforeSession and afterSession Use case: Setup and teardown tasks that need to run once per session. Example: Configuring environment variables or clearing logs.
  • 19. beforeSession: function (config, capabilities, specs) { console.log("Starting a new test session."); // Set environment-specific variables process.env.TEST_ENV = 'staging'; }, afterSession: function (config, capabilities, specs) { console.log("Test session ended."); // Perform cleanup tasks } before and afterHooks Use case: Run setup/cleanup logic before or after all tests in a suite. Example: Database connection orAPI token generation. beforeSuite: function (suite) { console.log(`Preparing suite: ${suite.title}`); // Seed database with mock data seedDatabase(); }, afterSuite: function (suite) { console.log(`Finished suite: ${suite.title}`); // Clear any suite-specific data } beforeSuite and afterSuite Use case: Manage pre-test preparations for a specific suite. Example: Seeding test data or resetting a particular app state beforeSuite: function (suite) { console.log(`Preparing suite: ${suite.title}`); // Seed database with mock data seedDatabase(); }, afterSuite: function (suite) {
  • 20. console.log(`Finished suite: ${suite.title}`); // Clear any suite-specific data } beforeTest and afterTest Use case: Handle setup/cleanup at the individual test level. Example: Resetting app state before each test or capturing. beforeTest: function (test) { console.log(`Starting test: ${test.title}`); // Reset app state before test browser.reloadSession(); }, afterTest: function (test, context, { error, result, duration, passed }) { if (!passed) { console.error(`Test failed: ${test.title}`); // Capture screenshot on failure browser.saveScreenshot(`./screenshots/${test.title}.png`); } } onComplete Hook Use case: Actions after all test executions, such as generating reports. onComplete: function (exitCode, config, capabilities) { console.log("All tests completed."); // Generate test report generateTestReport(); } Benefits ofUsing Hooks in WebdriverIO Code Reusability: Centralise common setup tasks, reducing
  • 21. duplication. ImprovedTest Reliability: Ensure the environment is ready before tests run. Clean Up Resources: Free up memory and avoid state issues by running teardown logic. Consistency: Reduce human error by automating initialization and teardown across all tests. Simplified CI/CD Pipelines: Automatically generate reports and manage logs at the session ortest level. Best Practices forUsing Hooks in WebdriverIO Avoid using browser.pause() inside hooks to maintain test speed. Use conditional logic for environment-specific setups (e.g., different actions for staging vs. production). Modularize reusablefunctions (e.g., seedDatabase()) to keep hook logic concise and maintainable. Capture relevanttest data (like screenshots or logs) in afterTest hooks forfailed tests. Use Custom Commands forReusability WhatAre Custom Commands in WebdriverIO? Custom commands allowyou to extend WebdriverIO’s default set of commands with reusable logic tailored to your specific testing needs. Instead of repeating the same code in multiple tests, you can encapsulate logic into a command and call it throughout yourtest suite, improving maintainability and readability.
  • 22. WhatAre Custom Commands? Custom commands are user-defined functions that extend WebdriverIO’s command set. These commands allowyou to perform repeated tasks (like login flows, form submissions, or complex assertions) without duplicating code. Howto Create Custom Commands in WebdriverIO Basic Syntax ofa Custom Command You can define custom commands inside the WebdriverIO configuration or in a separate file. SyntaxforRegistering a Custom Command: browser.addCommand('login', async (username, password) => { await $('#username').setValue(username); await $('#password').setValue(password); await $('#login-button').click(); }); Inthis example: The custom login command accepts username and password as parameters. It interacts with the username and password fields, then clicks the login button. Using the Custom Command inTests Once added, you can use the login command as part ofyourtest
  • 23. scripts: it('should login with valid credentials', async () => { await browser.url('https://example.com/login'); await browser.login('testuser', 'securepassword'); }); Adding Commands to Specific Elements You can also define commands for specific WebdriverIO elements: browser.addCommand('waitAndClick', async function () { await this.waitForDisplayed(); await this.click(); }, true); // Pass `true` to make it an element-level command // Usage in test it('should wait and click on the button', async () => { const button = await $('#submit-button'); await button.waitAndClick(); }); Best Practices forCustom Commands Encapsulate complex logic: Commands should handle intricate or repetitive flows like login, navigation, or data setup. Promotetest readability: Use descriptive names for commands to make tests more intuitive. Keep commands modular: Create commands that handle small, discrete tasks to avoid bloated logic. Use errorhandling: Ensure commands account for potential issues, such as missing elements ortimeouts.
  • 24. Benefits ofCustom Commands Improves Reusability: You can reuse custom commands across multiple test files, reducing code duplication. IncreasesTest Readability: By abstracting complex flows into commands, test cases become easierto understand. Centralised Maintenance: Any changes in the logic (e.g., element locators) need to be updated only once within the command. Supports Complex Scenarios: Commands allowthe combination of multiple Webdriver commands for more sophisticated test flows. Example: Login Commandwith ErrorHandling browser.addCommand('safeLogin', async (username, password) => { await $('#username').setValue(username); await $('#password').setValue(password); await $('#login-button').click(); const errorMessage = await $('#error-message'); if (await errorMessage.isDisplayed()) { throw new Error('Login failed: Invalid credentials'); } }); // Usage it('should login safely', async () => { await browser.safeLogin('invalidUser', 'wrongPassword'); }); Custom Commands and Parallel Execution WebdriverIO commands are especially useful when running tests in parallel. Encapsulating logic into commands helps ensure consistency across different threads and simplifies debugging.
Where to Define Custom Commands?

1. In the configuration file (wdio.conf.js): good for project-wide custom commands.
2. In helper files or Page Objects: useful for project-specific flows. For example, define a login command in the LoginPage object.

Enhance Debugging with Screenshots in WebdriverIO

Taking screenshots during test execution is a critical strategy for improving debugging, because it captures the application state at key moments. Screenshots provide visual feedback that helps identify issues such as UI changes, element loading failures, or test flakiness. Here's how you can use screenshots effectively with WebdriverIO.

How to Capture Screenshots with WebdriverIO

Taking Full-Page Screenshots

Use browser.saveScreenshot() to capture the entire visible part of the page:

```javascript
it('should take a full-page screenshot', async () => {
  await browser.url('https://example.com');
  await browser.saveScreenshot('./screenshots/fullPage.png');
});
```

Capturing Element-Level Screenshots

You can also capture a screenshot of a specific element, which is useful for debugging element-specific issues:

```javascript
it('should take a screenshot of a specific element', async () => {
  const logo = await $('#logo');
  await logo.saveScreenshot('./screenshots/logo.png');
});
```

Saving Screenshots on Test Failures (Using Hooks)

To automatically capture a screenshot whenever a test fails, use WebdriverIO's hooks, such as afterTest in the configuration file:

```javascript
afterTest: async function (test, context, { passed }) {
  if (!passed) {
    const screenshotPath = `./screenshots/${test.title}.png`;
    await browser.saveScreenshot(screenshotPath);
    console.log(`Saved screenshot for failed test: ${test.title}`);
  }
}
```

Best Practices for Using Screenshots for Debugging

Use Descriptive File Names

Save screenshots with dynamic names based on the test title or a timestamp so they are easy to identify in large test suites:

```javascript
// Replace ':' because ISO timestamps contain characters that are
// invalid in Windows file names.
const timestamp = new Date().toISOString().replace(/:/g, '-');
await browser.saveScreenshot(`./screenshots/test_${timestamp}.png`);
```
Capture Screenshots During Critical Flows

Take screenshots at important checkpoints in your tests, such as after page navigation or form submission, to trace where failures occur:

```javascript
it('should navigate and verify screenshot', async () => {
  await browser.url('https://example.com');
  await browser.saveScreenshot('./screenshots/page_loaded.png');
  await $('#submit').click();
  await browser.saveScreenshot('./screenshots/after_click.png');
});
```

Integrate Screenshots with CI/CD Pipelines

Store screenshots in your CI/CD reports (e.g., Jenkins or GitHub Actions) for easier debugging.

Combine Screenshots with Logs and Videos

Use cloud platforms such as BrowserStack or LambdaTest to capture screenshots alongside video recordings for richer debugging context. These services also provide automatic screenshots for failed tests, network logs, and browser console logs.

Example: Parallel Tests with Screenshots

If you are running tests in parallel (e.g., on BrowserStack), handle screenshot file names carefully to avoid collisions between threads. Use a unique ID or timestamp in the file names:

```javascript
const workerID = browser.capabilities['bstack:options'].sessionName || 'default';
const screenshotPath = `./screenshots/${workerID}_${Date.now()}.png`;
await browser.saveScreenshot(screenshotPath);
```

Benefits of Debugging with Screenshots

- Visual context: provides a clear view of what the user interface looked like at the moment of failure.
- Faster issue resolution: lets developers and testers quickly spot UI issues without reproducing the test manually.
- Reduces flakiness: helps identify subtle UI changes or timing issues that could cause flaky tests.
- Seamless CI integration: automated screenshots provide immediate insights in CI reports.

Optimise Test Structure for Readability in WebdriverIO

A well-structured test suite is essential for readability, making your code easier to maintain, debug, and scale. An optimised test structure keeps tests intuitive for both current and future team members.

Key Practices for Improving Test Readability

Use Descriptive Test Names

Test names should clearly describe the purpose and expected behaviour of the test. Example:

```javascript
it('should display an error message for invalid login', async () => {
  // Test logic here
});
```

Organize Tests Using Describe Blocks

Group related tests logically using describe blocks to improve clarity. Example:

```javascript
describe('Login Page Tests', () => {
  it('should load the login page successfully', async () => { /* ... */ });
  it('should display an error for invalid credentials', async () => { /* ... */ });
});
```

Keep Tests Short and Focused

Each test should ideally verify one behaviour or feature. Long tests are harder to read and maintain.

Implement the Page Object Model (POM)

Use the Page Object Model to separate UI elements and interaction logic from test scripts, improving code readability and maintainability. Example:

```javascript
class LoginPage {
  get username() { return $('#username'); }
  get password() { return $('#password'); }
  get loginButton() { return $('#login-button'); }

  async login(user, pass) {
    await this.username.setValue(user);
    await this.password.setValue(pass);
    await this.loginButton.click();
  }
}

// Export a singleton so spec files can require it
module.exports = new LoginPage();
```

Use Hooks for Setup and Cleanup

Use before and after hooks to handle test setup and teardown logic, keeping tests focused on the behaviour they actually validate. Example:

```javascript
before(async () => {
  await browser.url('https://example.com');
});

after(async () => {
  await browser.deleteSession();
});
```

Modularize Repetitive Logic Using Custom Commands

Use WebdriverIO's custom commands to encapsulate frequently used logic and keep tests clean.
Example:

```javascript
browser.addCommand('loginAsAdmin', async () => {
  await browser.url('/login');
  await $('#username').setValue('admin');
  await $('#password').setValue('admin123');
  await $('#login-button').click();
});

it('should allow admin to login', async () => {
  await browser.loginAsAdmin();
  expect(await browser.getTitle()).toBe('Admin Dashboard');
});
```

Avoid Hardcoding Test Data

Use external data files or configuration files to manage test data, reducing duplication and increasing flexibility. Example:

```javascript
const credentials = require('./data/credentials.json');

it('should login with valid credentials', async () => {
  await loginPage.login(credentials.username, credentials.password);
});
```

Consistent Naming Conventions

Follow consistent naming patterns for test files, functions, variables, and Page Objects. This makes the code easier to read and understand.
Add Meaningful Assertions

Make sure your assertions reflect the intent of the test, so anyone reading the code can understand what is being validated. Example:

```javascript
expect(await $('.error-message').getText()).toBe('Invalid username or password');
```

Benefits of an Optimized Test Structure

- Easier maintenance: a clearer structure makes updates and bug fixes simpler.
- Reduced duplication: Page Objects, custom commands, and data files minimise redundant code.
- Faster onboarding: new team members can quickly understand and contribute to the test suite.
- Improved debugging: clean, focused tests make it easier to identify the root cause of issues.

Conclusion

In this guide, we've explored what makes WebdriverIO (WDIO) a standout choice for automation testing. We've covered essential practices such as setting up your testing environment and utilising the Page Object Model (POM), both of which significantly enhance the reliability and maintainability of your tests. By applying techniques like custom waits and smart element locators, you can effectively address flakiness and improve test stability. Additionally, leveraging cloud platforms for cross-browser testing ensures that your application performs smoothly across various environments.

Overall, adopting these best practices will help you streamline your automation efforts and deliver high-quality software. By staying informed about these strategies, your team will be well equipped to maximize the benefits of WDIO in your testing endeavors.

Witness how our meticulous approach and cutting-edge solutions elevated quality and performance to new heights. Begin your journey into the world of software testing excellence. To know more, refer to Tools & Technologies & QA Services.

If you would like to learn more about the awesome services we provide, be sure to reach out.

Happy Testing 🙂