Last week on Dev Tester, we began talking about data-driven testing and how it can help your team keep your test suite organized and easy to extend. Many organizations practice data-driven testing because it lets them run groups of automated tests against different sets of data without having to dig into the code.

Last week's article covered how to use JSON and XML files, two common file types used for testing, to drive your TestCafe tests. It showed how to leverage Node.js, which TestCafe is built on, to read and process data from these files and use them in your tests. With some minor setup, TestCafe can easily handle data-driven testing.

This week, we'll continue the series on data-driven testing with TestCafe, showing how to use CSV and Excel files in your tests. The examples shown in this article resume from the examples in Part 1 of this series, refactoring existing tests using data-driven techniques.

Data-driven testing using CSV files

One file type you'll likely encounter for storing and retrieving data is the comma-separated values file, better known as a CSV file. This file type is a plain text file containing records separated by a delimiter, most often a comma. Its simplicity and widespread support across platforms and systems make it a preferred format for data-driven testing on many teams.

As with the previous examples from last week's article, the CSV file we'll use to refactor our tests contains the email addresses and passwords for the tests. However, the CSV file also contains a header row for classifying the field names. The data in the header row won't be part of the tests themselves, but it will come in handy when parsing the file.

"email","password"
"[email protected]","airportgap123"
"[email protected]","airportgaptest1"
"[email protected]","airportgaptest2"
"[email protected]","airportgaptest3"

As with XML files, neither Node.js nor TestCafe has built-in support for processing CSV files, so we'll need a third-party library to take care of this. Fortunately, Node.js has no shortage of CSV parsers to choose from.
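To see why a dedicated parser is worth the extra dependency, consider a naive approach that simply splits each line on commas. (This is a sketch for illustration only; the function name is made up, and the code is not production-ready.)

```javascript
// Naive CSV "parsing": split lines on newlines, fields on commas.
function naiveParse(csv) {
  return csv.trim().split("\n").map(line => line.split(","));
}

// Fine for simple data:
console.log(naiveParse("email,password\nuser@example.com,secret"));
// [ [ 'email', 'password' ], [ 'user@example.com', 'secret' ] ]

// But a quoted field containing the delimiter breaks it:
console.log(naiveParse('"Doe, Jane",secret')[0]);
// [ '"Doe', ' Jane"', 'secret' ] - three fields instead of two
```

A real parser handles quoting, escaping, and alternative delimiters for you, which is why it's worth reaching for a library here.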

The library we'll use in this example is called Papa Parse. It's a fast CSV parser with a simple, straightforward syntax. Papa Parse works similarly to the xml2json library shown in the previous article: it parses CSV-formatted data and converts it into a JavaScript object that we can use in TestCafe.

First, install the library with the command npm install papaparse in the root directory containing the TestCafe tests. Then we can import the library to parse our CSV and get a JavaScript object containing the data needed to run the tests. With this object, we can iterate through the data, creating and running a test case for each record:

import loginPageModel from "./page_models/login_page_model";

const fs = require("fs");
const parser = require("papaparse");

const csvFile = fs.readFileSync("./data/users.csv", { encoding: "utf8" });

// skipEmptyLines prevents a trailing newline in the file from producing
// an empty record (and a spurious test) at the end of the data array.
const userData = parser.parse(csvFile, { header: true, skipEmptyLines: true });

fixture("Airport Gap Login (CSV)").page(
  "https://airportgap-staging.dev-tester.com/login"
);

userData.data.forEach(user => {
  test(`User with email ${user.email} can log in to their account`, async t => {
    await t
      .typeText(loginPageModel.emailInput, user.email)
      .typeText(loginPageModel.passwordInput, user.password)
      .click(loginPageModel.submitButton);

    await t.expect(loginPageModel.accountHeader.exists).ok();
    await t
      .expect(loginPageModel.accountEmail.innerText)
      .contains(user.email);
  });
});

If you followed the previous examples in Part 1 of this series, this pattern should start to look familiar. The setup code here is almost identical to the code for processing an XML file. Once again, we import the File System module from Node.js to read the CSV file using the readFileSync method. As mentioned in the XML example, keep an eye on the type of encoding you need for your particular use case.

With our CSV file stored in a string, we'll use the Papa Parse library to transform the string into a JavaScript object. One of Papa Parse's handy options is the ability to use the header row as the keys for the object. Setting the header option to true when calling Papa Parse's parse function returns a JavaScript object structured like this:

{
  data: [
    { email: '[email protected]', password: 'airportgap123' },
    { email: '[email protected]', password: 'airportgaptest1' },
    { email: '[email protected]', password: 'airportgaptest2' },
    { email: '[email protected]', password: 'airportgaptest3' }
  ]
}

Setting the header option means we don't need to process the header row separately, making it simple to grab the appropriate information for our tests. All that's left is to follow the same pattern as before: iterate through the object, create a test case for each record, and use the data from that record to run the test.

Data-driven testing using Excel files

Excel files aren't nearly as common as CSV files when it comes to testing. However, many organizations still use Excel files. In my experience, it's mostly because the person generating the file has no idea what a CSV file is. Still, it's worth covering how to use Excel files in TestCafe since there's a chance you'll run into an Excel file for your data-driven testing.

Using Excel files for data-driven testing can be a bit tricky, depending on the structure of the data. Some teams create multiple worksheets inside a single Excel file, add custom formatting, set up calculations and formulas, and so on. For simplicity, the example in this article uses an Excel (XLSX) file with a single worksheet called "Airport Gap Users" and no additional formatting.

Again, we'll use a third-party library to read the Excel file and extract the required data for our tests. One of the easiest libraries to work with for handling Excel files in Node.js is node-xlsx. This library can create and read Excel documents and perform advanced functions like manipulating columns and worksheets. Here, we'll stick to using it only to parse an Excel file.

To start, install the library with the command npm install node-xlsx so it's ready to import in our test. We can then use the library to process the data in the file and convert it to a JavaScript object. Again, this pattern should look familiar from previous examples in this series. However, using Excel files involves some additional steps, as seen in the refactored test:

import xlsx from "node-xlsx";
import loginPageModel from "./page_models/login_page_model";

const excelFile = xlsx.parse("./data/users.xlsx");
const excelSheet = excelFile.find(sheet => sheet.name === "Airport Gap Users");
const excelSheetData = excelSheet.data;

// Separate the header row, then turn each remaining row into an object
// keyed by the header fields.
const headers = excelSheetData.shift();
const userData = excelSheetData.map(row => {
  const user = {};
  row.forEach((data, idx) => (user[headers[idx]] = data));
  return user;
});

fixture("Airport Gap Login (Excel)").page(
  "https://airportgap-staging.dev-tester.com/login"
);

userData.forEach(user => {
  test(`User with email ${user.email} can log in to their account`, async t => {
    await t
      .typeText(loginPageModel.emailInput, user.email)
      .typeText(loginPageModel.passwordInput, user.password)
      .click(loginPageModel.submitButton);

    await t.expect(loginPageModel.accountHeader.exists).ok();
    await t
      .expect(loginPageModel.accountEmail.innerText)
      .contains(user.email);
  });
});

There's a lot more activity happening in this test to get the data from the Excel file. Once we import the node-xlsx library, we can use the library's parse method to read the Excel file. The library has built-in support to read the file directly, so we don't need to use the Node.js File System module first.

The data from the Excel file is stored as a JavaScript object, but we can't access the data directly as with a CSV file because Excel files can contain multiple worksheets. The node-xlsx library parses each worksheet as an individual object, so we'll need to find the correct worksheet containing our data. We do this by using find and fetching the sheet by its name, storing it in the excelSheet variable.

Using find isn't necessary in our example since we only have one worksheet and could have stored it using const excelSheet = excelFile[0]. However, it's better to explicitly find the worksheet by iterating through the object in case the Excel file gets modified in the future.
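One refinement worth considering: if someone renames or deletes the worksheet, find returns undefined, and the test file fails with a confusing error when we try to read its data property. A small guard makes the failure obvious. The sketch below runs against a mocked-up parse result (the structure mirrors the example above; the sheet names and data are illustrative):

```javascript
// A mock of node-xlsx's parse output: an array of worksheet objects.
const excelFile = [
  { name: "Airport Gap Users", data: [["email", "password"]] },
  { name: "Notes", data: [] },
];

// Look up the worksheet by name rather than by index so the code
// survives sheet reordering; the guard surfaces renames immediately.
const excelSheet = excelFile.find(sheet => sheet.name === "Airport Gap Users");
if (!excelSheet) {
  throw new Error('Worksheet "Airport Gap Users" not found in users.xlsx');
}

console.log(excelSheet.data[0]); // the header row
```

Failing fast with a clear message here saves a round of debugging when the spreadsheet inevitably changes hands.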

Inside this worksheet object, we have a data key containing arrays of the data stored in the worksheet, which we store in the excelSheetData variable. However, we can't use the data as it is due to the way it's structured:

[
  ['email', 'password'],
  ['[email protected]', 'airportgap123'],
  ['[email protected]', 'airportgaptest1'],
  ['[email protected]', 'airportgaptest2'],
  ['[email protected]', 'airportgaptest3']
]

The node-xlsx library doesn't treat headers specially when processing the file, so it parses each row, including the header, as a plain array. We'll need to transform these arrays into JavaScript objects so we can identify each record and field, as in the other examples. We can handle this in regular JavaScript.

First, we extract the headers contained in the first array, using shift. That leaves the excelSheetData variable with four arrays containing the data needed for the tests. Next, we use the map function to create an array containing each record as an object in the format we want for the test. map goes through each array and creates a new object with the data, setting the header fields as keys for each value.
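If you prefer, the same transformation can be written more compactly with Object.fromEntries. Here's a standalone sketch with inline sample data standing in for the parsed worksheet:

```javascript
// Rows as node-xlsx returns them: the first row holds the headers.
const rows = [
  ["email", "password"],
  ["user@example.com", "secret123"],
];

// shift() removes and returns the header row, mutating `rows`.
const headers = rows.shift();

// Pair each cell with its header, then build an object per row.
const userData = rows.map(row =>
  Object.fromEntries(row.map((value, idx) => [headers[idx], value]))
);

console.log(userData);
// [ { email: 'user@example.com', password: 'secret123' } ]
```

Either version produces the same result; it's purely a matter of style.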

In the end, we'll have an array of objects suitable for use in our data-driven tests:

[
  { email: '[email protected]', password: 'airportgap123' },
  { email: '[email protected]', password: 'airportgaptest1' },
  { email: '[email protected]', password: 'airportgaptest2' },
  { email: '[email protected]', password: 'airportgaptest3' }
]

The object contains the headers we extracted from the parsed worksheet, set as keys for each record. Now we can quickly grab the data for the test. From here, it's the same process of iterating through each item and creating a test case using the data.

Summary

The examples in this and the previous article give you a taste of using data-driven testing for your test suite. This methodology can help keep your tests short and organized while keeping unnecessary information and repetition out of your code. If you have lots of data to repeatedly test in your application or have to separate test data from written test cases, data-driven testing is an excellent approach to try.

TestCafe allows you to take advantage of the power of Node.js, using both built-in functionality and third-party libraries to expand its ability to process multiple file types. With just a few lines of code, you can read and parse commonly used file types, ready to use in your test cases. Even if you don't use one of the file formats mentioned here, chances are you can use TestCafe to process your data to drive your tests.

Does your organization practice data-driven testing? Have you found any pros and cons to handling different kinds of files in your tests?

Want to boost your automation testing skills?

With the End-to-End Testing with TestCafe book, you'll learn how to use TestCafe to write robust end-to-end tests, improve the quality of your code, boost your confidence in your work, and deliver faster with fewer bugs.

Enter your email address below to receive the first three chapters of the End-to-End Testing with TestCafe book for free and a discount not available anywhere else.