Abhishek Gupta for ITNEXT

Basics of testing in Go

This blog is a part of the Just Enough Go series and provides an introduction to testing in Go with the help of a few examples. It covers the basics of testing, followed by topics such as sub-tests, table-driven tests and parallel tests.

The code is available in the "Just Enough Go" repo on GitHub.

Basics

Support for testing is built into Go, in the form of the testing package. At the bare minimum, you need to:

  • write the code you want to test e.g. hello.go
  • write tests in a file whose name ends in _test.go e.g. hello_test.go
  • ensure that test function names start with Test e.g. func TestHello
  • run go test to execute your tests!

While writing tests, you will make heavy use of *testing.T, which "is a type passed to Test functions to manage test state and support formatted test logs." It contains several methods, including Error and Fail (and their variants) to report errors/failures, Run to run sub-tests, Parallel, Skip, etc.

The rest of the blog uses a simple example to demonstrate some of the above concepts. It's a canonical hello world app!

package main

import "fmt"

func main() {
    fmt.Println(greet(""))
}

func greet(who string) string {
    if who == "" {
        who = "there"
    }
    return fmt.Sprintf("hello, %s!", who)
}

Hello Tests!

Here is a bare bones unit test for the greet function:

func TestGreet(t *testing.T) {
    actual := greet("abhishek")
    expected := "hello, abhishek!"
    if actual != expected {
        t.Errorf("expected %s, but was %s", expected, actual)
    }
}

The goal is to confirm that invoking greet with a specific name results in hello, <name>!. We call the greet function, store the result in a variable called actual and compare it against the expected value. If they are not equal, Errorf logs a message and marks the test as failed - however, the test function itself continues to execute. If you need to change this behaviour, use FailNow (or Fatal/Fatalf) to stop the current test immediately, while still allowing the remaining tests (if present) to execute.
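To make the difference concrete, here is a minimal sketch contrasting Fatalf with Errorf (the TestGreetFatal name and the trailing Log call are illustrative, not from the original example; in a real project the test would live in hello_test.go):

```go
package main

import (
	"fmt"
	"testing"
)

// greet is the function under test, same as in the example above.
func greet(who string) string {
	if who == "" {
		who = "there"
	}
	return fmt.Sprintf("hello, %s!", who)
}

// TestGreetFatal would normally live in a _test.go file; it is shown
// here to contrast Fatalf with Errorf.
func TestGreetFatal(t *testing.T) {
	actual := greet("abhishek")
	if actual != "hello, abhishek!" {
		// Fatalf logs the message, marks the test as failed and
		// stops executing *this* test function immediately...
		t.Fatalf("unexpected greeting: %s", actual)
	}
	// ...so this line is only reached when the check above passes.
	t.Log("greeting verified")
}

func main() {
	fmt.Println(greet("abhishek")) // prints hello, abhishek!
}
```

With Errorf in place of Fatalf, execution would continue past the failing check; Fatal/Fatalf only terminate the current test, so any other top-level tests still run.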

Sub-tests

We covered the obvious use case, but there is another scenario that needs to be tested - when the input is an empty string. Let's add this using another test:

func TestGreetBlank(t *testing.T) {
    actual := greet("")
    expected := "hello, there!"
    if actual != expected {
        t.Errorf("expected %s, but was %s", expected, actual)
    }
}

An alternative is to use sub-tests, via the Run method on *testing.T. Here is what that would look like in this case:

func TestGreet2(t *testing.T) {
    t.Run("test blank value", func(te *testing.T) {
        actual := greet("")
        expected := "hello, there!"
        if actual != expected {
            te.Errorf("expected %s, but was %s", expected, actual)
        }
    })

    t.Run("test valid value", func(te *testing.T) {
        actual := greet("abhishek")
        expected := "hello, abhishek!"
        if actual != expected {
            te.Errorf("expected %s, but was %s", expected, actual)
        }
    })
}

The test logic remains the same, but now we have covered the individual scenarios within a single function, with each scenario represented as a sub-test - test blank value and test valid value. The Run method accepts a name and a test function, similar to the top-level parent test case/function. All the sub-tests run sequentially, and the top-level test is considered complete when its sub-tests finish executing.

What's the benefit of doing this? Is it just about not using a separate function? Well, yes, but there are more advantages to using sub-tests:

  • All the cases associated with a function/method/feature can be grouped together in a single test function - this greatly reduces the cognitive load
  • Explicit naming makes it much easier to spot failures - this is especially useful in large test suites
  • Common setup and tear-down code can simply be written before and after the sub-tests
  • You have the ability to run sub-tests in parallel (with the other sub-tests within the parent test) - more on this later
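To illustrate the setup/tear-down point, here is a hedged sketch (buildFixture and the expected map are illustrative stand-ins for whatever expensive preparation a real suite might need - the post's greet example needs no setup):

```go
package main

import (
	"fmt"
	"testing"
)

func greet(who string) string {
	if who == "" {
		who = "there"
	}
	return fmt.Sprintf("hello, %s!", who)
}

// buildFixture stands in for expensive shared setup, e.g. seeding a
// database or starting a test server.
func buildFixture() map[string]string {
	return map[string]string{
		"":         "hello, there!",
		"abhishek": "hello, abhishek!",
	}
}

// TestGreetWithSetup would live in a _test.go file: the fixture is
// built once before the sub-tests and released once after they return.
func TestGreetWithSetup(t *testing.T) {
	expected := buildFixture() // common setup, runs once before the sub-tests

	t.Run("test blank value", func(te *testing.T) {
		if actual := greet(""); actual != expected[""] {
			te.Errorf("expected %s, but was %s", expected[""], actual)
		}
	})

	t.Run("test valid value", func(te *testing.T) {
		if actual := greet("abhishek"); actual != expected["abhishek"] {
			te.Errorf("expected %s, but was %s", expected["abhishek"], actual)
		}
	})

	// common tear-down code would go here - the sequential sub-tests
	// above have all finished by this point
}

func main() {
	fmt.Println(greet("")) // prints hello, there!
}
```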

Table driven tests

Our test cases follow the same template - all we do is invoke greet with an argument and compare the result with the expected one. Both sub-tests contain duplicated code. Although this is a trivial example, the same thing happens in real-world projects as well.

Table-driven tests can come in handy in such cases. It's all about finding repeatable patterns in your test code and setting up tables to define the different combinations of use cases - this greatly reduces test code duplication and a lot of copy+paste effort! These tables are typically expressed as a slice of structs, with each struct defining the input parameters, expected output, name of the test and any other relevant detail.

Here is how we can set up table-driven tests:

func TestGreet3(t *testing.T) {
    type testCase struct {
        name             string
        input            string
        expectedGreeting string
    }

    testCases := []testCase{
        {name: "test blank value", input: "", expectedGreeting: "hello, there!"},
        {name: "test valid value", input: "abhishek", expectedGreeting: "hello, abhishek!"},
    }

    for _, test := range testCases {
        test := test // capture the range variable (required before Go 1.22)
        t.Run(test.name, func(te *testing.T) {
            actual := greet(test.input)
            expected := test.expectedGreeting
            if actual != expected {
                te.Errorf("expected %s, but was %s", expected, actual)
            }
        })
    }
}

Let's break it down to understand it better. We start by defining the testCase struct ...

type testCase struct {
    name             string
    input            string
    expectedGreeting string
}

... followed by the test cases which are a slice of testCase (the table) with the name, input and expected output:

    testCases := []testCase{
        {name: "test blank value", input: "", expectedGreeting: "hello, there!"},
        {name: "test valid value", input: "abhishek", expectedGreeting: "hello, abhishek!"},
    }

Finally, we simply execute each of the test cases. Notice how the name, input and output are used via test.name, test.input and test.expectedGreeting respectively:

    for _, test := range testCases {
        test := test // capture the range variable (required before Go 1.22)
        t.Run(test.name, func(te *testing.T) {
            actual := greet(test.input)
            expected := test.expectedGreeting
            if actual != expected {
                te.Errorf("expected %s, but was %s", expected, actual)
            }
        })
    }

Parallel tests

In a large test suite, we can improve efficiency by running sub-tests in parallel. All we need to do is signal our intent using the Parallel method on *testing.T. Here is how we can parallelize both of our test cases:

Notice the call to te.Parallel():

    for _, test := range testCases {
        test := test // capture the range variable (required before Go 1.22)
        t.Run(test.name, func(te *testing.T) {
            te.Parallel()
            time.Sleep(3 * time.Second)
            actual := greet(test.input)
            expected := test.expectedGreeting
            if actual != expected {
                te.Errorf("expected %s, but was %s", expected, actual)
            }
        })
    }

Since our tests are short, time.Sleep has been added on purpose to simulate a time-consuming operation as part of the test. When you run this, you will notice that the total execution time is a little over 3s, in spite of each sub-test sleeping for 3s - this indicates that the tests ran in parallel with each other.
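Parallel sub-tests also explain the test := test line in the loop: before Go 1.22, all iterations of a range loop shared a single loop variable, and since parallel sub-test bodies outlive the iteration that started them, they could all end up observing the last test case. Here is a minimal, hedged sketch of the same pitfall outside of testing (the capture helper is illustrative):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// capture launches one goroutine per input and records the value each
// goroutine observed. Shadowing the loop variable (v := v) gives every
// goroutine its own copy - the same trick as `test := test` above.
func capture(inputs []string) []string {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		seen []string
	)
	for _, v := range inputs {
		v := v // shadow the loop variable (required before Go 1.22)
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			seen = append(seen, v)
			mu.Unlock()
		}()
	}
	wg.Wait()
	sort.Strings(seen) // goroutines finish in arbitrary order
	return seen
}

func main() {
	fmt.Println(capture([]string{"a", "b", "c"})) // prints [a b c]
}
```

From Go 1.22 onwards, each loop iteration gets its own variable, so the shadowing line is no longer needed - but you will still see it in a lot of existing test code.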

Other topics

Here are some other interesting topics that have not been covered in this post, but are worth exploring:

  • Benchmark: The testing package provides the ability to run benchmarks with the help of the *testing.B type. Just like normal test functions start with Test, benchmarks start with Benchmark and can be executed using go test -bench. They look like: func BenchmarkGreet(b *testing.B)
  • Skip (and its variants): call this to skip a test or benchmark
  • Cleanup: tests and benchmarks can use this to "register a function to be called when the test and all its subtests complete"
  • Examples in tests: you can also include code that serves as documentation examples, and the testing package verifies these as well
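As a small taste of the first bullet, the testing package can even drive a benchmark programmatically via testing.Benchmark - normally you would just write a BenchmarkGreet function in a _test.go file and run go test -bench . - here is a minimal sketch:

```go
package main

import (
	"fmt"
	"testing"
)

func greet(who string) string {
	if who == "" {
		who = "there"
	}
	return fmt.Sprintf("hello, %s!", who)
}

// benchmarkGreet is what the body of a BenchmarkGreet function in a
// _test.go file would look like: the code under test runs b.N times,
// where b.N is chosen by the framework.
func benchmarkGreet(b *testing.B) {
	for i := 0; i < b.N; i++ {
		greet("abhishek")
	}
}

func main() {
	// testing.Benchmark runs the function with increasing b.N until
	// the timing stabilizes, and reports iterations and ns/op.
	result := testing.Benchmark(benchmarkGreet)
	fmt.Println(result)
}
```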

That concludes yet another instalment of the Just Enough Go blog series - stay tuned for more. If you found this useful, don't forget to like and subscribe!
