# 5 Beginner Tricks for Writing Your Own Unit Tests

Now that we’re back to school, I figured it was time to talk about another introductory programming concept: unit testing. This time, I bring you some of my favorite tips targeted at folks who may be new to unit testing.

## Tips for Unit Testing

Unit testing can be a daunting task for folks who’ve never had to do it before. It requires you to be systematic in your thinking while also remaining fairly creative. Blending these two skills can be tough.

Luckily for you, I’ve started teaching a class where unit testing is one of the primary focuses. As a result, I’ve had to look back on my knowledge of testing to think about how I might pass that knowledge to newer folks in our community. So, what better way to kick things off than with a short list of tricks for writing unit tests?

When it comes to writing tests, the question I’m asked most often is “how many tests should I write?” In general, I understand where this question comes from: students tend to want to know how much work they have ahead of them. Likewise, they also want to know if the number of tests they’ve already written is adequate.

Unfortunately, there is no magic number of test cases. The number itself depends on a lot of factors from the types of parameters the method accepts to the kinds of behaviors the method is expected to perform.

Instead, what I tend to tell folks is to follow a testing scheme. There are a lot of these floating around. For instance, the one we teach in our courses is “routine, boundary, and challenge.” However, the one I personally use thanks to my undergraduate education is “zero, one, many; first, middle, last.”

To use this testing scheme, you take whatever input your method accepts and attempt to break it down into as many of these six buckets as possible. For example, a method that computes the factorial of a number would only support three of the possible six cases: `factorial(0)`, `factorial(1)`, and `factorial(27)`. The “first, middle, last” part of the scheme really only makes sense for sequential data where there is a clear “first” element, “middle” element, and “last” element.

Ultimately, however, to answer the number of test cases question, every method should have anywhere from one to six+ test cases. I say six+ because more complex methods may have multiple combinations of the six bins (e.g., “zero, one, many” for two or more parameters). We can argue about method complexity some other time.

### Name Test Methods Using the Testing Scheme

If you subscribe to a testing scheme, something that can help you immensely is using that scheme when you name your test methods. That way, it’s very clear just from the method name which cases are passing and which are failing. In other words, rather than seeing this in your workflow:

```
test1()
test2()
test3()
```

You’d have something like this:

```
testFactorialZero()
testFactorialOne()
testFactorialMany()
```

It may not seem like a big deal if you only have a few tests, but it’ll save you a ton of time as your list of tests inevitably grows.
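To make this concrete, here’s a rough sketch of what scheme-named tests might look like in pytest style. The `factorial` below is a hypothetical stand-in for whatever function you’re actually testing, written as a simple iterative loop:

```python
# A hypothetical factorial implementation to test against; in practice,
# the function under test would live in your own module.
def factorial(n: int) -> int:
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result


# Tests named after the "zero, one, many" scheme, so a failing
# test name immediately tells you which bucket broke.
def test_factorial_zero():
    assert factorial(0) == 1


def test_factorial_one():
    assert factorial(1) == 1


def test_factorial_many():
    assert factorial(5) == 120
```

When one of these fails, the test report reads like a sentence: “factorial broke on the many case,” which is far more useful than “test3 failed.”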

### Consider the Data Types of Your Parameters

Following a testing scheme is really only going to get you so far. When writing your tests, you should be looking at your data closely. What are its bounds? What are some of its weirder behaviors?

I bring this up because we introduce a smoothing algorithm very early in our class, and it’s amazing how many different test cases are possible for a single parameter. For example, consider the following method header:

`def smooth(seq: list[int]) -> list[int]`

This function will take a list of integers, `seq`, and return a second list of integers containing the average of every consecutive pair of integers in `seq`. In other words, if I pass in `[1, 5, 3]`, the function should return `[3, 4]`.
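For reference, a minimal sketch of such a function might look like the following. This is just an illustration, not the course’s reference implementation, and I’m assuming integer division for the averages (which happens to be exact for the example above):

```python
# A sketch of smooth: average each consecutive pair of integers.
# Assumes integer division; a real implementation might round differently.
def smooth(seq: list[int]) -> list[int]:
    return [(a + b) // 2 for a, b in zip(seq, seq[1:])]
```

For example, `smooth([1, 5, 3])` returns `[3, 4]`, while a list with fewer than two elements has no pairs to average and returns `[]`.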

Now, you’d be amazed by the number of test cases we can consider. Here are just a few:

```
test_smooth_seq_zero()
test_smooth_seq_one_positive_even()
test_smooth_seq_one_positive_odd()
test_smooth_seq_one_negative_even()
test_smooth_seq_one_negative_odd()
test_smooth_seq_one_greater_than_maximum_int()
test_smooth_seq_one_less_than_minimum_int()
test_smooth_seq_one_prime()
test_smooth_seq_one_perfect_square()
```

For context, I wrote all of these test case headers using Python syntax, even though several of them wouldn’t apply in Python (its integers are arbitrary precision, for instance, so there is no maximum or minimum int to exceed). That said, you can start to see the ways that different types of integers may cause problems for a function.

More importantly, with a test suite like this, you can see that following a testing scheme isn’t always enough to systematically explore the domain of possible inputs. That should be the goal!
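A couple of the cases from that list might be fleshed out like this. The `smooth` here is a hypothetical one-liner included only so the example is self-contained:

```python
# Hypothetical smooth used for illustration (pairwise integer averages).
def smooth(seq: list[int]) -> list[int]:
    return [(a + b) // 2 for a, b in zip(seq, seq[1:])]


def test_smooth_seq_zero():
    # An empty sequence has no pairs to average.
    assert smooth([]) == []


def test_smooth_seq_one_negative_odd():
    # A single element has no neighbor, so the result is empty.
    assert smooth([-3]) == []
```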

### Consider How Parameters Might Change

Typically, tests are written by comparing some expected value to some actual value. For example, we might check that the factorial of three correctly returns six:

```
def test_factorial_many_odd():
    assert factorial(3) == 6
```

For functions that return values, we’re often quick to just check that the return value is what we expect it to be and call it a day. However, it’s just as important to ensure our parameters are what we expect them to be. Fortunately, numbers are immutable in Python, so our factorial example is fine as-is. If we go back to our smoothing example, however, our tests should probably look more like this:

```
def test_smooth_one_positive_odd():
    seq_expected = [3]
    seq_actual = [3]

    return_expected = []
    return_actual = smooth(seq_actual)

    assert return_expected == return_actual
    assert seq_expected == seq_actual
```

If, for whatever reason, our input sequence changes, then we know that we modified it somewhere in our method. If that’s intentional, we should be clear about that in our documentation, and we should also update our tests. If it’s not intentional, there’s clearly a bug that should be addressed.
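One small refinement on this pattern: rather than writing the input literal twice, you can snapshot it with `copy.deepcopy` before the call. This is a sketch, again using a hypothetical `smooth`, but the snapshot trick works for any mutable argument:

```python
import copy


# Hypothetical smooth used for illustration (pairwise integer averages).
def smooth(seq: list[int]) -> list[int]:
    return [(a + b) // 2 for a, b in zip(seq, seq[1:])]


def test_smooth_does_not_mutate_input():
    seq_actual = [1, 5, 3]
    # Snapshot the input so any in-place mutation is detectable
    # without writing the literal a second time.
    seq_expected = copy.deepcopy(seq_actual)

    assert smooth(seq_actual) == [3, 4]
    assert seq_actual == seq_expected
```

The deep copy matters when the elements themselves are mutable (say, a list of lists); for flat lists of integers, a plain `list(seq_actual)` would also do.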

### Make Use of Development Tools

This is a bit of a bonus tip, but you can actually make your life a bit easier by leaning into the variety of development tools that are available for testing. For instance, a tool a lot of folks use is code coverage, which shows you which lines of code your tests never execute. However, beware that even if you do exercise every line of code, that doesn’t mean you’ve tested your code enough. It’s just another approach to being systematic in your testing.

Another tool (or perhaps technique) I might recommend is continuous integration. If you’re making use of version control already, you can configure continuous integration to run your tests automatically every time you push code. The advantage of this is that you can see the history of your development and the exact moment when your code stopped or started working.

On top of these tools, I might just generally recommend static analysis tools, which can help spot bugs in your code. These types of tools are nice to have in addition to testing because testing can only surface bugs in the cases you think to exercise. In combination, you’ll find that you write less buggy code, which means more time spent developing.

When I started as a graduate student, one of my peers told me he had been teaching the class that I would come to teach much later. When I asked him about it, he complained that it was a bit behind the times. Specifically, he was upset that the course spent so much time on unit testing rather than a more “modern approach” like telemetry. At the time, I figured he knew more than me, so I didn’t question it. Looking back on our discussion, I have no clue what he was trying to get at. Testing, whether it be unit testing or otherwise, is essential to successful software development.

At any rate, I’m going to go ahead and call it here! As always, thanks for taking the time to check out my work. Hopefully, it was helpful to you in some way. If so, consider showing your support by helping the site grow. Alternatively, you can check out some of these related articles:

And of course, you’re welcome to check out any of the following resources (#ad):

Once again, take care!