Writing Useful Tests: Naming Tests and Writing Assertions

Last time I blogged about organizing your tests so that they’re easier to maintain. Today I want to look at why you should put thought into naming your tests and writing assertions.


When you first write your tests, you (hopefully) have all the context of the code you’re testing. You know how the code should work, and you know what your test is trying to validate. However, if the test starts failing 6 months down the road, you probably won’t have all that context. Writing verbose test names and assertion messages will make it much easier for you to regain that context.

Naming Things

It’s Monday, and you just poured your first cup of “Damn Fine Coffee”.

You’re checking the morning influx of email and open an email about failing tests. You’re greeted with this helpful summary:

Failing Tests:
  - User Creation

  Expected "false" to be "true"

Well, that’s helpful…

Obviously this is a contrived example, but it illustrates two potential problems with your tests.

Problem 1: Generic Names

In the example above, the test is called “User Creation”, which isn’t very descriptive. Now you have to find the code for the test and re-grok it, just to remember what the test does. Only then can you figure out why it’s failing.

It might be better to call this test something like “User creation succeeds with valid email and matching passwords” or “User creation fails with an invalid email”. If the test does more than what those detailed titles describe, it’s probably time to break it into smaller tests.

Problem 2: Generic Assertions

The other thing you’ll notice from the example is the reason the test failed: Expected "false" to be "true".

That doesn’t really help you track down the problem. You know there’s probably a failing assertion, but if you have many “assert” statements in your test, it may not be immediately clear which one failed. Adding a clear assertion message makes it much simpler to track down the part of the test that’s failing.

// Before
var succeeded = false;
assert(succeeded);
// Expected "false" to be "true"

// After
var succeeded = false;
assert(succeeded, "User creation should have succeeded");
// Expected "false" to be "true". User creation should have succeeded

Now it’s easy to find the assertion in your code that’s failing. Plus, your “tests are failing” email will tell you why the test is failing before you’ve even fired up your editor.

That Was Easy

Ok. You’ve fixed your tests. Time to treat yourself to more coffee, and maybe some donuts.

Yes, I did just finish watching Twin Peaks. Why do you ask?

Writing Useful Tests: Organization

Most people agree that writing tests is an important part of software development. However, not all tests are created equal.

Working on a number of projects over the years, I’ve run into a few pitfalls and gotchas that I want to cover in the next few blog posts. This won’t be a comprehensive guide to testing. Others have already done a better job of that than I ever could. I just want to write down the pain points I’ve run into, and the techniques that have worked for me.

Organizing Your Tests

Figuring out how to organize your tests can be a daunting task, especially when you’re starting a new project. Do you keep your tests beside your feature code, or do you have two distinct “src” and “test” directories? Do you break up your test files by feature? By class? By unit?

I don’t think there’s one right answer to these questions, but I know what has and hasn’t worked for me on my projects.

Test Location

I’m a fan of keeping your tests close to your source code. That doesn’t necessarily mean every foo.js has a foo.test.js beside it, but I do prefer having source code and test code in the same project, rather than keeping them in distinct source folders.

Collocating tests and source code encourages the mindset that writing tests and writing features are part of the same activity. I’ve worked on projects where the tests and source lived in very different parts of the repository, and the typical workflow was to write your code first, then figure out how to test it afterwards. Keeping tests and source code in the same project doesn’t completely solve that problem, but it at least keeps your tests readily available as you’re writing your feature or bug fix.

Breaking Up Your Tests

Breaking up your tests into distinct files (and folders if your project is large enough) is a good way to keep them manageable. I’ve worked on projects that had thousands of lines of code in the same test file, and it was a nightmare to find and update tests within those enormous files. By contrast, projects that have been broken into smaller focused test files made it much easier to find where a test should be added or updated.

How you break up your tests will partly depend on your programming language and the type of project. I’ve found that a good place to look for test grouping is the setup code for your tests. If you have the same few lines of setup code in several of your tests, consider putting them in the same test file with a shared beforeEach block. What this looks like will depend on your language and test framework.

By contrast, if you find that your beforeEach block is doing a lot of setup that’s not used by most of the tests, consider breaking the file apart into the pieces that use the different components of the setup block.

When you open a test file, it should be clear from the file name what feature(s) are being tested. This will help you navigate your test codebase when it’s time to update or add tests.

Arrange, Act, Assert

When talking about test structure, it’s common to discuss the three parts of a test: Arrange, Act, and Assert.

  • Arrange – Set up the variables, objects, and mocks you’ll use in your test
  • Act – Perform the action that you’re attempting to test
  • Assert – Verify that the action had the expected result

This organization structure for tests is probably a cliché, but it’s a useful way to think about how you organize your individual tests. Writing tests that have clear distinctions between these three sections will make it easier for other maintainers to understand what you’re trying to test.

I’ve sometimes had tests where these distinctions were hard to make, or where I felt like I needed to intermingle assertions in the other parts of the test. This usually meant that I was testing too much, and needed to break it into multiple tests.

Conclusion

Organizing your tests makes them easier to understand, navigate, and update, which makes it much easier and more enjoyable for you and your team to maintain them. Putting effort and thought into your test structure now will pay dividends in the future.

Next time I’ll talk about naming your tests and writing useful test assertion messages.

The Observer Effect and Debugging: How Dev Tools Can Change Your Code’s Behavior

Chrome recently added a feature to its JavaScript debugger: when you select a piece of code and hover over it, the code is evaluated and the result is displayed in a little popover. In general, this is very useful. You can check the value of a variable, or even look at the results of more complex expressions.

Selecting Code and Hovering Shows The Result In Chrome DevTools

This is super useful, but it can also cause problems if you’re not careful about how you use it.

First, a little physics. The observer effect refers to “changes that the act of observation will make on a phenomenon being observed”. That definition precisely describes the behavior of this Chrome DevTools feature.

Consider the following example where I highlight a call to a function that has a side effect (incrementing the variable i). Whenever I hover over the code, it re-runs that expression, and the value of i increases.

Selecting a function call evaluates the function

This side effect is fairly benign and easy to detect, but imagine a more complex function with multiple dependencies and obscured side-effects. Worse, you could observe the value of a Proxy object without even knowing it might have side effects.
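Here's a small standalone sketch of such a proxy (the user object and counter are invented for illustration): merely reading a property mutates state, which is exactly what a hover-preview triggers.

```javascript
// A proxy whose "get" handler has a side effect: every property read
// bumps a counter. A DevTools hover-preview would bump it too.
let reads = 0;
const user = new Proxy({ name: "angus" }, {
  get(target, prop) {
    reads += 1; // side effect triggered by mere observation
    return target[prop];
  },
});

user.name; // each access counts as a read
user.name;
console.log(reads); // 2
```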

Hovering over Proxy properties evaluates their handler

Obviously this also serves as a warning against side effects in proxy handlers, but the point here is that these previews can affect the behavior of your code in significant ways.

Several years ago, I wrote about verifying your tools after a Fiddler visualizer made a debugging session take much longer than it should have. The same principle holds true here. As in many other professions, understanding how your tools behave is an important part of software development.

Docker for Windows: Dealing With Windows Line Endings

One of the issues with Docker (or any Linux/macOS based system) on Windows is the difference in how line endings are handled. Windows ends lines in a carriage return and a linefeed \r\n while Linux and macOS only use a linefeed \n. This becomes a problem when you try to create a file in Windows and run it on a Linux/macOS system, because those systems treat the \r as a piece of text rather than a newline.
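You can see the difference from Node itself. This sketch (toUnixLineEndings is just an illustrative helper, not a standard API) shows the invisible \r lurking at the end of a Windows line, and the usual fix:

```javascript
// Strip Windows-style CRLF endings down to Unix-style LF.
function toUnixLineEndings(text) {
  return text.replace(/\r\n/g, "\n");
}

const windowsScript = "#!/bin/bash\r\necho hello\r\n";
const unixScript = toUnixLineEndings(windowsScript);

// Before conversion, the shebang line carries a trailing \r...
console.log(JSON.stringify(windowsScript.split("\n")[0])); // "#!/bin/bash\r"
// ...after conversion, it doesn't.
console.log(JSON.stringify(unixScript.split("\n")[0]));    // "#!/bin/bash"
```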

As a concrete example, if you try to clone the official WordPress docker image and build the image on Windows, you’ll run into problems when it tries to execute the docker-entrypoint.sh file.

The first line of that file is #!/bin/bash, which is the shebang syntax for “run this file using /bin/bash”.

However, if the file originated in Windows, that first line will be interpreted as “run this file using /bin/bash\r”, and “bash\r” of course doesn’t exist, so you get this error:

$ docker run wordpress
standard_init_linux.go:175: exec user process caused "no such file or directory"

There are a couple of ways to handle this issue.

Converting Line Endings During Build

Unix has a handy CLI tool for converting line endings called dos2unix. If you want to create a robust image, you can install dos2unix as a dependency, then convert any files that you copy into the image. Then, as a cleanup step, you can uninstall dos2unix from the image (unless the image depends on it after the build).

Your Dockerfile might look something like this:

FROM ubuntu:latest

RUN apt-get update && apt-get install -y dos2unix

COPY docker-entrypoint.sh /entrypoint.sh

RUN dos2unix /entrypoint.sh && apt-get --purge remove -y dos2unix && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["/entrypoint.sh"]

The main idea is to copy the script onto the machine, then use dos2unix to convert the line endings, and finally remove dos2unix from the machine (and clean up the files created by apt-get).

This is a good option if you’re managing the image, but what if you’re trying to build an image that someone else maintains?

Cloning Git Projects With Unix Line Endings

If you just want to clone and build an existing Docker image, you can use a Git flag to store the repository locally with Unix style line endings.

git clone git@github.com:docker-library/wordpress.git --config core.autocrlf=input

However, it’s worth noting that any new files you create will likely have Windows line endings, so you’ll still need to convert them before using them inside the Docker image.

Conclusion

That should cover the basics of line endings between Windows and Linux/macOS. These techniques apply beyond just Docker, but hopefully the Docker-specific details will help someone who’s struggling with Docker for Windows.

Docker for Windows: Interactive Sessions in MinTTY Git Bash

I recently installed Docker for Windows on my laptop. When I tried running the default docker run -it ubuntu bash demo from Git Bash, I ran into an issue I hadn’t seen before.

$ docker run -it ubuntu bash
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'

As it turns out, following the instructions and prefixing the command with winpty does work:

$ winpty docker run -it ubuntu bash
root@88448b75631d:/#

However, I’m never satisfied with just getting around an issue, so I did some more digging. Turns out the issue here is the use of MinTTY to host the Git Bash prompt. If you’ve installed Git for Windows, you’ll recall the following configuration window.

Git Bash MinTTY configuration

If you select the “Use MinTTY” option, your Bash prompt will be hosted in the MinTTY terminal emulator, rather than the CMD console that ships with Windows. The MinTTY terminal emulator isn’t compatible with Windows console programs unless you prefix your commands with winpty.

Personally, I prefer the default CMD console, especially in Windows 10 where features like window resizing and text selection are drastically improved. If you want to go back to that mode, simply re-run the Git for Windows installer and choose the non-MinTTY option. If you want to stick with MinTTY, you just need to prefix your interactive Docker (and Python, and Node, and …) commands with winpty.

Docker for Windows: Sharing Host Volumes

I’ve been playing with the new “Docker for Windows” tool recently, and I wanted to share a slightly obscure issue I ran across.

Docker lets you share volumes between your containers and your host machine. This is handy during development time, when you want to be able to quickly iterate without having to continually rebuild your container.

The Suspicious Empty Directory

When I first tried sharing a volume, I typed this into PowerShell:

> dir c:\docker


    Directory: C:\docker


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        7/30/2016   9:47 PM              8 test.txt

> docker run -it -v c:/docker:/host ubuntu bash
root@333a10babe3a:/# ls /host
root@333a10babe3a:/#

Hmm… As you can see, the c:\docker folder has a test file, but it doesn’t show up in the Docker container. After some head scratching, I figured out the missing piece: Out of the box, Docker isn’t configured with permissions to share your drives with your containers.

Giving Docker Access

To give Docker access to your computer’s drives, right click on the Docker icon in your taskbar, then click “Settings…”

Under the “Shared Drives” section, check the drives you’d like to share, then click “Apply”

Docker will ask you for credentials, which it uses to access the drives.

(Note that if you log into your computer with a Windows Live account, your username should be something like “MicrosoftAccount\you@live.com”)

Once you save your credentials, you should be able to share volumes from your host computer to your containers.

>docker run -it -v c:/docker:/host ubuntu bash
root@00aca8c2f880:/# ls /host
test.txt
root@00aca8c2f880:/#

Software Engineering and the No True Scotsman Fallacy

“If you rely on jQuery you’re a jQuery developer, not a real developer.”

“Nobody actually enjoys writing JavaScript, they just do it because it’s the only thing that runs in the browser”

“Only academics use languages like Haskell and Lisp”

“Real developers don’t use Windows”

If you’ve been in the developer community for any time at all, you’ve probably heard statements like these. They’re popular because they’re easy to repeat and they boost the ego of the person who says them. They’re also over-generalizations.

Sure, some developers rely too heavily on jQuery. JavaScript definitely has some not-good parts that annoy many developers. Some technologies are mostly used in academia. Lots of developers prefer OS X and Linux.

But that’s not the whole picture.

Many developers know when to use jQuery for a quick solution, and when to reach for other tools. Lots of developers actually like JavaScript, and use it everywhere they can. There are large communities of people building real software with “academic” technology. While you and I are tweaking our .bash_profile, lots of real developers are using Windows and Visual Studio to get shit done.

I believe these kinds of generalizations continue to live on partly because of the “No true Scotsman” fallacy.

Person A: “No Scotsman puts sugar on his porridge.”

Person B: “But my uncle Angus likes sugar with his porridge.”

Person A: “Ah yes, but no true Scotsman puts sugar on his porridge.”

No true Scotsman – Wikipedia

The core of this fallacy is the rejection of evidence that contradicts our beliefs. It’s sort of a rhetorical form of confirmation bias.

What a convenient thing to be able to say “No real developer does X”. Any counter-examples can be easily rejected, because obviously anyone who says “wait, I do X” isn’t a real developer. No need to actually back up your claim.

Instead of rejecting technology, patterns, or methodologies because they’re not what “real developers” use, we should be willing to listen to people who have chosen that path. They chose it for a reason, and it’s worth hearing them out, even if their situation doesn’t match your own.

Debugging Node.js Azure Web Apps (Websites)

Deploying Node websites to Azure is really simple with their Git deployment feature. However, sometimes debugging problems can be challenging. Here are some of the tricks I’ve discovered for diagnosing problems with Node apps in Azure.

Viewing Application Output

To view application logs (a.k.a. stdout and stderr) from your app, you can turn on Application Logging from the management portal’s “Configure” tab.

Note that application logging will only be turned on for 12 hours, so if you need to be able to see historical logs, you need to save them to a file from within your application.

To view the logs, you can go to the Kudu dashboard at “<yourazuresite>.scm.azurewebsites.net”. Select either CMD or PowerShell from the “Debug console” menu, then navigate to LogFiles/Application. Scroll until you find index.html and click the download button (which will display it in your browser).

From there you can scan through for the logs from the time period you’re interested in and click the “log” link. Notice that the logs are split by “stdout” and “stderr”.

Other IIS Logs

To enable other IIS related logs, go to the “site diagnostics” section. These settings can be useful for debugging IIS level issues.

You can find these logs in a few different folders under “LogFiles” in the Kudu dashboard.

Using Node Debugger

Node Inspector is a really useful tool for digging into problems with your Node app. I’ll often spin it up locally if I’m investigating an issue that isn’t easily revealed with a console.log statement. However, sometimes bugs only reproduce on the server. If that happens, using the version of Node Inspector built into Azure can be really useful.

Enabling Websockets

Node Inspector uses WebSockets to communicate between the UI and the debugger backend. Since Azure Websites doesn’t enable them by default, you’ll need to manually enable WebSockets in the “Configure” tab of the management portal.

Check Your Web.config

If you deployed your site with a Git push, Kudu should have generated a Web.config with correct configuration for debugging. If not, I have a generalized Gist that I use as a starting point. Just change the references to bin/www to reflect the correct entrypoint for your app.

Enable Debugging

To enable debugging, you can either modify your Web.config or create/modify an iisnode.yml file.

In the Web.config, it looks like this:

<!-- You'll find a placeholder near the end of the generated Web.config -->
    <iisnode debuggingEnabled="true"/>
  </system.webServer>
</configuration>

Or in iisnode.yml, it looks like this:

debuggingEnabled: true

The advantage of using iisnode.yml is that you don’t have to worry about a new deployment overwriting your existing configuration. It’s also worth noting that you can configure a bunch of other settings in both the iisnode.yml and the Web.config.

Testing Out the Debugger

To use the debugger, go to your Azure website’s URL and append the entry point path, followed by /debug. For example, if your entry file was server.js, you’d go to <yourazuresite>.azurewebsites.net/server.js/debug

If everything’s configured correctly, you should see the debugger. You can set breakpoints and debug the route handlers for your app.

Keep in mind that when you set a breakpoint, the entire app is halted. This means you probably don’t want to attach the debugger to an app that’s serving production traffic.

Mocha Error - this.timeout Is Undefined

If you’re using an ES6 compiler like TypeScript or Babel, you may have run into an odd error when you tried to call this.timeout() from your Mocha tests.

it("foo", (done) => {
  this.timeout(1000);
  // test some things
});

If you look at the compiled output, the source of the problem becomes evident. The compiler is taking the value of this from outside the test. This is also the behavior you’d see if you used a JS engine with ES6 support.

var _this = this;
it("foo", function(done) {
  _this.timeout(1000);
  // test some things
});

Arrow functions don’t get their own “this”; they capture the “this” value from the enclosing scope. Unfortunately, in this case, that isn’t what we want. We want “this” to be the Mocha context object, so that we can call this.timeout() on it.

Switching back to the old-school function style fixes the problem:

it("foo", function(done) {
  this.timeout(1000);
  // test some things
});

And there you have it. Be careful with “arrow” functions in Mocha tests. They’re fine to use in most cases, but if you need to call this.timeout(), make sure you switch back to the old-school function syntax.

Viewing All Versions of an NPM Package (Including Pre-Release)

If you want to view all released versions of an npm package, there’s an easy way to do it:

npm show react-native@* version

react-native@0.0.0 '0.0.0'
react-native@0.0.5 '0.0.5'
react-native@0.0.6 '0.0.6'
react-native@0.1.0 '0.1.0'
react-native@0.2.0 '0.2.0'
react-native@0.2.1 '0.2.1'
react-native@0.3.0 '0.3.0'
react-native@0.3.1 '0.3.1'
react-native@0.3.2 '0.3.2'
react-native@0.3.3 '0.3.3'
react-native@0.3.4 '0.3.4'
react-native@0.3.5 '0.3.5'
react-native@0.3.6 '0.3.6'
react-native@0.3.7 '0.3.7'
react-native@0.3.8 '0.3.8'
react-native@0.3.9 '0.3.9'
react-native@0.3.10 '0.3.10'
react-native@0.3.11 '0.3.11'
react-native@0.4.0 '0.4.0'
react-native@0.4.1 '0.4.1'
react-native@0.4.2 '0.4.2'
react-native@0.4.3 '0.4.3'
react-native@0.4.4 '0.4.4'
react-native@0.5.0 '0.5.0'
react-native@0.6.0 '0.6.0'
react-native@0.7.1 '0.7.1'

However, this doesn’t show pre-release versions. If you want to see everything, there’s an equally easy (but undocumented) command:

npm show react-native versions

[ '0.0.0',
  '0.0.5',
  '0.0.6',
  '0.1.0',
  '0.2.0',
  '0.2.1',
  '0.3.0',
  '0.3.1',
  '0.3.2',
  '0.3.3',
  '0.3.4',
  '0.3.5',
  '0.3.6',
  '0.3.7',
  '0.3.8',
  '0.3.9',
  '0.3.10',
  '0.3.11',
  '0.4.0',
  '0.4.1',
  '0.4.2',
  '0.4.3',
  '0.4.4',
  '0.5.0-rc1',
  '0.5.0',
  '0.6.0-rc',
  '0.6.0',
  '0.7.0-rc',
  '0.7.0-rc.2',
  '0.7.1',
  '0.8.0-rc' ]

This is super useful for finding what beta/pre-release versions of a package are available.

You can also run npm show react-native versions --json for machine readable output.