On end to end testing (ElixirForum answer)

by Paulo Gonzalez

2023-11-11 | elixir testing elixir-forum answer

This was answered on ElixirForum here https://elixirforum.com/t/http-client-e2e-testing/59998/11


Good question. Here are my 2c: I've seen that VCR and the like "feel" great, but as folks here pointed out, they get outdated. A recording is a snapshot in time saying that things may have worked given a certain setup you had. A green VCR test doesn't give me confidence, unfortunately. Much like a test that relies only on Mox/unit tests.

Bypass works well, but I personally feel like the tests get too complex. That's just my preference, I'm a simple person 🙂 Even when the tests are written correctly, a successful run doesn't give me much confidence.

I've seen projects that run their own HTTP server for tests, and I personally dislike those. It's a layer of complexity that I haven't seen yield good returns in practice, but that may be biased by my experience (as are all of the points in this answer, by the way).

What gives me the most confidence is having integration tests that exercise the live implementation, where you need to load the correct keys and such, just like Dashbit discusses here (https://dashbit.co/blog/mocks-and-explicit-contracts):

- set boundaries in your code so you can leverage Mox (do what the readme says: https://github.com/dashbitco/mox#basic-usage); a rough sketch of this setup follows the list
- create a test where you load real env vars and the live implementation is chosen
- set things up so these tests are excluded from CI runs, as they are meant to be run locally with the correct env vars/keys. You can use `@moduletag` or multiple `@tag`s (see the second sketch below). CI runs will then just exercise the boundary and args (your Mox unit tests).
- every once in a while, run those from your local machine. You can set up mix aliases to help here. Depending on the service you are testing, maybe you pay per call or only get a certain number of calls. In certain projects, before merging a PR (after reviews, dev, etc.), I'd run these locally to gain confidence.
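Here's a minimal sketch of that boundary-plus-switch setup, following the pattern from the Mox readme. The module names (`MyApp.Weather` and friends) are hypothetical, not from the original post:

```elixir
# lib/my_app/weather.ex: the boundary, a behaviour plus a dispatching function.
defmodule MyApp.Weather do
  @callback temp(String.t()) :: {:ok, integer()} | {:error, term()}

  def temp(city), do: impl().temp(city)

  # The implementation is chosen via config, so unit tests can swap in a
  # Mox mock while a "live" run uses the real client.
  defp impl, do: Application.get_env(:my_app, :weather_impl, MyApp.Weather.HTTPClient)
end

# lib/my_app/weather/http_client.ex: the live implementation.
defmodule MyApp.Weather.HTTPClient do
  @behaviour MyApp.Weather

  @impl true
  def temp(city) do
    # The real HTTP call goes here (Req, Finch, Tesla, whatever you use),
    # with real keys loaded from the environment.
    _api_key = System.fetch_env!("WEATHER_API_KEY")
    do_request(city)
  end

  # Placeholder for the actual request/response handling.
  defp do_request(_city), do: {:error, :not_implemented}
end

# test/test_helper.exs: CI only ever sees the mock and skips live tests.
Mox.defmock(MyApp.WeatherMock, for: MyApp.Weather)
Application.put_env(:my_app, :weather_impl, MyApp.WeatherMock)
ExUnit.start(exclude: [:integration])
```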
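And a sketch of the live test itself, tagged so the `exclude: [:integration]` above keeps it out of CI and it only runs locally when the real env vars/keys are loaded:

```elixir
# test/my_app/weather_live_test.exs: runs against the real service.
defmodule MyApp.WeatherLiveTest do
  use ExUnit.Case, async: false

  # Excluded by default via ExUnit.start(exclude: [:integration]).
  @moduletag :integration

  setup do
    # Point the boundary at the live implementation for this run,
    # then restore the mock afterwards.
    Application.put_env(:my_app, :weather_impl, MyApp.Weather.HTTPClient)
    on_exit(fn -> Application.put_env(:my_app, :weather_impl, MyApp.WeatherMock) end)
    :ok
  end

  test "fetches the temperature from the real service" do
    assert {:ok, temp} = MyApp.Weather.temp("Lisbon")
    assert is_integer(temp)
  end
end
```

A mix alias in `mix.exs`, say `"test.live": ["test --only integration"]`, then makes the local run a one-liner (the alias name is just an example).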

This setup has given me (and a few teams) the most confidence when working with things we don't fully control (when we should use Mox). I've personally seen:

- the feedback loop drastically shortened
- errors in docs caught → the docs say one thing, the service does another. You can now prove it and not be seen as a maniac on your team, questioning your sanity.
- documentation added → since you can assert on results, this is very helpful for seeing real outputs and for onboarding folks into projects
- staging/prod discrepancies in an API caught → things worked in the tests, but cried in prod. Now you can repro and discuss; you have proof. It's a little surprise when it happens, but a lot of times these would happen and go undocumented/undiscussed. Once you can repro (by running the tests) you are playing a much better game imo :).

Downsides: usually these tests hit test environments/sandboxes, which in turn are NOT prod. Even though these have given me the most confidence, you can still get little surprises in prod. As usual, make sure you have logs and metrics to give yourself a better shot at handling the surprises, because they will come 🙂

I've also seen a successful mix of Bypass and integration tests: https://github.com/HGInsights/avalanche/blob/main/test/integration_test.exs#L1. There is no switch there (there is no separate live impl for Avalanche), but we achieve the integration/live tests by simply not intercepting the request. It works well in this case; a rough sketch of that idea follows.
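Roughly, the client takes its base URL (and credentials) as options, so the regular tests point it at a Bypass instance while the integration test points it at the real service. A sketch, assuming a hypothetical `MyApp.Client.status/1`; this is not Avalanche's actual test code:

```elixir
# Regular test: intercept the request with Bypass.
defmodule MyApp.ClientTest do
  use ExUnit.Case, async: true

  setup do
    {:ok, bypass: Bypass.open()}
  end

  test "parses a successful response", %{bypass: bypass} do
    Bypass.expect_once(bypass, "GET", "/status", fn conn ->
      Plug.Conn.resp(conn, 200, ~s({"ok": true}))
    end)

    assert {:ok, %{"ok" => true}} =
             MyApp.Client.status(base_url: "http://localhost:#{bypass.port}")
  end
end

# Integration test: same client code, no Bypass; the request goes
# straight to the real (sandbox) endpoint with real credentials.
defmodule MyApp.ClientIntegrationTest do
  use ExUnit.Case, async: false

  @moduletag :integration

  test "hits the real service" do
    assert {:ok, _body} = MyApp.Client.status(base_url: System.fetch_env!("SERVICE_URL"))
  end
end
```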

PS: I mention `integration` tests and I realize that this is an overloaded term, much like `mock` → in some contexts/communities it means one thing, and in others it means something different. So maybe a better name for these tests would be `live unit tests`, but naming things is hard.

Thanks for reading!