# [ $davids.sh ] · message #241
Tests (don't) slow down development. v2
In the previous post I explained why tests actually speed development up; now let's move on to concrete techniques.
Bookmark this, so that next time you sit down to write tests, you can check yourself against this checklist.
#tests #top
@ [ $davids.sh ] · # 1445
. 90% of Tests Are Integration Tests
Almost all tests I write are integration tests, for two reasons: (1) the real complexity is most often in the combination of database queries / third-party API calls with business logic, and only that combination verifies that the application actually works; (2) with a ready-made template, they take no longer to write than unit tests.
If I need to test business logic (an algorithm) specifically, I extract it into a separate function that takes no external dependencies (so that nothing needs to be mocked).
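A minimal sketch of that extraction (the domain, names, and numbers here are invented for illustration): the pure part takes plain data, so the test needs no database and no mocks.

```typescript
// Pure business logic: no repository or API client is passed in,
// so nothing has to be mocked to test it.
export interface LineItem {
  price: number;
  quantity: number;
}

export function orderTotal(items: LineItem[], discountPercent: number): number {
  const subtotal = items.reduce((sum, i) => sum + i.price * i.quantity, 0);
  // Round to cents to avoid floating-point drift in money math.
  return Math.round(subtotal * (1 - discountPercent / 100) * 100) / 100;
}
```

The surrounding feature function then just fetches the items, calls `orderTotal`, and saves the result; only that thin wrapper needs an integration test.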
. Don't Test Transport
I write integration tests at the business logic function level, meaning I don't include calls via HTTP, MQ, TCP, etc.
If you want to test the transport, test it separately or as part of E2E tests.
. Test Features
A "feature" is, roughly speaking, the highest-level function you call from the transport layer. That is, your tests "simulate calling this endpoint / trigger."
Testing deeper often doesn't make sense.
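A sketch of what a feature-level test can look like (all names here are invented): the test calls the same function the HTTP handler would call, without starting a server.

```typescript
// Hypothetical "feature" function -- the highest-level function the
// transport layer (an HTTP handler, a queue consumer) would call.
export interface CreateOrderInput {
  userId: number;
  amount: number;
}

export async function createOrder(
  input: CreateOrderInput
): Promise<{ id: number; status: string }> {
  // In a real app: validation plus database writes. Kept in-memory here.
  if (input.amount <= 0) throw new Error("amount must be positive");
  return { id: 1, status: "created" };
}

// The integration test "simulates calling the endpoint" by calling the
// feature directly -- no HTTP, MQ, or TCP involved.
export async function testCreateOrder(): Promise<string> {
  const order = await createOrder({ userId: 42, amount: 100 });
  return order.status;
}
```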
. Don't Test Everything
– If it's just a data request, you can test only that the filters are applied correctly, or skip it entirely.
– If it's a CRUD operation you've written a hundred times, you can skip it.
– But if a feature actively changes data (INSERT, UPDATE, DELETE), it's worth testing.
That's why I cover some applications (financial transactions) at 90%, and others (simply serving data to the frontend) at 30%.
. Divide Tests into Unit and Integration
I use a suffix in test file or function names (e.g., .unit.ts, .int.ts), which lets me write a separate configuration for each kind and run only the ones I need.
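With Jest, for example, that split can look like one config per kind (the file names, patterns, and setup path here are illustrative, not a fixed convention):

```typescript
// jest.config.int.ts -- integration tests only; a mirror-image
// jest.config.unit.ts would match "**/*.unit.ts" instead and skip setup.
export default {
  testMatch: ["**/*.int.ts"],                     // pick up only *.int.ts files
  globalSetup: "<rootDir>/test/global-setup.ts",  // start infra before the run
};
```

Then `jest -c jest.config.unit.ts` stays fast and dependency-free, while `jest -c jest.config.int.ts` brings up the infrastructure first.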
. Write Tests Next to the Code
Only this way and no other. Because this way (1) you won't test unnecessary things, (2) it's easy to understand which test relates to what, (3) it's easy to move such a folder to a separate location.
. Write Tests in Different Files
Even when testing the same code, I can create multiple test files to distribute the code more conveniently between them.
. Create a Global Test Library
Into such a library, I move: (1) database connection, with snapshot creation, (2) authentication, (3) a seeder with all core entities, which populates them into the database so you don't have to do it every time, (4) an initializer for all necessary dependencies (logger, config, etc.).
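A sketch of one piece of such a shared library, the seeder (table names and entities are invented): the db client is a thin injected interface, so the seeder works with whatever driver the project uses and can itself be tested without a real database.

```typescript
// Minimal db interface the shared test library depends on.
export interface TestDb {
  insert(table: string, row: Record<string, unknown>): Promise<void>;
}

// Seeds the core entities that almost every integration test needs,
// once per test database instead of in every single test.
export async function seedCoreEntities(db: TestDb): Promise<{ adminId: number }> {
  await db.insert("users", { id: 1, email: "admin@example.com", role: "admin" });
  await db.insert("accounts", { id: 1, user_id: 1, balance: 0 });
  return { adminId: 1 }; // ids the tests can reference
}
```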
. Don't Use OOP
Well, more precisely: if you want to test a single method, chances are that (unless it's static) you'll have to create an instance, passing in many dependencies that the method doesn't even need.
If you encounter this and want to continue using OOP, then learn what a Service Object is.
In FP, PP, and FOP this problem doesn't exist (because each function exists independently of the others).
. Create a Separate Env for Tests
It can reuse the structure of the main env, but it must be separate.
. Use Docker-Compose
Before starting integration tests, I simply start docker-compose with all the necessary databases/APIs from the code in global setup or in a Makefile / npm scripts. Then I run migrations and fixtures, and when everything is ready, I start the application.
There's also testcontainers, but I haven't seen any advantage over using plain docker-compose.
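A minimal sketch of such a global setup for Jest (the compose file name and npm script names are assumptions; nothing runs at import time, only when Jest invokes the exported function):

```typescript
// global-setup.ts (hypothetical): Jest runs this once before any
// integration test starts.
import { execSync } from "node:child_process";

export default async function globalSetup(): Promise<void> {
  // Start the databases / fake APIs declared in the compose file.
  // --wait (Compose v2) blocks until containers report healthy.
  execSync("docker-compose -f docker-compose.test.yml up -d --wait", {
    stdio: "inherit",
  });
  // Apply migrations and fixtures once the containers are up.
  execSync("npm run migrate:test", { stdio: "inherit" });
  execSync("npm run fixtures:test", { stdio: "inherit" });
}
```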
. Create Database Snapshots
How I do it: I create a main database, apply migrations and fixtures to it, and then turn it into a template from which each individual test creates its own database version.
This allows integration tests to run all at once (or with a limit) and be confident that one test will not affect another.
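On Postgres this can be done with template databases (the helper names below are invented): `CREATE DATABASE ... TEMPLATE ...` is a cheap file-level copy, so every test clones the prepared database and never shares mutable state with its neighbors.

```typescript
// Builds the SQL to clone the migrated + seeded template database
// for one test. The template must have no active connections while
// a clone is being created.
export function cloneDatabaseSql(template: string, testId: string): string {
  return `CREATE DATABASE "${template}_${testId}" TEMPLATE "${template}";`;
}

// Cleanup after the test finishes.
export function dropCloneSql(template: string, testId: string): string {
  return `DROP DATABASE IF EXISTS "${template}_${testId}";`;
}
```

A test runner would execute `cloneDatabaseSql("app_test", workerId)` in its per-test setup and point the application's connection string at the clone.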
(second part below)
@ [ $davids.sh ] · # 1446
. Schema-first
First, I describe the schema in the form of types and API schemas (protobuf, GQL, OpenAPI), and only then write the code.
In this case, you'll be able to use a ton of different test generators on top of the written schema.
. Instead of 1000 Code Reviews
Instead of sitting with a developer and trying to describe a task in depth (especially with a junior), I can just write tests, and they will write the code that should pass them.
This way, (1) I speed up the explanation process, (2) it's much easier for the person to understand what's expected of them, and (3) passing tests give me more confidence during Code Review and reduce its time.
In short, this technique allows for even more convenient team management.
. Benchmark
There are a billion libraries that consume a JSON file (custom-written or openapi) and generate tests to be run, providing you with a report.
For example, https://github.com/nakabonne/ali, but there are much better ones.
. Check the time
Don't hesitate to add time measurement for execution to your tests and check that you haven't exceeded reasonable limits.
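A sketch of such a duration guard (the helper name and limit are illustrative; the limit is whatever you consider reasonable for your suite):

```typescript
// Wraps a test body and fails loudly if it runs past the limit,
// instead of letting slow tests creep in unnoticed.
export async function expectUnder<T>(
  limitMs: number,
  fn: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  const result = await fn();
  const elapsed = Date.now() - start;
  if (elapsed > limitMs) {
    throw new Error(`exceeded ${limitMs}ms limit (took ${elapsed}ms)`);
  }
  return result;
}
```

Usage inside a test: `await expectUnder(2000, () => createOrderFeature(input))`.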
. Example
In the next post, I'll share the boilerplate (haven't decided on Go or Node.js yet) and explain how testing is set up there.
@ Sergey Pogranichnyy · # 1447
What about testcontainers? I don't see this most anticipated and necessary word here
@ [ $davids.sh ] · # 1448
I don't see any advantages over just running docker-compose directly, even through code. Perhaps I just haven't been able to grasp what the "trick" is.
@ YURII VLADIMIROVICH · # 1449
Reveal it then?
@ Sergey Pogranichnyy · # 1450
@ [ $davids.sh ] · # 1451
This list is very similar to what I wrote above, but with one important caveat: it's written as if it does everything "on its own," but in reality, when working with testcontainers, you describe all the actions it needs to perform, just like with plain docker-compose.
And that leads to "swapping one thing for another," but with one significant disadvantage: you'll have to introduce and maintain configuration and libraries for working with testcontainers, at a time when working with plain docker-compose wouldn't require this.
Also, regarding the "ease of working with testcontainers in CI/CD": I don't know how it is now, but it used to be absolutely not the case. When running from a container (even dind), you still had to resort to a hack where testcontainers got root privileges and a mounted docker.sock, which caused a lot of problems in setup and management.
@ [ $davids.sh ] · # 1452
And yes, its "parallelism" lies in the fact that it launches a container for each test and its environment, but (besides the fact that this still needs to be integrated with the test library) it consumes an incredible amount of resources (with 500+ tests, an M1 just dies).
The approach described above does not require creating a container for tests at all, except for integrations, and even then, only in one instance (1 PG, 1 Redis, 1 RMQ, etc.).
@ Sergey Pogranichnyy · # 1453
Works out of the box with GitHub Actions
@ [ $davids.sh ] · # 1454
Yes, I only have experience with testcontainers + Gitlab CI/CD
@ Sergey Pogranichnyy · # 1455
So the container is not for tests, but for infrastructure, database, Redis, RMQ. Testcontainers uses the same docker compose config. Dmitry Patsura (search on YouTube) also recommends its use, for example.
@ [ $davids.sh ] · # 1456
And I still don't understand why to use it if the same thing can be done purely with docker-compose...
It went like this for me: I used testcontainers, got tired of constantly trying to understand what it was doing, and then thought: "why the hell do I need it?" So I wrote the same thing by calling docker-compose commands (e.g. exec("docker-compose up -d")), reusing the same docker env between tests, and got the same result, but faster and more manageable.
And it's important to note: the amount of code and configuration didn't increase.
In short, as I say: either I "don't get the point," or it's "a layer on top of docker-compose that provided a bit more ready-made configuration and API, which is why developers who are afraid of / don't understand docker decided to use it."
@ [ $davids.sh ] · # 1457
And yes, I'd be glad if someone shared specific examples where testcontainers are better / more advantageous than working directly with docker-compose.
I'll only note that docker-compose loves to change the log output format in every new release, so you have to parse them differently each time. This is a problem, but it arises once every few months, is detected on the spot, and fixed in 15 minutes.
Otherwise, I definitely spend less time debugging and trying to figure out: "how do I do this with testcontainers?" โ because I know exactly what I want from docker-compose.
@ Vassiliy ITK Kuzenkov · # 1458
It's more convenient to work with this in an IDE; the test launches with a click. But otherwise, you're right about everything) Starting through a test container significantly slows down our process. Because Jest parallelizes and starts a separate container for each process))) In theory, it can be fixed; there are plans to fine-tune the startup process.
@ [ $davids.sh ] · # 1459
About working through an IDE: here's how my setup works — a global-setup.ts specified in Jest's "globalSetup" option; I run npm run test:int and everything starts up automatically with the same single button.
About slowing down the startup process and Jest parallelism: this is what I was talking about above, and there are a ton of such peculiarities with Testcontainers when integrating with test frameworks.
@ [ $davids.sh ] · # 1460
In short, I'll be posting a boilerplate soon (most likely for Go, since I've already written it) and I'll show everything there.
@ Vassiliy ITK Kuzenkov · # 1461
We did something similar before testcontainers, but on one of the projects, we decided to try them. I didn't try them on JS at first, but rather called Java ones from Clojure, as it's convenient to work with the boilerplate code. Especially when you need to spin up Kafka instances for tests.
@ [ $davids.sh ] · # 1462
Again, here's a Kafka setup example in docker-compose, and with it I can easily set everything up (maybe it's just one instance, but it's more than enough for integration tests)