Do read the first part before you continue to read this.
We stopped at the stack last time and now what remains is the evaluation part.
You might be wondering: why another post for just the evaluation part, why not just fold it into the first part?
The reason is that while the post is targeted towards the developer's role in this phase, this phase has a few ways to do things and I wanted to go through each of them.
The developer's role stays constant here, mostly within the boundaries of bug fixing and hacking together solutions for things that need to be handled right before the initial launch, or whatever phases the product is to go through before the targeted users get to it.
These are the ones I've worked with, and there are obviously better or worse ways to evaluate, but these are based on just my knowledge at this point; as that changes, you can expect a better post later on.
To be fair, there are a lot of these. They're all good, but we have to understand where each works and where another would be a better alternative.
If there are others that you wish I'd cover, do consider emailing me or reaching out on my Twitter handle.
UAT (User Acceptance Testing) - one of the first methods that I learned about, through the first startup I worked for, and it worked fine. The one issue was that people took it very seriously and would fix and deploy things in such a hurry that it would normally take a few rounds of deployment before things finally settled. That's okay, I guess, and a bit of unit testing would've reduced it, but it was a very small startup and the deadlines were hard.
Anyway, the point of UAT is to make sure the users actually understand the app and that it makes sense for their business logic (in B2B) or is intuitive enough for users to browse through easily (in B2C).
These are more like things people end up doing, and barely an issue with the evaluation method itself.
Solution: Calm down, humans! It's just an evaluation phase; the point of it is for things to break!
Solution: Docker and K8s exist for a reason, use them!
Saw this one coming, didn't you?
This is something I picked up a while back without knowing what it was called. Readers know that I build tools very specific to my requirements, and 90% of the time I'm the one using them; this is the basic principle of dogfooding.
The builders of the product/tool/app use the app internally before the users get to it.
This is something Basecamp has been doing from the start, and the evaluation method works, but it requires good version management to go with it.
Version management discipline will make sure you have checkpoints throughout the
codebase to identify what's still under evaluation and what is stable enough to
move forward with.
If you are using semver, a good way is to handle it with pre-release tags, which look a little something like 1.0.0-alpha.2, which translates to "this is the alpha.2 pre-release leading up to the 1.0.0 release" and not the stable 1.0.0 itself.
This gives you a clear idea that everything that's in alpha is being evaluated, and everything with a stable non-alpha tag is being used on the stable releases.
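To make that ordering concrete, here's a minimal sketch using the npm semver package (my choice for illustration, not something the post prescribes); pre-release versions always sort before the stable version they lead up to:

```ts
import semver from "semver";

// Pre-release tags sort before the stable release they lead up to.
console.log(semver.lt("1.0.0-alpha.2", "1.0.0")); // true
console.log(semver.lt("1.0.0-alpha.1", "1.0.0-alpha.2")); // true

// And you can tell at a glance what's still under evaluation:
console.log(semver.prerelease("1.0.0-alpha.2")); // [ 'alpha', 2 ]
console.log(semver.prerelease("1.0.0")); // null, i.e. a stable release
```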
This also means that you don't have to hurry to fix something; you can go through the alpha releases slowly to make sure the defects are at a minimum in the stable releases.
Bugs are inevitable: there's always a corner case, there's always a library that decided to change something, there's always a new requirement. Don't rush to fix the bugs, and never fix them with the first solution that comes to mind; work through the problem and check whether the fault is in the implementation you're looking at or whether something else is the root of the issue.
All code is buggy. It stands to reason, therefore, that the more code you have
to write the buggier your apps will be.
- Rich Harris (Creator of Svelte)
A lot of people depend on various automations for UI testing and API testing. I've talked about this before in a post about testing, where I go through how I do it; as for whether I like this or not, here's a single-line answer.
It doesn't work when requirements change constantly; you are better off testing manually instead.
That statement aside, you should still make it a habit to write tests for your APIs if you have the luxury of an open deadline. If you're on a hard deadline, you can spend that time on actually writing the feature to be as robust as possible.
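If you're wondering what that habit looks like in practice, here's a minimal sketch of an API smoke test in Jest-style TypeScript; the runner, the /health endpoint, the port, and the response shape are all assumptions for illustration:

```ts
// Minimal API smoke test sketch. Assumes a Jest-style runner and a
// local dev server exposing a hypothetical /health endpoint.
describe("health endpoint", () => {
  it("responds with 200 and an ok status", async () => {
    const res = await fetch("http://localhost:3000/health"); // hypothetical URL
    expect(res.status).toBe(200);

    const body = await res.json();
    expect(body.status).toBe("ok"); // hypothetical response shape
  });
});
```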
You can read about how I handle testing here -
Tests vs No Tests