cross-posted from: https://programming.dev/post/864349
I have spent some time trying to simplify the release process. For a variety of reasons, we can only release on Thursdays. The code is "frozen" on the Tuesday before so it can be released on Thursday, but we sometimes squeeze in a quick fix on Wednesday as well.
The question is: when should QA test the code?
Here is what I have seen happen:
- Dev writes code and sends it to QA.
- QA finds problems, sends it back to the Dev.
- Dev fixes and sends it back to QA.
I have seen a Dev fix their code on Tuesday, and then QA come back on Wednesday with problems, even though the code should already have been frozen.
I am trying to figure out what the best solution is here.
We have several problems going on at once:
- Developers test on the same server that QA tests on. I am working to move developers to a separate Dev server, but that is a slow work in progress.
- We don't have an easy way to revert code on the QA server; it is easier to build new revisions than to revert changes. We could try to revert more often, but that would require a culture change.
- QA doesn't really have a schedule for when they are supposed to do functional testing vs. regression testing.
I don't know the best way to proceed. So far I haven't thought too much about QA because I was focused on getting releases out. Now that releasing is simpler and we can potentially do weekly releases, I am trying to work out how the testing process should fit in.
It depends… The myriad reasons for having a dedicated release day often have to do with synchronizing marketing, support, and the other departments.
My question is: what does QA mean for your org? Does it mean defect detection? Testing? Acceptance? Those are all different things. The teams I see that are able to release every day have a strict separation of Quality Control and Functional Acceptance. QC is used to detect defects and regressions and is handled by highly automated processes owned by engineering. Acceptance is then done by a dedicated product/quality team that figures out whether the new functionality is actually built to spec and solves the customer's problems. This also involves blogs, documentation, customer contact, release notes, tutorials, workshops for the support team, etc. This second part is handled by feature flagging, so that the product teams can beta test, run a limited release, and track adoption.
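To make the feature-flag idea concrete, here is a minimal sketch (names like `FeatureFlags` and `"new-checkout"` are hypothetical; real setups usually sit behind a flag service or config store). The point is that code ships to production dark and is turned on per user group, so acceptance and rollout are decoupled from the release itself:

```python
# Minimal feature-flag sketch -- hypothetical names, for illustration only.
from dataclasses import dataclass, field


@dataclass
class FeatureFlags:
    """In-memory flag store: flag name -> set of user ids in the beta group."""
    beta_users: dict[str, set[str]] = field(default_factory=dict)
    enabled_for_all: set[str] = field(default_factory=set)

    def is_enabled(self, flag: str, user_id: str) -> bool:
        if flag in self.enabled_for_all:
            return True
        return user_id in self.beta_users.get(flag, set())


flags = FeatureFlags(beta_users={"new-checkout": {"alice", "bob"}})


def checkout(user_id: str) -> str:
    # New code path ships with the Thursday release but stays dark;
    # only the beta group exercises it until the product team widens the rollout.
    if flags.is_enabled("new-checkout", user_id):
        return "new checkout flow"
    return "old checkout flow"


print(checkout("alice"))  # new checkout flow
print(checkout("carol"))  # old checkout flow
```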
It really depends on what kind of software you're running and what your relationship is to the end user and the rest of the org. One thing that is the same in all cases: your requirements and acceptance criteria need to be very clear from the start, and regression testing needs to be fully automated.
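For what "fully automated regression testing" can look like in practice, here is a small sketch using pytest (the `apply_discount` function and its values are made up as a stand-in for whatever code you actually ship). The suite pins previously verified behaviour, so CI catches unintended changes before the Tuesday freeze instead of QA finding them by hand on Wednesday:

```python
# test_pricing_regression.py -- hypothetical module and values, for illustration.
import pytest


def apply_discount(total: float, code: str) -> float:
    """Stand-in for the real code under test."""
    rates = {"SPRING10": 0.10, "VIP20": 0.20}
    return round(total * (1 - rates.get(code, 0.0)), 2)


@pytest.mark.parametrize(
    "total,code,expected",
    [
        (100.0, "SPRING10", 90.0),   # behaviour verified in a previous release
        (100.0, "VIP20", 80.0),
        (100.0, "UNKNOWN", 100.0),   # unknown codes must not discount
    ],
)
def test_discount_regression(total, code, expected):
    # Any unintended change to pricing fails the build, not manual QA.
    assert apply_discount(total, code) == expected
```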