Automation generally means taking a set of steps and running them without manual intervention. The easiest example is deploying a website to a server. The steps might include building the static files, uploading them to the server, invalidating caches, and restarting the load balancer if any configs changed. Instead of writing those steps down, we can start automating by creating a script, often in bash, that performs each step for us. A deploy is now just a script someone runs on their machine. The script doubles as documentation of how a deploy works, lets anyone deploy without fully understanding every step, and removes the mistakes that come from forgetting a step or doing one wrong.
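The steps above can be sketched as a small script. Every path, directory name, and command here is a placeholder, and the "server" is simulated with a local directory so the sketch runs end to end; a real script would rsync or scp to an actual host.

```shell
#!/usr/bin/env sh
# Minimal sketch of a deploy script; paths and commands are placeholders.
set -eu  # stop on the first failed step instead of deploying half-way

SRC=site_src          # source files (placeholder)
BUILD=build_out       # build output directory (placeholder)
SERVER=server_root    # stands in for the remote server in this sketch

build_static() {
  mkdir -p "$BUILD"
  cp "$SRC"/*.html "$BUILD"/
}

upload() {
  mkdir -p "$SERVER"
  cp -R "$BUILD"/. "$SERVER"/
}

invalidate_caches() {
  rm -rf "$SERVER/cache"
}

restart_load_balancer() {
  # placeholder: a real script might restart nginx or similar here
  echo "load balancer restarted"
}

# demo input so the sketch runs end to end
mkdir -p "$SRC" && echo '<h1>hello</h1>' > "$SRC/index.html"
mkdir -p "$SERVER/cache" && touch "$SERVER/cache/stale"

build_static
upload
invalidate_caches
restart_load_balancer
```

Because the steps always run in the same order with `set -eu`, a failure in any one of them stops the deploy instead of leaving the server half-updated.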
Running this script by hand is still a chore: someone has to be responsible for kicking off each deploy. Instead, why not make the computers do more for us and have them run the script, by putting it in a pipeline. Here, we use GitLab CI for this. We can tell the CI to run the script when certain events occur, such as a new tag being created in the git repository or a new commit being pushed to master. Now we interact with git as normal and let the pipeline handle all the deployment business. This also puts the deploy script in the git repo, where it is properly versioned.
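In GitLab CI this looks roughly like the following `.gitlab-ci.yml` fragment. The job and script names are illustrative; `CI_COMMIT_TAG` and `CI_COMMIT_BRANCH` are predefined GitLab CI variables.

```yaml
deploy:
  script: ./deploy.sh                       # the deploy script, now versioned in the repo
  rules:
    - if: $CI_COMMIT_TAG                    # run when a new tag is created
    - if: $CI_COMMIT_BRANCH == "master"     # or when a commit is pushed to master
```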
We can extend this further by running tests before and after a deploy. Testing before ensures a deploy only happens if the test suite passes. Testing afterward, against a live instance of the site, lets us respond automatically when something breaks: we can build rollbacks into the CI, so that if the post-deploy tests fail, the new code is rolled back to the previous version.
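One common way to make rollbacks cheap is to deploy each release into its own directory and point a `current` symlink at it, so rolling back is just repointing the link. This sketch simulates that locally; the release names and the smoke test are hypothetical.

```shell
#!/usr/bin/env sh
# Sketch: symlink-based deploy with automatic rollback on a failed check.
set -eu

deploy() {
  # atomically point "current" at the given release
  ln -sfn "releases/$1" current
}

smoke_test() {
  # hypothetical post-deploy check: the live release must serve index.html
  [ -f current/index.html ]
}

# demo releases: v1 is good, v2 is "broken" (missing index.html)
mkdir -p releases/v1 releases/v2
echo ok > releases/v1/index.html

deploy v1
smoke_test && echo "v1 healthy"

prev=v1
deploy v2
if ! smoke_test; then
  echo "post-deploy test failed, rolling back to $prev"
  deploy "$prev"
fi
```

A pipeline would run the equivalent of `smoke_test` as a job after the deploy job and trigger the rollback step only when it fails.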
This is a simplistic example, but it shows the power of automating what we can: it removes user error and speeds up our response to issues. The idea is to “shift left” as much as you can, meaning move testing and checks as early in the process as possible (into the pipelines) rather than after actually deploying. Bugs and security issues get caught much sooner, and we can iterate faster.
In the platform, much of the current automation is around the generation of transform code. Users can create workflows that take in information about the transforms they need, generate the code, and make it available for download. In addition, we can run fuzz testers and similar tools against existing code. Lastly, we can compile code. Bringing this all together, users have the ability (with configuration) to run a workflow that pulls in new code, generates transform code, runs fuzz testing against the existing code, and then compiles the existing code with the transform code to output a binary the user can download, all from a simple push of new commits to a git repo.
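Chained together as pipeline stages, that workflow might look roughly like this. Every stage, job, and script name below is hypothetical, not the platform's real configuration; the shape just mirrors the sequence described above.

```yaml
stages:
  - generate
  - fuzz
  - build

generate_transforms:
  stage: generate
  script: ./generate_transforms.sh      # placeholder for the generation workflow
  artifacts:
    paths: [transforms/]

fuzz_existing_code:
  stage: fuzz
  script: ./run_fuzzers.sh src/         # placeholder fuzz-testing step

compile_binary:
  stage: build
  script: ./compile.sh src/ transforms/ # placeholder compile step
  artifacts:
    paths: [output/app.bin]             # the binary the user downloads
```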