Working with a wide variety of customers and technologies often brings interesting challenges and stories that usually end up buried in a support ticket, never to see the light of day. After a curious ticket regarding the integration of our product into a BitBucket pipeline, we asked WeTek if they would like to contribute an article about this particular problem. Well, here it is: a great article highlighting the subtleties that can trip any of us up!

Author: Erik Müller

If you are like us at WeTek, then you want to stay on top of the latest and greatest the world of IT has to offer. Unfortunately, that sometimes means facing challenges on roads not yet taken by others.

Our issue came when we wanted to migrate our CI and build server from Jenkins to BitBucket Pipelines, a relatively new CI/CD tool that comes directly from Atlassian.

Now, operating BitBucket Pipelines is a fairly straightforward process – you open up your repo, click the Pipelines link, and follow the instructions until you have a passing build. Then you just tweak the bitbucket-pipelines.yml file in your project root until it does what you want. That is, of course, until you get to any lesser-used specifics your build may require. In WeTek's case, that was mandatory code encryption, since the product we're working on is going to be sold to third-party clients.
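For reference, a minimal bitbucket-pipelines.yml looks something like the sketch below – the image and script lines are illustrative, not our actual configuration:

```yaml
# Minimal bitbucket-pipelines.yml sketch; image and commands are illustrative
image: php:7.1-cli            # any Docker image your build needs

pipelines:
  default:
    - step:
        script:               # each line runs as a shell command in the container
          - composer install
          - vendor/bin/phpunit
```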

On our old build server, the entire setup and inclusion of the ionCube encoder was quite easy: simply download the encoder, activate it, link it with Jenkins, and boom, your environment is ready. The initial setup on Pipelines was pretty much the same, only on Pipelines we didn't actually have access to the machine where the magic happened. After running a few test builds without ionCube, we came to realize that Pipelines are not persistent, and that ionCube would need to be activated and then deactivated during each run, since ionCube has a machine-bound licensing system. What actually happens inside is that Pipelines spins up a Docker container, clones your code into it, runs the script you defined, and, when everything is done, stops and removes the container it used.
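Conceptually, that means every run needs a step shaped roughly like the one below. The wrapper scripts are hypothetical placeholders – the actual activation and deactivation commands depend on your ionCube licence type and encoder version:

```yaml
- step:
    script:
      # Hypothetical wrappers around ionCube's licensing commands:
      - ./ci/activate-ioncube.sh     # bind the machine-locked licence to this container
      - ./ci/encode-sources.sh       # run ioncube_encoder.sh over the code
      - ./ci/deactivate-ioncube.sh   # release the licence before the container is destroyed
```

One practical wrinkle: the deactivation has to run even when the encoding step fails, otherwise the licence stays bound to a container that no longer exists.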

After testing the flow locally, we found the idea to be sound and very manageable, so we decided to try and make it run on BitBucket.

The first issue we ran into was how to get the encoder into the container in the first place. There are many ways to do it, but we opted for pushing the files into our repo and taking it from there.

Only, immediately after solving one issue, we ran into another – whenever we called ionCube to activate, the build would simply fail on that step with no error or explanation displayed. After a few days of moving things around, we (somewhat) managed to get to the root of the problem: whenever ionCube tried to activate, it would segfault, with no additional explanation provided. The next thing we did was contact Atlassian support, who told us to try debugging our pipelines locally. Although we had already tested with a clone of their environment and found everything to be in order, we gave it one more shot. The only difference this time was that we cloned the code from inside the container, instead of mounting the project root directory into it.
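That difference matters more than it sounds. Roughly (the image name and repo URL are placeholders):

```sh
# Mounting the project root: the container sees the files exactly as they
# sit in your local working tree.
docker run -it --rm -v "$PWD":/build our-build-image /bin/bash

# Cloning inside the container: the checkout is re-done from the repo,
# which is what Pipelines itself does on every run.
docker run -it --rm our-build-image \
  git clone https://bitbucket.org/ourteam/our-project.git /build
```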

And lo and behold, the exact same error popped up.

This already gave us some insight into why it might be happening, so we went and checked the .gitattributes to see if it handled everything correctly. From the looks of it, everything was in order, so we tried removing and re-adding the encoder to see what would happen. Git didn't show any LF-CRLF (or vice versa) warnings, so we assumed the push went through without problems. We ran the pipeline again and got the same error. At this point, either it was, for some reason, impossible to do encryption within Pipelines, or we were over-thinking things. It turned out to be the latter. We started trying various things, including archiving the entire part of our repo that dealt with ionCube, and after modifying the .yml to handle the extraction on-site, a build miraculously passed.
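The workaround itself was small. Locally, pack the encoder and commit the archive instead of the loose files; then let the pipeline unpack it (paths are illustrative):

```sh
# Locally: replace the loose encoder files with a single archive.
# The archive's contents are opaque to git, so no text conversion can touch them.
tar -czf ioncube.tar.gz ioncube/
git rm -r --cached ioncube/
git add ioncube.tar.gz && git commit -m "Ship the ionCube encoder as an archive"
```

```yaml
- step:
    script:
      - tar -xzf ioncube.tar.gz   # unpack the encoder inside the build container
      # ...then run ioncube_encoder.sh as before
```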

We were then certain that the issue was the encoder files getting corrupted on their way into the repo. Still curious, we tried to understand what had actually happened.

Not to get too much into unnecessary details – the trick is that ioncube_encoder.sh just passes everything on to the actual encoder binaries, which ship without any file extension. In our case, all extension-less files were handled as text, which meant git's line-ending conversion silently mangled the binaries on their way into the repo – hence the segfault. The solution was to either add a direct path for ionCube's binaries in .gitattributes so they would be treated as binary, or to pack the entire encoder into a .tar.gz and handle the extraction in the pipeline.
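For completeness, the .gitattributes fix would be along these lines – the path reflects a typical encoder layout and may differ in your tree:

```
# Make sure git never treats the extension-less encoder binaries as text
ioncube/ioncube_encoder* binary
# or, more bluntly, everything under the encoder directory:
ioncube/** binary
```

The built-in binary attribute is shorthand for -diff -merge -text, and the -text part is exactly what switches off the line-ending conversion.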

For the sake of simplicity, we at WeTek chose the archiving route, as it meant fewer things to track in the repo as the project grew.

In retrospect, the problem seems simple enough, but even a simple problem can confound you when it involves something you normally handle automatically, day in and day out – so automatically that you stop paying attention when a potential issue emerges.

TL;DR

Be careful how .gitattributes handles the ionCube encoder binaries, as they ship without file extensions. Either archive all the encoder files and handle the extraction in the pipeline, or add a direct path for the binaries so they are treated as binaries.
