Of lower overheads and territories of comfort

Unknown to many, Crisp has been on autopilot since January. We've moved on to other things, while keeping tabs on certain internal changes.

A recent change in my own situation has finally let some long-pending changes surface. I've been working, on and off, on migrating the data pipeline, one of the two "close to heart" components of Crisp.

Late last week, I sat down to migrate the core of the pipeline to C++, and performance has been much along expected lines. Last September, when I repurposed most of the content fetcher, we got the running time down to 50s. While that was good enough to run as a cloud function, the runtime overheads remained a point of contention.

Now, lo and behold, a loaded 1GB single-core box comes in at a worst case of 7s; the same box that gave 50s in September. With a best case averaging 1.5s when a second core is available and the DB calls are left out, this is significant in one sense.
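For the curious, here's roughly how such timings could be taken. This is only a minimal sketch: `process_item` is a hypothetical stand-in for the actual fetch-and-parse work (not the real Crisp fetcher), and the batch is split across two workers with `std::async` to mirror the second-core case.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for one unit of fetch + parse work in a pipeline.
// It just burns some CPU so the timing harness has something to measure.
static std::size_t process_item(const std::string& url) {
    std::size_t checksum = 0;
    for (int i = 0; i < 200000; ++i) {
        checksum += url.size() * static_cast<std::size_t>(i) % 7919;
    }
    return checksum;
}

int main() {
    std::vector<std::string> items(500, "https://example.com/feed");

    const auto start = std::chrono::steady_clock::now();

    // Split the batch across two workers, mirroring the "second core" case.
    const std::size_t mid = items.size() / 2;
    auto second_half = std::async(std::launch::async, [&] {
        std::size_t acc = 0;
        for (std::size_t i = mid; i < items.size(); ++i) {
            acc += process_item(items[i]);
        }
        return acc;
    });

    std::size_t acc = 0;
    for (std::size_t i = 0; i < mid; ++i) {
        acc += process_item(items[i]);
    }
    acc += second_half.get();

    const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << "processed " << items.size() << " items in "
              << elapsed.count() << " ms (checksum " << acc << ")\n";
    return 0;
}
```

On a single-core box the second worker simply interleaves with the first; with two cores the halves run in parallel, which is where the best-case numbers come from.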

While the question of how this matters in autopilot mode lingers, another one gets an answer: I've finally been able to work at a fine-grained level of control and see the improvements. Add to that the positive push it has given to my long-standing urge to work with C++ and other lower-overhead choices.