Harry Logger and the Metrics’ Stone

Introduction

As Algolia grows, we need to reconsider the legacy systems we have in place and make them more reliable. One of those systems was our metrics pipeline. Each time a user calls the Algolia API, whether the operation involves search or indexing, it generates multiple log lines.

Those logs contain information about each query that, when harvested, can yield insightful results. From them we compute metrics: all the numbers you can find in the Algolia dashboard, like average search time or which user agents are querying an index. The metrics are also used for billing, as they count the number of objects and the number of operations our customers perform.

As we are big fans of Harry Potter, we nicknamed this project “Harry Logger”.

The Chamber of Logs

The first thing we had to do was transfer the logs from the API machines to our metrics platform, in a resilient way and using as few resources as possible. The old system was doing a fine job, but it worked by having a centralized system pull the logs from each machine. We wanted to move to a push strategy using a producer/consumer pattern.

This shift enabled us to do 2 things:

  • Replicate the consumers on multiple machines
  • Put a retry strategy closer to the logs, in the producer

We needed something that worked reliably and cleanly, so we asked Dobby to do the job. For performance reasons, Dobby was developed in Golang.
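
Dobby's source isn't shown here, but as an illustration, here is a minimal sketch of what such a producer could look like: it reads log lines from stdin, batches them, and pushes each batch to a consumer endpoint over HTTP, retrying with backoff so a transient failure doesn't lose logs. The endpoint, batch size, and retry policy are made up for the example; they are not Dobby's actual values.

    // A minimal sketch of a log producer: read log lines from stdin, batch
    // them, and push each batch to a (hypothetical) consumer endpoint over
    // HTTP, retrying with exponential backoff.
    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "net/http"
        "os"
        "strings"
        "time"
    )

    const (
        consumerURL = "http://metrics-consumer.internal/logs" // hypothetical endpoint
        batchSize   = 1000
        maxRetries  = 5
    )

    // pushBatch sends one batch of log lines, retrying with exponential backoff.
    func pushBatch(lines []string) error {
        payload := []byte(strings.Join(lines, "\n"))
        backoff := time.Second
        var lastErr error
        for attempt := 0; attempt < maxRetries; attempt++ {
            resp, err := http.Post(consumerURL, "text/plain", bytes.NewReader(payload))
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode < 300 {
                    return nil
                }
                lastErr = fmt.Errorf("consumer returned %s", resp.Status)
            } else {
                lastErr = err
            }
            time.Sleep(backoff)
            backoff *= 2
        }
        return lastErr
    }

    func main() {
        scanner := bufio.NewScanner(os.Stdin)
        batch := make([]string, 0, batchSize)
        for scanner.Scan() {
            batch = append(batch, scanner.Text())
            if len(batch) == batchSize {
                if err := pushBatch(batch); err != nil {
                    fmt.Fprintln(os.Stderr, "dropping batch after retries:", err)
                }
                batch = batch[:0]
            }
        }
        if len(batch) > 0 {
            if err := pushBatch(batch); err != nil {
                fmt.Fprintln(os.Stderr, "dropping batch after retries:", err)
            }
        }
    }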

The Prisoner of SaaS

Our second job was to compute metrics on top of those logs. Our old system was a monolithic application that ran on one machine, meaning it was a single point of failure (SPOF). We wanted the new version to be more reliable, maintainable and distributed.

As SaaS is in our DNA, we went to various companies that specialize in processing metrics based on events (a log line, in our case). All the solutions we encountered were top notch, but the quantity of data we generate on a daily basis presented an issue. As of today, we generate around 1 billion lines of logs per day, which represents 2TB of raw data. No vendor was able to handle it. At this point, we were back to square one.

The Streams of Fire

After much consideration, we concluded that we had to build our own system. As the logs are a stream of lines, we decided to design our new system to compute metrics on top of a stream of data (and it’s trendy to do stream processing).

We tried the typical architecture:
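
In broad strokes, that typically means log producers feeding a message queue, with stream-processing workers consuming it to aggregate the metrics and write them to storage. As a rough sketch of the consuming side, assuming lines arrive on a channel and we count operations per application in one-minute tumbling windows (the field names, the window size, and the channel plumbing are illustrative assumptions, not our actual setup):

    // A rough sketch of the stream-processing side: consume parsed log lines
    // and count operations per application in one-minute tumbling windows.
    package main

    import (
        "fmt"
        "time"
    )

    // LogLine is a simplified view of one API log entry.
    type LogLine struct {
        AppID     string
        Operation string // e.g. "search" or "indexing"
        Timestamp time.Time
    }

    type windowKey struct {
        AppID     string
        Operation string
        Window    time.Time // start of the one-minute window
    }

    // aggregate consumes the stream and periodically emits counters downstream.
    func aggregate(in <-chan LogLine, out chan<- map[windowKey]int) {
        counts := make(map[windowKey]int)
        ticker := time.NewTicker(time.Minute)
        defer ticker.Stop()
        for {
            select {
            case line, ok := <-in:
                if !ok {
                    out <- counts
                    close(out)
                    return
                }
                key := windowKey{
                    AppID:     line.AppID,
                    Operation: line.Operation,
                    Window:    line.Timestamp.Truncate(time.Minute),
                }
                counts[key]++
            case <-ticker.C:
                // Flush the accumulated windows downstream (storage, billing, etc.).
                out <- counts
                counts = make(map[windowKey]int)
            }
        }
    }

    func main() {
        in := make(chan LogLine)
        out := make(chan map[windowKey]int)
        go aggregate(in, out)

        // Feed a couple of fake log lines, then close the stream.
        in <- LogLine{AppID: "APP1", Operation: "search", Timestamp: time.Now()}
        in <- LogLine{AppID: "APP1", Operation: "indexing", Timestamp: time.Now()}
        close(in)

        for counts := range out {
            fmt.Println(counts)
        }
    }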

As we didn't want to host and maintain this architecture ourselves (we have enough API servers to maintain), we decided to consider a cloud provider. We found every tool we needed off the shelf, which meant we'd have less software to operate and maintain. As always, the issue was price: this streaming platform was 100 times more expensive than our old system. We had a lot of back-and-forth with our cloud provider to try to reduce the cost, but unfortunately, it was baked into the design of their system. Once again, we were back to square one.

The Half-Blood Batches

During our tests, we found that the stream-processing software we were using could also work in batch mode. Not trendy, but maybe it was a way to fix our pricing issue? Just by switching to batch mode, the price was reduced by a factor of 10! We only needed an orchestrator to launch the batches:
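
A minimal sketch of that orchestrator idea, assuming one batch job per hourly slice of logs, retried a few times on failure (the metrics-batch command and its --partition flag are hypothetical):

    // A minimal sketch of a batch orchestrator: every hour, launch the batch
    // job on the previous hour's slice of logs, and retry the job if it fails.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // runBatch launches one batch computation for the given hour, retrying a few times.
    func runBatch(hour time.Time) error {
        var err error
        for attempt := 1; attempt <= 3; attempt++ {
            // Hypothetical batch binary taking the partition to process as an argument.
            cmd := exec.Command("metrics-batch", "--partition", hour.Format("2006-01-02T15"))
            if err = cmd.Run(); err == nil {
                return nil
            }
            log.Printf("batch for %s failed (attempt %d): %v", hour, attempt, err)
            time.Sleep(time.Minute)
        }
        return err
    }

    func main() {
        for {
            // Wait for the current hour to end, then process it.
            nextHour := time.Now().Truncate(time.Hour).Add(time.Hour)
            time.Sleep(time.Until(nextHour))

            previous := nextHour.Add(-time.Hour)
            if err := runBatch(previous); err != nil {
                log.Printf("giving up on partition %s: %v", previous, err)
            }
        }
    }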

After some development of a reliable orchestrator, we had a fully working system, but it was still 50% more expensive than we had envisioned.

The Order of DIY

One of our engineers, a bit fed up with the amount of time we were taking to optimize the system, decided to do a proof of concept without using any framework or PaaS software. After a few days of coding, he managed to develop a prototype that suited our needs, was reliable, and had a running cost 10 times lower than the batch version!
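
The prototype itself isn't shown here, but in spirit it boiled down to reading the log lines and maintaining counters directly, with no framework in between, then flushing them to a key/value store. A simplified sketch, with the storage hidden behind an interface and an in-memory stand-in so the example stays self-contained (the real prototype ultimately relied on our PaaS provider's multi-master key/value storage):

    // A simplified sketch of the framework-free approach: read log lines,
    // keep counters in memory, and periodically flush them to a key/value
    // store hidden behind a small interface.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
        "sync"
    )

    // KVStore is the minimal storage contract the prototype needs.
    type KVStore interface {
        Increment(key string, delta int) error
    }

    // memoryStore is an in-memory stand-in for the real key/value backend.
    type memoryStore struct {
        mu     sync.Mutex
        counts map[string]int
    }

    func (s *memoryStore) Increment(key string, delta int) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.counts[key] += delta
        return nil
    }

    // flush pushes accumulated counters to the key/value store.
    func flush(store KVStore, local map[string]int) {
        for key, n := range local {
            if err := store.Increment(key, n); err != nil {
                fmt.Fprintln(os.Stderr, "flush error:", err)
            }
        }
    }

    func main() {
        store := &memoryStore{counts: make(map[string]int)}
        local := make(map[string]int)

        // Each log line is assumed to look like "<appID> <operation> ...".
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) < 2 {
                continue
            }
            local[fields[0]+":"+fields[1]]++

            // Flush once enough distinct counters have accumulated locally.
            if len(local) >= 10000 {
                flush(store, local)
                local = make(map[string]int)
            }
        }
        flush(store, local)
        fmt.Println(store.counts)
    }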

The Tale of the Wealthy Bard

Migrating our log processing toolchain yielded many valuable outcomes for our team. In addition to improving the reliability of our toolchain and its ability to evolve, we also deepened our internal knowledge of our PaaS provider. The process also helped us identify deployment pain points that we will address later this year.

Finally, we iterated by using and composing different solutions to the same problem. The best solution, for us, was the one that made the fewest compromises while staying within our budget. In our case, we managed to get both by ultimately keeping only the multi-master key/value storage of our PaaS provider.
