In this presentation, Antonin Mellier and Nicolas Besin explain why SNCF, the French National Railway Company, chose HAProxy as a core element of the custom CDN it built to replace Akamai. By having HAProxy as the entry point and exit point of their CDN, they are able to offload SSL encryption, gain invaluable insight into errors and cache hit ratios, and accelerate troubleshooting. They also use HAProxy for server persistence with cookies, weighted routing, and detection of abnormal user behavior.

Transcript
Hello, today we are sharing how, at Oui.sncf, we built our own CDN with HAProxy. My name is Antonin Mellier and I am a technical architect at E.Voyageurs SNCF. And my name is Nicolas Besin and I am also a technical architect at E.Voyageurs SNCF.


Then, Willy came to our office in Lille and presented HAProxy to us. After that, we added the HAProxy statistics page and logs. We didn’t talk about performance or load balancing capabilities; we just talked about monitoring and observability. Just by looking at the HAProxy logs and stats, we were able to understand the issues we had in production and start working on how to fix them.
After that, we added HAProxy in front of all of our applications. Today, we have more than 100 applications in production with HAProxy in front of them. We have also added HAProxy in front of our LDAP servers, SMTP servers, and some of our databases. More recently, we have chosen HAProxy as the control plane for our Kubernetes clusters. But today we are not here to tell you how we use HAProxy on our on-premises infrastructure. We will tell you why, in 2014, we decided to use HAProxy in our CDN and why we built our own CDN to replace our previous Akamai solution.

Using a CDN offers many benefits. It helps minimize the delay in loading web page content by reducing the physical distance between the user and the server. With its caching function, it can respond to end-user requests in place of the origin. For a website like ours, only 10% of requests reach the origin server. It also provides protection against high traffic peaks caused by marketing events or DDoS attacks, for example. Finally, it saves money: CDN bandwidth costs less than traditional hosting bandwidth.

Once the IP address is retrieved, the user is able to connect to one of our edge servers. The main purpose of an edge server is to cache static resources. If the edge server has the resource in cache, it delivers it to the client. Otherwise, it contacts the origin server to get it. The origin sits in the datacenter hosting the client application.
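To make the edge behavior concrete, here is a minimal Python sketch of the cache-or-fetch decision. It only illustrates the flow; the origin URL, the in-memory dictionary, and the fixed TTL are assumptions made for the example, not how our edges actually cache.

```python
import time
import urllib.request

# Illustrative in-memory cache: {path: (expiry_timestamp, body)}.
# A real edge relies on dedicated cache software; this only shows the decision flow.
CACHE = {}
ORIGIN_URL = "https://origin.example.com"   # hypothetical origin
DEFAULT_TTL = 300                           # seconds

def serve(path: str) -> bytes:
    """Return the resource from cache if it is still fresh, otherwise fetch it from the origin."""
    entry = CACHE.get(path)
    if entry and entry[0] > time.time():
        return entry[1]                     # cache hit: answer without touching the origin
    with urllib.request.urlopen(ORIGIN_URL + path) as resp:   # cache miss
        body = resp.read()
    CACHE[path] = (time.time() + DEFAULT_TTL, body)
    return body
```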


First of all, let me say a few words about how we chose our hosting providers. For performance reasons, we wanted dedicated hardware, so we were looking for hosting providers that could offer us bare-metal servers with dedicated pools of public IPs. As we are a small CDN, we can afford not to share IPs between our different clients. We were also looking for providers that could offer us DDoS protection and guaranteed bandwidth. Our hosting providers must have good network connectivity with consumer (B2C) operators and with our datacenter’s ISP. And finally, we wanted to be able to install, configure, and manage the operating system ourselves. Keeping all those prerequisites in mind, we chose four providers spread across six datacenters: OVH and Online, which are major European providers, and BSO and Iguane Solutions, which are less well-known providers but have very good network connectivity.

So, after studying several products, we chose GeoDNS, written in Go by Ask Bjørn Hansen. This product powers the NTP Pool system. With this product, DNS zones are described as JSON files, so to update a zone, you just have to upload the JSON file and the server automatically reloads the configuration. In our case, hot reloading is a very important feature because we have to update the DNS configuration frequently, for example when we detect problems, when we plan a maintenance operation, or when we add new clients.
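As a rough illustration of this hot-reload workflow, the sketch below assembles a zone and writes it out atomically so the running server can pick it up. The zone directory, zone name, and simplified record structure are assumptions made for the example; the exact JSON schema should be checked against the GeoDNS documentation.

```python
import json
import os

ZONE_DIR = "/etc/geodns/dns"          # assumed zone directory
ZONE_FILE = "cdn.example.com.json"    # hypothetical zone

# Simplified, illustrative zone structure: labels mapped to weighted A records.
# The exact schema expected by GeoDNS should be verified against its documentation.
zone = {
    "serial": 2019102101,
    "data": {
        "": {"ns": ["ns1.cdn.example.com.", "ns2.cdn.example.com."]},
        "www": {"a": [["192.0.2.10", 100], ["198.51.100.10", 100]]},
    },
}

def publish(zone: dict) -> None:
    """Write the zone atomically so the server never reads a half-written file."""
    target = os.path.join(ZONE_DIR, ZONE_FILE)
    tmp = target + ".tmp"
    with open(tmp, "w") as fh:
        json.dump(zone, fh, indent=2)
    os.rename(tmp, target)            # the server then reloads the updated zone

publish(zone)
```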


The last thing I would like to say is that the last component an HTTP request passes through is HAProxy. By having HAProxy as the entry point and exit point of our CDN, we have a standardized view of our inputs and outputs. So, it’s easy for us to determine error rates and cache hit ratios, and when we have problems, it’s easy for us to determine whether an error came from our servers or from the origins.
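As a small illustration of what this standardized view makes possible, here is a sketch that computes an error ratio and a cache hit ratio from already-parsed log records. The field names and the backend naming convention are assumptions made for the example, not our actual log format.

```python
from typing import Iterable

# Each record is assumed to be an already-parsed HAProxy HTTP log line,
# e.g. {"status": 200, "backend": "cache_static"}; the field names are illustrative.
def ratios(records: Iterable[dict]) -> dict:
    total = errors = cache_hits = 0
    for rec in records:
        total += 1
        if rec["status"] >= 500:
            errors += 1                       # error produced by us or by the origin
        if rec["backend"].startswith("cache_"):
            cache_hits += 1                   # served from the edge cache, origin untouched
    return {
        "error_ratio": errors / total if total else 0.0,
        "cache_hit_ratio": cache_hits / total if total else 0.0,
    }

print(ratios([{"status": 200, "backend": "cache_static"},
              {"status": 502, "backend": "origin_www"}]))
```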


For each application, we have set up an HTTP health check, so if we lose a datacenter, all the traffic is automatically redirected to the other. We can also route requests with rules based on access paths by using ACLs. We provide a media server that allows contributors to easily publish content such as images for email campaigns or large videos for the website; HAProxy uses ACLs to redirect requests to this server. With the same mechanism, we can also manage the parts of the website that are hosted in a public cloud such as AWS.
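To give a rough idea of what this looks like in HAProxy terms, here is a sketch that renders a configuration fragment from a Jinja template in Python, the same template-driven approach described later for our Ansible-generated configuration. The frontend and backend names, paths, addresses, certificate path, and health-check URL are all hypothetical.

```python
from jinja2 import Template

# Hypothetical names, paths, and addresses; in reality these values come from our CMDB.
HAPROXY_FRAGMENT = Template("""
frontend fe_www
    bind :443 ssl crt /etc/haproxy/certs/www.pem
    acl is_media path_beg {{ media_path }}
    use_backend be_media if is_media
    default_backend be_origin

backend be_origin
    option httpchk GET {{ health_path }}
    http-check expect status 200
    {% for srv in origins %}
    server {{ srv.name }} {{ srv.addr }}:443 ssl verify none check
    {% endfor %}

backend be_media
    server media1 10.0.0.50:8080 check
""", trim_blocks=True, lstrip_blocks=True)

print(HAPROXY_FRAGMENT.render(
    media_path="/media",
    health_path="/healthcheck",
    origins=[{"name": "dc1", "addr": "203.0.113.10"},
             {"name": "dc2", "addr": "203.0.113.20"}],
))
```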



Flume is in charge of duplicating the logs to our on-premises datacenters. It handles encryption and buffers locally in case of problems or latency. Once the logs arrive on our on-premises infrastructure, they are stored in Kafka topics. Then, they are consumed and stored in Elasticsearch for real-time analysis and in Hadoop for long-term analysis. Before being stored in Elasticsearch, each log line is parsed and each field is named and typed. We have also developed Spark jobs that read the raw logs in Hadoop, aggregate them, and then store the results in Elasticsearch. With this aggregated data, we are able to perform long-term analysis of our application usage.
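To illustrate the parsing step, here is a small sketch that names and types the fields of a log record before it is indexed. The simplified field layout is an assumption made for the example, not our exact log format, and the Kafka and Elasticsearch wiring is left out.

```python
import datetime

# Assumed, simplified log layout: "timestamp client_ip frontend backend status bytes duration_ms".
# The real HAProxy log format carries many more fields; this only shows naming and typing.
FIELDS = (
    ("timestamp", datetime.datetime.fromisoformat),
    ("client_ip", str),
    ("frontend", str),
    ("backend", str),
    ("status", int),
    ("bytes", int),
    ("duration_ms", float),
)

def parse(line: str) -> dict:
    """Turn a raw log line into a named, typed document ready for indexing."""
    values = line.split()
    return {name: cast(value) for (name, cast), value in zip(FIELDS, values)}

doc = parse("2019-10-21T10:00:00 198.51.100.7 fe_www be_origin 200 5120 42.3")
print(doc)
```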


That’s the reason why, last summer, we decided to implement a new solution based on HAProxy’s peers mechanism. This solution is currently being tested in our staging environment. With the new solution, every stick table is pushed to our monitoring servers, and because the communication between peers is not encrypted, we have added a dedicated frontend and backend that are in charge of encrypting and decrypting the traffic. With this new solution, on our monitoring server we have one stick table per edge server. We considered different solutions for collecting the centralized stick table data and finally decided to use a Python script that reads the centralized stick tables and exposes them as Prometheus metrics.
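Here is a hedged sketch of what such an exporter could look like, assuming HAProxy’s runtime API is reachable on a Unix admin socket and that the table stores an HTTP request rate. The socket path, table name, metric name, and entry parsing are simplified assumptions, not our production script.

```python
import re
import socket
import time

from prometheus_client import Gauge, start_http_server

SOCKET_PATH = "/var/run/haproxy.sock"     # assumed admin socket
TABLE = "st_requests"                     # hypothetical stick table name

HTTP_REQ_RATE = Gauge("haproxy_sticktable_http_req_rate",
                      "Per-client HTTP request rate read from the aggregated stick table",
                      ["client_ip"])

def show_table() -> str:
    """Query the HAProxy runtime API over the admin socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall(f"show table {TABLE}\n".encode())
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode()

def scrape() -> None:
    # Entries look roughly like: "0x...: key=198.51.100.7 ... http_req_rate(10000)=42"
    for line in show_table().splitlines():
        match = re.search(r"key=(\S+).*http_req_rate\(\d+\)=(\d+)", line)
        if match:
            HTTP_REQ_RATE.labels(client_ip=match.group(1)).set(int(match.group(2)))

if __name__ == "__main__":
    start_http_server(9101)               # Prometheus scrapes this port
    while True:
        scrape()
        time.sleep(15)
```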


I didn’t say it before, but all of this configuration is generated by Ansible during our installation or update process, and all the information comes from our CMDB. So even if this configuration seems complex, it is in fact just a simple Jinja template rendered with Python.


On the other side, there are many advantages. We were a small team, just four people, and we built an entire infrastructure from scratch. It was, and still is, a big technical challenge. By having to manage all the components of the infrastructure ourselves, we have all increased our technical skills. Our clients use the monitoring and metrics that we provide to them daily; it improves their diagnostic capabilities and reduces incident duration. We all know exactly how our platforms behave.
And finally, the solution we implemented costs less than a third of what we paid before with Akamai. When we launched our CDN solution, there were 18 applications on it. Today, there are about 40. During this period, our CDN bandwidth increased by more than 80%, all with the same infrastructure running cost.