HAProxy is attracting more and more contributors. That’s a good thing. A side effect is that the maintainer (myself) spends quite some time reviewing submissions. I wanted an opportunity to talk with various contributors to give them more autonomy, to present how haproxy works internally, how it’s maintained, what is acceptable and what is not, and more generally to get their feedback as contributors.
Since I had never done this before, I didn’t want to force people to come from far away in case it turned out to be a failure, so for this first round I contacted only local contributors, and we spoke French so that everyone would be totally at ease. A few of them couldn’t attend, but no fewer than 8 people answered the call! Given that our meeting room in Jouy-en-Josas is too small for such a team, we started to consult a few partners. Zenika was kind enough to respond immediately (phone call in the evening, 3 proposals the next morning, who can beat that?).
So Baptiste, Emeric, William, Thierry, Cyril, Christopher, Emmanuel and I met in one of Zenika’s training rooms in Paris last Friday. The place was obviously much better than our meeting room: large, fully equipped, and quiet, and we could spend the whole day there chatting and presenting stuff.
I talked a lot. People always say I talk a lot anyway, so I guess nobody was surprised. I presented the overall internal architecture. It was not in great detail, but I know the attendees are skilled enough to find their way through the code with these few entry points. What matters to me is that they know where to start from. Emeric talked a bit about the peers protocol. Cyril proposed that the HTML version of the doc be integrated into the official web site instead of being an external link. Then Christopher presented the filters, how they work, and the choices he had to make. William explained some limitations he faced with the current design, and there was a discussion on the best ways to overcome them. In short, some hooks need to be added to the filters, and probably an analyzer mask as well. Then Thierry talked about various stuff such as lunch, Lua, lunch, maps, lunch, stats and how he intends to try to exploit the possibilities offered by the new filters. He also talked about lunch. He explained how he managed to implement some inter-process stats aggregation in Lua, which may deserve a rewrite in C.
It was also interesting to discuss the opportunity to use filters to develop the small stupid RAM-based cache that has been on the roadmap for a few years (the “favicon cache” as I often call it). Thierry explained his first attempt at doing such a thing in Lua and the shortcomings he faced, partly due to the Lua implementation and partly due to the limited usefulness of a cache that ignored the Vary header. He also complained about the limits of such a permissive language when it comes to refactoring existing code.
Emmanuel explained that for his use case (haproxy serves as an SSL offloader in front of Varnish), even a small object cache would bring very limited benefit, and that he would probably not use it this way as he prefers to run haproxy in plain TCP mode and deal with HTTP in a single place. We suggested he run a test with HTTP multiplexing enabled between haproxy and Varnish (possible since 1.6) to estimate any possible performance gains compared to raw TCP. Emmanuel also discussed the possibility of exporting some histogram information for certain metrics (e.g. response sizes and times).
The question of how haproxy should make better use of the information it receives in the PROXY protocol header surfaced again, especially regarding SSL this time. It turns out that we almost froze the protocol some time ago and that everyone implemented it as specified, while haproxy itself skips the SSL parts. Something probably needs to be done; how is a different story.
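For readers unfamiliar with it, the version 1 header mentioned above is a single human-readable line prepended to the connection (the SSL details only exist as binary TLVs in version 2). A minimal sketch of parsing a v1 header might look like this; the function name and the returned dict layout are my own illustration, not anything from haproxy’s code:

```python
# Sketch: parse a PROXY protocol v1 header line, per the published
# PROXY protocol specification. v1 carries only addressing info;
# SSL details are conveyed only by v2 TLVs, which v1 cannot express.
def parse_proxy_v1(line: bytes) -> dict:
    if not line.startswith(b"PROXY ") or not line.endswith(b"\r\n"):
        raise ValueError("not a PROXY protocol v1 header")
    parts = line[:-2].decode("ascii").split(" ")
    # "PROXY UNKNOWN" means the sender has no address information
    if parts[1] == "UNKNOWN":
        return {"proto": "UNKNOWN"}
    proto, src, dst, sport, dport = parts[1:6]
    return {"proto": proto, "src": src, "dst": dst,
            "src_port": int(sport), "dst_port": int(dport)}

hdr = b"PROXY TCP4 192.168.0.1 10.0.0.1 56324 443\r\n"
print(parse_proxy_v1(hdr))
```

This also illustrates why the SSL question is awkward: anything SSL-related has to go through the v2 binary format, so it cannot simply be bolted onto the v1 line above.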
The issue of external library dependencies was brought up, such as Lua 5.3 and SLZ, which are not packaged in mainstream distros. There wasn’t broad support for including them in the source tree; the preference was rather to see them packaged and shipped by distros, even if that happens in unofficial repos.
I explained how I intend to chain two layers of streams belonging to the same session with a protocol converter in the middle to implement HTTP/2 to HTTP/1 gatewaying, and some of the issues that will come from doing this.
We also discussed what is still missing to go multithreaded. In short, still a lot, but good practices are already mandatory if we want to make our lives easier in the future.
Interestingly, most users there carry almost no local patches any more, except the usual few things that need to bake a bit before being submitted upstream. This is another sign that we need to make the code even easier for newcomers to deal with, to encourage users to develop their own code and submit it once they feel at ease with it.
Well, at the end of the day everyone seemed very satisfied and expressed interest in doing this again, if possible at the same place (the venue is nice and easily accessible, and the people were really kind to us).
We learned quite a bit for the next rounds. First, everyone must participate, and it seems that 10 people is the maximum for a workshop. We need to take breaks as well. Next time we’ll have to be better organized (though everyone was good at improvising). We should prepare some rough presentations and ensure everyone has enough time to present their stuff. It’s also possible that we’d need a first part with everyone together and a second part split into small groups by centers of interest.
So thanks again to Zenika for helping us set this up, thanks to all participants for coming, now looking forward to doing this again with more people.