A One Billion Row Challenge implementation in .NET is faster than Java, and even faster than C++, with realistic input data. Last week, GitHub exploded with The One Billion Row Challenge started by Gunnar Morling. As of this writing, I have authored the fastest managed 1BRC implementation, one that performs well not only on the specific dataset everyone was optimizing for but also on more generic data.

In the Results section below, I present timings for different languages and datasets. In My #1BRC journey, I show the history of my optimizations and a performance timeline. Then I discuss why .NET is fast and easy to use for this kind of code. Finally, I describe how I write High-performance .NET code as a daily routine at my work and invite you to apply to us if you are interested in modern and fast .NET.

In addition to writing the code, I set up a dedicated benchmarking server in my homelab. It runs at a fixed CPU frequency and produces very stable results. I put a lot of effort into comparing the performance of different implementations. For .NET and Java, I measured both JIT and AOT performance of the same code.

Probably as expected, C++ is the fastest on the default dataset. However, the small gap between C++ and both .NET and Java was less expected, even by me. As for Rust, it will very likely become the overall leader; we just need to wait for a correct implementation, which did not exist at the time of writing. In the end, all results should converge to some physical limit with ideal CPU utilization, and then the interesting question will be at what cost such code was developed. For me, it was quite easy to reach the current point, and the code is very simple.

The default data generator produces a small number of station names, with a maximum length below the AVX vector size. Both of these properties allow for many extreme performance gains (see the sketch after the dataset list below). However, the specs say that there may be up to 10K unique stations with names up to 100 UTF-8 bytes long. To make a fairer comparison, I used two datasets:

- The original, default one, generated with create_measurements.sh. It is 12+ GB in size and has only 416 station names with a maximum length of 26 characters.
- The extended one, created with Marko Topolnik's more generic generator.
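To illustrate why short names matter: when every station name fits in a single 32-byte AVX2 register, comparing two names becomes a single SIMD operation instead of a byte-by-byte loop. Below is a minimal sketch of this idea in C#; it is not the actual code from my implementation, and the `NamesEqual32` helper and the 32-byte zero-padding assumption are mine.

```csharp
using System;
using System.Runtime.Intrinsics;

static class NameCompareSketch
{
    // A minimal sketch: both names are assumed to be zero-padded to 32 bytes,
    // which is possible for the default dataset because its longest name is
    // 26 chars, below the 32-byte Vector256 width. Equality is then a single
    // SIMD comparison.
    static bool NamesEqual32(ReadOnlySpan<byte> a, ReadOnlySpan<byte> b)
    {
        // Vector256.Create reads exactly 32 bytes from each span.
        Vector256<byte> va = Vector256.Create(a);
        Vector256<byte> vb = Vector256.Create(b);
        return va == vb; // true only if all 32 bytes match
    }
}
```

With up to 10K stations and names of up to 100 UTF-8 bytes, a name no longer fits in one vector, so this shortcut (and several similar ones) stops applying directly, which is exactly why the extended dataset is the fairer test.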