time to read 3 min | 473 words

In our profiling, we ran into an expensive mystery call to string.Join. Nothing in our code called it directly, yet it was expensive.

[profiler screenshot: the unexpected string.Join call]

Looking at the code didn’t get us anywhere, so I profiled things again, this time using the debug configuration. The nice thing about the debug configuration is that it doesn’t inline methods (among other things), so what gets executed is much closer to the actual code.

Here are the results:

[profiler screenshot: the debug-configuration profiling results]

And now it is clear what is going on.

Here is the offending line:

var tryMatch = _trie.TryMatch(method, context.Request.Path);

With context.Request.Path being a PathString, and TryMatch accepting a string, we are silently calling the implicit string operator, which just calls PathString.ToString(), which just calls ToUriComponent(), which looks like:

public string ToUriComponent()
{
    if (HasValue)
    {
        if (RequiresEscaping(_value))
        {
            // TODO: Measure the cost of this escaping and consider optimizing.
            return String.Join("/", _value.Split('/').Select(EscapeDataString));
        }
        return _value;
    }
    return String.Empty;
}

Looking into the current code, it has already been optimized by removing the Linq call and its associated costs, but it is still an expensive call to make.

The fix was to get the string directly:

var tryMatch = _trie.TryMatch(method, context.Request.Path.Value);
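To make the hidden conversion explicit, here is a minimal sketch (TryMatch's signature taking a string is an assumption; the implicit operator is the real PathString one):

PathString path = context.Request.Path;

// Implicit PathString -> string conversion: this calls ToUriComponent(), which
// may split, escape and re-join the path on every single request.
string escaped = path;

// Grabbing the raw value skips all of that work.
string raw = path.Value;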

And there it is: a 10% reduction in costs.

time to read 5 min | 867 words

For the past 6 years or so, Hibernating Rhinos has been using a licensing component that I wrote after being burned by trying to use a commercial licensing component. This licensing scheme is based around signed XML files that we send to customers.

It works, but it has a few issues. Sending files to customers turns out to be incredibly tricky. A lot of companies out there will flat out reject any email that has an attachment, so we ended up uploading the files to S3 and sending the customer a link instead. Dealing with files in this manner also means that there are a lot of relatively manual steps (take the file, rename it, place it in this dir, etc.), with all the attendant issues that they entail.

So we want something simpler.

Here is a sample of the kind of information that I need to pass in the license:

[screenshot: the license property bag]

Note that this is a pure property bag, no structure beyond that.

An obvious solution for that is to throw it in JSON, and add a public key signature. This would look like this:

{
  "id": "cd6fff02-2aff-4fae-bc76-8abf1e673d3b",
  "expiration": "2017-01-17T00:00:00.0000000",
  "type": "Subscription",
  "version": "3.0",
  "maxRamUtilization": "12884901888",
  "maxParallelism": "6",
  "allowWindowsClustering": "false",
  "OEM": "false",
  "numberOfDatabases": "unlimited",
  "fips": "false",
  "periodicBackup": "true",
  "quotas": "false",
  "authorization": "true",
  "documentExpiration": "true",
  "replication": "true",
  "versioning": "true",
  "maxSizeInMb": "unlimited",
  "ravenfs": "true",
  "encryption": "true",
  "compression": "false",
  "updatesExpiration": "2017-Jan-17",
  "name": "Hibernating Rhinos",
  "Signature": "XorRcQEekjOOARwJwjK5Oi7QedKmXKdQWmO++DvrqBSEUMPqVX4Ahg=="
}

This has the advantage of being simple, text friendly, and easy to send over email, paste, etc. It is also human readable (which means that it is easy for a customer to look at the license and see what is specified).

But it also means that a customer might try to change it (you'll be surprised how often that happens), and be surprised when this fails. Other common issues include only pasting the "relevant" parts of the license, or pasting invalid JSON because of email client issues. And that doesn't include the journey some licenses go through (they get pasted into Word, which adds smart quotes, into various systems, get sent around on various chat programs, etc.). If you have a customer name that includes Unicode, it is easy to lose that information, which results in an invalid license (it won't match the signature).

There is also another issue, of professionalism. Not so much how you act, but how you appear to act. Something like the license above doesn't feel right to certain people. It is a deviation from the common practice in the industry, and can send a message that you don't care too much about this. Which leads to wondering whether you care about the rest of the application…

Anyway, I'm not sure that I like it, for all of those reasons. A simple way to avoid this is to just BASE64 the whole string.
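A minimal sketch of that idea (assuming RSA signing and Json.NET for serialization; this is not the actual licensing code) could look like this:

public static string GenerateLicense(Dictionary<string, string> license, RSA privateKey)
{
    // Sign the license body (everything except the signature itself)
    var body = JsonConvert.SerializeObject(license);
    var signature = privateKey.SignData(
        Encoding.UTF8.GetBytes(body),
        HashAlgorithmName.SHA256,
        RSASignaturePadding.Pkcs1);

    license["Signature"] = Convert.ToBase64String(signature);

    // Base64 the signed JSON so it survives email clients, Word, chat programs, etc.
    var signedJson = JsonConvert.SerializeObject(license);
    return Convert.ToBase64String(Encoding.UTF8.GetBytes(signedJson));
}

The customer then gets a single opaque blob that can be pasted anywhere without smart quotes or Unicode mangling breaking the signature.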

What I want to have is something like this:

eyJpZCI6ImNkNmZmZjAyLTJhZmYtNGZhZS1iYzc2LThhYmYxZ
TY3M2QzYiIsImV4cGlyYXRpb24iOiIyMDE3LTAxLTE3VDAwOj
AwOjAwLjAwMDAwMDAiLCJ0eXBlIjoiU3Vic2NyaXB0aW9uIiw
idmVyc2lvbiI6IjMuMCIsIm1heFJhbVV0aWxpemF0aW9uIjoi
MTI4ODQ5MDE4ODgiLCJtYXhQYXJhbGxlbGlzbSI6IjYiLCJhb
Gxvd1dpbmRvd3NDbHVzdGVyaW5nIjoiZmFsc2UiLCJPRU0iOi
JmYWxzZSIsIm51bWJlck9mRGF0YWJhc2VzIjoidW5saW1pdGV
kIiwiZmlwcyI6ImZhbHNlIiwicGVyaW9kaWNCYWNrdXAiOiJ0
cnVlIiwicXVvdGFzIjoiZmFsc2UiLCJhdXRob3JpemF0aW9uI
joidHJ1ZSIsImRvY3VtZW50RXhwaXJhdGlvbiI6InRydWUiLC
JyZXBsaWNhdGlvbiI6InRydWUiLCJ2ZXJzaW9uaW5nIjoidHJ
1ZSIsIm1heFNpemVJbk1iIjoidW5saW1pdGVkIiwicmF2ZW5m
cyI6InRydWUiLCJlbmNyeXB0aW9uIjoidHJ1ZSIsImNvbXByZ
XNzaW9uIjoiZmFsc2UiLCJ1cGRhdGVzRXhwaXJhdGlvbiI6Ij
IwMTctSmFuLTE3IiwibmFtZSI6IkhpYmVybmF0aW5nIFJoaW5
vcyIsIlNpZ25hdHVyZSI6IlhvclJjUUVla2pPT0FSd0p3aks1
T2k3UWVkS21YS2RRV21PKytEdnJxQlNFVU1QcVZYNEFoZz09I
n0=

Note that this is just the same thing as above, with Base64 encoding. This is a bit big, and it gives me a chance to play with my Smaz library port. The nice thing about the refactoring I made there is that we can provide our own custom term table.

In this case, the term table looks like this:

[screenshot: the custom term table]

And with that, still using Base64, we get:

AXICAAeTBQoLKA0NDWgsJAgNDSwmDQgMLAkKKSgsKggJDSMMK
CklCyUJBgwfFxAZCBsQFhUFhyxnLHeVVCIiAyIiAyIilS4iIi
IiIiIiBpIFjwZEB0oFNwZLBSgGTIxNjE4FSQZajE+NUoxTjVS
NVY1WjVcFSQZZjVCNUYxYBYcsWyx3BpEFlUgQCQwZFQgbEBUO
liBSDxAVFhoGlVMQDhUIGxwZDAULFyYLlSsYJyUYC5VZJBUOl
U8hmFBRUkEXlVcrJxSVUSIQEZZDRRQplUEWECIhllBYEiEgIp
ZWTyUhlkpUGJZBRh6WPT0HAWzteBm/nZAfi4aI9mkbVowBmGV
PC0S0fXaHv1MUodMQdsS6TuuvmlU=

This is much smaller (about 30%). And it has the property that it won't be easily decoded by just throwing it into a Base64 decoder.

I decided to use plain hex encoding, and pretty format it, which gave me:

0172020007 93050A0B28 0D0D0D682C 24080D0D2C
260D080C2C 090A29282C 2A08090D23 0C2829250B
2509060C1F 171019081B 1016150587 2C672C7795
5422220322 2203222295 2E22222222 2222220692
058F064407 4A0537064B 0528064C8C 4D8C4E0549
065A8C4F8D 528C538D54 8D558D568D 5705490659
8D508D518C 5805872C5B 2C77069105 954810090C
1915081B10 150E962052 0F1015161A 069553100E
15081B1C19 0C05954511 740C1C984D 4B4641270B
95560F1608 209841492F 4108962B58 0B9556120C
95590C954C 2195441695 5623954C29 2695482009
1D1C97514D 2B1117974E 492B150E96 3D3D070157
A9E859DD06 1EE0EC6210 674ED4CA88 C78FC61D20
B1650BF992 978871264B 57994E0CF3 EA99BFE9G1

This makes it look much better, I think, even if it almost doubles the amount of space we take (we are still smaller than the original JSON, though). Mostly this is me playing around on a Friday evening, to be honest. I needed something small to play with that had immediate feedback, so I started playing with this.

What do you think?

time to read 2 min | 346 words

In one of my recent posts about performance, a suggestion was raised:

Just spotted a small thing, you could optimise the call to:

_buffer[pos++] = (byte)'\';

with a constant as it's always the same.

There are two problems with this suggestion. Let us start with the obvious one first. Here is the disassembly of the code:

            b[0] = (byte) '/';
00007FFC9DC84548  mov         rcx,qword ptr [rbp+8]
00007FFC9DC8454C  mov         byte ptr [rcx],2Fh

            b[0] = 47;
00007FFC9DC8454F  mov         rcx,qword ptr [rbp+8]
00007FFC9DC84553  mov         byte ptr [rcx],2Fh
As you can see, in both cases, the exact same instructions are carried out.

That is because we are no longer using compilers that had 4KB of memory to work with and required hand holding and intimate familiarity with how the specific compiler version we wrote the code for behaved.

The other problem is closely related. I've been working with code for the past 20 years. And while I remember the ASCII codes for some characters, when reading b[0] = 47, I would have to go and look it up. That puts a really high burden on the reader of a parser, where this is pretty much all that happens.

I saw this recently when I looked at the Smaz library. I ported it to C#, and along the way I made sure that it was much more understandable (at least in my opinion). This resulted in a totally unexpected pull request that ported my C# port to Java. Making the code more readable made it accessible and possible to work with, whereas before it was an impenetrable black box.

Consider what this means for larger projects, where there are large sections that are marked with "there be dragons and gnarly bugs"… This really kills the productivity of systems and teams.

In the case of the Smaz library port, because the code was easier to work with, Peter was able to not just port it to Java, but to repurpose it into a useful util for compressing mime types very efficiently.

time to read 3 min | 507 words

We get some… fascinating replies from candidates to our code tests. Some of them are so bad that I really wish that I could revoke some people’s keyboard access:

Case in point, we had a promising candidate from Israel’s equivalent of MIT (Technion, which is in the top 25 engineering schools in the world).

He submitted code that went something like this:

var nameFirstChar = char.Parse(name.Substring(0,1).ToLower());

switch (nameFirstChar)
{                                
    case 'a':
        using (StreamWriter indexFile = new StreamWriter(Path.Combine(basePath, "Name_IndeX_A.csv"), true))
        {
            indexFile.WriteLine(name);
        }
        break;
    case 'b':
        using (StreamWriter indexFile = new StreamWriter(Path.Combine(basePath, "Name_IndeX_B.csv"), true))
        {
            indexFile.WriteLine(name);
        }
        break;
    // ...
    // you can guess :-(
    // ...
    case 'y':
        using (StreamWriter indexFile = new StreamWriter(Path.Combine(basePath, "Name_IndeX_Y.csv"), true))
        {
            indexFile.WriteLine(name);
        }
        break;
    case 'z':
        using (StreamWriter indexFile = new StreamWriter(Path.Combine(basePath, "Name_IndeX_Z.csv"), true))
        {
            indexFile.WriteLine(name);
        }
        break;
}

And yes, he had the full A-Z there. This was part of a 1,475-line method. And no, he didn’t handle all the cases for Unicode.

Yes, he just graduated, but some things are expected. Like knowing about loops.
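For contrast, a quick sketch of what one would expect instead (keeping the same odd file naming, and still only handling ASCII letters):

var firstChar = char.ToLowerInvariant(name[0]);
if (firstChar >= 'a' && firstChar <= 'z')
{
    // One writer, one format string - no 26-case switch required.
    var path = Path.Combine(basePath, $"Name_IndeX_{char.ToUpperInvariant(firstChar)}.csv");
    using (var indexFile = new StreamWriter(path, true))
    {
        indexFile.WriteLine(name);
    }
}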

time to read 4 min | 666 words

Let us assume that we have the following simple task:

Given a Dictionary<string, string>, convert that dictionary into a type in as performant a manner as possible. The conversion will happen many times, and first-time costs are acceptable.

The answer we came up with is to dynamically generate the code based on the type. Basically, here is how it looks:

public static Func<Dictionary<string, string>, T> Generate<T>()
    where T : new()
{
    var dic = Expression.Parameter(typeof (Dictionary<string, string>), "dic");
    var tmpVal = Expression.Parameter(typeof (string), "tmp");
    var args = new List<MemberAssignment>();
    foreach (var propertyInfo in typeof(T).GetProperties())
    {
        var tryGet = Expression.Call(dic, "TryGetValue", new Type[0], 
            Expression.Constant(propertyInfo.Name),
            tmpVal);

        Expression value = tmpVal;
        if (propertyInfo.PropertyType != typeof (string))
        {
            var convertCall = Expression.Call(typeof(Convert).GetMethod("ChangeType", 
                new Type[] { typeof(object), typeof(Type) }), tmpVal,
                Expression.Constant(propertyInfo.PropertyType));
            value = Expression.Convert(convertCall, propertyInfo.PropertyType);
        }

        var conditional = Expression.Condition(tryGet, value, 
            Expression.Default(propertyInfo.PropertyType));

        args.Add(Expression.Bind(propertyInfo, conditional));
        
    }
    var newExpression = Expression.New(typeof(T).GetConstructor(new Type[0]));

    var expression = Expression.Lambda<Func<Dictionary<string, string>, T>>(
        Expression.Block(new[] { tmpVal },Expression.MemberInit(newExpression,args)),
        dic);

    return expression.Compile();
}

As an aside, this is an intimidating piece of code, but it is about a bazillion times better than having to do this manually using IL manipulation.

What this code does is dynamically generate the following method:

(Dictionary<string,string> dic) => {
    string tmp;
    return new User
    {
        Name = dic.TryGetValue("Name", out tmp) ? tmp : default(string),
        Age = dic.TryGetValue("Age", out tmp) ? (int)Convert.ChangeType(tmp, typeof(int)) : default(int)
    };
}

This is pretty much the fastest way to do this, because this is how you would write it manually. And compiling the expression means that we only pay the reflection and code generation cost once per type, not on every conversion.
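Usage is then a one-time Generate call per type. A quick sketch, assuming a User class with a string Name and an int Age:

var toUser = Generate<User>();

var user = toUser(new Dictionary<string, string>
{
    ["Name"] = "Hibernating Rhinos",
    ["Age"] = "10"
});
// user.Name == "Hibernating Rhinos", user.Age == 10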

time to read 4 min | 781 words

This little nugget has caused a database upgrade to fail. Consider the following code for a bit. We have CompoundKey, which has two versions, slow and fast. The idea is that we use this as keys into a cache, and there are two types because we can “query” the cache cheaply, but constructing the actual key for the cache is more expensive. Hence, the names.

public class CompoundKey
{
    public int HashKey;

}

public sealed class FastCompoundKey : CompoundKey
{
    public int Val;

}

public sealed class SlowCompoundKey : CompoundKey
{
    public int SlowVal;

}
public class CompoundKeyEqualityComparer : IEqualityComparer<CompoundKey>
{
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public bool Equals(CompoundKey x, CompoundKey y)
    {
        if (x == null || y == null)
            return false;

        SlowCompoundKey k;
        FastCompoundKey self;
        var key = x as FastCompoundKey;
        if (key != null)
        {
            self = key;
            k = y as SlowCompoundKey;
        }
        else
        {
            self = y as FastCompoundKey;
            k = x as SlowCompoundKey;
        }

        return self.Val == k.SlowVal;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public int GetHashCode(CompoundKey obj)
    {
        return obj.HashKey;
    }
}

The problem was a null reference exception in the Equals method. I believe you can imagine how I felt.

It is obvious why a null reference exception can be thrown: if the values passed to this method are both SlowCompoundKey or both FastCompoundKey, one of the casts produces null. But the reason the Equals method looks the way it does is that this is actually in a very performance sensitive portion of our code, and we have worked to make sure that it runs as speedily as possible. We considered the case where we would have two keys of the same type, but handling it had a measured performance impact, and after checking the source of the dictionary class, we were certain that such comparisons were not possible.

We were wrong.

Boiling it all down, here is how we can reproduce this issue:

 var dic = new Dictionary<CompoundKey, object>(new CompoundKeyEqualityComparer());
 dic[new SlowCompoundKey { HashKey = 1 }] = null;
 dic[new SlowCompoundKey { HashKey = 1 }] = null;

What is going on here?

Well, it is simple. We add another value with the same hash code. That means that the dictionary needs to check whether they are the same key or just a hash collision. It does so by passing both the first key (slow) and the second key (also slow) into the Equals method, and then hilarity ensued.

In retrospect, this is pretty much how it has to behave, and we should have considered that, but we were focused on looking at just the read side of the operation, and utterly forgot about how insert must work.

Luckily, this was an easy fix, although we still need to do regression perf testing.
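For reference, here is a sketch of the kind of fix this calls for (my reconstruction, not necessarily the exact code that shipped): handle same-type comparisons explicitly before falling back to the fast/slow pairing.

public bool Equals(CompoundKey x, CompoundKey y)
{
    if (x == null || y == null)
        return false;

    var fastX = x as FastCompoundKey;
    var fastY = y as FastCompoundKey;
    if (fastX != null && fastY != null)
        return fastX.Val == fastY.Val;

    var slowX = x as SlowCompoundKey;
    var slowY = y as SlowCompoundKey;
    if (slowX != null && slowY != null)
        return slowX.SlowVal == slowY.SlowVal;

    // Mixed fast/slow comparison, as before.
    var fast = fastX ?? fastY;
    var slow = slowX ?? slowY;
    return fast != null && slow != null && fast.Val == slow.SlowVal;
}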

time to read 3 min | 506 words

I have a piece of code that does something based on types. It is a whole complex thing that does a lot of stuff, and the code is really ugly. Here is a small part of a ~180-line method.

[screenshot: a portion of the ~180-line method of type checks]

The problem would have been much simpler if we could only switch on types, which is effectively what I want to do here. As it stands, however, the JIT is actually going to replace all those if statements with pointer comparisons to the method table, so this is pretty fast. Unpleasant to read, but really fast.

I decided to see what it would take to create a more readable version:

public class TypeCounter
{
    public int Long;

    public Dictionary<Type, Action<object>> Actions;

    public TypeCounter()
    {
        Actions = new Dictionary<Type, Action<object>>
        {
            [typeof(long)] = o => Long++,
            [typeof(int)] = o => Long++,
        };
    }
    public void Dic(object instance)
    {
        if (instance == null)
            return;
        Action<object> value;
        if (Actions.TryGetValue(instance.GetType(), out value))
            value(instance);

    }
    public void Explicit(object instance)
    {
        if (instance is int)
            Long++;
        else if (instance is long)
            Long++;
    }
}

This is just a simpler case of the code above, focused on just the two options. The dictionary version is much more readable, especially once you get to more than a couple of types. The problem is that I have tested this with both the two-type option shown here and with 10 types, and in all cases the explicit version is about twice as fast.
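The measurement is nothing fancy; a rough sketch of the kind of micro-benchmark that shows it (the iteration count and boxed value are my own choices, using System.Diagnostics.Stopwatch):

var counter = new TypeCounter();
object boxed = 42;

var sw = Stopwatch.StartNew();
for (var i = 0; i < 100000000; i++)
    counter.Explicit(boxed); // swap for counter.Dic(boxed) to compare
Console.WriteLine(sw.Elapsed);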

So this rules it out, and we are left with the sad-looking code that runs fast.

time to read 8 min | 1588 words

You might have noticed the low level work I have been posting about lately. This is part of a larger effort to gain better control over our environment, and hopefully gain more than merely incremental performance improvements.

As part of that, we decided to restructure a lot of our core dependencies. The data format change is one such example, but there are others. Of particular relevance to this post is the webserver and web stack that we use, as well as the actual programming model.

In order to reduce, as much as possible, dependencies on everything else, I decided that I wanted to benchmark a truly complex part of RavenDB, the “/build/version” endpoint. As you can imagine, this endpoint simply reports the RavenDB version.

Here is how it looks in RavenDB 3.0:

[screenshot: the RavenDB 3.0 /build/version WebAPI controller]

This is a WebAPI controller, running on OWIN using HttpListener. The method calls you see here are static (cached) properties, which generate an anonymous object that gets serialized to JSON.

In order to test this, I decided to use gobench to see what kind of performance I should expect. I ran the following command:

Here is what it looked like when I ran it:

[screenshot: the gobench command running against RavenDB 3.0]

The CPU usage was roughly 70% – 80% by RavenDB, and the rest was taken by gobench. Here are the results:

Requests:                          500,000 hits
Successful requests:               500,000 hits
Network failed:                          0 hits
Bad requests failed (!2xx):              0 hits
Successful requests rate:           15,151 hits/sec
Read throughput:                 4,030,304 bytes/sec
Write throughput:                1,500,000 bytes/sec
Test time:                              33 sec

Remember, we don’t do any real work here, no I/O, no computation, nothing much at all.

Nevertheless, this is quite a nice number, even if it is effectively a hello world. This is with 100 clients doing 5,000 requests each. Admittedly, this is on the local machine (we’ll do remote testing later), but quite nice.

How about our new code? How does it compare?

Requests:                          500,000 hits
Successful requests:               500,000 hits
Network failed:                          0 hits
Bad requests failed (!2xx):              0 hits
Successful requests rate:           12,500 hits/sec
Read throughput:                 2,312,500 bytes/sec
Write throughput:                1,237,500 bytes/sec
Test time:                              40 sec

So we are a bit slower, although not by much. That is nice, to start with. And here are the process stats while this is running.

[screenshot: process stats while the benchmark runs]

You might think that I’m disappointed, but quite the reverse is true. You see, there is a catch to the benchmark results that you are seeing.

They were taken while running under a profiler.

In other words, we saddled ourselves with a huge performance hit, and we are still within touching distance.

You might have noticed that the process shown here is dnx.exe. This is us running on CoreCLR with the Kestrel web server, and without any web framework (such as WebAPI). The code we are running here is here:

[screenshot: the new /build/version handler code]

As you can see, there is quite a bit more code here than in the previous code snippet. This is mostly because we are still playing with the API. We’ll probably have something like the GetObject method once we decide exactly how to do it. For now, we want to have everything in our face, because it simplifies how we are doing things.
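The actual handler was in the screenshot above; purely as an illustration, a stripped-down handler of this sort, with no web framework and the JSON written by hand, looks roughly like this (the version numbers and exact API shape here are placeholders, not the RavenDB code):

// Raw Kestrel-style handling of /build/version on IApplicationBuilder.
app.Run(async context =>
{
    if (context.Request.Path == "/build/version")
    {
        context.Response.ContentType = "application/json";
        await context.Response.WriteAsync("{\"BuildVersion\":40,\"ProductVersion\":\"4.0\"}");
        return;
    }
    context.Response.StatusCode = 404;
});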

Anyway, I mentioned that I was running this under a profiler, right? Actually, I ran this under a profiler while running the previous command 3 times. Here are the relevant results:

[profiler screenshot: the new code under load]

I highlighted a couple of interesting results. The call to BuildVersion.Get took 0.2ms (while under the profiler) and most of the time was spent in serializing the result into JSON (in the profiler output, ReadObjectInternal and Write/WriteTo methods).

Another interesting thing to note is the TryMatch call. This is how we handle routing. I’ll have a separate post on that, but for now, those are pretty encouraging results.

One thing that I do have to say is that the new code is not doing a lot of things that the existing codebase is doing. For example, we keep track of a lot of stuff (requests, durations, etc.) that can affect request timing. So it isn’t fair to just compare the numbers. Let us inspect how our existing codebase handles the load under the same profiling scenario.

This time I run it only once, so we have only 500,000 requests to look at:

[profiler screenshot: the existing codebase under the same load]

The most obvious thing to see here is the request duration. A request takes roughly 6ms under the profiler, while it takes 0.2ms in the new code (still under the profiler). The routing code is expensive, but I’ll talk about that in another post. And you can see some of the other stuff that we do that the new code doesn’t (authorization, etc.).

Oh, and here are the actual results:

Requests:                          500,000 hits
Successful requests:               500,000 hits
Network failed:                          0 hits
Bad requests failed (!2xx):              0 hits
Successful requests rate:            1,577 hits/sec
Read throughput:                   419,563 bytes/sec
Write throughput:                  156,151 bytes/sec
Test time:                             317 sec

That is quite a change.

Oh, and what about running the new code without a profiler? I have done so, and here is what we see:

[screenshot: the new code running without a profiler]

And the actual numbers are:

Requests:                        1,500,000 hits
Successful requests:             1,500,000 hits
Network failed:                          0 hits
Bad requests failed (!2xx):              0 hits
Successful requests rate:           75,000 hits/sec
Read throughput:                13,875,000 bytes/sec
Write throughput:                7,425,000 bytes/sec
Test time:                              20 sec

So we actually have capacity to spare here. I’m pretty happy with the results for now.

time to read 4 min | 660 words

As you read this post, you might want to also consider letting this play in the background. We had a UDP port leak in RavenDB. We squashed it like a bug, but somehow it kept repeating.

 

We found one cause of it (and fixed it), finally. That was after several rounds of looking at the code and fixing a few “this error condition can lead to the socket not being properly disposed”.

Finally, we pushed to our own internal systems, monitored things, and saw that it was good. But the bloody bug kept repeating. Now, instead of manifesting as thousands of UDP ports, we had just a dozen or so, but they were (very) slowly increasing. And it drove us nuts. We had logging there, and we could see that we didn’t have the kind of problems that we had before. Everything looked good.

A full reproduction of the issue can be found here, but the relevant piece of code is this:

Timer timer = new Timer(async state =>
{
    try
    {
        var addresses = await Dns.GetHostAddressesAsync("time.nasa.gov");
        var endPoint = new IPEndPoint(addresses[0], 123);

        using (var udpClient = new UdpClient())
        {
            udpClient.Connect(endPoint);
            udpClient.Client.ReceiveTimeout = 100;
            udpClient.Client.SendTimeout = 100;
            await udpClient.SendAsync(new byte[] {12, 32, 43}, 3);
            await udpClient.ReceiveAsync();
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        Environment.Exit(-1);
    }
});
timer.Change(500, 500);

As you can see, we are issuing a request to a time server, wrapping the usage of the UDP socket in a using statement, making sure to have proper error handling, setting up the proper timeouts, the works.

Our real code is actually awash with logging and detailed error handling, and we pored over it for a crazy amount of time to figure out what was going on.

If you run this code and watch the number of used UDP ports, you’ll see a very curious issue. It is always increasing. What is worse, there are no errors, nothing. It just goes into a black hole in the sky and doesn’t work.

In this case, I’ve explicitly created a malformed request, so it is expected that the remote server will not reply to me. That allows us to generate the proper scenario. In production, of course, we send the right value, and we typically get the right result, so we didn’t see this.

The error we had was with the timeout values. The documentation quite clearly states that they apply to the synchronous methods only; they do not apply to the async methods at all. Given how UDP works, that makes perfect sense: to support a timeout on the async methods, the UdpClient would need to start a timer itself. However, given the API, it is very easy to see how we kept missing this.

The real issue is that when we make a request to a server, and for whatever reason, the UDP reply packet is dropped, we just hang in an async manner. That is, we have an async call that will never return. That call holds the UDP port open, and over time, that shows up as a leak. That is pretty horrible, but the good thing is that once we knew what the problem was, fixing it was trivial.
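A sketch of the kind of fix this calls for (my reconstruction, not necessarily the exact code we shipped) is to impose our own timeout on the async receive, so the using block gets a chance to dispose the socket:

var receiveTask = udpClient.ReceiveAsync();
var completed = await Task.WhenAny(receiveTask, Task.Delay(100));
if (completed != receiveTask)
{
    // No reply in time; throwing here lets the using block dispose the
    // UdpClient, which releases the port instead of leaking it.
    throw new TimeoutException("No UDP reply within 100ms");
}
var result = await receiveTask;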
