Devon H. O'Dell

Software engineer, debugger, and concurrency nut. Usually confused about something.

Latest Entries

VarnishCon Video and QConSF

I wrote earlier about my talk at VarnishCon 2016. A video of the talk is now available. Unfortunately, I think I did a particularly poor job of conveying irony (especially at the end: “I did you all a huge solid”), which I regret. Time constraints also meant that I had to skip some of the points about applying the information, which I think made parts of the talk less useful for folks who didn’t chat with me afterwards.

Debugging: Psychology, Theory, and Application

I spoke at VarnishCon 2016 on debugging. A small portion of the content was related to Varnish, but about half of the talk focused on research into debugging. Though I have made my slides available, they’re not very useful without speaker notes (and I ended up ad-libbing a fair bit more than intended). This post is an attempt to provide reference material for the ideas in the talk and to expand on some points I didn’t have time to cover.

Ripping Rings

TL;DR, I’ve released a “library” called librip: a “minimal-overhead API for instruction-level tracing in highly concurrent software”. The rest of this post is about the “why” of the software, followed by a little bit of the “what” and only a smidge of the “how”. Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?
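
To give a flavor of the general idea (this is an illustrative sketch, not the librip API; the `rip_ring` and `RIP_SLOTS` names are made up here), instruction-level tracing of this sort boils down to each thread cheaply stashing recent instruction pointers into its own small ring, which a debugger or crash handler can later inspect:

```c
/*
 * Sketch of per-thread instruction-pointer ring tracing.
 * Not the librip API; names and sizes are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

#define RIP_SLOTS 64 /* must be a power of two */

struct rip_ring {
	uint64_t head;           /* monotonically increasing write index */
	void *slot[RIP_SLOTS];   /* recent instruction pointers */
};

static __thread struct rip_ring rip_ring;

/* Record a nearby instruction pointer into this thread's ring. */
static inline void
rip_trace(void)
{
	void *ip = __builtin_return_address(0);

	rip_ring.slot[rip_ring.head & (RIP_SLOTS - 1)] = ip;
	rip_ring.head++;
}

static void
do_work(void)
{
	rip_trace(); /* leaves a breadcrumb we can inspect later */
}

int
main(void)
{
	do_work();
	printf("last ip: %p\n",
	    rip_ring.slot[(rip_ring.head - 1) & (RIP_SLOTS - 1)]);
	return 0;
}
```

Because each ring is thread-local and the write is a single store plus an index bump, the hot path stays cheap even under heavy concurrency.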

On Window TinyLFU

I was first introduced to the Window TinyLFU algorithm through a wiki entry for the caffeine project. The graphs looked neat, but it took actually reading the Einziger and Friedman paper, TinyLFU: A Highly Efficient Cache Admission Policy, to understand just how interesting and simple the algorithm actually is. Unlike many papers in computer science, this one is extremely consumable for those (like myself) lacking a strong math background. I’ve been studying this algorithm with interest, since it is quite relevant to my work at Fastly.
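
The core admission idea is small enough to sketch. In the toy version below, an approximate frequency counter (a tiny count-min-style sketch) decides whether a new key should displace a cache victim; the hash functions, 4-bit counters, periodic aging, and the window that gives W-TinyLFU its name are all simplified away, so this is neither the paper’s nor caffeine’s implementation:

```c
/*
 * Toy TinyLFU-style admission: admit a candidate only if it is
 * estimated to be more popular than the entry it would evict.
 * Simplified illustration; not the paper's or caffeine's code.
 */
#include <stdint.h>

#define CM_ROWS  4
#define CM_WIDTH 1024 /* power of two */

static uint8_t cm[CM_ROWS][CM_WIDTH];

static uint32_t
cm_hash(const char *key, uint32_t seed)
{
	uint32_t h = 2166136261u ^ seed; /* FNV-1a style, illustrative */

	for (; *key != '\0'; key++)
		h = (h ^ (uint8_t)*key) * 16777619u;
	return h & (CM_WIDTH - 1);
}

/* Bump the saturating counters for a key on every access. */
void
cm_increment(const char *key)
{
	for (uint32_t r = 0; r < CM_ROWS; r++) {
		uint8_t *c = &cm[r][cm_hash(key, r)];

		if (*c < UINT8_MAX)
			(*c)++;
	}
}

/* Estimate a key's frequency as the minimum over all rows. */
uint8_t
cm_estimate(const char *key)
{
	uint8_t min = UINT8_MAX;

	for (uint32_t r = 0; r < CM_ROWS; r++) {
		uint8_t c = cm[r][cm_hash(key, r)];

		if (c < min)
			min = c;
	}
	return min;
}

/* The admission decision: does the candidate beat the victim? */
int
tinylfu_admit(const char *candidate, const char *victim)
{
	return cm_estimate(candidate) > cm_estimate(victim);
}
```

The appeal is that the sketch costs a few bytes per counter rather than tracking exact frequencies, yet still filters out one-hit wonders before they pollute the cache.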

The QP-Trie

I was recently pointed to an article about a new kind of trie, the QP trie. “QP” stands for “quadbit-popcount patricia”, and the QP trie combines techniques from DJB’s critbit trees and Bagwell’s hash array mapped tries. In previous posts, I explored shortcomings of a particular implementation of a critbit tree versus a red-black tree for construction time and lookup. To summarize, it turns out that red-black trees can be significantly more efficient than critbit trees for string lookups if you sort your tree by length and reduce the number of comparisons done at each level.
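
The trick that gives the QP trie its name is easy to show in isolation. Each branch node tests one quadbit (nibble) of the key and keeps a 16-bit bitmap of which nibble values have children; a popcount over the lower bits of the bitmap converts a nibble into an index into a densely packed child array. The sketch below is a simplified illustration of that indexing step, not the reference implementation:

```c
/*
 * QP-trie child indexing, sketched: bitmap + popcount turn a sparse
 * 16-way branch into a dense array with no wasted slots.
 */
#include <stdint.h>

struct qp_branch {
	uint16_t bitmap;          /* which nibble values have children */
	uint32_t nibble_index;    /* which nibble of the key this node tests */
	struct qp_branch **child; /* dense array of popcount(bitmap) entries */
};

/* Extract the nibble_index'th quadbit of the key. */
unsigned
qp_nibble(const unsigned char *key, uint32_t nibble_index)
{
	unsigned char byte = key[nibble_index / 2];

	return (nibble_index & 1) ? (byte & 0xf) : (byte >> 4);
}

/* Map a nibble to a slot in the dense child array, or -1 if absent. */
int
qp_child_slot(const struct qp_branch *b, unsigned nibble)
{
	uint16_t bit = 1u << nibble;

	if ((b->bitmap & bit) == 0)
		return -1;
	return __builtin_popcount(b->bitmap & (bit - 1));
}
```

Branching on four bits at a time halves the tree depth relative to a critbit tree, while the popcount keeps memory usage close to what is actually occupied, much like a hash array mapped trie does.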

Strangeloop 2015

I’m speaking today at Strangeloop 2015 on a subject I hold dear: concurrent systems programming. The title of the talk is “Racing to Win: Using Race Conditions in Correct Concurrent Software” and the subject matter is effectively how we can use the property of linearizability in concurrent software in conjunction with racy state snapshotting to make our software perform predictably under concurrent load. You can find my slides on speakerdeck. Check out uslab and Concurrency Kit.
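
As one small example in this spirit (my own illustration, not taken from the talk or from uslab), consider lock-free statistics counters: each thread bumps its own counter with no synchronization, and a reader racily sums them for an approximate snapshot. The snapshot may miss in-flight increments, but each load is a single aligned word read, so the result is always a plausible recent value and the hot path never blocks:

```c
/*
 * Racy statistics snapshotting, sketched. Assumes aligned 64-bit
 * loads and stores are not torn (true on common 64-bit targets).
 */
#include <stdint.h>

#define NTHREADS 8

struct counter {
	volatile uint64_t value;
} __attribute__((aligned(64))); /* pad to avoid false sharing */

static struct counter counters[NTHREADS];

/* Hot path: called only by thread `tid`; no locking needed. */
static inline void
counter_bump(unsigned tid)
{
	counters[tid].value++;
}

/* Cold path: racy but non-blocking aggregate of all counters. */
uint64_t
counter_snapshot(void)
{
	uint64_t sum = 0;

	for (unsigned i = 0; i < NTHREADS; i++)
		sum += counters[i].value;
	return sum;
}
```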

Debugging the Internet

Yesterday, we were doing some tests to try to ensure that some changes we had made fixed some problems we were seeing with particular request types. What we noticed was that occasionally, a request would sit around forever and could not be destroyed, despite the associated connection for that session being in CLOSE-WAIT. We had noticed this behavior before, but were entirely unsure what to chalk it up to, and it didn’t seem like that big of a problem.

Efficient String Searching with Binary Search Trees: Part 2

In my previous post, I discussed tries and what seemed to me like a novel way to efficiently search for strings in a balanced binary search tree. Using a binary search tree to search for strings is usually inefficient, so other data structures are preferred. This is largely because of the key comparison overhead. First, nodes must store their keys (this isn’t always necessary for e.g. hash tables with known inputs). Second, to traverse the tree, keys need to be compared at each level.
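
The length-first ordering mentioned above is the main lever for cutting that per-level cost. A comparator along the following lines (names here are illustrative) resolves most comparisons with a single integer compare and only falls back to memcmp for keys of equal length:

```c
/*
 * Length-first string comparator, sketched: order by length, then by
 * content, so memcmp only runs when lengths match.
 */
#include <stddef.h>
#include <string.h>

struct skey {
	size_t len;
	const char *str;
};

int
skey_cmp(const struct skey *a, const struct skey *b)
{
	if (a->len != b->len)
		return (a->len < b->len) ? -1 : 1;
	return memcmp(a->str, b->str, a->len);
}
```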

Efficient String Searching with Binary Search Trees

For some time, I’ve been a fan of the trie as an efficient string searching data structure. A trie is a tree-like data structure in which every node represents some piece of a string. Many variants exist; perhaps the most popular is the radix tree. Tries have a couple of neat properties: They have a low RAM impact compared to some other tree structures. They require only O(k) operations (where k is the key length) to search for, insert, or remove a particular key.
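
The O(k) property falls directly out of how lookups walk the structure: one node per unit of the key, independent of how many keys are stored. A minimal, uncompressed 256-way sketch (a real radix tree would collapse chains of single-child nodes) looks like this:

```c
/*
 * Minimal 256-way trie lookup: one step per key byte, i.e. O(k).
 * Uncompressed sketch for illustration; root must be non-NULL.
 */
#include <stdbool.h>
#include <stddef.h>

struct trie_node {
	struct trie_node *child[256];
	bool terminal; /* a key ends at this node */
};

bool
trie_lookup(const struct trie_node *root, const unsigned char *key, size_t len)
{
	const struct trie_node *n = root;

	for (size_t i = 0; i < len; i++) {   /* one step per key byte */
		n = n->child[key[i]];
		if (n == NULL)
			return false;
	}
	return n->terminal;
}
```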


I’ve tried numerous blog platforms and been unsatisfied with all of them. I’m testing out Hugo along with the Hugo-Uno theme. So far, it seems to be working really well, and I do like the idea of a static site generator. Maybe one of these days I’ll get around to finishing up the ASCIIToSVG work and incorporate it into the markdown parser. In the near future, I’m hoping to blog about a few more things related to work.