At my QConSF talk, I got a really great question: “So when you were talking about the Socratic method, how do you engage in the Socratic method without sliding into a patronizing tone, if you know what I mean?” Off the top of my head, the answer I gave focused on fostering trust. But I think this is a really important question, and I ended up ruminating on it for some time afterwards.
I spoke at QConSF last week about debugging. The content is largely similar to my previous blog posts on the topic, but I think it’s more refined. I’m not sure when the video of the talk will be made available on InfoQ’s website; if you attended QConSF, you have early access to watch it. Since that may be some time yet, I figured I’d put the slides here with a rough transcript.
I wrote earlier about my talk at VarnishCon 2016. A video of the talk is now available. Unfortunately, I think I did a particularly poor job of conveying irony (especially at the end: “I did you all a huge solid”), which I regret. Also, time constraints meant that I had to skip some of the points on applying the information, which I think made parts of the talk less useful for folks who didn’t chat with me afterwards.
I spoke at VarnishCon 2016 on debugging. A small portion of the content was related to Varnish, but about half of the talk focused on research into debugging. Though I have made my slides available, they’re not very useful without speaker notes (and I ended up ad-libbing a fair bit more than intended). This post is an attempt to provide reference material for the ideas in the talk, as well as to expand on some points I didn’t have time to cover.
TL;DR, I’ve released a “library” called librip: a “minimal-overhead API for instruction-level tracing in highly concurrent software”. The rest of this post is about the “why” of the software, followed by a little bit of the “what” and only a smidge of the “how”. As Kernighan put it: “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?”
I was first introduced to the Window TinyLFU algorithm through a wiki entry for the caffeine project. The graphs looked neat, but it took actually reading the Einziger and Friedman paper, TinyLFU: A Highly Efficient Cache Admission Policy, to understand just how interesting and simple the algorithm actually is. Unlike many papers in computer science, this one is extremely approachable for those (like myself) lacking a strong math background. I’ve been studying this algorithm with interest, since it is quite relevant to my work at Fastly.
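To give a flavor of why the paper is so approachable, here is a minimal sketch of the core TinyLFU idea: keep an approximate frequency sketch of recent accesses, and on a cache miss admit the new key only if it has been seen more often than the entry it would evict. This is my own toy rendition, not code from the paper or from caffeine; the class and function names, and the choice of a count-min sketch with blake2b hashing, are all my assumptions for illustration.

```python
import hashlib

class CountMinSketch:
    """Toy approximate frequency counter standing in for TinyLFU's sketch.
    Width/depth are illustrative defaults, not values from the paper."""
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _indexes(self, key):
        # One independent hash per row; blake2b used purely for convenience.
        for i in range(self.depth):
            h = hashlib.blake2b(f"{i}:{key}".encode()).digest()
            yield i, int.from_bytes(h[:8], "little") % self.width

    def increment(self, key):
        for i, j in self._indexes(key):
            self.rows[i][j] += 1

    def estimate(self, key):
        # Minimum across rows bounds the over-count from hash collisions.
        return min(self.rows[i][j] for i, j in self._indexes(key))

def admit(sketch, candidate, victim):
    """TinyLFU admission test: only let the candidate into the cache if
    it is estimated to be more frequent than the would-be victim."""
    return sketch.estimate(candidate) > sketch.estimate(victim)
```

The elegant part is that the sketch never needs exact counts: an over-estimate only occurs on collisions, and the admission decision degrades gracefully.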
I was recently pointed to an article about a new kind of trie, the QP trie. “QP” stands for “quadbit-popcount patricia”, and the QP trie combines techniques from DJB’s critbit trees and Bagwell’s hash array mapped tries. In previous posts, I explored shortcomings of a particular implementation of a critbit tree versus a red-black tree for construction time and lookup. To summarize, it turns out that red-black trees can be significantly more efficient than critbit trees for string lookups if you sort your tree by length and reduce the number of comparisons done at each level.
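One way to picture the length-first trick from those earlier posts: if tree order considers key length before key content, then any pair of unequal-length keys is decided by a single cheap integer comparison, and full character-by-character comparisons only happen between keys of the same length. This is a hypothetical sketch of that ordering, not the implementation benchmarked in the posts.

```python
def length_first_cmp(a: str, b: str) -> int:
    """Order strings by length first, then lexicographically.
    Returns -1, 0, or 1. Unequal-length keys never touch the
    string contents, which is the point of the optimization."""
    if len(a) != len(b):
        return -1 if len(a) < len(b) else 1
    if a == b:
        return 0
    return -1 if a < b else 1

# The same ordering as a sort key, for a quick sanity check:
def length_first_key(s: str):
    return (len(s), s)
```

Under this ordering, a red-black tree of strings spends most of its traversal on integer length comparisons, which is where the efficiency win over naive lexicographic ordering comes from.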
I’m speaking today at Strangeloop 2015 on a subject I hold dear: concurrent systems programming. The title of the talk is “Racing to Win: Using Race Conditions in Correct Concurrent Software” and the subject matter is effectively how we can use the property of linearization in concurrent software in conjunction with racy state snapshotting to make our software perform predictably under concurrent load. You can find my slides on speakerdeck. Check out uslab and Concurrency Kit.
Yesterday, we were running tests to verify that some recent changes fixed problems we were seeing with particular request types. What we noticed was that occasionally, a request would sit around forever and could not be destroyed, despite the associated connection for that session being in CLOSE-WAIT. We had noticed this behavior before, but were entirely unclear what to chalk it up to, and it didn’t seem like that big of a problem.
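For readers who want to see CLOSE-WAIT in miniature: it’s the state your end of a TCP connection sits in after the peer has closed but before you have. This toy loopback example (my own illustration, unrelated to the systems in the post) shows the half-closed situation directly.

```python
import socket

# Loopback pair: "cli" plays the remote peer, "conn" is our side.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.close()            # peer sends FIN and goes away
data = conn.recv(16)   # we observe EOF as an empty read...
assert data == b""
# ...but until we call conn.close(), the kernel keeps our end of the
# connection parked in CLOSE-WAIT: the peer has closed, we haven't.
conn.close()
srv.close()
```

A pile-up of CLOSE-WAIT sockets is therefore usually a symptom of the application failing to close its side, which is exactly why a request that “can’t be destroyed” while its connection shows CLOSE-WAIT is worth chasing.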
In my previous post, I discussed tries and what seemed to me like a novel way to efficiently search for strings in a balanced binary search tree. A binary search tree is usually considered a poor fit for string search; other data structures are preferred. This is largely because of the key comparison overhead. First, nodes must store their keys (this isn’t always necessary for e.g. hash tables with known inputs). Second, to traverse the tree, keys need to be compared at each level.
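To make that per-level overhead concrete, here is a toy BST lookup (not the structure from the post) that counts how many full-key comparisons a single probe performs. Every comparison on the path down is a string comparison, whose cost grows with the length of any shared prefix between the probe key and the node’s key.

```python
class Node:
    """Plain BST node storing its full string key, as the post describes."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def lookup(root, key):
    """Return (found, number_of_key_comparisons). Each level costs a
    full string comparison — the overhead the post is talking about."""
    comparisons = 0
    while root is not None:
        comparisons += 1
        if key == root.key:
            return True, comparisons
        root = root.left if key < root.key else root.right
    return False, comparisons
```

With long keys sharing long common prefixes, each of those comparisons does real character-scanning work, which is exactly what tries (and length-first ordering) try to avoid.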