the jsomers.net blog.

Introducing Five’Em, a Texas Hold’Em variant

The game of Five’Em was invented by two friends of mine, Ben Gross and Rich Berger, to combat Hold’Em fatigue.

The rules are simple: You’re dealt five hole cards instead of two, and after each round of community cards comes out (starting with the flop), you discard one of these extras. After the river is dealt, and you’ve discarded your third extra card, you end up with a classic Hold’Em hand.
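To make the structure concrete, here’s a minimal sketch in Ruby of how a Five’Em hand plays out. The card encoding and the random-discard strategy are purely illustrative (a real player chooses which extra to muck):

```ruby
# Sketch of a Five'Em hand's structure. Card names are our own shorthand;
# the random discard stands in for the player's decision.
RANKS = %w[2 3 4 5 6 7 8 9 T J Q K A]
SUITS = %w[c d h s]
deck = RANKS.product(SUITS).map(&:join).shuffle

hole = deck.shift(5)   # five hole cards instead of two
community = []

[3, 1, 1].each do |n|  # flop, then turn, then river
  community.concat(deck.shift(n))
  hole.delete_at(rand(hole.size))  # discard one extra after each street
end

puts "hole: #{hole.join(' ')}"           # two cards left: a classic Hold'Em hand
puts "community: #{community.join(' ')}" # the usual five community cards
```

After the three discards you’re left with exactly two hole cards and the usual five-card board, which is why any standard Hold’Em hand-ranking applies at showdown.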

Five’Em has some of the pre-flop dynamics of Omaha, in that a seemingly excellent hand — say, a pair of kings and a pair of tens — might actually lead to some hard decisions, because you’ll only be able to hold on to one of those pairs. But since you always seem to have a decent shot at a good hand, it’s hard to imagine folding early.

The extra decision on each “street” forces you to think more explicitly about odds and outs. It’s one thing to be on a straight draw, and another to weigh playing for that draw against, say, holding on to the top two pair.

It’s as if you’re playing multiple people’s Hold’Em hands simultaneously, with the twist that you’re forced to fold one at each turn. It’s more fun than the classic game because you’ve always got more chances — but of course your opponents do too, which means you’ve got to adjust your sense of a winning hand.

As a one-time offer, we’re waiving the $15 licensing fee — if you’ve got a standard deck of cards, feel free to start playing!

The three-page paper that shook philosophy: Gettiers in software engineering

In 1963, the philosopher Edmund Gettier published a three-page paper in the journal Analysis that quickly became a classic in the field. Epistemologists going back to the Greeks had debated what it meant to know something, and in the Enlightenment, a definition was settled upon: to know something is to have a justified true belief about it:

  • justified in the sense of deriving from evidence
  • true, because it doesn’t make sense to “know” a falsehood
  • belief, i.e., a proposition in your head

Gettier, in his tiny paper, upended the consensus. He asked “Is Justified True Belief Knowledge?” and offered three cases—soon to be known as “the Gettier cases”—that suggested you could have a JTB about something and yet still we would want to say you didn’t know it. For that, he earned lasting fame, and his paper generated a literature all its own.

A Gettier case

Suppose you’re standing in a field and off in the distance you see a cow. But suppose that what you’re actually looking at isn’t a cow, it’s just a convincingly lifelike model of a cow made out of papier-mâché. You’re not seeing a cow, you’re seeing the model. But then finally suppose that right behind the papier-mâché cow is a real cow!

On the one hand, you have a justified true belief that “there is a cow in the field”: (1) you believe there’s a cow in the field; (2) that belief didn’t come from nowhere, but is justified by your seeing something that looks exactly like a cow; (3) and there is, in fact, a cow in the field. Still, we wouldn’t want to say that you know there’s a cow in the field, because in a sense you got lucky: by a strange coincidence, there happened to be a real cow there—a cow you knew nothing about.

In software engineering

At my old company, Genius, the CTO—who’d studied philosophy as an undergrad—was obsessed with these Gettier cases. He called them “gettiers” for short. So we used to talk about gettiers all the time, no doubt in part just because it felt clever to talk about them, but also because when you’re a programmer, you run into things that feel like Gettier cases with unusual frequency. And once you have a name for them, you start seeing them everywhere.

Here’s a recent example. I was working on a web application that used a client-side framework that had been developed in-house. My app was a little search engine, and in my latest pull request, I’d made it so that when you hit Enter in the search field, the field lost focus, so that folks who like to browse the web via their keyboard wouldn’t have to manually escape from the input box.

When I released the new version, I noticed that I’d broken the autofocusing of the search field that was supposed to happen on pageload. I started poking around, only to discover that I couldn’t seem to get the correct behavior back. No matter what code I changed, which lines I commented out, how many times I hard-refreshed the browser, etc., I couldn’t get the autofocus to work.

What had actually happened was that a coworker of mine had made a change to the framework itself, which changed how certain events were bound to the root DOM element, and as a result broke the “autofocus” attribute. At some point, I did a routine rebase on top of this change (and many other unrelated changes). Which meant that when I deployed my little pull request, I was also deploying a bug I had nothing to do with—one that ended up breaking autofocus. It only appeared as though my changes caused the problem, because I’d edited some code having to do with focus in the search field.

Note that I had a justified belief that “the pull request I just deployed broke autofocus on the production site,” and the belief was true: deploying my pull request really did break autofocus. But it broke for a completely different reason than the one that justified my belief!

(Yes, I should have caught the bug in testing, and in fact I did notice some odd behavior. But making software is hard!)

Here’s another example. (This one’s from a long time ago, so the details might be a bit off.) A user once reported that on-site messages were no longer generating email notifications, and I was asked to investigate. Soon, I discovered that someone had recently pushed a change to the code that handled emails in our web app; the change seemed to introduce a bug that was responsible for the broken behavior. But—gettier!—the email service that the code relied on had itself gone down, at almost the exact same time that the change was released. I could have had a JTB that the code change had caused the emails to stop delivering, but still we wouldn’t want to say I “knew” this was the cause, because it was actually the service outage that was directly responsible.

A new term of art

A philosopher might say that these aren’t bona fide Gettier cases. True gettiers are rare. But it’s still a useful idea, and it became something of a term of art at Genius—and has stuck with me since—because it’s a good name for one of the trickiest situations you can get into as a programmer: a problem has multiple potential causes, and you have every reason to believe in one of them, even though another is secretly responsible.

Having a term for these tricky cases allows you, I think, to be ever-so-slightly more alert to them. You can be a better developer this way. As I’ve spent more time writing software, I’ve gotten better at sensing when my assumptions are probably wrong—when something gettieresque might be going on: have I forgotten to clear the cache? Am I working off the wrong branch? Am I even hitting this code path?

Software is a complex and ephemeral business. More than most people, developers are daily faced with bizarre epistemological problems. It helps to be able to distinguish a cow in the field from, well, a gettier.

DocWriter: the typewriter that sends its keystrokes in real time to a Google Doc

For years I’ve wanted a writing machine that would combine the best parts of a typewriter and a word processor. After months of tinkering, my friend Ben Gross and I just finished building one. We call it the DocWriter. It’s a typewriter that sends its keystrokes in real time to a Google Doc.

The beauty of a typewriter is that it propels you through a piece of writing. You can’t tinker with phrases, so you get used to laying down paragraphs. Your mind, relieved from the micromechanics of language, applies itself to structure, to the building of sections and scenes and arguments. When you’re done you end up with something whole, even if it’s imperfect: a draft that reads from start to finish and that you can hold in your hands.

A word processor, by contrast, turns revision into a kind of play. This is true not just for the fine wordwork that comes right before publication, but for the big stuff, too, like when you want to move sections around, or see what a story looks like without a side character. Doing this kind of thing on a typewriter would be a nightmare — to say nothing of the simple fact that your words will have to be digitized at some point and it’s just not practical to scan them or type them up off a sheet of paper.

The idea behind the DocWriter is to be a bridge between these tools so that each serves its purpose: the typewriter, to create the building blocks of a piece of writing, and the word processor, to make the most of them.

How we built it

The DocWriter is actually pretty simple: we took a Brother SX-4000 electronic typewriter and spied on its keyboard switch matrix by soldering a few wires onto the main circuit board; we ran those to a Raspberry Pi 3, which runs a C program that reverse engineers the signals; we pipe this data over ssh to a computer program running in the cloud; that program maps the signals to keystrokes and runs a headless web browser that types the keys into a new Google Doc.

From the user’s perspective, you’re just using a typewriter. The Raspberry Pi is hidden inside it (the blue box in the image above), and it draws power from the typewriter itself, so there’s no extra cord. When you turn on the typewriter, it boots the Pi, which connects itself to your WiFi network, and runs the program that listens for keystrokes and pipes them to the cloud. You know that the DocWriter is ready once you get an email from Google Docs saying that the machine has shared a new document with you.

We’re indebted to numist, who turned the same model typewriter into a teletype using software much more sophisticated than ours. That work made our project seem doable, and gave us many clues about the kinds of problems we’d encounter along the way.

There were, indeed, many problems: we had a surprisingly hard time getting the case off the typewriter when we first bought it; by unlatching the keyboard, we inadvertently triggered a condition where the motor would endlessly grind up against the case, and nearly convinced ourselves we’d broken the machine; the early versions of our controller code erroneously piped data to the typewriter, causing all kinds of weird behavior; we built the whole setup three times, first on an Arduino and then on a Raspberry Pi Zero, before settling on the Pi 3; we had a bad connection on a wire that caused some keys to fail; we wrote elaborate code to compensate for noise on the lines, before realizing that we could use pull-up resistors to more or less eliminate it entirely; we spent nearly a full day just installing a headless web browser that worked; and we had to rewrite our main control code about a dozen times.

In the end, though, the setup is elegant: along with wires for power and ground, we had to solder just 16 connections onto the pins controlling the keyboard switch matrix on the typewriter’s circuit board. The rest is software, most of which does exactly what you’d expect. The hardest code to write was the controller to read the raw signals from the typewriter. But we got it down to something with nearly perfect behavior that’s also pretty minimal (especially when you ignore special cases for the Shift key):

#include <wiringPi.h>
#include <stdio.h>
 
int main(void) {
  wiringPiSetup();
  setbuf(stdout, NULL);

  int scanPins[] = {5, 22, 10, 11, 26, 27, 28, 29};
  int signalPins[] = {13, 12, 3, 2, 0, 7, 24, 23};
  
  int i = 0;
  int j = 0;
  for (i=0; i<8; i++) {
    pinMode(scanPins[i], INPUT);
    pinMode(signalPins[i], INPUT);
    pullUpDnControl(scanPins[i], PUD_UP);
    pullUpDnControl(signalPins[i], PUD_UP);
  }
  
  int keyDown[8][8] = {
    {0, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0, 0, 0, 0}
  };

  int lastI = -1;
  int lastJ = -1;
  int sameKeyCount = 0;
  for (;;) {
    for (i=0; i<8; i++) {
      for (j=0; j<8; j++) {
        if (digitalRead(scanPins[i]) == LOW && digitalRead(signalPins[j]) == LOW) {
          /* debounce: count consecutive scans that see the same key */
          if (i == lastI && j == lastJ) {
            sameKeyCount++;
          } else {
            sameKeyCount = 0;
          }

          if (sameKeyCount > 50 && keyDown[i][j] <= 0) {
            printf("%d,%d\n", i, j);
            keyDown[i][j] = 50;
          }
          lastI = i;
          lastJ = j;
        }
        
        if (digitalRead(scanPins[i]) == LOW && digitalRead(signalPins[j]) == HIGH) {
          keyDown[i][j] = (keyDown[i][j] - 1);
        }
      }
    }
  }
}

A Ruby program in the cloud takes the output of this program (raw indexes like “6,0” for the spacebar, or “5,3” for “j”), maps each index pair to a string, and sends the result to a Google Doc using the watir gem for driving headless web browsers.
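The mapping step is conceptually just a lookup table keyed on the “row,column” pairs the C program emits. Here’s an illustrative fragment; only the two entries mentioned above (“6,0” for space, “5,3” for “j”) come from our hardware, and the rest of the real table is filled in the same way, one entry per switch in the 8×8 matrix:

```ruby
# Hypothetical sketch of the scan-code table. The full version has up to
# 64 entries, one per switch in the typewriter's 8x8 keyboard matrix.
KEYMAP = {
  "6,0" => " ",  # spacebar
  "5,3" => "j",
  # ... remaining keys elided
}

# Map one line of the C program's output to a string; unknown or noisy
# codes map to the empty string so they're silently dropped.
def keystroke_for(raw)
  KEYMAP.fetch(raw.strip, "")
end
```

In the real program, each resulting string is then typed into the open Google Doc by the watir-driven browser.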

Can I get one?

For now this is just a one-off project. But if you think you’d want to build one yourself, or buy one, email me at [email protected]

Most book clubs are doing it wrong

The standard way to run a book club is to have everybody finish the book before meeting to talk about it. You have one meeting per book. The discussion goes on for one or two hours before it runs out of gas, and then the group picks the next book, and you agree to meet in another month or six weeks.

You would never run a class this way, because it practically minimizes the value that each participant gets from being in the group. The problem is that there’s no time to cash in on anyone else’s insights. If someone says something in the meeting that reframes how you think about the book — they suggest that Holden is lying, or that Kinbote wrote Canto IV; they tell you to read Portrait first, so you can understand Stephen’s double bind; they claim that Offred’s tale is a series of transcripts, not journal entries — well, now it’s too late, because you’ve finished reading the book and you’re probably never going back to it.

What makes a class useful is precisely that it lets you compare notes with your classmates along the way, to float your working theories about a book and see how they sound to others. It’s not a retrospective, or not merely one — you’re equipping yourself for the rest of the reading.

This is true not just of frameworks or theories or whatever but of little nuts-and-bolts stuff, too, like when someone points out a reference that you missed or helps you savor some language that you blew right by the first time. That kind of thing is especially valuable when you’re reading a difficult book.

My book club started four years ago to read Infinite Jest. There were five or six of us; we had all tried, and failed, to read the book on our own. We met every week and read about fifty pages for each meeting — five or six hours’ worth for a book that dense. If you were out of town, you tried to call or Skype in, and you were forgiven for missing a few sessions, so long as you more or less kept up with the reading.

Since then we’ve run just about continuously, every week, week in, week out, for four years. We’ve read other hard books, and easy ones too, and no matter what, we’ve always split the reading across more than one meeting, because isn’t that after all how you make use of those other minds? Book club, for us, isn’t about reading the same book; it’s about reading a book together.

We try to keep the reading to about the amount you can do in a few hours on a Sunday afternoon. Weekly book club has become a fixture in our schedules, an institution like family dinner, though it’s not uncommon for someone to skip a whole book, say if they’re traveling a lot or right after they’ve started a new job. The idea is to make book club less an obligation than a sort of pleasant presence in our lives, this thing that’s always there.

Some books don’t really demand so much attention, and our book-talk during those sessions quickly devolves into banter. But most of the time the discussion lasts a full hour before it runs out of steam, naturally, the way almost all meetings seem to.

That’s another reason to break a book into pieces: better to have too little to fit into a session than too much; god forbid you read something complex and demanding — do you really want to spend three hours in the unpacking, or to have the session break down before the unpacking’s done? And what are the odds that you’ll even remember most of the book by the time six weeks pass?

Good books are almost fractally deep: you find whole worlds wherever you look, and no matter how far in you zoom. Breaking a book into multiple meetings makes the most of this fact. It gives you space to dwell — on a page, even on a single word — without feeling like you’re wasting anyone’s time. No: that’s what a book club is for, not to sum up what you’ve read but to live inside it.

I don’t know why more people don’t run book clubs this way. I think part of it is that they’ve never tried; the very concept of a book club seems to imply a one-book-per-meeting structure. Others hear the idea of meeting weekly and think who has the time?

I would say that anyone who loves books has the time. A book club run in the standard way isn’t efficient or practical — it’s just a good opportunity wasted.

Speed matters: Why working quickly is more important than it seems

The obvious benefit to working quickly is that you’ll finish more stuff per unit time. But there’s more to it than that. If you work quickly, the cost of doing something new will seem lower in your mind. So you’ll be inclined to do more.

The converse is true, too. If every time you write a blog post it takes you six months, and you’re sitting around your apartment on a Sunday afternoon thinking of stuff to do, you’re probably not going to think of starting a blog post, because it’ll feel too expensive.

What’s worse, because you blog slowly, you’re liable to continue blogging slowly—simply because the only way to learn to do something fast is by doing it lots of times.

This is true of any to-do list that gets worked off too slowly. A malaise creeps into it. You keep adding items that you never cross off. If that happens enough, you might one day stop putting stuff onto the list.

* * *

I’ve noticed that if I respond to people’s emails quickly, they send me more emails. The sender learns to expect a response, and that expectation spurs them to write. That is, speed itself draws emails out of them, because the projected cost of the exchange in their mind is low. They know they’ll get something for their effort. It’ll happen so fast they can already taste it.

It’s now well known on the web that slow server response times drive users away. A slow website feels broken. It frustrates the goer’s desire. Probably it deprives them of some dopaminergic reward.

Google famously prioritized speed as a feature. They realized that if search is fast, you’re more likely to search. The reason is that it encourages you to try stuff, get feedback, and try again. When a thought occurs to you, you know Google is already there. There is no delay between thought and action, no opportunity to lose the impulse to find something out. The projected cost of googling is nil. It comes to feel like an extension of your own mind.

It is a truism, too, in workplaces, that faster employees get assigned more work. Of course they do. Humans are lazy. They want to preserve calories. And it’s exhausting merely thinking about giving work to someone slow. When you’re thinking about giving work to someone slow, you run through the likely quagmire in your head; you visualize days of halting progress. You imagine a resource—this slow person—tied up for a while. It’s wearisome, even in the thinking. Whereas the fast teammate—well, their time feels cheap, in the sense that you can give them something and know they’ll be available again soon. You aren’t “using them up” by giving them work. So you route as much as you can through the fast people. It’s ironic: your company’s most valuable resources—because they finish things quickly—are the easiest to consume.

The general rule seems to be: systems which eat items quickly are fed more items. Slow systems starve.

Two more quick examples. What’s true of individual people turns out also to be true of whole organizations. If customers find out that you take two months to frame photos, they’ll go to another frame shop. If contributors discover that you’re slow to merge pull requests, they’ll stop contributing. Unresponsive systems are sad. They’re like buildings grown over with moss. They’re a kind of memento mori. People would rather be reminded of life. They’ll leave for places that get back to them quickly.

Even now, I’m working in a text editor whose undo feature, for whatever reason, has suddenly become slow. It’s killing me. It disinclines me, for one thing, from undoing stuff. But it’s also probably subtly changing the way I work. I feel like I can’t rely on undo. So if I want to delete something but think I might want it later, I’m copying it to the bottom of the file, like it’s the 1980s. All this because undo is so slow that it might as well not exist. Undo, when it’s fast, is an incredible feature; at any moment, you can dip into the past, borrow something, and zip back. But now it feels like a dead end.

Part of the activation energy required to start any task comes from the picture you get in your head when you imagine doing it. It may not be that going for a run is actually costly; but if it feels costly, if the picture in your head looks like a slog, then you will need a bigger expenditure of will to lace up.

Slowness seems to make a special contribution to this picture in our heads. Time is especially valuable. So as we learn that a task is slow, an especial cost accrues to it. Whenever we think of doing the task again, we see how expensive it is, and bail.

That’s why speed matters.

* * *

The prescription must be that if there’s something you want to do a lot of and get good at—like write, or fix bugs—you should try to do it faster.

That doesn’t mean be sloppy. But it does mean, push yourself to go faster than you think is healthy. That’s because the task will come to cost less in your mind; it’ll have a lower activation energy. So you’ll do it more. And as you do it more (as long as you’re doing it deliberately), you’ll get better. Eventually you’ll be both fast and good.

Being fast is fun. If you’re a fast writer, you’ll constantly be playing with new ideas. You won’t be bogged down in a single dread effort. And because your to-do list gets worked off, you’ll always be thinking of more stuff to add to it. With more drafts in the works, more of the world will pop alive. You will feel flexible and capable and practiced so that when something demanding and long arrives on your desk, you won’t back down afraid.

Now, as a disclaimer, I should remind you of the rule that anyone writing a blog post advising against X is himself the worst Xer there is. At work, I have a history of painful languished projects, and I usually have the most overdue assignments of anyone on the team. As for writing, well, I have been working on this little blog post, on and off, no joke, for six years.
