As I said in my last post, in the spirit of Year In Review posts, I’m reviewing the last seven years, because that’s the only way it makes sense. Last time I talked about the somewhat complicated answer to who I was working for (not in a secret-squirrel way; it’s just hard to explain if you’re not familiar with government contracting, though I think the 1600-word post did the trick). In this post, I’ll talk about what I did during those seven years, up until I quit my job at the end of September. After that, maybe I’ll talk about what I’ve been doing since then and where I see things going. Maybe not. But let’s get into it.
Part 2: What did you work on?
For the bulk of my time at OSC, I worked on a system called NAIS. The Nationwide Automatic Identification System is the Coast Guard’s shore-side system for consuming, storing, disseminating, and transmitting messages in a protocol called AIS. AIS consists of short binary messages sent, primarily, by ships, in which they broadcast their position, identity, and other relevant information (like how big they are and which way they’re pointing).
The heart of NAIS is a server program that accepts connections from talker and listener clients. It receives AIS messages from the talker clients (encoded in a text-based format called NMEA 0183) and distributes a copy of each message to all listener clients, after a certain amount of filtering and de-duplication specific to each listener.
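The fan-out described above can be sketched roughly as follows. To be clear, this is a toy reconstruction, not the actual NAIS code: the names (`FanOutServer`, `receive`) and the time-windowed de-duplication strategy are my assumptions for illustration.

```python
import time


class FanOutServer:
    """Toy sketch of NAIS-style fan-out: talkers feed messages in,
    and each listener gets a filtered, de-duplicated copy."""

    def __init__(self, dedup_window_secs=60.0):
        self.dedup_window = dedup_window_secs
        # listener name -> (filter predicate, {message: last delivery time})
        self.listeners = {}

    def add_listener(self, name, predicate):
        self.listeners[name] = (predicate, {})

    def receive(self, message, now=None):
        """Called when a talker delivers a raw message; returns the
        names of the listeners the message was forwarded to."""
        now = time.time() if now is None else now
        delivered = []
        for name, (predicate, seen) in self.listeners.items():
            # Per-listener de-duplication: drop a message this
            # listener already got within the window.
            last = seen.get(message)
            if last is not None and now - last < self.dedup_window:
                continue
            if predicate(message):
                seen[message] = now
                delivered.append(name)
        return delivered
```

A duplicate arriving ten seconds after the original is suppressed; the same message arriving well after the window has passed is treated as new.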
Filtering
When I got there, the server filtered based on which set of talkers the message had come from (effectively limiting the feed to a geographic area of interest). My first project was to add filtering based on the contents of AIS messages. The most involved part of this was parsing the messages into their constituent fields. The NMEA format encodes AIS messages in a Base-64-like encoding. Converting that into a string of bits is easy. Determining which bits belong to which fields is done according to a complex set of rules that was not created with parser writers in mind.
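For the curious, the "Base-64-like" armoring works roughly like this: each payload character carries six bits, recovered by subtracting 48 from its ASCII value (and a further 8 for values above 40). The field offsets below (message type in bits 0–5, vessel identifier in bits 8–37 of a position report) come from the publicly documented AIVDM decoding rules, not from the NAIS code itself, and the example sentence is a commonly cited one.

```python
def payload_to_bits(payload: str) -> str:
    """Un-armor the 6-bit-ASCII payload of an !AIVDM sentence
    into a string of bits."""
    bits = []
    for ch in payload:
        v = ord(ch) - 48
        if v > 40:
            v -= 8
        bits.append(format(v, "06b"))
    return "".join(bits)


def unsigned_field(bits: str, start: int, length: int) -> int:
    """Extract an unsigned integer field from the bit string."""
    return int(bits[start:start + length], 2)


# A commonly cited example sentence (a Class A position report):
sentence = "!AIVDM,1,1,,A,15M67FC000G?ufbE`FepT@3n00Sa,0*5C"
payload = sentence.split(",")[5]
bits = payload_to_bits(payload)

msg_type = unsigned_field(bits, 0, 6)  # bits 0-5: message type (1)
mmsi = unsigned_field(bits, 8, 30)     # bits 8-37: MMSI (366053209)
```

Converting characters to bits really is the easy part; the complexity the paragraph above alludes to lives in the per-message-type field layouts, variable-length fields, and multi-sentence messages.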
In retrospect, this shouldn’t have been the most involved part: I could have taken advantage of work that had already been done on AIS parsing, both within OSC and outside it. Nothing existed in the language I needed to use, but I could possibly have ported something, or at least gotten advice on avoiding the pitfalls. Also as part of adding these filtering features, I wrote code for garbage collection, symbol tables, the actual filtering rules, and a GUI for administrators to configure those rules for listener clients. (Why symbol tables? I was on a Lisp kick at the time, and Greenspun’s Tenth Rule applied.)
The filtering system had more complexity than was actually needed, but it was reliable. I unit tested it to within an inch of its life because another developer, during my interview, said “We try to do TDD here.” In actuality he was the only one there who did anything like TDD. But in this case, it worked out well. Another group, with whom we had a somewhat adversarial relationship, was tasked with testing the filtering. They tried hard to find bugs but could only find user-friendliness issues. Which brings me to the other problems with the system. The filtering GUI (though reliable) was designed for someone familiar with the details of the protocol (namely me), rather than your average user. Further, the GUI didn’t provide affordances to make the most commonly requested kinds of filtering easier (namely filtering by latitude/longitude area and vessel identifier). Lastly, those two kinds of filtering were the only kinds ever used, as far as I know. All the work parsing every field in every message type in the protocol was completely unnecessary.
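The two kinds of filtering users actually wanted are simple enough to express directly. Here is a hypothetical sketch (the real rule representation in NAIS was surely different, and real AIS positions arrive in 1/10000-minute units rather than decimal degrees):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BoundingBoxFilter:
    """Pass only position reports inside a lat/lon box."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def matches(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)


@dataclass(frozen=True)
class MmsiFilter:
    """Pass only messages from a fixed set of vessel identifiers."""
    mmsis: frozenset

    def matches(self, mmsi: int) -> bool:
        return mmsi in self.mmsis
```

A GUI built around just these two rule types, with a map for drawing the box and a text field for MMSIs, would have served the actual users far better than one exposing every field of every message type.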
I’m running up against my one-sitting rule again. I have the rest of this post outlined, so I might just edit it to add the rest, but I’ll probably just put it in a new post instead. It’s nearing the New Year on the East Coast, so best wishes for 2017, and I’ll see you next year.