The following are some of the more time-consuming and/or interesting things I have done. If you are interested or curious about anything, just ask.
Bought a blade server for Intel Atom processors (112 nodes in 6U) and am now getting around to building a hosting business around low-price-point dedicated servers for expert users: self-install and (hopefully) minimal support.
global load balancer.
Over the last couple of years at Peer 1 I've been running a series of distributed network projects. It started with GLOBAL, a Global LOad BALancer (I think my co-worker Rob got $100 for that pseudo-recursive name). It's an Anycast-DNS-based load balancer with various health-check mechanisms and geo-location based on network best path.
After my employer decided that having an Anycast DNS load balancer wasn't doing a particularly good job of getting our customers to buy more of our colocation space in other cities, I was allowed to start work on a Squid-based caching reverse-proxy system. It works pretty well, and over the years it's grown, but it has also languished from a lack of promotion.
Our regular DNS servers (which we had tried so very hard to get our customers not to use, by charging them a lot of money) started getting pretty unreliable and required too much attention. I used that as an excuse to start moving our customers onto GLOBAL's Anycast DNS infrastructure. The idea was to charge customers even more than before, but provide a much better service.
This is a service more like UltraDNS. Sometimes marketing types come up with the most original names for products; thus our DNS product was dubbed SuperDNS. Someone should've been fined $100 for that name.
Now that my employer does more than provide just co-location and network access, we're starting to migrate the whole company onto SuperDNS's infrastructure. As if to restore some kind of karmic naming balance to the universe, this project has been pegged SuperduperDNS. It may be another stupid name, but at least it's charmingly stupid. Hopefully, some day, we'll actually start charging customers for it.
The last of the distributed network products that I built and run for my current employer is a distributed network monitoring system. It's pretty simple at this point: basic threshold alerts, trending/graphing for SNMP MIBs, and health checks for HTTP and TCP/IP.
It appears to be scalable, and we'll soon finish rolling it out into the hosting facilities, where it has already demonstrated that it can handle at least thousands of monitored devices.
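The threshold-alert part is conceptually simple; a minimal sketch (hypothetical OID names and levels, the real system obviously does more) looks something like:

```python
def check_thresholds(samples, thresholds):
    """Compare the latest polled values against per-OID thresholds.

    samples: {oid: value} from the most recent SNMP poll.
    thresholds: {oid: (warn, crit)} with warn < crit.
    Returns a list of (oid, level) alerts to raise.
    """
    alerts = []
    for oid, (warn, crit) in thresholds.items():
        value = samples.get(oid)
        if value is None:
            alerts.append((oid, "no-data"))   # device stopped answering
        elif value >= crit:
            alerts.append((oid, "critical"))
        elif value >= warn:
            alerts.append((oid, "warning"))
    return alerts
```

Trending/graphing is then just the same polled samples written to time-series storage instead of compared against limits.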
aterm_client, linux client for a wireless access point.
Living in Japan has provided me with an opportunity to experience the wonders of cool Japanese products that are not available outside of Japan. One such product was the frustrating NEC ATerm WARPSTAR E (sigma) wireless gateway / router box.
In order to get internet access on my girlfriend's laptop, I had to reverse engineer the management protocol used by the Windows drivers for this product. Well, 'reverse engineer' is probably an exaggeration; I'm sure it would have been a lot easier if I could have just read the Japanese manuals.
Anyway, if you're looking for a way to get your non-Windows computer to work with this product, check aterm_client out.
linux snipes, the networked version.
A directed study (CPSC 444) on peer-to-peer distributed systems with Mike Feely at UBC. The original Snipes was a fun game, and it's been too long since networks have seen its bits. I think it was the first multiplayer network game I ever played. It was also the best network 'testing' tool I've ever used. And, gosh-darnit, it was educational.
My project was based on the GPL'd Linux Snipes by Jeremy Boulton. His version plays pretty well and is reasonably accurate to the original (a couple of bugs in some of the funny game modes). Unfortunately, the most important part, the multiplayer part, was missing. So I was adding it.
There were two main parts to the project. First, I tried to perfect a group communication algorithm that minimizes event propagation time (by using a serverless flat topology). Second, I was trying to build an 'auto-zoning' mechanism that linked Snipe mazes together into an infinite serverless grid.
It mostly worked. At the very least, I was educated as a result. One principal lesson I'd like to pass on is that prototyping multi-threaded networked programs and protocols in plain C is not a good idea. It's too bad the guy(s) who were writing a Java-based Snipes never got anywhere, or at least they appear to have fallen out of Google. Java has charming network message marshaling mechanisms that I find useful for prototyping (although they were slow back then).
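The flat serverless topology boils down to this: every peer talks directly to every other peer, so an event always propagates in one network hop, at the cost of O(n) sends per event. A toy sketch (made-up names, and in Python rather than the C the project actually used):

```python
class Peer:
    """A peer in a flat (fully connected, serverless) topology."""

    def __init__(self, name):
        self.name = name
        self.peers = []      # every other member of the group
        self.events = []     # events seen, in arrival order

    def join(self, group):
        """Mesh with every existing member, then join the group list."""
        for other in group:
            other.peers.append(self)
            self.peers.append(other)
        group.append(self)

    def send(self, event):
        """Deliver locally, then one direct hop to everyone else."""
        self.events.append(event)
        for other in self.peers:
            other.events.append(event)
```

A server or tree topology would cut the per-event fan-out but add relay hops, which is exactly the propagation time a fast-paced game like Snipes can't afford.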
mailreader, Japanization thereof.
My girlfriend (now wife) and I modified a webmail program so that she (and possibly her friends) could send and receive Japanese-encoded email. It's really convenient, and I believe the changes were included in the latest Debian release package. Of course, these days it has all been superseded by GMail. But if you're looking for somewhere to host your own webmail, let me know and I'm sure we can come to some arrangement.
Making LEGO robots.
LEGO has always been one of my favorite learning tools (OK, toys). In my ISCI 333 class we did a lot of robot building. My robots did well, but there was so much more I wanted to do. Unfortunately, when I returned a year later to TA the course, I didn't have enough time to work on my own robots. Which is really too bad, because having twelve RCXs at my ready disposal would have been really convenient for doing autonomous distributed computing experiments. Although the projects for the course could not use it, I experimented briefly with an older version of legOS.
Now, after my advanced operating systems course I think I would like to take some time to work over the packet networking code of legOS and see if I can get a rudimentary UDP/IP system. If they can fit UDP/IP on a PIC, then I should be able to get it into a Hitachi H8 with 32k of memory.
Simple Service Provisioning Program.
At Internet Direct I worked on many things, but my favorite was SSPP. Back in those days there were no Oracle client libraries for Linux or the BSDs, so we needed a way to get account information to all of the servers (at least two dozen at one point). Because I had worked closely with the design of most of the database and understood the utilities the administrators used to 'automate' account creation on the servers, I was in a great position to make it even more automated. SSPP essentially behaves like POP3 with a UIDL command: each client machine requests the changes in the database since the last time it updated. To sync up a new server (accounts and services were mirrored across multiple machines on both sides of the country), all one had to do was install the provisioning software; the first time it ran, it would fetch all the relevant account changes and provision them.
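The sync model can be sketched like this (hypothetical names and an in-memory changelog standing in for the real Oracle journal): each client remembers the id of the last change it applied and asks only for newer ones, so a freshly installed server, starting from zero, naturally replays the whole history and provisions everything:

```python
def fetch_changes(changelog, since):
    """Server side: return all provisioning changes after `since`.

    changelog: ordered list of (change_id, action) rows, the way the
    database journals account/service changes.
    """
    return [(cid, action) for cid, action in changelog if cid > since]

class ProvisioningClient:
    """Client side: POP3/UIDL-style incremental sync."""

    def __init__(self):
        self.last_seen = 0     # a brand-new server has seen nothing
        self.applied = []      # stand-in for real provisioning actions

    def sync(self, changelog):
        for cid, action in fetch_changes(changelog, self.last_seen):
            self.applied.append(action)   # provision the account/service
            self.last_seen = cid
```

The nice property, as with UIDL, is that clients are stateless from the server's point of view: the server just answers "everything after id N", and bootstrap and incremental update are the same code path.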