Christopher Kalos

Technology, Hobbies, and New Ideas

An avid techie since childhood, I've finally decided that it's time to establish my own presence on the web, as opposed to outsourcing it to a bunch of social network pages.

Here you'll find musings, my professional history, and anything that catches my interest.

As will become obvious as this page grows, I'm an Apple user, through and through:  When I work, I use the best tools for the job.  Sometimes that's Microsoft-based, sometimes it's Apple-based, and often, it's Linux-based.  Once I'm at home, though, I find that comfort, simplicity, and ease of use trump all of the flexibility in the world.   

Cool counts for a lot.  Simplicity counts for a lot more.

 

Current Projects

Infrastructure Engineering Manager at Take-Two Interactive Software

  • Just started here, so we'll see what happens next!

Director of Technology at Happtique

  • Currently guiding our upcoming product launch, bringing an mHealth content delivery platform to patients and prescribers.
  • Already working on the Next Big Thing:  I'm coordinating development for all of the upcoming product features.

Guest Host on the mHealthZone weekly podcast.

  • Providing tech commentary alongside the regular hosts, Ben and Corey.
  • Generally gabbing about the 'M' in "mHealth", while I let the experts focus on the "Health." 

System Architect for the Transformers Wiki


  • I jumped on this project to help out a friend of mine, cartoonist David Willis.  Since it's a fansite, and we're publishing a lot of content based on Hasbro's brand, I'm working on a volunteer basis.
  • After a major data loss with an older host, I set up an all-new environment on a shoestring budget, taking a few pages from my time at Meetup.
  • Earlier this year, facing rising costs and an imminent shutdown, we moved everything once more to Linode, bringing costs back down to normal while improving performance.

Ongoing Projects and Interests

Amateur Photographer

  • Who isn't these days? I shoot on a Nikon D40, since I spend all of my photography money on glass. Whiz-bang new technology is a lovely thing, but the D40 has served me well on more trips than I can count.
  • Back in 2009, before you could just buy a MiFi at any RadioShack, I whipped up a mobile hotspot using a Cradlepoint router, a USB 3G modem, and a hastily soldered-together battery pack.  This plus an Eye-Fi card let me help my friends at Altered States Magazine scoop the rest of the fandom with a live photostream.
  • Since then, it's all been cat photos and vacation snapshots.
  • You probably don't want me to do weddings, and I don't bother taking pictures of food.

Gamer

  • Videogames, board games, card games, you name it:  Gaming keeps the brain sharp.
  • I've been working on a tabletop RPG campaign setting for a while, and I run the occasional session.
  • Way back in the day, I even whipped up a few games, working my way up from BASIC through Pascal and C, finally ending in x86 assembly.

Cooking

  • I like good food, and it's far more affordable to make it myself.  My wife and I spend a good amount of time in the kitchen.
  • This also means a never-ending battle for counter space:  my coffee-making equipment and bar accessories versus silly things like cutting boards and major appliances.
  • I've definitely picked up some simple lessons from the likes of Alton Brown and Anthony Bourdain:  no single-tasker tools, and Even A Reasonably Intelligent Poodle Can Make Boeuf Bourguignon.

Past Jobs

1997: The Beginning

  • I was lucky:  two summer internships at Donaldson, Lufkin & Jenrette, now part of Credit Suisse.  I moved from an analyst job (basic number crunching, compiling reports in Excel, and so on) to IT.
  • IT pushed me right out in front of traders, all in need of a working computer, and let me cut my teeth on networking and server maintenance.

1999-2002: The Dot-com Boom:  UNIX, Networks, and Best Practices On A Budget

Globix Corporation - Junior System Administrator

  • Back in 1999, Globix was a big deal in the datacenter and colocation space.  I went there to learn, and had plenty of free time to experiment with Linux and Windows between support incidents.
  • Aside from the necessary lessons in security best practices, I completed trouble tickets as assigned and adapted to working the midnight-to-nine shift.

Cythere, Inc. - System Administrator

  • These guys were nice enough to let me work on a whole mess of technologies, ranging from Solaris to Cisco.
  • After I assisted in our move to a Tier 1 datacenter (Level 3), they handed me a T1, a Cisco 1721 router, and told me to set up the point-to-point link.  I grabbed a book, figured it out, and connected our main office back to our datacenter.
  • I also got to work on an early heuristic image search product with Lucent.

Cybersites, Inc. - System Administrator

  • One of the early forays into interest-specific online communities, they didn't make it out of the initial boom.
  • They did, however, give me a large enough server environment to learn all about out-of-band management, Kickstarting servers from scratch, and playing around with Fibre Channel SANs a bit.
  • I even set up their build deployment system, allowing us to move from staging to production in a single command.

Gotham Broadband - System Administrator

  • Gotham worked on some of the early broadband experiences, delivering high-quality websites for users on early broadband connections.
  • I got to manage the Linux/FreeBSD servers and the NT 4.0 Domain Controllers, and had two interesting projects of note:
  • The first was a "broadband simulator."  Instead of signing up for cable, DSL, and satellite broadband plans in our office, I set up a FreeBSD box with WF2Q (Worst-case Fair Weighted Fair Queueing) to simulate all three:  cable was fast with some minor packet loss, DSL was slightly slower but more reliable, and satellite had high latency and occasional connection failures.
  • The second was an effort to connect our developers, who were traveling between the US and Germany for a client, back to the main office.  One flight later, I had a VPN link set up between both offices, using a little off-the-shelf VPN hardware and learning quite a bit more about European ISDN than I expected.
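A broadband simulator like that can be sketched with FreeBSD's ipfw and dummynet (dummynet is also where the WF2Q+ scheduler lives); the pipe numbers, subnets, bandwidths, delays, and loss rates below are illustrative assumptions, not the original configuration.

```shell
# Illustrative FreeBSD ipfw/dummynet rules (assumed values, not the
# original config): three pipes emulating cable, DSL, and satellite.

# Cable: fast, with minor packet loss
ipfw pipe 1 config bw 3Mbit/s delay 15ms plr 0.005

# DSL: slightly slower, but more reliable
ipfw pipe 2 config bw 768Kbit/s delay 25ms plr 0.001

# Satellite: high latency, occasional drops
ipfw pipe 3 config bw 1Mbit/s delay 600ms plr 0.02

# Route each test client subnet through its pipe
ipfw add 100 pipe 1 ip from any to 10.0.1.0/24
ipfw add 200 pipe 2 ip from any to 10.0.2.0/24
ipfw add 300 pipe 3 ip from any to 10.0.3.0/24
```

Each pipe applies a bandwidth cap, a fixed delay, and a packet loss rate, so three test machines on three subnets each experience a different "last mile" without leaving the office LAN.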

Esaya, Inc. - System Administrator

  • Not much to be said here:  more servers, all Linux, and my first foray into real server monitoring.
  • Starting from nothing, I set up Nagios across both of our sites, customizing alert scripts across the board, defining escalation paths and dependencies, and making sure that alerts were resolved quickly.
  • This came in handy one Christmas Eve, when our database server went down and, with the assistance of our Database Administrator, I was able to coordinate a fix before Christmas morning came along.
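Escalation paths and dependencies like those map onto Nagios object definitions along these lines; the hostnames, check command, and contact groups here are hypothetical stand-ins, not the original setup.

```
# Illustrative Nagios objects (hypothetical hosts/contacts):
# a monitored service, an escalation path, and a dependency.

define service {
    host_name              db1
    service_description    MySQL
    check_command          check_mysql
    max_check_attempts     3
    contact_groups         sysadmins
}

# Escalate to on-call management if notifications 3+ go unanswered
define serviceescalation {
    host_name              db1
    service_description    MySQL
    first_notification     3
    last_notification      0
    notification_interval  15
    contact_groups         oncall-managers
}

# Suppress webapp alerts when the database behind it is already down
define servicedependency {
    host_name                     db1
    service_description           MySQL
    dependent_host_name           web1
    dependent_service_description HTTP
    notification_failure_criteria c,u
}
```

The dependency object is what keeps a single database outage from paging you once per dependent service.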


2003-2009: High Availability, Private Clouds, and a Touch of Big Data

Meetup, Inc. - System Administrator

  • This is where I learned some of the bigger tricks:  load balancers from scratch, multi-layer caching, and clustering, all on Linux.
  • With a bit of guidance, I set up a cavalcade of performance-boosting and downtime-reducing solutions.
  • To cut down on database traffic, we added a few Memcached servers, using them to cache DB queries and session information, reducing hits to the database AND to our message queueing system.
  • To support the ever-expanding database load, we added read-only DB replication to MySQL, and then doubled it.  Using the LVS-DR functions from the Linux Virtual Server project, we split reads evenly across multiple DB servers, effectively delaying the inevitable until some better NoSQL solutions came along.  (That was all after my time, however.)
  • At the same time, we copied this setup for the webservers, using redundant LVS-based load balancers to set up a proper web farm.
  • Since Meetup stores a lot of images from event photo uploads, I started to worry about capacity, and combined storage usage, user adoption rates, and general site activity to accurately forecast upcoming storage needs.

Jobson Medical Information - Network Manager

  • At long last, a team!  A team of two, but a team nonetheless!
  • Not only did I run the network team, but user support was a completely different group, giving us unprecedented freedom to innovate.
  • We ran a multi-site WAN, complete with a separate datacenter, and I was back in Windows for a change, proving to myself that going back to another OS is just like riding a bike.
  • We revamped everything:
  1. We had to move cages at Equinix.  That was painful enough that I negotiated Right of First Refusal from then on, making sure we'd always get to expand without a painful, multi-hour downtime as we literally carted servers down the hall.
  2. We bought quite a few business units, folding their domains into our own, adding their users, and bringing their services into our datacenter.  This gave me the chance to work on HP blade enclosures, including their non-Cisco network switches.
  3. We were constantly running low on space, or waiting too long for files on Direct Attached Storage, so I brought in a few SANs, one for each major site.  Once we got these going, things became very interesting:  remote snapshot replication empowered all-new backup strategies, but put us at odds with an ongoing network performance concern.
  4. So we solved it.  First by bringing in WAN accelerators to deduplicate, cache, and compress data before sending anything across our locations, and then by upgrading our aging IP Frame Relay network to a brand new MPLS network.  By running both networks at once and rigorously testing all business-critical services, we upgraded without any downtime.
  5. With the underlying infrastructure in place, it was time for VMware to solve our power and cooling problem:  we effectively built a private cloud, migrated existing servers accordingly, and developed a hardware replacement program to eliminate downtime while reducing costs.
  6. When it came time to bring even more business units in, we just shipped them a single VMware box running a Virtual SAN Appliance, plus a smaller WAN accelerator.  Local files, nightly backup, and no loud, hot room filled with half a dozen boxes full of blinking lights.  This earned a writeup in Storage Magazine.
  7. As a team of two, we got to play with everyone's toys:  we built and maintained MS SQL clusters, file clusters, Exchange clusters, and web farms, and redesigned systems as needed.
  8. One of the coolest products was a Medical Intelligence platform that grabbed PDFs of every major paper, extracted the text, and produced searchable synthetic abstracts.  We rebuilt everything around this platform to improve performance and reliability, and to do so I had to learn the system from the hardware down to the ontologies in the database.
  9. As the leader of the team, I spent a significant amount of time in both internal and client-facing meetings, forcing me to work on the one thing tech guys aren't known for:  people skills.  Effectively, I became the Sales Engineer for Jobson's clients, particularly in their Internet Solutions business.
  10. Having revamped the network in roughly three years, we went into maintenance mode, with enough resources on tap for our development teams:  everything from adding entire development environments on the fly to guiding them through best practices for load-balanced websites using shared sessions.

2009 - Present:  Streaming Video, Product Management, and Leadership

Stream57 (now part of InterCall) - IT Manager, Sales Engineer, Wearer of Hats

  • When I came onboard, it was time to move us from managed hosting to Level(3) all over again.  This time, I knew what I needed, so we spun up the servers in almost no time.  Having learned my lessons about the ever-increasing appetite for storage, and working for a streaming media company, I had SANs installed as soon as the budget was available.  Thin provisioning made life much easier.
  • As the guy who understood the bandwidth and protocol needs, I was roped in as an impromptu Sales Engineer, completing RFPs, explaining video capabilities, and helping to tailor solutions for the needs of both small shows and major pharmaceutical clients.
  • Once InterCall bought Stream57, I became more involved in product direction, as my time assisting the sales team made me uniquely qualified to explain our capabilities in plain English.
  • I also took this time to streamline the loadout for our Event Technicians, putting everything they needed for redundant on-site video encoding into a package that fit under an airliner seat.  Both the techs and their backs were pretty happy with this.
  • That prompted a move to Sales Engineering, putting me in front of clients both in person and on camera.  Once there, I got involved with the Production team, helping define best practices for H.264 encoding at different bitrates and aspect ratios, improving the quality of events for all of our clients.

Today - Happtique - Director, Technology

  • I can't say much yet, but watch this space!