Better Late than Never: following up on Google HTTPS using DHE cipher suites

[Above image is the famous Caesar Cipher]

One of the things that irked me about how Google handled the security of their HTTPS traffic was the lack of DHE ciphers.  I banged out a post, “Google and Amazon do not offer ciphers using Diffie-Hellman Ephemeral mode,” way back in Oct 2010.

Tonight, out of the blue, I decided to see if anything had changed since then.  While it took more than a year after my original blog post, Google has finally taken the initiative to encrypt their HTTPS traffic using DHE ciphers, as announced on their Google Online Security Blog.

Google servers still offer a range of possible ciphers for HTTPS traffic, but the ECDHE-RSA-RC4-SHA cipher is now the preferred one, and it is supported by all modern browsers. Less secure ciphers are still offered to visitors running ancient browsers that support only the older ciphers.
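If you're curious which ephemeral-key (ECDHE) suites your own OpenSSL build knows about, you can list them locally; the exact names and availability vary by OpenSSL version:

```shell
# List the ECDHE cipher suites known to the local OpenSSL build,
# showing key exchange (Kx), authentication (Au), encryption, and MAC.
openssl ciphers -v 'ECDHE'
```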

This is a great step forward for the security of Google HTTPS traffic: each session of your interactions with Google will be encrypted with a different key that exists only for a short time in memory and is never written to persistent storage.  If an attacker captures your HTTPS traffic and even obtains a hijacked copy of the Google server’s private key, the attacker is still missing one more piece: the ephemeral key, which has long since disappeared.

Several decades down the road, if the attacker fires up a powerful (quantum?) computer for decryption attempts, a successful decryption would be limited to one HTTPS session. The attack would have to be re-run for each session.  This makes it very difficult for the attacker to reconstruct what transpired between you and Google during that time.

While it’s better late than never, thank you Google for boosting the security of your traffic, which only gets more sensitive each year as more people become dependent on your services.

Posted in Networking | Leave a comment

Reverse-order sort of du -h human-readable output

du -d 1 -h | perl -e'%h=map{/.\s/;99**(ord$&&7)-$`,$_}`du -d 1 -h`;die@h{reverse sort%h}'

MacBook-Pro.-=[jnevans] /var/log # sudo ~jnevans/
192K ./cups
728K ./krb5kdc
2.1M ./DiagnosticMessages
 11M ./asl
 22M .

Hat tip to

Want it in your .bashrc?

alias dsize='du -d 1 -h | perl -e'"'"'%h=map{/.\s/;99**(ord$&&7)-$`,$_}`du -d 1 -h`;die@h{reverse sort%h}'"'"
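If your sort(1) understands the -h flag (GNU coreutils 7.5+ and recent BSDs do; the stock OS X sort of that era did not, hence the Perl trick), the same largest-first listing needs no Perl at all:

```shell
# Reverse-sort du's human-readable sizes; -h compares 192K < 2.1M < 11M correctly.
du -d 1 -h | sort -rh
```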
Posted in Programming | Leave a comment

Ubuntu: How to select and install (or pin) a package from a repository

This is a follow-up to “Redis and php-resque on Ubuntu 12.04.”

Why pinning a package may be desirable

In my previous post, after adding the dotdeb repository to an Ubuntu system and then performing a system-wide upgrade, mysql-server and php (if already installed) will also be upgraded to the latest versions available from the dotdeb repository.

There are a few reasons why an administrator may elect not to go down this route for the latest versions of the dotdeb packages and instead stick with the older versions from official or mirrored Ubuntu repositories.

The PHP and mysql-server packages from an official or mirrored Ubuntu repository may be a bit older, but they have been thoroughly battle-tested by many developers and users.  These packages are more likely to be stable and less prone to problems.  Additionally, the installation process for those packages has been “Ubuntuized” so they fit snugly within the distribution and work well with other Ubuntu-sanctioned software.

Once the administrator starts to deviate from the official Ubuntu path and installs mysql-server and PHP from other sources, there’s no guarantee that these packages will work right off the bat; they may require additional modifications to work properly on the server.  While this is unlikely to be a major concern with the particular packages from the dotdeb repository, the possibility cannot be completely disregarded.

Setting up APT to pin a package

However, there is a way to set up APT to accept one package from a repository while excluding all of its other packages.

In this case, we want to retain only the redis-server package from the dotdeb repository while disregarding its PHP and mysql-server packages, in favor of the standard packages from the Ubuntu distribution.  This is called “pinning a package.”

Before setting up the repository on the system, first check APT’s policy about the redis-server package:

# apt-cache policy redis-server
 Installed: (none)
 Candidate: 2:2.2.12-1build1
 Version table:
   2:2.2.12-1build1 0
     500 precise/universe amd64 Packages

This output shows that redis-server is yet to be installed, and that if it were to be installed, it would come from the standard Ubuntu packages, which offer redis-server 2.2.12.

APT priorities

APT uses a priority number (500 in the example above) to indicate how preferred this particular package is over the same package from other repositories.  The priority number 500 shows that this package version is right smack in the middle of the pecking order.

If a new repository were added to the system and happened to offer the redis-server package with the same priority score of 500, that package would be preferred over the Ubuntu-provided package (at equal priority, APT picks the highest version, and the new repository offers a newer one).  If the added repository's package were marked with a lower priority score, say 400, it would be pushed down the pecking order and the Ubuntu package would be preferred by APT.

After adding the repository (steps here), but before installing redis-server:

# apt-cache policy redis-server
  Installed: (none)
  Candidate: 2:2.4.16-1~dotdeb.0
  Version table:
    2:2.4.16-1~dotdeb.0 0
      500 stable/all amd64 Packages
    2:2.2.12-1build1 0
      500 precise/universe amd64 Packages

The APT policy now shows that the dotdeb redis-server 2.4.16 package will be preferred for installation.

However, now that the dotdeb repository has been set up, it also affects the PHP5 package. The system now prefers the dotdeb package for PHP5, as seen in this APT policy output:

# apt-cache policy php5
  Installed: 5.3.10-1ubuntu3.2
  Candidate: 5.3.16-1~dotdeb.0
  Version table:
     5.3.16-1~dotdeb.0 0
        500 stable/all amd64 Packages
 *** 5.3.10-1ubuntu3.2 0
        500 precise-updates/main amd64 Packages
        500 precise-security/main amd64 Packages
        100 /var/lib/dpkg/status
     5.3.10-1ubuntu3 0
        500 precise/main amd64 Packages

Modify APT preferences for a package

In order to disregard all other dotdeb packages except for the redis-server package, we need to modify APT preferences with two new rules:

Rule 1: Lower the priority score to 400 for all packages from the dotdeb repository, so that the standard Ubuntu packages will override them.

Rule 2: Raise the priority score for the dotdeb redis-server package back to 500 so that it will be preferred over the standard Ubuntu package.

Create the file /etc/apt/preferences.d/redis-server-dotdeb-pin-400 with these two sections:

Package: *
Pin: release
Pin-Priority: 400

Package: *redis-server*
Pin: release
Pin-Priority: 500

To find out what to put on the Pin: line, check which release properties the repository advertises:

# apt-cache policy | grep dotdeb
 400 stable/all i386 Packages
 400 stable/all amd64 Packages
     redis-server -> 2:2.4.16-1~dotdeb.0
     redis-server:i386 -> 2:2.4.16-1~dotdeb.0
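The Pin: lines above still need a release property to match against. Assuming the dotdeb repository reports an origin of packages.dotdeb.org in the apt-cache policy output (an assumption; substitute whatever release line your own output shows), the completed stanzas might look like:

```
Package: *
Pin: release o=packages.dotdeb.org
Pin-Priority: 400

Package: *redis-server*
Pin: release o=packages.dotdeb.org
Pin-Priority: 500
```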

Double-check the APT policy for PHP5 to verify that it’s now sticking with the Ubuntu standard package:

# apt-cache policy php5
  Installed: 5.3.10-1ubuntu3.2
  Candidate: 5.3.10-1ubuntu3.2
  Version table:
     5.3.16-1~dotdeb.0 0
        400 stable/all amd64 Packages
 *** 5.3.10-1ubuntu3.2 0
        500 precise-updates/main amd64 Packages
        500 precise-security/main amd64 Packages
        100 /var/lib/dpkg/status
     5.3.10-1ubuntu3 0
        500 precise/main amd64 Packages

Now APT is configured properly to install the redis-server package from the dotdeb repository while retaining the standard Ubuntu packages for everything else.

Overriding the preferred repository

If redis-server isn’t installed and the system has both the Ubuntu and dotdeb packages available, you can force the installation of the redis-server package from the Ubuntu repository over the preferred dotdeb repository:

# apt-get install -t precise redis-server
Posted in Linux | Leave a comment

Redis and php-resque on Ubuntu 12.04

During my free time, I help administer a machine hosting several websites which generate copious amounts of traffic; at times, there can be a significant load on the server. These websites require many back-end tasks to keep content fresh for incoming visitors: sending out emails, polling blogs for new content, encoding video files, organizing and moving files around, updating caches, etc.

Minimizing server overloads

One of the ways to squeeze more performance out of a busy server is to break up all these tasks into individual jobs.  The jobs are placed into queues with different levels of priority, and worker processes constantly check the queues for new jobs to take on.

During surges of visitor activity, there’s less worry about the server load shooting through the roof and slowing everything down.  The jobs are simply inserted into the queues, and the limited number of worker processes will eventually get to them all. Rather than having the CPU constantly near 100% utilization, the work is spread out over time so other services will not be starved.  The websites will remain snappy, even during periods of surging activity.

In short, being able to slightly delay tasks that don’t absolutely need to be run immediately is an excellent way to keep a system running fast during periods of high activity.
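The idea can be sketched in plain shell (a toy stand-in for Redis/Resque; the file and job names are made up for illustration): enqueueing a job is instant, while a worker drains the queue at its own pace.

```shell
# A queue is just ordered storage; here a temp file stands in for Redis.
QUEUE=$(mktemp)

# Producer: requests enqueue jobs instantly instead of doing the work inline.
for job in send_email encode_video update_cache; do
  echo "$job" >> "$QUEUE"
done

# Worker: picks jobs off the queue one at a time, at its own pace.
while read -r job; do
  echo "processing $job"
done < "$QUEUE" > processed.log

rm -f "$QUEUE"
```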

Discovering Redis and Resque

After some research to find a job/queue management system flexible enough to run any type of job I could toss at it, I decided upon Resque.  It makes use of an excellent data structure server (not a database!) called Redis. Redis is ideal since its capabilities are almost brain-dead simple to use, it can be easily scaled from one machine to many, and with all the data stored in RAM, it’s blindingly *fast* [more details on Redis here].  Resque sets up and manages the queues on Redis.

An excellent analogy for Redis and Resque is that of a warehouse and its inventory manager.  Redis is the huge warehouse where many boxes (jobs) can be stored.  Resque is the inventory manager who brings in the boxes, knows where they should go in the warehouse, monitors them, makes sure they are taken care of, and takes them out when they no longer need to be stored.

The number of worker processes can be fine-tuned for your server. If the server starts to experience growing pains and there aren’t enough workers to handle the jobs quickly enough, the number of workers can be increased.  If the server starts to max out its internal resources, it’s a trivial task to offload the workers to other servers to relieve the load on the primary server.  Once applications are created with Resque in mind, they have a built-in ability to scale easily in the future.

…But my web developers use PHP, not Ruby

Resque is based on Ruby, which is a great language in its own right and is easily extended with many third-party libraries.  However, many web developers prefer PHP (or may not have any say in the matter).  Fortunately, Chris Boulton did the hard work of porting Resque to PHP; the port is called php-resque.

Installing Redis and php-resque on Ubuntu 12.04

To simplify the installation and maintenance of a Redis server on Ubuntu, check out the excellent dotdeb repository, which offers up-to-date versions of several popular packages such as PHP, PHP extensions, MySQL, and… Redis server. Another reason to have this repository on a server is that APT can then be used to easily keep the Redis server up to date in the future, avoiding the need to compile.

The packages of the dotdeb repository are targeted at Debian systems but are usable on Ubuntu. It’s highly recommended to test dotdeb packages first on a test server to ensure that nothing else will break.  Be sure you are comfortable with the APT packaging tools (and have good backups) before heading down this path! You have been forewarned!

Back up the php.ini for cli and apache2

cp /etc/php5/apache2/php.ini /etc/php5/apache2/php.ini.bak
cp /etc/php5/cli/php.ini /etc/php5/cli/php.ini.bak

Set up the dotdeb repository and get the system up to date:

Create file /etc/apt/sources.list.d/
deb stable all
deb-src stable all

Approve dotdeb’s GnuPG key:

wget -q -O- | apt-key add -

Special note: if you want only the redis-server package from the dotdeb repository and do not want to upgrade PHP5 or mysql-server, read this follow-up post.

Update the system:

sudo apt-get update
sudo apt-get upgrade

Fully update PHP5, then fix up php.ini for the sessions file location

sudo apt-get install php5  [Run this even if PHP5 is already installed on the system]

Before PHP5 update:

$ php -v
PHP 5.3.10-1ubuntu3.2 with Suhosin-Patch (cli) (built: Jun 13 2012 17:19:58)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies

After PHP5 update:

$ php -v
PHP 5.3.15-1~dotdeb.0 with Suhosin-Patch (cli) (built: Jul 23 2012 12:25:58)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
    with Suhosin v0.9.33, Copyright (c) 2007-2012, by SektionEins GmbH

Make sure this line is not commented out in /etc/php5/apache2/php.ini:

session.save_path = "/tmp"

Then restart Apache:

$ sudo service apache2 restart

Install redis-server

sudo apt-get install redis-server

Install php-resque and run the demo

Note: php-resque uses PHP Composer to install its dependency packages, so install Composer first.

In another directory:

curl -sS | php
sudo mv composer.phar /usr/local/bin/composer

If you don’t have git on your system yet:

sudo apt-get install git
git clone
cd php-resque
composer install  [installs the dependency packages listed in composer.json]
cd demo  [edit job.php and change sleep(120) to sleep(5) to make the demo run faster]

Start the workers

bash -c "VVERBOSE=1 QUEUE=* COUNT=2 php resque.php"

Submit a few jobs

php queue.php PHP_Job
php queue.php PHP_Job
php queue.php PHP_Job
php queue.php PHP_Job

Watch workers perform the work.  When done, kill all workers at once:

killall php

Running your own custom jobs

It’s now a simple matter of creating custom jobs inside another class in job.php and queueing multiple runs of the job class.

Now we’ll start over and use PHP Composer to get things set up for a new project:

$ cd my_proj

Create a file called composer.json:

{
    "require": {
        "chrisboulton/php-resque": "1.2.x"
    }
}

$ composer install
$ mkdir files ; cd files  [your custom code will go into this directory]

To pass values to your custom job class:

For example, to pass a third argument (a hostname) from the command line to your job, i.e.

$ php queue.php Ping_Job

resque.php :

  require 'job.php';
  require __DIR__ . '/../vendor/chrisboulton/php-resque/resque.php';

queue.php :

if (empty($argv[1])) {
    die('Specify the name of a job to add, e.g. php queue.php Ping_Job');
}

require __DIR__ . '/../vendor/autoload.php';

$args = array(
    'host' => $argv[2],
);

$jobId = Resque::enqueue('default', $argv[1], $args, true);
echo "Queued job " . $jobId . "\n\n";

job.php :

class Ping_Job
{
    public function perform()
    {
        $host_pinged = $this->args['host'];
        echo "\n ==== \n pinging $host_pinged \n";
        // ... rest of custom code
    }
}

Start the workers and submit several jobs

$ bash -c "VVERBOSE=1 QUEUE=* COUNT=2 php resque.php"
$ php queue.php Ping_Job
$ php queue.php Ping_Job
$ php queue.php Ping_Job
Posted in Databases, Linux, Programming, Web/Tech | 2 Comments

Today, a new advancement in VRS calling

It’s days like this that make me proud that I’m part of the engineering team at ZVRS. This morning, at the NAD conference, ZVRS worked with Google on a demo of our prototype ZVRS app for Google+ Hangouts.

The ZVRS app for Hangouts extends the functionality of a Google+ Hangout to allow a deaf person to automatically bring in a sign language interpreter! Once the interpreter shows up in the video hangout, the deaf person can then dial a hearing person’s phone number. When the hearing person answers the phone, the call can be seamlessly interpreted.

Here are a few screenshots of what the ZVRS app for Hangouts looks like:

Starting a Hangout

First, you start a hangout but since this will be a private call to a hearing person’s phone, you don’t invite anyone to your hangout:

Inviting a sign language interpreter into your hangout

Once the Google+ hangout window pops up, the ZVRS app shows up on the top menu bar.  Since this is only a prototype, only those who have been whitelisted can see the ZVRS app.

There’s a short wait after you invite an interpreter.  Our backend systems start searching for an available interpreter that can be assigned to your hangout.

Now that there is an interpreter ready in the hangout, you can proceed to call out to a hearing person’s phone number and bring them into your hangout.

Calling a hearing person’s phone

Since you’re dialing a phone number, click the +telephone link and add the phone number.

Now the hearing person on the other end of the line is an active participant in your video hangout, and the call can proceed as an interpreted call!

Why this is game-changing

Google+ works across a range of platforms: Windows PCs, Apple Macs, Linux desktops, and Android tablets! (iPad support isn’t fully there yet.)  There are also several cool features, such as the chat window, which makes it easy for the caller or interpreter to type out account numbers, confirmation numbers, addresses, phone numbers, etc.

For those who are technically inclined: Google+ Hangouts are based on Vidyo technology and H.264 SVC.  This means the video quality can be maintained over a network with some packet loss, without annoying artifacts.

Also, imagine you are in a hangout with several people and all of a sudden a hearing person joins: it takes only one click to bring in an interpreter for the hearing person!

If you know that the hearing person you are trying to call has a Google account, you can invite them directly to the conversation and enjoy the bonus of actually seeing their face on the call, rather than a faceless, audio-only call to their phone.  This is the first VRS/VRI-capable app where you are able to actually see the hearing person if they happen to have an account.  The FCC may want to start taking note of this new class of VRS calling apps in a modern world where video calls are becoming more prevalent among hearing people, and where people and businesses may not always have a phone number to call.

Since the ZVRS app is still a prototype that needs to be fully fleshed out while the appropriate backend systems are developed to support many calls, it will be a while before it’s officially released to the public.  This is another example of how ZVRS stands out in the VRS industry as a company that isn’t content to rest on its laurels, and of how we are constantly pushing the envelope on better communication tools for the deaf.  This is one of the reasons I’m proud to be part of this amazing company!

Posted in Networking, Web/Tech | 6 Comments

Tropical storms: How the deaf can monitor tornado warnings

[caution to the reader: this post has extreme snarkiness]

Unwelcome visitor: TS Debby

Yesterday was an eye-opening experience for me. Central Florida got slammed by Tropical Storm Debby, which wreaked havoc in certain areas.  Debby unleashed 13 inches of rain on Pinellas County in 96 hours and caused flooding in the Tampa Bay area.  The 1945 record for the wettest June in Tampa was broken.

[Water polo, anyone? The local baseball stadium in the aftermath of Debby]

Of intense interest were the tornados that randomly spawned throughout the area.  Unlike hurricanes, which come with plenty of advance notice thanks to modern weather monitoring technologies, tornados can pop up out of nowhere. Often there are only a few short minutes before one comes roaring into your vicinity and sends you off to the Land of Oz.

What Bay News 9, the local TV station, had to offer (or rather, didn’t offer):

First, I did what everyone else in Tampa Bay would do: turn on the TV to the local news for the live weather updates.  That should be easy enough, right?

[Hmm guys? Something’s missing here…]

I switched to the Bay News 9 station, which was continually providing local updates about Debby… and, to my surprise, without closed captioning!  Apparently, the people at Bay News 9 think that everyone in Tampa Bay can hear (Hallelujah!) and that there’s no need to show any captions on this potentially lifesaving broadcast.

However, they may be surprised to learn that, according to The Florida Association of the Deaf, “Tampa Bay, Florida has one of the largest Deaf communities in the USA. The area is home to 348,000 Deaf people, the largest such community in Florida.”  BN9 staff, let that sink in for a moment.  Right now, it’s like you’re holding up a big fat middle finger to us all, telling us that our lives aren’t worth it.

Next up was overcoming this apparent inaccessibility wrought by BN9 and figuring out another way to get timely information about the volatile weather situation.

Instant email alerts, NOAA’s NWS vs WeatherUSA

My highest priority was to get tornado alerts sent to my smartphone.  Having a mobile device during emergencies is important, since the phone will still get updates after the power goes out at home, provided the cell towers are still functional.

I was most interested in alerts about tornado warnings/watches in Pinellas County, which is due west of Tampa proper.  I wasn’t interested in flood alerts since I don’t live in an area where flooding would be a concern.

The big name in extreme weather alerts is NOAA’s National Weather Service.  Their website is as ugly as they come, but hey, it’s a federal agency website; they’re all ugly.  It’s the email alerts I’m after, not a dissection of the boneheaded decisions behind the website’s design.  After finding the link to sign up for weather alerts, here’s what the alert subscription page looks like:

[Really, NOAA? How the heck am I supposed to find tornado alerts in central west Florida? This page could make a grown man cry…not that I did.]

So, NOAA has completely dropped the ball on this one.  Regular citizens visiting NOAA’s website have very little hope of finding the alerts they are truly interested in.

Moving on ahead….

My Google ninja skills led me to the weather alerts over at weatherUSA.

weatherUSA Alerts is a free, real-time weather alert service which sends out weather warnings, watches, tropical alerts, and other advisories as soon as they are issued by federal agencies including the National Weather Service.

Hmmmm. WeatherUSA alerts are based on the same data from NOAA’s NWS. See the irony there? It doesn’t seem like the NOAA folks do yet.  The WeatherUSA folks have more common sense, as seen below:

[Whoo, there are even local email alerts for Pinellas County!]

With the most critical task out of the way, I can start focusing on getting more information about the tropical storm than what I’m able to glean from the TV (and I would like to reiterate the point here: not much due to the lack of closed captions on BN9).

Visualizing the alerts: Google Public Alerts

Getting the alerts via email is one thing but it’s entirely different when you are able to visually see them. Enter Google public alerts, where the map has visual overlays of the alerts!

Google public alerts about tornado warnings in Pinellas:

[The red color highlights make it easy to see where alerts are active.]

“Tornado watch” alerts = weather conditions are favorable for a tornado. “Tornado warning” alerts = one or more tornados have already formed and are currently moving through the area. This difference is not well understood by many people, as we’ll see soon enough. The bottom line: if your home is in the darker red area, you need to get to safety immediately and wait it out.

Being my own weatherman: Live Weather Radar and WeatherBug

While it’s all good to see current conditions, I also like to make my own predictions as to how weather could worsen or improve in my particular area during the next few hours.  To this end, I use two Google Chrome extensions:

Live Weather Radar

The Live Weather Radar Google extension is a quick way to see an animation of how weather has been moving through your area.  Naturally, you want to see whether the red or yellow blobs could approach your home area. This is the bird’s-eye view.


WeatherBug

The WeatherBug Google extension allows you to zoom in close to your home area to better judge how far away the storm is and whether you should be worried.  As you can see above, I’m having a close call!

TweetDeck: Let the tweets flow!

The final piece of the puzzle in monitoring the tropical storm for tornados is getting live updates from other people commenting on the ongoing storm.

Twitter is known to be heavily used for sending out information during times of crisis.  A tropical storm is no different.  TweetDeck provides a way to show a constant live stream of tweets without the need to press a refresh button.

Three columns for maximum impact

I split TweetDeck into three columns:

A “Pinellas County” column (a custom list) consisting of the Twitter accounts of local news organizations and other local government entities.  This is the most trustworthy column for me but isn’t updated as frequently during the storm.

A search column showing tweets containing the words “tornado warning,” which in theory should show tweets about impending tornados. This column updates a bit more frequently, but there are many tweets from people who clearly don’t understand the difference between a “tornado watch” and a “tornado warning” and often use these terms incorrectly.

The third column searches for tweets containing the generic term “tornado” to show even more tweets from a wider audience.  This is a very active column with many tweets zipping down the screen.  It’s a great way to get a pulse on what people are saying about tornados, but the challenge is manually filtering out those that aren’t from your area.  (There may be a way to do this automatically, but I haven’t had time to look into it.)

So there you have it: how deaf people can do an end run around Bay News 9’s lack of TV captioning and turn to the Internet to get everything they need to stay safe!

Posted in Web/Tech | 3 Comments

Until next time, my friend

For a long time, I’ve been dreading writing this blog post about one of the most amazing persons I’ve ever had the privilege of knowing, Daniel Stephen Foster.  For years, he had been suffering from a host of health issues, and it all caught up with him early Sunday morning, June 10, 2012. To paraphrase Shakespeare’s Hamlet: Dan had borne “the whips and scorns of time” and has gone to “the undiscover’d country from whose bourn no traveller returns.”

It has taken a few days to collect myself, and with great difficulty I have accepted the grand finality that Dan is no longer around. No more can I send off an email to him and expect a response within a few hours. One of the reasons I dreaded writing this blog post is this: how does one sit down to distill the essence of a bond spanning two decades and put it into words that do it sufficient justice?

I first met Dan Foster when we entered NTID/RIT as green freshmen in the fall of 1992. We quickly discovered that we loved all things that pushed bits and bytes. Our friendship took off right away.

During student orientation, when I was introduced to RIT’s DEC VAX/VMS cluster, it took me a while to warm up to the platform, while Dan immediately took to the system like a fish to water.   At that time, I had prided myself on already having C programming experience as a high school senior working at the University of Washington, so I thought I had a head start on the other freshmen at RIT.  After meeting Dan, I quickly realized he was a rare breed, once he told me how he had sneaked into networked computer labs at Gallaudet while a high school student at MSSD.

Clearly, his technical superiority and prowess far outstripped mine.

[Dan Foster and I during a moment of levity at RIT after several packs of beer]

I told myself, “Damn, this guy really knows what he’s doing!  I must find a way to room with him!  He’ll teach me 10x what I’ll learn in the classroom.”  Eventually, we were able to move into a large dorm room that could fit three people.  So began my two years as Dan’s college roommate.  One of my fondest memories with Dan was when our dorm room was supercharged with palpable excitement about a new release of a then little-known OS called Linux. After installing Linux from 24 floppy disks and performing a few sleight-of-hand tricks to get the X Window System running (and managing to do it without blowing out the CRT monitor!), we were taken aback when it all finally came together on the screen: several X terminal consoles with flashing prompts. We celebrated with a few beers that day.

Shortly afterwards, I vividly recall him showing me this interesting software he had compiled, NCSA Mosaic (the first graphical web browser), for something mysteriously called the “World Wide Web.”  We visited about 20 websites, which made up all of the WWW at the time. How much has changed since then!

During our second year as roommates, I watched him begin his ascendancy to a system administrator of the highest order.  He never finished his RIT degree; instead, he started working at a local dial-up Internet provider and ultimately ended up at an international networking company, Global Crossing (now Level 3).  The breadth and depth of his knowledge about networking and system administration seemed boundless. If he wasn’t sure of something, he would research the hell out of it and come back with a great answer or solution.  Like many others, I made him my go-to guy when I was utterly stuck on getting something to work on a server. He would guide me through configuration changes and then explain in depth why the changes were needed.

By the time I graduated from RIT with a BS in Information Technology, Dan had already worked on high-end systems worth millions of dollars, similar to IBM’s famous Deep Blue, the first supercomputer to beat world chess champion Garry Kasparov!  I will always remember him taking me on a very special visit for a rare glimpse of the Internet Exchange Point in New York City. For the layperson: the location is one of the major crossroads of the Internet superhighway, through which billions of emails and video streams pass from one place to another.  He also gave me my very first networking installation job for a client of his, in which I patched Ethernet ports for new offices.

It is safe to say that I would not be where I am today without Dan Foster.

At first glance, he seemed a completely unassuming human being, but once you got to know him, you began to see the sheer brilliance hidden inside his soul and mind.  If you’ve been lucky enough to get an email from him, then you know the excruciating detail he could go into when he was passionate about something.  For example, we both shared a love of aviation, and Dan himself was one of only 100 deaf licensed pilots in the USA.  His emails about the minute details of piloting planes were legendary: you had better strap yourself in for an hour just for the fun of reading one.

One of his gifts was the ability to masterfully come up with great analogies to simplify and explain complicated concepts.  His ability to help others better understand things didn’t go unnoticed.

In fact, the authors of the UNIX and Linux System Administration Handbook (4th Edition) asked Dan to contribute material about IBM’s OS, AIX, for the book. This was a huge honor, considering it’s one of the standard books used in modern systems administration.

Here’s a snapshot of what he wrote in the book:

Despite his mastery at bending any system to his will, I once observed in quiet amazement how gently he treated a complete Linux newbie who was trying to transfer a file using SCP, one of the most basic operations.  This seasoned veteran, who operated at the highest level of network administration, could have so easily brushed the newbie off with a curt RTFM (Read The F*#!&%$ Manual), but instead took the time to guide him through the steps to transfer the file.

That was the quintessential Dan Foster: always patient and willing to help others, no matter what they need from him.

One thing was certain: he was a creature of the online world, where there are no barriers to communication.  As his health declined, it became more difficult to see him in person. I will be forever grateful that our emails and chats kept flowing until the day he could no longer physically handle a laptop.

We had a special understanding and bond that came not just from our similar fields of interest, but also from the fact that we were both deaf, unable to clearly speak a single word, our communication relying chiefly on written notes and sign language interpreters.  We faced the same challenges as we found our own way through life and strove to excel in our chosen fields. That shared experience brought us closer together.

All I can say now to Dan: what you accomplished in the short span of time you had here on Earth is more than most will do in their entire lives.  Despite your long battle with your health, you never gave up when it would have been so easy to say, “Screw all this!” Your example has shown us the true meaning of persistence and fortitude.

Godspeed, Dan! Go and live it up in the Cosmos but avoid the black holes! I hear they are real drainers.

My candle burns at both ends
It will not last the night
But ah, my foes, and oh, my friends
It gives a lovely light!

-Edna St. Vincent Millay

Dan is now on the Smithsonian National Aviation and Space Exploration Wall of Honor

Posted in Uncategorized | 14 Comments

How to access the text console of a virtual KVM guest from within virsh

(done on clean installation of Ubuntu 11.10/KVM)

After getting a KVM host up and running on Ubuntu, often the graphical VM management application, virt-manager, is installed.  This useful utility makes it a snap to create new virtual machines and gives ready access to the virtual X desktops via graphic consoles.

However, when it’s time to make serious use of KVM virtualization in the real world, a workhorse KVM host is more likely to run a number of Linux guest servers with text consoles instead of X graphic consoles.  This cuts down on the total overhead on the host and allows more guest virtual machines to be jammed into a KVM host.

On a CLI-only KVM host, the text-based utility virsh is how one can manipulate the guests on a host.

It only makes sense that virsh would also allow access to the text console of a virtual machine, right… right?  Let’s try switching to the console of a guest VM:

virsh # list
Id Name                 State
2 ubu1                 running
virsh # console ubu1
Connected to domain ubu1
Escape character is ^]

Argh, no further output!  No welcome message or command prompt is shown for the ubu1 VM.

It’s annoying at first, but the lack of access to guest text consoles can be resolved by taking these additional steps in the guest VMs:

The guest VM needs its serial console to be populated with a login prompt, which will present itself upon a successful serial connection from virsh.

In the guest Ubuntu VM:

Save some time/work by copying one of the tty configuration files:

sudo cp /etc/init/tty1.conf /etc/init/ttyS0.conf

Edit ttyS0.conf and change the exec line (which referenced tty1) to:

exec /sbin/getty -8 115200 ttyS0 xterm
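For reference, the edited ttyS0.conf might end up looking like this (a sketch: the start/stop/respawn stanzas are assumed to carry over from the copied tty1.conf and may differ slightly on your system):

```text
# ttyS0 - getty on the virtual serial console
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -8 115200 ttyS0 xterm
```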

After restarting the guest VM, it’s now possible to use virsh to get a console on the guest.
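With the getty in place, the earlier virsh attempt should now yield a login prompt, roughly like this (hostname and banner are illustrative):

```text
virsh # console ubu1
Connected to domain ubu1
Escape character is ^]

Ubuntu 11.10 ubu1 ttyS0

ubu1 login:
```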

As long as we’re getting our hands dirty here, let’s go whole hog and get the guest console to show practically everything!  How about seeing all the kernel messages during a guest boot?

Tell grub2 to output the kernel messages to the serial console.

sudo vi /etc/default/grub
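The exact lines to change aren’t shown in the post; a typical edit for routing kernel output to the serial console would be something like the following (an assumption based on stock grub2 serial-console setup, not the author’s exact lines):

```text
# In /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT=""                # drop "quiet splash" so kernel messages appear
GRUB_CMDLINE_LINUX="console=ttyS0,115200n8"  # send kernel output to the serial console
```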


sudo update-grub2

After restarting the guest, all the kernel messages will zip by until the command prompt is shown.

Posted in Linux | 4 Comments

Inception problem: keymap for ‘a’ key broken inside twice-virtualized Ubuntu KVM guest

Running an Ubuntu guest inside a virtual KVM host, itself running inside VMware Fusion 4 for OS X?

You may have noticed that when you ssh -X into the virtual KVM host and then attach to a guest’s graphic console (VNC) via the graphical utility virt-manager, the ‘a’ key doesn’t work!

Alas, a broken keymap! It’s as if Leonardo DiCaprio were running around the Inception hotel missing his right eye.

How to fix it the easy way?

$ virsh
Welcome to virsh, the virtualization interactive terminal.
Type:  'help' for help with commands
       'quit' to quit
virsh # list
 Id Name                 State
  2 ubu1                 running
virsh # edit 2

Make sure the graphics line has keymap='en-us':

<graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>

Posted in Linux | Leave a comment

Finding IP address of a device on a network.

I get asked this question once in a while:

“When I plug in a new device to the network, fresh out of the box and it defaults to getting an IP address via DHCP: how the heck do I find its IP address once it’s powered on?”

In other words, it’s the networking version of “Where’s Waldo?”

The first thing to do is to find the unique MAC address identifying the device, which is often printed on a label on the rear or bottom of the device.  A MAC address is always 12 hexadecimal characters [0-9, A-F] long. Example: 0C:4F:22:77:8C:90.

On a small home network, it’s pretty easy to find the device’s IP address.  Simply log in to your home router; there is usually a list of devices currently connected to it.  Find the entry matching the MAC address and you’ll have its IP address.

However, on a larger network where you don’t have admin access to the router and don’t want to bother the busy network administrator, finding which IP address was assigned to your device can be a challenge. A /24 network has 254 usable IP addresses; a /23 network has 510.

For this example, a managed switch has just been powered up on the network, and it has an HTTP web admin tool on port 80.  You could attempt to find it by visiting each IP address in your web browser, but let’s face it: that would be cumbersome, and it could take a very long time before you finally hit upon the managed switch.  Worse, if you power the managed switch off and back on, it could get a different IP address, and you would have to start searching for it all over again!

The easy & lazy (expert) method:

Fortunately, there is a network tool called nmap (or zenmap if you want a nice GUI).  I prefer to use nmap in a native Linux environment, so if I’m on an OS X or Windows machine, I fire up an Ubuntu virtual machine that has been bridged to the network.  There are nmap versions for Windows and OS X if you don’t have access to Ubuntu; I haven’t personally used them, so I can’t attest to how well they work.

The managed switch has port 80 open for its HTTP web admin, so it’s possible to take advantage of that fact to narrow down the search: have nmap skip all IP addresses that have nothing running on port 80 and list only those with port 80 open.


$ nmap -p 80 --open > results.txt

nmap will connect to port 80 on every usable IP address in the scanned range, then dump the results into a file called results.txt.

If you look in results.txt, you’ll find entries such as this one:

Nmap scan report for
Host is up (0.0072s latency).
80/tcp open  http

The results show hosts with something running on port 80. However, note that there’s still no MAC address listed for each host. The list has to display the MAC addresses so that you can find the IP address that corresponds with your device’s MAC address.

Run the same nmap command but with sudo privileges:

$ sudo nmap -p 80 --open > results.txt


Nmap scan report for
Host is up (0.00085s latency).
80/tcp open  http
MAC Address: 00:00:74:E8:D1:42 (Ricoh Company)

MAC addresses now show up in results.txt with the accompanying IP address. Search for your device’s MAC address and you’ll see its current IP address on the network.
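Rather than eyeballing the file, grep can pull out the matching host for you. A quick sketch (the IP 10.0.0.23 and the sample results.txt below are made-up illustrations, not output from a real scan):

```shell
# Build a sample results.txt shaped like the sudo nmap run produces
# (illustrative data: 10.0.0.23 and the Ricoh MAC are invented).
cat > results.txt <<'EOF'
Nmap scan report for 10.0.0.23
Host is up (0.00085s latency).
PORT   STATE SERVICE
80/tcp open  http
MAC Address: 00:00:74:E8:D1:42 (Ricoh Company)
EOF

# Print the "scan report" line that precedes the MAC you are hunting for.
grep -i -B 4 '00:00:74:E8:D1:42' results.txt | grep 'scan report'
```

The -B 4 flag prints the four lines above the matching MAC line, which includes the "Nmap scan report for …" line holding the IP address.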

Note: the machine that you use to run the nmap search must be on the *same* network as the device in order for you to find its MAC address.

Posted in Networking | Leave a comment

Adding custom launchers to Ubuntu Unity launcher bar

For the Ubuntu (11.04) desktop, I like to have a custom launcher that starts ‘gksudo /usr/bin/wireshark’. At this time, the Unity launcher bar is aggravating to use (until they get their act together): there’s no support for adding custom launchers to the bar.

To get back the old method of adding custom launchers:

sudo apt-get install gnome-panel
gnome-desktop-item-edit --create-new wshark.desktop

Fill in the necessary fields to start wireshark and choose a program icon, if desired.
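The resulting wshark.desktop might look something like this (a sketch: the Name and Icon values are illustrative, only the Exec line comes from the post):

```text
[Desktop Entry]
Version=1.0
Type=Application
Name=Wireshark (root)
Exec=gksudo /usr/bin/wireshark
Icon=wireshark
Terminal=false
```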

Open the folder containing wshark.desktop and double-click to run it.

After wireshark starts, right-click on its icon in the launcher bar and select ‘Keep in launcher’.

Really, Ubuntu dev team??

Posted in Linux | 1 Comment

Joining ZVRS

I’m thrilled to announce that I will be joining ZVRS, a company providing video relay services.  I’m looking forward to working at a company that aims to improve the quality of life for deaf people by enabling them to make calls 24/7/365.  My primary duties will include maintaining and expanding the network infrastructure as well as systems testing.

It will be bittersweet to leave Gallaudet University, where I’ve worked for the last two years. There’s no other environment in the world like Gallaudet University.  I can say this: the university is one heck of a serendipitous place. I’ve lost count of how many times I ran into old faces who happened to be visiting the campus. My work with Gallaudet Technology Services has been rewarding and enriching. There were many valuable lessons learnt while working on a campus network, spanning many buildings and supporting thousands of users. I enjoyed assisting the datacenter operations team with their modernization efforts to introduce VMware clusters of blade servers into the datacenter.


Simply put: ZVRS’s reputation is among the best in the industry.

The VRS industry has recently undergone a major upheaval resulting from a federal investigation into VRS fraud and abuse by several individuals.  The FCC is now enforcing more stringent rules and regulations in order to safeguard VRS from further abuse.  ZVRS has supported FCC’s fraud prevention efforts and gone as far as releasing their code of ethics.  These visible actions by ZVRS along with a personal meeting with the CEO, Sean Belanger, further cemented my confidence in ZVRS.  I firmly believe I can count on the company to stay within guidelines and not attempt to circumvent them.

Another reason: ZVRS has an amazing portfolio of devices and apps deaf people can use to make and receive phone calls.

The choices include several standalone videophones and apps for different platforms such as PC, Mac,  iPhone and Android smartphones/tablets.  With this wide variety of supported devices/apps, it is clear that ZVRS is willing to invest and ensure that users are able to choose a device that best suits their needs. Then imagine the technical team that is capable of pulling this off: now that’s a team I want to be a part of!

Posted in Uncategorized | 10 Comments

Fixing Windows clock time issue when booting.

I’ve been putting up with a slight annoyance when I dual-boot back into Windows from Ubuntu: The clock in Windows is always offset by a few hours.

The fix:

Set a registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\RealTimeIsUniversal to 1

This tells the Windows instance to assume the hardware clock is in UTC when booting up, so the time is always correct.
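For convenience, here is the same change as an importable .reg file (a sketch; the DWORD type matches the key described above, but double-check it against your Windows version before importing):

```text
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```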

Posted in Linux | Leave a comment

Turn off all colors in vim

The text editor vim usually ships with colorized syntax and search highlighting enabled by default, which I find too distracting.

To kill it off, put the following inside the file .vimrc in your home directory:

syntax off
set nohlsearch
set t_Co=0

Posted in Programming | Leave a comment

Fedora 15 (64-bit) desktop with Chrome, Flash, Adobe Air, VLC media player

I have hunted down bits of information from the far-flung corners of the Internet and consolidated them into a single post: improving upon the default Fedora 15 desktop and running popular user applications such as Google Chrome with the Flash plugin, VLC media player, and Adobe AIR apps (TweetDeck).

Hopefully, with the step-by-step instructions below, Linux newcomers will encounter less frustration setting up a desktop environment complete with several essential applications that greatly enhance the experience.

Run the default installation from Fedora 15 ISO/CD-ROM

After the initial install/reboot, inside a terminal console, immediately give your user account sudo privileges:

su -c 'usermod -aG wheel [your_username]'

Bring your system up to date:

sudo yum update

Reduce potential headache later on by getting these packages installed:

sudo yum install make gcc kernel-headers kernel-devel

Install Firefox and Google Chrome

sudo yum install firefox

Install 64-bit Google Chrome RPM from Google

Installing Adobe Flash: choosing between 32-bit and 64-bit.

Adobe offers both 32-bit and 64-bit versions of Flash for Linux.  However, it is highly recommended that you install the 32-bit Flash with the plugin wrapper instead of the native 64-bit version.  Adobe puts a high priority on keeping the 32-bit version up to date with security fixes, and releases bugfixes/new features on a regular basis.  The 32-bit version is included in Adobe’s yum repo, so Flash updates are pulled into your machine automatically when they are ready.

In contrast, the 64-bit Linux version of Flash is little more than a blip on Adobe’s radar, so it gets infrequent updates; additionally, you’re personally responsible for checking Adobe’s site for updates to install manually.  Neglecting to do this on a regular basis may leave your system running an outdated Flash with known security holes.

(The bad method) Installing native 64-bit Flash. Again, a warning: do not forget to manually check for and install updates on a regular basis!!

Download and untar/ungzip the latest 64-bit Flash from Adobe, which unpacks to a single libflashplayer.so. Then copy it into place:

sudo mkdir -p /opt/google/chrome/plugins
sudo cp libflashplayer.so /opt/google/chrome/plugins/
sudo cp libflashplayer.so /usr/lib64/mozilla/plugins/

Both Firefox and Chrome should now have a working 64-bit Flash plugin.

(The safer method) Installing 32-bit Flash with the plugin wrapper, including automatic updates. On Adobe’s Flash download page, select “YUM for Linux” and download/install the repo RPM, then:

sudo yum update  (should see adobe-linux-i386 repo updating)
su -c 'yum install nspluginwrapper.{x86_64,i686} alsa-plugins-pulseaudio.i686 --disablerepo=adobe-linux-i386'
su -c 'yum install flash-plugin'

Run Mozilla Firefox once so that it creates /usr/lib64/mozilla/plugins-wrapped/

Firefox should now work with the 32-bit Flash plugin; play a YouTube video clip to verify.

For Google Chrome:

sudo mkdir -p /opt/google/chrome/plugins
sudo ln -s /usr/lib64/mozilla/plugins-wrapped/* /opt/google/chrome/plugins/

Install Adobe Reader (English version)

sudo yum install AdobeReader_enu

Install Adobe Air

First, install the various 32-bit packages that the 32-bit adobeair package needs on a 64-bit machine.

sudo yum install gtk2-devel.i686 nss.i686 libxml2-devel.i686 libxslt.i686 \
gnome-keyring.i686 rpm-devel.i686 rpm-build rpm-build-libs.i686 \
libgnome-keyring.i686 nss-devel.i686
sudo yum install adobeair

Now you can install Adobe AIR apps such as TweetDeck.

Gain access to additional software for Fedora:

Install RPMfusion repos for Fedora 15

su -c 'yum localinstall --nogpgcheck <rpmfusion-free-release RPM URL> <rpmfusion-nonfree-release RPM URL>'
sudo yum update

Install vlc from RPMfusion

sudo yum install vlc
Posted in Linux | 2 Comments