Cicada for MetaWatch, a Preview

Early in the course of developing some of my MetaWatch UI experiments, I started running into problems with coordinating between the multiple watch-compatible apps on my Android phone. My apps were getting button press events through the broadcast intents sent out by an early version of the MetaWatch Manager app. The problem with using broadcast intents for this is that they’re, well, broadcast—all my watch apps would receive the button press event at the same time and often react simultaneously. I could make each app only react to a dedicated button, but it got a bit tedious coordinating all that.

Out of this frustration, and from a desire to reduce the boilerplate involved in writing new apps for the watch, I began to develop a framework that I called Cicada.

Cicada provides a couple of key things for hackers who want to explore app ideas. First, it provides a menu system to pick between watch-compatible apps installed on the phone. Only the app that’s currently on the watch screen gets the button press events. Second, it has a widget mode that lets you run three independent apps at the same time, with each getting one third of the watch screen, so you can mix and match pieces for your watch display.

Here’s a little walkthrough of the Cicada interface on the phone side:

Cicada mostly runs in the background on your Android phone, but it also has a very basic control UI. It’s nothing I’m proud of, but it’s done the job while I’ve been focused on the APIs and plumbing. This is what it looks like when it’s not running.

When Cicada is active, you see a copy of what’s on the watch screen, which is handy for taking screenshots. Here you can see the app list. Pressing the top and bottom buttons on the right side of the watch moves the selection up and down, respectively, and pressing the middle-right button launches the selected app. Cicada automatically detects when you install or uninstall watch-compatible apps on your phone and updates this app list instantly.
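The menu navigation described above can be sketched as a little state machine. To be clear, the button names and launch convention below are illustrative guesses, not Cicada’s actual API; this just shows the shape of the selection logic:

```java
import java.util.List;

// Sketch of an on-watch app menu: two buttons move the cursor, one launches.
// All names here are made up for illustration, not taken from Cicada itself.
class AppMenu {
    private final List<String> apps;
    private int selected = 0;

    AppMenu(List<String> apps) { this.apps = apps; }

    // returns the name of the app to launch, or null if we only moved the cursor
    String onButtonPress(String button) {
        switch (button) {
            case "TOP_RIGHT":    selected = Math.max(0, selected - 1); return null;
            case "BOTTOM_RIGHT": selected = Math.min(apps.size() - 1, selected + 1); return null;
            case "MIDDLE_RIGHT": return apps.get(selected);
            default:             return null;
        }
    }
}
```

Because only the menu (or the launched app) sees the button events, none of the broadcast-intent collisions described earlier can occur.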

Now I’ve launched the “Widget Screen” app that’s built into Cicada. This app has an associated settings UI, as you can see from the button that appeared. (Pressing the upper-left watch button would bring me back to the app list.)

In any case, the widget screen is showing several different apps at once. How did I set that up?

Tapping that settings button on the phone screen brings up the (similarly unvarnished) widget settings view, where you can see the list of apps currently shown on the widget screen: an app showing the current status of my London tube line, a basic clock, and an app that shows my next appointments from Google Calendar.

Here I’ve tapped one of the widget slots, allowing you to see the list of apps that have declared that they can run as widgets.

I’ve made a few changes in the widget configuration. You can’t see it here, but as I’ve made each choice, the watch display has updated itself so I can see the new look.

Here’s what the new set of widgets looks like on the watch. I’ve moved the clock to the top, and swapped out the bottom widget for an app that shows bus arrival estimates in San Francisco. Now, if I press the lower-right watch button next to the bus times widget…

…it launches the bus times app in full screen mode, showing more detail. (As before, pressing the upper-left watch button would bring me back to the widget screen.) Note that this is the same thing I’d get if I’d chosen “Next Buses” from the initial app menu.

Anyway, that’s a quick look at Cicada. It’s not quite polished enough to put up on the Android Market yet, but if you’re a developer interested in building Android apps for putting glanceable information on the digital MetaWatch, it’s definitely far enough along to be useful to you. You can find the source code for the framework and many sample apps in the Cicada project on GitHub.

To get an idea of what’s involved in writing a Cicada app, have a look at the source for the Digital Clock sample app (less than 100 lines of code, including the license header). You just subclass the CicadaApp service class, implement onResume(), onPause(), onButtonPress(), and draw your watch screen in onDraw().
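To give a feel for the shape of such an app, here’s a compilable sketch modeled on that description. Only the class and callback names come from the text above; the stand-in base class and all other details are invented so the example stands alone, and the real framework’s signatures will differ:

```java
// Stand-in for Cicada's service class, so this sketch compiles on its own.
// In the real framework, invalidate() would push the rendered bitmap to the
// watch over bluetooth.
abstract class CicadaApp {
    protected void invalidate() { onDraw(new StringBuilder()); }

    protected abstract void onResume();
    protected abstract void onPause();
    protected abstract void onButtonPress(int button);
    protected abstract void onDraw(StringBuilder canvas);
}

class SketchClock extends CicadaApp {
    boolean onScreen = false;
    String lastDrawn = "";

    @Override protected void onResume() {
        onScreen = true;   // our app now owns the watch screen
        invalidate();      // draw immediately
    }

    @Override protected void onPause() {
        onScreen = false;  // another app took over; stop updating
    }

    @Override protected void onButtonPress(int button) {
        if (onScreen) invalidate();  // e.g. redraw on any button press
    }

    @Override protected void onDraw(StringBuilder canvas) {
        // a real app would render the time into a 96x96 bitmap here;
        // we fake it with text so the sketch stays self-contained
        canvas.append("12:34");
        lastDrawn = canvas.toString();
    }
}
```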

I’ll write more about Cicada here soon, but if you have any questions in the meantime, post them in the comments below or in the cicada-dev Google group.

MetaWatch Hacks & Resources

People are starting to hack together some interesting things for the MetaWatch bluetooth watch platform, but the official forums are so difficult to navigate that it’s hard to keep track of what’s going on. This post will serve to pull together some of the things I’ve come across, and I’ll update it until I get bored of curating it.


There’s now a MetaWatch Wiki; updated versions of the content below can be found there, particularly in the Hacks and Libraries & Frameworks pages.

Developer Resources

Libraries & Frameworks


Assorted app prototypes that I put together

Minecraft Clock, also done by me

Controlling room lights via OpenAMI by Kai Aras

Remote control by Kai Aras

Qt Animation by javispedro

Showing Google Maps for your current location by Zero Cho

Desktop Mac App notifications via Growl by Kai Aras

Jailbroken iOS support by Kai Aras

Album art and song title display by javispedro

Firmware tweaks by Garth Bushell to add a 24-hour toggle to the embedded menu and to add the year to the idle display. (source code)

Let me know what else you find (though I reserve the right to not post everything that I come across).

Crafting a bluetooth Minecraft Watch with MetaWatch

A little bit of Friday silliness for you, wherein I manage to combine two recent pastimes, hacking the MetaWatch and playing Minecraft, by pulling a piece of game UI out into the real world.

Lately I’ve been experimenting with the possibilities of Bluetooth watches. While I was fooling with Travis Goodspeed’s PyMetaWatch library for talking to the MetaWatch from Python code on your PC, I remembered a fun hack that my friend Michael Dales had done to control lights in the real world from actions and switches in the virtual world of Minecraft.

Minecraft is an engrossingly open-ended game that involves exploring caves for minerals, then using those materials to build more tools and buildings. To complicate matters, there’s a day/night cycle, and night time brings zombies, skeletal archers, and other nasty ssssurprises. The upshot is that it’s a lot safer to travel during the day.

If you’re mining deep in a cave, though, how do you know when it’s safe to emerge from your spelunking to haul your loot home? Notch, the game’s creator, eventually added the ability to craft an astronomical clock in the game to tell you what time of day it was.

So, how did I get it on my wrist? Here’s the crafting recipe for this hack: I wrote a little mod for the Minecraft server that spit out the virtual world’s time of day, using V10lator’s lib24time library and the gratifyingly straightforward Bukkit Minecraft modding system. From there, I have a Python script that uses PIL and the assets from the game to render an approximation of the Minecraft watch at the given time of day.
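The conversion from the game’s tick counter to a human-readable clock (the job lib24time handled for me) boils down to a little arithmetic: a Minecraft day is 24,000 ticks long, and tick 0 corresponds to 6:00 AM in-game. Here’s a rough sketch of that conversion; the class and method names are mine, not the library’s:

```java
// Convert a Minecraft world time (in ticks) to an in-game wall-clock string.
// 24,000 ticks per day, 1,000 ticks per in-game hour, tick 0 == 6:00 AM.
class McTime {
    static String clockTime(long worldTicks) {
        long t = ((worldTicks % 24000) + 24000) % 24000; // normalize, even if negative
        long hours = (t / 1000 + 6) % 24;
        long minutes = (t % 1000) * 60 / 1000;
        return String.format("%02d:%02d", hours, minutes);
    }
}
```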

Finally, I’m using my fork of PyMetaWatch to send the image from my Mac to the watch via bluetooth.

It works, but it’s very laggy right now. (Eagle-eyed Minecraft fans might have noticed that my screenshot above is more evocative than accurate.) The slowdown seems to be somewhere in the PyMetaWatch/lightblue combo, which is taking an agonizingly long time to send a bitmap to the watch. Since I can generally send a bitmap from my Android phone in less than a second, I suspect that either lightblue is configuring the bluetooth RFCOMM link for a ridiculously low speed, or there’s some overhead in the PyObjC bridge that it relies on. Let me know if you have any ideas.

Update (Sept. 17): Today I tried a different tack, involving an Android app loading the clock image from the Mac over wifi and sending it to the watch via MetaWatchManager, and it worked much better. Here are a couple more pictures showing the watch time more or less synced up with the time on the in-game clock:

MetaWatch Experiments

As the MetaWatch bluetooth watches are getting closer to shipping, I figure it’s a good time to talk about some of the UI experiments I’ve been doing with them, to give you an idea of what they might be useful for.

MetaWatch is a line of hacker-friendly wristwatches that can be paired with smartphones to enable new kinds of lightweight interactions. In the same way that glancing at a wristwatch is faster and less disruptive than pulling out a pocket watch to check the time, you can imagine how glancing at a connected watch could be more convenient and sociable than pulling your phone out of a pocket or handbag to see cloud-based information.

After my previous experiments in showing live bus times on an older bluetooth watch, the guys at Fossil got in touch with me, and over the past couple of years I’ve served as an unpaid advisor to the MetaWatch project, in the hopes of helping to make the end products as developer-friendly as possible.1

There are a lot of things to like about the MetaWatch devices:

  1. The screen of the digital version is always on, so the information on it is always a discreet glance away. You don’t need to push or swipe anything to bring it to life.
  2. The battery life is reasonable, so you can wear these things for the better part of a week without having to charge them.
  3. Since they’re designed by experienced watchmakers at Fossil, the MetaWatch devices look more like fashion watches than cookie-sized computers strapped to your wrist.

There are definitely some trade-offs, though, compared to other devices with more horsepower and flashier displays:

  1. As an Android app author, you’re basically treating the MetaWatch as a dumb terminal. You send pixels to the screen, and you get button presses back. This gives you a lot of control, but the downside is that watch-based UIs are a lot less responsive than you’d like.2
  2. The low-resolution monochrome display isn’t as sexy as the color touchscreens of devices like the iPod Nano or the WIMM Platform. They’ve made the most of it by hiring Susan Kare, designer of the original monochrome Mac graphics, to do the default imagery. You can’t reuse existing designs—for best results, any UI is going to have to be custom-designed for this thing.

With that out of the way, here are a few prototypes that I’ve made over the past year or so.3 I hope you’ll excuse the rough graphics in places; I mostly wanted to see how these interfaces would feel if they were always easily accessible on my wrist.

Imagine this: you’re at the airport, hands full of luggage, and you just want to know where and when you need to be at your gate. Wouldn’t it be handy if you could glance at your wrist to find out? Matt Webb called this use “personal signage”, which is a nice way of thinking about it—you can get by with a lot less screen real-estate if your devices know exactly which part of the departure board you’re interested in.

How long do you have to work before your next meeting? It’d be great to be able to see that at a glance.

When you’re driving home, the time that’s really important to you is when you’ll get there. Assuming your phone knew your commute home, it could check current traffic conditions and show you the time you’d get home if you were to leave now.

Since I take the London Underground home instead, what I want to know is how the trains on my line are running.

Of course, the watch has several buttons that can enable you to trigger phone actions. I always text my wife when I’m heading home from work, so I wrote a little app to send a canned message with a single button press as I’m walking out of the building.

If you’re an author who compulsively checks your Amazon rank and social network stats, you could put those things right on your wrist and avoid the distraction of surfing to those sites.

I was curious how my commute time broke down, so I put together a custom stopwatch app. I hit the button when I stepped off the bus or got out of the tube, and the app saved that “lap”. (It also used the time of day to know whether to reverse the order of the steps in the list.)

If you want to keep yourself focused on important things, perhaps a little memento mori reminding you how many days you have left (actuarially speaking) would help?

If that’s too morbid, you could try the Pomodoro Technique of staying focused on work in 25-minute increments, and have the watch vibrate when it’s time to take a break.

Finally, a little bit about the development process: to make it easier to quickly build new MetaWatch apps, I put together a little framework called Cicada. It detects watch-compatible Android apps as they’re installed on the phone and automatically adds them to an on-watch menu system.

A Cicada app can run in full screen mode…

…and optionally, the same app can be run in widget mode alongside other apps. Here, the same realtime bus times app is only using the bottom ⅓ of the screen, with other Android applications providing the tube status widget and the digital clock widget.

I’ll talk more about the Cicada framework in a later post.

  1. Disclosure: the MetaWatch guys have provided me with several prototype watches to experiment with over the past couple years. In my day job, I’m employed by Google UK, but my bluetooth watch experimentation is a personal project done on my own time.
  2. You can also modify the firmware that runs on the watch itself, which would be much more responsive, but I haven’t experimented with this yet.
  3. The watches shown here are early prototypes that have slightly different appearances and branding than the shipping devices.

Super See Original bookmarklet for Google Reader on iPhone

Right now, if you visit Google Reader on the iPhone, you get redirected to the mobile XHTML version of Reader. This works fine for browsing through feed items, but when you click “See original” to go to the source page for an entry, you get the stripped-down Google Web Transcoder mobile version. We know that the iPhone is capable of more than this; what I really wanted to see in this situation was the full-blown original page in a new “tab”.

Fortunately, the iPhone’s Safari supports bookmarklets, so I whipped up a little bookmarklet that makes it easy to see the real original page in Reader. Here’s how to use it:

  1. On your desktop computer, drag the following link onto Safari’s bookmarks toolbar: Super See Original
  2. Sync your iPhone
  3. Once that’s done, go to Google Reader on your iPhone, and navigate to a blog entry.
  4. Hit the bookmark icon at the bottom, then choose Bookmarks Bar > Super See Original
  5. The original page for your blog entry should open in a new “tab”!

I have no doubt that the Reader team will eventually make a more iPhone-optimized version of Reader, but until then, hopefully some of you will find this useful.

Dabble DB: Still sadly short of structured Shangri-La

My latest side project is Headway, a resource for public transit hackers and the agencies who… often aren’t sure what to make of them. For whatever reason, the combination of sharp urban-dwelling creative folk and useful-but-confusing public transit systems has yielded many handy sites dedicated to making it easier to get around.

As I was setting up the blog, I found that I really wanted some kind of outboard brain that could help me keep all the people and sites straight, and hopefully provide a useful reference for others. For expediency’s sake, I just used the handy “one-click” install of MediaWiki that DreamHost provides and started typing away. A few weekends later, the Headway Wiki was starting to become something useful—but I was definitely chafing against MediaWiki’s limitations. I found that I generally wanted to record the same kinds of things about each entry:

  • the name of the site
  • the web address
  • who runs it
  • when it was launched (often with some degree of fuzziness, because even the site’s creator doesn’t really remember)
  • which agencies it serves

…and a few other miscellaneous things. Unfortunately, MediaWiki is really oriented towards prose—and in fact, I found myself using repetitive prose (with a smattering of bulleted lists) to express these things. Even worse, when I wanted to connect an entry about a third-party transit site to an entry on the agency that it was helping out, I had to manually maintain the link on both ends of the connection. That is, I couldn’t just tell the system that Boston Subway Station Map had information about the MBTA and have it automatically display that in the MBTA entry—I had to go and edit the MBTA page by hand.

I did make use of MediaWiki’s (apparently) single structural feature: categories. Categories are basically simple tags that you can add to articles, so that the software can automatically generate an index of articles that all share a particular tag. Still, in the end it was far more work than I wanted to do.

There really should be a better way to put together a structured data collection like this, something in between the limited expressiveness of MediaWiki and the programming involved in putting together a custom database-backed website using Ruby on Rails or what have you. I’m pretty sure that it’s possible, because I spent several years of my life working on tools like that for the MAYA Information Commons project. Sadly, that work still isn’t available to the general public, so it’s not really a contender here. However, there are a few intriguing new possibilities.

Enter Dabble DB. At first blush, it looked like just the thing that I was looking for. It has what’s probably the best available interface for experimenting with different ways of representing interconnected information. It’s pretty straightforward to create an item, add a few fields to it, and make some of those fields two-way links to other items. That’s no small feat, since my former co-workers and I spent the better part of 2004 building something similar (and if Dreaming in Code is to be believed, the folks on the respected Chandler team were at it for even longer, at around the same time). So far, so good. But after an evening trying to make the Headway data work in Dabble DB, I’ve run into a bunch of significant shortcomings.

No boolean fields

Starting with the smallest thing, there’s no straightforward way to represent a simple checkbox for things like “does this feed contain schedule information”? You can work around this by creating a multiple-choice field with the options “Yes” and “No”, but they’re missing an opportunity to make entering and displaying these fields simpler.

Limited spatial information

Here we are, a couple of years after the Google Maps API catalyzed a geographic revolution on the web, and Dabble DB’s only location options are “US or Canadian state/province code” and “Country Code”. To their credit, they do automatically link to a Google Maps search for your term in some cases, but they could provide far more interesting map views if they simply had a lat/lon geocode field and dumped it into Google Maps.

Ontological limitations

It’s very cool that Dabble DB lets you put one item in multiple “categories” (schemas, basically). But in practice, their implementation is less handy than it would seem. Say you had two kinds of things, “websites” and “data providers”, both of which have names (of course) along with other more category-specific fields. If it turns out that you want to represent something that’s both a website and a data provider, and you put both categories on the same object, you end up with two name fields.

You could take a different tack and say that a “data provider” is a specific kind of “website”, so only the website category will have a name field. That’s great, but then there’s no easy way to have the system automatically add the “website” category when you go to create your next “data provider” item. Even worse, when you go to create a new view of your data based on “data providers”, there’s no way to choose to display the “name” field from the “websites” category in the table. (Note: this isn’t strictly true for the name field, since they special-case it so that you always have some kind of identifier, but it’s true for other attributes.)

Rudimentary public views

I could probably work around all those things, but there’s one thing that makes Dabble DB unusable for the Headway data set: the public view is horribly impoverished. Here are the results of my experiments: my lovingly interlinked data has been reduced to a box of yellowing printouts, metaphorically speaking. There’s no apparent way for the viewer to see a single entry laid out in a readable form, let alone follow links between items or search & filter by different attributes.

It’s a shame, because Dabble DB really is the best that I’ve seen so far in most other respects.

Freebase to the rescue?

There’s another contender on the horizon: the wonderfully named Freebase. Tim O’Reilly recently threw a debutante ball for it on his influential blog, and it’s easy to see why it stirred some excitement (and controversy) in the online community. It sounds quite a bit like the things I was working on at MAYA, but with a pleasantly simple web-based interface and without the radical peer-to-peer architecture. On the other hand, it’s hard to say for sure, since the alpha is currently only open to a few fortunate souls, and details are scarce. Hopefully I’ll get a chance to check it out soon.

In the meantime, Dabble DB has a lot of potential, especially since they recently launched their free Creative Commons version (which made it a viable option for Headway). Hopefully, with a few refinements, they’ll be able to turn it into a compelling alternative to developing custom code any time you want to share some interconnected information.

Geotagging Photos with Picasa and Google Earth

Geotagged photos

A few weeks ago, I went on a hike to Treasure Island, and I thought it’d be a good opportunity to try out the state of the art in simple photo geotagging, so that people could see photos of my trip on a map. My first test involved:

  1. Uploading the photos to Flickr using FlickrExport for iPhoto
  2. Finding the spot for each photo in Google Earth, creating a placemark, and copying the latitude and longitude into geo:lat= and geo:lon= tags on the Flickr image.
  3. Using Scott The Hobo’s Flickr Photoset Maps to turn the geotagged photoset into an online map.

The resulting map is pretty nice, sluggish Yahoo map aside, and the process wasn’t too painful. The worst pain point was the cut-and-paste geocoding process.

However, since Google just released a whole slew of geographic updates, not to mention a barebones (but snappy) photo hosting service, I thought I’d give it a try using Google tools.

First step: get the photos into Picasa, Google’s excellent free Windows photo organizer. I used FlickrDown to download the photoset from Flickr to my Windows box. It was simple, though I was sad that there was no way to preserve my photos’ tags. I then downloaded the new version of Picasa from the Picasa Web Albums site. (You need to get this specific version to be able to do the fancy stuff I’m about to describe.) Picasa immediately found and imported the downloaded photos—so far, so good.

Next, I went through and geotagged the photos using Picasa’s integration with Google Earth 4. I highlighted some photos in Picasa and selected the Geotag With Google Earth option hidden away in the Tools menu.

Geotag Menu

This took me to a slick geotagging interface in Earth.

Earth Geotagging

Basically, you just drag and zoom around in Google Earth until the crosshairs (which are anchored to the middle of the display) are resting on the point that you want to tag the photo with. Then you just hit the Geotag button, the view bounces to give you visual feedback, and it moves on to the next photo. This was so much more pleasant than manually copying the location to Flickr. When I was done, it brought me back to Picasa. The photos all had little crosshair icons in the corner, indicating that they had been geotagged, and a quick look at the Properties dialog seemed to indicate that the location had been added to the image’s EXIF data.

Properties dialog
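Incidentally, EXIF stores GPS positions not as signed decimal degrees but as degree/minute/second rationals plus an N/S or E/W reference letter. Here’s a minimal sketch of the conversion Picasa presumably performs behind the scenes; the names and output format below are mine, not anything from Picasa or the EXIF tooling:

```java
import java.util.Locale;

// Convert decimal degrees (as you'd get from Google Earth) into the
// degree/minute/second rational form that EXIF's GPS fields use.
class ExifGps {
    static String toDms(double decimalDegrees, boolean isLatitude) {
        String ref = isLatitude ? (decimalDegrees >= 0 ? "N" : "S")
                                : (decimalDegrees >= 0 ? "E" : "W");
        double abs = Math.abs(decimalDegrees);
        int deg = (int) abs;
        int min = (int) ((abs - deg) * 60);
        double sec = (abs - deg - min / 60.0) * 3600;
        // EXIF rationals, e.g. 37/1,49/1,25/1 (seconds rounded here for brevity)
        return String.format(Locale.US, "%d/1,%d/1,%.0f/1 %s", deg, min, sec, ref);
    }
}
```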

Now that the images were geotagged, I found that I could use the View in Google Earth... option to see the photo on the map. It seems that this is implemented using some sort of dynamic folder in Google Earth called Picasa Link that constantly queries Picasa for images with geotags in their EXIF—so effectively, you can browse your Picasa library geographically using Earth! I tried adding a random geotagged phonecam image from the web, and sure enough, it showed up on Earth.

OK, so now that I had found geotagged image bliss, how could I share it? I tried the Export to Google Earth File option in the Tools > Geotag menu, which yielded a nice Google Earth KMZ file with the photos embedded.

Since Google Maps recently added support for viewing KML files, I decided to see if I could view my photos there. The results were not so hot.

Maps KML failure

As you can see, it wasn’t a total bust—the locations show up correctly—but the actual photos were nowhere to be found.

Since I’d seen examples of photos on maps, I was sure it could be done—maybe they just wanted the photos to be linked from the web. The Google Earth UI didn’t seem to give me any way to replace the photos with web links to photos. However, KML is a straightforward XML format—hand-editing ahoy!


(Incidentally, I was hoping that when I uploaded the images from Picasa to my Picasa Web Album account, it would do something smart. Sadly, Web Albums didn’t show any recognition that the images were geotagged, not even in the EXIF section. I’m sure that they’ll eventually sort that out, maybe by automatically generating KML links to Maps.)

Back to the hand-editing; first I had to unzip the KMZ file that Picasa had generated. (It’s just a normal zip file; rename the .kmz to .zip and you should be able to unzip it normally.) The only file that I needed was the doc.kml file; the rest of the archive just contained the photos and thumbnails. I stripped out all the style stuff at the top of the file, since Maps didn’t seem to be paying attention to the icons anyway. Then I replaced the contents of the description tag in each placemark with an image reference and a link to the images on my Picasa Web Album. Finally, I uploaded the KML file, and it worked!
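After the edit, each placemark in doc.kml looked roughly like this. (The name, URLs, and coordinates below are placeholders for illustration, not values from my actual album.)

```xml
<Placemark>
  <name>View from the trail</name>
  <description><![CDATA[
    <a href="">
      <img src=""/>
    </a>
  ]]></description>
  <Point>
    <coordinates>-122.3705,37.8235,0</coordinates>
  </Point>
</Placemark>
```

The CDATA wrapper lets you put plain HTML in the description, which is what Maps renders in the placemark’s info bubble.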

Working map

The result isn’t quite as nice as my original map, because I didn’t immediately see a way to get smaller images out of Web Albums, but it does the job.

The verdict: geotagging with Picasa and Google is a dream, viewing geotagged Picasa photos is awesome, but the web mapping part of the Google photo story needs work.

(Incidentally, while you’re checking out my Treasure Island album, be sure to try pressing the left and right arrow keys while you’re looking at pictures—you can flip through photos really quickly in Picasa Web Albums!)

What I’ve been up to…

…for 20% of my time, anyway.

My teammates and I are proud to present Google Transit, a new Google Labs experiment focused on helping make public transit information more accessible and understandable. We’re starting with Portland, Oregon’s transit system, but we’ll be expanding it to support other cities soon.

For more information, check out the announcement, and if you have feedback or bug reports, we’d love to hear it!

QuickTime 7.0.3 for Windows Installation Problem

This is for the benefit of future searchers: if you’re trying to install iTunes on Windows and it dies during the QuickTime installation with a “-3” error, or you try to install QuickTime 7.0.3 on Windows and get an Error 1714 or a complaint about not being able to find quicktime.msi even though it’s there, here’s how to fix it:

  1. Open regedit
  2. Navigate to HKEY_CLASSES_ROOT -> Installer and select Products
  3. Use the Find menu to find quicktime inside that part of the key
  4. Delete the part of the tree that you find
  5. You should now be able to install Quicktime and iTunes without incident

Google Maps + MBTA Update

Since I had the day off today, I had some time to tinker with my MBTA Google Maps experiment and fix a few things that had been bugging me. The changes include:

  • You can now look for times near a particular address using the Location area under the map. In order to do this, I wrote a little proxy to feed addresses entered on this page to the wonderful free geocoding service, and return the results as JavaScript literals.
  • The page now remembers your location, zoom level, and selected stop between visits.
  • It now works in current versions of Internet Explorer.

At this point, it’s almost suitable for regular use. The biggest improvement still on my list is better handling of stops that fall too close together on the map, which makes them difficult to click on individually. Automatically combining those stops would help the situation.
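One simple way to do that kind of combining is a greedy merge: walk the stop list and fold any stop that lands within some pixel threshold of an already-placed marker into that marker. Here’s a sketch of the idea—in Java for illustration (the page itself is JavaScript), with made-up names:

```java
import java.util.ArrayList;
import java.util.List;

// Greedily merge stops that crowd each other on the map: a stop within
// thresholdPx pixels of an existing marker is absorbed into that marker.
class MarkerMerger {
    // each point is {x, y} in pixel coordinates at the current zoom level
    static List<double[]> merge(List<double[]> stops, double thresholdPx) {
        List<double[]> markers = new ArrayList<double[]>();
        for (double[] stop : stops) {
            boolean absorbed = false;
            for (double[] marker : markers) {
                if (Math.hypot(stop[0] - marker[0], stop[1] - marker[1]) < thresholdPx) {
                    absorbed = true;  // close enough: fold into the existing marker
                    break;
                }
            }
            if (!absorbed) markers.add(stop);
        }
        return markers;
    }
}
```

Since the threshold is in pixels, the same stops naturally separate again as you zoom in.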

Building LuaJava on Mac OS X

In my previous post about coroutines, I mentioned that I’d be looking into LuaJava (a bridge between Java and the Lua language) as a way to get coroutine behavior in a Java environment. Since I had some time this afternoon, I decided to get LuaJava up and running on my PowerBook. Here are some steps that you can follow to build LuaJava on Mac OS X 10.4.2 Tiger. (I’ve also successfully tested these instructions on Mac OS X 10.3.9 Panther.)

LuaJava requires Lua 5.0, so first we need to download the Lua 5.0 source and build it:

% tar xzvf lua-5.0.tar.gz
% cd lua-5.0
% make
% sudo make install

Next, we need to build LuaJava. Download LuaJava 1.0, then extract it and switch to its directory:

% tar xzvf luajava-1.0.tar.gz
% cd luajava-1.0

Since LuaJava’s config file comes set up for Linux by default, we need to edit it to use Mac OS X-friendly settings. Comment out the following lines:

#LIB_EXT= .so
#LIB_OPTION= -shared
#DLLIB= -ldl

and uncomment the corresponding ones:

LIB_EXT= .jnilib
LIB_OPTION= -dynamiclib -all_load

We also need to change the LIB_LUA line to read:

LIB_LUA=/usr/local/lib/liblua.a /usr/local/lib/liblualib.a

With those changes in place, we can just type

% make

and apart from a few JavaDoc warnings, everything should go smoothly. To test it, we can fire up the LuaJava Console:

% java -cp "luajava-1.0.jar" org.keplerproject.luajava.Console
API Lua Java - console mode.
> print('Hello, world!')
Hello, world!
> exit

OK, that looks good. How about the included tests?

% cd test
% ./
% ./

Again, working fine. There are a couple of other Lua test files in the test directory that we can run like so:

% java -cp "../luajava-1.0.jar" -Djava.library.path=.. \
    org.keplerproject.luajava.Console testMemory.lua

(replace testMemory.lua with the name of the file you want to run)

OK, let’s try creating a program of our own. There’s a decent Hello World on the LuaJava examples page, so (after switching back to the luajava-1.0 directory), create the and hello.lua files depicted on the examples page. You’ll need to add an import line to the top of so that Java knows where to find all the LuaJava classes:

import org.keplerproject.luajava.*;

Once you have the files, compile the Java class:

% javac -classpath luajava-1.0.jar

Now run it:

% java -cp luajava-1.0.jar:. Hello

If everything is working correctly, you should see:

Hello World from Lua!
Hello World from Java!

Huzzah! Now we have all the pieces we need to start embedding Lua functionality in Java code.

(Lack of) Coroutines in Java

My current work project involves lots of small bits of code talking to each other asynchronously and requesting data from a distributed network. This means that when my code goes to retrieve a piece of data, it may be returned quickly, slowly, or not at all. Because of this uncertainty, we wouldn’t want to hold up other parts of the code while waiting for data, and so we want to put the requesting code aside and just come back to it later if and when the data arrives.

The current state of the art looks a bit like this–effectively a callback-based state machine:

Snippet establishPreference = new Snippet() {
    // called when the result of requestData is available
    public void dataReceivedCallback(Data d) {
        if (d.getValue("likes_marmalade")) {
            serveToast.requestData("marmalade");
        } else {
            serveBagel.requestData("cream cheese");
        }
    }
};

Snippet serveToast = new Snippet() {
    public void dataReceivedCallback(Data d) {
        // serve toast with the topping we received
    }
};

Snippet serveBagel = new Snippet() {
    public void dataReceivedCallback(Data d) {
        // serve a bagel with the topping we received
    }
};

To kick off this process, you would call:

establishPreference.requestData("Joe Hughes");

And then depending on whether I liked marmalade or not, I would eventually get served either toast with marmalade or a bagel with cream cheese.

This works fine, but you can see how it could easily get convoluted once you do any kind of serious branching or looping. For the sake of easier creation and maintenance of these things, I’d really like to be able to express these operations more like this:

Data customer = requestData("Joe Hughes");
if (customer.getValue("likes_marmalade")) {
    Data topping = requestData("marmalade");
} else {
    Data topping = requestData("cream cheese");
}
When you put it that way, it’s much easier to see what’s going on. The problem is that in order for this to work, you have to be able to pause this block of code whenever you go off to do a requestData() call, and then restart it with the result when you receive it. It turns out that what I’m describing is a somewhat obscure programming language construct called a coroutine.

Unfortunately, not many modern programming languages natively support coroutines. Python does, somewhat (through Stackless and the new Generator construct). Ruby seems to. Io and Lua do. Java, however, doesn’t.

I’ve certainly been able to use threads to make things that look like coroutines:

public abstract class PseudoCoroutine 
    implements DataListener, Runnable {

    private Data response = null;

    /**
     * This method starts a thread to run the specified task.
     * @param task the PseudoCoroutine subclass to be run.
     */
    public static void doTask(PseudoCoroutine task) {
        Thread t = new Thread(task, task.getClass().getName());
        t.start();
    }

    /**
     * This is a callback method from the DataListener interface,
     * provided to the DataService to call when it has retrieved
     * the requested data.
     * @param d the data received
     */
    synchronized public void dataReceived(Data d) {
        this.response = d;
        this.notifyAll();
    }

    /**
     * Provides a delayed-synchronous way to perform a data request.
     * This method call will block until the response
     * is received from the server. 
     * @param dataID the ID of the data to be requested.
     * @return A Data object corresponding to the requested dataID.
     * @throws InterruptedException
     */
    protected Data requestData(String dataID) throws InterruptedException {
        synchronized (this) {
            // clear out any response left over from a previous request
            this.response = null;

            // second parameter is used to pass this.dataReceived() as a callback
            DataService.getInstance().requestData(dataID, this);

            while (this.response == null) {
                this.wait();
            }

            return this.response;
        }
    }

    // to make this a Runnable that can be passed to a thread
    public void run() {
        try {
            taskBody();
        } catch (InterruptedException e) {
            // the task was interrupted while waiting for data
        }
    }

    /**
     * This is where the PseudoCoroutine subclass should perform its task.
     * @throws InterruptedException
     */
    public abstract void taskBody() throws InterruptedException;
}

This class lets me just use the code I had written above:

private class ServeBreakfastBread extends PseudoCoroutine {
    private String customerID;

    public ServeBreakfastBread(String customerID) {
        this.customerID = customerID;
    }

    public void taskBody() throws InterruptedException {
        Data customer = requestData(customerID);
        if (customer.getValue("likes_marmalade")) {
            Data topping = requestData("marmalade");
        } else {
            Data topping = requestData("cream cheese");
        }
    }
}

PseudoCoroutine.doTask(new ServeBreakfastBread("Joe Hughes"));

This works fine, except that each of these things now requires an OS thread, with their attendant limitations (good luck getting more than a few thousand on a desktop VM). I’d also like to have the ability for several of these coroutines to be able to operate on common state information without lots of synchronization hassle, which I can’t do if each coroutine lives in its own thread.
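The callback style can at least be centralized so that thousands of pending tasks cost a queue entry apiece rather than an OS thread. Here’s a minimal single-threaded event-loop sketch; the EventLoopSketch and Callback names are my own stand-ins for the real DataService plumbing, and the “data” here arrives instantly rather than asynchronously:

```java
import java.util.LinkedList;
import java.util.Queue;

// A toy single-threaded event loop: each pending task costs a queue
// entry instead of an OS thread. (Illustrative sketch only; not part
// of any real framework.)
class EventLoopSketch {
    interface Callback { void dataReceived(String data); }

    private final Queue<Runnable> ready = new LinkedList<Runnable>();

    // Instead of blocking a thread, enqueue the continuation that should
    // run once the data is available. (Here it "arrives" on the next
    // turn of the loop.)
    public void requestData(final String dataID, final Callback continuation) {
        ready.add(new Runnable() {
            public void run() { continuation.dataReceived("data:" + dataID); }
        });
    }

    // Drain the queue on one thread; each callback may enqueue more work.
    public void runLoop() {
        Runnable task;
        while ((task = ready.poll()) != null) {
            task.run();
        }
    }

    public static void main(String[] args) {
        final EventLoopSketch loop = new EventLoopSketch();
        final StringBuilder served = new StringBuilder();

        // The nested callbacks are still ugly, but thousands of these
        // can be pending at once without a thread apiece.
        loop.requestData("Joe Hughes", new Callback() {
            public void dataReceived(String customer) {
                loop.requestData("marmalade", new Callback() {
                    public void dataReceived(String topping) {
                        served.append("toast with ").append(topping);
                    }
                });
            }
        });

        loop.runLoop();
        System.out.println(served); // prints: toast with data:marmalade
    }
}
```

Of course, this keeps all the convolution of the callback style; it just removes the thread-per-task cost, which is exactly the trade-off a real coroutine would let me avoid.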

So, where to now? Maybe I can embed another language interpreter within my Java system and express the coroutines in that. Unfortunately, Jython doesn’t support Generators yet. I’ll have to look into LuaJava and JRuby to see if they have anything to offer.


The typical setup in my office is that I have my G5 powering two monitors front and center, and the Powerbook beside them, on a (modified) iCurve for ergonomic viewing. While this is great for the displays, it leaves the problem of controlling the laptop. At one point I had a KVM switch set up, but the hassle of plugging in a USB cable and flipping the switch led me to just type un-ergonomically on the laptop’s keyboard.

Then I came across Synergy. It’s a cross-platform tool that lets you send your keyboard and mouse commands to other machines on your network–sort of like VNC without the screen-sharing (since the other screen is right in front of you). The Synergy team’s most brilliant innovation, though, is the interface for switching machines. Basically, you can configure your machines so that when you roll your mouse pointer off the edge of one machine’s screen, it magically appears on the corresponding edge of a different machine’s screen. You can roll your mouse from your Linux box across your Windows box over to your Mac in one smooth motion. It’s like the way that multi-monitor setups work, except that under the hood it’s seamlessly switching to sending your input to another machine over the network.

I’ve been using Synergy for a few months now, but it’s not without its rough edges. When I last set it up, configuration was a text-editing affair, though the SynergyKM preference pane add-on for Mac OS X makes things much more automatic. I also tended to experience general glitchiness on OS X. A vestigial mouse pointer would often remain on my main monitor, twitching distractingly, as I controlled the laptop. It also didn’t handle modifier and function keys, meaning I still had to press the function keys on the laptop directly to trigger Exposé.

Enter Teleport. While (or perhaps because) it’s Mac-only, it solves most of the problems I had with Synergy. The configuration is a breeze (using Rendezvous AKA Bonjour), and input forwarding is smooth and comprehensive. It also seems to automatically sync clipboards well, something that I was using Erik Lagercrantz’s ClipboardSharing utility for until he failed to update it for Tiger.

So far, I only have a few minor critiques. First, it doesn’t appear to allow you to put two remote screens side-by-side–the remote screens must be adjacent to the main computer’s screens. Also, it seems to hit the disk every time I roll over the boundary between two machines, which is audibly distracting and causes an annoying delay in which mouse motion isn’t counted on the new screen. Even so, I think it will be a part of my desktop setup from now on. Thanks Julien!

MBTA Google Maps Experiment

Since Google so graciously released an API for their excellent mapper last week, I figured I’d take it for a spin. I had written some code to prise nearby stops and bus times from the MBTA’s handy trip planner a while back, and so I decided to glom them together and see what happened. Here’s the result.

Here’s how it works: You pan and zoom around the map until you find an area of Boston you’re interested in, and then click Recalculate Stops. The map will then update to show stops within a half-mile of the center of the map. You can click on any of these points to get the next three times for the routes at that stop. Seems to work for bus, subway, commuter rail, and ferry routes.
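The “stops within a half-mile of the center” query can be sketched with the standard haversine great-circle formula. This is just an illustration with made-up coordinates, not the actual code behind the page:

```java
// Haversine great-circle distance, used to keep only stops within a
// half-mile of the map center. (Sketch only; the real page gets its
// stop list by scraping the MBTA trip planner.)
class StopFilter {
    static final double EARTH_RADIUS_MILES = 3958.8;

    static double distanceMiles(double lat1, double lon1,
                                double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_MILES * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    static boolean withinHalfMile(double centerLat, double centerLon,
                                  double stopLat, double stopLon) {
        return distanceMiles(centerLat, centerLon, stopLat, stopLon) <= 0.5;
    }

    public static void main(String[] args) {
        // map center near Park Street vs. a point roughly 0.3 miles away
        System.out.println(withinHalfMile(42.3564, -71.0624, 42.3601, -71.0589));
    }
}
```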

It’s vaguely useful for an afternoon’s work, though there are a few quirks:

  • Stops which are too close to each other may be impossible to click on (sometimes you can tell because their shadows look darker)–I’d probably have to write some code to combine these into a single marker.
  • The same subway line is often broken into separate listings. This is something that the trip planner does, perhaps to get around some sort of database limitation? In any case, this can probably be remedied with more code on my end.
  • Other than the subways, each route is listed only once per map, at the nearest stop. This means that even though the 87 might stop in several places within the mile-wide area being searched, you’ll only see the stop that’s nearest to the center of the map.
  • There’s no way to jump to a particular address. The Google Maps API currently doesn’t do geocoding, though I could integrate an external geocoding service or the trip planner’s built-in geocoder given a little more time.
  • Clicking the Recalculate Stops button isn’t the smoothest thing in the world. My first attempt updated the query automatically when you panned the map, but that made things a bit too jumpy as you panned around to get a better look at the stops and surroundings. Needs more work.
  • The iconography and layout could be better.
  • I have no official connection with the MBTA, and they could break the scraping code that this depends on at any time. I hope they’ll be more flattered than offended if they find this.
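Fixing the first quirk (overlapping stops that can’t be clicked) could work roughly like this greedy merge into combined markers; the stop names and pixel threshold are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Greedy merge of markers that sit within a small pixel radius of one
// another, so overlapping stops become one clickable marker.
// (Sketch only, with made-up names and threshold.)
class MarkerMerger {
    static class Marker {
        double x, y;                       // projected screen position
        List<String> stops = new ArrayList<String>();
        Marker(double x, double y, String stop) {
            this.x = x; this.y = y; stops.add(stop);
        }
    }

    static List<Marker> merge(List<Marker> input, double radius) {
        List<Marker> merged = new ArrayList<Marker>();
        for (Marker m : input) {
            Marker near = null;
            for (Marker existing : merged) {
                double dx = existing.x - m.x, dy = existing.y - m.y;
                if (Math.sqrt(dx * dx + dy * dy) <= radius) {
                    near = existing;
                    break;
                }
            }
            if (near != null) {
                near.stops.addAll(m.stops);   // fold into the existing marker
            } else {
                merged.add(m);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        List<Marker> in = new ArrayList<Marker>();
        in.add(new Marker(0, 0, "Mass Ave @ Main"));
        in.add(new Marker(3, 4, "Main @ Mass Ave"));   // 5 px away: overlaps
        in.add(new Marker(200, 120, "Broadway @ 3rd"));
        System.out.println(merge(in, 8.0).size()); // prints: 2
    }
}
```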

All in all, the Maps API is pleasant to work with, though the way that Google binds API keys to individual directories keeps you from being able to just copy the source files from your test directory to your deployment directory. But that’s the most minor of nitpicks, given that you can dispense keys for all the directories you want. They’ve done a great job, and I think we’re going to see an even bigger wave of new mapping apps in the coming weeks.

Tiger Dashboard: First Impressions

When people install a new operating system, one of the first things they do is go poking around to see what’s different. With Tiger, one of the first things that they’re going to notice is Dashboard.


While Dashboard was immediately compared to Konfabulator because of its visual, technical, and “widget” naming similarities, it also owes much to Apple’s old Desk Accessories, as John Gruber and others have pointed out. I think of them as a new version of the old Terminate and Stay Resident programs popular on MS-DOS. Generally, those programs didn’t live on the same screen as the main program you were using. Instead, they popped up when you pressed a keyboard combination. Apple’s design decision to put Dashboard widgets on a separate “layer” that you can call up makes them much more useful to me than Konfabulator, because there truly is never enough screen real estate, and the Konfabulator widgets immediately got smothered under other windows when I tried to use them. (I should note that it’s also possible to keep Dashboard widgets on your desktop.)

Development Simplicity

Apple’s decision to make HTML/CSS/Javascript the lingua franca is Dashboard’s most interesting feature to me. Konfabulator’s XML/Javascript environment came close, but Apple’s “lazy” decision to use their Safari WebKit engine means that many widgets can actually be developed and viewed in a browser. (Many of the early posters on the Dashboard Widgets site were clearly developing their widgets without any access to Tiger.)

Using standard technologies means that their developer population is the large set of people who already know how to develop using web standards. Furthermore, given the recent buzz around “Ajax” web applications, Dashboard gives people like me another excuse to teach ourselves data-driven Javascript programming.

By effectively turning Ajax into a GUI library for desktop apps, Dashboard almost fulfills the plan that Netscape appeared to be working towards in the late 1990s. While many widgets will be little more than borderless self-refreshing web pages, it’s also possible to hook up a Dashboard interface to native code. In Apple’s first Dashboard widget contest at WWDC 2004, the top prize went to a widget which put an HTML face on the GNU Go game engine.


Dashboard isn’t all roses though. For one, going to HTML-based interfaces makes it even easier for Apple to continue their recent trend of ignoring their own interface conventions whenever it suits their fancy. (See brushed metal vs. aqua, the woodgrain GarageBand window, the non-standard sets of widgets in the new iPhoto.) In several of the sample widgets that ship with Dashboard, they go to great lengths in Javascript to create properly-behaving milky-white scroll bars rather than simply using the standard ones:

Even so, the initial set of widgets that Apple ships shares a largely consistent set of features: the aforementioned white scroll bars, a circled i which fades in to trigger a flip to the preferences side, rounded square widget icons, and a dark textured flip side with a large “Done” button. It’ll be interesting to see which, if any, of these features get adopted by the third-party widget community.

What is it good for?

In the initial burst of activity following Tiger’s release, there’s a lot of experimentation going on in the Dashboard world. The best place to watch this happen is the Dashboard Widgets Showcase, which seems to have captured the, uh, Tiger’s share of widget-writing activity. From what I’ve seen so far, widgets generally fall into the following categories (from most to least useful):

  • Status Displays — These display some piece of information which changes relatively frequently, and which you want to see at a moment’s notice. Examples include Apple’s weather and stock displays, the iTunes Connection Monitor, and my Bloglines Notifier.
  • Handy Controls — These are small bits of functionality that people want to have ready access to–good examples here are Apple’s iTunes widget, Panic’s Transmit FTP widget, and the Capture screenshot tool.
  • Games and Geegaws — Games and toys like miniPatience, Hula Girl, and MAYA Cards. These don’t necessarily benefit from being on Dashboard, other than not crowding up your desktop, and being quickly dismissable when your boss walks by.

Baby Steps

As first experiments with Dashboard, I’ve put together a few widgets of my own. The first is Bloglines Notifier, featuring a gorgeous visual design by Jeremy Koempel:

It’s very simple, showing you the unread posts in your Bloglines online RSS reader account, and giving you a button to go read them. The other is a little promotional widget that I cooked up with some of my co-workers at MAYA Design:

This one’s a virtual version of our customized playing cards that we give out as promo schwag. Each one features a different pithy design quote which reflects our design philosophy. This one came about when I realized that Dashboard’s flipping functionality would be a good match for playing cards.

I’ve got a couple others in the works, so keep watching this space…

Multi-Pointer Gestures

Apple released another bump in the Powerbook line this morning. Typical weekday release, still no G5, no big deal—right? Well, there was one thing that caught my eye:

Trackpad Scroll

So what? We’ve had things like SideTrack to set aside sections of the trackpad for scrolling for some time now.

Two Arrows

Wait, two fingers, you say?

For the past 15 years or so, we’ve pretty much been stuck with a single cursor with a couple buttons as our narrow pipeline into the world behind the screen. A few niche products and research projects have demonstrated the potential of multiple-pointer interaction. For instance, a SIGGRAPH video that I once saw showed a user holding a virtual tool palette with one hand, and clicking through it with the other hand’s cursor. It also made rectangular selection more fluid, with each pointer getting one opposite corner. A company called FingerWorks has been selling keyboards and touchpads that can detect multiple fingers. I’ve been curious, but they’re pricey and I’ve never found a demo unit that I could try for myself. With such a tiny market, developers and OS makers have had little incentive to investigate the possibilities of multi-pointer interaction.

That’s why Apple’s addition, if I’m guessing correctly and they’re not just using some capacitance trick, stirs my imagination. If they eventually move to multi-finger touchpads across their entire portable line, it’d be the first wide-scale deployment of a multi-pointer input device. (Nintendo blew their chance with the DS—it’s disappointing that its touchscreen can only detect one finger at a time, since it would otherwise serve as a great reconfigurable controller.)

The operating system could still be a bottleneck. I have no idea whether OS X can support multiple pointers under the hood—their initial use of the multi-finger gestures for things like scrolling is easy enough to do at the driver level. But I hope that if they expand the use of the multi-finger trackpads, they’ll eventually expose it at the OS level. I would love to have the additional expressiveness in my work. For example, I could use it to solve the ambiguity of whether the user wants to drag a frame or something inside of it—in the latter case, you could just pin down the frame with one finger and grab the contained element to yank it off. You could zoom in or out by grabbing a map at two points and bringing your fingers further apart or closer together (Hiroshi Ishii prototyped this behavior with phycons on a projection table). Or you might express the difference between “move” and “copy” operations by whether the user grabbed the item with one or two fingers.
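The two-finger map-zoom idea above reduces to simple arithmetic: the zoom factor is the ratio of the current finger separation to the separation when the gesture began. A toy sketch, not any real trackpad API:

```java
// Pinch-zoom sketch: the zoom factor is the ratio between the current
// and initial distance separating the two touch points.
// (Purely illustrative; no real multi-touch API is involved.)
class PinchZoom {
    static double distance(double x1, double y1, double x2, double y2) {
        double dx = x2 - x1, dy = y2 - y1;
        return Math.sqrt(dx * dx + dy * dy);
    }

    // scale > 1 means the fingers moved apart (zoom in),
    // scale < 1 means they moved together (zoom out)
    static double zoomFactor(double[] start1, double[] start2,
                             double[] now1, double[] now2) {
        return distance(now1[0], now1[1], now2[0], now2[1])
             / distance(start1[0], start1[1], start2[0], start2[1]);
    }

    public static void main(String[] args) {
        double f = zoomFactor(new double[]{0, 0}, new double[]{10, 0},
                              new double[]{0, 0}, new double[]{20, 0});
        System.out.println(f); // prints: 2.0 (fingers twice as far apart)
    }
}
```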

Who knows, maybe Apple could end up doing for multi-pointer input what they did for USB and WiFi. Well, I can dream…

Update: As it turns out, some earlier Powerbooks and iBooks (though, sadly, none in my household) have trackpads that support this feature. On those machines, you can install a driver mod to enable two-finger scrolling capabilities.

Update: Looks like I missed another interesting feature of the new Powerbooks: an accelerometer! Enterprising hackers have already found ways to tap into it from software, yielding a “tilt”-control iTunes interface. I think it’s an unwritten law of Mac software development that every Mac I/O device must eventually be hooked up to iTunes.

More A9 Yellow Pages

Here are a couple of other interesting bits about the A9 Yellow Pages:

While my previous post on the topic was more about the UI of the feature, I just noticed another interesting part of the page:

A9 Update Listing function

Clicking that button takes you to a fairly comprehensive set of web forms which allow the business owner or any random websurfer to contribute metadata about that business—things like phone numbers, email, website, hours of operation, and credit cards accepted. This, along with the fact that all of Amazon’s existing commenting and recommendation features are available for the businesses, made me realize that what they’re really doing here is planting the seeds for ownership of the real-world metadata game as thoroughly as they’ve captured the product-metadata space.

What’s the typical place to link to if you’re talking about a book or DVD online? Amazon. (I’ve even got a plugin on my WordPress installation that automates these sorts of links.) Amazon really realizes that they’re in the cataloging business as much as the product-shipping business—I don’t have a reference handy, but I remember Bezos saying that they could always make a business licensing their catalog (with all the rich comments, ratings, and other user-contributed metadata) if the “selling things” bit didn’t work out. Now they’re poised to become a definitive resource about local businesses (and other physical entities).

As good a job as I’m sure they’ll do with it, the fact remains that according to their license, Amazon’s dataset (including user contributions) is proprietary. (That could be one reason why they decided to run their own GPS photo trucks rather than employing pre-existing road-photo data sets.) They alone ultimately control what can be done with it. Wouldn’t it be better if we could find a way to collaboratively build similar systems without throwing our work over a proprietary wall in the process?

Fast Feedback

A9 Yellow Pages 1369

This morning’s buzz on the web seems to be centered around A9’s new Yellow Pages feature, which tries to show photos of the businesses alongside their results. How did they get all these photos? Basically, they had trucks with side-facing cameras and GPS units driving down the major commercial thoroughfares in a bunch of cities, and the system tries to roughly match up the geocoded address with the photos taken near that location. (As Russ Beattie points out, this has been done in Spain before, but not with this level of grace in the U.S.) If you know anything about GPS, you’ll realize that this process isn’t very exact, and indeed most of the photos of Cambridge businesses were about a block off their intended targets.
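The matching step they describe (pairing a geocoded address with the photo snapped nearest to it) can be sketched as a simple nearest-neighbor search; the coordinates and names here are invented, and the real A9 pipeline is not public:

```java
// Match a geocoded storefront to the nearest geotagged photo in a
// drive-by sequence by minimizing distance. (Sketch with made-up data.)
class PhotoMatcher {
    // photoPositions[i] = {lat, lon} of the i-th photo in the sequence;
    // returns the index of the closest photo, or -1 if there are none.
    static int nearestPhoto(double addrLat, double addrLon,
                            double[][] photoPositions) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < photoPositions.length; i++) {
            double dLat = photoPositions[i][0] - addrLat;
            double dLon = photoPositions[i][1] - addrLon;
            // squared degrees are fine for ranking points this close together
            double d = dLat * dLat + dLon * dLon;
            if (d < bestDist) {
                bestDist = d;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // three photos along a (fictional) drive down a Cambridge street
        double[][] photos = { {42.3736, -71.1190},
                              {42.3738, -71.1185},
                              {42.3741, -71.1180} };
        System.out.println(nearestPhoto(42.3739, -71.1184, photos)); // prints: 1
    }
}
```

And of course GPS error in either the photo positions or the geocoded address shifts the whole match, which is presumably why the Cambridge results were a block off.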

A9’s saving grace here is that they provide an incredibly painless way for users to correct the listings. At the bottom of the screenshot above, there’s a smooth javascript-driven row of images that you can use to pan down the street, and in a single click, assert that one of the particular photos is in fact the correct photo of the business—without going to another page, without signing in, without hassle. In my own projects like buskarma, I’ve learned the value of fast feedback—if you can present users with a minimal interface at the exact point at which they notice an inconsistency or failure of the system, you can often skim nice, targeted content improvements off the top of the user’s brain.

In A9’s case, once I corrected the entry shown above, it immediately started using that photo as the definitive one. It didn’t, however, update the thumbnail in the search results listing (I assume that’s cached). It also didn’t give me a way to assert that the photos were of the wrong side of the street for the business I was looking for. Finally, it didn’t make any attempt to re-interpolate the locations of the nearby businesses based on my assertion. Still, the mechanism is a great Wikipedia-style way of having the legions of web users who are undoubtedly kicking the tires of this service today improve the results as they go along.

Road Editor

Oh, and by the way, A9′s not the only one who’s been driving around with photo trucks. Peep this screenshot from a collaborative GIS demo that I helped put together for a northeastern state which just happened to have yearly drivethrough data for all of its state roads. Track me down at ETCon if you want to see it in action.

LibraryLookup for the Minuteman Library Network

It’s a shame that Amazon provides much better facilities for searching and wishlisting books than most local libraries do, since you can save a bunch of cash (not to mention room in your house) by only buying the books which are worth re-reading. Fortunately, Jon Udell’s LibraryLookup bookmarklet tool offers a way to combine the handiness of Amazon’s catalog with the cost savings of library use.

A bookmarklet is a browser bookmark which contains a glob of JavaScript code instead of a URI, so that it becomes a little program which operates on the page you’re currently viewing. In the case of LibraryLookup, it scours the web page you’re looking at for an ISBN number, which it then feeds to your library catalog so that you can jump directly to the listing there.

The Minuteman Library Network is an umbrella organization for many local libraries in the Boston area, including my own Cambridge Public Library. They recently changed their catalog format, and I updated the bookmarklet to work with the new system—here’s the result:


To “install” it, just drag the “library” link above to your bookmarks bar. Then, next time you’re looking at some random book on the web, click the “library” bookmarklet and you’ll get a popup showing whether it’s available at any of the Minuteman branches and giving you the ability to reserve it. Enjoy!

Update: It’s worth mentioning that I had some problems posting the bookmarklet code in WordPress for a bit—it turned out that it was eating the backslash characters when saving. I replaced each backslash with %5c, which fixed the issue.

Home Heartbeat Unveiled

Home Heartbeat Starter Kit

Over the past year, MAYA Design has been working on a pretty cool project for Eaton, but I haven’t been able to say anything about it. However, since it’s being shown at CES and now has a public site, I think I can start writing about it. I should mention that while I work for MAYA, I haven’t had much personal involvement in this project; I just think it’s nifty.

Home Heartbeat is an inexpensive nervous system for your home. It’s one of the first consumer-level uses of Ember‘s low-power mesh-networking technologies. While the individual sensors don’t actually relay messages from other sensors for power reasons (that way, they can supposedly last for years on one battery), you’ll be able to get repeater modules to extend the coverage range of the net (you get about 90 feet per base or repeater). When you bring home the starter kit and plug in the base station, you will effectively have a mesh network in your house.

So what does this get you? Well, the Ember chips are cheaper and more power-efficient than, say, WiFi, so it’s more feasible to place networked sensors (and actuators) around your house and just forget about them until they have something interesting to tell you. (A network supports up to 30 devices.) Here’s the starting lineup of sensors:

  • wet/dry sensor — can tell you if the basement’s flooded or Sparky’s water bowl has run dry
  • open/closed sensor — tells you the current state of a door or window
  • power sensor — stick this between the outlet and your iron or TV’s cord and you’ll know if it’s on or off (I want one that can tell me the actual power draw, though, like the Kill A Watt)
  • water valve shutoff — hey, an actuator snuck in there! I don’t know much about this one; presumably, you could set it up to cut off your water if the wet/dry sensor detects flooding
  • reminder — Here’s where things start to get interesting. This thing is a timer that you stick next to some task you’ll need to accomplish in the future. When you take care of it, you touch it and the timer starts over. So you can put it next to your air system filter, and tell it to ping you in three months—or stick it on the washing machine and have it remind you to put your laundry in the dryer in 40 minutes.
  • attention sensor — This is actually a networked button, which is kind of cool in itself. Stick it by the door and have the kids press it when they get home—or give one to your neighbor to stick on their fridge, so they can press it if they see a suspicious character lurking around your house.

Home Heartbeat Home Key

That’s the input, what about the output? Well, the base station comes with a small pocketable display which the team informally called the “key fob” (you can actually use it as a keychain), but which is apparently called the Home Key™ in the product. It’s got a vibration feature (and probably some loathsome beepy stuff) to alert you to sensor messages, but it also acts as the setup interface. When you first get a new sensor, you slide the key into it, and the sensor bonds with your network. At the same time, it becomes the user interface for that sensor so that you can name it and configure it. When you’re done tweaking the sensor, you just pull out the key and stick it back in your pocket.

What happens when you leave the house? Unfortunately, your key is left high and dry, though it does remember the last state of all your sensors, so you’ll still be able to check whether you left your iron on or not. In addition, the base station can call out using your landline to send SMS messages to your cell phone for simple alerts. It’s too bad that they couldn’t work in a pager chip or something so that you could keep using the same key for your alerts.

So, basement flood warnings are great and all, but what else can you do with a household mesh network? Well, the base station has a USB port, which will probably support some interesting programming possibilities, along with allowing you to use your broadband connection for the SMS alerts. From what I hear, there will be some effort set aside to help the developer community do interesting things with this platform in the coming months.

If you’re interested in more details, I recommend checking the product manuals in the support section of the site.

Update: My co-worker Mike discusses the philosophy and motivation of Home Heartbeat.

Update: MAYA now has a page about Home Heartbeat.