Multi-Pointer Gestures

Apple released another bump in the Powerbook line this morning. Typical weekday release, still no G5, no big deal—right? Well, there was one thing that caught my eye:

[Image: Trackpad Scroll]

So what? We’ve had things like SideTrack to set aside sections of the trackpad for scrolling for some time now.

[Image: Two Arrows]

Wait, two fingers, you say?

For the past 15 years or so, we’ve pretty much been stuck with a single cursor and a couple of buttons as our narrow pipeline into the world behind the screen. A few niche products and research projects have demonstrated the potential of multi-pointer interaction. For instance, a SIGGRAPH video that I once saw showed a user holding a virtual tool palette with one hand and clicking through it with the other hand’s cursor. It also made rectangular selection more fluid, with each pointer grabbing an opposite corner of the rectangle. A company called FingerWorks has been selling keyboards and touchpads that can detect multiple fingers. I’ve been curious, but they’re pricey and I’ve never found a demo unit that I could try for myself. With such a tiny market, developers and OS makers have had little incentive to investigate the possibilities of multi-pointer interaction.

That’s why Apple’s addition, if I’m guessing correctly and they’re not just using some capacitance trick, stirs my imagination. If they eventually move to multi-finger touchpads across their entire portable line, it’d be the first wide-scale deployment of a multi-pointer input device. (Nintendo blew their chance with the DS—it’s disappointing that its touchscreen can only detect one finger at a time, since it would otherwise serve as a great reconfigurable controller.)

The operating system could still be a bottleneck. I have no idea whether OS X can support multiple pointers under the hood; Apple’s initial use of multi-finger gestures for things like scrolling is easy enough to do at the driver level. But I hope that if they expand the use of multi-finger trackpads, they’ll eventually expose the capability at the OS level. I would love to have the additional expressiveness in my work. For example, I could use it to resolve the ambiguity of whether the user wants to drag a frame or something inside of it: in the latter case, you could just pin down the frame with one finger and grab the contained element to yank it off. You could zoom in or out by grabbing a map at two points and bringing your fingers further apart or closer together (Hiroshi Ishii prototyped this behavior with phicons on a projection table). Or you might express the difference between “move” and “copy” operations by whether the user grabbed the item with one or two fingers.
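To make that concrete, here’s a rough sketch in Java of how the pinch-zoom and finger-count logic might look. Everything about the event API here is invented for illustration (pointerDown/pointerMoved/pointerUp callbacks that carry a per-finger ID); it’s not something OS X or Java actually provides today:

```java
import java.awt.geom.Point2D;
import java.util.HashMap;
import java.util.Map;

// A sketch of the two-finger ideas above: one finger drags, two fingers
// zoom by the ratio of their current spread to the spread at gesture start.
// The callbacks and per-finger IDs are hypothetical; no shipping OS
// delivers trackpad events in this form.
public class MultiPointerSketch {

    // Current position of each finger on the pad, keyed by pointer ID.
    private final Map<Integer, Point2D.Double> fingers =
            new HashMap<Integer, Point2D.Double>();
    private double initialSpread = -1; // finger distance when the second finger landed

    public void pointerDown(int id, double x, double y) {
        fingers.put(id, new Point2D.Double(x, y));
        if (fingers.size() == 2) {
            initialSpread = spread(); // baseline for the zoom gesture
        }
    }

    public void pointerMoved(int id, double x, double y) {
        fingers.put(id, new Point2D.Double(x, y));
        if (fingers.size() == 1) {
            dragItem(id, x, y);                // one finger: plain move
        } else if (fingers.size() == 2) {
            zoomTo(spread() / initialSpread);  // two fingers: pinch zoom
        }
    }

    public void pointerUp(int id) {
        fingers.remove(id);
        initialSpread = -1; // gesture over; the next pair gets a fresh baseline
    }

    private double spread() {
        Point2D.Double[] p = fingers.values().toArray(new Point2D.Double[0]);
        return p[0].distance(p[1]);
    }

    // Placeholders for the application's actual behavior.
    private void dragItem(int id, double x, double y) { /* move the grabbed element */ }
    private void zoomTo(double factor) { /* rescale around the gesture's center */ }
}
```

The move-versus-copy distinction would fall out of the same bookkeeping: just check how many fingers are down at the moment the item is grabbed.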

Who knows, maybe Apple could end up doing for multi-pointer input what they did for USB and WiFi. Well, I can dream…

Update: As it turns out, some earlier Powerbooks and iBooks (though, sadly, none in my household) have trackpads that support this feature. On those machines, you can install a driver mod to enable two-finger scrolling.

Update: Looks like I missed another interesting feature of the new Powerbooks: an accelerometer! Enterprising hackers have already found ways to tap into it from software, yielding a tilt-controlled iTunes interface. I think it’s an unwritten law of Mac software development that every Mac I/O device must eventually be hooked up to iTunes.

  • http://davidwatson.org:8086/ David

    The problem with the multiple-input approach that you describe is not the input per se, but the output. Windows NT has supported multiple input queues since the mid-90s, though I’m not sure about OS X. The problem occurs when you try to provide feedback to the user after processing the events through the asynchronous event queue: which event generated which feedback? There’s a large direct-manipulation physics question there that remains unanswered AFAICT. We know the answer isn’t throwing a bunch of message boxes, but I’m not sure that a rigorous model that covers all the cases you’d run into in an OS has been found. The typical aggregated log window seen in IDEs might suffice, but it’s hardly optimal.

  • http://retrovirus.com Joe

    For this to work, I think you’d need pointer events to have a pointer ID attached to them—so that the first finger has pointer ID=0, the next has pointer ID=1, etc. I suspect that some event APIs might have trouble being adapted to this in a backwards-compatible way, but once they had been, handling multi-pointer gestures would be straightforward. In Java terms, when I got a mouseDragged event, I would check to see if I had any other cursors in the mouseDown state, based on previous input that I had received.
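
    Concretely, the check might look something like this, with PointerEvent standing in for a hypothetical MouseEvent that also carries a finger ID (no such class exists in AWT today):

    ```java
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical event type: a MouseEvent that also says which finger produced it.
    class PointerEvent {
        private final int pointerId, x, y;
        PointerEvent(int pointerId, int x, int y) {
            this.pointerId = pointerId;
            this.x = x;
            this.y = y;
        }
        int getPointerId() { return pointerId; }
        int getX() { return x; }
        int getY() { return y; }
    }

    class PointerTracker {
        // Pointer IDs currently in the mouseDown state.
        private final Set<Integer> pressed = new HashSet<Integer>();

        void mousePressed(PointerEvent e)  { pressed.add(e.getPointerId()); }
        void mouseReleased(PointerEvent e) { pressed.remove(e.getPointerId()); }

        void mouseDragged(PointerEvent e) {
            // The check described above: is any *other* cursor held down right now?
            boolean otherFingerDown = false;
            for (int id : pressed) {
                if (id != e.getPointerId()) {
                    otherFingerDown = true;
                    break;
                }
            }
            if (otherFingerDown) {
                // two-finger interpretation: pin-and-drag, zoom, copy, ...
            } else {
                // ordinary single-pointer drag
            }
        }
    }
    ```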