I thought this was pretty awesome and on-point; check it out:
An apt re-imagining of how interaction with the desktop computer should work. At first, while watching, I thought, "if you're going to suggest a change to desktop interaction interfaces, why not just go all out and promote eye trackers or cerebral interfaces?" But I realized that Miller's approach is significantly different from the one we have now, while still being feasible in the relatively near future.
I mean, we already have multi-touch; hardware for the input device could be produced more cheaply than a multi-touch display (since it doesn't have to display anything). Then it's a matter of writing drivers (easy enough) and adapting software (maybe a little harder). Integration with Gnome 3.0 would be awesome.
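For a sense of what the driver-side work looks like: on Linux, multi-touch hardware typically surfaces as a stream of fixed-size `input_event` structs on a `/dev/input/eventN` device, with touch coordinates reported as `EV_ABS` events. Here's a minimal sketch of decoding that stream; the byte layout and event codes match the Linux evdev/multi-touch protocol, but the sample buffer is synthesized here rather than read from real hardware:

```python
import struct

# Linux input_event layout: timeval (two longs), then type, code, value.
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

# Event type/code constants from the Linux multi-touch protocol.
EV_ABS = 0x03
ABS_MT_POSITION_X = 0x35
ABS_MT_POSITION_Y = 0x36

def decode_events(buf):
    """Yield (type, code, value) tuples from a raw evdev byte stream."""
    for off in range(0, len(buf) - EVENT_SIZE + 1, EVENT_SIZE):
        _sec, _usec, etype, code, value = struct.unpack_from(EVENT_FORMAT, buf, off)
        yield etype, code, value

# Synthesize one touch point; a real driver would read these bytes
# from /dev/input/eventN instead.
sample = (struct.pack(EVENT_FORMAT, 0, 0, EV_ABS, ABS_MT_POSITION_X, 120)
          + struct.pack(EVENT_FORMAT, 0, 0, EV_ABS, ABS_MT_POSITION_Y, 340))

point = {code: value for _t, code, value in decode_events(sample)}
print(point[ABS_MT_POSITION_X], point[ABS_MT_POSITION_Y])  # prints: 120 340
```

The "adapting software" part is then about turning streams like this into gestures the desktop understands, which is where toolkit integration (Gnome, etc.) comes in.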