IMHO, the future of computer input lies in optimizing what's already there:
The keyboard exists and is ubiquitous - talk about market share. This isn't like Windows vs. Linux - everything has a keyboard and they're all basically the same (let's leave the subject of keymaps out of it, OK?). A totally new input interface, completely divergent from that basic technology, is therefore kinda hard to envision.
As far as I know, all of the real admin work (not word processing, but coding, command-line work, debugging, etc.) is to this day completely keyboard-based. I can hardly imagine a superuser logged into a Linux mainframe trusting some voice-recognition app to get the options just right as he has the kernel reread the partition table on an active volume holding partitions that must never, ever be unmounted.
Then again, some of the most efficient manipulation of objects happens at the text interface: ever seen an efficient admin hammer away at a config file with vim? Or how about bash, with its myriad shortcuts and symbology? Compare that with a Windows gamer clicking away at a Civilization or Warcraft screen to move units around, and you see where I'm going with this.
Look at a pro graphics designer, on the other hand: he does every operation he can with keyboard shortcuts, and yet 80 percent of his time is still spent selecting the objects he works with and positioning his layers where he wants them.
From this, I take it that the main bottleneck in interfaces is not really that we use a sequential series of characters or key combinations to talk to a computer, but rather that control of objects from this medium is incomplete - you continually have to take your hand off the keyboard to reach over and wiggle the mouse around.
I like the idea of projecting the keyboard map onto a table and sensing when the user executes a "key press" in 3D space. Exactly. Only a few new key combos need to be created - selecting and deselecting, for instance - and the user can then just slide his hands across the KEYBOARD and move the object, without ever taking his hands off the interface he uses to control the computer.
Sure, you'll still see a mouse pointer on the screen, but with a little interface work to show more clearly which objects are selected, you could create something like a massive touchpad across all of the keys. But there are gaps between the keys, you say. Exactly.
Slide a finger or two across the keyboard and the pointer shifts across the screen. When it hovers over what you want, you give it a key combo - say super-esc (super is the "Windows" key) to select, super-esc again to deselect. Do it twice fast and you just double-clicked. So it's selected. Now slide horizontally along a row of keys: the object moves only horizontally. Now we have precision! I remember when I was a kid and someone let me play with something resembling Paint on a Mac. It was excruciating to actually draw the shape you were trying to draw - we have reached the end of the mouse (joysticks? forget it...).
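To pin the idea down, here's a rough Python sketch of the mapping I'm describing: key positions become pointer deltas, a super-esc chord toggles selection, and a slide that stays on one row moves the object horizontally only. All the names in it (KEY_POS, Pointer, the chord set) are made up for illustration - this isn't any existing toolkit's API.

# Hypothetical sketch of the "keyboard as touchpad" idea; nothing here is a real API.

# Give each key an (x, y) position on the board: x = column, y = row.
ROWS = ["1234567890", "qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {key: (col, row)
           for row, line in enumerate(ROWS)
           for col, key in enumerate(line)}

class Pointer:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.selected = False

    def chord(self, keys):
        # super+esc toggles selection; fire it twice quickly for a double-click.
        if keys == {"super", "esc"}:
            self.selected = not self.selected

    def slide(self, touched, step=10.0):
        # Convert a finger slide across physical keys into pointer deltas.
        # If the whole slide stays on one key row, vertical motion is dropped,
        # so the object moves only horizontally - the "precision" trick.
        same_row = len({KEY_POS[k][1] for k in touched}) == 1
        for prev, cur in zip(touched, touched[1:]):
            (x0, y0), (x1, y1) = KEY_POS[prev], KEY_POS[cur]
            self.x += (x1 - x0) * step
            self.y += 0.0 if same_row else (y1 - y0) * step

p = Pointer()
p.slide(list("asdf"))        # slide along the home row: only x changes
p.chord({"super", "esc"})    # select whatever is under the pointer
print(p.x, p.y, p.selected)  # -> 30.0 0.0 True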
You can already do similar things in Linux. I use GNOME with Compiz Fusion: ctrl-alt-arrow shifts desktops, ctrl-alt-t gets me a terminal, ctrl-alt-w gets me Firefox, ctrl-alt-m mail, ctrl-alt-u music, super-e an Exposé-style view of my windows; then there's alt-tab, ctrl-tab, ctrl-t, etc., etc.
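Just to show the shape of that setup: the bindings are nothing more than a table from key combos to commands. The sketch below is purely illustrative (the mail and music program names are guesses on my part), and it's obviously not how Compiz or GNOME actually store their bindings.

import subprocess

# Illustrative combo -> command table, mirroring the shortcuts described above.
BINDINGS = {
    frozenset({"ctrl", "alt", "t"}): ["gnome-terminal"],
    frozenset({"ctrl", "alt", "w"}): ["firefox"],
    frozenset({"ctrl", "alt", "m"}): ["evolution"],   # guess at a mail client
    frozenset({"ctrl", "alt", "u"}): ["rhythmbox"],   # guess at a music player
}

def on_combo(keys):
    # In a real setup this would be wired to key events from the display server.
    command = BINDINGS.get(frozenset(keys))
    if command:
        subprocess.Popen(command)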
But it's not complete and it's still Neanderthal. The ideal interface, as I see it, is a single surface where a standard, deliberately economical set of hand motions and key presses controls the computer at a basic level. Implemented right, you don't need to bolt on voice recognition (as a kernel module, ha ha ha ha ha) or train someone in some weird sign language or martial art like Minority Report (I know comp-fu and fu-jitsu!) just so he can move photos around on a computer screen.
Until the neural interface comes along, this is probably the direction we're heading.