Love is Programming Handheld in Hand
Musing about Apple's mistake of pissing off developers with the App Store, Paul Graham wonders what a smart phone would have to offer to make developers want it instead of an iPhone. He wants to see someone make a software development environment worth using on a handheld device. It would have to be good enough that you'd want to program on your phone, with its small screen and limited keyboard,[1] rather than on a laptop or desktop with huge monitors and ergonomic keyboards.[2]
After using Eclipse for five years, I can't imagine doing significant Java development (particularly debugging) on a handheld device. Mobile device user interfaces are all about presenting only the immediately relevant information, while an IDE[3] is really good at providing a bunch of information at once.
I still use a 24x80 text editor when writing in scripting languages, partly because the problems are usually small enough to fit in just a few files and the languages are high-level enough that a significant amount of code fits in a terminal window. I can imagine writing Python or Ruby on an iPhone, or even Lisp if the editor had good parenthesis matching. Perl and Haskell might be frustratingly symbol-heavy. But even though someone could write Python in a text editor on a handheld device, I don't think they would. It's just not as enjoyable as using a full keyboard on your desk.
A successful development environment on a mobile device should make use of the unique features a handheld offers. Typing on a phone may be less fun than using a laptop, but what if you could program with gestures? Tilt your phone to the left to generate an assignment. Move it in a circle to create a loop. Touch a variable to reference it again. Use multi-touch zooming and scrolling to select a function from an API.
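To make that a little more concrete, here's a toy sketch in Python. The gesture names, the template table, and the emit() helper are all invented for illustration; no real recognizer API is assumed.

```python
# Hypothetical sketch: mapping recognized gesture events to code constructs.
# Gesture names and templates are invented, not any real phone API.

GESTURE_TEMPLATES = {
    "tilt_left": "{target} = {value}",        # tilting generates an assignment
    "circle":    "for {var} in {iterable}:",  # a circular motion opens a loop
    "touch_var": "{var}",                     # touching a variable references it
}

def emit(gesture, **slots):
    """Turn one recognized gesture into a fragment of program text."""
    template = GESTURE_TEMPLATES.get(gesture)
    if template is None:
        raise ValueError(f"unrecognized gesture: {gesture}")
    return template.format(**slots)

# A short "gesture session" that builds a loop and its body:
print(emit("circle", var="item", iterable="items"))
print("    " + emit("tilt_left", target="total", value="total + item"))
```

The point isn't the templates themselves; it's that the device's recognizer, not a keyboard, supplies the editing events.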
Historically, attempts at visual programming languages haven't yielded many successes at abstraction levels lower than GUI layout and UML, but I think a graphical language designed from the ground up to fit a handheld environment could be quite elegant. Text is very good at representing symbolic computation, but it's not the only way. Constraints often lead to beautiful art; if the crutch of an easy keyboard weren't available, I'll bet someone would come up with something really clever. A lot of programmers are good at spatial reasoning, so let's try breaking nonlinear programs out of the linear medium of text.
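As a rough illustration of what "nonlinear" could mean, here's a tiny program held as a graph of nodes instead of lines of text. The node encoding and the evaluator below are made up for the example, and the evaluator is hardwired to this one graph for brevity.

```python
# A toy program as a graph: y = x * 2. A spatial editor could lay these
# nodes out on a touch screen and let you drag edges between them.

nodes = {
    "x":   {"kind": "input"},
    "two": {"kind": "const", "value": 2},
    "mul": {"kind": "op", "fn": lambda a, b: a * b},
    "y":   {"kind": "output"},
}
edges = [("x", "mul"), ("two", "mul"), ("mul", "y")]

def evaluate(inputs):
    """Walk the graph from inputs to outputs (hardwired order for brevity)."""
    vals = {"x": inputs["x"], "two": nodes["two"]["value"]}
    args = [vals[src] for src, dst in edges if dst == "mul"]
    vals["mul"] = nodes["mul"]["fn"](*args)
    return {"y": vals["mul"]}

print(evaluate({"x": 21}))  # {'y': 42}
```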
Developing on a handheld would obviously be handy when you're working on an application for that device. Make a change to the tilt-response code, then tilt your device to test it. Take a picture[4] and feed it straight to a test case. With lots of sensor input, mobile devices seem like an ideal platform for neural nets and other fuzzy techniques. On the development side, you could present an example to the device and immediately classify it, quickly building a model.
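Here's a sketch of that show-and-classify loop in Python, using a simple nearest-centroid classifier. The (x, y, z) readings are made-up stand-ins for real accelerometer input.

```python
# Hypothetical sketch: "show the device an example and classify it".
# The sample readings below are invented, not from a real sensor API.
import math

centroids = {}  # label -> (count, running mean vector)

def train(label, sample):
    """Fold one (x, y, z) example into the running mean for its label."""
    count, mean = centroids.get(label, (0, (0.0, 0.0, 0.0)))
    count += 1
    mean = tuple(m + (s - m) / count for m, s in zip(mean, sample))
    centroids[label] = (count, mean)

def classify(sample):
    """Return the label whose centroid is nearest to the sample."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl][1], sample))

# Wave the phone, tell it what the motion was, repeat; then classify.
train("shake", (2.1, 0.3, 9.9))
train("shake", (1.8, 0.5, 10.2))
train("still", (0.0, 0.0, 9.8))
print(classify((2.0, 0.4, 10.0)))  # -> "shake"
```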
These are all hand-wavy suggestions of possible directions. I've hardly ever used a cell phone, let alone developed a mobile app, so my conceptions may be off base. Yet I think my key message is valid: to make handheld development attractive, it needs to be approached from the ground up. The metadeveloper needs to focus on solving a class of problems[5] and find ways to do so using things small computing devices are really good at. I've focused on sensor input here, but mobile devices have other unique features. Would dictating pseudocode to your phone be useful? Maybe there are ways to take advantage of location information and remote networking in programming. In the end, I think successful handheld development will be as different from desktop IDE development as that is from simple interactive text editors, and as those are from programming with punch cards.
[1] Not owning a smart phone, I'm assuming that typing on a handheld device is more of a chore than using a full keyboard. I know Apple and others have done clever things with predictive text, automatic spelling correction, and similar techniques to make typing easier on a phone. Unmodified, I suspect those would not work well for programming, where function names deliberately deviate from proper spelling and punctuation has special meanings that differ between languages. You could build language models for programming languages, of course, but you could also embed such models in IDEs (most already do) and use a full keyboard. I think dropping the keyboard metaphor would be preferable for handheld input. Why not leverage the motion and orientation sensors and use arm and wrist motions to "paint" your words in thin air? This seems like it would work particularly well for Chinese. Nonetheless, I don't think it would be faster than using a desktop keyboard.
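At its simplest, such a model could just complete identifiers from token frequencies in the project's own source. A toy sketch (the corpus string and the complete() helper are invented for illustration):

```python
# Hypothetical sketch: a crude "language model" for code completion,
# built from identifier frequencies in the project's own source.
import re
from collections import Counter

corpus = """
def parse_header(data): ...
def parse_body(data): ...
header = parse_header(raw)
"""

tokens = Counter(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", corpus))

def complete(prefix, n=3):
    """Suggest the n most frequent identifiers starting with prefix."""
    matches = [(tok, cnt) for tok, cnt in tokens.items()
               if tok.startswith(prefix)]
    matches.sort(key=lambda pair: -pair[1])
    return [tok for tok, _ in matches[:n]]

print(complete("pa"))  # -> ['parse_header', 'parse_body']
```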
[2] I actually hate ergonomic keyboards; I'm most comfortable on a laptop or Apple keyboard. The most RSI-inducing computer input I've done is using the scroll wheel on a cheap Microsoft mouse. But the key (hah!) feature of an extended keyboard for programming in rich environments is hot keys, which are hard to accomplish with a keyboard interface like the iPhone's.
[3] For the non-programmers in the crowd, IDE stands for Integrated Development Environment. Here's a screenshot of the information overload I'm talking about.
[4] Why do phones come standard with (crappy) cameras these days? My camera can't make low-quality phone calls; I don't need my phone to take low-quality pictures. The benefits I see to cell phone cameras are (a) you've always got one on you and (b) it's immediately connected to software and the network. That makes them ideal for things like taking a picture of a barcode and comparison shopping instantly. The picture quality only has to be good enough to make out black and white lines.
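That's plausible because decoding a barcode scanline only takes a threshold and some run lengths. A toy sketch, with made-up pixel values standing in for a camera row:

```python
# Hypothetical sketch: why barcode scanning tolerates a bad camera.
# Threshold one scanline of grayscale pixels into runs of black/white bars.

scanline = [12, 15, 240, 235, 10, 8, 9, 250, 245, 11]  # 0=black, 255=white
THRESHOLD = 128

def bar_widths(pixels):
    """Collapse a scanline into (is_black, run_length) pairs."""
    runs = []
    for px in pixels:
        black = px < THRESHOLD
        if runs and runs[-1][0] == black:
            runs[-1][1] += 1
        else:
            runs.append([black, 1])
    return [(black, width) for black, width in runs]

print(bar_widths(scanline))
# [(True, 2), (False, 2), (True, 3), (False, 2), (True, 1)]
```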
[5] The best programming languages are the result of somebody thinking "There's got to be a better way" to solve a set of problems he was working on. Perl is really good at working with text because that was the itch bugging Larry Wall. Dennis Ritchie created C to work on Unix, and it's still the most popular language for writing operating systems. Guido adds new features to Python by trying out the syntax on a sample of existing code and deciding which version produces the most readable result.
no subject
Interestingly (at least for me), it has a flat type system, not a hierarchical one, which is something I've been proposing in papers for a good while.
I still think C is at the level of a moderately well-done two-semester grad-level programming language class. The main reasons for my complaint include 1) it looks like a higher-level language, but isn't, 2) its semantics are not well defined (For instance, the order of evaluation of expressions is unspecified: in f() + g(), the compiler may call either function first. A friend (a testing manager at StorageTek) once ran the same program through several different C compilers and got several different results. That's a big fail in my book. If a program is syntactically correct, one should be able to predict its behavior), and 3) reliance on pointer programming requires a level of sophistication that is not needed in pointer-free environments.
Actually, #1 and #3 may not really be that much of a problem if C programming were confined to low-level work. Unfortunately, C is used for systems where it is not the best tool.
no subject
The best way to achieve a decent screen size is with a head-mounted display. It would take a major breakthrough to change this. HMDs can be very small, light, and high-res. They can be transparent or opaque and immersive, monocular or binocular, obvious or hidden to outside viewers.
One-handed chording keyboards are the traditional choice of the wearable community, but there's a steep learning curve. Other alternatives include arm-mounted one-handed keyboards, or half- to full-size keyboards that fold or roll up for carrying.
One idea I've never seen, that seems promising to me, is a waist-mounted keyboard tray - you could use it with both hands while standing, for example while waiting in line. If it were a tray that held a folding or roll-up keyboard, each part could also be used separately - the tray could hold other items when your hands were full, and the keyboard could be used on a table when you weren't wearing the tray.
I think in the short term, sticking with the text and keyboard paradigm will be most successful - so these are ways to do that effectively. But I agree that in the long term, we'll be finding new possibilities that leverage the advantages of handheld technology, instead of treating it like a retarded cousin that needs extra help.
no subject
Fortunately, Apple and others have taught people that gadget sales are higher when someone can look cool using the product, Bluetooth headsets notwithstanding. So if you trip over the kerb, you'll at least be able to do so in style.
I think there's an ocean of room for growth in responding to verbal requests. Apple's Knowledge Navigator video shows a lot of the subtleties of human-to-human communication. Like most things that come naturally to humans, programming it is really hard, but the more we can put the burden of translation between humanese and computerese on the computer's side, the better. Maybe device miniaturization can be a catalyst for improved natural language interaction. I think this is especially key in third-world countries, where the only computer many people ever interact with is a mobile phone, so they aren't likely to build up instincts about computer interaction.