Siri, the voice in the latest iPhone, is a great start for Apple. It’s been successful, probably beyond the company’s expectations. That success has made Apple even more protective in its drive to prevent “jailbreaks” – the development world’s term for efforts to open up, or gain access to, the operating system in order to run software and do other things not authorized by Apple. With so much at stake, and with the efforts to break through its defenses so relentless, Apple is vigorously defending its technology with every tool and technique available. For example, Apple filed a suit recently against Samsung for infringing on four patents, including Siri’s “voice search.” Sooner or later, though, Apple will have to yield and open up a bit. When it does, developers will rush to add Siri’s voice recognition features to their applications.
That voice recognition capability is part of a field called natural language processing, or NLP, and its potential goes far beyond the usefulness we’ve seen to date. The way it works is straightforward in principle, if not in programming. What happens when you say something like “Please call my Mom,” or simply “Call Mom”? The software recognizes the words “call” and “Mom” and links them. The same thing happens if you say “Dial Mom.” The software recognizes nouns and verbs and their synonyms and responds by taking a specific action.
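To make the idea concrete, here is a toy sketch of that kind of keyword-and-synonym matching – not Apple’s actual implementation, just an illustration of mapping different verbs (“call,” “dial”) to one intent and picking out the target:

```python
# Hypothetical synonym table: each action word maps to a canonical intent.
INTENT_SYNONYMS = {
    "call": "call", "dial": "call", "phone": "call",
    "text": "message", "message": "message",
}

def parse_command(utterance):
    """Return (intent, target), or (None, None) if no action word is found."""
    words = utterance.lower().replace(",", "").split()
    for i, word in enumerate(words):
        intent = INTENT_SYNONYMS.get(word)
        if intent:
            # Naive heuristic: treat the last word after the verb
            # as the target of the action ("Mom").
            target = words[-1] if i + 1 < len(words) else None
            return intent, target
    return None, None

print(parse_command("Please call my Mom"))  # ('call', 'mom')
print(parse_command("Dial Mom"))            # ('call', 'mom')
```

Both phrasings collapse to the same (intent, target) pair, which is the whole trick: the surface wording varies, but the action taken is identical.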
There are many ways this idea can be expanded upon. Right now you have to press a button to activate the software, but you could have a sound-activated system that monitors a room and takes a specific action whenever a certain word or sound is heard. It could be in a secure area; it could be in a nursery; or it could be in a conference room. The system could be flexible enough to adapt itself to a complicated spoken command such as “Listen for the phrase xyz, then calculate the latest values for factors 1, 2, and 3. Display the results on the conference room monitor.” The possibilities are endless. Think Star Trek and the voice-controlled systems on the Enterprise. They’re always on, always listening, and always responding.
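The always-listening setup described above boils down to a registry of trigger phrases and actions. A minimal sketch, assuming the transcripts come from some speech-recognition engine (simulated here as plain strings), might look like this:

```python
# Registry mapping trigger phrases to callbacks.
triggers = {}

def on_phrase(phrase, action):
    """Register a callback to run whenever `phrase` is heard."""
    triggers[phrase.lower()] = action

def hear(transcript):
    """Feed one recognized utterance; fire every matching trigger."""
    text = transcript.lower()
    for phrase, action in triggers.items():
        if phrase in text:
            action()

# Hypothetical conference-room example: the phrase "xyz" kicks off a report.
results = []
on_phrase("xyz", lambda: results.append("displaying factors 1, 2, 3"))

hear("listen for the phrase xyz please")
print(results)  # ['displaying factors 1, 2, 3']
```

A real system would of course run `hear` continuously on a microphone stream, but the control flow – monitor, match, act – is the same.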
You can extend the notion even further. Cell phones were getting smaller until touch displays and video cameras came along. Then they had to get bigger in order to have a usable display. But suppose you didn’t have to scroll through apps to find the one you wanted? You could just tell your phone what you wanted or where you wanted to go, and it would sort through the hundreds of apps, find the one corresponding to your needs, activate it, and report or display the results to you. It could also keep track of your location and surroundings, give you a running commentary, and make informed suggestions based on your transaction history and interests. Face it: sooner or later your phone will shrink to pen size or smaller, maybe even become part of your body as an implanted micro device.
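Sorting through hundreds of apps for the one that matches a spoken request is, at its simplest, a ranking problem. Here is a hedged sketch – the app catalog and descriptions are invented for illustration – that scores each app’s description against the request and picks the best overlap:

```python
# Invented catalog: app name -> keyword description.
APPS = {
    "WeatherNow": "weather forecast rain temperature",
    "MapQuest":   "directions map navigate route traffic",
    "Foodie":     "restaurant food dinner reservations",
}

def best_app(request):
    """Pick the app whose description shares the most words with the request."""
    asked = set(request.lower().split())
    scores = {name: len(asked & set(desc.split()))
              for name, desc in APPS.items()}
    name = max(scores, key=scores.get)
    return name if scores[name] > 0 else None

print(best_app("what is the weather forecast today"))  # WeatherNow
```

A production system would rank on far richer signals – usage history, location, synonyms – but the shape of the problem is the same: turn the utterance into a query and pick the highest-scoring app.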
Or, you might combine NLP with flexible LCD screens descended from the prototypes that tech companies like Nokia and Samsung are working on. Then you’d have a display you could unroll when you need a face-to-face with someone or want to view content the way you might view it on a TV, magazine, or newspaper. It’d be something like the old-fashioned book scroll you see in period pieces, or even more compact when it’s time to put it away. That’d fit right in with the pen analogy.
So what’s your take on the future of mobile development? Why don’t you write to us at email@example.com and tell us? If your idea(s) are interesting enough, maybe we can make that happen…