iPad Dictation in the Lab & Cloud Computing

CultOfMac bring up a really interesting problem with Siri/Dictation… what you say is sent back to Apple and could be retained.

I can see why Apple do it; in reality it is the only way to get Dictation working with any kind of useful real-world accuracy. Not only does the recognition require far more data, and potentially more CPU, than you have on your iDevice, but by gathering lots and lots of samples in one place, they can continually improve recognition performance in the real world. A rough sketch of that round trip follows.
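To make the trade-off concrete, here is a minimal sketch (in Python) of what a cloud-dictation client effectively does: the device performs no recognition itself, it just ships the raw audio off to a remote service and gets text back. The endpoint, headers, and retention behaviour below are all invented for illustration; Apple’s actual protocol is not public.

    # Hypothetical sketch of a cloud-dictation round trip. The endpoint
    # and request format are invented for illustration; Apple's real
    # protocol is not public.
    import urllib.request

    def transcribe(audio_bytes: bytes) -> str:
        # The full audio sample leaves the device here -- the step
        # that confidentiality clauses care about.
        req = urllib.request.Request(
            "https://recognizer.example.com/v1/transcribe",
            data=audio_bytes,
            headers={"Content-Type": "audio/wav"},
        )
        with urllib.request.urlopen(req) as resp:
            # The service may keep the sample to retrain its models,
            # which is exactly the retention question raised above.
            return resp.read().decode("utf-8")

Nothing in that exchange is visible to the user beyond the returned text, which is why the disclosure is so easy to overlook.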

However, this is probably a technical violation of some confidentiality clauses and government regulations, and dictating something novel to a third party’s servers could even count as a prior-art disclosure for patent purposes.

There’s a general point here about Cloud Computing. Yes, there’s a lot of Corporate-level discussion about “The Cloud”, and people are starting to wake up to the privacy and legal implications of having your data on other people’s servers. But in parallel with that, and largely unremarked, as our phones get more capable, they are also splattering our information all over the Cloud in various ways – some obvious, some declared but invisible (as with Siri), and some rather less honourable (the recent Path/Address Book affair being one example).

I don’t think this is going to stop adoption of iPads and other devices in the Lab; the trend is too powerful to resist. And I don’t think we’re going to see the emergence of a class of devices which “Do things properly” from an Enterprise perspective – Apple’s consumer focus has clearly shown where the market traction is, and anything focused purely on the Enterprise is never going to reach critical mass.

But I do think this is something to keep in mind; paranoia isn’t going to work, but Inter-company NDAs and Patent Law are going to have to start wrangling with Consumer-focused Cloud services. Those two worlds haven’t really interacted much before, and it is going to be interesting to watch it play out.
