Gypsies, Tramps and Thieves
Everybody knows the gypsy palm reader down the block is a fraud when she pretends to tell your fortune. Just what makes news reporters and scientists better equipped to see the future?
For example, a couple of days ago, the New York Times ran a story about the Google X labs.
The “X” in the headline ought to be the giveaway, and if that’s not enough, the headline refers to the future, which confirms that nonsense is afoot. The story itself exudes journalistic bragging about its ability to ferret out secret information, like the existence and location of Google X.
It’s so secret that the Times was able to photograph one of the computer scientists, Sebastian Thrun, posing at a Google whiteboard with some stray statistics littered on it. Thrun, a professor at Stanford, spiritual home of Google, had a lot to do with Google’s driverless car. The connection between that project and the web search giant becomes clear in the article:
“Google could sell navigation or information technology for the cars, and theoretically could show location-based ads to passengers as they zoom by local businesses while playing Angry Birds in the driver’s seat.”
After all that money and effort by all those PhDs, we end up not far from the ubiquitous games of solitaire on millions of office computer screens not so long ago. I would say that Google is having the very human trouble of not knowing what to do with its billions. Besides, doesn’t Google know those local businesses want people to look up, now that they can give their full attention to shopping?
Not to be outdone by Google, the government, in particular DARPA, the military’s main research arm, is planning a big pie-in-the-sky project to create better computer security. At least, the overall goal is a truly useful one — and success would have the positive side effect of protecting Facebook accounts from vandalism.
In a couple of days, DARPA will give military contractors and research organizations a briefing on its plan at an Active Authentication Industry Day affair.
The announcement says in its own peculiar language that DARPA wants “to advance research in the area of new software biometric modalities for the purpose of eventually using those biometric modalities for computer system authentication.”
What that means is that DARPA wants to replicate what Hollywood already knows how to do with all those eyeball scanners and voice recognizers. And it wants to go a little beyond, into territory where even Google, IBM and Facebook haven’t yet gone.
I guess that DARPA has realized that many of the most sensitive pieces of information in the world are not stored in the little case next to your monitor, either at home or at work. These things are stored on networked servers — be it a cloud, a cluster or a colossus. The hacker’s target could be thousands of miles away.
Hollywood knows this problem isn’t solved, since it always shows us some geeky kid hacking into computer systems by typing four or five guesses at a password. We all know that in real life, the machine doesn’t flash back “Access Granted” in big letters. Hacking is a bit more difficult in real life, and no one knows better than our military, which has suffered numerous embarrassing breaches, even though it plays both sides of the game.
The means to the biometric ends could be many things, like identifying particular movements in the way people type on the keyboard and move the mouse around, and, more intriguing, recognizing patterns in language as we interact with the machine.
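One of those modalities, keystroke dynamics, is simple enough to sketch. The toy below enrolls a user from the rhythm of their typing and accepts or rejects a new session by comparing average inter-key intervals; the timings, the tolerance, and the whole one-feature model are invented for illustration — any real system would use far richer features and proper statistical modeling.

```python
# Toy sketch of keystroke-dynamics authentication.
# The intervals and the tolerance are invented for illustration only.

def profile(intervals):
    """Average inter-keystroke interval (seconds) from recorded samples."""
    return sum(intervals) / len(intervals)

def matches(enrolled_mean, new_intervals, tolerance=0.05):
    """Accept if the new session's mean timing is within tolerance."""
    return abs(profile(new_intervals) - enrolled_mean) <= tolerance

# Enrollment: intervals recorded while the legitimate user typed.
enrolled = profile([0.11, 0.13, 0.12, 0.10, 0.14])

print(matches(enrolled, [0.12, 0.11, 0.13, 0.12]))  # similar rhythm: True
print(matches(enrolled, [0.30, 0.28, 0.33, 0.31]))  # different rhythm: False
```

The point of the continuous, behavioral approach is exactly this: the check can run silently in the background for the whole session, instead of once at login.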
Again, Hollywood has been ahead for years. Computers have been talking to us for decades. HAL looked at the person who sat down and said, Hello, Dave. It knew as well as you or I do who was sitting at the keyboard. I wonder what ingenious ways will be offered to figure out how people introduce themselves to the faraway networks.
Some things in science fiction are surpassed by reality after a while. Other things in human experience remain opaque despite strenuous efforts of business, government and academia. Like language understanding.
I’m thinking of IBM’s warehouse-sized Jeopardy-playing Watson. In some important ways, Watson is doing more than any artificial intelligence venture has ever done before, but let’s not get carried away. It has conquered a game that is built on an artificial structure, has only short, formulaic answers, and whose hard questions often turn on cheap puns.
Don’t misunderstand, Jeopardy is difficult, but in many ways it’s closer to a game of chess, with its astronomically large but finite number of possible moves, than to the complete language ability of human beings, which is provably infinite in variety.
And no matter what the press releases and the reporters say about Watson, the men and women who built it know it doesn’t understand anything.
No sooner had I written this than I came across a relevant item, which describes how a computer scientist from the University of Delaware is cleaning up on the real Jeopardy with the help of his computer program, which helped him study for the game show by predicting sequences of questions. That predictability is what I mean by the artificial game aspects of Jeopardy, compared with ordinary human communication.
The guy from Delaware, Roger Craig, is going to try to pare his software down into a smartphone app. The humongous Watson does a lot of statistical modeling on what types of sources contain the answers to different kinds of questions, and researchers spent years training the machine.
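The study-prediction idea behind Craig’s approach can be sketched in a few lines. This is not his actual software — his real system mined thousands of archived games — just a toy that tallies how often categories have appeared in an invented mini-archive and ranks what a contestant should study first.

```python
from collections import Counter

# Invented mini-archive of past clue categories; the real data set
# would come from thousands of archived Jeopardy games.
past_clues = [
    "U.S. Presidents", "World Capitals", "Opera", "U.S. Presidents",
    "Shakespeare", "World Capitals", "U.S. Presidents", "Potpourri",
]

def study_plan(clues, top_n=3):
    """Rank categories by how often they have appeared historically."""
    return [category for category, _ in Counter(clues).most_common(top_n)]

print(study_plan(past_clues))
# ['U.S. Presidents', 'World Capitals', 'Opera']
```

That a frequency table over past games gives any predictive edge at all is the tell: the game draws from a bounded, repeating pool of topics, which is precisely what ordinary human conversation does not do.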