Questions about bots and algorithms - is anyone thinking this through?
I don't know if you saw the news last week that the stock of United Airlines' parent company crashed, apparently on the re-release of a six-year-old story about the UAL bankruptcy by a Florida newspaper. Fingers were pointed at Google and its news bot. But there was a second shoe that dropped, so to speak, that is even scarier in my opinion.
See the article in the Sunday New York Times Week in Review section (9/14/08) by Tim Arango, titled "I Got the News Instantaneously, Oh Boy," which covered what happened in detail. What I didn't realize is that Wall Street has bots of its own, reading the electronic news feeds, measuring the tone of the news, and then initiating automated trades based on an algorithm.
So what happened to UAL was apparently one algorithm serving up a piece of history, followed by a second algorithm processing that history without realizing it was history, and then selling off billions just like that.
Thank goodness it was just business equity, which can be fixed, sort of. What if this had been national defense? I had the luck to catch the 1980s movie War Games on Sunday morning while I was reading the paper - an excellent movie on a related subject, worth seeing again, including its great ending line delivered by the WOPR itself - but still scary when you think about it.
We now have multiple systems interacting with each other in ways nobody thought of. True, they are algorithms, but these algorithms are making huge decisions without human intervention.
Who is testing this stuff? Nobody has tested this entire metasystem - Google plus the stock market bots would be my guess. Are we likely to have further consequences? Absolutely. We probably see them all the time and don't know what we are seeing. Have you noticed that there is often an echo in stock prices - shortly after a bad day, there is often another bad day that comes out of the blue? I would not be surprised if the bots are reading and reacting to the bad news of the first day and triggering a second.
One consequence is that we should see all this play out in court. Google and Wall Street will not be able to shield their algorithms from scrutiny, since it was the algorithms that committed the deed in the first place. I suspect we will get a glimpse at the algorithms.
Back to coding - I wonder about design flaws in these algorithms. Probably the most glaring is the absence of a sense of history in the Wall Street algorithm - it should have known it was looking at old data. Or, barring that, it should have called for human review before a trade so massive. Not a design flaw, but certainly an area we can all discuss: can the emotional content of the written word be accurately measured? Interesting, but how do we settle that when humans themselves often cannot agree on what the written word means?
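The kind of sanity check I have in mind is not exotic. Here is a minimal sketch - the function name, thresholds, and dates are entirely my own invention, not anything from the actual systems - of a guard that refuses stale news and kicks oversized trades to a human:

```python
from datetime import datetime, timedelta

# Hypothetical limits - real systems would tune these, if they exist at all.
MAX_NEWS_AGE = timedelta(days=2)        # anything older is history, not news
HUMAN_REVIEW_THRESHOLD = 10_000_000     # dollar value that demands a human

def should_trade(article_date, trade_value, now):
    """Return 'trade', 'review', or 'ignore' for a news-driven order."""
    if now - article_date > MAX_NEWS_AGE:
        return "ignore"                 # stale story: the UAL case
    if trade_value >= HUMAN_REVIEW_THRESHOLD:
        return "review"                 # too big to automate away
    return "trade"

# A six-year-old bankruptcy story should never trigger a sell-off:
print(should_trade(datetime(2002, 12, 9), 500_000_000,
                   now=datetime(2008, 9, 8)))   # ignore
```

Ten lines of code, and the UAL incident does not happen. That is what makes the omission so glaring.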
Should there be broader testing? Human safeguards? Common sense built into the algorithms? Now, that would be a good trick.
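As for measuring the tone of the written word: the naive version is easy to write and just as easy to fool. This toy scorer - the word lists are made up by me and bear no relation to what Wall Street actually runs - shows how crude the measurement can be:

```python
# Made-up sentiment word lists for illustration only.
POSITIVE = {"profit", "growth", "record", "strong"}
NEGATIVE = {"bankruptcy", "loss", "crash", "default"}

def tone(text):
    """Crude tone score in [-1, 1]: (pos - neg) / (pos + neg) sentiment words."""
    words = [w.strip(".,").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(tone("UAL files for bankruptcy"))          # -1.0
print(tone("Record profit despite the crash"))   # 0.33...
```

A scorer like this reads "bankruptcy" as bad news whether the story is from this morning or from 2002, and it cannot tell irony from sincerity. If the real bots are much fancier versions of this, the question of whether emotion can be measured is still wide open.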
I don't think normal ethics will help here - the motivations for creating these bots are too strong. Rather, I think this will play out in the courts, and the usual forces - legal liability - will serve to check the rise of the bots. Perhaps the developers of these tools will learn to treat them like precocious two-year-olds instead of unleashing them. Finally, we may have the interesting situation of algorithm as defendant (well, not quite, but maybe soon).