News

Are we ready for AI we don't understand?

EverythingElse · Posted by Eskil Thu, May 24, 2018 14:05:43

Some day in the future, a little girl in a red dress runs out into the road and gets hit by a self-driving car. A few weeks later, a young boy in a red jacket on the other side of the planet gets hit by another self-driving car from the same maker. A few months go by and the pattern is clear: for some reason the AI doesn't understand that kids dressed in red are something not to drive into.

If this were a faulty brake pedal, airbag or ignition switch, the problem could be found and fixed, and cars could be recalled so that the issue could be addressed. As costly as this might be, the punitive damages a car maker could face if they were to knowingly ignore a faulty car that would hurt or kill people would be far greater.


However, with neural networks and machine learning, the AI driving the car was in large part not designed by an engineer. It was trained using millions and millions of miles of traffic data recorded by cars with cameras and other sensors. The neural network looks at this data and tries to find patterns in traffic and the responses expected of the driver.
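To get a feel for what "training" means here, consider the deliberately tiny sketch below. It assumes a single linear neuron written in plain C, with made-up names like predict and train_one_frame; a real driving system is vastly bigger, but the principle is the same: the behaviour lives entirely in an array of numbers that gets nudged, frame by frame, toward whatever the recorded human driver did.

#define INPUTS 16      /* toy sensor frame size; a real system would use millions of values */
#define RATE   0.01f   /* learning rate: how hard each frame nudges the weights */

float weights[INPUTS]; /* the entire learned behaviour lives in these numbers */

/* predict how hard to brake for one sensor frame */
float predict(const float *frame)
{
    float sum = 0;
    for(int i = 0; i < INPUTS; i++)
        sum += weights[i] * frame[i];
    return sum;
}

/* nudge the weights toward what the human driver actually did in this frame */
void train_one_frame(const float *frame, float driver_braking)
{
    float error = driver_braking - predict(frame);
    for(int i = 0; i < INPUTS; i++)
        weights[i] += RATE * error * frame[i];
}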


The problem here is that if something goes wrong and we have accidentally taught the machine that it's OK to hit kids if they wear red clothes, it's very hard to figure out what in the millions of miles of data made it think it was OK. There is no line in the code that can easily be fixed that says:


if(kid && color != red)
    car_brake();
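
Instead, once a network like the toy one sketched above has been trained, the closest thing to that line is a blob of numbers (the values below are made up purely for illustration):

float weights[] = {0.031f, -1.207f, 0.884f, 2.115f, -0.442f}; /* a real system has millions of these */

float brake_amount(const float *sensor_frame)
{
    float sum = 0;
    for(int i = 0; i < 5; i++)
        sum += weights[i] * sensor_frame[i]; /* nothing in here is named "kid" or "red" */
    return sum; /* if this comes out wrong for kids in red, which weight do you fix? */
}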

This causes a huge liability problem. If you go in front of a judge and say that there is no real way to know why the AI drove into the child, and that it's not something that can easily be fixed, then no matter how good the overall safety record is, the judge will order all cars off the road until the company can guarantee that it won't happen again. With machine learning you can't really make that guarantee. Saying "if we keep training, it will probably get better at not hitting kids" won't really cut it in a legal or PR context.

We are going from a paradigm where we understand the code but the code doesn't understand the world, to a paradigm where the code understands the world but we don't understand the code.

Our legal system is based on the idea that we are each responsible for what we do and that we know what we are doing. It's almost impossible to guarantee anything that comes out of a machine learning algorithm, no matter how high its success rate is. In our society we demand that when things go wrong we can find the issue and have it fixed, so that it doesn't go wrong again. We allow for mistakes, but there is a reason why we don't allow for repeated mistakes.

If I were in the legal department of any company basing its tech on machine learning, I would be very worried about this. What kind of promises can we make, and how responsive can we be, when something is wrong with a product no one really understands in depth? What happens when your translation system is sexist, or your camera system can't see black people?

A great feature of technology is when we can understand it. If we understand its capabilities and limitations we can trust it to do some things, but also know what it can't be trusted with. A steering wheel is understood: we know when to blame its maker and we know when to blame its user.



Comments (1)

Posted by Ed B. Thu, May 24, 2018 17:03:35

Just a heads-up -- the RSS feed is dropping all your paragraphs & headings.

This was, however, interesting enough to read as a giant wall of text.