NathanE wrote: ↑13 May 2019, 09:40
One observation on learning applied to humans vs AVs. In an AV environment, learning can be pooled much more effectively than in a human one. This should accelerate safety development significantly. I have probably driven around 1 million miles in 30 years to accumulate my experience; an autonomous fleet will reach that total in days.
In addition, how many drivers have any real understanding of limit handling or other "exceptional" situations? Having raced for a number of years and spent significant time developing skills on skid pans and kick plates, I know that road driving never gave me any real sense of driving at or close to the limits of adhesion. In a pooled-knowledge AV environment, all vehicles will start from a better baseline capability.
That’s a good point, particularly for the car-control part of the problem. I can understand why Tesla crow about the miles of experience they are building up. However, this is only part of the equation. I can’t quite match your mileage, but I don’t think it matters for car control: I believe I reached the peak of my car-control abilities, admittedly not a high one, after a very small subset of those miles.
The more difficult part has been learning situational awareness, and I think this is a much harder problem for AVs. When a human encounters a situation, they can use the rational, rather than autonomous, part of their mind to analyse it, sometimes there and then but more often after the situation has passed. They can then, if they are interested, work out a revision to the rules they want the autonomous part of the brain to use so that it handles such situations “better”.
So AVs, Teslas for instance, gather data on the situations they encounter, but where is the rational mind that allows them to classify those situations and then propose improvements to behaviour? As I understand it, the neural networks are trained by being presented with a labelled set of data: the characteristics that might be important and the outcomes that are acceptable. For identifying objects in a scene, the base level comes from a group of humans marking objects and classifying them. The NN can then learn the characteristics of those objects, such as their likely trajectories.
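Roughly, that supervised step looks something like the toy sketch below. To be clear, the classes, the network and the data here are all invented for illustration and bear no relation to Tesla's actual stack; the point is simply that the network is fitted to labels humans chose.

```python
# Minimal sketch of supervised training on human-labelled frames.
# Everything here (classes, data, network size) is hypothetical.
import torch
import torch.nn as nn

CLASSES = ["car", "pedestrian", "bicycle", "motorcycle"]  # human-chosen taxonomy

# Stand-in for a batch of camera frames that humans have already labelled.
images = torch.randn(32, 3, 64, 64)             # 32 RGB frames, 64x64
labels = torch.randint(0, len(CLASSES), (32,))  # one human-assigned class per frame

# A deliberately tiny classifier; real perception stacks are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    logits = model(images)
    loss = loss_fn(logits, labels)  # penalise disagreement with the human labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```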
What happens if a Tesla sees a two-wheeled vehicle, a bicycle say, and misclassifies it as a motorcycle? Will that misclassification ever be corrected?
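I don't know how Tesla actually handle this, but one generic answer, along active-learning lines, is to flag suspect classifications for human review and feed the corrections back into the next training run. Something like the sketch below, where the model is a classifier like the one above and the thresholds, classes and speed check are all made up:

```python
# Hypothetical review loop: low-confidence or contradicted classifications
# go back to a human, and the corrections become new training data.
import torch.nn.functional as F

CLASSES = ["car", "pedestrian", "bicycle", "motorcycle"]
CONFIDENCE_THRESHOLD = 0.7

review_queue = []  # frames a human should look at again
training_set = []  # (frame, corrected_label) pairs for the next retraining run

def triage(model, frame, observed_speed_mps):
    probs = F.softmax(model(frame.unsqueeze(0)), dim=1).squeeze(0)
    confidence, predicted = probs.max(dim=0)
    label = CLASSES[predicted.item()]

    # Suspect case 1: the network itself is unsure.
    unsure = confidence.item() < CONFIDENCE_THRESHOLD
    # Suspect case 2: the prediction contradicts later observation,
    # e.g. something called a "motorcycle" never exceeding bicycle speeds.
    contradicted = (label == "motorcycle" and observed_speed_mps < 9.0)

    if unsure or contradicted:
        review_queue.append((frame, label))  # a human decides the true class
    return label

def apply_human_correction(frame, corrected_label):
    training_set.append((frame, CLASSES.index(corrected_label)))
```

The point is that the correction still comes from a person; the fleet only makes it cheaper to find the frames worth correcting.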
As time goes by and the libraries of situational data grow, I can see how fleets of AVs might help reinforce learning, making decisions more and more reliable, but I think the difficult thing is deciding on the structure of those libraries, and that is still going to be the preserve of human minds.
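To make that concrete, the "structure of the library" boils down to a schema somebody writes down before any training happens. A hypothetical cut-down example: the classifier's output layer is sized to the list of classes, so adding a new category, e-scooters say, is a human decision that forces relabelling and retraining. The fleet can collect the examples, but it cannot invent the category.

```python
# The taxonomy is a human artefact; the network is built around it.
import torch.nn as nn

TAXONOMY_V1 = ["car", "pedestrian", "bicycle", "motorcycle"]
TAXONOMY_V2 = TAXONOMY_V1 + ["e_scooter"]   # a human decided this class matters

head_v1 = nn.Linear(128, len(TAXONOMY_V1))  # 4-way output layer
head_v2 = nn.Linear(128, len(TAXONOMY_V2))  # must be retrained on relabelled data
```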
Fortune favours the prepared; she has no favourites and takes no sides.
Truth is confirmed by inspection and delay; falsehood by haste and uncertainty: Tacitus