There are some ludicrous claims about AI (artificial intelligence) that received a good debunking (boldface mine):
AI pioneer Yoshua Bengio told BBC News he didn’t like the Terminator films for several reasons.
“They paint a picture which is really not coherent with the current understanding of how AI systems are built today and in the foreseeable future,” says Prof Bengio, who is sometimes called one of the “godfathers of AI” for his work on deep learning in the 1990s and 2000s.
“We are very far from super-intelligent AI systems and there may even be fundamental obstacles to get much beyond human intelligence.”
In the same way Jaws influenced a lot of people’s opinions on sharks in a way that didn’t line up with scientific reality, sci-fi apocalyptic movies such as Terminator can generate misplaced fears of uncontrollable, all-powerful AI.
“The reality is, that’s not going to happen,” says Edward Grefenstette, a research scientist at Facebook AI Research in London…
Today’s AI agents struggle to excel at more than one task, which is why they’re often referred to as “narrow AI” systems as opposed to “general AI”.
But it would be more appropriate to call a lot of today’s AI technology “computers and statistics”, according to Neil Lawrence, who recently left Amazon and joined the University of Cambridge as the first DeepMind professor of machine learning.
“Most of what we’re calling AI is really using large computational capabilities combined with a lot of data, to unpick statistical correlations,” he says…
We should be more concerned with how humans abuse the power AI offers, Prof Bengio says.
How will AI further enhance inequality? How will AI be used in surveillance? How will AI be used in warfare?
…But we don’t need to look into the future to see AI doing damage. Facial-recognition systems are being used to track and oppress Uighurs in China, bots are being used to manipulate elections, and “deepfake” videos are already out there.
“AI is already helping us destroy our democracies and corrupt our economies and the rule of law,” according to Joanna Bryson, who leads the Bath Intelligent Systems group, at the University of Bath.
One thing AI, as a statistical technique, needs in order to work well is clean data: the data used to train the AI, and the data used in its daily operations, must be standardized. In other words, messy people have to be forced to conform to the needs of the AI (something people whose last name is “Null” have faced). The other side of that coin is that AI often can’t handle ‘outside the box’ situations. For example, those adorable little food delivery robots have a dark side: they make it difficult, sometimes impossible, for people in wheelchairs to cross the street.
It’s cases like this that highlight one of the problems of an AI-driven world. People will be forced to conform to the AI, and we might discover that’s not a good world for messy people to live in.

“Most of what we’re calling AI is really using large computational capabilities combined with a lot of data, to unpick statistical correlations,” he says…

Exactly. Another way to say this, quoting Roger Schank, is “There’s no such thing as AI.” I’d put it slightly differently: “There’s no ‘I’ in AI.” Marcus and Davis’s book “Rebooting AI” explains in detail how current AI techniques can’t possibly do anything any normal person would call “intelligent”. (tl;dr: current AI programs don’t even try.) It’s all parlor tricks and statistical games. (I have an all-but-thesis in AI from Schank.)

Which is to say, there is currently nothing in AI, or in anything that will be called AI for the next 25 years or so, that needs to be dealt with or thought about in any way that’s at all different from non-AI computer technologies. There really isn’t. Bank schemes for rating lenders should be evaluated for what they actually do, not for whether or not they’re “AI”.

(Your point about the quality of the data being important is, of course, correct. But it’s old news: GIGO dates back to 1965.)