Interesting question with some interesting responses.
To me the discussion looks a lot like the very serious and deeply controversial debate going on among inventors, developers, and end users of AI-based applications like GPT-3. Right now much of that discussion is happening out of public view. But even when published reports do surface, the technical aspects are so abstract and dense that the average citizen or consumer can’t follow them.
https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works
Basically the experts are arguing and warning about two things: 1) black-box AI learning functions that are increasingly self-generated and simply can’t be explained or analyzed by humans using current methods; and 2) the defensive aspect of AI outputs, which can generate vague or even completely wrong answers in order to keep the feedback loop going. At the root of the problem is that most advanced AI development now sits completely outside the standard computer-coding model, relying instead on “neural networks” that let AI systems link, share, and learn from data far beyond any original program inputs. Training datasets for some text-to-image systems involve billions, and in a few cases nearly a trillion, data points, none of which are determined by clearly defined program parameters.
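The gap between the “standard coding model” and a neural network can be sketched in a few lines. This is a hypothetical toy illustration (not from the article): first an explicit, human-written rule, then a single artificial neuron whose “program” ends up being numbers fitted from data rather than logic any person wrote.

```python
import math
import random

# The standard coding model: behavior is an explicit, human-written rule.
def is_spam_rule(word_count, link_count):
    # Every threshold here was chosen by a person and can be inspected.
    return link_count > 3 and word_count < 50

# The neural-network alternative: behavior lives in learned weights.
# Train one sigmoid neuron on toy data with plain gradient descent.
random.seed(0)
data = []
for _ in range(200):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    label = 1.0 if x1 + 2 * x2 > 0 else 0.0  # hidden "true" pattern
    data.append((x1, x2, label))

w1 = w2 = b = 0.0
for _ in range(500):
    g1 = g2 = gb = 0.0
    for x1, x2, label in data:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # sigmoid
        g1 += (p - label) * x1  # logistic-loss gradient terms
        g2 += (p - label) * x2
        gb += (p - label)
    n = len(data)
    w1 -= 0.5 * g1 / n
    w2 -= 0.5 * g2 / n
    b -= 0.5 * gb / n

# The "program" is now just these numbers; nobody wrote them by hand,
# and in a production network there are billions of them, not two.
print(w1, w2)
```

Scale that second pattern up by many orders of magnitude and you get the explainability problem the experts describe: the learned weights do the work, but no line of human-written code states the rule they encode.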
So you’re impressed by a chatbot that seems to display something like human thought when responding to a question that involves consecutive counting? But then you’ve got doubts about the answer you get when you ask the same question again with slightly altered syntax? And what about the AI facial recognition that repeatedly misidentifies people of color, or the medical bot that can’t spot a woman’s breast cancer in CAT scans but picks up early prostate problems for men every time? You’re not the only one:
“Frankly, it doesn’t really matter if we create intelligent chatbots or even want to—that’s a distraction. We should be asking instead questions along the lines of: “Are large language models justifiable or desirable ways to train artificial systems?” or, “What would we design these artificial systems to do?” or if unleashing a chatbot to a public whose imagination is constantly under siege by propagandistic and deceptive depictions of artificial intelligence (and using that public to further train the chatbot) is something that should be allowed to happen.”
https://www.vice.com/en/article/dypamj/image-generating-ai-keeps-doing-weird-stuff-we-dont-understand