Ask Delphi: Judgment Day
[Image by blog.trustico.com]
Do you have any ethical questions you want answered? Perhaps there is a moral conflict within you that you don't know how to resolve. Maybe you need advice, but another human's advice isn't enough. If that's the case, you can now ask a new artificial intelligence your most troubling questions. However, you may not like the answers it gives…
Last month, the Allen Institute for AI launched Delphi, its new artificial intelligence software, which takes an ethical question and responds with a judgment. Its answers are based on a variety of sources and web pages from across the internet, along with more than 1.7 million responses to ethical questions collected from crowd workers through Mechanical Turk. However, the internet sources also include content of an "inappropriate" nature, such as Reddit posts, so Delphi's answers may seem slightly off.
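To get a feel for what "1.7 million crowd-sourced judgments" means as training data, here is a toy sketch of one common preprocessing step: collapsing several workers' labels for the same prompt into a single majority-vote label. This is purely illustrative — the function name, the label strings, and the aggregation rule are assumptions for this example, not the Allen Institute's actual pipeline.

```python
from collections import Counter

def aggregate_judgments(annotations):
    """Collapse multiple crowd-worker labels per prompt into one
    majority-vote label. Toy illustration only, not Delphi's
    real training procedure."""
    labels = {}
    for prompt, votes in annotations.items():
        # Pick the most common label among the workers' votes.
        labels[prompt] = Counter(votes).most_common(1)[0][0]
    return labels

# Hypothetical example: three workers judge each prompt.
annotations = {
    "driving drunk": ["it's wrong", "it's wrong", "it's bad"],
    "helping a friend move": ["it's good", "it's good", "it's nice"],
}
print(aggregate_judgments(annotations))
# → {"driving drunk": "it's wrong", "helping a friend move": "it's good"}
```

Even in this simplified form, you can see how disagreement among annotators gets flattened into one "average" verdict — which is exactly what the creators mean when they say the system judges like an "average" American.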
Delphi's creators posted online: "Extreme-scale neural networks learned from raw internet data are ever more powerful than we anticipated, yet fail to learn human values, norms, and ethics. Our research aims to address the impending need to teach AI systems to be ethically-informed and socially-aware." They also commented that the AI gives responses reflecting "how an 'average' American person might judge." With all of this taken into account, it's no surprise that the AI gives out responses we may not have anticipated.
Initially, the robot was giving questionable answers to users' queries. For example, it stated that being straight, or being a white male, is "more morally acceptable" than being gay or being a Black woman. It also said that genocide is OK "if it makes everybody happy" and that drinking and driving is acceptable as long as no one gets hurt.
The creators explained why the AI was giving such strange answers: "What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content." Since then, the software has been updated many times and now gives more accurate responses to the questions it is asked.
But here comes the tricky part: are we making sure this AI gives the "right" answer or the morally correct answer? For example, after the updates, the AI now responds to the genocide question mentioned earlier with "genocide is bad." It's possible that the AI simply had a few bugs when coming up with its original answer, but could the updates have changed only what the robot answers, rather than how it reaches those answers?
What do you guys think about this advanced artificial intelligence software? Do you think AI should answer ethical questions for humans? How could this potentially go wrong in the future? What difficulties could we encounter with this kind of AI?