Jon Rappoport warns: AI “brains” just as faulty as human brains, can be dominated by false information stored in fake info hubs like Wikipedia


Seasoned investigative journalist and author Jon Rappoport recently argued that artificial intelligence remains very far from its purported goal of merging with the human brain. Rappoport says that while AI systems have gained steam over the last few years, neuroscience still does not fully understand how the human brain works. The renowned author added that discovering the algorithm behind brain function remains a distant prospect.

According to Rappoport, more challenges lie ahead before a functional human-computer interface becomes a reality, particularly in transmitting detailed information. The author raised concerns about how the brain could be effectively hooked up to a computer interface, and whether the brain would be able to absorb and process all the information it would receive from that interface.

The author also argued that the proliferation of false information poses a major threat to the development of a human-computer interface. Rappoport noted that even if such an interface became possible, it would still face hordes of faulty information stored in questionable databases. He added that detecting and deleting false information is beyond a program’s ability, and that no committee monitors these databases or corrects the faulty information they contain.

“There is an inherent self-limiting function in AI. It uses, accesses, collates, and calculates with, false information. Not just here and there or now and then, but on a continuous basis. Think about all the entrenched institutions and monopolies in our society. Each one of them proliferates false information in cascades. No machine can correct that. Indeed, AI machines are victims to it. They in turn emanate more falsities based on the information they are utilizing,” Rappoport wrote in Waking Times online.

Rappoport is the author of The Matrix Revealed, Exit From The Matrix and Power Outside The Matrix. He was also once a candidate for a U.S. Congressional seat in the 29th District of California.

Expert: AI brains are as faulty as human brains

New York University (NYU) research professor Kate Crawford has also questioned the reliability of AI brains, stating that the programs may be just as prone to errors as human brains. According to the expert, AI systems depend on neural networks that emulate the brain’s mechanisms in order to learn. Crawford also explained that these systems can be trained to recognize patterns in information such as speech, text data or visual images.
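
To make that idea concrete, here is a minimal sketch of the kind of pattern recognition Crawford describes, assuming the scikit-learn Python library is available. The phrases, labels and network size are invented purely for illustration and do not come from the article or from any real system.

```python
# A minimal sketch, assuming scikit-learn: a small neural network "learns"
# a text pattern from labelled examples. The phrases, labels and network
# size below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

phrases = ["hello there", "good morning", "hi friend",
           "what time is it", "where are you", "how does this work"]
labels = ["greeting", "greeting", "greeting",
          "question", "question", "question"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(phrases)  # turn text into word-count features

model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                      max_iter=2000, random_state=0)
model.fit(X, labels)  # the network only ever sees what humans feed it

new_text = vectorizer.transform(["hello everyone", "why is that"])
print(model.predict(new_text))  # likely ['greeting' 'question']
```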

However, Crawford argued that this information is fed to the systems by none other than humans themselves, which makes the AI brains just as susceptible to human errors. The expert also warned that the AI brain may inadvertently absorb these errors and biases, which may affect its decision-making. (Related: Expert warns that AI brains are not infallible and have even been found to “make bad decisions” that can harm humans.)
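
Crawford’s point about inherited bias can be sketched the same way. The toy “hiring” data below is entirely fabricated for illustration (again assuming scikit-learn); it shows only that a model trained on skewed historical labels tends to reproduce the skew rather than correct it.

```python
# A minimal sketch, assuming scikit-learn: a classifier trained on skewed
# historical labels reproduces the skew. The "hiring" data is entirely
# fabricated for illustration and does not describe any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# features: [years_of_experience, group], where "group" is an attribute
# that should be irrelevant to the decision
X = np.array([[5, 0], [6, 0], [7, 0],
              [5, 1], [6, 1], [7, 1]])
# historical outcomes: equally experienced candidates from group 1 were rejected
y = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# two new candidates with identical experience, differing only in group
print(model.predict([[6, 0], [6, 1]]))  # likely [1 0]: the bias is learned, not corrected
```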

“These systems ‘learn’ from social data that reflects human history, with all its biases and prejudices intact. Algorithms can unintentionally boost those biases, as many computer scientists have shown. It’s a minor issue when it comes to targeted Instagram advertising but a far more serious one if AI is deciding who gets a job, what political news you read or who gets out of jail. Only by developing a deeper understanding of AI systems as they act in the world can we ensure that this new infrastructure never turns toxic,” Professor Crawford said.

Crawford and her colleagues announced the launch of the AI Now Institute in October last year. The institute aims to examine the complex social implications of AI development, Crawford explained.

Log on to Robotics.news and be up to speed with the latest news in robotics and artificial intelligence.

Sources include:

WakingTimes.com

DailyMail.co.uk
