OpenAI CEO Sam Altman went on a mini world tour to talk about the benefits of artificial intelligence and the regulations it will require. However, Altman was challenged at his speaking stops. Protesters argue that OpenAI is doing dangerous work and that AGI-level artificial intelligence could harm humanity.
Sam Altman recently gave a speech at University College London. The OpenAI CEO drew considerable attention, with hundreds of people lining up to watch the talk. But outside the building, a small group of protesters issued a stern warning about AI. They called on OpenAI and similar companies to halt their work toward AGI, arguing that it could pose a threat to humanity. AGI stands for Artificial General Intelligence, which is considered a major milestone: the point at which artificial intelligence reaches human-like, general-purpose capability.
One of the protesters, Gideon Futerman, suggested Sam Altman might be a fraud. "I hope he's a fraud," Futerman said, "but even so, he is exacerbating systems known to be harmful. If he is not a fraud and is right, then the risk is even greater. Humanity is vulnerable."
In his speech, Sam Altman said that people are right to be concerned about artificial intelligence, but that the potential benefits are far greater. He also emphasized that AI safety testing should be carried out and that official rules and regulations should be put in place.
According to critics of OpenAI, this talk of controlling AI or AGI is currently a ruse to distract from the damage artificial intelligence is already doing. Critics say Altman points everything toward the future, while present-day issues, such as AI-generated misinformation, the spread of fake news, and facial recognition, should be discussed now.
Reportedly, OpenAI wants to finish training GPT-5 by the end of 2023. But the speculation does not end there. In a tweet, developer Siqi Chen claimed that some within the company believe GPT-5 could reach the level of AGI. Chen later clarified that his claims about GPT-5 and AGI did not reflect any official position within OpenAI. Even so, the tweet set the rumor mill in motion.