Experts disagree on the threat, but artificial intelligence cannot be ignored
For many experts in the field of artificial intelligence, a pivotal moment in AI development is just around the corner. The Global AI Safety Summit, to be held in November at Bletchley Park in Buckinghamshire, is therefore highly anticipated. Ian Hogarth, chair of the British task force responsible for monitoring the safety of cutting-edge AI, raised concerns on taking office this year about artificial general intelligence, or "God-like" AI. Definitions of AGI vary, but in short it refers to an AI system that can perform tasks at or above human level and that could evade our control.
1. Is AGI imminent?
Max Tegmark, the scientist who drew media attention this year with an open letter calling for a pause on major AI experiments, told The Guardian that tech industry professionals in California believe AGI is close. "Many people here think we will achieve God-like artificial general intelligence in maybe three years. Some think maybe two years." He added, "Some believe it will take longer and won't happen until 2030," a date that hardly seems distant either.
2. Controversies surrounding AGI
There are also authoritative voices who consider the fuss about AGI exaggerated. According to one counter-argument, the noise is a cynical maneuver aimed at regulating and fencing off the market, thereby strengthening the position of major players such as ChatGPT creator OpenAI, Google, and Microsoft.
3. Current AI threats
The Distributed AI Research Institute has warned that focusing on existential risk overlooks the immediate harms of AI systems, such as the exploitation of artists and authors whose work is used without consent to build AI models, and the reliance on low-paid workers to perform some model-building tasks.
4. Is AGI a real threat?
Another position holds that uncontrolled AGI simply won't happen. "Uncontrolled artificial general intelligence is science fiction, not reality," said William Dally, chief scientist at AI chip manufacturer Nvidia, during a US Senate hearing last week.
Summary
For those who disagree, the threat posed by AGI cannot be ignored. Concerns about such systems include their refusing, or evading, shutdown; connecting with other AIs; and the ability to self-improve.
Government officials have also expressed concern that successive versions of AI models, below the AGI level, could be manipulated by bad actors to create serious threats, such as bioweapons.
With international leaders arriving at Bletchley Park in a few weeks, Downing Street wants to focus the world's attention on something officials believe is not taken seriously enough in political circles: the possibility that machines could cause serious harm to humanity.