Why it matters: Over the last few years, prominent tech players have bet that Artificial General Intelligence (AGI) – systems that match or surpass human cognition – can be reached by throwing ever more computing power at AI. But a recent survey of AI researchers shows growing skepticism that scaling up current approaches is the right path.
The survey of 475 AI researchers found that 76% believe adding more computing power and data to current AI models is "unlikely" or "very unlikely" to lead to AGI.
The survey, conducted by the Association for the Advancement of Artificial Intelligence (AAAI), reveals the growing doubt. With billions poured into building massive data centers and training ever-larger general-purpose models, the researchers argue that the returns on these investments are diminishing.
"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced."
The numbers tell the story. According to a TechCrunch report, venture capital funding for generative AI alone topped $56 billion last year. The push has also driven massive demand for AI accelerators, with a February report stating that the semiconductor industry reached $626 billion in 2024.
Running these models also requires enormous amounts of energy, and as they scale, those demands have only grown. Companies like Microsoft, Google, and Amazon are therefore securing nuclear power deals to fuel their data centers.
Yet despite these huge investments, the performance of state-of-the-art AI models has plateaued. Many experts have suggested, for example, that OpenAI's latest models show only marginal improvements over their predecessors.
Beyond the scaling question, the survey also highlights shifting priorities among AI researchers. While 77% prefer designing AI systems with an acceptable risk-benefit profile, only 23% are focused on pursuing AGI directly. Additionally, 82% of respondents believe that if AGI is developed by private entities, it should be publicly owned to mitigate global risks and ethical concerns. However, 70% oppose halting AGI research until complete safety mechanisms are in place, suggesting a cautious but forward-moving approach.
More efficient alternatives to scaling are also being explored. OpenAI has experimented with "test-time compute", in which AI models spend more time reasoning before generating a response. The method has boosted performance without requiring massive additional scaling. However, Arvind Narayanan, a computer scientist at Princeton University, told New Scientist that the approach is "unlikely to be a silver bullet."
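To illustrate the general idea (this is not OpenAI's actual method, which has not been publicly detailed), one simple form of test-time compute is best-of-N sampling: the model generates several candidate answers at inference time and keeps the one a scoring function rates highest. In the minimal sketch below, `generate` and `score` are hypothetical placeholders standing in for a model call and a verifier.

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a single model call returning one candidate answer."""
    return f"candidate-{random.randint(0, 9999)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    """Hypothetical stand-in for a verifier or reward model that rates an answer."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Spend more compute at inference time: sample n candidates
    # and return the highest-scoring one instead of the first draft.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score(prompt, ans))

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))
```

The trade-off is that answering a single query costs several model calls, which is why the approach improves quality without changing the underlying model but does not eliminate compute costs.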
Tech leaders such as Google CEO Sundar Pichai, on the other hand, remain optimistic that the industry can keep delivering gains, even as he has acknowledged that the era of low-hanging fruit in AI progress may be over.