Artificial intelligence (AI) systems are becoming increasingly powerful, raising concerns about security risks if they are not properly regulated. Regulators are focusing on the computing power used to train AI models as a key indicator of their potential danger.
Currently, AI models trained with more than 10 to the 26th power (10^26) floating-point operations, a measure of the total computation used in training rather than processing speed, must be reported to the U.S. government. California is considering even stricter regulations built around a similar threshold that could affect AI development in the state.
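To make the threshold concrete, researchers often approximate training compute with a rule of thumb of roughly 6 floating-point operations per model parameter per training token. That approximation comes from the research literature, not from the regulation itself, and the model sizes below are hypothetical illustrations. A minimal sketch of how a developer might check whether a planned training run would cross the 10^26 reporting line:

```python
# Back-of-envelope check against the 10^26-flop reporting threshold.
# Uses the common approximation: training flops ~= 6 * parameters * tokens.
# All model sizes and token counts below are hypothetical illustrations.

REPORTING_THRESHOLD_FLOPS = 1e26  # threshold cited in the U.S. executive order


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6 * num_parameters * num_tokens


hypothetical_runs = {
    "70B params, 2T tokens": (70e9, 2e12),      # roughly 8.4e23 flops
    "400B params, 15T tokens": (400e9, 15e12),  # roughly 3.6e25 flops
    "2T params, 30T tokens": (2e12, 30e12),     # roughly 3.6e26 flops
}

for name, (params, tokens) in hypothetical_runs.items():
    flops = estimated_training_flops(params, tokens)
    verdict = "exceeds" if flops > REPORTING_THRESHOLD_FLOPS else "is below"
    print(f"{name}: ~{flops:.1e} flops, {verdict} the 1e26 reporting threshold")
```

Under this approximation, training runs on the scale of today's publicly known models fall below the line, which is why the threshold is framed as targeting the next generation of systems rather than those already deployed.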
The concern is that AI systems with such high computing power could be used to develop weapons of mass destruction or carry out catastrophic cyberattacks. Lawmakers and AI safety advocates are working to differentiate between existing high-performing AI systems and the next generation that could be even more potent.
While some criticize these thresholds as arbitrary, others see them as a necessary step to prevent potential harm. President Joe Biden's executive order and California's proposed AI safety legislation both rely on specific computing power thresholds to determine regulatory requirements.
The European Union and China are also weighing similar measures to regulate AI development. Measuring the floating-point operations used to train a model is seen as a practical way to assess AI capabilities and risks.
Despite ongoing debate among AI researchers, the flops count is currently treated as the most workable proxy available: a straightforward, measurable way to gauge an AI model's scale and, by extension, its potential risks.
While some tech leaders argue that these metrics are too simplistic and may not effectively mitigate risks, others defend them as a necessary safeguard. The regulatory thresholds are seen as a starting point that can be adjusted as AI technology evolves.
Overall, the debate around regulating AI systems highlights the need for ongoing monitoring and adaptation of regulatory frameworks to ensure the safe development and deployment of AI technology.