币界网 reports: Prominent AI experts are raising urgent alarms about the unchecked development of superintelligent AI systems, warning that such technology could pose existential risks to humanity if not properly regulated. In an open letter signed by more than 1,000 researchers and tech leaders, including executives from OpenAI, DeepMind, and Anthropic, the group calls for immediate global coordination to establish safety standards and governance frameworks for advanced AI.

The letter highlights concerns that superintelligent AI, meaning systems that surpass human cognitive abilities across all domains, could become uncontrollable or misaligned with human values, potentially leading to catastrophic outcomes. Experts emphasize the need for rigorous testing, transparency, and international cooperation to mitigate these risks, comparing the challenge to managing nuclear proliferation. While acknowledging AI's potential benefits, the signatories urge governments and corporations to prioritize safety research and slow the deployment of cutting-edge models.

The debate comes as major tech firms race to build increasingly powerful AI systems, with some forecasts placing human-level machine intelligence within the decade. Critics argue that current self-regulation efforts are insufficient given the stakes, while some industry leaders maintain that responsible development can harness AI's benefits while minimizing its dangers. The discussion reflects growing unease about the societal impact of rapid AI advancement, with calls for balanced approaches that foster innovation while addressing fundamental safety concerns.