币界网 reports: AI Vulnerabilities Spark Calls for Stricter Regulation as Security Risks Mount

Recent incidents exposing critical flaws in AI systems have intensified global scrutiny of the technology's safety protocols. Multiple studies show how adversarial attacks can manipulate AI decision-making through subtly altered inputs; researchers have demonstrated that image-recognition models misclassify stop signs as speed-limit signs when small stickers are applied to them.

The U.S. Federal Trade Commission has launched investigations into several AI providers following reports of biased hiring algorithms and malfunctioning financial-assessment tools. European Union officials accelerated final negotiations on the AI Act after tests showed chatbots generating harmful content despite guardrails.

Microsoft and OpenAI disclosed spending $38 million collectively on red-teaming exercises in 2023, while Google DeepMind established a new AI Safety division with 200 researchers. Cybersecurity firms report a 140% year-over-year increase in AI-specific vulnerability disclosures, with the financial sector experiencing the most attacks. "We're seeing nation-state actors systematically probing AI systems for weaknesses," said a senior NATO technology advisor speaking anonymously.

The White House is preparing an executive order mandating third-party audits for high-risk AI applications, coinciding with China's release of mandatory AI security standards for critical-infrastructure operators. Industry responses remain divided: some startups warn that excessive regulation could stifle innovation, while major cloud providers generally support baseline safety requirements.
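The adversarial attacks described in the report work by nudging each input feature a tiny amount in the direction that most increases the model's loss. A minimal sketch of this idea, the fast gradient sign method, is below; the toy logistic "classifier", its random weights, and the `eps` budget of 0.5 are illustrative assumptions, not any real traffic-sign model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift input x by +/- eps per feature, in the direction that raises the loss.

    For a logistic model with binary cross-entropy loss, the gradient of the
    loss with respect to the input is (sigmoid(w.x + b) - y_true) * w.
    """
    grad = (sigmoid(w @ x + b) - y_true) * w
    return x + eps * np.sign(grad)

# Toy setup (assumed for illustration): 16 features, random weights.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = w / np.linalg.norm(w)  # an input the model classifies confidently as class 1

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)

# Each feature moves by at most eps, yet the predicted class can flip.
print("clean confidence:      ", sigmoid(w @ x + b))
print("adversarial confidence:", sigmoid(w @ x_adv + b))
print("max per-feature change:", np.max(np.abs(x_adv - x)))
```

The point mirrors the stop-sign finding: the perturbation is bounded per feature (analogous to a few small stickers on a sign), but because it is aligned with the model's gradient, it moves the decision far more than random noise of the same size would.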