As of late 2025, METR (Model Evaluation & Threat Research) is a leading AI safety nonprofit that evaluates frontier models for catastrophic risks. Its work centers on autonomous capabilities, AI R&D acceleration, and evaluation integrity; it publishes transparent reports and develops benchmarks such as RE-Bench and MALT.