On December 9, the US DOJ issued an indictment charging a Ukrainian citizen for her role in conducting two cyber-attacks against US critical infrastructure as part of the cyber group CARR, which was founded, funded, and directed by Russia's GRU. Both attacks occurred in 2024 and caused kinetic damage.
The first attack, in April 2024, targeted water utilities in Texas, California, and other states, resulting in "damage to controls and the spilling of hundreds of thousands of gallons of drinking water."
The second cyber-attack, in November 2024 (within the time window of this question, which opened on October 4, 2024), targeted a meat processing facility in Los Angeles, resulting in "the spoiling of thousands of pounds of meat and triggering an ammonia leak in the facility."
https://www.justice.gov/opa/pr/justice-department-announces-actions-combat-two-russian-state-sponsored-cyber-criminal
https://www.epa.gov/newsreleases/foreign-national-indicted-and-extradited-united-states-role-two-russia-linked-cyber
Once again, this is clear evidence that the GRU (and other Russian security entities) are actively conducting cyber-attacks against US and NATO infrastructure with the intent to cause kinetic damage. Cyber-attacks like these, and the others that I and other forecasters have cited, strongly suggest to me that Russia does not believe a cyber-attack against NATO infrastructure would prompt NATO to invoke Article V. Given the disunity within NATO these days, Russia is likely correct in that assumption.
In my opinion, the only reason the second attack doesn't trigger a "yes" resolution to this question is that it targeted food processing industry/infrastructure rather than energy or transportation infrastructure.
Why do you think you're right?
I am starting at 6%.
The historical base rate is 0%, as no national government to date has officially declared that “AGI” has been achieved.
Looking at the current state of play at the leading AI model developers:
-- OpenAI: CEO Sam Altman wrote in January that OpenAI is "now confident we know how to build AGI" -- signaling internal confidence, but this is not a government claim.
-- Google DeepMind: Demis Hassabis has repeatedly framed AGI as arriving around 2030, give or take.
-- Anthropic: Dario Amodei has floated 2026–27 for systems “outsmarting most humans,” but emphasizes terms like “powerful AI” over the loaded “AGI.”
I feel safe starting at 6% given:
-- Zero base rate and short horizon: With no prior government AGI declarations and only ~7 months remaining, the prior is very low.
-- High political/strategic downside: A premature government "AGI" claim would invite scientific pushback, financial scrutiny, and security angst. I assume most capitals will avoid the word "AGI" unless the evidence is overwhelming.
-- Resolution criteria: The resolution criteria require the explicit term "AGI" in an official government announcement, as they should. This filters out vague boasts ("advanced AI," "frontier AI") and almost all corporate claims.
-- Official caution: Even if a lab believes it has achieved "AGI," governments (US, China, EU states) are likelier to speak in capability or safety terms than to plant an "AGI achieved" flag.
-- Governments don’t seem to expect it this fast: No government has even started conditioning the public for an "AGI achieved" declaration.
In my next update, I'll take a look at what leading independent AI experts and researchers, outside of the leading labs, are saying about the AGI timeline.
Why might you be wrong?
There could be prestige incentives in play, e.g., in a US–China tech contest, a government might deliberately use the AGI label for national prestige even if the technical community disputes it. But this is highly unlikely, as a responsible government wouldn’t risk that kind of embarrassment.
One wildcard here would be Kim Jong Un (falsely) declaring that the DPRK has achieved AGI just for the attention.