The UN has a funding crisis. It is ~$3B in the hole from delinquent dues alone, and the US is responsible for some 90% of that shortfall. And I don't see Trump et al. hurrying to write checks to an organization he considers corrupt, wasteful, controlled by countries hostile to him, etc. https://www.nytimes.com/2026/01/30/world/americas/un-finances-collapse-debts.html
The UN is now pulling 10,000-15,000 troops out of peacekeeping missions, a reduction of roughly 15% across all peacekeeping missions.
There is just no money for new missions this year.
Why do you think you're right?
The FDA is just getting around to approving an AI-assisted device that detects multiple conditions from a single patient scan. That's a long way from authorizing a generalized LLM-based tool for diagnosis or treatment. Just not going to happen within the next two months.
https://www.statnews.com/2026/01/21/fda-clears-aidoc-tool-detect-multiple-conditions-from-ct-scan/
Why might you be wrong?
Could be something in the approval queue that I haven't stumbled upon yet. But I still don't think it could be approved within the next two months, given that the FDA still hasn't published any rules for approving such devices.
Thank you for the link. But let me note that "a generalized LLM-based tool as a diagnostic or treatment tool" is hardly needed for a positive resolution here; according to para #1 of the clarification, "Any incorporation of an LLM in the device at the time of FDA authorization would count, even if used for minor features like natural language interfaces, report summarization, or user interaction" (emphasis mine).
Which is why the link you posted about Aidoc's CARE multimodal foundation model sent me searching, especially after para #2 of the same clarification, which states that "Foundation models that are multimodal count as "LLM-based functionality" as long as they are used in their text or language generation capacity", and after Aidoc's reported claim that its foundation models "can even generate text from imaging pixel data to enable automated report generation and help accelerate workflows".
But it turns out that we are not there (yet?); the company's "pixel-to-report" feature is still a planned one (source), and the device entry in Devices@FDA covers only image processing, with no mention of any LLM-based functionality whatsoever...