Although the Trump administration rescinded the 2025 AI Diffusion Rule before it fully took effect—arguing that it was overly complex and bureaucratic—the evidence suggests the government has not abandoned its intention to regulate exports related to advanced AI. In fact, the Department of Commerce announced that it would issue a “replacement rule,” and in October 2025 it established the *American AI Exports Program*, aligned with an executive order aimed at promoting the export of the “American AI Technology Stack.” This indicates that the administration remains interested in creating a formal, legally binding framework to control the international diffusion of critical AI technologies.
Given that the U.S. government views frontier AI as a strategic asset with national security implications—especially regarding China—it is reasonable to expect that the new rule will include explicit controls on advanced model weights, frontier-class training compute, or cloud access intended for frontier-model development. Moreover, rescinding the previous rule did not eliminate the institutional recognition that frontier AI requires a specific regulatory regime; it merely opened the door for a version that is simpler, more focused, and more aligned with the current administration’s vision.
Therefore, with a program already created, an executive order mandating action, and a regulatory gap the administration has committed to fill, it is highly probable that, before July 31, 2026, a new rule will be published in the Federal Register or as an update to the EAR that meets the required criteria: explicit controls on frontier model weights, frontier-training compute, or cloud access for advanced AI development, with broad applicability and verifiable enforcement.
**Why do you think you're right?**
By July 31, 2026, it is highly likely that the United States will maintain and incrementally expand export controls explicitly affecting frontier AI model development, particularly through refined restrictions on advanced computing hardware, training infrastructure, and the transfer of high-capability model weights. While sweeping, entirely new regimes are less certain, policymakers are expected to adjust existing thresholds, licensing requirements, and country-specific rules in response to geopolitical competition, national security concerns, and technological advances. Overall, the regulatory trajectory points toward continued selective tightening—aimed at preserving U.S. strategic advantages in frontier AI—while seeking to balance innovation, allied coordination, and commercial competitiveness.
**Why might you be wrong?**
This forecast could be wrong if political or administrative changes lead the U.S. government to deprioritize export controls in favor of competitiveness or diplomatic engagement, or if legal challenges and industry resistance limit the feasibility and enforceability of new restrictions. It may also fail to materialize if key allies do not align with U.S. policy, increasing the risk of regulatory arbitrage, or if technological workarounds—such as more efficient training methods or decentralized development—undermine the effectiveness of controls. Finally, policymakers could shift strategy toward promoting the global diffusion of U.S. AI standards and platforms rather than tightening restrictions, altering the expected regulatory trajectory.