Adjusting slightly to add +2 to the first option.
Relative Brier Score: -0.410625
Forecasts: 846
Upvotes: 87
| | Past Week | Past Month | Past Year | This Season | All Time |
|---|---|---|---|---|---|
| Forecasts | 0 | 0 | 288 | 257 | 2925 |
| Comments | 0 | 0 | 181 | 164 | 1609 |
| Questions Forecasted | 0 | 0 | 56 | 46 | 243 |
| Upvotes on Comments By This User | 0 | 2 | 15 | 14 | 207 |
Star Commenter - Nov 2025
Why do you think you're right?
Joining the question late. I consider the act-of-war argument a substantial inhibitor, alongside the other good arguments presented by my fellow forecasters.
Why do you think you're right?
Starting out on the fence.
Why do you think you're right?
Starting late. I'm a bit surprised that it would be a tax incentive specifically, but maybe that's just my continental European perspective.
While I can see the logic behind tax incentives for cybersecurity audits, in my eyes it does not make much sense for AI models specifically. You would need to determine security standards and the details of the audit, and it is one thing to have guidelines but another to agree on a stricter regime.
If risks turn out to be that major, I could imagine that certain model providers will be required by law to undergo an audit. If risks do not turn out to be that major, I'm not sure we will see anything more than guidelines or agreed-upon minimum standards.
Why do you think you're right?
Adjusting slightly.