arXiv will ban authors for a year if they let AI do all the work, tightening rules around authorship and accountability in scientific publishing. The move matters because it targets a growing problem in AI-assisted research: papers that rely on language models without meaningful human contribution, undermining trust in methods and claims. Source
Anthropic’s planned $1.5B copyright settlement is facing friction: a judge has delayed approval after lawyers accused the parties of rushing the deal to capture $320 million in fees. The dispute matters globally because it could set practical precedents for how aggressively rightsholders can pursue compensation in AI-training cases, and for how courts weigh fee incentives against the fairness of “historic” settlements. Source
The US Commodity Futures Trading Commission is turning to AI to help catch insider trading in prediction markets. The point is less technical than institutional: by adopting AI as an enforcement tool in high-speed markets, regulators are signalling that detection and audit capabilities will increasingly determine which trading strategies remain viable. Source
OpenAI and Malta are partnering to expand access to ChatGPT Plus, alongside training designed to help citizens build practical AI skills and use AI responsibly. With this kind of “public enablement” effort, governments move from experimentation to distribution: the political question is who gets guided access to powerful tools, and how that guidance shapes adoption norms. Source
YouTube is expanding its AI likeness detection programme to all users over the age of 18, using a selfie-style face scan to monitor for lookalikes and alert users if a match is found. The implications are global and economic: scaling detection to essentially everyone increases the pressure on deepfake use cases while also testing how privacy and automated surveillance are handled at platform scale. Source
