GPAI Policy Lab publishes first internal AI use policy to protect employee cognition from AI risks

LessWrong AI · April 24, 2026

AI Summary

  • GPAI Policy Lab (an AI safety and policy research organization) released Version 1 of an internal policy restricting how its own staff can use AI tools at work. The policy is motivated by concern that AI systems may degrade human thinking over time, drawing on the team's extrapolations of AI capabilities and internal conversations about cognitive effects.
  • The policy takes a precautionary approach: the team judges that the cost of being somewhat over-cautious now is lower than the cost of being under-cautious later. Rather than keeping the policy internal, they published it publicly and invited criticism, asking for counterarguments, comparisons with other organizations' practices, and specific feedback on whether individual restrictions are too narrow, too broad, or aimed at the wrong problems.
  • This matters because most organizations have no written rules about how AI use might affect their employees' decision-making, writing skills, or judgment. GPAI is signaling that cognitive integrity, the ability to think independently without AI degrading one's reasoning, is a workplace issue worth addressing now rather than after problems emerge. The move invites other companies and institutions to publish their own policies and develop shared best practices.
