
Protecting workers in the age of AI: Developments at the federal and state levels

The rapid advancement of artificial intelligence (AI) has raised concerns about its impact on workers and the workplace. Last month, the White House unveiled a set of principles designed to protect workers and ensure they have a voice in how these technologies are used.

The principles were developed by the Department of Labor, as directed in the president’s earlier Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The principles are focused on these eight areas:

  1. Worker Empowerment: Involving workers in AI system design and oversight.
  2. Ethical AI Development: Ensuring AI systems are safe and fair.
  3. Governance: Ensuring organizations regulate systems with human oversight.
  4. Transparency: Informing workers about AI usage.
  5. Labor Rights Protection: Safeguarding workers’ rights.
  6. Supporting Job Quality: Using AI to enhance jobs.
  7. Supporting Workers: Supporting or upskilling workers during AI-related job transitions.
  8. Responsible Data Use: Protecting workers’ data.

The administration has called upon technology companies to adopt these principles, with Microsoft and Indeed already committing to do so.

Colorado regulates ‘high-risk’ AI

While the White House's principles are voluntary guidelines, states are also stepping in to address the challenges AI poses in the workplace.

In May, Colorado passed comprehensive legislation targeting “high-risk” AI.

The Colorado Artificial Intelligence Act (CAIA), set to take effect on February 1, 2026, imposes obligations on Colorado businesses that use AI systems to influence “consequential decisions.”

Under the CAIA, developers and deployers (i.e., users) must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. “High-risk” AI systems include applications in employment and workforce management, such as resume screening, candidate evaluations, and employee monitoring. Other high-risk functions include roles in healthcare, financial services, public safety, education, insurance, or housing.

Employers in Colorado will be required to inform applicants about the use of AI systems, disclose the purpose and nature of each system, and provide opportunities to correct personal data and appeal adverse decisions. The act includes a limited carve-out for organizations with fewer than 50 employees.

As AI continues to shape the future of work, both the federal government and individual states are taking steps to protect workers. Employers should stay informed and prepare to adapt their practices to ensure compliance and promote a fair and equitable workplace in the age of AI.