Client Alert: AI Provisions in Service Contracts: A Primer

June 18, 2025  

Artificial Intelligence (AI) is rapidly transforming how services are delivered across industries, from healthcare and education to software, finance, and manufacturing. As service providers increasingly integrate AI tools into their offerings, it is critical for companies to update their service agreements to address new legal, operational, and compliance risks.

This update outlines key legal considerations when negotiating or revising service contracts that incorporate AI and offers practical tips for aligning contractual terms with emerging laws and client expectations.

KEY TAKEAWAYS

  • AI-specific disclosures in contracts are essential to limit liability and manage client expectations.
  • Ownership of AI-generated content must be clearly defined—AI outputs may not be copyrightable under U.S. law.
  • Regulatory trends: The EU AI Act and U.S. state laws (e.g., Colorado, California) are shaping a fast-evolving compliance landscape.
  • Data use and protection clauses should reflect heightened risks when AI tools interact with sensitive or regulated information.
  • Contracts should allow flexibility as AI technologies evolve, without requiring constant renegotiation.

LEGAL AND REGULATORY LANDSCAPE

  • U.S. Federal Law: No comprehensive federal AI law exists yet, but the Federal Trade Commission (FTC) has warned against deceptive AI marketing and may pursue enforcement actions where AI tools are misused or misrepresented. [1]
  • EU AI Act: The European Union’s AI Act (AIA), which entered into force on August 1, 2024, imposes tiered obligations on companies that develop, import, or deploy AI in the EU, depending on risk level and function. Most requirements will apply following a 24-month implementation period. U.S. companies operating in Europe must assess applicability early. [2]
  • State-Level Action: Several U.S. states—including California, Colorado, and Utah—have enacted or proposed legislation governing AI use, particularly in employment and consumer contexts. More are expected to follow.

CONTRACTING BEST PRACTICES FOR AI SERVICES

  1. Disclosure and Limitations of Use

Clearly disclose the use of AI tools in the performance of services. Include disclaimers around:

  • The reliability of AI-generated outputs.
  • The client’s responsibility for verifying results.
  • Any limitations in AI performance or applicability.

  2. Intellectual Property Ownership

AI outputs may not be eligible for copyright unless there is meaningful human authorship. Contracts should clarify:

  • Whether outputs are owned by the provider or assigned to the client.
  • The extent of human contribution to creating the outputs.
  • Rights to data inputs and fine-tuned models.

  3. Data Use and Privacy Compliance

Specify how the AI tool interacts with sensitive information, including:

  • Protected Health Information (PHI) under HIPAA.
  • Personally Identifiable Information (PII).
  • Proprietary or confidential business data.

Ensure that data handling practices comply with privacy laws and are reflected in indemnification and security provisions.

  4. Notice Obligations and AI Failures

Include clauses requiring prompt notification if:

  • A party identifies an AI-related issue or system failure.
  • AI tools generate inaccurate or harmful results.

Establish timelines, required disclosures, and remedies such as suspension, remediation, or fallback procedures.

  5. Service Levels and Accountability

Define performance standards while managing risk. Consider:

  • Excluding liability for issues arising from inaccurate or incomplete client-provided data.
  • Carving out performance guarantees that hinge on AI-driven analytics or recommendations.

  6. Flexibility to Evolve Technology

AI tools change rapidly. Build flexibility into your agreements:

  • Reserve the right to update or improve underlying AI models or tools.
  • Avoid provisions that require amending the entire contract for iterative AI changes.

FINAL THOUGHTS

As AI continues to shape service delivery models, companies must proactively adjust their contracts to reflect new risks and responsibilities. A forward-thinking approach not only minimizes legal exposure but also builds trust with clients and partners.

QUESTIONS

The CFDB team regularly advises clients on structuring service agreements that incorporate AI tools while staying ahead of evolving legal frameworks. Please reach out to discuss how we can help.

Cameron Robinson
Partner
crobinson@crokefairchild.com
872.224.2920

Sarah Sager
Counsel
ssager@crokefairchild.com
303.358.8585

[1] Federal Trade Commission, “AI and the Risk of Consumer Harm.”

[2] European Parliament, “EU AI Act: first regulation on artificial intelligence.”