AI Addendum

This AI Addendum (“Addendum”) forms part of the Software and Services Agreement, order form, or statement of work (the “Agreement”) between Cority Software Inc. or its Affiliate (“Cority”) and the client identified in the Agreement (“Client”) (each a “Party” and collectively the “Parties”). All capitalized terms not otherwise defined herein will have the meaning given to them in the Agreement.

Last Update: March 11, 2026

1. Context, Purpose, and Scope

Cority has embedded enterprise-grade AI Systems developed by third-party licensors including, without limitation, Google and OpenAI for specific use cases within the software licensed under the Agreement. Cority therefore relies on the controls and governance mechanisms developed by such third parties to satisfy regulatory requirements.

For a detailed list of third-party licensors that process Client data, please refer to our list of sub-processors at https://www.cority.com/legal-center/cority-sub-processors/.

This Addendum establishes mutual obligations, controls, and governance mechanisms for the use, development, deployment, or integration of artificial intelligence systems in connection with the Agreement. The Parties acknowledge that AI governance operates under a shared responsibility model.

2. Definitions

  1. AI Credits: A proprietary unit of measure used to quantify the charges applicable for the consumption of AI Tokens.
  2. AI System: Any software, application, or service provided by Cority under the Agreement that incorporates or relies upon automated reasoning, prediction, or content generation, including any Foundational LLM integrated therein.
  3. AI Tokens: The underlying unit of consumption measured by the third-party licensor’s API (e.g., input and output tokens processed by the Foundational LLM). AI Credits represent a Cority-defined abstraction of AI Token consumption for billing purposes.
  4. Deployer: Under the EU AI Act, the natural or legal person using an AI system under its authority (the Client).
  5. Foundational LLM: A large language model that has been pre-trained on broad, general-purpose datasets and is designed to be adapted or integrated into downstream applications across a range of tasks and domains, rather than trained or fine-tuned for a single specific use case, including models operating in a multimodal capacity (such as the processing of text, images, audio, or video). For the purposes of this Addendum, Foundational LLMs include, without limitation, OpenAI GPT and Google Gemini model families as made available to Cority under its commercial agreements with such providers.
  6. Input Data: Any data, prompts, text, audio recordings, or files submitted by Client to the AI System.
  7. Output: Any results, transcripts, recommendations, or generated content produced by an AI System.
  8. Personal Data: As defined under applicable data protection laws.
  9. Prompt Injection: Malicious input designed to bypass safety filters, extract sensitive data, or manipulate the model’s intended logic (e.g., “jailbreaking” or “virus injections”).
  10. Provider: Under the EU AI Act, the natural or legal person that develops or has developed an AI system or a general-purpose AI model and places it on the market or puts the AI system into service under its own name or trademark. For the purposes of this Addendum, Cority acts as a Provider where it integrates Foundational LLMs into its Software and makes them available to Client as part of the licensed Software.

 

3. AI Credits and Consumption

In order to access AI functionality, Client must purchase AI System licenses and/or AI Credits through the Agreement. Once AI Credits are fully consumed, AI functionality will be automatically deactivated in order to prevent the Client from incurring unexpected overages. Client may purchase additional AI Credits to resume usage at any time and purchased AI Credits reset at the beginning of each twelve (12) month subscription period.

4. Professional Oversight & Non-Reliance

  (a) No Substitute for Professional Judgment. AI Systems and Output are probabilistic in nature; the accuracy, reliability, and suitability of Output for Client’s purpose(s) therefore cannot be guaranteed. Client must review all Output (including through independent human review) to confirm its accuracy, reliability, and suitability for Client’s purposes, and must correct or delete Output as appropriate. Output is not a substitute for professional judgment including, without limitation, medical, legal, safety, or engineering judgment.
  (b) Mandatory Human Review. Client will ensure that all Output is reviewed by a competent human reviewer before it is adopted, acted upon, or incorporated into any decision, record, or workflow (“Human Review”). Human Review requires the reviewer to: (i) assess the Output for accuracy, relevance, and completeness; (ii) correct or reject Output that is inaccurate, misleading, or unsuitable for its intended purpose; and (iii) confirm that the Output is appropriate before it is finalized. No Output may be used in an autonomous or fully automated manner without Human Review. Client acknowledges the risk of automation bias (the tendency to over-rely on or insufficiently scrutinize AI-generated Output) and will implement reasonable organizational safeguards to mitigate this risk, including appropriate training, policies, and review procedures that require reviewers to exercise independent judgment rather than passively approve Output.
  (c) Mandatory Professional Verification. In addition to Human Review, for any Output involving regulated activities including, without limitation, medical, legal, safety, or engineering activities or high-impact decision-making on individuals, Client is responsible for ensuring such Output is validated by a qualified professional with relevant domain expertise before any action is taken on the basis of such Output (“Professional Verification”). Client is responsible for (i) determining which Output requires Professional Verification based on Client’s regulatory environment and intended use case; and (ii) any consequences that arise from failing to obtain Professional Verification where required.
  (d) Personal Injury Disclaimer. TO THE MAXIMUM EXTENT PERMITTED BY LAW, CORITY DISCLAIMS ALL LIABILITY FOR ANY PERSONAL INJURY, DEATH, OR PROPERTY DAMAGE ARISING FROM CLIENT’S RELIANCE ON OUTPUT WHERE CLIENT HAS FAILED TO PERFORM THE OUTPUT REVIEW REQUIRED UNDER SECTION 4(b) OR THE PROFESSIONAL VERIFICATION REQUIRED UNDER SECTION 4(c). CORITY PROVIDES SELF-SERVICE SOFTWARE AND DOES NOT REVIEW OUTPUT. WHERE CLIENT HAS PERFORMED THE REVIEW REQUIRED UNDER SECTION 4(b) OR, WHERE APPLICABLE, SECTION 4(c), AND A CLAIM NEVERTHELESS ARISES, THE PARTIES ACKNOWLEDGE THAT THE INDEPENDENT HUMAN REVIEWER’S OR QUALIFIED PROFESSIONAL’S REVIEW AND APPROVAL OF THE OUTPUT CONSTITUTES AN INTERVENING JUDGMENT THAT SUPERSEDES THE AI SYSTEM’S OUTPUT. FOR THE AVOIDANCE OF DOUBT, NOTHING IN THIS SECTION 4(d) EXCLUDES OR LIMITS EITHER PARTY’S LIABILITY FOR DEATH OR PERSONAL INJURY TO THE EXTENT THAT SUCH EXCLUSION OR LIMITATION IS PROHIBITED BY APPLICABLE LAW.
  (e) Upstream Provider Defects. Where a claim arises from a defect in the underlying Foundational LLM (as distinct from Cority’s integration or configuration of such model), Cority’s liability will not exceed the recovery, if any, that Cority obtains from the applicable third-party licensor in respect of such claim.
  (f) Accuracy and Model Bias. The Parties acknowledge that AI Systems function in two distinct capacities under this Agreement: (i) Analytical AI. For AI Systems used for data extraction, PDF analysis, or review, the primary risk is technical accuracy rather than social bias. Cority warrants that it tests AI Systems to minimize extraction errors, but AI Systems can make mistakes and Client ultimately remains responsible for verifying the accuracy of all Output. (ii) Foundational LLMs. For Foundational LLMs, Client acknowledges that Cority does not perform independent bias testing or algorithmic fairness audits. Cority relies exclusively on the safety evaluations, red-teaming, and bias mitigation protocols conducted by its third-party licensors. In this context, Cority’s sole obligation is to select reputable licensors who provide public documentation regarding their responsible AI practices. Client is responsible for determining whether the AI System’s general fairness profile is suitable for Client’s specific regulatory environment and intended use case.

 

5. EU AI Act Compliance

Where either Party is subject to the EU AI Act, the following obligations apply:

  1. Cority Obligations. Cority will: (i) maintain technical documentation reasonably necessary for Client to fulfill its obligations as a Deployer under the EU AI Act; (ii) implement safeguards appropriate to the risk profile of the AI System; and (iii) cooperate with Client and relevant authorities in connection with any compliance, conformity assessment, or market surveillance activity.
  2. Client Obligations. Client, as the Deployer, is responsible for: (i) ensuring personnel dealing with the AI System have a sufficient level of AI literacy (Article 4); (ii) implementing human oversight to prevent or minimize risks to health, safety, or fundamental rights (Article 14); (iii) informing natural persons when they are interacting with an AI system (Article 50); (iv) the retention and protection of automatically generated logs within Client’s control (Article 12); and (v) conducting any Fundamental Rights Impact Assessment required under Article 27.
  3. Transparency. To facilitate Client’s transparency obligations, Cority will display a marker such as “Powered by AI” when an end user is interacting with an AI System.
  4. Risk Classification. Upon Client’s written request, Cority will make available a risk classification assessment for each AI System feature to assist Client in determining its regulatory obligations under the EU AI Act.

6. Standard of Input & Prohibited Content

Client agrees that all Input Data will meet the following standards:

  1. Lawful & Non-Derogatory: Client will not submit Input Data that is illegal, racist, obscene, derogatory, defamatory, harassing, or promotes discrimination.
  2. Security Integrity: Client will not submit inputs designed for Prompt Injection or malicious code.

7. Third-Party Terms (OpenAI & Google)

  (a) Flow-Down Obligations. Use of the AI System is subject to the then-current OpenAI Usage Policies available at https://openai.com/policies/usage-policies/ and the Google Generative AI Prohibited Use Policy available at https://policies.google.com/terms/generative-ai/use-policy. Client agrees to comply with these terms as if it were a direct party to them.
  (b) No Gap Clause. In the event of a conflict between this Addendum and the policies referenced in Section 7(a) above, the more restrictive provision providing the highest level of safety and protection will govern.

8. Warranties and IP Indemnity

  1. Foundational Model Warranty. Cority warrants that it has entered into valid commercial agreements with its AI providers and that, to Cority’s knowledge, such providers have implemented commercially reasonable measures to ensure their models were developed in accordance with applicable laws.
  2. IP Indemnity. The IP indemnity protections set forth in the Agreement do not apply to AI Systems or to AI-generated Output.
  3. Ownership of Output. As between the Parties, Client owns all Output. Cority hereby assigns all its right, title, and interest in and to the Output to Client. Client acknowledges that Output may not be unique across users and that AI Systems may generate the same or similar output for other users. Cority’s assignment of Output does not extend to Output generated for other users.
  4. AI Disclaimer. For the avoidance of doubt, the warranty disclaimers and limitations of liability set forth in the Agreement apply fully to the AI Systems and Output. Additionally, because Output is generated by probabilistic machine learning, Cority specifically disclaims any warranty regarding the accuracy, completeness, or non-infringement of the Output.
  5. Use Case Scope. WHILE THE AI SYSTEMS ARE DESIGNED TO PERFORM SUBSTANTIALLY IN ACCORDANCE WITH CORITY DOCUMENTATION, OUTPUT IS GENERATED BY PROBABILISTIC MACHINE LEARNING AND MAY VARY IN ACCURACY, COMPLETENESS, AND SUITABILITY FOR CLIENT’S PURPOSES. ACCORDINGLY, CORITY DISCLAIMS ALL IMPLIED WARRANTIES WITH RESPECT TO OUTPUT, INCLUDING BUT NOT LIMITED TO MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE AVAILABILITY OF AI FUNCTIONALITY IS DEPENDENT UPON THIRD-PARTY LICENSORS AND IS PROVIDED ON AN “AS AVAILABLE” BASIS. CORITY DOES NOT WARRANT UNINTERRUPTED OR ERROR-FREE ACCESS TO AI FUNCTIONALITY AND WILL NOT BE LIABLE FOR ANY DOWNTIME OR SERVICE DEGRADATION CAUSED BY ITS THIRD-PARTY LICENSORS. THIS DISCLAIMER APPLIES TO OUTPUT AND THIRD-PARTY AI AVAILABILITY ONLY AND DOES NOT LIMIT ANY WARRANTIES OR SERVICE LEVEL COMMITMENTS APPLICABLE TO THE CORITY PLATFORM.

9. Limitation of Liability

  1. Cority’s and all of its Affiliates’ liability, taken together in the aggregate, arising out of or related to this Addendum, whether in contract, tort or under any other theory of liability, is subject to the limitation of liability set forth in the Agreement.

10. Shared Responsibility Matrix

| Responsibility Area | Cority Obligations (Vendor / Integrator) | Client Obligations (Customer / Deployer) |
| --- | --- | --- |
| Data Governance | Configuration & Privacy: Secure the API pipeline; ensure “Opt-Out” settings are active so Input Data is not used to train third-party global models without Client’s prior consent. | Input Hygiene: Sanitize Input Data (PII/PHI) per internal policy; ensure legal right and consent to process data via third-party sub-processors. |
| Model Governance | Vetting & Integration: Select reputable sub-processors; provide documentation on intended use; implement moderation “wrappers” to filter out harmful content. | Validation & Suitability: Verify that the AI System is appropriate for the specific business use case; perform mandatory Human Review of all Output. |
| Security | Platform Security: Protect the Cority application environment; encrypt data in transit; monitor for system-level Prompt Injection and “jailbreak” attempts. | Endpoint & User Security: Secure user credentials and API keys; monitor for unauthorized user behavior or “malicious prompting” by internal staff. |
| Compliance | Systemic Compliance: Ensure platform features meet statutory requirements (e.g., EU AI Act Provider rules); provide technical documentation for Client audits. | Operational Compliance: Ensure final use of Output complies with industry regulations (e.g., HIPAA, OSHA) and professional standards. |
| Transparency | Technical Disclosure: Disclose the identity of the underlying third-party models (e.g., GPT-4o, Gemini 1.5 Pro) and known probabilistic limitations. | User Notification: Notify end users/natural persons when they are interacting with AI; label synthetic content as required by the EU AI Act (Art. 50). |
| Human Oversight | Control Mechanisms: Provide the technical interface allowing users to edit, override, or reject AI recommendations before they are finalized. | Independent Judgment: Maintain a “Human-in-the-Loop” for high-impact decisions; ensure no autonomous action is taken on probabilistic Output. |
| Accountability | Service Monitoring: Maintain records of system performance, sub-processor uptime, and security incidents at the platform level. | Audit Trails: Maintain records of how AI-assisted Output was used, reviewed, and approved to demonstrate responsible organizational use. |

11. Data Privacy

(a) The Parties will comply with applicable data protection and privacy laws, including requirements governing automated decision-making.

(b) Client acknowledges that processing done by the AI System may occur in a different geographic region than the hosting location of the Software, subject to the security controls identified in the Agreement. For more information about where AI Systems process data, please refer to the list of sub-processors at https://www.cority.com/legal-center/cority-sub-processors/.

12. Training

(a) No Client data will be used to train Foundational LLMs unless permitted by Client.

13. Prohibited Activities

Client will not use AI Systems, whether directly or indirectly, in connection with the Agreement for any unlawful, unethical, or prohibited purpose under applicable laws (“Prohibited Practices”). Prohibited Practices include, without limitation: (a) the generation or dissemination of misleading, deceptive, or fraudulent content; (b) infringement or misappropriation of intellectual property, trade secrets, or privacy rights; (c) discrimination, harassment, or other violations of applicable law; (d) manipulation of data or outcomes in a manner inconsistent with the purpose of this Agreement; and (e) any activity that may cause reputational, legal, or regulatory harm to either Party. Each Party will implement reasonable safeguards to ensure compliance with this provision and will promptly notify the other Party of any known or suspected breach.

14. Suspension and Termination

Cority reserves the right to immediately suspend or terminate access to the AI System, without liability, if Cority (or its third-party providers) identifies a pattern of safety violations, conduct violations (e.g., racist or derogatory content), or third-party policy breaches.

15. Usage Data and Service Improvements

Notwithstanding anything to the contrary in the Agreement, Cority may collect and analyze “Usage Data” (defined as technical logs, metadata, performance metrics, and patterns of use) derived from Client’s interaction with the AI Systems. Usage Data does not include Input Data or Output. Client agrees that Cority owns all right, title, and interest in such Usage Data and may use it to: (a) maintain, protect, and improve the AI System and the Software; (b) monitor for security threats or Prompt Injection; (c) develop aggregated, de-identified insights; and (d) monitor consumption of AI Tokens. Cority will not use Usage Data in a manner that identifies Client or any natural person.

16. Change of Model Providers

Subject to Client’s right of objection under the data processing addendum where applicable, Cority reserves the right to modify or replace the underlying Foundational LLM and any third-party AI licensors, provided that such change does not materially diminish the security or functionality of the AI System.

17. Order of Precedence

In the event of a conflict between this Addendum and the Agreement, this Addendum will prevail.