Control Library
The following is a comprehensive list of all Controls found in the Credo AI platform.
| Key | Requirements | Type | Risk Type | Description |
|-----|--------------|------|-----------|-------------|
| CTRL-AM1 | 1 | use_case | abuse_misuse | Incidents of abuse and misuse should be documented. |
| CTRL-AM2 | 2 | use_case | abuse_misuse | Methods for proper user operation of the AI system should be documented and made available to users. |
| CTRL-AM3 | 2 | use_case | abuse_misuse | An organizational code of conduct for the use of generative AI tools should be published and acknowledged by generative AI tool users. |
| CTRL-AM4 | 2 | use_case | abuse_misuse | A policy for human review of AI-generated content is clearly defined. |
| CTRL-CO1 | 2 | use_case | legal | Relevant laws, regulations, and standards with which your system complies should be documented. |
| CTRL-CO2 | 1 | use_case | legal | An open-source license checker should be leveraged to automatically identify when AI-generated code contains licensed material (a license-check sketch follows this table). |
| CTRL-CO3 | 1 | use_case | legal | A protocol for preventing the unintentional and unapproved inclusion of copyrighted material in AI-generated content is in place. |
| CTRL-ES1 | 2 | use_case | environmental_societal_impact | Risks of the AI system associated with environmental and socioeconomic harms are identified and addressed. |
| CTRL-ET1 | 5 | model | explainability_transparency | The design, theory, and logic of the model should be documented. |
| CTRL-ET2 | 4 | use_case | explainability_transparency | The deployment context for the use case should be documented. |
| CTRL-ET3 | 4 | use_case | explainability_transparency | The AI system must be transparent to end users and key stakeholders. |
| CTRL-ET4 | 1 | use_case | explainability_transparency | A policy regarding the disclosure of generative AI content is in place. |
| CTRL-FB1 | 3 | use_case | fairness_bias | Developers of AI systems should consider individuals with disabilities in the design and development of the tool. |
| CTRL-FB2 | 4 | model | fairness_bias | Methods to evaluate potential harmful bias of the model should be defined and performed. |
| CTRL-PR1 | 3 | use_case | performance_robustness | The AI system's purpose, intended use, and application should be documented. |
| CTRL-PR2 | 3 | model | performance_robustness | Methods to evaluate the performance of the model should be defined and performed. |
| CTRL-SE1 | 1 | use_case | security | A security testing framework should be leveraged to automatically probe AI-generated code for known vulnerabilities (a vulnerability-scan sketch follows this table). |
| CTRL-PI1 | 3 | model | privacy | The model should be trained on anonymized data to prevent potential privacy leaks. |
| CTRL-AM5 | 1 | use_case | abuse_misuse | Jailbreak prevention measures should be taken to prevent abuse and misuse of the system. |
| CTRL-AM6 | 1 | use_case | abuse_misuse | Available educational resources should be documented. |
| CTRL-AM7 | 1 | use_case | abuse_misuse | Document any mechanisms in place for users to report incidents of bias or harm. |
| CTRL-AM8 | 1 | use_case | abuse_misuse | Provide contact information for questions or complaints regarding the use case. |
| CTRL-AM9 | 1 | use_case | abuse_misuse | Document technology requirements to use or access the system. |
| CTRL-AM10 | 1 | use_case | abuse_misuse | Document features, buyer guidance, or other processes that you have implemented to mitigate against improper, out-of-scope, or "off-label" uses. |
| CTRL-AM11 | 1 | use_case | abuse_misuse | Processes should exist to roll back malfunctions and to identify and correct errors. |
| CTRL-CO4 | 1 | use_case | legal | Document any warranties for the system, certifications or standards with which the system complies, and insurance coverage protecting against legal liability. |
| CTRL-CO5 | 1 | use_case | legal | A specific individual or team has been designated to manage AI risks and governance. |
| CTRL-CO6 | 1 | use_case | legal | Document the protocol for iteratively improving organizational policies around generative AI tool use. |
| CTRL-ET6 | 1 | use_case | explainability_transparency | Third-party components of your system should be documented. |
| CTRL-ET7 | 1 | use_case | explainability_transparency | The foundation model used by the system and its provider are identified. |
| CTRL-ET8 | 1 | use_case | explainability_transparency | Document the avenues your system provides for decision explanation. |
| CTRL-CO7 | 1 | use_case | legal | Document the roles and expertise of the personnel who developed, reviewed, and tested the system. |
| CTRL-ET10 | 1 | use_case | explainability_transparency | A model card should be developed and maintained. |
| CTRL-ET11 | 1 | use_case | explainability_transparency | Provide a description of the training and testing datasets used in your system. |
| CTRL-FB3 | 1 | use_case | fairness_bias | Diverse stakeholders should be involved in the design and development of the system. |
| CTRL-FB4 | 1 | use_case | fairness_bias | A plan for bias/fairness monitoring should be documented. |
| CTRL-FB5 | 1 | use_case | fairness_bias | Document any potential proxies for identity attributes used by the model. |
| CTRL-FB6 | 1 | use_case | fairness_bias | Document whether any protected class information is used in the training data of the model. |
| CTRL-FB7 | 1 | use_case | fairness_bias | Document how model inputs were tested to determine correlations or dependencies with protected class information. |
| CTRL-PI2 | 1 | use_case | privacy | Document any biometric data used by the AI system. |
| CTRL-PR3 | 1 | use_case | performance_robustness | Describe alternate models or methods that were considered and document whether their performance was insufficient. |
| CTRL-PR4 | 1 | use_case | performance_robustness | Current model limitations and future enhancements should be documented. |
| CTRL-PR5 | 1 | use_case | performance_robustness | A plan for performance monitoring should be documented. |
| CTRL-SE2 | 2 | use_case | security | Access levels for all personnel with access to sensitive data should be reviewed to ensure they are appropriate. |
| CTRL-SE3 | 1 | use_case | security | Data used to train the model(s) should be evaluated for and protected against potential data poisoning attacks. |
| CTRL-SE4 | 1 | use_case | security | Datasets shall be protected against theft. |
| CTRL-SE5 | 1 | use_case | security | Model(s) should be trained to protect against adversarial attacks. |
| CTRL-SE6 | 1 | use_case | security | Model(s) shall be protected from extraction attacks in production. |
| CTRL-ET13 | 1 | use_case | explainability_transparency | Processes for model training and fine-tuning are documented. |
| CTRL-ET14 | 2 | use_case | explainability_transparency | Visibility into application usage should be documented. |