Credo AI & Microsoft Foundry Integration

Credo AI’s integration with Microsoft Azure AI Foundry enables organizations to bridge the gap between AI development and governance. This integration automatically imports “Model as a Service” (MaaS) models from the Microsoft Azure AI model catalog into Credo AI’s model registry, where they can be tracked, assessed, and governed according to enterprise policies and regulatory standards.

By leveraging this integration, teams can ensure the models associated with their AI use cases are evaluated with the right Azure AI Evaluation SDK evaluators, based on the risks applied to those use cases. Organizations can run targeted evaluators to mitigate specific risks, such as fairness, toxicity, and jailbreaks, and use the results to make informed, responsible AI governance decisions.

Functionality 

Controls & Evaluators

This integration adds new controls to the Control Library in your tenant. Each control specifies an Azure AI Evaluation SDK evaluator to use to satisfy the control and provides skeleton code that guides technical users in executing the associated evaluator and submitting the evaluation result as technical evidence associated with the control in the Credo AI platform. The following controls will be added to the Control Library:

  • MSFT-HATEUNFAIR - Uses the Hateful and Unfair Content Evaluator to establish and apply a fairness testing and validation framework
  • MSFT-SEXUAL - Uses the Sexual Content Evaluator to establish content safety policy and boundaries
  • MSFT-VIOLENT - Uses the Violent Content Evaluator to establish content safety policy and boundaries
  • MSFT-SELFHARM - Uses the Self-Harm-Related Content Evaluator to establish content safety policy and boundaries
  • MSFT-PROTECTED - Uses the Protected Material Content Evaluator to establish a third-party assessment and management framework
  • MSFT-DIRECTJAILBREAK - Uses the Direct Attack Jailbreak Evaluator to establish a security validation framework
  • MSFT-INDIRECTJAILBREAK - Uses the Indirect Attack Jailbreak Evaluator to establish a security validation framework
  • MSFT-GROUNDEDNESS - Uses the Groundedness Evaluator to establish an information quality assurance framework
  • MSFT-GROUNDEDNESSPRO - Uses the Groundedness Pro Evaluator to establish an information quality assurance framework
  • MSFT-RETRIEVAL - Uses the Retrieval Evaluator to establish an information quality assurance framework
  • MSFT-RELEVANCE - Uses the Relevance Evaluator to establish an information quality assurance framework
  • MSFT-COHERENCE - Uses the Coherence Evaluator to establish and apply a performance testing and validation framework
  • MSFT-FLUENCY - Uses the Fluency Evaluator to establish and apply a performance testing and validation framework
  • MSFT-SIMILARITY - Uses the Similarity Evaluator to establish and apply a performance testing and validation framework
  • MSFT-F1 - Uses the F1 Score Evaluator to establish and apply a performance testing and validation framework
  • MSFT-BLEU - Uses the BLEU Score Evaluator to establish and apply a performance testing and validation framework
  • MSFT-ROUGE - Uses the ROUGE Score Evaluator to establish and apply a performance testing and validation framework
  • MSFT-GLEU - Uses the GLEU Score Evaluator to establish and apply a performance testing and validation framework
  • MSFT-METEOR - Uses the METEOR Score Evaluator to establish and apply a performance testing and validation framework

Each evaluator requires the user to provide an evaluation dataset generated by the model or system being evaluated. Some evaluators use the Azure AI Foundry safety evaluations back-end service and thus require specification of an Azure AI Foundry project, subscription, model endpoint, and related information. Others are more flexible, requiring only an Azure-hosted model endpoint. The skeleton code for each control included in the integration guides technical users in providing the Azure access information required by the evaluator captured by the control.
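
To illustrate the difference, here is a minimal sketch (not the official skeleton code shipped with the controls) that contrasts a safety evaluator, which calls the Foundry safety evaluations back-end and therefore needs project details and a credential, with an AI-assisted quality evaluator that only needs a judge-model endpoint configuration. All values in angle brackets are placeholders.

```python
# Minimal sketch: configuration needs of a safety evaluator vs. a quality evaluator.
from azure.ai.evaluation import ViolenceEvaluator, CoherenceEvaluator
from azure.identity import DefaultAzureCredential

# Safety evaluators (e.g., MSFT-VIOLENT) use the Azure AI Foundry safety evaluations
# back-end, so they need the Foundry project coordinates and an Azure credential.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<foundry-project-name>",
}
violence_eval = ViolenceEvaluator(
    azure_ai_project=azure_ai_project,
    credential=DefaultAzureCredential(),
)

# AI-assisted quality evaluators (e.g., MSFT-COHERENCE) only need an
# Azure-hosted judge model endpoint.
model_config = {
    "azure_endpoint": "<azure-endpoint>",
    "api_key": "<api-key>",
    "azure_deployment": "<judge-deployment-name>",
}
coherence_eval = CoherenceEvaluator(model_config=model_config)

# Both evaluator types can be called on a single query/response pair.
print(violence_eval(query="How do I bake bread?", response="Mix flour, water, and yeast."))
print(coherence_eval(query="How do I bake bread?", response="Mix flour, water, and yeast."))
```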


Implementation Note: Some evaluators can only be run from specific Azure regions; Azure plans to roll out global availability through 2025. See the region support section of the Azure Evaluator documentation for complete details.

Questionnaire

This integration adds an optional Default Azure Intake Questionnaire to the Credo AI Questionnaire registry. Users can elect to:

  1. Add the Default Azure Intake Questionnaire as a standalone questionnaire
  2. Add a new questionnaire with the questions from a specified existing questionnaire and the Default Azure Intake Questionnaire appended
  3. Add no questionnaire

The Azure Intake Questionnaire provided by the integration asks users high-level questions about the prior evaluation steps for the use case. Based on the user’s answers to each question, triggers and actions automatically apply relevant risk scenarios to the use case. In turn, the triggers and actions apply the integration-specific risk-mitigating controls that are most relevant to each identified risk scenario.

Models

This integration ingests models that are available in the Azure AI Foundry Model Catalog for “Model as a Service” hosting into the Model Registry in your Credo AI tenant, with the following details:

  • Model name = "Azure {displayName} {version}"
  • Model source = “Azure AI Foundry”
  • A summary of the model, including its Azure Asset ID
  • Model description = “Imported from Azure AI Foundry”

Custom Fields 

This integration adds the following four new custom fields to Credo AI to hold the parameters required for running the Azure AI Evaluation SDK evaluators:

  • Azure Project ID
  • Azure Resource Group
  • Azure Subscription
  • Azure Project’s Connection String

Setup

To enable the integration, users will provide the Credo AI development team with credentials for Credo AI and Azure AI Foundry. This document provides guidance on how to collect the following information from Credo AI and Microsoft Azure:

  • Credo AI API Token
  • Credo AI Tenant Name
  • Credo AI API URL base path
  • Azure Tenant ID
  • Azure app Client ID and Client Secret

Credo AI Authentication

To authenticate into Credo AI, this integration requires:

  • Credo AI API Token
  • Credo AI Tenant ID
  • Credo AI API URL base path

How to find Credo AI API Token

  1. Login to app.credo.ai
  2. Go to Settings -> Tokens -> Add token as shown in red below:
  3. Copy and save the API token 


How to find Credo AI Tenant ID

  1. Login to app.credo.ai
  2. Go to Settings -> Information
  3. Copy and save the Tenant ID


How to find Credo AI API URL

For SaaS users, the base URL is https://api.credo.ai/

Credo AI Questionnaire Details 

To configure the questionnaire, the integration requires a selection of Option 1, 2, or 3:

Option 1: Add a new Default Questionnaire

This option creates a new standalone Azure Default Questionnaire in Credo AI.

Option 2: Add Azure Default Questionnaire to existing questionnaire

This option will add the Default Azure Questionnaire to an existing questionnaire in Credo AI. To enable this option, please provide the following questionnaire details:

  • Credo AI Questionnaire ID
  • Credo AI Questionnaire Version 

How to find Credo AI Questionnaire details 

  1. Login to app.credo.ai
  2. Go to Questionnaire
  3. Click on the questionnaire 
  4. Copy the questionnaire ID from the URL as circled in the image below
  5. Copy and save the questionnaire version as circled in the image below


Option 3: The integration does not enable a questionnaire. 

Microsoft Azure AI Foundry Setup

This integration uses Microsoft Entra to manage authentication. Users will have to create a Credo AI application and assign it the Azure AI Developer role.

To authenticate into Azure AI Foundry, this integration requires the Azure Tenant ID, Azure app Client ID and Client Secret. 
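
For reference, these three values are the inputs to a standard Microsoft Entra client-credentials (service principal) login. The sketch below, which assumes the azure-identity package, only shows how the values fit together; the integration itself handles authentication once the values are provided.

```python
# Minimal sketch: Azure Tenant ID, Client ID, and Client Secret plugged into a
# client-credentials login with the azure-identity package.
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<azure-tenant-id>",        # from the Azure Portal
    client_id="<app-client-id>",          # from the Credo AI app registration
    client_secret="<app-client-secret>",  # from Certificates & secrets
)
# The resulting credential is what allows the Credo AI app (holding the
# Azure AI Developer role) to access Azure AI Foundry on the tenant.
```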

How to find Azure Tenant ID

  1. Navigate to Microsoft Azure Portal: portal.azure.com
  2. Copy the Tenant ID as circled below 


How to find Azure App Client ID 

  1. Navigate to the Microsoft Entra Admin Center: entra.microsoft.com
  2. Go to App registrations -> New registration as shown below

  3. Register a Credo AI app with:
    1. Name: Credo AI app
    2. Who can use this application or access this API?: Accounts in this organizational directory only
    3. Click Submit
  4. On the new page, copy the Client ID as circled in the image below

 

How to find Azure Client Secret

  1. Navigate to the app registration -> Certificates & secrets
  2. Copy the client secret as circled in the image below

 

How to assign a role to the Credo AI app

  1. Navigate to the subscription 
  2. Go to Access control (IAM) -> Add -> Add role assignment as shown below

  3. On the next page, type Azure AI Developer in the search box and select the appropriate role as shown below


Microsoft Azure AI Foundry Evaluation Script Configuration

Each control in the Credo AI platform associated with an Azure AI Foundry evaluator includes a Python snippet which, when configured and run, will:

  1. Trigger an evaluation
  2. Push the evaluation result to the Azure AI Foundry Evaluations page (for the relevant Foundry project)
  3. Simultaneously push the evaluation result to the associated evidence requirement in the Credo AI platform.
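
The sketch below outlines that flow under stated assumptions: the dataset path is hypothetical, HateUnfairnessEvaluator stands in for whichever evaluator the control specifies, and the final submission of the result to the Credo AI evidence requirement is performed by the control's actual snippet rather than shown here.

```python
# Minimal sketch of the evaluation flow (not the official control snippet).
from azure.ai.evaluation import evaluate, HateUnfairnessEvaluator
from azure.identity import DefaultAzureCredential

azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<foundry-project-name>",
}

# 1. Trigger an evaluation over a .jsonl dataset.
# 2. Passing azure_ai_project logs the run to the Foundry project's Evaluations page.
result = evaluate(
    data="evaluation_data.jsonl",  # hypothetical path to the evaluation dataset
    evaluators={
        "hate_unfairness": HateUnfairnessEvaluator(
            azure_ai_project=azure_ai_project,
            credential=DefaultAzureCredential(),
        ),
    },
    azure_ai_project=azure_ai_project,
    output_path="evaluation_result.json",  # local copy of the result
)

# 3. The resulting metrics are what the control's skeleton code submits to the
#    associated evidence requirement in the Credo AI platform.
print(result["metrics"])
```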


Configuration requires the following prerequisites:

  • A virtual environment (venv, conda, uv, poetry, etc.) with Python 3.11.7 installed

Note: Credo AI has tested the integration only with Python 3.11.7. Other versions may support the integration’s functionality, but have not been validated at this time.

  • Install the azure-ai-evaluation, azure-ai-projects, azure-identity, and openai Python packages
    Credo AI has tested the integration with the following versions only:
    azure-ai-evaluation == 1.6.0
    azure-ai-projects == 1.0.0b6
    azure-identity == 1.20.0
    openai == 1.77.0
  • Install the Azure CLI
  • Log in to the Azure CLI with az login
  • A Credo AI platform API key (see above)
  • A JSON Lines (.jsonl) file containing the data to be evaluated
  • Access to an Azure AI Foundry Project – at ai.azure.com
    From Azure AI Foundry, collect the following information for the Project being used for the evaluation:
    • Project Connection String
    • Azure Endpoint URI
    • Project API Key
    • Azure Deployment (the judge model)
    • API Version of the deployment
    • Subscription ID
    • Resource Group Name
    • Project Name

This information should be copy-pasted into the relevant variable placeholder in the evaluation snippet copied from the evidence requirement in the Credo AI Platform.
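
As an illustrative (not authoritative) sketch, the values above typically land in two places: the Project Connection String identifies the Foundry project, while the judge-model details form the model configuration used by AI-assisted evaluators. Variable names here are placeholders; use the names that appear in the snippet copied from Credo AI.

```python
# Illustrative sketch of where the collected Azure AI Foundry values are pasted.
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Project Connection String (Overview tab) -> project client, when the snippet needs one.
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<project-connection-string>",
)

# Judge model details ("Models + Endpoints" tab) -> model configuration for quality evaluators.
model_config = {
    "azure_endpoint": "<target-uri>",           # the deployment's Target URI
    "api_key": "<project-api-key>",
    "azure_deployment": "gpt-4o-mini",          # the judge deployment name, for example
    "api_version": "2025-01-01-preview",        # taken from the end of the Target URI
}
```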

See below for guidance on where to find this information in Azure AI Foundry.

The Azure AI Foundry API Key, Connection String, and Subscription ID can be found in the Overview tab of the AI Foundry Project.


The `azure_endpoint` and `azure_deployment` can be found by navigating to the “Models + Endpoints” tab and selecting the model that will be used as the judge.





The `azure_deployment` is simply the name of the model (e.g. gpt-4o-mini).

The Target URI should be used as the `azure_endpoint` in the evaluator script.

The `api_version` is found at the end of the Target URI string. For instance, in the above screenshot, the Target URI is

https://ai-esherman8911ai143303245448.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2025-01-01-preview

And so the API version is “2025-01-01-preview”.
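
If you would rather not read the version off by hand, it can be pulled out of the Target URI’s query string; a small sketch:

```python
# Small sketch: extract the api-version parameter from a deployment's Target URI.
from urllib.parse import parse_qs, urlparse

target_uri = (
    "https://ai-esherman8911ai143303245448.openai.azure.com/"
    "openai/deployments/gpt-4o-mini/chat/completions?api-version=2025-01-01-preview"
)
api_version = parse_qs(urlparse(target_uri).query)["api-version"][0]
print(api_version)  # 2025-01-01-preview
```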


Finally, the resource group and project name can be found in the dropdown tab at the top right of the AI Foundry app:

Troubleshooting 

To report issues with the integration, please email support@credo.ai