Responsible AI Practices
AI built responsibly. Solutions you can trust.
We develop AI features responsibly by design, empowering our customers with the tools and capabilities they need to deploy, configure, and use AI confidently to meet their goals.
Responsible AI at Âé¶¹´«Ã½.
Âé¶¹´«Ã½ is an industry leader in responsible AI (RAI) governance, and we want our AI to be a force for good. That's why we thoroughly assess and manage AI risks. By implementing responsible AI by design, our team collaborates across Âé¶¹´«Ã½ to make sure our AI respects human rights and safety and truly benefits society.
We've proactively partnered with Coalfire, a cybersecurity leader, to evaluate our program against the NIST AI Risk Management Framework, and with Schellman to certify our program to ISO 42001 standards. These independent evaluations validate our commitment to the highest standards of responsible AI, including controls for AI security and privacy.
Robust risk evaluation.
Not all AI carries the same risks to people's rights and safety. Our first step is always a careful risk assessment. This helps us focus on areas of potential concern, ensuring all of our AI, whether for customers or our own teams, is responsible and ethical.
There are multiple checkpoints along the way, and we review our AI use cases at least twice: once while we're building them and again before they're used. This helps us spot potential issues early on. We then create clear plans to handle those concerns, depending on whether we're the ones developing the AI or deploying it.
How we evaluate risk.
Each AI use case is assigned a risk tier determined by factors including the product's ability to make predictions or categorizations relating to individual workers, its potential for primary or secondary impact on their economic opportunities, and additional aspects relating to the characteristics or context of the AI being utilized. We also account for "Prohibited Risk" in this evaluation to identify use cases that fall outside of our approved governance structure due to potential harm to people or threats to fundamental human rights.
Each risk tier is mapped to a specific set of protocols, or standards, to mitigate those risks. Higher-risk AI features are subject to more rigorous requirements and a greater number of protocols. This ensures that the level of oversight is proportionate to the potential risks involved when the product is put to use.
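The tier-to-protocol mapping described above can be pictured as a simple lookup: higher tiers add protocols on top of a base set, and prohibited use cases are rejected outright. A minimal sketch, with tier names and protocol labels that are purely illustrative, not Âé¶¹´«Ã½'s actual taxonomy:

```python
# Illustrative sketch only: hypothetical tier names and protocol labels.
# Every tier inherits the base protocols; higher tiers add stricter ones.
BASE_PROTOCOLS = {"risk_identification", "roles_and_responsibilities", "utility_evidence"}

TIER_PROTOCOLS = {
    "low": set(),
    "medium": {"fairness_testing", "explainability"},
    "high": {"fairness_testing", "explainability", "human_in_the_loop", "scheduled_testing"},
}

def required_protocols(tier: str) -> set:
    """Return all protocols required for a risk tier."""
    if tier == "prohibited":
        # Prohibited-risk use cases fall outside the approved governance structure.
        raise ValueError("prohibited use cases are not approved for development")
    return BASE_PROTOCOLS | TIER_PROTOCOLS[tier]
```

This mirrors the idea that oversight is proportionate to risk: a low-tier feature carries only the base obligations, while a high-tier feature accumulates every additional protocol.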
Whether we're building the AI or using it, our evaluations are tailored to match. This helps us look ahead, catch potential issues early, and make sure our AI is developed and used safely and responsibly. Our internal policies further define prohibited AI use cases and roles and responsibilities, and outline the specific timing and requirements for conducting the RAI risk evaluation.
Our commitment to responsible AI.
The responsible use of AI is an ongoing journey. We are committed to continuously monitoring evolving regulatory trends, societal expectations, and best-practice frameworks. Our team continues to actively evolve its practices and programs to ensure our approach to AI development and deployment remains fair, reliable, ethical, and aligned with the highest standards of responsible AI.
We're guided by AI governance frameworks such as the NIST AI Risk Management Framework (AI RMF). We also engage in ongoing assessment of current and developing regulations against our RAI program and practices, including the EU AI Act; Colorado's AI Act; and other emerging state, local, national, and international regulations and guiding frameworks.
At Âé¶¹´«Ã½, we prioritize transparency, providing clear information to support decision-making. Below, we describe the key responsible AI protocols we've adopted for risk-based governance across our AI lifecycle, which we share with our AI development and deployment teams based on the risk level of the use case, as described above.
AI Responsibility
Risk identification.
Development description.
Proactively identify and assess the risk level of the overall product or feature.

This includes ethical, social, and technical risks inherent to the AI feature's intended use case and characteristics, allowing us to mitigate potential downstream harms in production before they occur.

We do this not only to understand the potential harms but also the potential benefits when the feature works well, and to chart a path to delivering it safely.
Deployment description.
Similar to development.
Roles and responsibilities.
Development description.
Clearly define the roles and responsibilities of different teams involved in developing AI features; for example, Product & Technology, ML Engineering, Legal and Compliance, Privacy, and others. This ensures accountability, collaboration, and diversity of input throughout the AI development lifecycle.

In addition, it organically grows the network of experts that can be tapped for questions and support when new teams navigate the governance structure.

Finally, this allows for structured contribution to the RAI governance framework as new edge cases are identified and as the business shifts, changes, and grows.
Deployment description.
Similar to development.
Utility evidence.
Development description.
Gather and document information demonstrating the utility of the AI feature. This should show how the feature achieves its intended purpose and provides value to users.

Put another way, does the AI feature meaningfully add value for users?
Deployment description.
Similar to development.
Transparency and explainability.
Explainability.
Development description.
Provide clear explanations for how the AI feature works and how its outputs are derived. This can include documentation for customers such as AI fact sheets, user interface descriptions, and other methods to promote transparency and understanding of what data is being used to generate the outputs.

Deployment description.
Similar to development.
Interpretability.
Development description.
Work to make AI features' outputs as understandable and clear as possible to customers and users.

Provide clear explanations and supporting materials within AI fact sheets to help customers and end users understand the meaning and implications of the AI's outputs in the context of the intended use case(s).

Deployment description.
Similar to development.
Notice.
Development description.
Design the AI feature with clear and accessible notices informing end users that they are interacting with an AI system.

Provide guidance and default language that customers can use to describe the type of data processed by the AI feature.

Notice can appear either through text or graphics indicating that the feature utilizes AI and that its output should be considered accordingly.

Deployment description.
Similar to development.
Human-centric design and control.
Human in the loop.
Development description.
Design the AI feature to support human oversight and control. We provide documentation to customers explaining how the AI features' outputs are intended to support, not replace, consequential human decision-making.

The feature should also incorporate a practical user experience in which humans make the final call on critical decisions, accepting or adjusting the feature's outcomes, reinforced by standards for explainability in its outputs.

Deployment description.
Similar to development.
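One way to picture a human-in-the-loop gate: the AI produces a suggestion with a rationale, but the suggestion is only finalized through an explicit human action (accept, adjust, or reject). A minimal sketch under that assumption; the class and action names are hypothetical, not an actual Âé¶¹´«Ã½ API:

```python
# Hypothetical sketch: AI output stays "pending" until a human decides.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    value: str
    rationale: str           # supports explainability of the output
    status: str = "pending"  # no effect until a human acts

def human_decision(suggestion: Suggestion, action: str,
                   adjusted_value: Optional[str] = None) -> Suggestion:
    """Finalize a suggestion only via an explicit human action."""
    if action == "accept":
        suggestion.status = "accepted"
    elif action == "adjust":
        suggestion.value = adjusted_value
        suggestion.status = "adjusted"
    elif action == "reject":
        suggestion.status = "rejected"
    else:
        # No silent auto-apply path exists: unknown actions are errors.
        raise ValueError(f"unknown action: {action}")
    return suggestion
```

The design choice worth noting is the absence of any code path that applies a suggestion without a human action, which is the behavioral core of the protocol described above.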
Alternative procedures.
Development description.
Design the AI feature to allow for alternative data-processing procedures, such as human review, in place of the AI feature's standard processing when appropriate. Provide clear instructions and documentation to customers on how to implement these alternative procedures.

Deployment description.
Ensure data subjects are given options, in the user interface or elsewhere (such as data subject communications), to request alternative procedures to the AI solution processing their data and surfacing individualized results. This could include human review as opposed to machine-only review.
Inclusivity.
Development description.
Design the AI feature with inclusivity in mind, ensuring it is accessible and usable by diverse end users. Consider factors such as language, culture, disability status, and other potential barriers to access and engagement. Ensure the quality of the user experience is not diminished regardless of ability and/or preferences.

Deployment description.
Ensure the AI solution provides reasonable optionality within the AI solution user interface. This enables diverse end users to access and engage with it in ways that promote equity.
Embedded exports.
Development description.
Provide options for customers to access relevant exported data from the AI feature. This enables customers to conduct their own monitoring and testing together with their own experts, promoting transparency and control.

Deployment description.
Ensure the team configuring the product receives sufficient instruction on specifying options for accessing and exporting the AI solution output data required to test the solution's performance.
Configurability.
Development description.
Design the AI feature with configurability in mind, allowing customers to tailor its functionality to their specific needs and preferences. Provide clear documentation and tools to support customer configuration decisions. This follows the spirit of previous protocols, keeping humans at the center of all decisions, including how they choose to engage with features of the Âé¶¹´«Ã½ AI platform.

Deployment description.
Configure the AI solution to fit the local intended use case(s).
Testing and monitoring.
Fairness testing.
Development description.
Conduct fairness testing on the AI feature. For development, this can be accomplished using synthetic data or aggregate samples of outputs, depending on availability. Analyze results for potential biases and document the findings and mitigation strategies. We include a descriptive summary of developer-side fairness testing where relevant within our AI feature fact sheets.

Deployment description.
For deployment, consider using real feature outputs where available.
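A common form such a fairness test can take, shown here on synthetic outputs, is a disparate-impact check: compare selection rates between groups and flag the result if the ratio falls below a threshold. This is a minimal sketch; the group data, metric choice, and the four-fifths (0.8) threshold are assumptions for the example, not a statement of the testing actually performed:

```python
# Illustrative fairness check on synthetic binary outputs (1 = positive prediction).
def selection_rate(outcomes):
    """Fraction of positive outcomes for a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Synthetic sample data for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
passes = ratio >= 0.8  # "four-fifths rule" threshold, assumed for the example
```

On this synthetic data the ratio is 0.6, so the check fails and would trigger the documented mitigation step.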
Efficacy.
Development description.
Rigorously test the AI feature to ensure its outputs are accurate and reliable for the intended purpose and use case. Document the testing methodology and results, demonstrating the AI's ability to produce accurate outputs.

Deployment description.
Similar to development.
Robustness.
Development description.
Test the AI feature's ability to maintain performance under various conditions, such as different input data, user settings, and populations. Document the testing procedures and results, demonstrating the AI's ability to maintain performance across different scenarios.

Deployment description.
Similar to development.
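One simple shape such a robustness test can take is a perturbation check: add small noise to the inputs and verify the output stays near the unperturbed baseline. A hedged sketch, where `model` is a stand-in scoring function and the noise and tolerance values are arbitrary, not a real Âé¶¹´«Ã½ model or its actual test parameters:

```python
import random

def model(features):
    """Stand-in scorer for the example: weighted sum clamped to [0, 1]."""
    score = 0.4 * features[0] + 0.6 * features[1]
    return min(max(score, 0.0), 1.0)

def robustness_check(features, noise=0.01, trials=100, tolerance=0.05, seed=0):
    """True if small input perturbations keep the output near the baseline."""
    rng = random.Random(seed)  # fixed seed so the test is reproducible
    baseline = model(features)
    for _ in range(trials):
        perturbed = [x + rng.uniform(-noise, noise) for x in features]
        if abs(model(perturbed) - baseline) > tolerance:
            return False
    return True
```

Documenting the chosen noise level, trial count, and tolerance alongside the results is what turns a one-off check like this into the repeatable procedure the protocol asks for.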
Scheduled testing.
Development description.
Develop and maintain a regular schedule for testing and monitoring the AI feature's performance, including accuracy, robustness, utility, and fairness. Define, implement, and document the testing frequency and procedures.

Deployment description.
Similar to development.
Maintenance standards.
Development description.
Establish clear standards and procedures for ongoing maintenance and updating of the AI feature and its underlying ML models. Define the criteria for when updates are necessary and how they will be implemented and communicated.

Deployment description.
Document standards to be used in determining when and whether the AI solution or its configuration should be updated and/or reevaluated.
Privacy and security.
Data quality.
Development description.
Ensure the data used to develop the AI feature is of high quality, appropriate for the intended use case(s), and representative of the relevant populations. This is demonstrated through transparent documentation practices such as our AI feature fact sheets.

Deployment description.
Similar to development.
Traceability.
Development description.
Design the AI feature to support system monitoring and traceability capabilities.

Deployment description.
Ensure there is a mechanism in place that supports system monitoring and traceability.
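A traceability mechanism typically means each AI output is logged with enough context to reconstruct it later: a timestamp, the model version, and a fingerprint of the inputs. A minimal sketch under that assumption; the field names and the model-version string are hypothetical:

```python
# Illustrative audit-log entry for one AI prediction.
import hashlib
import json
from datetime import datetime, timezone

def trace_record(model_version, inputs, output):
    """Build a traceability record: timestamp, version, input hash, output."""
    # Canonical JSON (sorted keys) so identical inputs always hash identically.
    payload = json.dumps(inputs, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "output": output,
    }

record = trace_record("skills-match-1.2", {"skills": ["sql", "python"]}, 0.87)
```

Hashing the inputs rather than storing them raw is one way to keep an audit trail without duplicating personal data into the log itself.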
Location exclusion.
Development description.
Provide customers with the ability to control the geographic availability of the AI feature. This allows customers to comply with local laws and regulations by managing the availability of the AI feature in different regions.

Deployment description.
Similar to development.
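In practice this kind of control can be as simple as a per-feature region allowlist that defaults to off. A minimal sketch; the feature name, region codes, and default-off posture are assumptions for the example:

```python
# Illustrative customer-controlled geographic allowlist for AI features.
def feature_available(feature, region, config):
    """A feature is available only in regions the customer has enabled."""
    return region in config.get(feature, set())

# Customer enables the (hypothetical) feature only for US and Canada.
config = {"ai_summaries": {"US", "CA"}}

feature_available("ai_summaries", "US", config)    # True
feature_available("ai_summaries", "DE", config)    # False: region not enabled
feature_available("other_feature", "US", config)   # False: feature defaults to off
```

Defaulting to off means a newly shipped AI feature never becomes available in a region until the customer explicitly enables it there, which is what puts regulatory compliance in the customer's hands.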
Updating and withdrawal.
Development description.
Develop a comprehensive change management plan for updates to the AI feature and its underlying ML models. This plan should include communication protocols to inform customers about updates and any potential impact on their use of the feature.

Deployment description.
Develop and document a specified change management plan for updates to the AI solution or its configuration.
Efficacy management.
Development description.
Design the AI feature with safeguards to mitigate potential vulnerabilities and risks to its efficacy. This includes protecting against adversarial attacks, data poisoning, and other attempts to exploit or undermine the AI system.

Deployment description.
Ensure that vulnerabilities to end-user human error and to bad actors seeking to "game" the AI solution or otherwise put it or our intellectual property security at risk are identified, mitigated, and managed.