We’ve all heard about Artificial Intelligence (AI), predictive modeling, and machine learning (ML). These innovations show great promise in revolutionizing the way we live and work. However, despite their many conveniences, ethical challenges remain top of mind, underscoring the need for responsible AI practices. The principles of responsible AI help us evaluate and deploy these tools to enhance our lives and business operations without jeopardizing our core values.
What is responsible AI?
Responsible AI is a concept that ensures AI systems are designed, developed, and deployed with ethical considerations, transparency, fairness, and accountability at their core. For every American, it establishes five pillars of protection, according to the Blueprint for an AI Bill of Rights.
An example of AI for water utilities
AI for water utilities exists in many capacities, but one of the most promising applications involves detecting the presence of lead in water service lines. By 2024, all U.S. water utilities must submit a service line inventory listing the locations and material types of their service lines, along with a lead service line replacement plan if lead is found, as required by the Lead and Copper Rule Revisions (LCRR).
Some state authorities have provided guidelines allowing the use of predictive modeling techniques, such as ML, to accelerate the identification of service line materials and find lead service lines as efficiently as possible. AI can be a powerful tool in LCRR compliance programs, but using it incorrectly can have significant public health implications. Therefore, responsible AI practices must be adopted to ensure that AI-driven predictions of service line material are accurate, fair, and transparent.
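To make this concrete, a predictive model for service line material might look like the following sketch. It uses scikit-learn on entirely synthetic data; the feature set (installation year and pipe diameter) and the model choice are illustrative assumptions, not a prescribed approach.

```python
# Toy sketch: predicting lead service lines from record data.
# All data here is synthetic; features and model are illustrative assumptions.
import random

from sklearn.ensemble import RandomForestClassifier

random.seed(42)

def synthetic_service_line():
    # In this toy data, older and smaller-diameter lines are more often lead.
    year = random.randint(1900, 2000)
    diameter = random.choice([0.75, 1.0, 1.5, 2.0])
    is_lead = 1 if (year < 1950 and diameter <= 1.0 and random.random() < 0.5) else 0
    return [year, diameter], is_lead

train = [synthetic_service_line() for _ in range(500)]
X = [features for features, _ in train]
y = [label for _, label in train]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Estimate the lead probability for an "unknown" line installed in 1930.
prob_lead = model.predict_proba([[1930, 0.75]])[0][1]
print(f"Estimated probability of lead: {prob_lead:.2f}")
```

In practice, a utility's model would draw on far richer records (permits, tax data, prior inspections), and each prediction would be weighed against field verification rather than acted on alone.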
>>YOU MIGHT LIKE: Can You Use Machine Learning Reliably to Develop LCRR Inventory?
Responsible AI involves 10 principles
Responsible AI implementation takes a multi-dimensional approach, accounting for ethical, technical, legal, and societal factors. The following 10 principles provide a framework for evaluating it.
1. Ethical frameworks and guidelines
The teams behind any AI tool should design, adopt, and deploy frameworks and guidelines that ensure all products and procedures, including AI systems, follow principles and practices aligned with company and human values (European Commission).
2. Inclusive and diverse teams
AI team members with different backgrounds and areas of expertise can account for a wide range of needs and biases in AI. This is especially important when using AI in niche industries such as water utilities. As mentioned previously, AI-driven predictions supporting LCRR compliance must have minimal errors. Ensuring accuracy starts with the team behind the AI, which should include water system and GIS experts, data and computer scientists, and product managers, to name a few. Such a team will recognize when more data is needed via field verifications, or when the results of an ML model are skewed by inputs from a particular data collection method.
3. Data quality and representation
The data used to train AI models should be accurate, high-quality, and representative of the population to minimize biases and improve the overall performance of the AI system. Let's go back to the LCRR compliance example. As a water utility considers the use of predictive modeling, it is important to confirm that the training data is truly representative of the unknowns in the water system. In other words, if the model makes system-wide assumptions based on data from a handful of neighborhoods where little to no lead is present, there is a real risk of missing lead service lines elsewhere in the system, delaying protection of the most vulnerable populations from continued lead exposure.
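One simple way to catch this kind of gap is to check what share of each neighborhood's service lines has actually been verified before trusting a model trained on that data. The sketch below is a toy check using only the Python standard library; the record structure (a `neighborhood` field) and the 50% coverage threshold are illustrative assumptions.

```python
# Toy representativeness check: flag neighborhoods where too few service
# lines have verified materials to support system-wide model predictions.
# The record structure and threshold are illustrative assumptions.
from collections import Counter

def coverage_gaps(labeled, unlabeled, min_share=0.5):
    """Return neighborhoods where verified records cover less than
    min_share of that neighborhood's total service lines."""
    labeled_counts = Counter(rec["neighborhood"] for rec in labeled)
    total_counts = Counter(rec["neighborhood"] for rec in labeled + unlabeled)
    gaps = []
    for hood, total in total_counts.items():
        share = labeled_counts.get(hood, 0) / total
        if share < min_share:
            gaps.append(hood)
    return sorted(gaps)

# Synthetic example: "north" is well verified, "south" barely at all.
labeled = [{"neighborhood": "north"}] * 40 + [{"neighborhood": "south"}] * 2
unlabeled = [{"neighborhood": "north"}] * 10 + [{"neighborhood": "south"}] * 48
print(coverage_gaps(labeled, unlabeled))  # → ['south']
```

A flagged neighborhood signals that targeted field verifications are needed there before the model's predictions for that area can be trusted.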
4. Transparency and explainability
Transparency is a key principle of responsible AI. When AI is used as part of LCRR compliance programs, it is crucial to clearly communicate the AI model’s methodology, assumptions, and limitations to stakeholders, including water utilities, regulators, and the public. This ensures that they understand how the model works, the factors it considers, and the level of uncertainty associated with its predictions. This level of transparency and explainability enables informed decision-making and fosters trust in the technology, which is essential for public health interventions.
5. Auditing and monitoring
When used as part of LCRR compliance programs, ML is an iterative process rather than a one-time event: the data used to train the ML model and evaluate its performance is regularly monitored to identify biases, inaccuracies, or other issues as they arise. Predictions should be audited by water system experts to ensure reliable results. The resulting monitoring and auditing information helps evaluate how the AI system evolves over time.
6. Privacy and security
Responsible AI implementation should ensure the privacy and security of user data. Robust data protection measures should be in place, along with adherence to relevant data protection regulations. Responsible AI practices mean your data is secure both in storage and in use, and that other cybersecurity measures, such as secure integrations with other systems, are also in place.
7. Stakeholder engagement
AI stakeholders, including users, domain experts, and the public that interacts with these tools, all play a role in responsible AI. Active cooperation is essential for understanding everyone’s needs, concerns, and expectations. Responsible AI communication can take many forms, from blog posts to dedicated training programs.
8. Legal compliance
Legal compliance includes continuous oversight of any AI-related products to ensure all systems comply with existing laws and regulations related to integrity, anti-discrimination, privacy, and data protection, among others.
9. Continuous learning and adaptation
The application of AI systems is not a one-and-done process. AI is ever-evolving and requires continuous learning and adaptation to keep up with the latest advancements in AI ethics, technology, and regulation. This means staying in active communication with state authorities, related organizations, and research and development teams across the United States and in countries that share the same values regarding responsible AI.
10. Accountability
The accountability principle emphasizes the importance of attributing responsibility for AI decisions and their consequences. Clear lines of responsibility should be drawn for developers, clients, and regulators to address any issues that may arise during deployment of new AI-related technology, further fostering public trust.
See responsible AI practices in use today
Trinnex® applies all 10 principles of responsible AI throughout our entire suite of products. From our LCRR compliance software and predictive modeling tools to our digital twin software, we have built in responsible AI principles. Want to learn more about responsible AI or how to evaluate the right tools? Reach out to our team today.