Responsible AI: A Guide for Building a Digital Moral Compass

By Jason Roys

Artificial intelligence (AI) systems can learn from real-world data, recognize patterns, adapt to changing circumstances and improve their performance over time. What they don’t have is a moral compass. Maybe that’s why a chatbot driven by OpenAI’s ChatGPT and built into Microsoft’s search engine tried to convince a New York Times reporter to leave his wife, insisting that he didn’t really love her.

The focus of the AI field is increasingly turning to the development and deployment of systems that are ethical, fair, transparent, accountable, and respectful of human values. In other words, responsible AI.

Whereas human beings (most of us anyway) have built-in systems for detecting bias, telling right from wrong, and perceiving unethical practices, an AI system needs to be “taught” those capabilities using training data. In the healthcare, government, and business sectors, the need to ensure accountability, secure sensitive data, manage the privacy of individuals, and root out bias and unfairness is paramount. It's a top priority for us at SDV INTERNATIONAL as well. 

New technologies, especially ones that society doesn’t understand well, often prompt dire warnings about consequences we can’t anticipate. When it comes to AI systems, I think caution is warranted. Even the CEO of OpenAI, Sam Altman, testifying May 16, 2023, before members of a Senate subcommittee, largely agreed with lawmakers on the need to regulate AI technology.

“I think if this technology goes wrong, it can go quite wrong,” he said. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”  

Also in May 2023, the Biden Administration outlined what it expects of companies that are leading AI development. Vice President Kamala Harris and senior administration officials met with industry leaders — CEOs of Alphabet, Anthropic, Microsoft and OpenAI — to underscore the importance of driving responsible, trustworthy, and ethical AI innovation to mitigate risks and potential harms to individuals and society.  

Given the attention being paid to this emerging field of ethical AI, I’m devoting this article to the definition and development of responsible AI, independent verification and validation (IV&V), best practices and what the future may hold.

WHAT IS RESPONSIBLE AI?  

At its most basic, responsible AI is a governance framework that aims to address the potential risks and challenges associated with AI systems and ensure that benefits are realized in a responsible manner.   

AI confers many benefits because of its ability to quickly analyze reams of data, spot patterns, identify anomalies, and improve its performance. I’ve written previously about how AI, particularly the subset known as machine learning, is transforming healthcare by providing faster and more accurate diagnoses; personalizing treatment; accelerating biomedical research; and finding cost-savings and efficiencies.  

It is used by commercial enterprises to tailor products and services to customers. It is used by governments to, among other things, spot waste, fraud and abuse; improve customer service through chatbots and virtual assistants; and deploy zero-trust architecture in their IT systems to detect and prevent cyberattacks.

But the ethical implications are also well documented. As NPR reported recently, “Artificial intelligence is quickly getting better at mimicking reality, raising big questions over how to regulate it. And as tech companies unleash the ability for anyone to create fake images, synthetic audio and video, and text that sounds convincingly human, even experts admit they're stumped.”  

Subtler but no less serious consequences include: 

  • Violating constitutional rights by using facial recognition to surveil individuals. 

  • Perpetrating financial fraud. 

  • Trusting solutions created by an algorithmic “black box” with built-in bias.  

The main driver in the move toward responsible AI is lack of trust in AI, according to a 2021 study by KPMG. Among the factors that influence trust is belief about “the adequacy of current regulations and laws to make AI use safe,” the study reported.  

Here are seven key principles of responsible AI:  

1. AN ETHICAL FRAMEWORK  

A responsible AI initiative emphasizes the importance of aligning AI systems with ethical principles and values. It involves detecting bias, discrimination and unfairness and ensuring that AI is used in ways that respect human rights and societal values.  

2. FAIRNESS

Responsible AI initiatives use fairness practices to mitigate biases in data and algorithms to avoid unfair outcomes. This involves careful selection and preprocessing of training and testing data, as well as evaluating and adjusting machine learning models to ensure equitable treatment of individuals across ethnicities, age groups and other demographic characteristics.
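To make that evaluation step concrete, here is a minimal sketch, assuming a binary classifier and a single protected attribute, of how a team might compare selection rates and true-positive rates across groups. The group labels and data are invented for illustration; real projects would use their own metrics and thresholds.

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Compare selection and true-positive rates across groups.

    y_true, y_pred: arrays of 0/1 labels and 0/1 predictions.
    group: array of group labels (e.g., demographic categories).
    """
    report = {}
    for g in np.unique(group):
        mask = group == g
        selection_rate = y_pred[mask].mean()           # P(prediction = 1 | group = g)
        positives = mask & (y_true == 1)
        tpr = y_pred[positives].mean() if positives.any() else float("nan")
        report[g] = {"selection_rate": selection_rate, "true_positive_rate": tpr}

    rates = [v["selection_rate"] for v in report.values()]
    tprs = [v["true_positive_rate"] for v in report.values()]
    report["demographic_parity_gap"] = max(rates) - min(rates)
    report["equal_opportunity_gap"] = max(tprs) - min(tprs)
    return report

# Illustrative data only: predictions for two hypothetical groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(fairness_report(y_true, y_pred, group))
```

Large gaps between groups are a signal to revisit the training data or adjust the model before deployment, not proof of intent, but they make the fairness conversation measurable.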

3. TRANSPARENCY  

No more “black box”! Transparency means individuals can understand how and why certain decisions are made by AI systems. A responsible AI toolkit promotes transparency in the design, development and use of AI systems. It makes algorithms and decision-making processes understandable and interpretable to users and stakeholders.   
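One common way to make a model’s behavior more interpretable is to report which input features most influence its predictions. The sketch below, assuming a scikit-learn-style model and tabular data, uses permutation importance; the synthetic dataset and feature names are placeholders for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a real decision-support dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```

A ranked list like this doesn’t fully open the black box, but it gives users and stakeholders a plain-language starting point for asking why the system decided what it did.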

4. ACCOUNTABILITY  

Responsible AI emphasizes accountability for the impact of AI systems — who gets the credit when things go right, and who gets the blame when they don’t. It involves clearly defining roles and responsibilities for developers, operators and users. It also includes mechanisms for handling complaints, providing feedback, explaining AI decisions, and addressing any negative consequences caused by AI systems.  

5. DATA PRIVACY AND SECURITY  

Responsible AI strategy respects individuals' data privacy rights and ensures the security of personally identifiable information. It involves implementing robust data protection measures, obtaining informed consent for data usage, and handling data in compliance with relevant regulations and best practices.  
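As one small example of what “robust data protection measures” can look like in practice, here is a sketch of pseudonymizing direct identifiers with a keyed hash before records enter a training pipeline. The field names are hypothetical, and a real deployment would also need key management, access controls and compliance review.

```python
import hashlib
import hmac

# In practice the key would come from a secrets manager, never from source code.
PSEUDONYM_KEY = b"replace-with-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"patient_name": "Jane Doe", "ssn": "123-45-6789", "age": 47, "diagnosis": "J45.909"}

# Drop or tokenize direct identifiers; keep only the fields the model needs.
safe_record = {
    "patient_id": pseudonymize(record["ssn"]),
    "age": record["age"],
    "diagnosis": record["diagnosis"],
}
print(safe_record)
```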

6. HUMAN OVERSIGHT AND CONTROL  

HAL 9000 in “2001: A Space Odyssey” tells astronaut Dave Bowman: “I know that you and Frank were planning to disconnect me, and I'm afraid that’s something I cannot allow to happen.” Extreme example, but here’s the point: Responsible AI augments human capabilities rather than replacing human decision-making entirely. Human involvement helps ensure that AI systems are used appropriately, responsibly, and in line with human values.  

7. SOCIETAL IMPACTS AND BENEFITS

Responsible AI considers the broader societal implications of AI systems. It involves assessing and mitigating potential negative consequences, such as job displacement, and maximizing the positive impact of AI on various aspects of society, including healthcare, education and sustainability.  

INDEPENDENT VERIFICATION AND VALIDATION (IV&V) IN RESPONSIBLE AI  

Past cases of AI misuse have highlighted the importance of proactive IV&V processes to prevent potential harms and ensure the responsible use of AI.  

Verification answers the question: “Are we building the product, service or system correctly to ensure compliance with a regulation, performance requirement, specification or imposed condition?” With independent verification (IV), this process is carried out by a third party who can provide outsider perspectives.

Validation answers the question: “Are we building the right product, service or system to meet the needs of our customers and other stakeholders?” IV&V is used as a management tool in industry, government and technology, to name a few, to achieve quality outcomes.

Here are some key lessons learned from AI mishaps and the failure to adhere to responsible AI principles: 

  • Bias and discrimination. Proactive IV&V can help identify and mitigate biases in training data, algorithms and decision-making processes.  

  • Privacy breaches. Proactive IV&V processes can ensure that privacy safeguards are in place, such as anonymization techniques, data access controls and encryption mechanisms. Independent verification helps identify vulnerabilities and gaps in data privacy.  

  • Adversarial attacks. Adversarial attacks involve intentionally manipulating AI systems to produce incorrect or malicious results. Independent validation can help identify vulnerabilities and weaknesses in AI models, making them more robust against adversarial attacks (see the sketch after this list).  

  • Systemic errors. AI systems can generate erroneous results due to flaws in algorithms or data. Proactive IV&V processes can catch potential pitfalls, ensuring the accuracy and reliability of AI systems.  

  • Lack of transparency and explanations. Independent validation can ensure that AI systems provide clear justifications for their outputs, reducing the risks of unexplainable or unjustifiable decisions.  

  • Unintended consequences. By conducting comprehensive IV&V, organizations can understand the potential ramifications of AI systems and take necessary precautions.  

  • Lack of accountability. By involving external experts and auditors, organizations can foster a culture of responsible AI use and accountability.  
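As a concrete illustration of the adversarial-attacks point above, here is a minimal robustness probe for a toy logistic-regression model, using a fast-gradient-sign-style perturbation computed by hand. It is a sketch of the kind of test an IV&V team might run, not a full adversarial evaluation, and the weights and input are random stand-ins for a real system under test.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression model; in a real probe these would come from the system under test.
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1
x = rng.normal(size=5)   # one input the model currently classifies
y = 1.0                  # its assumed true label

# For cross-entropy loss, the gradient with respect to the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM-style perturbation: a small step in the direction that increases the loss.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

# If a tiny, targeted nudge flips the prediction, the model is fragile at this point.
print("original prediction: ", sigmoid(w @ x + b) > 0.5)
print("perturbed prediction:", sigmoid(w @ x_adv + b) > 0.5)
```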

I believe IV&V is critical for applying responsible AI practices and for giving government agencies and Fortune 1000 companies confidence that the AI solutions they deploy meet their true mission, without unintended consequences.  

CHALLENGES TO APPLYING IV&V IN RESPONSIBLE AI  

It might seem that IV&V processes are a no-brainer. After all, AI and ML models can produce valid results only if they are built using valid data. But there are significant barriers to applying IV&V as part of responsible AI strategies.  

Let’s look at four of these challenges.  

  • Limited access to training and testing data. IV&V requires access to the underlying data used to train AI models to assess their fairness, accuracy and robustness. However, obtaining access to proprietary or sensitive training data can be a significant barrier, particularly when dealing with commercial or government AI systems. (As in other fields, should the government mandate third-party IV&V for AI/ML technology used on government programs?)

  • Lack of standardization. Because the field of responsible AI is relatively new, widely accepted standards and guidelines for IV&V are not readily available. This makes it challenging for auditors and evaluators to perform consistent and reliable IV&V across different AI systems.   

  • Lack of resources. Comprehensive IV&V processes require skilled personnel, computational power, and time. It’s a fact of life, though, that organizations have to deal with resource constraints, such as tight budgets, time pressures, and a shortage of AI expertise.  

  • Lack of transparency. Many AI models, such as deep learning and large language models, are complex and black-box in nature. That makes it difficult to verify and validate their behavior, decision-making processes and potential biases.  

IV&V CASE STUDIES  

IV&V for responsible AI is top-of-mind with government agencies. The Department of Energy’s Risk Management Playbook calls for leveraging it to “test all incoming data from external/non-authorized sources by profiling it as it enters the data ecosystem and before it is used by the models.” The 2023 National Defense Authorization Act instructs the Department of Defense to “incorporate a standardized, independent testing and validation process into the lifecycle of AI-enabled models, systems and applications.”  

Here are a few case studies of IV&V in action:  

  • Healthcare. IV&V studies have been conducted to evaluate the performance and potential biases in AI algorithms used for radiology image analysis and patient risk prediction.  

  • Face recognition. An MIT researcher investigated the accuracy of commercial gender classification algorithms across different demographic groups. Her IV&V study revealed significant disparities in performance based on race and gender.  

  • Criminal justice. An IV&V study on the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, which is used to predict recidivism risk in the criminal justice system, revealed possible racial biases in the algorithm's predictions, raising concerns about fairness and equity.  

BEST PRACTICES FOR IMPLEMENTING IV&V IN RESPONSIBLE AI  

By implementing rigorous IV&V processes, organizations can mitigate risks, enhance transparency and ensure that AI systems align with ethical, legal and societal standards. Here are a few of the best practices.  

  • Building a diverse team of experts. Besides knowledge of AI systems, the knowledge and skills required by an IV&V team include risk assessment and mitigation; validation methods (testing, simulations, analysis); test planning and execution; familiarity with industry standards; and reporting methods.  

  • Establishing IV&V processes. The optimal IV&V evaluation comprises four phases: plan, review, assess and report.   

  • Selecting the right IV&V methodology. The IV&V team needs not only to test against the stated performance requirements but also to go beyond them, devising test scenarios that reflect out-of-the-box thinking.  

  • Ensuring continuous IV&V oversight. Probably the most important best practice for IV&V is commitment: to the time it takes, the money it requires, and the realization that it’s not a one-and-done.  

SDV INTERNATIONAL implements mechanisms for ongoing monitoring and improvement of AI systems, including regular audits, feedback loops and impact assessments to identify and address any potential issues related to AI ethics, bias detection, or environmental impact. By continuously evaluating and refining AI solutions, we ensure that our projects align with our customers’ goals, and we minimize unintended consequences.  

THE FUTURE OF RESPONSIBLE AI   

A Mitre-Harris poll released in February 2023 found that only 48 percent of Americans believe AI is safe and secure, and 78 percent are very or somewhat concerned that AI can be used maliciously.  

Responsible AI raises awareness and creates solutions to mitigate the negative effects of artificial intelligence, so it is going to be a top priority for companies and agencies that wish to continue leveraging AI solutions. They will also be focused on optimizing machine learning algorithms to identify, measure and improve fairness and bias detection. In fact, Gartner predicts that all personnel hired for AI development and training work will soon have to demonstrate expertise in responsible AI.  

The AI for Social Good (AI4SG) movement aims to establish interdisciplinary partnerships centered around AI applications toward sustainability, inclusiveness, and responsibility. For example, SDV INTERNATIONAL develops AI solutions with a focus on energy efficiency. By optimizing algorithms, hardware and infrastructure, an organization can reduce the computational resources required for AI applications, minimizing their energy consumption and carbon footprint.  
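Model compression is one illustrative way to cut the compute and memory an AI application needs. The sketch below quantizes a weight matrix from 32-bit floats to 8-bit integers and compares the memory footprint; it stands in for the kind of optimization described above and is not drawn from any specific SDV INTERNATIONAL pipeline.

```python
import numpy as np

# A weight matrix standing in for part of a trained model.
weights = np.random.default_rng(0).normal(size=(512, 512)).astype(np.float32)

# Simple symmetric quantization to int8: scale so the largest weight maps to 127.
scale = np.abs(weights).max() / 127.0
weights_q = np.round(weights / scale).astype(np.int8)

# Dequantize to check how much accuracy the compression costs.
weights_dq = weights_q.astype(np.float32) * scale
max_error = np.abs(weights - weights_dq).max()

print(f"float32 size: {weights.nbytes / 1024:.0f} KiB")
print(f"int8 size:    {weights_q.nbytes / 1024:.0f} KiB")   # roughly 4x smaller
print(f"max round-trip error: {max_error:.4f}")
```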

We also foster collaboration and knowledge-sharing among our clients and stakeholders. By facilitating partnerships between organizations, academia and community groups, we promote responsible AI principles and contribute to the development of sustainable AI ecosystems.  

SDV INTERNATIONAL is an industry leader in integrating responsible AI practices into its work by promoting ethical, transparent and environmentally conscious AI solutions for its clients. We were proud to announce recently that SDV INTERNATIONAL was awarded a $249 million contract to provide state-of-the-art AI solutions to bolster the Defense Department’s capabilities and advance its strategic goals.  

Interested in learning more about how your AI initiatives can meet the standards of responsible AI? Contact SDV INTERNATIONAL today at our website, by calling 800-738-0669 or by emailing info@SDVInternational.com.