As I prepared to speak with my fellow ethical AI panelists for the April 24, 2023 EAIGG discussion “New Wave of Data Centric AI Startups”, I thought further about how to articulate the mandate and opportunity presented by AI to solve critical problems faced by humanity. We are in the infancy of enterprise AI. The convergence of data enablement, compute power, MLOps tools such as no-code ML, and connected devices has accelerated the AI pipeline in the absence of the best practices and societal guardrails needed to ensure that what is created is effective and good for society. We must begin to build these public and private guidelines and educate the people who will implement them. We live in a 280-character world, so the message must be clear and actionable. At tomtA.ai, we routinely see confusion in the marketplace around synthetic data, federated learning, and other purveyors of pseudo privacy-enhancing technologies (PETs) claiming safety and privacy when the facts don’t support the assertions. What does ethical AI mean in a world in which people bend appropriate standards, or struggle to understand them?
Ethical AI requires integrity of purpose, process and validation. We must act with integrity to apply AI for good; foster a public/private partnership to establish societal and governmental data privacy and ethics regulations; and deliver precision in the technology workflow and data pipeline output so that we achieve a trusted AI lifecycle. Lastly, we must validate that the AI indeed works to achieve this purpose.
Purpose. Much of the ethical AI infrastructure has not yet been built, and we must confront the trend of emotion over science, opinion over truth and aggressive growth over impactful value to society. We have big problems to solve as a species to survive the existential threat of climate change. Yet climate change also presents our biggest opportunity as a technology community: to create solutions in every industry that improve quality of life, reduce greenhouse gas emissions and foster sustainable models of living. It is a veritable greenfield for technology innovators. For those of us blessed with a career in disruptive innovation in enterprise technology, the moral imperative is to build products that address these significant problems. It’s both an ethical and a profitable decision. But how do we build this community and empower it to do so?
Process. At tomtA.ai, we empower data and AI professionals at the top of the ethical AI funnel, which demands quality data with fidelity to ensure that AI models are precise and accurate. Achieving data fidelity is a challenge in a GDPR world because data governance stakeholders are reluctant to share data when sharing is perceived as violating data privacy laws. And of course, ethical AI requires strict adherence to privacy standards to protect people from the harm caused by fraud, theft, bias and other abuses. Privacy loss imposes significant hard and soft costs on society, and its ill effects are widespread. Data privacy failures can have implications for democracy and national security: if political campaigns or foreign actors are able to access and manipulate personal data, they can undermine the integrity of elections and democratic institutions. As a company, we provide the freedom and empowerment to safely share precise data, which will breed innovation we haven’t yet imagined; AI is the new electricity, and data is its key ingredient. Without precise data we have no AI. Impactful AI requires safe data sharing that exceeds the most stringent data privacy laws and fosters trust among all data-owner stakeholders.
The best data privacy protections are found in the EU, while the US has continued to forfeit its leadership in key areas such as climate change, ethical AI and data privacy. Clear standards for privacy, accuracy and utility – just to name a few – are necessary for enterprises to properly scrutinize vendor claims and technical capabilities. In the US, simple data de-identification techniques are accepted as adequate protection, whereas GDPR rightly refuses to treat de-identified data as anonymous. In the words of Dr. Cynthia Dwork, the accomplished computer scientist, “de-identified data is neither data nor de-identified.” Yet most US companies foolishly stick to this inferior privacy and data enablement strategy, which will yield only imprecise ML models. Because the EU has GDPR – a much more stringent data protection policy – it will solve the privacy/precision paradox much faster and ultimately develop more impactful AI to address serious societal problems in health, habitat, mobility, energy, manufacturing et al. The good news is that we can shed ineffective and imprecise legacy data workflows in favor of those that deliver both precision and privacy.
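To make the privacy/precision distinction concrete, here is a minimal, illustrative sketch. It is not tomtA.ai's product or pipeline, and it is not a compliance recipe; the record fields, epsilon value and helper names are hypothetical, chosen only for the example. It contrasts naive de-identification (hashing a name while quasi-identifiers remain) with a formally quantifiable guarantee, differential privacy, the framework Dr. Dwork helped create.

```python
import hashlib
import random

# Naive "de-identification": hash the direct identifier and keep everything else.
# Quasi-identifiers (zip code, birth year) remain, so records can often be
# re-linked to public data sets -- the weakness behind Dwork's critique.
def deidentify(record):
    masked = dict(record)
    masked["name"] = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
    return masked

# Differentially private count via the Laplace mechanism: the difference of two
# Exp(epsilon / sensitivity) draws is Laplace noise with scale sensitivity / epsilon.
# Smaller epsilon means stronger privacy and noisier (less precise) answers --
# the privacy/precision trade-off, expressed as a measurable parameter.
def dp_count(records, predicate, epsilon=0.5, sensitivity=1.0):
    true_count = sum(1 for r in records if predicate(r))
    noise = (random.expovariate(epsilon / sensitivity)
             - random.expovariate(epsilon / sensitivity))
    return true_count + noise

# Hypothetical records, for illustration only.
patients = [
    {"name": "Alice", "zip": "02139", "birth_year": 1984, "diagnosis": "flu"},
    {"name": "Bob",   "zip": "02139", "birth_year": 1990, "diagnosis": "asthma"},
]

print(deidentify(patients[0]))                                 # still carries quasi-identifiers
print(dp_count(patients, lambda r: r["diagnosis"] == "flu"))   # noisy, but provably private
```

The point is not this specific mechanism; it is that a guarantee like differential privacy comes with a measurable parameter that enterprises and regulators can scrutinize, whereas “de-identified” is merely an assertion.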
Validation. Validating ethical AI requires establishing public and private workflows to ensure that the AI system behaves fairly and ethically, both in public-facing scenarios and in more private or internal settings. We must establish standards that protect society without threatening the innovative spirit that has improved quality of life over the past few decades. In addition, our process must provide material feedback loops that keep our models accurate and properly impactful, and that alert us when they drift and produce a harmful effect. These standards include:
- Establish clear ethical principles that the AI system must adhere to, such as fairness, transparency, accountability, and privacy.
- Define specific criteria for validating the ethical principles, such as accuracy, bias, interpretability, and explainability. These criteria should be measurable and testable (see the sketch after this list).
- Test the AI system to determine whether it meets the defined ethical principles and validation criteria. These tests should be conducted in both public-facing and private settings to ensure that the system behaves consistently across different scenarios.
- Involve a diverse range of stakeholders, including end-users, subject matter experts, and ethicists, in the validation process. This will help to ensure that the AI system takes into account a broad range of perspectives and considerations.
- Use the results of the validation process to identify areas for improvement and iterate on the AI system to ensure that it continues to meet ethical principles and validation criteria.
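As one illustration of what “measurable and testable” can mean in practice, the sketch below checks a single fairness criterion, the gap in positive-prediction rates between groups (demographic parity difference), against a threshold. The threshold, group labels and field names are assumptions made for the example, not a standard this article endorses, and the check is one of many that a real validation workflow would need.

```python
from typing import Sequence

# One measurable validation criterion: demographic parity gap, i.e. the difference
# in positive-prediction rates between groups. The 0.10 threshold is an assumed
# example value, not a regulatory standard.
def demographic_parity_gap(predictions: Sequence[int], groups: Sequence[str]) -> float:
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

def validate_fairness(predictions, groups, max_gap=0.10):
    gap = demographic_parity_gap(predictions, groups)
    # A failing check should block release and feed the iteration step above.
    return {"metric": "demographic_parity_gap", "value": round(gap, 3), "passed": gap <= max_gap}

# Hypothetical model outputs for two stakeholder groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(validate_fairness(preds, groups))  # {'metric': 'demographic_parity_gap', 'value': 0.5, 'passed': False}
```

Run in both public-facing and internal settings, and re-run on fresh data over time, this kind of automated check doubles as the drift alert described above.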
Ethical AI requires integrity of purpose, process and validation. It requires a commitment to develop AI for good and to ensure that the process delivers upon that promise. It starts with accurate, high-fidelity data and compliance with data privacy laws, which help ensure that the use of AI technology is fair, transparent, and does not harm individuals or groups. By ensuring accurate data and compliance with data privacy laws, we can build AI systems that are trustworthy and beneficial to society.