- EQTY Lab’s AI Integrity Suite, built on Hedera, is being evaluated by Accenture for global scalability.
- Using Hedera ensures transparency and security in AI, boosting trust in AI models among regulators and users.
Growing scrutiny of artificial intelligence (AI) by global regulators has led consulting firm Accenture to test an emerging technology that could establish a standard method for regulatory compliance while keeping innovation responsible and safe.
The approach, developed by Los Angeles-based EQTY Lab, uses cryptography and blockchain to track the origins and characteristics of language models, providing crucial transparency into their inner workings.
Innovation and Compliance in the Age of AI
EQTY Lab’s AI integrity suite, currently under evaluation at Accenture’s AI Lab in Brussels, promises to be a scalable tool for thousands of clients, including many within the Fortune 100.
This step forward comes at a time when countries around the world are proposing ways to address the promise and risks of AI, as evidenced by the U.S. Department of Commerce’s announcement of a consortium that will advise the government’s new AI Security Institute, following a White House executive order on AI last year.
"@EQTYLab’s #AI Integrity Suite is being evaluated in @Accenture’s AI lab in Brussels to see if the software could be scaled to serve the firm’s thousands of clients, many of whom are in the Fortune 100."
Responsible AI from EQTY Lab – built on #Hedera. https://t.co/IJVAxZvMJZ
— Hedera (@hedera) February 10, 2024
The Mechanics of Generative AI and Compliance Risks
Unlike other forms of machine learning, the technical details of generative AI models can be opaque. When these tools are deployed in enterprises, they are often modified and tweaked, adding layers of complexity that can lead to compliance risks.
The Data Provenance Initiative, for example, found that many implementations of generative AI models contain inappropriately licensed data that is difficult to track.
The Role of the Blockchain in the Traceability of AI Models
EQTY Lab expects AI developers to use its software to create an immutable fingerprint of every component of a model. Its technology tracks the different parts of an AI model as it develops, converting that information into a cryptographic signature stored on a blockchain, where it is, in principle, tamper-proof.
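The core idea of fingerprinting a model's components can be sketched with standard hashing. This is a minimal illustration, not EQTY Lab's actual implementation: the artifact names, the manifest format, and the use of SHA-256 are all assumptions, and the step of anchoring the resulting digest on Hedera is left out.

```python
import hashlib
import json

def fingerprint_artifacts(artifacts: dict[str, bytes]) -> str:
    """Hash each named component, then hash a sorted manifest of the
    digests, so the fingerprint is deterministic and tamper-evident."""
    digests = {name: hashlib.sha256(data).hexdigest()
               for name, data in artifacts.items()}
    manifest = json.dumps(digests, sort_keys=True).encode()
    return hashlib.sha256(manifest).hexdigest()

# Hypothetical model components, stand-ins for real artifacts.
model = {
    "weights.bin": b"\x00\x01\x02",
    "training_config.json": b'{"epochs": 3}',
    "dataset_manifest.txt": b"dataset-v1",
}
print(fingerprint_artifacts(model))
```

Changing a single byte in any component yields a different fingerprint, which is the tamper-evidence property the article describes; the digest itself is what would be written to the ledger.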
Transparency and Security in AI Development
Once a model is registered with EQTY's technology, any subsequent modifications, such as additional layers of training or fine-tuning, continue to be tracked. Even benchmarks that measure a model's performance, or its levels of possible bias or toxicity, are recorded, providing a visible audit trail of all its features.
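An audit trail of this kind can be sketched as a hash chain, where each recorded event (a fine-tuning run, a benchmark result) is linked to the hash of the previous entry, so rewriting history breaks every later link. The event fields below are hypothetical examples, not EQTY's schema.

```python
import hashlib
import json

def append_event(chain: list[dict], event: dict) -> list[dict]:
    """Append an audit event linked to the previous entry's hash;
    editing any earlier record invalidates all subsequent links."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return chain + [record]

trail: list[dict] = []
trail = append_event(trail, {"step": "base-model", "digest": "abc123"})
trail = append_event(trail, {"step": "fine-tune", "dataset": "corp-v2"})
trail = append_event(trail, {"step": "benchmark", "toxicity": 0.02})

# Verify the chain: each record must reference the prior hash.
for i, rec in enumerate(trail):
    expected_prev = trail[i - 1]["hash"] if i else "0" * 64
    assert rec["prev"] == expected_prev
```

On a public ledger the same linking is provided by the chain itself; this sketch only shows why a recorded sequence of training and benchmark events becomes tamper-evident.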
Implications for the AI Industry
The EQTY Lab initiative represents a significant move toward transparency and accountability in AI development, offering a potential solution to the regulatory and ethical challenges facing the field.
By providing a method for tracking and verifying the integrity of AI models, this technology could play a crucial role in facilitating greater adoption of AI in various sectors, while ensuring that safety and ethical standards are maintained.
Hedera: Facilitating Trust and Transparency in AI
Hedera distinguishes itself in the blockchain sector with its focus on security, governance and energy efficiency, essential features for the deployment of responsible AI solutions.
By using Hedera, EQTY Lab ensures that AI models are developed and operated within a framework that prioritizes transparency and accountability, critical aspects for gaining the trust of both regulators and end users.