- Xayn, a decentralized, open-source, transparent search engine, aims to tackle the misuse of AI and preserve privacy by keeping all user data and interactions on the individual's device.
- With a focus on radical transparency, user control, and decentralization, Xayn incorporates IOTA's Tangle technology as a trust anchor, reinforcing the trustworthiness of its AI models.
Delivering Personalized Search with Privacy at the Core
Xayn strives to provide an optimal web search experience that protects users' privacy while personalizing results. It goes beyond traditional search functions, curating content based on your past interactions. Notably, your data, your interaction history, and the AI that anticipates your search preferences stay entirely under your control and on your device.
Really proud of what Xayn (https://t.co/DsUWqfuH6S) has built over the last 6 years by specializing in personalized semantic search.
— Dominik Schiener (@DomSchiener) May 24, 2023
This respect for privacy is rooted in our belief that privacy is a fundamental human right. While we harness AI for enhanced personalization, we go the extra mile to safeguard your privacy. We fight AI misuse by adopting a decentralized approach, entrusting control to the users, and promoting transparency through open-source technology.
The Journey to Ethical AI
Personalization is a common aspect of many AI applications, from search engines to video streaming platforms. Companies like Netflix, Google, Facebook, and TikTok rely on personalization or hyper-personalization to deliver better and faster content, thereby improving user experience. Personalization, at its core, seeks to understand user behaviors and interests to tailor content accordingly.
However, the data used for training personalization models can be misused, leading to potential manipulation of user beliefs and interests. This misuse of AI to reshape human thought processes poses a significant threat, not just on an individual level, but also on a societal scale, potentially undermining our democracies. We have seen trust issues stemming from AI misuse, as evidenced by the controversies surrounding TikTok and the 2016 US election.
As a responsible tech and AI company, we aim to counteract this alarming trend and ensure that our AI is truly controlled by the users.
Prioritizing User Control and Transparency
Our strategy for achieving user control and transparency is grounded in decentralization. We keep you, the user, at the helm of the experience: you give feedback on content relevance and control the AI's learning process. We also employ edge AI and federated learning to continually improve the quality of our AI models, leading to an effective and personalized search experience for all users.
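To make the federated-learning idea concrete, here is a minimal sketch of federated averaging, the aggregation step at the core of this approach. All names here are illustrative, not Xayn's actual API: each device trains on its own data and shares only a model update, which a coordinator combines into a new global model.

```python
import numpy as np

def federated_average(client_updates, client_weights):
    """Weighted average of model updates computed on users' devices.

    Only the numeric updates leave each device; the raw interaction
    data used to compute them never does.
    """
    total = sum(client_weights)
    return sum((w / total) * u for u, w in zip(client_updates, client_weights))

# Three hypothetical devices each train locally and share only update vectors.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
weights = [1, 1, 2]  # e.g. proportional to each device's number of examples
global_update = federated_average(updates, weights)
```

Systems like XayNet add masking and encryption on top of this basic averaging step so that the coordinator never sees any individual device's update in the clear.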
Xayn's AI is built on the open-source federated-learning framework XayNet, which is designed to be robust, resilient, and compliant with data-privacy regulations such as the EU's GDPR.
Leveraging IOTA’s Tangle Technology for Enhanced Trust
In our battle against AI misuse, we have integrated IOTA's Tangle technology into XayNet as a trust anchor. This integration strengthens the trustworthiness of the XayNet protocol and Xayn by recording the hash of the decrypted aggregated AI model computed in a federated learning round. This additional trust layer underscores our commitment to decentralization, user control, and AI transparency.
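The trust-anchor idea can be sketched as follows. This is a hypothetical illustration, not XayNet's actual implementation: after each federated-learning round, the coordinator computes a cryptographic digest of the aggregated model and publishes it to the Tangle, so any participant can later verify that the model they received matches the one that was anchored. The Tangle client call itself is omitted here.

```python
import hashlib

def model_digest(model_bytes: bytes) -> str:
    """SHA-256 digest of a serialized aggregated model.

    In a trust-anchor scheme like the one described above, this digest
    would be written to the Tangle after each federated-learning round,
    making the published model tamper-evident.
    """
    return hashlib.sha256(model_bytes).hexdigest()

# Coordinator anchors the digest after aggregation (Tangle attach omitted).
anchored = model_digest(b"serialized-aggregated-model")

# A participant verifies its downloaded model against the anchored digest.
assert model_digest(b"serialized-aggregated-model") == anchored
```

Because the Tangle entry is immutable, a mismatch between a downloaded model's digest and the anchored one immediately reveals tampering.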
Our integration with IOTA’s Tangle technology is just the start. We plan to decentralize XayNet’s coordinator while maintaining the original protocol’s integrity. This major transformation will align with the spirit of IOTA’s Tangle Coordicide, enhancing our framework’s legal compliance, robustness, and resiliency.
We are excited about the future possibilities of our IOTA integration and look forward to sharing more updates soon.