Can we completely trust AI? No, but we can monitor it


We love machines. We follow our navigation systems to get places, and we carefully weigh recommendations about travel, restaurants and even lifetime partners across apps and websites, because we know algorithms can spot opportunities we might like better than we ever could. But when it comes to final decisions about our health, our job or our kids, would you trust and entrust AI to act on your behalf? Probably not.

This is why we (FP) talked to Kavya Pearlman (KP), Founder & CEO of XRSI, the X-Reality Safety Intelligence group she put together to address and mitigate risks in the interaction between humans and exponential technologies. She is based on the West Coast of the US, of course. This is our exchange.

FP. What's going on with the advent of AI?

KP. For years, tech companies have normalized the idea that we must give up our most valuable asset, our data, in exchange for digital convenience. We always click "accept" without ever asking questions. Now, with the rise of wearables and AI-integrated systems, the stakes are much higher. It's not just about browsing history or location data anymore. Companies are harvesting insights from our bodies and minds, from heart rhythms and brain activity to emotional states. And still, almost no one is asking: How do we trust these systems with our most intimate data? What power do we have if we don't trust them? What are the indicators of trust we should demand?

This isn't just a technical challenge. It's a governance challenge and, at its core, a question of trust. Without transparency and accountability, AI risks amplifying hidden biases, eroding trust, and leaving people without recourse when systems get it wrong. Trust cannot exist if we don't know what data is being collected, how it's used, or how decisions are made.

FP. Can you really build a system that delivers that transparency and accountability?

KP. You can, if you want to. As an example, we just launched our Responsible Data Governance (RDG) standard. It provides concrete guardrails for AI and wearable technologies, including clear policies on what data can and cannot be used, protocols for managing AI outputs and ensuring their quality, explainability logs so decisions aren't hidden in a black box, alignment with global regulations to protect individuals across borders, and so forth.
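To make the idea of such guardrails concrete: the two mechanisms mentioned, policies on what data can and cannot be used, and explainability logs, could be sketched in a few lines of code. This is purely illustrative; the RDG standard describes these controls at the policy level, and every name here (the data categories, `DecisionLog`, `DecisionRecord`) is a hypothetical example, not part of the standard.

```python
# Illustrative sketch only. The RDG standard is a governance document, not a
# library; this hypothetical DecisionLog shows what "policy enforcement plus
# an explainability log" could look like in practice.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy: which wearable data categories a given service may use.
PROHIBITED_CATEGORIES = {"brain_activity", "emotional_state"}

@dataclass
class DecisionRecord:
    model_id: str
    inputs_used: list   # which data categories fed the decision
    output: str         # what the system decided
    rationale: str      # human-readable explanation, so it isn't a black box
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionLog:
    def __init__(self):
        self.records = []

    def record(self, model_id, inputs_used, output, rationale):
        # Enforce the data-use policy before any decision is accepted.
        banned = set(inputs_used) & PROHIBITED_CATEGORIES
        if banned:
            raise ValueError(f"Policy violation: prohibited data used: {banned}")
        rec = DecisionRecord(model_id, list(inputs_used), output, rationale)
        self.records.append(rec)
        return rec

# A compliant decision is logged with its inputs and rationale...
log = DecisionLog()
log.record("sleep-coach-v1", ["heart_rate"], "suggest earlier bedtime",
           "resting heart rate trended up over 7 nights")
```

A decision that tried to use a prohibited category (say, `emotional_state`) would be rejected before it was ever logged, which is the point: the policy check and the audit trail live in the same place, so an auditor can later ask what data drove each output and why.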

FP. Why should a company adopt these standards?

KP. They do have an incentive to do it: consumers and fans out there will know who's serious and who's not. Organizations that meet the standard can be easily identified. AI doesn't just need smarter models; it needs smarter governance. Because trust is not automatic. It is earned, sustained, and protected through responsible data governance. The question is no longer "can AI do this?" but rather "can we trust the way it's being done?".

FP. Trust is not automatic, and consumers' benefit, in line with human values, may not necessarily be the objective of this or that model. We need new standards, recognized across public and private enterprises. Groups like XRSI are working on it. The right time to understand, guide, label and measure is now.

By Frank Pagano


