We love machines. We follow our navigation systems to get places, and we carefully weigh recommendations about travel, restaurants and even lifetime partners across apps and websites, because we know algorithms can spot opportunities we might like better than we ever could. But when it comes to final decisions about our health, our job or our kids, would you trust and entrust AI to act on your behalf? Probably not.
This is why we (FP) talked to Kavya Pearlman (KP), Founder & CEO of XRSI, the X-Reality Safety Intelligence group she built to address and mitigate risks in the interaction between humans and exponential technologies. She is based on the West Coast of the US, of course. This is our exchange.
FP. What's going on with the advent of AI?
KP. For years, tech companies have normalized the idea that we must give up our most valuable asset, our data, in exchange for digital convenience. We always click "accept" without ever asking questions. Now, with the rise of wearables and AI-integrated systems, the stakes are much higher. It's not just about browsing history or location data anymore. Companies are harvesting insights from our bodies and minds, from heart rhythms and brain activity to emotional states. And still, almost no one is asking: how do we trust these systems with our most intimate data? What power do we have if we don't trust them? What are the indicators of trust we should demand?
This isn't just a technical challenge. It's a governance challenge and, at its core, a question of trust. Without transparency and accountability, AI risks amplifying hidden biases, eroding trust, and leaving people without recourse when systems get it wrong. Trust cannot exist if we don't know what data is being collected, how it's used, or how decisions are made.
FP. Can you really build a system that delivers that, transparency and accountability?
KP. You can, if you want to. As an example, we just launched our Responsible Data Governance (RDG) standard. It provides concrete guardrails for AI and wearable technologies: clear policies on what data can and cannot be used, protocols for managing AI outputs and ensuring their quality, explainability logs so decisions aren't hidden in a black box, alignment with global regulations to protect individuals across borders, and so forth. A sketch of what such mechanisms might look like follows.
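To make two of those guardrails concrete, here is a minimal sketch in Python of what a data-use policy plus an explainability log could look like in practice. Everything here is an illustrative assumption on our part: the category names, the DecisionLogEntry record and the log_decision function are hypothetical, not taken from XRSI's actual RDG specification.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical data-use policy: which data categories a system may and
# may not process. Category names are illustrative, not from the RDG spec.
ALLOWED_CATEGORIES = {"step_count", "session_duration"}
PROHIBITED_CATEGORIES = {"brain_activity", "emotional_state"}


@dataclass
class DecisionLogEntry:
    """One auditable record: what data fed a decision and why it was made."""
    timestamp: str
    model_version: str
    input_categories: list[str]
    decision: str
    rationale: str  # human-readable explanation, so the decision isn't a black box


def log_decision(model_version: str, input_categories: list[str],
                 decision: str, rationale: str) -> DecisionLogEntry:
    """Check the policy, then record an explainable decision."""
    # Enforce the policy before anything is recorded or acted on.
    violations = set(input_categories) & PROHIBITED_CATEGORIES
    if violations:
        raise PermissionError(f"Prohibited data categories used: {sorted(violations)}")
    unknown = set(input_categories) - ALLOWED_CATEGORIES
    if unknown:
        raise PermissionError(f"Categories not covered by policy: {sorted(unknown)}")
    return DecisionLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_categories=input_categories,
        decision=decision,
        rationale=rationale,
    )


# A compliant decision gets logged; one touching brain activity would be refused.
entry = log_decision("wellness-model-1.2", ["step_count"],
                     "suggest_rest_day", "7-day activity trend below baseline")
print(entry)
```

The design point, under these assumptions, is that the policy check and the rationale live in the same code path: a decision that cannot be explained, or that touches prohibited data, simply never gets recorded or acted on.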
FP. Why should a company adopt these standards?
KP. They have the incentive to do it: consumers and fans will know who's serious and who's not, and organizations that meet the standard can be easily identified. AI doesn't just need smarter models; it needs smarter governance, because trust is not automatic. It is earned, sustained, and protected through responsible data governance. The question is no longer "can AI do this?" but rather "can we trust the way it's being done?".
FP. Trust is not automatic, and consumers' benefit, in line with human values, is not necessarily the objective of any given model. We need new standards, recognized across public and private enterprises. Groups like XRSI are working on it. The right time to understand, guide, label and measure is now.
By Frank Pagano