Can AI Replace Human On-Chain Analysts in Crypto?


Artificial intelligence has reshaped multiple industries, and everywhere it goes, the same question follows: Will it replace humans? In crypto, its impact is already visible, from AI-driven trading bots to agentic trading systems.

However, Alex Svanevik, CEO and co-founder of Nansen, argues that AI is not a substitute for human judgment, but rather an augmentation. In an exclusive interview with BeInCrypto, Svanevik explores this shift in depth and outlines what lies ahead for AI-powered analysis. 

The AI Debate in Crypto: Nansen CEO Argues for Augmentation, Not Replacement 

On January 21, Nansen announced the launch of its AI-powered on-chain trading functionality. This marks a major shift from a pure analytics platform to a unified insight-and-execution product. 


Built on its proprietary dataset of more than 500 million labeled wallets, the new release allows users to manage portfolios, interpret live on-chain signals, and get data-backed suggestions. It also enables users to execute trades directly within Nansen.

“Trained and evaluated on Nansen’s proprietary dataset, Nansen AI consistently outperforms leading AI products on benchmarks designed for on-chain analysis and trading use cases. This ensures the insights it delivers are not only more accurate, but also directly actionable for traders/investors, turning agentic intelligence into a practical trading edge,” the announcement read.

Furthermore, the launch unlocks what Nansen calls “vibe trading,” which the company describes as a more intuitive way to move from insight to on-chain execution without switching tools.

As AI takes on more analytical work, the role of human analysts comes into question. Svanevik said AI excels at processing scale, allowing it to analyze hundreds of millions of wallets, track cross-chain flows, and identify patterns that would be difficult for humans to detect. 

However, he emphasized that decision-making remains with users, who ultimately guide the process by asking the right questions and approving actions.

“The boundary isn’t fixed. It shifts as AI gets better at reasoning and as on-chain data becomes richer. But the goal isn’t to replace judgment. It’s to free humans from grunt work so they can focus on higher-order decisions,” he stated.

What Makes Analysis Credible in an AI-First Crypto Market?

Research suggests that increased reliance on artificial intelligence tools can be linked to diminished critical thinking skills. In cryptocurrency markets, where traders must navigate extreme volatility and high-risk assets, the stakes are even higher.


However, Svanevik offered a different view. He argued that “good AI” surfaces more signals, pushing users to think more critically about execution rather than less.

“The real systemic risk is when everyone runs the same playbook. That’s not unique to AI—it happens with human analysts too. The answer is diversity: diverse models, diverse strategies, diverse data interpretations. That’s why we’re building tools that empower individual decision-making, not a single oracle everyone follows,” he added.

The executive also emphasized that neither AI nor human analysts should be trusted blindly. According to him, what matters is whether the analysis consistently holds up over time. 

When it comes to credibility in an AI-first market, the CEO explained:

“Credibility in an AI-first era comes from measurement and repetition, not from a name or a Twitter following. AI has the advantage that it can be tested relentlessly, at scale, and against reality in a way individual humans simply can’t.”

He shared that the most straightforward test is practical. Svanevik suggested that users should ask questions that matter to them and judge whether the responses are grounded, useful, and actionable, noting that users tend to be effective judges of quality.


“Long term, trust will shift away from individual analysts toward platforms that can prove, continuously, that they surface signal and reduce noise. That’s the bar we hold ourselves to,” Svanevik told BeInCrypto.

Why AI Can Analyze On-Chain Data but Can’t Replace Human Conviction

Human analysts often align trading decisions with on-chain metrics, price data, and other signals through judgment and contextual interpretation. AI systems, by contrast, rely on patterns learned from past data.

When asked whether AI could eventually develop a similar form of judgment, Svanevik said it is likely, though not in a human sense.

He explained that AI would develop its own form of contextual reasoning, which he believes could be more effective at integrating live data across a far broader set of variables than any human could track.

“The path there is through better training data, longer context windows, and feedback loops from real execution. We’re already seeing this with our agent. It doesn’t just pattern-match—it reasons over behavioral data in real time. That’s early-stage judgment. It’ll get sharper as the models evolve and as we compound learnings from millions of onchain interactions,” Svanevik mentioned.


However, he also identified one aspect of on-chain analysis that he believes AI will never fully replace: taking responsibility for decisions under uncertainty.

Svanevik pointed out that while AI can surface patterns, probabilities, and potential scenarios, and assess what has happened or might happen based on data, it cannot determine an individual’s risk tolerance or value judgments, nor can it take accountability for decisions when outcomes turn negative.

“On-chain analysis ultimately feeds into real-world actions: deploying capital, backing teams, making public calls. Someone has to own those decisions. That’s a human role,” the executive remarked.

He stressed that, regardless of how advanced AI models become, credibility will continue to rest with humans in matters of judgment, accountability, and conviction. AI may inform decisions, he said, but humans ultimately make them and bear the consequences.

“Deciding what matters. AI can tell you what’s happening on-chain, but it can’t tell you what you should care about. That’s taste. That’s conviction. That’s human,” Svanevik commented.

Ultimately, Svanevik sees AI as a powerful enabler rather than a decision-maker. While AI can surface patterns, probabilities, and insights at unprecedented scale, human judgment remains central for risk, accountability, and conviction. 

As AI-driven analysis becomes more prevalent, trust will increasingly rest with platforms that can continuously prove the quality of their insights. At the same time, humans remain responsible for deciding what matters and standing behind the outcomes.

