It was supposed to be a routine meeting. A Calendly invite, a Zoom link, a familiar face on the other end of the call: the CEO of a company the target knew well. The conversation was smooth. The technology was convincing. And by the time the victim ran a ClickFix “troubleshooting command” to fix an alleged audio glitch, North Korean hackers had already burrowed into the system.
No lock was picked. No code was cracked. A deepfake did the work.
According to Google’s Mandiant threat intelligence team, a victim targeted by the North Korea-linked group UNC1069 reported that during a video call they were presented with what appeared to be a deepfake of a CEO from another cryptocurrency company. The goal, as with every operation in this growing playbook, was access to keys: the cryptographic credentials that authorize the movement of funds from multisignature wallets. The same wallets the industry once believed were impenetrable.
They are not.
The Multisig Myth: Why “Unhackable” Was Always a Human Problem
A multisig wallet (short for multi-signature) requires multiple independent approvals before any transaction can execute. Think of it as a vault with three combination locks, each held by a different person in a different location. No single compromised individual, no stolen device, no single point of failure can drain it. That was the theory.
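The approval rule behind that theory is simple to state. A minimal sketch, in illustrative Python (the names and the 2-of-3 policy here are hypothetical, not any real wallet’s API):

```python
# m-of-n approval rule behind a multisig wallet: a transaction runs only
# when enough DISTINCT authorized signers approve the SAME transaction digest.
from dataclasses import dataclass

@dataclass(frozen=True)
class Approval:
    signer: str    # identity of the key holder
    tx_hash: str   # digest of the transaction this signer approved

def can_execute(approvals: list[Approval], tx_hash: str,
                authorized: set[str], threshold: int) -> bool:
    # Count each authorized signer at most once, and only if they
    # approved exactly this transaction digest.
    signers = {a.signer for a in approvals
               if a.tx_hash == tx_hash and a.signer in authorized}
    return len(signers) >= threshold

signers = {"alice", "bob", "carol"}   # three key holders
tx = "0xabc123"                       # digest awaiting approval
votes = [Approval("alice", tx), Approval("bob", tx)]
print(can_execute(votes, tx, signers, threshold=2))  # 2-of-3 met: True
```

The rule is sound; the attacks that follow never break it. They change what the signers believe `tx_hash` refers to.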
On February 21, 2025, Bybit fell victim to the largest crypto heist ever recorded when attackers siphoned roughly 401,000 ETH worth about $1.5 billion during a routine cold-to-warm wallet transfer. Rather than exploiting code, the hackers used sophisticated phishing and social engineering to compromise the multisig signers, tricking them into approving a malicious transaction.
The method was elegant in its brutality. A developer at Safe{Wallet}, the third-party multisig platform Bybit relied on, fell for a social engineering attack, and his workstation was compromised.
The attackers then stole AWS session tokens, bypassed multi-factor authentication controls, and gained access to Safe{Wallet}’s AWS account. From there, they manipulated the user interface that Bybit employees saw, replacing legitimate code with instructions to redirect funds to wallets controlled by North Korean operatives.
When Bybit’s signers approved the transaction, they believed they were signing what the screen showed them. They were wrong. The FBI officially attributed the attack to North Korea’s “TraderTraitor” operation. It remains the largest single crypto theft in history.
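The failure mode here is that signers trusted the interface instead of the transaction itself. The standard countermeasure is an out-of-band check: recompute the transaction digest from the raw fields on a separate trusted device and compare it with what the signing interface displays. A minimal sketch, with hypothetical field names and a simplified hashing scheme (not Safe{Wallet}’s actual encoding):

```python
# Out-of-band defense against UI spoofing: derive the transaction digest
# independently and refuse to sign if it differs from what the UI shows.
import hashlib
import json

def tx_digest(tx: dict) -> str:
    # Canonical serialization so every device derives the same digest
    # from the same fields, regardless of key order.
    canonical = json.dumps(tx, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# What the signer intends vs. what a compromised UI actually submits.
intended  = {"to": "0xWarmWallet", "value": "401000 ETH", "data": "0x"}
tampered  = {"to": "0xAttacker",  "value": "401000 ETH", "data": "0x"}

# Digests differ, so an independent check would have flagged the swap.
print(tx_digest(intended) == tx_digest(tampered))  # False: refuse to sign
```

The point is not this particular serialization but the separation of duties: the device that displays the transaction must not be the only device that defines it.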
But Bybit was just the opening act.
Six Months, One Score: The Drift Protocol Anatomy
If the Bybit hack revealed that multisig could be broken through supply chain compromise, the April 2026 Drift Protocol attack proved something even more disturbing: that human trust, carefully cultivated over months, is more valuable to an attacker than any zero-day exploit.
On April 1, 2026, attackers drained approximately $285 million in user assets from Drift Protocol, the largest decentralized perpetual futures exchange on Solana, in roughly 12 minutes, with most stolen funds bridged to Ethereum within hours. The planning began in autumn 2025.
Attackers posed as a quantitative trading firm. Beginning in Fall 2025, they initiated face-to-face engagements with specific Drift contributors at major cryptocurrency conferences, demonstrated technical fluency, provided verifiable professional backgrounds, and deposited more than $1 million of their own capital into an Ecosystem Vault as cover. Six months of patient relationship-building. Six months of being the most credible, professional counterparty in the room.
Then came the kill shot. The attacker used social engineering to induce Drift Security Council multisig signers into pre-signing transactions that appeared routine but carried hidden authorizations for critical admin actions. On March 27, Drift migrated its Security Council to a new 2/5 threshold configuration with zero timelock, eliminating the delay that would have allowed detection and intervention.
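Why the zero timelock mattered is worth spelling out. A timelock queues an admin action for a mandatory delay before it becomes executable, giving observers a window to detect and veto it. A minimal sketch of that rule (illustrative, not Drift’s actual governance code):

```python
# A timelocked admin action queued at time t only becomes executable at
# t + delay, unless someone vetoes it during the window. A zero delay
# removes the window entirely.

def executable(queued_at: float, now: float, delay: float,
               vetoed: bool) -> bool:
    return not vetoed and now >= queued_at + delay

# With a 48-hour timelock, a malicious action queued at t=0 cannot run
# one second later; responders have the whole window to veto it.
print(executable(queued_at=0, now=1, delay=48 * 3600, vetoed=False))  # False

# With zero timelock, the same action runs the moment it is queued.
print(executable(queued_at=0, now=1, delay=0, vetoed=False))          # True
```

Pre-signed transactions plus a zero-delay configuration meant the malicious admin actions executed before anyone could react.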
TRM Labs assessed the hack was “likely perpetrated by North Korean hackers” based on on-chain staging patterns, noting that post-hack laundering exceeded the pace of the Bybit exploit in both speed and transaction size. Ledger CTO Charles Guillemet drew the parallel directly: “Smart contracts held up. The real targets now are humans: social engineering and opsec weaknesses more than code exploits.”
That sentence is worth reading twice. The blockchain worked as intended. The cryptography was sound. The humans were the vulnerability.
The Deepfake Layer: When AI Puts a Known Face on the Attack
What Bybit and Drift demonstrated through traditional social engineering, the next generation of attacks is now automating with AI, and the upgrade in capability is alarming.
From February to April 2026, the Security Alliance tracked and blocked 164 domains associated with DPRK threat actor UNC1069, which operates multi-week, low-pressure social engineering campaigns across Telegram, LinkedIn, and Slack, impersonating known contacts or credible brands before delivering a fraudulent Zoom or Microsoft Teams meeting link. The fake meeting UI is browser-based, built on legitimate SDKs, and visually indistinguishable from real Zoom or Teams meetings.
Inside those fake meetings, deepfakes run the con. Kaspersky, tracking the same campaign under the name GhostCall, documented how webcam footage from previous victims was recorded, uploaded to attacker-controlled infrastructure, and reused to deceive new targets, making them believe they were participating in a genuine live call. When the video of a familiar CEO plays on your screen, your brain doesn’t run a verification protocol. It trusts.
A darknet actor is now marketing a tool that combines deepfake verification with real-time voice cloning, allowing attackers to mimic both appearance and speech simultaneously, creating a synthetic identity capable of passing institutional-grade verification calls.
The tool doesn’t need to perfectly replicate a known individual. It only needs to be convincing enough for someone under mild cognitive load, the exact conditions of a business meeting, to approve one transaction.
The scale of the broader fraud wave is staggering. Deloitte’s Center for Financial Services projects that AI-enabled fraud losses in the United States could reach $40 billion by 2027, up from $12.3 billion in 2023, a compound annual growth rate of 32%. North Korean threat groups were responsible for $2.02 billion in stolen digital assets in 2025 alone, a 51% increase from the prior year, with Chainalysis noting that scam clusters tied to AI service providers show significantly higher operational efficiency than those without.
AI isn’t just helping attackers move faster. It’s making every operation more convincing, more targeted, and cheaper to run.
What Comes Next: The Vault That Cannot Be Locked
The industry’s standard answer to treasury security has always been the same: more signatures, colder wallets, stricter access controls. The lessons of 2025 and 2026 suggest that answer is insufficient when the attack surface is the judgment of individual humans.
Kraken’s chief security officer Nick Percoco captured the problem plainly: “Attackers aren’t breaking in, they’re being invited in.”
The Bit Gazette’s coverage of wash trading and market manipulation in unregulated exchanges points to the same structural truth: in crypto, the most dangerous vulnerabilities are rarely technical — they are behavioral and institutional. The Bit Gazette also reported on how digital dollars are becoming a weaponized financial instrument, a trend that makes treasury security not just a corporate concern but a geopolitical one.
As institutions pour trillions into crypto infrastructure (Bitmine holding 4.8 million ETH, BlackRock managing $130 billion in crypto products, tokenized real-world assets crossing $27 billion), the prize for successfully compromising a multisig signer grows proportionally. The bigger the treasury, the more valuable every human in the approval chain becomes as a target.
A joint paper from the American Bankers Association and the Financial Services Sector Coordinating Council put it starkly: “Deepfakes are not a sector-specific problem but a national problem: the same organized criminals and hostile nation-states exploiting the same core deficiencies in identity and authentication infrastructure.”
The response the industry needs isn’t more keys. It’s cryptographic proof that the person holding a key is actually who they claim to be in real time, verified by something a deepfake cannot fake: a private key they physically possess. Until that infrastructure exists at scale, every multisig treasury in crypto has a human layer. And the human layer, as North Korea has now demonstrated twice in fourteen months, is the layer that bleeds.
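What that real-time verification looks like in principle is a challenge-response protocol: the verifier sends a fresh random nonce, and only the holder of the private key can produce a valid response to it, which a replayed deepfake video cannot. A minimal sketch; a real deployment would use asymmetric signatures (e.g. Ed25519), and HMAC over a shared secret is used here purely as a stdlib stand-in for “proof you hold the key”:

```python
# Challenge-response proof of key possession. The fresh nonce defeats
# replay: a deepfake can reuse old video, but not a response to a
# challenge that did not exist yet.
import hashlib
import hmac
import os
import secrets

def sign_challenge(key: bytes, challenge: bytes) -> bytes:
    # Only a party holding `key` can produce this tag for a fresh challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign_challenge(key, challenge), tag)

key = os.urandom(32)                  # private key on the signer's device
challenge = secrets.token_bytes(16)   # fresh nonce issued by the verifier
tag = sign_challenge(key, challenge)

print(verify(key, challenge, tag))              # True: caller holds the key
print(verify(os.urandom(32), challenge, tag))   # False: impostor, wrong key
```

The security reduces to physical possession of the key material, which is exactly the property a synthetic face and cloned voice cannot forge.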
The vault was never the weakest link. The person standing in front of it always was.