I kept seeing the Synthetix postmortem circulating last month: a botched oracle feed briefly put over a billion dollars in synthetic assets at risk, and the exposure was unwound only because the team talked the attacker into returning the funds, which tells you something about how much of this ecosystem runs on goodwill and luck. I was at my desk at Mastercard, halfway through a design doc for a smart contract language we were building internally, and the contrast was hard to ignore. Synthetix was an oracle problem, bad data in, bad results out, the kind of thing our language wouldn’t have caught either. But it landed in my feed right next to a thread cataloging every other Solidity exploit from the past three years, and those were a different story. Reentrancy, integer overflow, gas manipulation: vulnerabilities that trace directly back to what the language lets you write. We were spending our days deliberately removing expressiveness from a smart contract language, because in enterprise finance, the freedom to write anything is the freedom to lose everything.

Four years into Ethereum’s life, the pattern is unmistakable. The same exploit classes keep surfacing: different teams, different auditors, different contracts, same vulnerabilities. These failures trace back to a fundamental design choice baked into Solidity from the beginning.

The Turing Completeness Trap

Solidity is Turing complete: it can express any computation a general-purpose programming language can. In the early days of Ethereum, this was the selling point, a world computer capable of running arbitrary programs, enforced by consensus. Turing completeness comes with a mathematical consequence that most smart contract developers never think about until it costs them money.

The halting problem, proven undecidable by Turing in 1936, means no program can reliably determine whether another program will finish executing or run forever. For smart contracts, the implication is severe: static analysis can never fully guarantee that a Turing-complete contract will terminate, that it won’t consume unbounded resources, or that its control flow won’t be hijacked mid-execution. Gas limits are Ethereum’s answer, a runtime circuit breaker that halts execution when computation exceeds a threshold. But gas is a band-aid applied at runtime, after the contract is already deployed and immutable. It doesn’t prevent the bug; it caps how long the bug can run before the transaction reverts.
This distinction matters enormously. A compiled, non-Turing-complete language can reject entire categories of dangerous programs at compile time, before they ever touch a blockchain. A Turing-complete language with gas limits can only catch them at runtime, transaction by transaction, after real assets are at stake.
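To make the runtime-versus-compile-time distinction concrete, here’s a toy Python sketch of a gas meter. This is deliberately not EVM semantics, just the shape of the idea: a meter can only cut off a runaway computation after it has started burning resources; it cannot reject the program upfront.

```python
# Illustrative sketch (not EVM semantics): a runtime gas meter stops a
# runaway computation mid-flight; it never prevents deploying it.

class OutOfGas(Exception):
    pass

class GasMeter:
    def __init__(self, limit):
        self.remaining = limit

    def charge(self, amount):
        self.remaining -= amount
        if self.remaining < 0:
            raise OutOfGas("out of gas: transaction reverts")

def buggy_contract(meter):
    # A non-terminating loop: no static analyzer can decide this in
    # general (halting problem), so the meter is the only backstop.
    while True:
        meter.charge(1)

try:
    buggy_contract(GasMeter(limit=10_000))
except OutOfGas as e:
    print(e)  # the bug still ran 10,000 steps before being cut off
```

The compile-time alternative is to make `while True` unwritable in the first place, which is exactly the trade the rest of this post is about.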

A Pattern Written in Stolen Ether

Ethereum’s exploit history makes the case against Turing completeness in financial infrastructure better than any whitepaper could. Each incident exploits a different surface, but they all share a root cause: the language allowed the developer to express something dangerous, and developers, given the opportunity, did exactly that.

The DAO hack in June 2016 was the original sin. A reentrancy vulnerability let an attacker recursively call back into the withdrawal function before balances updated, draining 3.6 million ETH, roughly $60 million at the time. The fallout was severe enough that the Ethereum community hard-forked the entire chain to reverse the theft, splitting the network into ETH and Ethereum Classic. Reentrancy exploits a combination of external calls, mutable state, and unrestricted control flow that Solidity’s Turing-complete design makes easy to write and hard to prevent, since the compiler has no way to restrict the call graph at deploy time.
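The mechanics are easier to see in a toy model than in prose. Here’s a Python simulation (hypothetical names, not The DAO’s actual code) of the fatal ordering: external call first, balance update after, so a malicious callback can withdraw repeatedly against stale state.

```python
# Toy model of reentrancy: the vault pays out BEFORE zeroing the
# balance, so a re-entering callback sees the old balance every time.

class VulnerableVault:
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who, on_receive):
        amount = self.balances[who]
        if amount > 0:
            on_receive(amount)        # external call FIRST...
            self.balances[who] = 0    # ...state update AFTER: too late
            self.total -= amount

vault = VulnerableVault()
vault.deposit("victim", 90)
vault.deposit("attacker", 10)

stolen = []
def attack(amount):
    stolen.append(amount)
    if vault.total - sum(stolen) > 0:  # re-enter against stale balance
        vault.withdraw("attacker", attack)

vault.withdraw("attacker", attack)
print(sum(stolen))  # 100: a 10-unit deposit drained the victim's 90 too
```

Swapping the two lines in `withdraw` (checks-effects-interactions) fixes this instance, but nothing in a Turing-complete language forces that ordering.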

Around the same time, the GovernMental contract showed the second failure mode. An unbounded loop in its payout logic exceeded the block gas limit, permanently locking roughly 1,100 ETH. The contract worked fine with a small number of participants and became permanently unexecutable once the list grew beyond what a single block’s gas could process. Bounded loops, where the iteration count is known at compile time, would also have prevented this. We went further and removed loops entirely, because even bounded iteration introduces state-space complexity that makes formal verification harder, and the enterprise use cases we cared about didn’t require iteration at all.
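A simplified sketch of the failure mode, with made-up gas numbers: a payout loop whose cost grows with the participant list works until the list crosses the block gas limit, after which the function can never succeed again.

```python
# GovernMental-style lockup, simplified: once the loop's cost exceeds
# the block gas limit, the whole transaction reverts, every time.

BLOCK_GAS_LIMIT = 100   # illustrative numbers, not Ethereum's
GAS_PER_PAYOUT = 5

def pay_all(participants):
    gas_used = 0
    for p in participants:
        gas_used += GAS_PER_PAYOUT
        if gas_used > BLOCK_GAS_LIMIT:
            raise RuntimeError("out of gas: whole transaction reverts")
        # ...send payout to p...

pay_all(range(20))        # 20 * 5 = 100 gas: still fits in a block
try:
    pay_all(range(21))    # one more participant: permanently stuck
except RuntimeError as e:
    print(e)
```

Because the revert undoes everything, partial progress is impossible: the funds sit behind a function that no amount of retrying can complete.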

July 2017 brought the first Parity wallet hack: an unprotected initialization function combined with Solidity’s delegatecall mechanism let an attacker take ownership of the wallet contract and drain 153,037 ETH, roughly $30 million. Four months later, the Parity wallet freeze was worse. Someone called selfdestruct on the shared library contract that all Parity multisig wallets depended on, bricking every dependent wallet and permanently freezing 513,774 ETH. That’s approximately $280 million, still locked to this day. A single transaction rendered an entire ecosystem of contracts permanently unusable, enabled by an opcode that a more constrained language would never have exposed.

April 2018 brought BatchOverflow and ProxyOverflow: integer overflow vulnerabilities in ERC-20 token contracts that let attackers mint trillions of tokens from nothing. Solidity performed unsigned integer arithmetic without overflow checks by default, a design choice that most financial software treats as a basic safety failure. The fix, a library called SafeMath, existed but was optional, a matter of developer discipline rather than language enforcement.
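The arithmetic behind BatchOverflow fits in a few lines. This Python sketch (a simplification, not the exploited contract’s code) mimics pre-0.8 Solidity’s wrapping uint256 multiply: a huge per-recipient value times two recipients wraps to zero, so the balance check passes while each recipient is credited the huge value.

```python
# Wrapping uint256 arithmetic, as pre-0.8 Solidity did by default.
UINT256 = 2**256

def unchecked_mul(a, b):
    return (a * b) % UINT256  # silently wraps instead of failing

def batch_transfer(sender_balance, receivers, value):
    total = unchecked_mul(len(receivers), value)
    if sender_balance < total:            # total wrapped to 0: check passes
        raise ValueError("insufficient balance")
    return {r: value for r in receivers}  # each credited the full `value`

# value chosen so 2 * value == 2**256, which wraps to exactly 0
huge = 2**255
minted = batch_transfer(sender_balance=0, receivers=["a", "b"], value=huge)
print(minted["a"] == huge)  # True: tokens minted from nothing
```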

The Fomo3D exploit in August 2018 revealed a different attack surface entirely: gas manipulation. The winner stuffed blocks with high-gas transactions that prevented other players from interacting with the contract, buying themselves a guaranteed win worth 10,469 ETH. The attack only works because the language and runtime allow arbitrary computation whose gas costs can be weaponized.

Then SpankChain in October 2018: 165 ETH lost to reentrancy, the exact same vulnerability class as The DAO, two and a half years later. They had skipped a security audit, but even with one, reentrancy remains possible in Solidity because the language still permits the pattern. Audits are a manual review process layered on top of a language that allows dangerous constructs by default; they reduce risk without eliminating the root cause.

The Constantinople upgrade scare in January 2019 drove the point home. EIP-1283, a proposed gas repricing change, would have re-enabled reentrancy attacks on contracts already deployed and assumed safe under the existing gas schedule. The community caught it in time and postponed the fork, but the near-miss illustrated something fundamental: in a Turing-complete environment, even changing the cost model can reintroduce entire vulnerability classes into contracts that cannot be patched.
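The Constantinople near-miss reduces to one inequality. Contracts assumed safe because `transfer()` forwards only a 2,300-gas stipend, too little for a re-entering callback to write storage under the old schedule; EIP-1283’s repricing of dirty SSTOREs to 200 gas would have flipped that inequality for contracts already deployed. A minimal sketch, with the real gas figures but everything else stripped away:

```python
# The safety assumption behind transfer(): 2300 gas is not enough for a
# re-entering callback to do a storage write. Reprice SSTORE and the
# assumption breaks, with no way to patch deployed contracts.

STIPEND = 2300  # gas forwarded by Solidity's transfer()

def callback_can_reenter(sstore_cost):
    # A re-entering callback needs at least one storage write to do damage.
    return sstore_cost <= STIPEND

print(callback_can_reenter(sstore_cost=5000))  # old schedule: False ("safe")
print(callback_can_reenter(sstore_cost=200))   # EIP-1283 dirty write: True
```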

Compiled for Safety

Turing completeness is the headline problem, but how a smart contract language is executed matters too, especially because the deployment target is immutable. Ship a bug in a web application and you can push a fix; deploy a bug to a blockchain and it lives there permanently, executing as written, holding real assets, with no patch mechanism.

A compiled language with a strong type system can reject entire categories of bugs before deployment. Type errors, integer overflow, unreachable code paths, non-termination: these become compile-time failures rather than runtime catastrophes. An interpreted language defers these checks to execution time. On a mutable platform, that means you can fix mistakes; on an immutable one, you cannot. For Mastercard’s use case, we decided compile-time rejection of dangerous patterns was worth the tradeoff, especially with a smaller, purpose-built compiler we could formally verify ourselves.

The language we’re building at Mastercard is compiled for this reason. Every guarantee we move from runtime to compile time is a category of failure that becomes impossible to deploy. When your infrastructure processes billions of dollars, that shift changes the entire risk calculus.

What Enterprise Actually Needs

Solidity was designed for a specific vision: a permissionless world computer where anyone can deploy arbitrary programs. Enterprise blockchain has different requirements entirely. Permissioned networks and private transactions solve some of the platform-level gaps, but the language-level problems persist regardless of the network topology; bolting safety rails onto a race car doesn’t turn it into a school bus.

Privacy and access control are non-negotiable in enterprise finance. Every piece of state on Ethereum is publicly visible; the only privacy is pseudonymity, which is insufficient for transaction data between real institutions. Enterprise smart contracts need to enforce who can see what, not only who can modify what.

Deterministic execution, same inputs producing same outputs every time with no dependence on block state, gas prices, or miner behavior, is harder to achieve in Solidity than it should be. Gas-dependent execution paths, block timestamp manipulation, and miner-extractable value all introduce non-determinism that enterprise applications cannot tolerate.

Formal verifiability, mathematically proving that a contract behaves according to its specification, is vastly easier in a non-Turing-complete language. Remove unbounded loops and unrestricted recursion, and the state space of possible executions becomes finite and enumerable; automated tools can exhaustively verify every execution path. In a Turing-complete language, formal verification is theoretically possible and teams like Certora are doing impressive work proving properties of deployed Solidity contracts, but the process is orders of magnitude more expensive, less comprehensive, and requires deep expertise that most development teams don’t have.
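Here’s what “finite and enumerable” buys you in practice, sketched in Python with hypothetical names (this is not Plume syntax). A loop-free contract is a pure function of its inputs, so a checker can run every possible execution and prove a property over all of them, not just the ones a test suite happens to hit.

```python
# Exhaustive verification of a loop-free contract: with finitely many
# inputs and no iteration, "all execution paths" is a finite set.

from itertools import product

def release_payment(amount_ok, kyc_passed, deadline_passed):
    # Loop-free decision logic: a pure function of its inputs.
    if deadline_passed:
        return "refund"
    if amount_ok and kyc_passed:
        return "release"
    return "hold"

# Prove a safety property over ALL 2**3 = 8 possible executions:
# funds are never released after the deadline.
for a, k, d in product([False, True], repeat=3):
    outcome = release_payment(a, k, d)
    assert not (d and outcome == "release"), (a, k, d)
print("property holds on every execution path")
```

Add one unbounded loop to `release_payment` and this style of checking collapses: the set of executions is no longer finite, and you’re back to sampling.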

Safe arithmetic by default should not require importing a library. Solidity allowed unchecked integer overflow for years; SafeMath was a community patch rather than a language feature, and adoption is growing but still inconsistent. That ordering reveals a design philosophy oriented toward expressiveness over safety, though the ecosystem is clearly moving toward safer defaults.
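The two behaviors side by side, as a Python sketch at a fixed 256-bit width: pre-0.8 Solidity behaved like `wrapping_add` unless you remembered SafeMath; checked arithmetic makes the overflow a hard failure instead of a silent wrap.

```python
# Wrapping vs. checked addition at uint256 width.
UINT256 = 2**256

def wrapping_add(a, b):
    return (a + b) % UINT256          # silently wraps to a tiny number

def checked_add(a, b):
    result = a + b
    if result >= UINT256:
        raise OverflowError("uint256 overflow")
    return result

max_u256 = UINT256 - 1
print(wrapping_add(max_u256, 1))      # 0: a balance silently destroyed
try:
    checked_add(max_u256, 1)
except OverflowError as e:
    print(e)                          # fails loudly; transaction reverts
```

The point isn’t that checked arithmetic is hard to write; it’s that safety should be the behavior you get without writing anything.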

Building the Language at Mastercard

We called it Plume, partly because a good smart contract language should be lightweight, and partly because we liked the irony of naming something after a feather when Solidity keeps dropping like a stone. Plume started from a single premise: build a language where dangerous patterns are impossible to express, rather than building a general-purpose language and trying to prevent developers from writing dangerous code after the fact.

The most visible design choice is the absence of loops. Without loops, every program is guaranteed to terminate: no halting problem, no gas estimation uncertainty, no unbounded computation. The entire class of vulnerabilities that stem from non-termination, from GovernMental’s locked funds to Fomo3D’s gas manipulation, cannot exist.

The question we kept asking ourselves: does the problem enterprises need to solve actually require Turing completeness? The answer, almost every time, was no.

Plume is compiled, which gives us full type safety and static analysis before anything touches the chain. Integer arithmetic is checked by default; overflow is a compile-time error, not a runtime surprise that mints trillions of tokens. The type system makes common financial patterns easy to express correctly.

Predictably, the decision to omit loops generated the most internal debate. After surveying the smart contract use cases Mastercard cares about, supply chain verification, cross-border payment settlement, tokenized asset management, loyalty program logic, we found they’re all about state transitions with well-defined rules. A payment either meets the conditions for release or it doesn’t. A supply chain checkpoint either validates or it fails. These are decision trees, not algorithms that need to iterate until convergence.
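The “decision trees, not algorithms” claim is easiest to show as code. Here’s a hypothetical settlement flow sketched in Python (illustrative names, not Plume): the contract is just a finite transition table, and anything not in the table is rejected outright. No iteration anywhere.

```python
# A settlement contract as an explicit finite state machine: a map from
# (state, event) to next state. States and events are hypothetical.

TRANSITIONS = {
    ("initiated", "funds_received"):   "funded",
    ("funded",    "compliance_pass"):  "cleared",
    ("funded",    "compliance_fail"):  "refunding",
    ("cleared",   "payout_confirmed"): "settled",
    ("refunding", "refund_confirmed"): "closed",
}

def step(state, event):
    # Undefined (state, event) pairs are rejected: the contract simply
    # has no edge for them.
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        raise ValueError(f"illegal transition: {state} + {event}")
    return nxt

s = "initiated"
for e in ["funds_received", "compliance_pass", "payout_confirmed"]:
    s = step(s, e)
print(s)  # settled
```

Every reachable state and every legal edge is visible in one table, which is also exactly the shape a verifier wants to consume.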

Formal verification is baked into the design, not bolted on as an afterthought. Because the language is non-Turing-complete, the state space is finite, making exhaustive verification practical rather than theoretical. We can prove properties about every possible execution of a contract before it deploys, a guarantee that Solidity cannot offer.

The Right Constraints

I still read every Solidity postmortem that crosses my feed, and the pattern hasn’t broken since The DAO: reentrancy, integer overflow, gas manipulation, access control failures. The Solidity ecosystem is getting better, SafeMath adoption is up, the checks-effects-interactions pattern is more widely taught, and language-level improvements are on the roadmap. These are welcome changes. But they’re incremental fixes to a language whose core design still permits the patterns that keep producing losses, and each fix makes Solidity safer while leaving its fundamental expressiveness intact.

So do smart contracts actually need that expressiveness? Public blockchain’s vision of arbitrary decentralized applications may require it, but every enterprise use case we evaluated at Mastercard ran fine without it. Every constraint we added to Plume removed a class of failures from the realm of possibility. The smart contract languages that gain traction in enterprise will be the ones that kept the most dangerous patterns from being expressible in the first place, and that shift is already underway.