Augur V2 and Decentralized Prediction Markets

  • Commentary
  • August 8, 2020

Five years after its initial launch, Augur has released a second iteration of its prediction market protocol. Long anticipated, Augur v2 contains several updates largely aimed at enhancing efficiency and usability. In light of these events, the following dispatch explores the obstacles that have long hindered Augur and the potential for the recent updates to alleviate them, while also considering the implications of Augur's performance for decentralized prediction markets as a sector.

Seemingly promising and generally lauded as a concept prior to its launch in June 2015, Augur has yet to achieve meaningful adoption in the five years since. At the heart of Augur's troubles was the platform's obstructive inefficiency; among other factors, its highly secure yet prolonged and convoluted market resolution process hindered its ability to attract users, who largely opted for more centralized, more efficient platforms. For most potential users whose risk tolerances were compatible with the risk profile of the average Augur prediction market, the counterparty risk and centralized reporting that accompany centralized platforms were not absolute deterrents, and those platforms' superior usability and efficiency made the tradeoff worth accepting. As a result, the platform remained highly illiquid, with low volumes leaving its prediction markets too inefficient to accurately reflect the likelihood of outcomes based on collective trading decisions, the core tenet of any prediction market.
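To make the stakes of that illiquidity concrete, consider how a prediction market's price doubles as a forecast. The sketch below is a toy illustration, not Augur's actual order-book or trading mechanics; the function name and the 0.70 Dai figure are hypothetical.

```python
# Toy model: in a binary market whose winning share redeems for 1 unit of
# collateral, the price of a YES share can be read as the market's implied
# probability of that outcome.
def implied_probability(yes_price: float, payout: float = 1.0) -> float:
    """Implied probability = price of a YES share / its redemption value."""
    return yes_price / payout

# A YES share trading at 0.70 Dai implies a roughly 70% consensus probability.
print(implied_probability(0.70))  # 0.7

# In a thinly traded market, a single small order can move this price far from
# any informed consensus, which is why illiquid markets make poor forecasts.
```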

Augur v1's inefficiency arose not from technical obstacles but from the protocol's core logic. Hence, Augur v2 implements several procedural revisions to the platform's reporting protocol designed to shorten market resolution times from seven days to three days. This efficiency gain is arguably negligible, as waiting three days for a position to resolve is by no means convenient for users. Similar dispute procedures also remain and will likely prolong market resolution, as they did in v1. Thus, one might reasonably conclude that the consequences a seven-day reporting process had for user behavior are nearly as likely to occur with a three-day one.
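A back-of-the-envelope model makes the point. The parameters below are illustrative assumptions (the exact Augur v2 window lengths and dispute-round mechanics differ in detail), but they show how dispute rounds, rather than the initial reporting window, dominate worst-case resolution time.

```python
# Illustrative only: resolution time modeled as an initial reporting window
# plus one fixed-length window per dispute round. The window lengths are
# assumptions, not the exact Augur v2 constants.
def resolution_days(reporting_window_days: float,
                    dispute_rounds: int,
                    dispute_round_days: float = 7.0) -> float:
    return reporting_window_days + dispute_rounds * dispute_round_days

print(resolution_days(7, 0))  # undisputed v1-style market: 7 days
print(resolution_days(3, 0))  # undisputed v2-style market: 3 days
print(resolution_days(3, 2))  # two dispute rounds push resolution to ~17 days
```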

In an attempt to address other shortcomings, Augur v2 has implemented several notable changes to its mechanics, including integrations of IPFS, 0x Mesh, Uniswap oracles, and Dai. The latter two integrations genuinely appear promising in alleviating a key hindrance to positive user experience; rather than purchasing shares in highly volatile ETH, users may now do so with the Dai stablecoin. Further, the introduction of 'invalid' as a tradable outcome, inelegant as it is, may indeed eliminate invalid market attacks. However, in Augur v1, when a network fork was required to resolve a market, users were incentivized to participate by migrating their REP to their chosen fork in exchange for a 5% bonus paid in REP; this mechanism proved insufficient to generate participation, theoretically allowing a small group of participants to determine which fork prevails. To strengthen this mechanism, Augur v2 has eliminated the bonus and instead mandates that, in the event of a fork, users participate within 60 days or forfeit the entirety of their REP holdings. While potentially more secure, the threat of total forfeiture with no direct upside is likely to deter holding REP. Finally, despite the IPFS integration, Augur's smart contracts remain data-intensive, and scalability remains a concern, especially in light of Ethereum's worsening network congestion.
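For clarity on how the fork incentives changed, the sketch below contrasts the two designs described above: v1's migration bonus versus v2's forfeiture deadline. The names, numbers, and structure are hypothetical illustrations, not Augur's actual contract interface.

```python
from dataclasses import dataclass

@dataclass
class RepHolder:
    balance: float          # REP held at the moment of the fork
    migrated: bool = False  # whether the holder moved REP to a chosen fork

def v1_outcome(holder: RepHolder) -> float:
    """v1-style carrot: migrating earns a 5% REP bonus; sitting out costs nothing."""
    return holder.balance * 1.05 if holder.migrated else holder.balance

def v2_outcome(holder: RepHolder, days_since_fork: int, window_days: int = 60) -> float:
    """v2-style stick: REP not migrated within the forking window is forfeited."""
    if holder.migrated:
        return holder.balance
    return holder.balance if days_since_fork <= window_days else 0.0

# Under v1, a passive holder keeps 100 REP; under v2, the same holder keeps
# nothing once the 60-day window closes.
print(v1_outcome(RepHolder(100.0)))      # 100.0
print(v2_outcome(RepHolder(100.0), 61))  # 0.0
```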

Augur proves a relevant case study in the decentralization of platforms with existential dependencies on efficiency, and its ailments, theoretical and practical, typify those facing the decentralized prediction market and DEX sectors at large. Decentralization inherently entails some degree of inefficiency and reduced usability; platforms like Augur eliminate counterparty risk and single points of failure in reporting but will never match the efficiency of their centralized counterparts. As the past five years have demonstrated, users are largely unwilling to accept this tradeoff. Low volumes in decentralized prediction platforms and DEXs mean illiquid markets, and illiquid markets are inefficient markets. Without prices that incorporate all available information and the collective judgment of traders, and thereby predict the likelihood of outcomes, these platforms fail to achieve their primary purpose. In the coming months and years, these projects' success will be determined by their ability to disprove the mounting evidence that such problems are intrinsic and fundamentally inescapable.