Earlier this month, a U.S. Court of Appeals ruling threw out the “Open Internet” rules adopted by the Federal Communications Commission in 2010. Under those regulations, broadband Internet service providers were generally prohibited from discriminating among the traffic flowing across their networks on almost any basis. In other words, they were more or less required to treat all traffic equally, regardless of its source, content, destination or size.

The ruling immediately sparked doomsday predictions about the end of “net neutrality” as we know it. “Expect Internet blackouts that extend far beyond the popular vendors, as smaller websites are caught in the crossfire,” wrote Craig Aaron, President and CEO of Free Press. “Tweets, emails and texts will be mysteriously delayed or dropped. Videos will load slowly, if at all. Websites will work fine one minute and freeze the next.”

Some of the less dramatic supporters of the FCC’s net neutrality measures concede that price discrimination and provider favoritism work well within private networks, but there is a sense among them that the Internet derives its value from the very fact that it is different. Tim Wu, a professor at Columbia Law School and an expert on net neutrality, says that the Internet is a platform that offers more value when it is less specialized and sub-divided. “The idea,” he writes on his personal website, “is that a maximally useful public information network aspires to treat all content, sites and platforms equally.” Wu likens the web to the electrical grid, which delivers electricity the same way regardless of which appliance is plugged into it. This neutrality, he argues, is one of the reasons for the success of public utilities, and we should think twice before abandoning it online.

The arguments Wu and others like him make for net neutrality legislation are generally nuanced and insightful, both articulating the problems we may face and proposing solutions to them. That said, the pro-legislation position suffers from some empirical and theoretical shortcomings. In a 2007 analysis of the mobile broadband industry, for instance, Wu identified what he perceived to be a litany of market failures, calling unsuccessfully for heavy regulatory intervention to establish an environment of “wireless net neutrality.” But his fears were unfounded. As an industry trade group pointed out in a direct response to Wu three years later, the lack of state regulation did not stop the wireless industry from erupting with competition. “Contrary to the professor’s view of how the ecosystem would evolve, in the absence of regulation, every element of the wireless ecosystem has expanded,” wrote CTIA–The Wireless Association in a letter to the FCC. CTIA cited hundreds of devices, hundreds of thousands of new applications, multiple operating systems and the proliferation of open source platforms and initiatives as evidence that the free market in mobile broadband has been a remarkable success and, above all else, a boon to consumers.

Similarly, others claim that net neutrality legislation represents a “fix” to a problem that may not even exist. Gerald L. Faulhaber, formerly of the Wharton School and Penn Law School, cites the research of the FCC itself as evidence that an unregulated broadband industry has been relatively free of abuse. “In over a decade, there were only four examples of purported misconduct…for the entire broadband ISP industry,” he explained in 2012. “By any standard, four complaints about an entire industry would seem to be cause for commendation rather than restrictive regulation.” In the policy report from which that quote is taken, Faulhaber argues that neither the pre-2010 history of unregulated broadband nor the current industry trends provide any indication that government intervention is required to protect the web.

Far from being merely ineffective, a legislative approach to net neutrality might actually make things worse. As economists Art Carden and Steve Horwitz argued in a 2013 editorial, the idea that imperfect market outcomes—or “market failures”—provide self-evident justification for government intervention overlooks the possibility of government failures. “We are left to wonder which of two imperfect systems will serve us better: the ‘failed’ market or the ‘failed’ political process,” the two write, citing numerous historical and theoretical examples to illustrate their point that the case for intervention is far from self-explanatory. “We have many reasons to think markets will outperform governments in this regard, even in less-than-perfect conditions.”

It’s certainly possible that a free market in broadband service could produce sub-optimal outcomes, and the arguments presented above are admittedly insufficient to prove otherwise beyond a shadow of a doubt. Moreover, some citizens, even after examining the relative strengths and weaknesses of states and markets, would undoubtedly still choose to place the government in charge of overseeing and maintaining net neutrality. Reasonable people, in other words, will disagree.

Too often, though, this debate has eschewed observation and comparative analysis for hyperbole and melodrama. We would do well to avoid the nightmare scenarios of a perfectly non-competitive, corporate dystopian web and at least consider the historical and theoretical evidence behind market approaches to maintaining Internet freedom.

Chris Bassil, Trinity ’12, is currently working in Boston, Mass. His column runs every other Friday. Send Chris a message on Twitter @HamsterdamEcon.