Are people mostly self-interested egoists who are unlikely to help achieve common goals unless somehow forced or induced to do so by government or other powerful agencies? Although a lot of people (and the assumptions of traditional economics) suggest the answer is yes, there is reason to doubt this is so. Or, to be more precise, there is good reason to doubt it is true of all or even most people. For example, why does anyone voluntarily vote when there is essentially zero chance that doing so will promote the individual voter’s narrow self-interest? And why do people voluntarily recycle or work with others to help out a neighbor in need? At least some people seem to be motivated by a sense of social obligation or some motive(s) other than narrow egoistic self-interest.
[L]awmakers from different classes bring different perspectives with them: how they think, how they vote, and the kinds of bills they introduce often depend on the classes they came from. The shortage of lawmakers from the working class tilts decisions about the distribution of economic resources, protections, and burdens in favor of the more conservative policies that affluent Americans tend to prefer. Social safety net programs are stingier, business regulations are flimsier, and the tax code is more regressive because working-class Americans are all but absent from our political institutions. . . .
Recently, Jesse A. Myerson has defended five economic reforms that he thinks fellow members of his Millennial Generation should “start fighting for, pronto, if we want to grow old in a just, fair society, rather than the economic hellhole our parents have handed us.” Myerson’s proposal is ambitious, including “Guaranteed Work for Everybody” and “Social Security for All [aka universal basic income]”. As one might expect, conservatives and libertarians have been highly critical. One fellow Millennial accused Myerson of trying to convince their generation to have “their livelihoods funded and assigned by the state,” thus completely ignoring the lesson they all were taught by “Lois Lowry’s The Giver in middle school.”
The website “I Side With” offers a fun and interesting way to learn which political party best represents your (current) political views. Similar to “Political Compass,” which helps you to better understand your ideological orientation, I Side With tells you the percent of issues on which you agree with five U.S. political parties: Democrats, Republicans, Libertarians, Greens, and Socialists.
How well do you know your political self? How well do other Civitas readers know their political selves? Let’s find out! Take a minute to fill out this survey and then spend about five minutes taking the I Side With quiz. After completing it, report the results of that quiz on this form. Once there are enough responses, I will post the results of the two surveys so that all can see how well, on average, Civitas users know their level of agreement with those five political parties.
[Photo is of the temple of the Delphic Oracle, upon which (of course) is inscribed “Know Thyself.”]
Back in 2010, when Congress was contemplating creation of the Consumer Financial Protection Agency (CFPA), political scientist Steven Teles wrote an interesting review of Daniel Carpenter’s 856-page history of the Food and Drug Administration, from which Teles drew three “important lessons” for those who favor effective regulation of the financial industry:
First and foremost, while Americans’ skepticism of government is strong, it is not insurmountable. Many Americans think of “bureaucrats” as either ineffectual or self-interested power grabbers, but few feel that way about employees of the FBI, the military, the Social Security Administration, or the National Institutes of Health. And because Americans view these agencies in a positive light, they and their representatives in Congress have been willing to grant them broad power and authority, and in some cases to allow them to exercise power that they have not been explicitly granted—proof that Americans do not oppose handing power to government when they believe it is in trustworthy hands. Just as important, as a result of their reputation these agencies have been able to attract talent that other agencies cannot. . . .
Second, in building an agency with the kind of power that the FDA had at its height, personnel matters. An agency with the ability to control, at least to some degree, its political destiny and strike fear into the hearts of those it regulates requires not only high-quality people, but also people possessed of a particular regulatory spirit. . . .
Finally, developing a powerful organizational image would depend in part on a leadership cadre capable of exploiting the damaged reputation of the financial industry, and taking advantage of new leverage points as they present themselves. The background of the financial crisis obviously should give the CFPA a running start on this. But, as the FDA’s more recent experience shows, the enemies of any regulatory agency with teeth will always be on the lookout for opportunities to strike at its organizational image. Even if the CFPA is able to reshape the financial industry as the FDA once did with Big Pharma, nothing lasts forever. Organizational image, Carpenter suggests, is power—but power based on something as subjective as reputation is also, perhaps inevitably, ephemeral.
The public choice school of political economy has long argued that “regulatory capture”–i.e. the taking over of regulatory agencies by the groups they are supposed to regulate–is nearly inevitable and, thus, effective regulation in the public interest should not be expected. Teles’ and Carpenter’s analysis of the FDA’s experience suggests that this is not necessarily the case. On a related note, Carpenter and David A. Moss have recently published a co-edited volume on the topic of preventing regulatory capture. From the abstract:
When regulations (or lack thereof) seem to detract from the common good, critics often point to regulatory capture as a culprit. In some academic and policy circles it seems to have assumed the status of an immutable law. Yet for all the ink spilled describing and decrying capture, the concept remains difficult to nail down in practice. Is capture truly as powerful and unpreventable as the informed consensus seems to suggest? This edited volume brings together seventeen scholars from across the social sciences to address this question. Their work shows that capture is often misdiagnosed and may in fact be preventable and manageable. Focusing on the goal of prevention, the volume advances a more rigorous and empirical standard for diagnosing and measuring capture, paving the way for new lines of academic inquiry and more precise and nuanced reform.
Governing magazine reports:
Takoma Park[, Maryland] recently became the first city in the nation to lower the voting age for local elections to 16. Since the law change in May, 134 voters, ages 16 and 17, registered to vote in municipal elections, and 59 cast ballots in November. That means that roughly 44 percent of registered voters in the under-18 voting bloc participated in the city election.
That’s good news for long-term civic engagement, says . . . [Peter Levine, a professor of citizenship and public affairs at Tufts University], because academic research shows that “voting is habit forming. If you voted in a past election, you tend to vote again.” The question going forward, Levine says, is whether the under-18 voters continue to vote at the same level, or if participation was abnormally high because this was the first time a 16-year-old could vote. When Congress lowered the voting age to 18 in 1972 for federal elections, turnout reached 52 percent for 18-to-24-year-olds, higher than in any year since. Another possibility, Levine says, is that parents and school teachers helped teenagers understand how voting works and what elected city leaders do, which motivated teenagers to vote. [Keep reading]
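The turnout figure in the excerpt is easy to verify from the two numbers the report gives (a quick sanity check, nothing more — the registration and vote totals are taken directly from the quote above):

```python
# Sanity check of the Takoma Park under-18 turnout figure quoted above.
registered = 134  # 16- and 17-year-olds who registered for municipal elections
voted = 59        # of those, the number who cast ballots in November

turnout = voted / registered
print(f"Under-18 turnout: {turnout:.1%}")  # → Under-18 turnout: 44.0%
```

That works out to roughly 44 percent, matching the figure in the report.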
The Annenberg Center and Game Innovation Lab at the University of Southern California have produced a nifty game for exploring redistricting and gerrymandering. According to their About Page:
The Redistricting Game is designed to educate, engage, and empower citizens around the issue of political redistricting. . . . By exploring how the system works, as well as how open it is to abuse, The Redistricting Game allows players to experience the realities of one of the most important (yet least understood) aspects of our political system. The game provides a basic introduction to the redistricting system, allows players to explore the ways in which abuses can undermine the system, and provides info about reform initiatives – including a playable version of the Tanner Reform bill to demonstrate the ways that the system might be made more consistent with tenets of good governance. Beyond playing the game, the web site for The Redistricting Game provides a wealth of information about redistricting in every state as well as providing hands-on opportunities for civic engagement and political action.
This doesn’t really relate to civic knowledge, but I do want to brag a little about Georgia State. This is really something to be proud of:
Georgia State University is the national leader in efforts to dramatically increase graduation rates, according to the Association of Public and Land-Grant Universities. [keep reading]
One of the most famous lessons taught in introductory (micro) economics courses is that, according to economic theory, minimum wage increases have the unintended consequence of increasing unemployment. Consequently, it is often argued that minimum wage increases actually end up hurting those (i.e. the working poor) whom such policies are supposed to help. As Congress considers increasing the federal minimum wage to $8.20 and certain major cities entertain mandating “living wages” as high as $15, it is worth considering the empirical, as opposed to merely theoretical, economic research on the effects of minimum wage increases on employment.
Back in February, John Schmitt, Senior Economist at The Center for Economic and Policy Research, reported the following results of his meta-analysis (i.e. systematic analysis of published research findings) of research on this topic published since the year 2000:
… The weight of that evidence points to little or no employment response to modest increases in the minimum wage.
The report reviews evidence on eleven possible adjustments to minimum wage increases that may help to explain why the measured employment effects are so consistently small. The strongest evidence suggests that the most important channels of adjustment are: reductions in labor turnover; improvements in organizational efficiency; reductions in wages of higher earners (“wage compression”); and small price increases.
Given the relatively small cost to employers of modest increases in the minimum wage, these adjustment mechanisms appear to be more than sufficient to avoid employment losses, even for employers with a large share of low wage workers.
It would seem, then, that as with the Whack-a-Mole Theory of Consumer Credit Regulation, the predictions of this economic theory are currently not supported by empirical evidence. (See also here and here.)
Notice, however, that this research is about “modest increases in the minimum wage.” Thus, this evidence (on its own) does not speak to the question of whether a large increase, like that being considered in Seattle, will lead to an increase in unemployment.
Also, it is important to keep in mind that no empirical finding, no matter how solidly established, is sufficient in itself for settling policy questions. There is no escaping the need to make core value judgments on issues of public policy. For example, libertarians would object to a minimum wage increase out of principle (regardless of its consequences). That is, they would see such a regulation as an illegitimate government intrusion into the freedom of employers and employees to negotiate the terms of employment. On the other hand, progressives sometimes conclude that the benefits created for the employed by a higher minimum wage would outweigh the reduction in overall employment (if this were in fact the consequence of minimum wage increases). Still, the debate over these value questions should be kept separate from the empirical question about the actual effects of minimum wage increases. And the evidence presently suggests we do not face a significant tradeoff between (modestly) higher minimum wages and employment.
Over at Monkey Cage, Erik Voeten offers an interesting discussion on the decline in the proportion of Congress members who are veterans and what, according to political science research, this might mean for Congressional foreign policy making:
Frank Lautenberg, who passed away this summer, was the last of 115 World War II veterans who served in the U.S. Senate. To the best of my knowledge, there will be only 12 U.S. senators who have experienced active military service in the 114th Congress. Only one in five members of the current House of Representatives were active-duty military. By contrast, during most of the Cold War, 70 percent of the U.S. Congress were veterans, with the peak coming in 1977 (80 percent).
Does this matter for policy making? There is some research suggesting that it does, most notably the work by Peter Feaver and Chris Gelpi. Feaver and Gelpi establish the following regularities (see especially this book and this chapter-length update):
— On issues that concern the use of force and the acceptance of casualties, the opinions of veterans track more closely with those of active military officers than with civilians.
— The U.S. initiates fewer military disputes when there are more veterans in the U.S. political elite (the cabinet and the Congress).
— The U.S. uses more force in the disputes it initiates when there are more veterans in the U.S. political elite.
— Veterans are less likely to accept U.S. casualties for interventionist uses of force than for “realpolitik” uses of force. [keep reading]
[Graph by Kevin Jefferies http://theweakerparty.blogspot.com/2013/02/no-more-ww2-veterans-in-senate.html]
World War I – known at the time as “The Great War” – officially ended when the Treaty of Versailles was signed on June 28, 1919, in the Palace of Versailles outside the town of Versailles, France. However, fighting ceased seven months earlier when an armistice, or temporary cessation of hostilities, between the Allied nations and Germany went into effect on the eleventh hour of the eleventh day of the eleventh month. For that reason, November 11, 1918, is generally regarded as the end of “the war to end all wars.”
. . . In November 1919, President Wilson proclaimed November 11 as the first commemoration of Armistice Day with the following words: “To us in America, the reflections of Armistice Day will be filled with solemn pride in the heroism of those who died in the country’s service and with gratitude for the victory, both because of the thing from which it has freed us and because of the opportunity it has given America to show her sympathy with peace and justice in the councils of the nations…”
. . . The Uniform Holiday Bill (Public Law 90-363 (82 Stat. 250)) was signed on June 28, 1968, and was intended to ensure three-day weekends for Federal employees by celebrating four national holidays on Mondays: Washington’s Birthday, Memorial Day, Veterans Day, and Columbus Day. It was thought that these extended weekends would encourage travel, recreational and cultural activities and stimulate greater industrial and commercial production. Many states did not agree with this decision and continued to celebrate the holidays on their original dates.
The first Veterans Day under the new law was observed with much confusion on October 25, 1971. It was quite apparent that the commemoration of this day was a matter of historic and patriotic significance to a great number of our citizens, and so on September 20th, 1975, President Gerald R. Ford signed Public Law 94-97 (89 Stat. 479), which returned the annual observance of Veterans Day to its original date of November 11, beginning in 1978. This action supported the desires of the overwhelming majority of state legislatures, all major veterans service organizations and the American people.
Veterans Day continues to be observed on November 11, regardless of what day of the week on which it falls. The restoration of the observance of Veterans Day to November 11 not only preserves the historical significance of the date, but helps focus attention on the important purpose of Veterans Day: A celebration to honor America’s veterans for their patriotism, love of country, and willingness to serve and sacrifice for the common good.
[Photo shows soldiers of the 353rd Infantry near a church at Stenay, Meuse in France, as they waited for news about the end of hostilities. This photo was taken at 10:58 a.m., on November 11, 1918, two minutes before the Armistice went into effect.]
Robert Golan-Vilella, of the National Interest, responds to Melvyn Leffler’s argument in the recent issue of Foreign Affairs that cutting defense spending will actually improve security because “when the government is operating under constrained resources, it is forced to make more difficult choices and prioritize more effectively …”
[I]f one wants to make the case for cutting defense spending now, the best arguments for doing so are those that stem from long-range thinking rather than asserting that the cuts themselves will spur better planning. Such an argument might run like this: The United States is a very secure country. It dominates its own hemisphere. It spends more on its military than the next ten countries combined—and many of those countries are its allies. America has adversaries, but its rival great powers are far less dangerous than those of the past. Thus, there is room to cut the U.S. defense budget to right-size it to the threats it actually faces, while still enabling it to maintain preponderant military strength. The money saved could then be used to accomplish any number of other national priorities.
Whether or not you find this line of thinking persuasive, the point is that it starts with an assessment of what the world and the international threat environment actually look like. . . . [Continue reading]
When Neale Mahoney, an economist at the University of Chicago’s Booth School of Business, set out to evaluate the effect of [the 2009 Credit Card Accountability Responsibility and Disclosure Act], he was confident he knew what he and his colleagues would find: It didn’t work.
“I went into the project with this sort of conventional wisdom that well-intentioned regulators would force down fees and that other fees and charges would increase in response,” he told me this week, comparing hapless rule makers to the carnival visitors playing the game known as Whac-a-Mole, where a mole springs up somewhere else as soon as one is knocked down.
But his expectation was wrong. The study came to a conclusion that surprised Mr. Mahoney and his colleagues: The regulation worked. It cut down the costs of credit cards, particularly for borrowers with poor credit. And, the researchers concluded, “we find no evidence of an increase in interest charges or a reduction to access to credit.”
… “Looking at the data forced us to rethink our understanding of the effects of regulating consumer financial products,” Mr. Mahoney told me. “The data changed our view of the world. That is what’s so exciting about being an empirical economist.” [Keep reading]
This is not only “exciting.” It also demonstrates the social importance of empirical social science. It is all too common, sometimes based on mathematically or logically sound (but empirically untested) models, sometimes based on dogmatic acceptance of “laws,” to either overestimate or underestimate what can actually be achieved by politics or public policy. The first principle of empirical social science is that beliefs need to be tested against reality. In this case, it turns out, contrary to common belief, regulation in the public interest was possible. There are, of course, plenty of cases where empiricism leads to the opposite conclusion, challenging or refuting widely held optimistic beliefs. Since erring on either side–overestimating or underestimating what can be accomplished–is costly, it is of supreme importance that beliefs be constantly tested against reality.
I often point out to students that several debates that moved outside the mainstream of American politics for most of the second half of the 20th century (or longer) have reentered or are on the verge of reentering the mainstream today. For example, yesterday in my U.S. Constitutional Law (POLS 4130) class, we talked about Jacobson v. Massachusetts (1905), a case in which the U.S. Supreme Court declared the U.S. Constitution does not bar states from enacting mandatory vaccinations in order to promote public health. Of course, mandatory vaccinations have once again become a hotly politicized issue in recent years, although this has not yet resulted (as far as I know) in a new round of constitutional litigation. Another example is the call for repeal of the 16th Amendment (under specified conditions) in the 2012 Republican Platform. Similarly, the Tea Party has endorsed repealing the 17th Amendment. Yet another example, as Sandy Levinson has pointed out, is the reemergence of serious arguments over the constitutional right of states to nullify federal laws (once thought long settled by Cooper v. Aaron in 1958 if not Andrew Jackson’s 1832 “Proclamation Regarding Nullification”) and even to secede (once thought long settled by the Civil War). Still another example is Rand Paul’s insistence during his successful 2010 campaign for the U.S. Senate that he rejected the Supreme Court’s unanimous 1964 opinion in Heart of Atlanta Motel v. U.S. that the Commerce Clause of Article I, Section 8 authorizes Congress to outlaw racial discrimination by certain kinds of privately owned businesses (as Congress did with the Civil Rights Act of 1964). (I could also point to various other proposals for radically amending the Constitution or drafting an entirely new constitution but that do not yet seem to be on the verge of entering the mainstream of political debate. That said, here are some examples if you are curious.)
My point in mentioning all this in class is to illustrate how the nature and meaning of the Constitution is never really settled. We have moments of relative consensus on certain issues, but you never really know which issues, long thought settled, will reemerge as objects of debate in response to changes in social conditions, intellectual developments, political movements, etc. And I think a defining characteristic of our time is that an unusually broad array of fundamental questions about the constitutionality of decades-old political-institutional “settlements” are increasingly being raised and debated within the political mainstream.
This was very much on display in a recent exchange among self-described “conservative” opinion leaders over the constitutionality of the welfare state–i.e. much of what the federal government has done since the New Deal era. It started with Charles Krauthammer’s appearance on The Daily Show with Jon Stewart in which he magnanimously praised “the great achievements of liberalism — the achievements of the New Deal, of Social Security, Medicaid, Medicare.” Krauthammer said this in the course of defending what he contends is “true conservatism,” which, he says, is supportive of those “great achievements” but only concerned with keeping them sustainable into the future.
This prompted Andrew McCarthy to argue that Krauthammer’s “conservatism” is no such thing. Rather, it is simply the “moderate statism” of the “Republican establishment” that “‘is more sympathetic to Obama’s case for the welfare state than to the Tea Party’s case for limited government and individual liberty.'” Importantly, McCarthy did not simply claim that true conservatism is opposed to the welfare state. He went further to insist that it is unconstitutional:
[C]onservatives revere an enriching cultural inheritance that binds generations past, present, and future. It obliges us to honor our traditions and our Constitution, preserve liberty, live within our means, and enhance the prosperity of those who come after us. The welfare state is a betrayal of our constitutional traditions: It is redistributionist gluttony run amok, impoverishing future generations to satisfy our insatiable contemporaries…
This, in turn, led Conor Friedersdorf, a third self-proclaimed “conservative,” to offer an extended rebuke of the views expressed by McCarthy, whom he refers to as more of a “fundamentalist for originalism” than a true “conservative.”
You’d think, given the totality of McCarthy’s positions, that “constitutional conservatism” is an end in itself. It isn’t. Advancing life, liberty, and the pursuit of happiness—that is the end. I, like many conservatives, believe that for the most part those ends are best advanced by working within the constitutional framework. Like many liberals, I also believe that slavery and Jim Crow were such abominations that, if the choices were to strictly construe the constitution or to free the slaves and end Jim Crow, to hell with originalist notions of states rights.
What does that have to do with McCarthy’s argument? He is too enamored of the heuristic that what’s constitutional is liberty-enhancing and what’s unconstitutional is liberty-destroying. It’s a good heuristic, but it doesn’t always hold.
Arguing with him, I normally point out why I think that his expansive views of executive power betray Madison’s vision. Today let’s imagine, for the sake of argument, that he has been right all along: that strict adherence to the Constitution really does permit secret kill lists, torture, massive surveillance, and indefinite detention; and it really does prohibit, say, Social Security and Medicare.
Even if that were true, it would not change the fact that the national-security state and its open-ended concentration of unaccountable power poses a far greater threat to liberty than federally bankrolled social-welfare spending (even if you think, as I do, that the spending could be improved upon).
… The Constitution ought to play a prominent role in our politics. But I’d like to see McCarthy construct an argument for his favored policies without any mention of or recourse to the document. Perhaps that would make it clearer that suspending due process puts a country farther along the road to serfdom than old-age pensions.
I say that his position is not conservative because, while conserving our constitutional design is certainly a coherent part of a conservative approach to governing, McCarthy isn’t proposing to conserve something that still exists—rather, he is proposing that we take an approach to social-welfare policy that hasn’t been tried since the early 1930s and apply it to the modern economy: a radical change, whatever one thinks of it. The radicalism and unpredictability of what might happen next doesn’t necessarily make him wrong. But conservative is a weird word for it. . . .
Regardless of one’s views on this normative debate, as a factual matter, I think it’s safe to say that the Constitution–specifically, the debate over its fundamental nature and meaning–is playing and will continue to “play a prominent role in our politics,” and that this means knowledge of constitutionalism is essential for understanding contemporary politics.
(On that note, I highly recommend taking courses offered by the Political Science and History departments on American constitutionalism — usually under the title of “constitutional law” and/or “constitutional history”.)
Alex Tabarrok of Marginal Revolution offers an alarming answer to a provocative question:
Did Obama spy on Mitt Romney? As recently as a few weeks ago if anyone had asked me that question I would have consigned them to a right (or left) wing loony bin. Today, the only loonies are those who think the question unreasonable. . . . Do I think Obama ordered the NSA to spy on Romney for political gain? No. Some people claim that President Obama didn’t even know about the full extent of NSA spying. Indeed, I imagine that President Obama was almost as surprised as the rest of us when he first discovered that we live in a mass surveillance state in which billions of emails, phone calls, facebook metadata and other data are being collected.
The answer is yes, however, if we mean did the NSA spy on political candidates like Mitt Romney. Did Mitt Romney ever speak with Angela Merkel, whose phone the NSA bugged, or any one of the dozens of her advisers that the NSA was also bugging? Did Romney exchange emails with Mexican President Felipe Calderon? Were any of Romney’s emails, photos, texts or other metadata hoovered up by the NSA’s break-in to the Google and Yahoo communications links? Almost certainly the answer is yes.
Did the NSA use the information they gathered on Mitt Romney and other political candidates for political purposes? Probably not. Will the next president or the one after that be so virtuous so as to not use this kind of power? I have grave doubts. Men are not angels. [Keep reading]
Eduardo Porter of the New York Times reports:
The United States is one of few advanced nations where schools serving better-off children usually have more educational resources than those serving poor students, according to research by the Organization for Economic Cooperation and Development. Among the 34 O.E.C.D. nations, only in the United States, Israel and Turkey do disadvantaged schools have lower teacher/student ratios than in those serving more privileged students. [Read the rest here]
Miles Kimball writes
Now might be a good time to remind the world just how far the country’s health care sector—with or without Obamacare—is from being the kind of classical free market Adam Smith was describing when he talked about the beneficent “invisible hand” of the free market. There are at least five big departures of our health care system from a classical free market: . . .
A lot of people think that party polarization could be mitigated if states reformed their system of nominating candidates. This is based on the assumption that those who vote in primaries are more ideologically extreme than general-election-only voters. The extreme primary voters, according to this theory, nominate extreme candidates, which leaves the moderate general-election-only voters with no moderate candidates to vote for. This results in the election of candidates who are more extreme than the majority of voters actually prefer.
John Sides points out that the political science consensus suggests otherwise. Among other things, he points to the chart above, which is based on a large survey of voters in the 2008 election. The data demonstrate that there is in fact little ideological difference between primary and general-election-only voters. For the most part, primary voters are simply more interested in politics than are general-election-only voters:
Those who voted in the primary were clearly more interested in politics but did not have very different views on issues (with the possible exception, for Republicans, of raising taxes on the wealthy). Other research finds a similar pattern; see here or here or here. Given these findings, increasing primary turnout would not necessarily create a very different electorate and therefore different incentives for candidates or incumbents. [read the entire post here]
There has been much debate over the extent to which Obama’s presidency signals that we have entered a “post-racial” era in American politics. Does the fact that we (twice) elected a black president mean our politics is no longer shaped by negative racial stereotypes? In recent days, the Monkey Cage has discussed research providing empirical evidence that (1) “racial resentment” played a role in the government shutdown, but (2) Obama’s Presidency may have led young people to have attitudes “more favorable to blacks than every previous generation.” This research suggests that, while “post-racial” is not an apt description of our present politics, there is reason to suspect there may be a post-racial politics on the horizon.