Philosophical commentary on contemporary political issues in the tradition of Charles Taylor, Alasdair MacIntyre, and Michael Sandel.

Friday, December 20, 2013

The Malleability of Preference

In a recent story published on the NPR website, Radiolab co-host Robert Krulwich writes about new research on the collapse of civilization on Easter Island. The long-standing assumption was that the people of Easter Island did not last long after the last tree on the island was chopped down. For a people who depended on wood to build boats for deep-sea fishing, it seemed unlikely they would have survived long without it.

New research, however, paints an even more terrifying picture of what happened after the last tree went down on Easter Island. According to anthropologists Terry Hunt and Carl Lipo, the residents of Easter Island likely lived on for years afterwards, eating the island's rats and living in squalor compared to their previous arrangement.

When Europeans came across this post-wood society, however, the people did not beg for food; they bartered for hats and other trinkets. This suggests that the people of Easter Island did not consider themselves to be in squalor, but rather had adjusted their preferences to cope with their limited resources. As Krulwich says in his story,
It's like the story people used to tell about Tang, a sad, flat synthetic orange juice popularized by NASA. If you know what real orange juice tastes like, Tang is no achievement. But if you are on a 50-year voyage, if you lose the memory of real orange juice, then gradually, you begin to think Tang is delicious. On Easter Island, people learned to live with less and forgot what it was like to have more. Maybe that will happen to us. There's a lesson here. It's not a happy one.
Another example often brought up to underscore people's ability to dial back their preferences is that of concentration camps. People find ways to cope in times of distress, and coping usually takes the form of a shift in preferences and priorities.

But we don't have to look to the extreme examples of Easter Islanders, astronauts, and concentration camps to see preferences being dialed back. The boy born into a family with a history of drug and alcohol abuse often sees no way to succeed outside of a life of crime, like Omaha killer Nikko Jenkins. Someone who has never had a family member attend college is less likely to want to go to college than someone whose parents both graduated from college.

These examples point to a quality of our preferences that is often ignored by economists and free-market dogmatists: the malleability of preferences. Preferences are often treated as a metaphysical inevitability. Much of classical liberal economic thought is built around the idea that our preferences are somehow natural, fixed, and self-affirming. However, when we look at examples in the lives of others or engage in honest self-reflection, we realize the fragility and variability of our preferences.

If we want to use economics to improve our lives and the lives of others, we need to drop the fetishized view of preferences and understand them for what they are: malleable manifestations of our interaction with our environments. While many utilitarians thought they could salvage their theory by appealing to preferences rather than happiness, preference-fulfillment favors those already situated to form preferences that will lead to greater personal fulfillment.

Thus, it becomes incumbent on a society committed to the tenet of equal opportunity to provide individuals from all backgrounds the equal opportunity to form preferences that will enable them to live meaningful lives. In the words of Amartya Sen,
Social and economic factors such as basic education, elementary health care, and secure employment are important not only on their own, but also for the role they play in giving people the opportunity to approach the world with courage and freedom.
An honest approach to a just society will not only address preference fulfillment, but also preference determination.

Thursday, August 29, 2013

Why Force Career Specialization on Teenagers?

While the spinoff television show Daria never quite reached the self-mocking genius of its predecessor Beavis and Butt-Head, a scene featuring title character Daria Morgendorffer has made the rounds on the internet recently.

Daria, a sarcastic, intelligent, and prematurely jaded student sitting in a classroom, is prompted to tell the class what her goals are. To this, she provides a bitter and incisive response.

My goal is not to wake up at 40 with the bitter realization that I've wasted my life in a job I hate because I was forced to decide on a career in my teens.
Though this is just a quote from a cartoon, it captures a lesson that seems to escape educators and policymakers across the country.

A recent example of this generational amnesia shows itself in an op-ed in the Los Angeles Times. In this article, the presidents of Lewis & Clark College and Northwestern University join forces to advocate for students to make better decisions about where to earn their undergraduate degrees. While they include good messages about the value of discomfort as a tool for education, they go on to list a number of criteria by which they think students should pick schools.
If you want a career in theater, pick a school in a community with a vibrant local theater scene...If you want to become a global titan of industry,...pick a place that forces you to gain global literacy...If you're a nerd who has already invented great new apps and wants to be a tech entrepreneur,...go where you can take great courses in design.
The issue with Glassner and Schapiro's analysis is that it assumes most 18-year-olds have any idea what the job market looks like. If they are enrolling in college, they likely have never held a job that comes close to a career field they would be interested in entering in the long run. Yet we assume they should be making decisions as if they have a thorough understanding of what it is to have a job and what it is to look for one in the post-graduate phase of life. As a city councilmember here in Omaha said earlier this week, most young people don't want to become dentists and accountants; they want to be actors, rappers, and athletes.

American Enterprise Institute fellow Richard Vedder makes a similar mistake in a recent op-ed where he belittles "anthropology and drama" as the "fields the economy values the least" as opposed to "engineering and math." In an economy in which the average college graduate holds eleven jobs in her first twenty years out of college (I'm on my fifth currently) and the average person changes careers three times in her lifetime, why do we assume the best thing for an 18-year-old is specialized career training? I would wager that someone who studied computer programming in college ten years ago uses hardly a bit of what they learned as an undergraduate in their work today.

Higher education, at least at the undergraduate level, needs to focus on developing thinkers, communicators, and citizens. While we may change our specific career paths throughout our lives, these three qualities will stay with us through any vocational experience we have. On top of this, democracies (and, in the long run, markets) need thinkers, communicators, and citizens rather than cogs to plug into a stagnant economic system. Let us not make the mistake of believing that once someone hits the special age of eighteen, they suddenly become those three things on their own.

Monday, August 5, 2013

Why Liberals Would Love to See Christie on the Ballot in 2016 -- and Maybe in the White House, Too

In a poll released earlier today, New Jersey Governor Chris Christie ranked as the "hottest" politician in America, beating out the poll's #2, former Secretary of State Hillary Clinton, by one "degree." Yes, the terminology is a little silly, but it points to a possible 2016 match-up that bodes nothing but good news for liberals in America.

First, Hillary Clinton left her post at the State Department as one of the most popular politicians in the country. If she ran, she would have a good shot at becoming the first woman president in United States history, as well as at marking the Democrats' first three-term streak holding the presidency since Harry Truman occupied the White House.

While Christie is popular amongst independents according to a Public Policy Polling survey, the last candidate to hold such broad appeal across ideologies was John McCain in 2008, and we remember what happened to him. The Republican party twice chose centrist candidates over more right-wing opposition, in 2008 and 2012, but was nonetheless unsuccessful at turning that into victories. In 2012, Mitt Romney won the independent vote but still lost the presidency. Assuming he locks up the nomination on the strength of this appeal, Christie would still have to somehow win the independent vote decisively while also turning out a Republican base that he is alienating with his rhetoric against the still-powerful tea party wing of the GOP.

But if Christie were to overcome these odds and pull off a victory against Clinton in the general election, it would still likely spell long-term benefits for Democrats. A cycle cited by some political scientists is that of the opposite-party moderate who cements an ideological pendulum swing in American political affairs.

An example of this is FDR to Eisenhower. FDR cemented the modern welfare state and the Keynesian, big-government approach that dominated American policy until near the end of the Cold War. It was Republican Dwight Eisenhower, however, who took the steps of instituting the national highway system and forcing the integration of schools, cementing FDR's pro-government, anti-segregation policies as more than just partisan choices. This swung the other way when Reagan was followed by Clinton, a Democrat who reformed welfare and signed NAFTA, the most extensive free-trade treaty in US history. Clinton's actions reinforced neoliberalism as the ideology of contemporary American policy.

The difference between these developments and the 2016 election is that a one-term same-party president (Truman and Bush I) sat between the great ideology-shifter and the opposite-party reinforcer. But if Christie won the presidency, would he go for an Obamacare repeal? Would he dial back his rhetoric and suddenly support the "cut at all costs" approach of the Tea Party? No, Christie would likely be an Eisenhower or a Clinton: a popular individual who sells out his party's radicals by pushing the center a little bit in the other direction.

Monday, July 8, 2013

Why I Don't Talk About Religion

I know many people for whom religion is a fixation. The historical impact and philosophical implications of what we call "religion" provide an allure to people young and old, devout and atheist. I, however, seldom write, talk, or think about religion. In this post I will sketch out the reason why.

My unwillingness to address the topic (or, more accurately, my lack of interest in it) centers on the inability of what we call "religion" to contribute to either the deep philosophical or the everyday practical problems of our day. For the purposes of this post, I consider "religion" to be "a set of beliefs concerning the cause, nature, and purpose of the universe" (dictionary.com), which entails a comprehensive system of belief that provides a basis for all truth.

The problem with religion as such is the inability to say anything coherent about it. To understand this, it is instructive to appeal to Immanuel Kant's famous analytic/synthetic distinction, which separates propositions into two different categories.

An analytic proposition is one in which the predicate is contained in the concept of the subject. Thus, "red is a color" is an analytic proposition. Another way to look at an analytic proposition is to say "if the subject were not described by the predicate, it would be logically incoherent." It makes no sense to consider "red" as anything but a color; therefore the statement "red is a color" is analytic.

A synthetic proposition is one that combines two concepts that do not entail one another. An example of this is "red is a pretty color." "A pretty color" is not an essential property of "red," so this proposition provides information not entailed in the subject. In addition, if red were to turn out not to be a pretty color, it would be no stretch of the imagination to say that it would still be "red."

Using this distinction, the problems with making statements about religion begin to become clear. I will use two statements important to religion (or to most anything related to man, for that matter), "religion is true" and "religion is good," to illustrate this point. If these statements are treated as synthetic, then the predicate is a concept divorced from the subject. This means that religion would continue to be what it is without the qualities of "good" or "true." But if religion as a comprehensive doctrine of ethics and beliefs is to be of any use, then it must be both good and true. Thus, these statements cannot be synthetic if religion is to be useful.

Thus, in order for religion to be useful, the questions of its goodness and validity must be analytic propositions. But that means the statements "religion is true" and "religion is good" hinge on a definition of religion that implies both truth and goodness. In this case, religion cannot be questioned as either false or evil. Any evidence to the contrary of religion being both good and true (negative outcomes of religion, times that religion has turned out to have untrue tenets) must be thrown out to preserve the statements as analytic. This approach makes the qualities of "goodness" and "truth" subservient to religion, which means that reason, intuition, emotion, or any other determinant of goodness or truth will always lose when it disagrees with religion. Even if we ignore the glaring intuitive absurdity of declaring the two statements analytic, the outcome of such a declaration surrenders every other tool man has to understand the world.
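For readers who prefer the dilemma in schematic form, here is a rough sketch; the shorthand is mine, not Kant's. Write \(G\) for the claim "religion is good" and \(T\) for "religion is true."

\[
\begin{array}{ll}
\text{Synthetic reading:} & G,\, T \text{ are contingent} \;\Rightarrow\; \text{religion could exist while being neither good nor true} \\
 & \Rightarrow\; \text{religion cannot serve as a foundation for ethics and belief.} \\[6pt]
\text{Analytic reading:} & G,\, T \text{ hold by definition} \;\Rightarrow\; \text{no evidence can count against them} \\
 & \Rightarrow\; \text{goodness and truth become subservient to religion.}
\end{array}
\]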

While the synthetic route renders religion hopelessly useless, the analytic route renders religion terrifyingly dangerous. There is, however, some silver lining in the usefulness of religion: religion identified not as a philosophical concept but as a sociological one can still have merit. Religion can provide the social capital, solidarity, and brotherhood that have spearheaded mass movements in the United States such as abolition and the civil rights movement. As much as a society of perfectly rational people who could see the inherent evil of slavery or Jim Crow has an intuitive appeal, it does not fit with the nature of Homo sapiens, a species that has progressed on the back of certain unjustified analytic assumptions since the dawn of time.

Thus, religion, like the doctrine of natural rights, provides a convenient social glue that holds society together but is ultimately based on a shaky foundation. Can society ever persist without such convenient falsehoods? Is the statement "man is an irrational animal" synthetic or analytic? The answer is yet to be determined.

Friday, June 14, 2013

No, Third Parties Will Not Save American Democracy

If you follow national politics even the slightest bit, you are probably a little fed up with partisan politics. "Polarization" is what it's called: an affliction that is stymying legislative progress and keeping our federal government in a seemingly eternal state of gridlock. Examples include Congress's inability to pass universal background checks on firearm purchases despite overwhelming public support, the failure to compromise on the federal budget that led to the sequester, and Congress's continuing battle over health care legislation passed years ago.

Some people suggest that the problem with American democracy is systemic. Their argument is that the two-party system is intrinsically flawed since it provides only two options for voters. If only we had more options, the argument goes, we would have more of a chance of having a government that actually represents the people of our country, a people who are arguably more moderate than our current parties.

Three assumptions of this argument expose the issue with the deus ex machina third-party solution.

First, the argument that third parties will allow people to choose parties that more closely align with their personal beliefs follows a consumer model of government. The idea is that each individual has a claim to get the product they want and that producers will provide it if demand is high enough.


While this may be a good approach for fast food, it doesn't work so well in government. This is because government by the people is what Canadian philosopher Charles Taylor terms an "irreducibly social good": a good that cannot be reduced to the goods accrued by individuals. An intrinsic aspect of political life is interaction, and with interaction comes negotiation of the self with others. If we atomize politics, we diminish its most important dimension: the social. Fostering the growth of third parties will not create parties that perfectly align with individuals; it will only create new avenues for compromise.

Second, advocating for third parties often hinges on a belief that the two parties are too polarized and that a political middle needs to be found. The problem with this is that contemporary polarization in practice is not the fault of both parties, but is actually the fault of the Republican party.

People compare the Tea Party and the Occupy movement, but only one of these fringe movements has successfully infiltrated its party, created a party caucus, and made moderates in its party pay for not holding ideologically pure positions. There was no high-profile primary upending in the Democratic party on the level of Senator Richard Lugar's embarrassing 2012 defeat. Both parties have agreed to spending cuts, but it took a near-implosion of the economy to get Republicans to agree even to tax rates lower than those of the Clinton years. On top of that, President Obama's Heritage Foundation-inspired health care reform law has been widely called "socialist" by those on the right, stretching the term to the point of meaninglessness. Obama's economic policy platform would have put him in the mainstream of the Republican party 30 years ago, and his social policy platform sides with the majority of Americans today. For those who feel there is no place for a centrist in America today, look no further than the Democratic party.

Lastly, there is an assumption that polarization is an outgrowth of Washington and that "regular people" are more centrist than their representatives. The reality is that the American people are just as divided as their representatives in Washington. In the last mayoral election here in Omaha, an open primary was held with all parties going head-to-head and the top two vote-getters advancing to the general election. Out of the five candidates, two were radical right-wing candidates, two were moderates, and one was a strong left-wing candidate. One of the radical right-wing candidates and the left-wing candidate floated to the top.

While third parties are fun for someone who is fascinated with politics, they are not the solution to America's political polarization problem. What we need to look to is voter access. Voter ID laws, gerrymandering, and partisan politicians serving as state directors of elections pose a much more pressing threat to American democracy than the two-party system. Let's see what we can do to fix the issues that can really bring America closer to its promise of a country of the people, by the people, and for the people.

Tuesday, June 4, 2013

What Role Does an Estate Tax Have in a Meritocracy?

The appeal of a meritocratic system runs deep in American society. We believe that this country should be one where everyone gets a chance. "Equality of opportunity" has become the phrase of choice when considering what constitutes a just system of distribution. If everyone starts from the same place, then let them end up where their merits take them. Thus we believe that people should have the right to reasonably exercise control over property that they gained from an initial position of equality with others.

The question of levying an estate tax, however, challenges this meritocratic premise. On the one hand, an estate tax can be construed as seizing the earned property of an individual who justly acquired it. While an argument can be made that property rights do not carry on past the grave, it is strange to say that I could transfer my property in whatever way I choose while still alive but could not, while alive, will my property to certain ends contingent upon my death. Most reasonable people would agree that individuals have a right to some control over their property after death. If I earned my property starting from a position of equality, then the meritocratic prerogative is for me to have control over the transfer of that property.

On the other hand, the ability to collect an inheritance is a challenge to the meritocratic system. No one is given a chance to choose or "earn" their parents, but regardless, children most often find themselves the primary beneficiaries of their parents' estates. Some may say that children "earn" their inheritance through love, affection, and the like, but children are, by their relationship to their parents, already uniquely positioned to provide that love and affection, which puts them in an initial position of inequality with others. Thus, bestowing inheritance stands at odds with meritocratic values.

How can this apparent contradiction be resolved?

One philosopher who provides a pertinent perspective on this case is John Rawls. In his seminal work A Theory of Justice, Rawls points out that the qualities that make up what we call our "personality," qualities such as intelligence, social skills, and work ethic, come to us as a result of luck. We have no control over our genetic endowment or our environment, so however we end up is no merit of our own. Thus, a Rawlsian analysis would conclude that the estate tax is inherently just as a method of balancing inequalities that arise from the unequal distribution of personality assets.

Through this line of reasoning, Rawls explicitly rejects a meritocratic political philosophy. This assertion is highly controversial and came under fire from philosophical libertarians such as Robert Nozick as well as from critics of classically construed liberalism such as Alasdair MacIntyre and Michael Sandel. The overarching argument made by both sides is that it is nonsensical to say we could ever fully separate ourselves from our qualities. To say that there is some "me" that cannot be described by my intelligence, social skills, work ethic, sense of humor, interests, or background is an abstraction that tests the bounds of the meaning of "identity," and, in the opinion of Nozick, MacIntyre, and Sandel, shatters it.

To say that there is a "me" that has no claim over the fruits of my labor is incorrect. Rawls cannot be dismissed so easily, though. Just because an individual can be established to have some claim over her property does not mean she has a claim to all of her property at the expense of everyone else's. No matter what position of equality one begins in, a system of public infrastructure is necessary to provide the opportunity for individuals to flourish economically, socially, and politically, and that infrastructure requires public upkeep. This balance was well articulated by Senator Elizabeth Warren in a 2011 speech.



What does this mean for the estate tax? It means that it would be unjust to tell people that they have no claim to provide their children with some inheritance with which to be comfortable and launch their lives. But it would be equally unjust to say that some of that money cannot be used to help other children in the community do the same. No fortune is dug out of the mud by an individual alone. A meritocracy depends on a social infrastructure to function, and infrastructure needs taxes.

Wednesday, May 1, 2013

Guantanamo's Decision to Force-Feed Prisoners Recalls Age-Old Questions of Ethics

At this very moment, 100 prisoners at Guantanamo Bay are participating in a mass hunger strike. Of those 100 prisoners, 21 are so badly starved that medical authorities at the base have approved their force-feeding through nutritional supplement tubes that are run through the prisoners' noses.

This decision to force-feed Guantanamo Bay prisoners has not been without controversy, however. In particular, many American medical professionals have taken issue with the military's decision. Most prominent among them is Jeremy Lazarus, president of the American Medical Association, who wrote a letter to Secretary of Defense Chuck Hagel on the matter.

In the letter, he spelled out that the decision to force-feed Guantanamo Bay inmates "violates the core ethical values of the medical profession." He then quoted the World Medical Association Declaration of Tokyo:
Where a prisoner refuses nourishment and is considered by the physician as capable of forming an unimpaired and rational judgment concerning the consequences of such a voluntary refusal of nourishment, he or she shall not be fed artificially. The decision as to the capacity of the prisoner to form such a judgment should be confirmed by at least one other independent physician.
Lazarus and the professional medical community argue that force-feeding the prisoners in Guantanamo violates their right to self-determination through the refusal of treatment.

While some may see this situation as a question of law and some may see it as a question of medicine, what it really does is force us to engage in a philosophical exercise. In particular, the situation poses a serious question to us: Is inaction equivalent to action if consequences remain the same?

This question reinvigorates a classic philosophical disagreement: that between Immanuel Kant's categorical imperative and Jeremy Bentham's consequentialism. To spare you the details, Kant tends to focus on the content of an action, while Bentham tends to focus on its result. We see this dilemma every day. When a driver backs out and accidentally bumps into someone else, the person hit will say "look at what you did to my car!" He is upset about the result, or consequence, of the action. The driver who backed out says "I didn't see you there; I did not mean to hit your car!" The content of the action is innocent. In our legal system, we have to take a side, so the person who pulled out would be held legally responsible, but the pull of the importance of an action's content is still strong. The law can only be justified by saying the consequence was negative (the car was damaged) and the content was negative (the driver must have been negligent, so he must be at fault).

To bring this back to the force-feeding decision, mainstream medical practice allows doctors to restrain patients from inflicting self-harm. If a patient wants to take a knife to his own throat, then both the content and the consequence of the action are negative. Mainstream medical practice does not, however, permit doctors to force medical assistance on patients who refuse treatment, even if the refusal results in dire health problems or the death of the patient. In this way, mainstream medical practice is Kantian: it maintains that inaction and action with the same consequences should be treated differently. The military, on the other hand, makes no distinction between inaction and action if they result in the same health problems for the patient. Slitting my own throat has the same ultimate result as refusing medical attention if my jugular spontaneously ruptured; thus, a hunger strike is morally the same thing as suicide. In this respect, the military is utilitarian in its appraisal.
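To put the disagreement schematically (the notation is my own gloss, not Kant's or Bentham's): for an act \(A\), write \(c(A)\) for its content and \(r(A)\) for its result.

\[
\begin{aligned}
\text{Kantian appraisal:} \quad & \text{judge } A \text{ by } c(A), \text{ whatever } r(A) \text{ turns out to be;}\\
\text{Benthamite appraisal:} \quad & \text{judge } A \text{ by } r(A), \text{ whatever } c(A) \text{ was.}\\[6pt]
\text{Hunger strike } H, \text{ suicide } S: \quad & r(H) = r(S) \text{ (death)}, \qquad c(H) \neq c(S) \text{ (refusal vs. act)}.
\end{aligned}
\]

Medicine, weighing the content, treats the hunger strike and the suicide differently; the military, weighing the result, does not.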

Now if mainstream medical practice condoned active self-harm (such as suicide), then a different debate emerges, but it is one that is no less philosophical. There is further conversation to be had about what it means to make an "unimpaired and rational judgment" as well. Only by engaging with philosophy can we get a handle on what rationality looks like and how it could be impaired.

In the words of Michael Sandel, "to engage in...practice is already to stand in relation to theory." How we act depends on whether we decide to be rigorous about that theory or to be blind to how it affects our every decision as individuals and as a society.

Wednesday, March 27, 2013

Neighborhoods Should Have a Say in their Own Redevelopment

Despite the hard work done by community volunteers to improve their neighborhoods, they tend to come under fire for allegedly impeding economic growth. On Tuesday, Slate contributor Matthew Yglesias wrote an article about a frustrating experience with neighborhood activists in his community. It centered on a group advocating for a liquor license moratorium in their neighborhood. After reading his article, I thought it was only fair to give a perspective from the other side of this conflict.

I work as a community organizer in Omaha, Nebraska. One of the most successful and high-profile community action groups I have seen form in Omaha is the Alcohol Impact Coalition (AIC), a group of twelve prominent neighborhood associations working to increase neighborhood input on the issuance of liquor licenses in urban Omaha.

In 2010, a local Walgreens applied for a liquor license. Local residents deemed the establishment and its location unfit for liquor sales. The city government agreed and rejected the application. Despite the consensus between the neighborhood and the city government, the Nebraska Liquor Commission overruled the city's decision and granted the license. In response, the neighborhoods organized, forming the AIC and the Let Omaha Control its Alcohol Landscape (LOCAL) campaign. They have been pushing for legislation to allow neighborhoods to decide when an establishment poses a threat to community values, and they have made some preliminary progress on this front.

According to Yglesias, these citizens are exhibiting "NIMBY stupidity." This characterization is demeaning and deeply disrespectful to people who are advocating for a better community and neighborhood for everyone to enjoy. While his experience seems to deal with moratoriums on license issuance, he has not seen the other side of the conflict, where citizens who have no interest in the drastic path of a moratorium but just want some control over who can sell alcohol in their neighborhood are being pushed around by state officials and corporate interests.

I think we can all agree that a moratorium is a radical and excessive step, but there are communities across the country that don't mind a quiet bar involved in community affairs yet find worrisome a bar that doesn't respect the community it resides in. Yglesias's analysis smacks of classic narrow-minded gentrification rhetoric. Redevelopment in our country's urban cores is going to be a key aspect of economic and cultural change over the next few decades, but we cannot push economic growth forward at the sole expense of current residents. Residents need to be part of the redevelopment conversation, and the citizens who band together in reasonable, moderate groups like Omaha's AIC should not be belittled on account of the beliefs of radical moratorium advocates.

If there is anyone who should be front and center in redevelopment talks in a community, it is the community's current residents. Too often, government, business interests, and, yes, pundit economists say they know what is best for a community without consulting its members. As long as we continue to claim we know what's best for a community we are not a part of, it doesn't matter whether the redevelopers are public, private, or non-profit; they are missing the perspective their redevelopment plan needs most.

Friday, March 22, 2013

Capabilities and Ethical Choice

“I believe that for all of our imperfections, we are full of decency and goodness, and that the forces that divide us are not as strong as those that unite us.”

-President Barack Obama, January 12, 2011

In a recent essay, my friend Peter Hurford put forth an argument that continued his previous claim that there is no "rationally binding" code of ethics for human beings. His argument hinges on one major claim: that the differences between individuals are too great to yield any clear standard of morality for all. Since some aspects of our identities and personalities are idiosyncratic, the argument goes, we cannot establish any objective standard for rational moral behavior. This claim is not a new one, but it deserves attention for the sake of showing its flaws.

The claim that people vary too much to establish a rational morality fails in two main ways: by resting on oversimplification and by ignoring the strong empirical evidence available.

To begin with the problem of oversimplification, Hurford himself admits that there are many "shared goals" between human beings, examples of which are "eating and sleeping."1 Martha Nussbaum offers a longer list than this in her book Frontiers of Justice. In her innovative "capabilities approach," Nussbaum enumerates: life; bodily health; bodily integrity; senses, imagination, and thought; emotions; practical reason; affiliation; other species; play; and control over one's environment.

One can argue about what should make the list (even Nussbaum does), but to say that there is no such fundamental list of capabilities that every human being needs to flourish denies people their fundamental humanity. Without such qualities, it is not a stretch to say that we "cease to be human."

This leads nicely into Hurford's claim that knavery and sociopathy pose a challenge to morality derived from man qua man. Knavery, the word I am using here to refer to supposedly "immoral" acts whose carrying out brings satisfaction to the actor, fits very well into Nussbaum's capability theory. What is fun about playing the ruthless baron in a game of Monopoly? It satisfies man's need for "play," "practical reason," "imagination," and "control over one's environment." This can be extended, to a certain extent, to "troublemaking" of a more serious sort, such as fraternity prank-pulling or April Fools' jokes.

A counter to this claim could be that any human action can be traced back to some capability. But the capabilities cannot be handled in isolation. While a night out for drinks with friends can provide for the human need for affiliation, night after night of drinking can threaten the capability of bodily health. It can be generally acknowledged that April Fools' pranks can be had in all good fun while still holding that some pranks would not be appropriate for one reason or another, possibly relating to the damaging of emotional or affiliative ties. This also brings us to the example of the sociopath. The sociopath's crippled emotional capacity and inability to affiliate with others cut him off from basic human needs, thus holding him back from a moral life and a flourishing existence.

Further, Hurford claims that the diversity of human beings leaves mankind helpless to determine an objective morality. For instance,
Going to the nightclub might be great for one person’s needs and what they ought to do, but definitely not right for another person. One person might accomplish their goal to flourish and self-actualize through community service, whereas it might be a rather miserable experience for another person.
A diversity in the manner in which men exercise their capabilities does not refute the existence of the capabilities in the first place. People can rightly be more social or less social, more playful or more serious, but the man who is always alone or always serious shows a distinct lack of humanity and loses the opportunity to flourish qua man. This does not mean that every man needs to like baseball rather than hockey, or reading rather than gardening, in order to flourish. A world in which every man must act exactly the same would not be a world where people were responding to their natures. We must be aware of the differences between people, but not let them obscure the significance of our similarities.

The similarities uncovered by the capabilities approach have some strong empirical footing as well. An important essay by psychologists Ed Diener and Martin Seligman presents strong empirical data that sociability correlates very highly with happiness. The long-held tabula rasa view of development is unraveling, with new studies detecting the onset of moral thinking at earlier and earlier ages. On the macro level, control over the political process has provided example after example of better outcomes for democracies, such as the fact that no democracy in human history has ever experienced a famine.

As I have said in previous posts, one does not need to believe morality is a non-natural property to understand that morality is objective and rational. One needs simply to acknowledge that man can only understand the world and thus understand himself qua man. And yes, part of our humanity is our individuality, but our individuality does not eclipse our humanity. That which divides us is certainly not as strong as that which unites us.



1 It should be noted that there is some ambiguity with the use of the phrase "shared goals." For the sake of this conversation, we are not referring to goals that individuals wish to achieve together, but rather goals that separate individuals happen to have.

Wednesday, February 13, 2013

The Origin of Ethics

On Wednesday, my good friend Peter Hurford wrote a short post titled "Questions for Moral Realists" on his excellent blog. In this post, Hurford asserts that there are three categories of moral realism, with each category presupposing the previous one. The first category, "success theory," holds that moral statements point towards a real moral standard. Success theorists who believe that there is only one "true" moral standard fall into the second category: "unitary theory." And the unitary theorist who argues that the one true moral standard is the only path for a rational human being partakes in the third category, "absolutism theory."

As someone who sympathizes with claims of moral realism, I decided it would be valuable to offer my perspective on this discussion. In particular, I will try to answer Hurford's three questions from his post:

(1) Why is there only one particular morality?

(2) Where does morality come from?

(3) Why should we care about morality?

I will begin with (3). Ethics is the science of choice. The fundamental question of ethics is "what should one do?" This grounds ethics as both the most practical and the most fundamental philosophical field. While Descartes saw "do I exist?" as the fundamental philosophical question, the question that precedes his epistemological foundation is "should I concern myself with existence?"

The skeptic could take a step backwards and ask "do I not need to know whether I exist in order to ask myself what I should do in the first place?" One can see how the reasoning gets circular at this point. So the question that presents itself next is which is the more important question to consider: "what should I do?" or "do I exist?" I tend to think the pragmatic method needs to be turned to here. I can go on for years worrying about my own existence, but if I do in fact exist, then that is time wasted; and if I don't exist, then there was not much to gain from discovering such a fact in the first place.

In short, my answer to (3) is that morality is something that we have to concern ourselves with first if we are to take part in our lives as human beings.

I will now backtrack to hit (1) and (2). Since I am now asking the question "what should I do?", the question of who "I" am becomes essential. Ruling out metaphysically interesting but practically irrelevant "brain in a vat" theories, I can say with fair certainty that I am a man, but also that I am different from other men. So the question arises: what do I owe to that part of myself that is like others, and what do I owe to that part of myself that is unlike others?

When I think of how I am like other men, a lot of things come to mind. I share 99.9% of the genetic code of all other men. I need to eat, drink water, stay healthy, and I have a desire to flourish. This last piece is probably the most important of them all.

While I may like different sports and spend my free time differently than others, I share that desire to make my life worthwhile. Though I was strongly resistant to this strand of moral philosophy until less than a year ago, I now find it hard to deny that morality has an evolutionary basis.

If there is anything that has given human beings the ability to persist as a species, it is our complex system of collaboration. The faculty of language and our capacity to develop society and hierarchy have given us a leg up in evolutionary competition, expanding the species from some 2,000 individuals in western Africa to over 7 billion people across the world. This has occurred because human beings have rules that allow them to collaborate with and respect one another within cultural systems. The ability to problem-solve together on a hunt, to specialize as architects, engineers, managers, and electricians to build a house, to replicate a European system of agriculture in America: these are the social tools that have fueled our survival, and they are built on an evolved sense of duty, respect, and collaboration. Thus, my answer to (2): morality is an evolved concept that occurs within a species.

The reason I think this does not qualify as a case of the Is-Ought Fallacy will also provide an answer to (1). The question "What should I do?" has a specifically transcendental nature to it. This is because no one can ask the question "what should I do?" besides a human being, so who we are has a necessary bearing on what we should do. This is why Nussbaum's capabilities approach makes so much sense. As a human being, I have certain human needs. Those provide a constraint on my menu of actions that can allow me to flourish.

As for the categories of realism listed above, I would have to say that the above take on the origins and nature of morality partakes in all three: success, unitary, and absolutism. Morality is a real concept that applies to all human beings and is a rational end for human flourishing. Is it a real concept outside of human existence? No. Does that fact have any bearing on the fundamental question "what should I do?" The answer to this is also "no." Morality is not "absolute" in a way that transcends humanity, but man cannot transcend humanity, and therefore morality is practically absolute. And in the realm of practice, that is the only thing that matters.