Thursday, March 27, 2014

215. Bracketology hurts NCAA

The NCAA Basketball Tournament, also known as “March Madness,” is a favorite among sports fans. With its “one and done” format, upsets, office pools and last-second shots, the tournament is one of the most exciting sporting events of the year. It starts on the cusp of spring and takes sports fans to Opening Day in baseball.

For me, as a sports fan, it has provided some special moments over the years. As both a Duke and Ohio State fan, I can easily recall many magical moments—from Duke’s upset of UNLV in 1991 and Christian Laettner’s famous last-second shot to Ohio State’s disappointing loss to Michigan’s Fab Five in 1992.

The purpose of the tournament, of course, is to crown a national champion. The tournament invites 68 teams to participate, and the winner usually must win at least six games in a row. And as exciting as it is, one question worth asking is whether this is the fairest way to determine the national champion.

Sports differ in the way they determine a national champion. Of major contention is the NCAA football championship, which was determined by the controversial and unpopular BCS system. Other leagues, like professional hockey, permit many teams into the playoffs—meanwhile professional baseball only recently added wild card teams.

The NFL, due to the physical demands of the sport, plays one game in which the winner advances toward the Super Bowl. The NBA and MLB often play best of seven series to determine the better team.

In determining a champion, regardless of the sport, there are two issues to be considered: which teams are permitted into the playoffs and how do those teams advance within the playoffs.

The general perception is that teams need to earn their way into the playoffs. The majority of games are played in the regular season, and the teams that perform the best are selected for the playoffs. The regular season provides the largest sample of a team’s performance. Over the course of the season, in which everybody often plays everybody, at least within their own division or conference, the aim is to determine who deserves the opportunity to play for a championship.

The playoffs are often regarded as being more intense, with the best against the best—conference/division champions facing off against one another.

March Madness fails on both fronts. First, it permits far too many teams into the tournament; second, the win-or-go-home format increases the odds that more deserving teams, based on regular-season play, may stumble.

If you replayed the regular season, it stands to reason that the same team would win the regular season championship a majority of the time. But most conferences have a postseason tournament which permits all of the teams to play for a conference championship—and gain a berth in March Madness. The regular season, it turns out, only determines the seeding. On many occasions, the regular season champion does not win the conference tournament (and in some conferences, that loss keeps the regular season champion out of March Madness entirely).

Consider last year: Liberty University finished the regular season with a conference record of 6-10, tied for ninth place in a twelve-team league. They won their conference tournament and were declared conference champion. For the others in their conference, the regular season meant nothing.

If March Madness were replayed 100 times, each time with slightly different seeding, you might get dozens of different outcomes. Consider if Michigan, which lost in the championship game last year, had been the number four seed in Louisville’s regional (seeding is somewhat arbitrary)—the two teams would have met in the round of 16 and not in the national championship game. How differently their seasons would have been judged.
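The replay thought experiment can be made concrete with a toy simulation. This is only an illustrative sketch, not a model of real tournament odds: the 64-team single-elimination bracket and the flat 30% upset probability per game are invented assumptions.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def play_bracket(seeds, upset_prob=0.3):
    """Play a single-elimination bracket; the better (lower) seed wins
    each game unless an upset occurs with probability upset_prob."""
    field = list(seeds)
    while len(field) > 1:
        winners = []
        for i in range(0, len(field), 2):
            better, worse = sorted((field[i], field[i + 1]))
            winners.append(worse if random.random() < upset_prob else better)
        field = winners
    return field[0]

# Replay a 64-team bracket 100 times and count the distinct champions.
champions = [play_bracket(range(64)) for _ in range(100)]
print(len(set(champions)))  # many different teams end up cutting down the nets
```

With different seeds or upset probabilities the exact count shifts, but the point stands: even a modest per-game upset chance means single-elimination play crowns many different champions across replays.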

A superior team always wants a larger sample size, and that is why the regular season should determine the conference champion. And only those conference champions should be invited to the “Big Dance.” It is ridiculous that a team which finishes ninth in its conference is rewarded, based on one hot streak, with the chance to win a national championship.

The conference tournaments, and the NCAA Championship, are not optimally designed to determine a deserving champion. They are designed around money and excitement. Their popularity is based on the love of underdogs and sudden-death tournament play.

But championships should not be arranged for the amusement of the fans, or the chance to win a billion dollars. “Bracketology” is sports talk out of control. And while the champion is usually a pretty good team, and likely deserving on its own merits—six wins in a row usually sorts out the fortunate—letting undeserving teams in the tournament distorts the intent of it all.

March Madness is such a fun and popular event that its shortcomings are often overlooked. With the best players leaving after a year and the decreased disparity between the best and worst teams—the games are closer than ever. It is a fun event; it is just not the fairest to determine a champion.

Thursday, January 30, 2014

214. Crony capitalism increases inequality

I don’t find myself agreeing with the Pope, or any other religious leader, on many issues other than a social and moral obligation to help others. And, in particular, I certainly have my reservations about many positions of the Catholic Church and the lack of accountability within its leadership over the last couple of decades.

However, I appreciate some of the values of Pope Francis—and I was particularly impressed with his assessment of the current state of our capitalistic system. His assessment recognizes that there are serious concerns with capitalism—the most significant concern being the increasing disparity in wealth.

True capitalism requires that society operates on a level-playing field. This means several things including an equal opportunity to enter the market and that everyone in the market play by the rules. The privileged enjoy significant advantages in terms of education, networking and capital. The connected are able to negotiate the political and legislative fields to create market advantages or secure corporate welfare benefits.

It’s like playing the game Monopoly with someone who owns half the board, has a large amount of cash and assumes the ability to change the rules—before the game even starts.

Cheaters ruin it for everyone and only create more of an incentive to cheat. Cheating also eventually invites government regulation. Winners viciously compete for more market share by eliminating competition and making it more difficult for others to break into their markets. Enough is never enough, and the winners enjoy exponential growth while the working class falls further and further behind.

Pope Francis called this the “idolatry of money” at the expense of "dignified work, education and healthcare."

Putting this in proper perspective, Pope Francis asks, "How can it be that it is not a news item when an elderly homeless person dies of exposure, but it is news when the stock market loses 2 points?"

President Obama piggybacked on the Pope’s comments and addressed the economic and societal consequences of a society so mired in an inequitable distribution of wealth.

"The combined trends of increased inequality and decreasing mobility pose a fundamental threat to the American dream, our way of life, and what we stand for around the globe," Obama said.

Robert Reich, who served under three presidents, connects the dots that have led to the economic disparity in our capitalistic system. These include the reality that:

almost all economic growth over the last three decades has gone to the top
political power flows to the top
corporations and the very rich pay lower taxes and receive more corporate welfare
government budgets are increasingly squeezed
average Americans are competing with one another for slices of a shrinking pie

The deck is stacked, and those who have “made it” sometimes overestimate the work they did and the obstacles they overcame, and underestimate the “breaks” they had along the way. This isn’t meant to be a sweeping generalization—many people work very, very hard and deserve everything they have. However, many other people have worked very, very hard and not succeeded. Not everyone can make it—it takes talent, hard work and, often, good fortune.

Thus the lie of capitalism is that everyone who works hard will be successful. People who have made it often say, “If I can do it, so can you,” or “you just have to believe” or “never give up.” It’s good advice; however, it’s a statistical inaccuracy. There are really only a few ways to become very wealthy: work in a professional field (actor, doctor, lawyer, athlete, etc.), grow money through financial investment (which requires money to invest), succeed as an entrepreneur (which usually requires investment and the labor of others), or receive an inheritance or lottery windfall.

Obama summarized in his own way, "It's rooted in the nagging sense that no matter how hard they work, the deck is stacked against them. And it's rooted in the fear that their kids won't be better off than they were.”

A timely example was the recently released statistics detailing the extraordinary gains made by billionaires in 2013. Warren Buffett led the list by increasing his wealth by $12.7 billion—that’s over $30 million per day! For the very wealthy, capitalistic growth is exponential, a simple concept that escapes most who defend its principles. The top 1% earns money more easily and faster—and continues to own a ridiculous 35% of the country’s total wealth.

What Buffett gained this year would have paid over 250,000 individuals a $50,000 salary last year. I doubt that Buffett “worked” any harder than he did last year. His gains are simply the result of exponential growth in investment—on the backs of millions of corporate employees (who often struggle to make it on their salaries and live in fear of layoffs).
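The arithmetic behind those two claims is easy to check; here is a quick sketch using the figures as cited in the column:

```python
# Figures as cited in the column (2013, in dollars).
buffett_gain = 12_700_000_000  # reported increase in Buffett's wealth
salary = 50_000                # the benchmark salary used above

per_day = buffett_gain / 365   # the annual gain expressed as a daily rate
jobs = buffett_gain / salary   # how many such salaries the gain would cover

print(round(per_day / 1_000_000, 1))  # ≈ 34.8, i.e. over $30 million per day
print(int(jobs))                      # 254000, i.e. over 250,000 salaries
```

Both of the column’s round numbers (“over $30 million per day,” “over 250,000 individuals”) are therefore conservative.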

Income inequality is an inherent, eventual result of capitalism. Some of it is by design: those who “deserve” more get more—but the exponential difference is the result of the “unfettered capitalism” the Pope described.

The frustration is boiling over, yet many continue to miss the point—deeply dedicated to defending capitalism. Corporations, CEOs and their shareholders continue to get a pass—and we keep electing the politicians who serve their interests.

Reich writes, “Native-born Americans are threatened by new immigrants; private sector workers are resentful of public employees; non-unionized workers are threatened by the unionized; middle class Americans are competing with the poor. Rather than feel that we’re in it together, we increasingly have the sense that each of us is on his or her own.”

We need a better system, a fairer system—a “modified capitalism.”

Unfortunately, there are really only a couple of mechanisms that will inspire greater wealth equality—and it starts with compassion and a demand on our political leadership.

"I beg the Lord to grant us more politicians who are genuinely disturbed by the state of society, the people, the lives of the poor," Pope Francis said.

Unfortunately, I think the Pope is right—it’ll take nothing less than divine intervention to make things right.

Thursday, January 2, 2014

213. Do any of us really have free will?

Working in association with the addiction field, and dealing with my own attraction to food, I have often thought about the concept of free will. Does free will really exist, and, if so, on what level? Addiction attacks the concept of free will: it separates mental and physical urges from their consequences, and it divides those who conquer their addiction from those who remain enslaved to it. Whatever the case, it is surely more than just a matter of having a little “will power.”

I recently enjoyed Sam Harris’s book on free will. Less than 100 pages long, Harris, who has a Ph.D. in neuroscience and is best known for his book, The End of Faith, makes the argument that free will is an illusion. From neuroscience specifics such as the milliseconds between the brain’s activities and when an individual actually makes a decision to the ramifications of free will from religious, moral and political perspectives, Free Will certainly inspires thought on the issue.

Free will is perhaps generally defined as the conscious ability to control one’s actions, and most individuals believe they have it. Distinctly, there is a difference between the ability to make a conscious decision—that is, to overcome biological programming or environmental influence—and the ability to do what may be physically impractical or impossible.

So the question is: What do we really have control over? Our genes are assigned to us, as are our parents. Most of our experiences, particularly when we are young, are chosen for us. The environment we grow up in, not only our socio-economic background but also our religious and cultural upbringing, defines and shapes our thought processes. Can we change our perspectives, or are they already ingrained in us—or if we can change our perspectives, is it precisely because of our experiences?

Harris suggests, “. . .  the idea that as conscious beings we are deeply responsible for the character of our mental lives and subsequent behavior is simply impossible to map into reality. Consider what it would take to actually have free will. You would need to be aware of all the factors that determine your thoughts and actions and you would need to have complete control over those factors.”

“Choices, efforts, intentions and reasoning influence our behavior—but they are themselves part of a chain of causes that precede conscious awareness and over which we exert no ultimate control. My choices matter—and there are paths toward making wiser ones—but I cannot choose what I choose,” Harris writes.

And for those who think they “could have done things differently,” Harris explains that this is “. . .  an empty affirmation. It confuses hope for the future with an honest account of the past.”

This premise is, however, different from what simply happens to us, and from the fatalistic premise of divine fate—or that “everything happens for a reason.” Of course, if one believes the latter, that things “happen for a reason,” then one should probably also believe in the lack of free will—for those “reasons” are defined in advance and not conveniently postulated or assigned after the fact.

Obviously, equipped with an outcome, particularly a poor outcome, one can either admit that he or she made the wrong decision or declare it to have been determined. But knowing the outcome, and the idea that one “would do things differently,” does not suggest free will—for at that same moment in one’s life, with the knowledge available at the time, no other decision could have been made (unless perhaps we’re bumbling around in parallel universes).

Of considerable debate is the impact of free will on religion and morality. Of particular importance is the treatment of those who break our moral and societal codes. If free will is an illusion, then the question is what level of responsibility should be attached to an individual who breaks the law. Harris provides a continuum of situations for accountability—depending on age, socio-economics and biology. Although the volitional acts are identical, society often acknowledges the circumstances of the situation and the perceived amount of free will involved.

Free will is also a critical concept and dividing line in politics. “Liberals tend to understand that a person can be lucky or unlucky in all matters relevant to his success. Conservatives, however, make a religious fetish of individualism. Many seem to have absolutely no awareness of how fortunate one must be to succeed at anything in life, no matter how hard one works,” Harris surmises.

The concept of free will will remain controversial across a number of disciplines. Its implications for society are far-reaching and divisive—its circular reasoning, and the attempt to separate the conscious from the unconscious, is difficult to conceptualize. Fortunately, you are free to draw your own conclusions—or are you?

Sunday, December 1, 2013

212. Our baggage is easy to confirm with biases

While intuitively applicable, the idea of confirmation bias is an important social concept. The phenomenon affects our perspective on many important areas of society—such as religion, socioeconomics, politics and philosophy.

Defined succinctly enough in Wikipedia, “Confirmation bias is a tendency of people to favor information that confirms their beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs.”

While any bias is potentially detrimental to objective reasoning, confirmation bias delivers more than a distorted perspective—it feeds our most treasured and relied-upon beliefs. I have often argued that one of the most uncomfortable things to do is to challenge one’s own beliefs. It is difficult not only because of its emotional and moral elements, but also because the amount of evidence required to inspire objective thought is often insurmountable.

There is no shortage of issues that may be influenced by confirmation biases—and there is no shortage of sources willing to provide that confirmation. There is, of course, personal experience, which lines up neatly with anecdotal evidence and our social groups. There are also media sources—newspaper, radio and television—as well as our institutions and their leaders, such as religions, political parties and “think tanks.” The government may provide confirmation of biases through its laws and economic systems. Finally, we have corporate and academic confirmation through research and reports.

For those looking to have their beliefs confirmed, odds are you’ll find someone or something to support them. It may not even be intentional; one might try to be objective, but there is a natural filtering toward those beliefs we hold dear.

Consider the belief that people on government assistance are lazy and take advantage of the system—costing hardworking taxpayers millions.

There is anecdotal evidence, often in conversation, or on social media—and it usually goes something like, “I was in line at the grocery store and this woman in front of me had the newest iPhone, professionally manicured nails, bought two cartons of cigarettes and then paid for her groceries with food stamps.” Media confirmation is easily found on Fox News, or any number of conservative radio shows that profile lower social classes. The Tea Party finds outrage over almost any government support and provides consistent confirmation that the taxpayers are being robbed by freeloaders. The government proceeds with a capitalistic perspective, which despite the social programs promotes the value of being educated and working hard—and getting people off of government assistance. And, of course, one can easily find a report, from maybe the Heritage Foundation, which supports one’s views that government social programs are full of waste and details the cost of individuals who fraudulently receive aid.

If this is your perspective, you’ll have no trouble finding ways to confirm it. You’ll nod your head in agreement—probably proud of finding yourself on the right side of an issue.

Of course, those who believe that individuals on government assistance genuinely need help through a tough time in their life, and society has a moral obligation to provide reasonable aid, will also find their beliefs confirmed. They’ll talk about the single mom working two low-paying jobs trying to raise her three kids. They’ll attack capitalism through examples of corporate greed and job outsourcing—and the growing wealth inequality. Maybe they’ll even embrace the social aspect of religion and its teachings of helping the poor. And surely proponents will find reports minimizing the perceived waste and fraud in government systems.

With the confirmation of beliefs so readily accessible to both sides—it’s easy to understand why many beliefs do not change.

While none of us are immune from our biases—it’s a part of who we are based on our life experiences and perspective—we can hope, at least, to sort out the most convincing arguments. That is, we need to try to relieve ourselves of our own biases and fairly evaluate the issue and evidence as though we are hearing about it for the first time.

The criterion should be a preponderance of the evidence, not belief sustained until disproven beyond a reasonable doubt. The evidence should be weighed equally, or plotted on a bell curve to attach statistical significance. We need to view it as an impartial judge would, not like someone who has a horse in the race.