Beckoning

In my opinion, it is well worth studying ethics and metaethics to answer questions like “What is morality?” and “Now that I have a strong case for what morality is, am I bound by it?”  Evolutionary psychology and philosophy, among other disciplines, can help answer these questions.

One aspect of ethical inquiry that I believe is widely under-appreciated is answering the question: “What is my brain’s current algorithm for determining how I feel about moral issues?”  Most moral inquiry, it seems, focuses on why we have our current beliefs, whether they are right or wrong (or whether there is such a thing as a right or wrong moral belief), and what we should do about them.  But for now, put those questions aside.  I want to focus on the seemingly much simpler question of what exactly our current moral beliefs are.

When we encounter a stimulus with moral weight (say, someone is murdered, or someone makes a breakthrough in cancer research), our brains quickly execute an algorithm that produces a moral emotion as output.  This algorithm can be very complicated.  It almost certainly isn’t as simple as “Moral approval if it promotes the greatest good, moral disgust otherwise”, or “Moral approval if it abides by the rules of this old book, moral disgust otherwise”, or “Moral approval if my community feels moral approval when this happens, moral disgust otherwise”.  If your algorithm really is this simple, I have nothing to say and would recommend you focus your moral inquiry in the more standard directions.
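
For concreteness, here is a toy sketch of what the first of those one-rule algorithms would look like if it were actual code.  Everything here is invented for illustration — the names, the stub predicate standing in for the enormously hard judgment the rule requires — and the point is only to make vivid how implausibly simple such an algorithm would be:

```python
from enum import Enum, auto

class MoralEmotion(Enum):
    APPROVAL = auto()
    DISGUST = auto()

def promotes_greatest_good(stimulus: str) -> bool:
    # Stub: stands in for the (enormously hard) judgment the rule
    # actually requires. Invented for this example only.
    return stimulus == "cancer research breakthrough"

def naive_moral_algorithm(stimulus: str) -> MoralEmotion:
    # The one-rule caricature: approval if the stimulus promotes the
    # greatest good, disgust otherwise.
    if promotes_greatest_good(stimulus):
        return MoralEmotion.APPROVAL
    return MoralEmotion.DISGUST

print(naive_moral_algorithm("cancer research breakthrough"))  # MoralEmotion.APPROVAL
print(naive_moral_algorithm("murder"))                        # MoralEmotion.DISGUST
```

Real moral responses, of course, depend on far more than a single conditional; the rest of this post is about probing what that richer algorithm actually is.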

We can test our moral algorithm by considering hypothetical situations and imagining how we would feel.  We have a long history of categorizing those results, and we can usually predict correctly how we would feel in a hypothetical situation.  In addition, our moral beliefs strongly inform our moral algorithm.  But they cannot fully determine it, just as beliefs about what makes you happy cannot fully determine what actually makes you happy.  Like all emotions, most of the workings of the function that maps stimuli to the experience of the emotion are unconscious.  Think of how difficult it is to determine the algorithm underlying similar emotions, like happiness, desire, satisfaction, and love.  We are notoriously bad at knowing what makes us happy, what we want, what will give us satisfaction in life, and what will spark our feelings of love.

I recently had an epiphany about the nature of my moral algorithm.  For me, morality is in large part a beckoning.  It is a desire for everyone in the world to appreciate beauty with me.  It is “Hey!  Come look at this awesome thing!”  Whatever moves the world towards this end, like ending the suffering caused by poverty and disease, feels morally good to me.  And whatever opposes it feels morally bad.

My sense of beckoning is not a belief.  It is a desire and a preference.  I also happen to believe that people should have access to beauty and freedom from suffering.  But what that belief means, and whether it binds my actions, is another matter entirely.

What can you learn about your moral algorithm?  Put aside your beliefs, put aside any statements with the word “should” or “ought”.  Think on the level of preferences.  What is your moral algorithm like?
