Human Perception and Randomness

When we began choosing topics for our final projects, I knew I wanted to find a math topic related to human behavior. I stumbled across Mr. Anderson's copy of The Drunkard's Walk: How Randomness Rules Our Lives and immediately dove in. Looking back at my many pages of notes on the bestseller by Leonard Mlodinow, the subjects vary widely, from in-depth statistics to psychology and mathematics. I decided to focus this post and project on the illusions, errors, and false beliefs we as humans create in our perception of the world around us.

First Examination:

I wanted to run a class experiment showcasing the false confidence we have in our senses and perceptions, and decided on the classic Coke vs. Pepsi taste test. Under the ruse of "data collection with enjoyment," I asked my classmates which of the two sodas they preferred, and then asked whether their preference had changed after a taste test. I poured them a sample of both from their labeled bottles, they tasted, and they answered. Almost all said their preference hadn't changed and pointed to their "favorite" of the two. However, earlier that day I'd siphoned and switched the two bottles; the stated Coke was really Pepsi and vice versa. Here is the data from the experiment:

[Screenshot: results of the class taste test]


Only two students "kept" their preference through the experiment. Despite extreme confidence from certain individuals, most people's perception of the drinks was predominantly psychological; on raw taste alone, they were unable to differentiate between the two.

These results replicate what hundreds of other in-depth experiments and studies have shown: we believe we can judge what is better and what is worse, when in reality it is our psychology, not our senses, making the decisions. A few examples of these studies include:

Stephen King's pseudonym. At the start of his career, Stephen King's publishers told him he could only publish one novel a year so as not to "over-saturate" the King brand, so he decided to publish more books under the pen name Richard Bachman. He has said he was also curious whether his success was determined by luck or talent. The first book published as Bachman sold 28,000 copies, obviously not a small number, but when King was revealed as the real author the number jumped to over 250,000.

The New York Times blind vodka taste test. It ranked Smirnoff (the cheapest) #1, while much pricier bottles like Grey Goose and Ketel One didn't even make the top 10.

The Sunday Times of London book test. The paper typed up the first two chapters of two novels that had won the Booker Prize and sent them to 20 publishers and agents under the guise of an aspiring author. Nineteen of the twenty rejected the unrecognized books.

The story of rejection for best-selling authors is a common one, much like that of struggling actors, musicians, or anyone else in a cultural market. Experts in each field, armed with profit incentives and marketing research, are routinely wrong in predicting the next big hit, though in hindsight it is easy to point out the characteristics and reasons a product was successful. Do these experts need more data? Are they simply not skilled enough at their jobs?

In the 1960s, a meteorologist named Edward Lorenz used some of the first computer programs to model future weather patterns from the recorded conditions at a starting time. He ran his program and received data representing the weather patterns at the given end time. Afterwards, he decided he wanted to run the program further, but instead of backtracking and having the computer redo the entire calculation, he restarted it halfway to the original end point as a shortcut. He expected the simulation to reproduce the same results up to the original end point and then continue beyond it. Instead, he noticed that by the original end time the simulation had already diverged wildly. Investigating why, he discovered that the computer's printout of data after the first trial was rounded to 3 decimal places, while the computer's memory stored 6. For example, a number printed as .293 was really .293416. A pretty small difference, wouldn't you agree? Even modern weather satellites only measure data to 2 or 3 decimal places; the difference between .293 and .293416 is impossible for the satellite computers to register, let alone a human meteorologist. Amazingly, these tiny differences have an enormous effect on weather models. The phenomenon has been dubbed "The Butterfly Effect."
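To see how a rounding error that small can snowball, here is a minimal sketch in Python. This is my own illustration of the idea, not Lorenz's actual weather program: it integrates the Lorenz equations twice, once from a "full precision" starting value and once from the same value rounded to three decimal places, and prints how far apart the two runs drift.

```python
# A toy demonstration of sensitive dependence on initial conditions using the
# Lorenz equations. An illustration of the idea, not Lorenz's original model.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one small step with simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two starting points: the value held in memory, and the same value
# rounded to three decimal places (like Lorenz's printout).
a = np.array([1.293416, 1.0, 1.0])
b = np.array([1.293, 1.0, 1.0])

for step in range(3001):
    if step % 500 == 0:
        print(f"step {step:4d}  separation = {np.linalg.norm(a - b):.6f}")
    a = lorenz_step(a)
    b = lorenz_step(b)
```

The two runs start out separated by only 0.000416 and track each other for a while, then drift until they are no more alike than two unrelated forecasts, much like what Lorenz saw on his printouts.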

The double-rod pendulum is another example: a system with only five variables (the lengths of the two rods, the masses at the end of each rod, and the starting position). Not only does it showcase the perceived randomness of a simple system, but minute changes in those variables have large effects on the model. Here's a link to a simulation: http://www.tapdancinggoats.com/double-pendulum-simulation.htm

If weather patterns and models are complex, our lives and society are infinitely more so. Humans add the complexity of "irrationality" to any model; the butterfly effect, while invisible to us, plays out every day. The unseen consequences of a second alarm snooze, a second cup of coffee, a skipped shower, or literally any other small variation in our lives affect our future and the world around us, yet we as humans ignore that fact. We try to control our future through our actions and find meaning where it doesn't exist, looking for perceived patterns in the past to anticipate the future. "20/20 hindsight" comes to mind.

The second class experiment was meant to showcase our search for pattern and meaning in random events. I told each subject I'd be drawing cards, either red or green, from a random deck. The deck's order was actually kept constant: GRGGRRGGGR. Before each draw, I asked the subject which color they thought I would draw next. Here are the results:

[Screenshot: table of card-prediction results]
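Since the results live in a screenshot, here is a small sketch of the comparison the experiment gets at. It is my own illustration in Python, not data from the class: against that fixed deck, a guesser who simply always picks the more common color does better than one who tries to mirror the deck's 60/40 split by "finding the pattern."

```python
# Compare two guessing strategies against the fixed deck used in the experiment.
# Purely illustrative; the real results came from classmates, not this simulation.
import random

deck = list("GRGGRRGGGR")               # the constant deck order
p_green = deck.count("G") / len(deck)   # 0.6

def always_green():
    return "G"                          # always guess the majority color

def match_the_split():
    return "G" if random.random() < p_green else "R"   # guess green 60% of the time

def accuracy(strategy, passes=10_000):
    """Average fraction of correct guesses over many passes through the deck."""
    hits = 0
    for _ in range(passes):
        for card in deck:
            hits += (strategy() == card)
    return hits / (passes * len(deck))

print("always guess green:  ", accuracy(always_green))    # exactly 0.60
print("mirror the 60/40 mix:", accuracy(match_the_split))  # about 0.52
```

Hunting for the pattern feels smarter, but against a sequence you cannot actually predict, it loses to the boring strategy of always picking the more frequent color.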


Randomness, while hard to humanize, rules our lives.

Second Examination:

The prevalence of the normal distribution in the world around us, and its many applications, deserves another investigation in this post. Perhaps the most common application of the normal distribution in our lives is in academics, in grading. "Grading on a curve" is a term we've all heard before, netting us a few points here and there, or, in the case of the Chemistry exam, taking a few away. Instead of basing grades on points accrued out of a maximum, curve grading bases each student's grade on the performance of the class as a whole.
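As a concrete example of what "basing each grade on the class as a whole" can look like, here is one common curving scheme sketched in Python: convert each raw score to a z-score against the class mean and standard deviation, then map it onto a target scale. The score list and the target mean and spread below are made up for illustration; this is not necessarily how the Chemistry exam was curved.

```python
# One common way to "grade on a curve": rescale each raw score by how many
# standard deviations it sits from the class mean. All numbers here are made up.
import statistics

raw_scores = [62, 71, 55, 80, 68, 74, 59, 90, 66, 73]

class_mean = statistics.mean(raw_scores)
class_stdev = statistics.stdev(raw_scores)

target_mean, target_stdev = 75, 10        # where the teacher wants the curve centered

for raw in sorted(raw_scores):
    z = (raw - class_mean) / class_stdev        # position on the class's bell curve
    curved = target_mean + z * target_stdev     # same position on the target curve
    print(f"raw {raw:3d} -> z = {z:+.2f} -> curved {curved:5.1f}")
```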

The normal distribution is very prevalent in the world around us; examples include biological measurements like height, weight, and nail and tooth length, measurements of water flow in a river, crime in society, sports scores, car crashes, and many more. Bell curves help us visualize and analyze these data sets to find the average and standard deviation, in some cases identify fraud, and make accurate predictions.

Ex: Plinko board simulator
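The original post embedded a Plinko (Galton board) simulator here; as a stand-in, the sketch below simulates one in Python. Each ball bounces left or right at every peg with equal probability, so the bin it lands in follows a binomial distribution, and the bins fill out a bell curve.

```python
# A text-mode Plinko / Galton board: each ball takes `rows` fifty-fifty bounces,
# and the histogram of final bins traces out a bell curve.
import random
from collections import Counter

rows, balls = 12, 10_000
bins = Counter(sum(random.choice((0, 1)) for _ in range(rows)) for _ in range(balls))

for bin_index in range(rows + 1):
    count = bins.get(bin_index, 0)
    print(f"bin {bin_index:2d} | {count:5d} {'#' * (count // 50)}")
```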


The graphs below were generated using the numbers in single rows of Pascal's Triangle, illustrating the distribution of the sizes of those numbers. As you can see, the sizes of the entries in a row of Pascal's Triangle are approximately normally distributed; the bell curve is simply a row of Pascal's Triangle drawn in bar-graph form.

[Bar graphs of C(4,k), C(16,k), C(36,k), C(64,k), and C(100,k)]
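Here is a sketch of how graphs like those could be generated. This is my own code, assuming matplotlib is available, not the original author's: take a single row of Pascal's Triangle, i.e. the binomial coefficients C(n, k), and draw the entries as a bar chart.

```python
# Bar graphs of single rows of Pascal's Triangle: the entries C(n, k) for k = 0..n.
# As n grows, the bars trace out the familiar bell curve.
from math import comb
import matplotlib.pyplot as plt

rows = [4, 16, 36, 64, 100]
fig, axes = plt.subplots(1, len(rows), figsize=(15, 3))

for ax, n in zip(axes, rows):
    ks = range(n + 1)
    ax.bar(ks, [comb(n, k) for k in ks])
    ax.set_title(f"C({n}, k)")

plt.tight_layout()
plt.show()
```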

Suppose there were a poll in which each participant is 50% likely to like or dislike a candidate. If "N" people are polled, the chance of each possible number of supporters is proportional to the corresponding entry on line "N" of Pascal's Triangle, and the higher the "N," the lower the margin of error. Say the poll occurred, with a sample size of 1,000 participants, before and after a convention; there would be roughly a 3.5% margin of error. If the percentage of people who would vote for the candidate increased from 50% beforehand to 52% afterwards, would the results be indicative of anything, or newsworthy? In the United States media they are: a 2% "bump" like that was reported after George W. Bush's convention, even though the change is, by every definition of the word, meaningless.
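As a rough check on those numbers, here is the standard 95% margin-of-error estimate for a yes/no poll, computed in Python. This is the textbook formula and my own back-of-the-envelope arithmetic, not a calculation quoted from the book.

```python
# The usual 95% margin-of-error estimate for a poll where each answer is a coin flip.
# With n = 1000 it comes out around 3 points, in the same ballpark as the 3.5% above.
from math import sqrt

n, p = 1000, 0.5
margin = 1.96 * sqrt(p * (1 - p) / n)     # roughly 0.031

print(f"95% margin of error with n = {n}: +/- {margin * 100:.1f} percentage points")
```

A swing from 50% to 52% sits comfortably inside that margin, which is exactly the point.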

In most polls and scientific studies, a margin of error of 5% is unacceptable; the difference in financial and real-world consequences between 1% and 6% is huge. Yet in our own lives we make judgments and form opinions from far fewer data points than 1,000. When we make observations, they are points on a bell curve, but we don't know where on that curve they fall. Keep that in mind in your life; don't build a construct for yourself and lock your head around it. Have an open mind, and don't accept everything as reality, because so much of this world is random.

James DeCunzo
