Showing posts with label science. Show all posts

Sunday, August 25, 2019

A Rough Night

I had an incredibly rough night last night. In the early morning, I woke up and had the terrible feeling that I wasn't alone. I felt someone or something was in the room with me, even in the bed with me, though I knew I was the only one there. Over several excruciating moments, I began to feel I was being haunted or even possessed by something. I woke up this morning unbelievably anxious and sore in every muscle in my body. It seems last night I was the victim... of sleep paralysis.

Sleep paralysis is an interesting, and quite terrifying, phenomenon. What happens is that you wake up while still in REM sleep. Dreams intertwine with reality and can cause experiences such as hallucinations (auditory, visual, even olfactory), intense emotions (such as fear and dread), inability to move (the paralysis your body imposes during REM to keep you from acting out your dreams carries over into this semi-wakeful state), and muscle soreness. Though sleep paralysis is more common among people who already have some form of sleep disturbance, such as insomnia, it can happen to anyone. It's been theorized that many so-called paranormal experiences are actually cases of sleep paralysis.

There's a great documentary on sleep paralysis I highly recommend if you'd like to learn more:



Has anything like this ever happened to you? Feel free to share in the comments!

Tuesday, March 12, 2019

Are Likert Scales Superior to Yes/No? Maybe

I stumbled upon this great post from the Personality Interest Group and Espresso (PIG-E) blog about which is better - Likert scales (such as those 5-point Agree to Disagree scales you often see) or Yes/No (see also True/False). First, they polled people on Twitter: 66% of respondents thought that going from a 7-point to a 2-point scale would decrease reliability on a Big Five personality measure; 71% thought that move would decrease validity. But then things got interesting:
Before I could dig into my data vault, M. Brent Donnellan (MBD) popped up on the twitter thread and forwarded amazingly ideal data for putting the scale option question to the test. He’d collected a lot of data varying the number of scale options from 7 points all the way down to 2 points using the BFI2. He also asked a few questions that could be used as interesting criterion-related validity tests including gender, self-esteem, life satisfaction and age. The sample consisted of folks from a Qualtrics panel with approximately 215 people per group.

Here are the average internal consistencies (i.e., coefficient alphas) for 2-point (Agree/Disagree), 3-point, 5-point, and 7-point scales:

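If you're curious how those alphas come about, coefficient alpha is straightforward to compute from item-level responses. Here's a minimal sketch in Python with made-up 2-point data (not the post's data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items[0])
    item_vars = [variance(col) for col in zip(*items)]
    total_var = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Made-up 2-point (0/1) responses: 6 respondents x 4 items
responses = [
    (1, 1, 1, 0),
    (0, 0, 1, 0),
    (1, 1, 1, 1),
    (0, 1, 0, 0),
    (1, 1, 1, 1),
    (0, 0, 0, 0),
]
print(round(cronbach_alpha(responses), 3))  # → 0.839
```

The same formula applies whether items have 2 or 7 response options; only the item variances change.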
And here's what they found in terms of validity evidence - the correlation between the BFI2 and another Big Five measure, the Mini-IPIP:


FYI, when I'm examining item independence in scales I'm creating or supporting, I often use 0.7 as a cut-off - that is, items that correlate at 0.7 or higher (meaning 49% shared variance) are essentially measuring the same thing and violate the assumption of independence. The fact that all but Agreeableness correlates at or above 0.7 is pretty strong evidence that the scales, regardless of number of response options, are measuring the same thing.
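That rule of thumb is easy to sketch in code. The correlations below are hypothetical stand-ins for illustration, not the values from the PIG-E post:

```python
# Shared variance between two measures is the squared correlation (r^2),
# so r = 0.7 corresponds to 0.49, i.e., 49% shared variance.
cutoff = 0.7

# Hypothetical BFI2 vs. Mini-IPIP trait correlations (not the post's values):
trait_correlations = {
    "Extraversion": 0.80,
    "Agreeableness": 0.65,
    "Conscientiousness": 0.75,
    "Neuroticism": 0.78,
    "Openness": 0.72,
}

for trait, r in trait_correlations.items():
    verdict = "likely the same construct" if r >= cutoff else "below the cutoff"
    print(f"{trait}: r = {r:.2f}, shared variance = {r ** 2:.0%}, {verdict}")
```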

The post includes a discussion of these issues by personality researchers, and includes some interesting information not just on number of response options, but also on the Big Five personality traits.

Thursday, February 21, 2019

Replicating Research and "Peeking" at Data

Today on one of my new favorite blogs, EJ Wagenmakers dissects a recent interview with Elizabeth Loftus on when it is okay to peek at data being collected:
Claim 4: I should not feel guilty when I peek at data as it is being collected

This is the most interesting claim, and one with the largest practical repercussions. I agree with Loftus here. It is perfectly sound methodological practice to peek at data as it is being collected. Specifically, guilt-free peeking is possible if the research is exploratory (and this is made unambiguously clear in the published report). If the research is confirmatory, then peeking is still perfectly acceptable, just as long as the peeking does not influence the sampling plan. But even that is allowed as long as one employs either a frequentist sequential analysis or a Bayesian analysis (e.g., Rouder, 2014; we have a manuscript in preparation that provides five intuitions for this general rule). The only kind of peeking that should cause sleepless nights is when the experiment is designed as a confirmatory test, the peeking affects the sampling plan, the analysis is frequentist, and the sampling plan is disregarded in the analysis and misrepresented in the published report. This unfortunate combination invokes what is known as “sampling to a foregone conclusion”, and it invalidates the reported statistical inference.
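The "sampling to a foregone conclusion" problem is easy to demonstrate with a toy simulation (my own sketch, not from Wagenmakers' post). When both groups are drawn from the same distribution, any "significant" result is a false positive, and naively peeking after every batch of data, stopping at the first p < .05, inflates the false positive rate well beyond the nominal 5%:

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value from a z-test on the difference in means
    (normal approximation; reasonable for 20+ observations per group)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(peek, n_sims=2000, batches=10, batch_size=20,
                        alpha=0.05, seed=1):
    """Both groups come from the same distribution (true null), so any
    'significant' result is a false positive. With peek=True we test after
    every batch and stop at the first p < alpha; with peek=False we test
    only once, after all the data are in."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a, b = [], []
        for i in range(batches):
            a += [rng.gauss(0, 1) for _ in range(batch_size)]
            b += [rng.gauss(0, 1) for _ in range(batch_size)]
            if peek or i == batches - 1:
                if two_sample_p(a, b) < alpha:
                    hits += 1
                    break
    return hits / n_sims

print(false_positive_rate(peek=False))  # close to the nominal 0.05
print(false_positive_rate(peek=True))   # well above 0.05
```

A sequential design that plans for the peeking (or a Bayesian analysis, as Wagenmakers notes) keeps the error rate under control; it's the unplanned, unreported peeking that invalidates the inference.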
Loftus also has many opinions on replicating research, which may in part be driven by the fact that recent replications have not been able to recreate some of the major findings in social psychology. Wagenmakers shares his thoughts on that as well:
I believe that we have a duty towards our students to confirm that the work presented in our textbooks is in fact reliable (see also Bakker et al., 2013). Sometimes, even when hundreds of studies have been conducted on a particular phenomenon, the effect turns out to be surprisingly elusive — but only after the methodological screws have been turned. That said, it can be more productive to replicate a later study instead of the original, particularly when that later study removes a confound, is better designed, and is generally accepted as prototypical.
The whole post is worth a read and also has a response from Loftus at the end.

Friday, January 25, 2019

Natural Graph

Via Not Awful and Boring, this reddit post discusses a really cool natural graph, measuring the amount of sunlight per day, created with a tree and a magnifying glass:


Apparently, this device is a Campbell-Stokes recorder.

Thursday, October 4, 2018

Resistance is Futile

In yet another instance of science imitating science fiction, scientists figured out how to create a human hive mind:
A team from the University of Washington (UW) and Carnegie Mellon University has developed a system, known as BrainNet, which allows three people to communicate with one another using only the power of their brain, according to a paper published on the pre-print server arXiv.

In the experiments, two participants (the senders) were fitted with electrodes on the scalp to detect and record their own brainwaves—patterns of electrical activity in the brain—using a method known as electroencephalography (EEG). The third participant (the receiver) was fitted with electrodes which enabled them to receive and read brainwaves from the two senders via a technique called transcranial magnetic stimulation (TMS).

The trio were asked to collaborate using brain-to-brain interactions to solve a task that each of them individually would not be able to complete. The task involved a simplified Tetris-style game in which the players had to decide whether or not to rotate a shape by 180 degrees in order to correctly fill a gap in a line at the bottom of the computer screen.

All of the participants watched the game, although the receiver was in charge of executing the action. The catch is that the receiver was not able to see the bottom half of their screen, so they had to rely on information sent by the two senders using only their minds in order to play.

This system is the first successful demonstration of a “multi-person, non-invasive, direct, brain-to-brain interaction for solving a task,” according to the researchers. There is no reason, they argue, that BrainNet could not be expanded to include as many people as desired, opening up a raft of possibilities for the future.
Pretty cool, but...

Tuesday, September 18, 2018

I've Got a Bad Feeling About This

In the 1993 film Jurassic Park, scientist Ian Malcolm expressed serious concern about John Hammond's decision to breed hybrid dinosaurs for his theme park. As Malcolm says in the movie, "No, hold on. This isn't some species that was obliterated by deforestation, or the building of a dam. Dinosaurs had their shot, and nature selected them for extinction."

This movie was, and still is (so far), science fiction. But a team of Russian scientists is working to make something similar into scientific fact:
Long extinct cave lions may be about to rise from their icy graves and prowl once more alongside woolly mammoths and ancient horses in a real life Jurassic Park.

In less than 10 years it is hoped the fearsome big cats will be released from an underground lab as part of a remarkable plan to populate a remote spot in Russia with Ice Age animals cloned from preserved DNA.

Experiments are already underway to create the lions and also extinct ancient horses found in Yakutia, Siberia, seen as a prelude to restoring the mammoth.

Regional leader Aisen Nikolaev forecast that co-operation between Russian, South Korean and Japanese scientists will see the “miracle” return of woolly mammoths inside ten years.
Jurassic Park is certainly not the only example of fiction exploring the implications of man "playing god." Many works of literature, like Frankenstein, The Island of Doctor Moreau, and more recent examples like Lullaby (one of my favorites), have examined this very topic. It never ends well.

By Mauricio Antón - from Caitlin Sedwick (1 April 2008). "What Killed the Woolly Mammoth?". PLoS Biology 6 (4): e99. DOI:10.1371/journal.pbio.0060099., CC BY 2.5, Link

Saturday, September 15, 2018

Walter Mischel Passes Away

Walter Mischel was an important figure in the history of psychology. His famous "marshmallow study" is still cited and picked apart today. Earlier this week, he passed away from pancreatic cancer. He was 88:
Walter Mischel, whose studies of delayed gratification in young children clarified the importance of self-control in human development, and whose work led to a broad reconsideration of how personality is understood, died on Wednesday at his home in Manhattan. He was 88.

Dr. Mischel was probably best known for the marshmallow test, which challenged children to wait before eating a treat. That test and others like it grew in part out of Dr. Mischel’s deepening frustration with the predominant personality models of the mid-20th century.

“The proposed approach to personality psychology,” he concluded, “recognizes that a person’s behavior changes the situations of his life as well as being changed by them.”

In other words, categorizing people as a collection of traits was too crude to reliably predict behavior, or capture who they are. Dr. Mischel proposed an “If … then” approach to assessing personality, in which a person’s instincts and makeup interact with what’s happening moment to moment, as in: If that waiter ignores me one more time, I’m talking to the manager. Or: If I can make my case in a small group, I’ll do it then, rather than in front of the whole class.

In the late 1980s, decades after the first experiments were done, Dr. Mischel and two co-authors followed up with about 100 parents whose children had participated in the original studies. They found a striking, if preliminary, correlation: The preschoolers who could put off eating the treat tended to have higher SAT scores, and were better adjusted emotionally on some measures, than those who had given in quickly to temptation.

Walter Mischel was born on Feb. 22, 1930, in Vienna, the second of two sons of Salomon Mischel, a businessman, and Lola Lea (Schreck) Mischel, who ran the household. The family fled the Nazis in 1938 and, after stops in London and Los Angeles, settled in the Bensonhurst section of Brooklyn in 1940.

After graduating from New Utrecht High School as valedictorian, Walter completed a bachelor’s degree in psychology at New York University and, in 1956, a Ph.D. from Ohio State University.

He joined the Harvard faculty in 1962, at a time of growing political and intellectual dissent, soon to be inflamed in the psychology department by Timothy Leary and Richard Alpert (a.k.a. Baba Ram Dass), avatars of the era of turning on, tuning in and dropping out.

This is a great loss for the field of psychology. But his legacy will live on.

Friday, June 22, 2018

Thanks for Reading!

As I've been blogging more about statistics, R, and research in general, I've been trying to increase my online presence, sharing my blog posts in groups of like-minded people. Those efforts seem to have paid off, based on my view counts over the past year:


And based on read counts, here are my top 10 blog posts, most of which are stats-related:
  1. Beautiful Asymmetry - none of us is symmetrical, and that's okay 
  2. Statistical Sins: Stepwise Regression - just step away from stepwise regression
  3. Statistics Sunday: What Are Degrees of Freedom? (Part 1) - and read Part 2 here
  4. Working with Your Facebook Data in R
  5. Statistics Sunday: Free Data Science and Statistics Resources
  6. Statistics Sunday: What is Bootstrapping?
  7. Statistical Sins: Know Your Variables (A Confession) - we all make mistakes, but we should learn from them
  8. Statistical Sins: Not Making it Fun (A Thinly Veiled Excuse to Post a Bunch of XKCD Cartoons) - the subtitle says it all
  9. Statistics Sunday: Taylor Swift vs. Lorde - Analyzing Song Lyrics - analyzing song lyrics is my jam
  10. How Has Taylor Swift's Word Choice Changed Over Time? - ditto
It's so nice to see people are enjoying the posts, even sharing them and reaching out with additional thoughts and questions. Thanks, readers!

Tuesday, June 19, 2018

Purchases and Happiness

I've heard it said before that it's better to spend your money on experiences than on material things. While I appreciate the sentiment - memories last longer than stuff - something about that statement has always bothered me, and I couldn't put my finger on what. But a recent study in Psychological Science, "Experiential or Material Purchases? Social Class Determines Purchase Happiness," helped shed some light on when that sentiment might not be true.

The study is a meta-analysis of past research, as well as a report of 3 additional studies performed by the authors, to determine whether social class determines happiness with experiential versus material purchases. Their hypothesis was that experiences are more valuable to people with higher socioeconomic status - that is, because their material needs have been met, they can focus on higher needs - while material purchases would be more valuable to people with lower socioeconomic status - people who may be struggling with basic needs like food and clothing. They confirmed their hypothesis, not only when examining participants' actual SES, but also when it was experimentally manipulated, by asking participants to imagine their monthly income had been increased or decreased. As they sum up in the article:
We argue that social class influences purchase happiness because resource abundance focuses people on internal states and goals, such as self-development, self-expression, and the pursuit of uniqueness (Kraus et al., 2012; Stephens et al., 2012; Stephens et al., 2007), whereas resource deprivation orients people toward resource management and spending money wisely (Fernbach et al., 2015; Van Boven & Gilovich, 2003). These fundamentally different value orientations translate into different purchase motives held by people from higher and lower classes (Lee, Priester, Hall, & Wood, 2018).

Thursday, May 3, 2018

The Pillars of Creation

About 7,000 light years away, in the Eagle Nebula, are the Pillars of Creation, a collection of clouds of gas and dust. Inside the pillars, gravity works to pull the gas onto new and existing stars, while heat from internal and external stars expels and evaporates the gas. The picture (actually a composite of many pictures) of the pillars taken in 1995 is perhaps one of the most famous images of objects in space:

Eagle nebula pillars

In 2007, NASA released the results of a study suggesting the Pillars of Creation had already been toppled: a star went supernova about 8,000 years ago, and the blast would have destroyed the pillars over the following 2,000 years - that is, about 6,000 years ago. Because the pillars are 7,000 light years away, we won't be able to see that they've been destroyed for another 1,000 years.
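The light-travel bookkeeping checks out, using the round numbers above:

```python
distance_ly = 7000           # distance to the Eagle Nebula, in light years
supernova_years_ago = 8000   # when the star reportedly went supernova
destruction_span = 2000      # years the blast took to topple the pillars

destroyed_years_ago = supernova_years_ago - destruction_span  # 6000
# Light showing the destruction left the nebula 6,000 years ago and needs
# 7,000 years to reach us, so it arrives in:
years_until_visible = distance_ly - destroyed_years_ago
print(years_until_visible)  # → 1000
```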

However, an article by Ethan Siegel suggests the pillars have not been destroyed after all:
At 7,000 light years away, the Eagle Nebula is one of the night sky’s most accessible and spectacular nebulae. It was discovered back in 1745, and shortly thereafter was recognized to be an active star-forming region, as the surefire signature of ionized hydrogen was seen in abundance. A large cluster of newborn stars can be found inside, consisting of over 8,000 stars, which is the primary cause of the nebula’s shape.

[I]n 2015, to celebrate Hubble’s 25th anniversary in space, NASA revisited these pillars, and the 20 year baseline between the original 1995 image and the new 2015 one provided insights that strongly refuted the already-destroyed pillars theory.

The 20-year follow-up showcased not only features that couldn’t be seen before, such as additional details, greater wavelength coverage, and a larger field of view. But the greatest and most important advance is the fact that the 20-year baseline allowed us to view changes over time. In the tip of the largest pillar, for example, we were able to not only identify an ejected jet, but to track the extent of its changes. With the incredible resolution of Hubble, we could determine that the size of it, over that additional time, expanded by an extra 100 billion kilometers: 1000 times the Earth-Sun distance, meaning that the stream is moving at 200 km/s.

Moreover, the best evidence for changes comes at the base of the pillars, indicating an evaporation time on the order of between 100,000 and 1,000,000 years. The idea that the pillars have already been destroyed has been demonstrated not to be true. It’s one of the great hopes of science that any controversial claims will be laid to rest by more and better data, and this is one situation where that has paid off in spades. Not only has there not been a supernova that’s in the process of destroying the pillars, but the pillars themselves should be robust for a long time to come.
Here's how the pillars looked in the 2015 photo:

 

Time to witness another great instance of birth/rebirth - the editing of a Wikipedia article. When I checked just now, the nebula article still states the pillars have been destroyed. Let's see how long it takes for that to change:

Friday, March 23, 2018

Psychology for Writers: Your Happiness Set-Point

A few years ago, a former colleague was chatting with someone about the work she did with Veterans who had experienced a spinal cord injury. The person she was chatting with talked about how miserable she would be if that happened to her, even implying that she'd rather be dead than have a spinal cord injury. Many of us who have worked with people experiencing trauma have probably had similar conversations. People believe they would never be able to be happy again if they experienced a life-changing event like a traumatic injury.

On the other hand, we've also heard people talk about how unbelievably happy they would be if they won the lottery or came into a great deal of money in some way.

But you might be surprised to know that researchers have been able to collect data from people both before and after these types of events - simply because they recruited a large number of people for a study, and some people in the sample happened to experience one of these life-changing events. And you might be even more surprised to know that people were generally wrong about how they would feel after these events. People who had experienced a traumatic injury had a dip in happiness but returned to approximately the same place they were before. And people who won the lottery had a brief lift in happiness, also followed by a return to baseline.

These findings offer support for what is known as the set-point theory of happiness. According to this theory, people have a happiness baseline and while events may move them up or down in terms of happiness, they'll eventually return to baseline. Situationally influenced emotions are, for the most part, temporary. You might be sad about that injury, or breakup, or financial problem, or you might be elated about that promotion, or lottery win, or new relationship for a little while, but eventually, you revert to your usual level of happiness. Everyone has their own level. Some people are happier on average than others.
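One toy way to picture set-point theory (my own illustrative model, not drawn from the research) is as an exponential return to baseline: each period, the gap between your current happiness and your set point shrinks by some fraction.

```python
def happiness_over_time(set_point, shock, months, recovery_rate=0.5):
    """Toy set-point model (illustrative only): each month, the gap
    between current happiness and the set point shrinks by recovery_rate."""
    level = set_point + shock  # e.g., lottery win (+) or injury (-)
    trajectory = [level]
    for _ in range(months):
        level = set_point + (level - set_point) * (1 - recovery_rate)
        trajectory.append(level)
    return trajectory

# A lottery win (+3) and an injury (-3) against a set point of 6 on a 0-10 scale:
print([round(h, 2) for h in happiness_over_time(6, +3, 6)])
print([round(h, 2) for h in happiness_over_time(6, -3, 6)])
```

In this sketch, both trajectories end within a twentieth of a point of the set point after six months; only a change to the set point itself, not the shock, moves the long-run level.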

This theory also explains why people who are prone to depression need to seek some kind of treatment, often in the form of therapy and medication - interventions aimed at raising the baseline level of happiness itself. Money or love or new opportunities may help in the short run, but the baseline is what needs to be targeted. Of course, events can shift a person's baseline - an experience may change their way of looking at the world, for better or worse. But events that only affect mood, without affecting thought processes or reactions, are unlikely to have any lasting effects.

This is important to keep in mind when writing your characters. Situations and events can obviously push their current mood around. But for a change to be permanent, it has to do more than simply make the person happy or sad - it has to change their mindset. A divorce might make a person sad. A divorce that changes how secure a person feels in relationships or leads them to distrust others might result in a permanent change. A lottery win might make a person happy. A lottery win that helps them become financially independent, get out of a bad situation, and completely change their way of life might result in a permanent change.

Think about how your character is changing and why, to make sure it's a believable permanent change and not just a temporary happiness shift. And if it's just temporary, know that it's completely believable for your character to work back to baseline on his or her own.

Thursday, March 22, 2018

Science Fiction Meets Science Fact Meets Legal Standards

Any fan of science fiction is probably familiar with the Three Laws of Robotics developed by prolific science fiction author, Isaac Asimov:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
It's an interesting thought experiment on how we would handle artificial intelligence that could potentially hurt people. But now, with increased capability and use of AI, it's no longer a thought experiment - it's something we need to consider seriously:
Here’s a curious question: Imagine it is the year 2023 and self-driving cars are finally navigating our city streets. For the first time one of them has hit and killed a pedestrian, with huge media coverage. A high-profile lawsuit is likely, but what laws should apply?

At the heart of this debate is whether an AI system could be held criminally liable for its actions.

[Gabriel] Hallevy [at Ono Academic College in Israel] explores three scenarios that could apply to AI systems.

The first, known as perpetrator via another, applies when an offense has been committed by a mentally deficient person or animal, who is therefore deemed to be innocent. But anybody who has instructed the mentally deficient person or animal can be held criminally liable. For example, a dog owner who instructed the animal to attack another individual.

The second scenario, known as natural probable consequence, occurs when the ordinary actions of an AI system might be used inappropriately to perform a criminal act. The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.

The third scenario is direct liability, and this requires both an action and an intent. An action is straightforward to prove if the AI system takes an action that results in a criminal act or fails to take an action when there is a duty to act.

Then there is the issue of defense. If an AI system can be criminally liable, what defense might it use? Could a program that is malfunctioning claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim defenses similar to coercion or intoxication?

Finally, there is the issue of punishment. Who or what would be punished for an offense for which an AI system was directly liable, and what form would this punishment take? For the moment, there are no answers to these questions.

But criminal liability may not apply, in which case the matter would have to be settled with civil law. Then a crucial question will be whether an AI system is a service or a product. If it is a product, then product design legislation would apply based on a warranty, for example. If it is a service, then the tort of negligence applies.
Here's the problem with those three laws: in order to follow them, an AI must recognize someone as human and be able to differentiate between human and not human. In the article, they discuss a case in which a robot killed a man in a factory because he was in the way. As far as the AI was concerned, something was in the way and kept it from doing its job, so it removed that barrier. It didn't know that barrier was human, because it wasn't programmed to know that. So it isn't as easy as putting a three-laws straitjacket on our AI.

 

Wednesday, March 14, 2018

Statistical Sins: Not Creating a Codebook

I'm currently preparing for Blogging A-to-Z. It's almost a month away, but I've picked a topic that will be fun but challenging, and I want to get as many posts written early as I can. I also have a busy April lined up, so writing posts during that month would be a challenge even if I had picked an easier topic.

I decided to pull out some data I collected for my Facebook study to demonstrate an analysis technique. I knew right away where the full dataset was stored, since I keep a copy in my backup online drive. This study used a long online survey, which comprised several published measures. I was going through it, identifying the variables associated with each measure, and trying to take stock of which ones needed to be reverse-scored, as well as which ones belonged to subscales.

I couldn't find that information in my backup folder, but I knew exactly which measures I used, so I downloaded the articles from which those measures were drawn. As I was going through one of the measures, I realized that I couldn't match up my variables with the items as listed. The variable names didn't easily match up and it looked like I had presented the items within the measure in a different order than they were listed in the article.

Why? I have no idea. I thought for a minute that past Sara was trolling me.

I went through the measure, trying to match up the variables, which I had named as an abbreviated version of the scale name followed by a "keyword" from the item text. But the keywords didn't always match up to any item in the list. Did I use synonyms? A different (newer) version of the measure? Was I drunk when I analyzed these data?

I frantically began digging through all of my computer folders, online folders, and email messages, desperate to find something that could shed light on my variables. Thank the statistical gods, I found a codebook I had created shortly after completing the study, back when I was much more organized (i.e., had more spare time). It's a simple codebook, but man, did it solve all of my dataset problems. Here's a screenshot of one of the pages:


As you can see, it's just a simple Word document with a table that gives Variable Name, the original text of the item, the rating scale used for that item, and finally what scale (and subscale) it belongs to and whether it should be reverse-scored (noted with "R," under subscale). This page displays items from the Ten-Item Personality Measure.

Sadly, I'm not sure I'd take the time to do something like this now, which is a crime, because I could very easily run into this problem again - where I have no idea how/why I ordered my variables and no way to easily piece the original source material together. And as I've pointed out before, sometimes when I'm analyzing in a hurry, I don't keep well-labeled code showing how I computed different variables.

But all of this is very important to keep track of, and should go in a study codebook. At the very least, I would recommend keeping one copy of surveys that have annotations (source, scale/subscale, and whether reverse-coded - information you wouldn't want to be on the copy your participants see) and code/syntax for all analyses. Even if your annotations are a bunch of Word comment bubbles and your code/syntax is just a bunch of commands with no additional description, you'll be a lot better off than I was with only the raw data.
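A codebook like the one described above doesn't have to live in a Word table, either; it can be built as a simple data structure and written out alongside the dataset. Here's a sketch in Python, with hypothetical variable names and items:

```python
import csv

# Hypothetical entries mirroring the Word-table columns described above
# (variable name, item text, rating scale, scale/subscale, reverse-scored):
codebook = [
    {"variable": "tipi_extraverted",
     "item": "I see myself as extraverted, enthusiastic.",
     "rating_scale": "1-7, Disagree strongly to Agree strongly",
     "subscale": "TIPI Extraversion",
     "reverse": False},
    {"variable": "tipi_quiet",
     "item": "I see myself as reserved, quiet.",
     "rating_scale": "1-7, Disagree strongly to Agree strongly",
     "subscale": "TIPI Extraversion",
     "reverse": True},
]

with open("codebook.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=codebook[0].keys())
    writer.writeheader()
    writer.writerows(codebook)
```

A plain CSV like this lives happily next to the raw data and can be read back into R or Python when it's time to score the scales.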

I recently learned there's an R package that will create a formatted codebook from your dataset. I'll do some research into that package and have a post about it, hopefully soon.

And I sincerely apologize to past Sara for thinking she was trolling me. Lucky for me, she won't read this post. Unless, of course, O'Reilly Auto Parts really starts selling this product.

Thursday, March 8, 2018

The Art of Conversation

There are many human capabilities we take for granted, until we try to create artificial intelligence intended to mimic them. Even an unbelievably simple conversation requires attention to context and nuance, and an ability to improvise, capacities that are almost inherently human. For more on this fascinating topic, check out this article from The Paris Review, in which Mariana Lin, writer and poet, discusses creative writing for AI:
If the highest goal in crafting dialogue for a fictional character is to capture the character’s truth, then the highest goal in crafting dialogue for AI is to capture not just the robot’s truth but also the truth of every human conversation.

Absurdity and non sequiturs fill our lives, and our speech. They’re multiplied when people from different backgrounds and perspectives converse. So perhaps we should reconsider the hard logic behind most machine intelligence for dialogue. There is something quintessentially human about nonsensical conversations.

Of course, it is very satisfying to have a statement understood and a task completed by AI (thanks, Siri/Alexa/cyber-bot, for saying good morning, turning on my lamp, and scheduling my appointment). But this is a known-needs-met satisfaction. After initial delight, it will take on the shallow comfort of a latte on repeat order every morning. These functional conversations don’t inspire us in the way unusual conversations might. The unexpected, illumed speech of poetry, literature, these otherworldly universes, bring us an unknown-needs-met satisfaction. And an unknown-needs-met satisfaction is the miracle of art at its best.
Not only does she question how we can use the essence of human conversation to reshape AI, she questions how AI could reshape our use of language:
The reality is most human communication these days occurs via technology, and with it comes a fiber-optic reduction, a binary flattening. A five-dimensional conversation and its undulating, ethereal pacing is reduced to something functional, driven, impatient. The American poet Richard Hugo said, in the midcentury, “Once language exists only to convey information, it is dying.”

I wonder if meandering, gentle, odd human-to-human conversations will fall by the wayside as transactional human-to-machine conversations advance. As we continue to interact with technological personalities, will these types of conversations rewire the way our minds hold conversation and eventually shape the way we speak with each other?

Tuesday, March 6, 2018

Today in "Evidence for the Dunning-Kruger Effect"

A new study shows that watching videos of people performing some skill can result in the illusion of skill acquisition, adding yet more evidence to the "how hard can it be?" mindset outlined in the Dunning-Kruger effect:
Although people may have good intentions when trying to learn by watching others, we explored unforeseen consequences of doing so: When people repeatedly watch others perform before ever attempting the skill themselves, they may overestimate the degree to which they can perform the skill, which is what we call an illusion of skill acquisition. This phenomenon is potentially important, because perceptions of learning likely guide choices about what skills to attempt and when.

In six experiments, we explored this hypothesis. First, we tested whether repeatedly watching others increases viewers’ belief that they can perform the skill themselves (Experiment 1). Next, we tested whether these perceptions are mistaken: Mere watching may not translate into better actual performance (Experiments 2–4). Finally, we tested mechanisms. Watching may inflate perceived learning because viewers believe that they have gained sufficient insight from tracking the performer’s actions alone (Experiment 5); conversely, experiencing a “taste” of the performance should attenuate the effect if it is indeed driven by the experiential gap between seeing and doing (Experiment 6).
In the experiments, participants watched videos of the tablecloth trick (pulling a tablecloth off a table without disturbing the dishes; experiments 1 and 5), throwing darts (experiment 2), doing the moonwalk (experiment 3), mirror-tracing (tracing a path through a maze displayed at the top of the screen in a blank box just below it; experiment 4), and juggling bowling pins (experiment 6). Through their research, the authors isolated the missing element in learning by watching - feeling the actual performance of the task. In the sixth experiment, simply getting a taste of the feelings involved - holding the pins that would be used in juggling, without attempting to juggle - was enough to lower participants' ratings of their skill acquisition.

During the Olympics, when you watch athletes at the top of their game performing tasks almost effortlessly, it's easy to think those tasks aren't as challenging as they actually are. Based on these study results, even having people simply put on a pair of ice skates or stand on a snowboard might be enough for them to realize just how difficult these sports actually are.

Just to help put things into perspective, here's a supercut of awesome stunts followed by a person demonstrating why you should not try them at home:

Tuesday, February 13, 2018

A Work of Art

NASA just released some absolutely breathtaking images of Jupiter, taken by the Juno Spacecraft.




Obviously, the images have been processed to accentuate different elements. You can view the images sent back and do some editing of your own by visiting the JunoCam site. This last one reminds me of a particular work of art:

Tuesday, February 6, 2018

New History of Psychology Book to Check Out

One of my favorite topics is History of Psychology. In every psychology class I teach, I spend part of the first lecture giving students historical background on the field or subfield, even if that information isn't discussed in the text. I love tracing the background and showing how our current position is a product of, or reaction to, everything that came before it.

So I'm always excited to learn about a new History of Psychology book to check out:


William James is responsible for bringing the field of psychology to the United States and is considered the founder of the functionalist school of thought. One of the early debates in the field of psychology was structuralism vs. functionalism: structuralists tried to break consciousness down into definable components (the structures of the mind), while functionalists viewed consciousness as an active adaptation to one's environment, resulting from complex interactions (focusing on the function of the mind rather than its individual components). So you can think of these schools as a trees vs. forest distinction.

You are probably also familiar with James's brother, novelist Henry James (who wrote The Portrait of a Lady and The Wings of the Dove), and possibly his sister, Alice James (who suffered from lifelong mental illness and published her diaries on the topic).

James is well-known for his two-volume Principles of Psychology (which is public domain and can be found here). This new book serves as a companion piece, helping to place James's work in its historical context.

If only I hadn't made a New Year's Resolution to purchase no books...

Monday, February 5, 2018

Afternoon Reading

I'm currently working from home, meaning I'm sitting on my couch, watching the snow falling as I write this. I'll be driving into the city later this afternoon. For now, I'm happy to be in my warm apartment.

Here's what I'm reading this afternoon:

Friday, January 5, 2018

Teamwork and the Reproducibility Problem

It has been known for some time that psychology has a reproducibility problem, though we may not always agree on how to handle or discuss these issues. I remember chatting with another researcher at a conference shortly after I finished my master's thesis on stereotype threat and its impact on math performance in women. I had failed to replicate stereotype threat effects in my study. She, on the other hand, said her effects were incredibly strong; she described a participant experiencing a panic attack when told she had to do math problems, and said she had even noticed her female participants' math performance suffering when her research assistant knitted during a session. (I also remember a reviewer telling me I must have performed the study poorly - not because the reviewer found any flaws in my methods, but because I had failed to reproduce the stereotype threat effects in my research.)

Efforts to handle this crisis thus far have included making psychological research more transparent and large-scale meta-analyses. And a new effort is already underway to harness the power of multiple research labs across the world: the Psychological Science Accelerator. Christie Aschwanden of FiveThirtyEight has more:
[Psychologist Christopher] Chartier, a researcher at Ashland University, doesn’t think massively scaled group projects should only be the domain of physicists. So he’s starting the “Psychological Science Accelerator,” which has a simple idea behind it: Psychological studies will take place simultaneously at multiple labs around the globe. Through these collaborations, the research will produce much bigger data sets with a far more diverse pool of study subjects than if it were done in just one place.

The accelerator approach eliminates two problems that can contribute to psychology’s much-discussed reproducibility problem, the finding that some studies aren’t replicated in subsequent studies. It removes both small sample sizes and the so-called weird samples problem, which is what happens when studies rely on a very particular population — like relatively wealthy college students from Western countries — that may not represent the world at large.

So far, the project has enlisted 183 labs on six continents. The idea is to create a standing network of researchers who are available to consider and potentially take part in study proposals, Chartier said. Not every lab has to participate in any given study, but having so many teams in the network ensures that approved studies will have multiple labs conducting their research.
According to the blog, the Psychological Science Accelerator is taking on its second study, this one on gendered social category representation. And if you're attending the Association for Psychological Science meeting in May, you can check out a symposium on "Large Scale Research Collaborations: Applications in Crowd-Sourcing and Undergraduate Research Experience, Replications, and Cross-Cultural Research." (Day and time TBD - APS is still finalizing the program, and is still accepting poster submissions through the end of this month.)

Wednesday, January 3, 2018

Statistical Sins: Junk Science

This isn't exactly a statistical sin, but it's probably one of the worst sins against science - buying into garbage that, even worse than being of no help, might actually kill you. It's a sign of how comfortable many in our society have become, being free from worry about life-threatening illnesses, that they begin to wonder if the things that are keeping us alive and healthy are of any use at all.

We've seen this happening for a while with vaccinations. And now, it's happening with water:
In San Francisco, "unfiltered, untreated, unsterilized spring water" from a company called Live Water is selling for up to $61 for a 2.5-gallon jug — and it's flying off the shelves, The New York Times reported.

Startups dedicated to untreated water are also gaining steam. Zero Mass Water, which allows people to collect water from the atmosphere near their homes, has already raised $24 million in venture capital, the report says.

However, food-safety experts say there is no evidence that untreated water is better for you. In fact, they say that drinking untreated water could be dangerous.

"Almost everything conceivable that can make you sick can be found in water," one such expert, Bill Marler, told Business Insider. That includes bacteria that can cause diseases or infections such as cholera, E. coli, hepatitis A, and giardia.
In a world where 884 million people do not have access to clean water, rich people in California (and elsewhere) are paying hundreds of dollars for water that could make them sick or even kill them. Perhaps the most telling quote from the article is this one, from Bill Marler:
"You can't stop consenting adults from being stupid," Marler said. "But we should at least try."
In fact, there are a variety of explanations for why people might buy into such junk science: not only the comfort of never having to worry about a cholera epidemic or seeing firsthand the complications of polio, but also the use of vague euphemisms like "raw water." A high price tag can also signal quality in consumers' minds. I remember hearing a story (possibly apocryphal) about Häagen-Dazs: the ice cream originally sold for less under a more generic name, but when the company adopted the name Häagen-Dazs and raised the price, it started flying off the shelves. Obviously, getting a celebrity or someone of influence on board can also help something take off.

Still, it's fascinating to me how some of this junk science proliferates. The pattern of diffusion for innovations is well-known, and while not every innovation takes off (like the Dvorak keyboard), innovations by their nature make our lives better or easier, and the ones that take off are likely the ones with the best marketing. But junk science is absolutely not an innovation; in some cases, it makes our lives worse or harder. How, then, do we explain some of the nonsense that continues to influence people's behavior? What sorts of outcomes does it take before people see the error of their ways?