Friday, September 30, 2016

City Planning, Healthy Habits, and the Broad Field of Public Health

To improve health, we need to be thoughtful about city design, a series of articles published in the Lancet says. This may seem like a strange juxtaposition, but in fact, city planning has been an important aspect of public health since the beginning. A press release about the series, published by the American Journal of Managed Care, explains:
"With the world’s population estimated to reach 10 billion people by 2050, and three quarters of this population living in cities, city planning must be part of a comprehensive solution to tackling adverse health outcomes," series author Professor Billie Giles-Corti, University of Melbourne, Australia, said in a statement. "City planning was key to cutting infectious disease outbreaks in the 19th century through improved sanitation, housing and separating residential and industrial areas. Today, there is a real opportunity for city planning to reduce non-communicable diseases and road trauma and to promote health and wellbeing more broadly."
Thanks to my time at VA, I've gotten to learn more about this broad field known as public health. It covers many aspects of understanding and improving health - everything from how diseases spread to health habits to interventions that improve outcomes. The diseases studied also range from communicable - diseases you can "catch" from others - to noncommunicable - a broad category that includes lifestyle diseases (like Type II diabetes and cardiovascular disease) and accidental injuries (like car accidents). City planning has the potential to prevent these noncommunicable diseases by encouraging physical activity, promoting healthy eating, and creating a safe environment that minimizes traffic accidents. Even pollution and noise levels need to be considered, as these also contribute to physical and mental health.

In fact, city planning is immensely important when considering lower income housing sections, which historically have had limited green space, poor (or no) walking paths, other safety concerns that keep people from spending time outside or engaging in physical activity, and limited healthy food choices. This only serves to widen health disparities between high and low income individuals. Just as public health efforts in the past improved living conditions, especially for individuals in low income areas, thoughtful planning is needed today.

The important thing to keep in mind in terms of public health is that improving population health has far-reaching benefits, like increased productivity and decreased disability. Obviously, it's difficult to put a price on the quality-of-life benefits, but for those who think more in terms of tax dollars spent, spending money to improve living conditions actually does pay for itself.

Thursday, September 29, 2016

Susan Schneider Williams Shares Her Story with the Readers of Neurology

Actor Robin Williams died by suicide in August of 2014. In the time that followed, there was much speculation as to why a man who brought joy to so many people could take his own life. His widow, Susan Schneider Williams, finally shared that Robin had been suffering from Lewy body dementia, a rapidly progressing neurodegenerative disease with no cure. Personally, I was surprised that she didn't speak about this sooner - her announcement came a year after the suicide. Today I found out the real reason (via Neuroskeptic) in an article Susan published in the journal Neurology, which is freely available here.

Robin and Susan knew something was wrong, and multiple doctors identified multiple explanations for his constellation of symptoms (Parkinson's disease, major depression, and so on). They prescribed many medications that helped only a little or, in some cases, made things worse. As Susan discussed in her article, Robin was doubtful of every diagnosis he heard. He knew something was wrong, and though he didn't know what exactly, the diagnoses being thrown at him didn't get to the heart of what he was experiencing. Imagine how frustrating and terrifying that must have been, not just for Robin but for the person closest to him. He wasn't diagnosed with Lewy body dementia until after his death. Would knowing the true cause have given him some comfort? I see now why she didn't want to speak out right away. Not only that, she wanted to learn more about the disease that had taken her husband:
Three months after Robin's death, the autopsy report was finally ready for review. When the forensic pathologist and coroner's deputy asked if I was surprised by the diffuse LBD pathology, I said, “Absolutely not,” even though I had no idea what it meant at the time. The mere fact that something had invaded nearly every region of my husband's brain made perfect sense to me.

In the year that followed, I set out to expand my view and understanding of LBD. I met with medical professionals who had reviewed Robin's last 2 years of medical records, the coroner's report, and brain scans. Their reactions were all the same: that Robin's was one of the worst LBD pathologies they had seen and that there was nothing else anyone could have done. Our entire medical team was on the right track and we would have gotten there eventually. In fact, we were probably close.

But would having a diagnosis while he was alive really have made a difference when there is no cure? We will never know the answer to this.

Since then, Susan has become involved with the American Academy of Neurology and serves on the Board of Directors of the American Brain Foundation. She shares her story now in the hope that it will help others:
Hopefully from this sharing of our experience you will be inspired to turn Robin's suffering into something meaningful through your work and wisdom. It is my belief that when healing comes out of Robin's experience, he will not have battled and died in vain. You are uniquely positioned to help with this.

I know you have accomplished much already in the areas of research and discovery toward cures in brain disease. And I am sure at times the progress has felt painfully slow. Do not give up. Trust that a cascade of cures and discovery is imminent in all areas of brain disease and you will be a part of making that happen.

This Cat's Got Rhythm

Trumped Up Politics

As I've mentioned before, I try to avoid being political online. I see no issue with declaring a stance on important issues, like ensuring strong funding for scientific research, but I shy away from sharing my opinion on partisan politics. That being said, I have not tried to hide my opinion of Trump, whose candidacy frightens me. My thoughts about what he would be like as president range from him launching Twitter assaults on anyone who dares criticize him to committing war crimes (if he did, in fact, follow through on past comments that to fight terrorism, you have to go after their families).

Fortunately, I'm not the only one frightened by the prospect of Trump winning the election. And even people and organizations that generally back Republicans are endorsing other candidates. For instance, the Cincinnati Enquirer, which has traditionally backed GOP candidates, just endorsed Hillary Clinton:
The Enquirer has supported Republicans for president for almost a century – a tradition this editorial board doesn’t take lightly. But this is not a traditional race, and these are not traditional times. Our country needs calm, thoughtful leadership to deal with the challenges we face at home and abroad. We need a leader who will bring out the best in all Americans, not the worst. That’s why there is only one choice when we elect a president in November: Hillary Clinton.
The compliments of Clinton are, unfortunately, sometimes backhanded in this editorial. For instance, this is how they start the editorial:
Presidential elections should be about who’s the best candidate, not who’s the least flawed. Unfortunately, that’s not the case this year.
They call Clinton arrogant and unwilling to admit wrongdoing. Never mind the fact that Trump will say something horrible, then deny he ever said it, even when people show him videos or tweets as evidence. But they do offer some genuine praise of Clinton, calling her competent and inclusive, with strong diplomatic skills. This endorsement could be big for Clinton, considering Ohio is a battleground state.

Need a break from politics? I've created the Puppies Not Politics page on Facebook, where I share a puppy picture for each political post I see - head on over for more adorable puppies!

Wednesday, September 28, 2016

Why Antidepressants Fail: The Answer May Lie at the Heart of Social Psychology

Depression is one of the most common mental disorders in the US, affecting almost 7% of the population:

Though antidepressants are a frequent treatment for depression, as well as related disorders, studies estimate that they are ineffective in 30-50% of people. And a new study may have uncovered why:

The new research suggests it is at least partly down to people’s environment whether or not antidepressants work. Antidepressants may give the brain a chance to recover from depression, but more is needed. The rest could be down to being exposed to relatively low levels of stress.
The study was done in mice, so more research is needed to confirm whether this holds true for humans. But this finding aligns with a variety of social psychological (and related) theories. In fact, the influence of environment on physical health has long been the subject of public health and health services research and is part of the motivation for changes to healthcare delivery (like the patient-centered care model).

Tuesday, September 27, 2016

On the Importance of Emotion

During April's A-Z Blog Challenge, I talked about a variety of social psychological concepts. I talked about emotion (also known as affect) and how it influences the decision-making process. The thing many people outside of psychology do not always understand is that there really isn't a hard line between decisions made through cognition ("rationally") and those made through affect ("emotionally"). The two forces work together. Without emotion, we would find it very difficult to make some of the most basic decisions. Why? Because rational thought can only get us so far, and when two options are equally matched on a logical level, it takes that extra push of emotion to make a decision.

In Descartes' Error by neuroscientist Antonio Damasio, we learn about a patient named Elliot (not his real name), who was a successful businessman with a family, until he had a brain tumor removed from his frontal lobe. After, Elliot remained an intelligent, pleasant person, but his life was in shambles:
Any projects he did on the job were either left incomplete or had to be corrected, eventually leading to the loss of his job. He got involved in a moneymaking scheme with a “shady character” that ended up in bankruptcy. He got divorced, then married again to someone his family strongly disapproved of, and divorced again. By the time his referring doctors sent him to Damasio, he was living with a sibling, and, as a final blow, was denied disability assistance. The docs wanted to know if Elliot had a “real disease,” Damasio recounts, since “[f]or all the world to see, Elliot was an intelligent, skilled, and able-bodied man who ought to come to his senses and return to work. Several professionals had declared that his mental faculties were intact — meaning that at the very best Elliot was lazy, and at the worst a malingerer.”

[Damasio] learned that when Elliot was at work, he might spend an entire afternoon trying to figure out how to categorize his documents: Should it be by date, pertinence to the case he’s working on, the size of the document, or some other metric? Yet his cognitive faculties were ace: He tested well when given an IQ test and other measures of intelligence; Elliot’s long-term memory, short-term memory, language skills, perception, and handiness with math were all still present. He was not stupid. He was not ignorant. But he acted like he was both. He couldn’t make plans for a few hours in advance, let alone months or years. And it had led his life to ruin.

What was even more confounding is that Elliot could think up lots of options for a decision. When given assignments of assessing ethics (like whether or not to steal something for his family, Les Miserables–style), business (like whether to buy or sell a stock), or social goals (like making friends in a new neighborhood), he did great. But, even with all the idea generation, he could not choose effectively, or choose at all.
Without emotions, it becomes more difficult to know which tasks are more pressing, which organizational method is most preferable, or even when to buy or sell a stock. Those little emotional cues push us along. Whether we mean to or not, emotions come into play regularly. If they didn't, we would be like poor Elliot, forever analyzing organizational methods while blowing work deadlines or falling for scams that might sound legit if not for the little nagging doubt or fear in the back of our minds.

In fact, emotional reactions occur more quickly than cognitive reactions. We feel fear and begin to run before we consciously realize we've just seen a bear during our hike. One reason for this might be how our brain uses short-term memory (also known as working memory) versus long-term memory. Working memory is filled with information we want to have readily accessible, while long-term memory refers to the information and episodes (memories of our lives) in storage.

A recent study in Psychological Science delved into this very topic. Across four studies, the researchers examined how emotional information stored in working memory affected processing speed. On a computer, they showed participants faces, either neutral or negative (fearful or angry). They manipulated whether participants held this face in working memory by telling them to remember the face for a task later on. They then flashed faces that increased in contrast, becoming clearer over 5 seconds. Participants had to indicate whether they saw the face, and then indicate whether it was the same as the face they were shown initially. They found that people identified faces more quickly when they were holding a fearful or angry face in working memory:
In sum, the present study extends previous findings by demonstrating that the content of WM can affect emotional processing in the absence of conscious awareness, and such WM modulation effects on nonconscious processing seem to be tuned to threat-related signals (e.g., fear and anger).
Essentially, the faces put people on edge and made them react more quickly. In a computer-driven study, this might not seem very important, but what if (for instance) you're out in a public place and you look around and see fear on people's faces? You now know there's something to be afraid of, and you will hopefully react more quickly when you encounter whatever you should fear. If you didn't have emotions, faces would just be faces, and whatever emotion they're displaying would be as meaningless as organizing files by document size.

Monday, September 26, 2016

Now I Am Become Blog

On Thursday, I wrote about Susan Fiske's upcoming article in the APS Observer. Today, Neuroskeptic published his own response to Fiske's article. Unlike other bloggers (such as Andrew Gelman), Neuroskeptic seems to agree with my interpretation that Fiske was not talking about just anyone who criticizes scientific research, but people who do so in an unethical manner. However, he takes things one step further than I do by demanding that Fiske name names:
We should hold the offenders accountable with reference to specific examples of their attacks. After all, these people (Fiske says) are vicious bullies who are behaving in seriously unethical ways. If so, they deserve to be exposed.

Yet Fiske doesn’t do this. She says, “I am not naming names because ad hominem smear tactics are already damaging our field.” But it’s not an ad hominem smear to point to a case of bullying or harassment and say ‘this is wrong’. On the contrary, that would be standing up for decency.

His Name May Ring a Bell

I talk a lot about behavior and conditioning on this blog, so it's surprising that this is the first time I've recognized this big day in the history of psychology:

Happy birthday, Ivan Pavlov!

Pavlov was born on this day in 1849. Though Pavlov's research on classical conditioning has had a tremendous influence on the field of psychology, he was actually a physiologist, interested in studying digestion. His research is a great example of the importance of serendipity - fortunate accidents - in the advancement of science.

In honor of Pavlov's big day, I'm going to do something I don't do very often - share a video of myself. Several years ago, when I was still a grad student and adjunct faculty member at Loyola, I was invited by a colleague to submit an educational video for his YouTube series (which, sadly, didn't take off), in which I spoke about classical conditioning. Enjoy!

Sunday, September 25, 2016

The Scoobies of Stranger Things

In a mashup that was surely created just for me (I know, it wasn't... and don't call me Shirley), someone has created Buffy the Vampire Slayer-style opening credits for Stranger Things:

Nerdist is of course loving this mashup as well, and speculated on who the various Stranger Things characters would align with in Buffy:
In this particularly apt mash-up, YouTube user Tony Harley has combined the characters of Stranger Things with the Buffy the Vampire Slayer intro credits, and it’s so damn perfect I can’t believe no one thought of it sooner. I mean, think about it: a group of lovable young weirdos band together to solve a supernatural mystery and defeat monsters? If that’s not the Scooby Gang from Buffy, I don’t know what is. But the parallels go further than that. Eleven, played to perfection by Millie Bobby Brown, is the obvious choice for Buffy, not just because she’s the complicated young woman at the center of the story, but because she is tormented by her own powers. [spoilers removed]

Sheriff Jim Hopper (David Harbour) is clearly the Giles in this crossover universe (and we’d love to see him interact with the kids more in season two, just to keep this comparison going), and I SUPPOSE Joyce would be Joyce, although I never felt like Buffy-Joyce supported and believed in Buffy the way Winona-Joyce believes in her missing son. Brooding brother Jonathan reminds me of mopey ol’ Angel, so maybe Steve is Spike? As for the kids, well, that’s where things get tricky. Ultimately I think Mike is Willow (which yes, corresponds with some popular Buffy fanfic out there), Dustin is Xander, and Lucas is Cordelia. That leaves Tara for Nancy, which shakes out nicely. Ok, so the parallels aren’t THAT direct but you get the picture.

Don't Bother, They're Here

In the category of "bizarro news," you may have heard about people dressing up like clowns and terrorizing the locals, in places like Green Bay, Wisconsin, and Greensboro, North Carolina, and a few other places without Green in the name. Since I'm terrified of clowns, they really don't have to do much to scare me except, you know, look like this:

However, these clowns are being truly scary to everyone who encounters them, even trying to abduct children. So far, there have been reports of creepy clown sightings in 6 states. A few have been arrested, and said afterward that they were just trying to scare people and have fun. But the real question is: WTF? Why clowns? And why so many all of a sudden?

The fear of clowns (sometimes called coulrophobia) is not a diagnosable condition; it doesn't appear in the Diagnostic and Statistical Manual of Mental Disorders (DSM) or the International Classification of Diseases (ICD). But there are many instances in pop culture, such as the horror miniseries It in the 1990s, that seem to correlate with increases in reported fear of clowns. A new version of It comes out next year. Perhaps that's the reason for all this clowning around?

Saturday, September 24, 2016

And It Feels Like the First Time

The last couple of nights, I've gotten to engage in two "firsts":

The first first was seeing my favorite movie, The Big Lebowski, on the big screen. Though I've seen this movie so many times I can pretty much recite the whole script, I first discovered it 15 years or so ago, after it had been released on video. (In fact, I'm sure the first time I saw it was on VHS.) Thursday night, I went to a Big Lebowski movie screening/beer tapping party with a friend. So much fun! I wore a Maude Lebowski costume and won the costume contest (!) despite the fact that there was a very authentically dressed Dude there (who I personally think should have won). Here's a photo my husband took of me in costume right before I left for the party:

Then for the second first, I went to my first masquerade ball, the Devil's Ball to benefit the Auditorium Theater. Once again, so much fun! Despite feeling a little rough and having almost no appetite from over-indulging Thursday night, I spent much of the night dancing, partaking in the mask competition, and hanging out with friends. I even had my picture taken by the professional photographer after one of the Auditorium Theater board members complimented my dress. (Note to self - when going to this kind of party, always know offhand "who" you're wearing.) I mentioned to one of my friends that I haven't been coached on how to pose (which I really appreciated, because I'm not naturally photogenic) since my wedding day. Hopefully that photo will be up soon and I can share. For now, I give you a selfie I took right before walking over to the ball:

Thursday, September 22, 2016

Tastes Like the Real Thing

And in the category of "peer review is so screwed," some researchers machine-generated reviews and presented them alongside actual reviews to participants, who generally could not tell the difference:
Peer review is widely viewed as an essential step for ensuring scientific quality of a work and is a cornerstone of scholarly publishing. On the other hand, the actors involved in the publishing process are often driven by incentives which may, and increasingly do, undermine the quality of published work, especially in the presence of unethical conduits.

We presented to [16] subjects a mix of genuine and machine generated reviews and we measured the ability of our proposal to actually deceive subjects judgment. The results highlight the ability of our method to produce reviews that often look credible and may subvert the decision.
God help us all.

What Has Happened Down Here is a Miscommunication

Has this ever happened to you? You go to see a movie with a friend. You love every minute of the movie, laughing at the jokes, crying when something sad happens, and cheering when the hero saves the day. The credits roll and you turn to your friend and say, "What did you think?" Your friend proceeds to trash-talk the whole movie and you find yourself thinking, "Did we watch the same film?"

That's kind of what I thought when I read a blog post a grad school classmate shared. The post was a response to a forthcoming article for the APS Observer, the magazine of the Association for Psychological Science. The article, by social psychologist Susan Fiske, deals with the new(ish) trend of criticizing psychological research in social media settings (the text of her article is provided in the blog post linked above). While her insistence that criticism of psychological research should be done either in private (i.e., peer review) or in moderated settings (e.g., letters to the editor/invited responses or discussions during conference presentations) is a bit short-sighted in my opinion, she does make a point that because it has gotten easier for people to a) get their message out there and b) get contact information for researchers, some criticisms have been little more than attacks - attacks motivated not necessarily by issues with the validity of the research or the soundness of the methods, but by vehement disagreement with the conclusions of the research. Although her article is short and she doesn't call anyone out by name or topic area ("because ad hominem smear tactics are already damaging our field"), she's probably talking about this:
Some [researchers] have weathered frightening vitriol and threats to their reputations. Back in 1975, US Sen. William Proxmire bestowed the first of his infamous “Golden Fleece” awards on a small federal grant given to APS William James Fellows Elaine C. Hatfield of the University of Hawaii and Ellen S. Berscheid of the University of Minnesota. Proxmire denounced their study on social justice and equity in romantic relationships as a waste of taxpayer dollars. The publicity generated threatening letters and phone calls to both scientists, and their federal funding dried up because of the stigma.

In the 1990s, renowned memory researcher and APS Past President Elizabeth F. Loftus, at the University of California, Irvine, drew considerably hostile reactions when her studies challenged people’s claims that they had uncovered — often with the help of therapists — repressed memories of abuse, molestation, and even alien abduction. Loftus even had to have armed guards accompany her to lectures after she received death threats.
Fiske talks (once again, in the general sense) about attacks that share some common elements of these extreme cases:
The destructo-critics are ignoring ethical rules of conduct because they circumvent constructive peer review: They attack the person, not just the work; they attack publicly, without quality controls; they have sent their unsolicited, unvetted attacks to tenure-review committees and public-speaking sponsors; they have implicated targets' family members and advisors.

Which is why I was completely dumbfounded when, after sharing the article in its entirety, the author of the blog post, Andrew Gelman, summed it up as follows:
In short, Fiske doesn’t like when people use social media to publish negative comments on published research. She’s implicitly following what I’ve sometimes called the research incumbency rule: that, once an article is published in some approved venue, it should be taken as truth.
Did we really just read the same article?

But it gets even weirder. Gelman begins talking about the new movement in psychological science to encourage replication of past studies, a movement that has at least created some serious doubts about the validity of past studies. He aims a lot of criticism at Proceedings of the National Academy of Sciences, and a set of articles edited by Fiske. In fact, he's done this before. To be totally honest, I agree with many of his criticisms of these papers, his concerns about the validity of studies that current researchers have failed to replicate, and even the potential errors he highlights in one of Fiske's own papers. So yes, perhaps Fiske does deserve some criticism.

Except that's not what her article is about. She isn't saying there should be no criticism; she's saying that, just as there are ethical guidelines for the proper conduct of research, there are (or should be) ethical guidelines about how to offer criticism of research. But Gelman refers to Fiske as attacking "science reformers" - the people doing replication research - when I think she's referring to ad hominem attacks. I think she would have far less issue with Gelman going through Fiske's work and picking it apart, discussing methodological and analytical errors, than she would with someone writing a blog post about how much Fiske sucks and that she should lose her position at Princeton, and hey, here's the contact information of her department chair and dean of the school, why don't you, dear reader, call them up and tell them how much you hate Fiske.

So I agree with Gelman on his criticism of some of the key studies in psychological science, and his desire for more transparency and replication - something Fiske also references in her short article. And I agree with his final conclusion:
Let me conclude with a key disagreement I have with Fiske. She prefers moderated forums where criticism is done in private. I prefer open discussion. Personally I am not a fan of Twitter, where the space limitation seems to encourage snappy, often adversarial exchanges. I like blogs, and blog comments, because we have enough space to fully explain ourselves and to give full references to what we are discussing.
I frequently do the same thing on my blog. But I take issue with his insistence that Fiske's "destructo-critics" are Gelman's "science reformers." The winds may have changed in psychological science research, but Fiske and Gelman are sailing different seas.

Wednesday, September 21, 2016

Teaching the Machines to Read

In my final years of grad school, I became interested in a different approach to data collection - text mining of online information - and a complementary analysis approach - natural language processing. I started my training in psychology as a purely quantitative researcher, collecting numerical data that I could analyze with any number of statistical tests.

In 2007, I started working on a collaborative project with Loyola (my grad school) and Chicago Public Schools. Unlike my past research, this research drew upon qualitative methods - looking for themes and patterns in narrative text. Over the years that followed, I worked on improving my knowledge of qualitative methods. In fact, that qualitative experience is (part of) what got me my job at VA.

Around 2010, when I was working on my dissertation, I learned more about text mining and natural language processing - not enough to know how to actually do it, but enough to be dangerous (or, for my Dunning-Kruger fans, enough to know there was a lot to learn). I discovered a Python package called the Natural Language Toolkit (NLTK), and started teaching myself Python. I now know there are other languages that can be used to do the same thing, but I still prefer Python for its easy syntax and data analysis capabilities.
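As a minimal sketch of the first step a toolkit like that automates (tokenizing text and counting word frequencies), here's a standard-library-only Python version. The regex tokenizer is a toy stand-in for the much more careful tokenizers a real NLP package provides:

```python
import re
from collections import Counter

def tokenize(text):
    """Naive word tokenizer: lowercase, then pull out runs of letters/apostrophes."""
    return re.findall(r"[a-z']+", text.lower())

sample = "The jury heard the testimony. The testimony swayed the jury."
tokens = tokenize(sample)
freq = Counter(tokens)
print(freq.most_common(3))  # → [('the', 4), ('jury', 2), ('testimony', 2)]
```

Real toolkits go far beyond this, handling sentence boundaries, contractions, part-of-speech tagging, and more, but frequency counts like these are the raw material for a lot of text mining.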

My plan was to use natural language processing (or NLP) for a large-scale content analysis of press coverage of criminal trials (pretrial publicity: the subject of my dissertation), but NLP is used in many different applications. For instance, NLP is used to essentially teach computers to read, which in turn improves search engines. However, researchers usually train their models on print sources like the Wall Street Journal, which represent Standard English but don't capture other ways people use language, like slang and dialect. But a recent study at University of Massachusetts Amherst might signal a change:
Using only standard English has left out whole segments of society who use dialects and non-standard varieties of English, and the omission is increasingly problematic, say researchers Brendan O'Connor, an expert in natural language processing (NLP) at the University of Massachusetts Amherst, and Lisa Green, director of the campus' Center for Study of African-American Language. They recently collaborated with computer science doctoral student Su Lin Blodgett on a case study of dialect in online Twitter conversations among African Americans.

The authors believe their study has created the largest data set to date for studying African-American English from online communication, examining 59 million tweets from 2.8 million users.

The researchers identified "new phenomena that are not well known in the literature, such as abbreviations and acronyms used on Twitter, particularly those used by African-American speakers," notes Green. The team adds, "This is an example of the power of large-scale online data. The size of our data set lets us characterize the breadth and depth of language."
Not only can understanding different dialects help improve the performance of search engines, it can help with research examining public opinion (on anything from politics to preferences for soda packaging). In order to use language data in this type of research, you have to understand how language is actually being used by your sample (as opposed to how it "should" be used) - although that last statement is a matter of debate in the linguistics community.

Tuesday, September 20, 2016

The Hamilton Starting Lineup

Hamilton, the smash hit Broadway show it's damn near impossible to get tickets to, opens its Chicago run one week from today! Thanks to the early group sales, and a thoughtful and organized friend, I'll be seeing the show on October 16.

To help people prepare for the run, Crain's has put together an article (shared with me by my friend over at The Daily Parker) listing the main characters, each with a quick bio and 1-3 of their "defining lines." Their main reason for doing this is that the incredible wordplay composer/lyricist/awesome human being Lin-Manuel Miranda packed into the play might be a bit much for traditional theatregoers. In fact, with that much material, the show would run more like 4-6 hours if written as a "traditional" musical.
"Hamilton" clocks in with twice as many words per minute as its closest competitor, "Spring Awakening." With its long run time and dense lyrics, "Hamilton" has nearly an order of magnitude more words than "1776."
Show | Year | Run time | Words | Words/min
Hamilton | 2015 | 2h 23m | 20,520 | 144
Spring Awakening | 2006 | 1h 1m | 4,709 | 77
Phantom of the Opera | 1988 | 1h 40m | 6,789 | 68
Company | 1970 | 1h 1m | 5,085 | 83
1776 | 1969 | 0h 41m | 2,735 | 66
Candide | 1956 | 1h 14m | 5,616 | 76
Oklahoma! | 1943 | 1h 14m | 4,303 | 59
Pirates of Penzance | 1879 | 1h 43m | 5,962 | 58
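The last column is just the word count divided by the run time in minutes. A quick sketch of that arithmetic (word counts and run times transcribed from the table; small rounding differences from the published figures are expected):

```python
# Words-per-minute comparison, using figures from the table above.
shows = {
    "Hamilton":         (20520, 2 * 60 + 23),  # (word count, minutes)
    "Spring Awakening": (4709,  1 * 60 + 1),
    "1776":             (2735,  41),
}

wpm = {name: words / minutes for name, (words, minutes) in shows.items()}
ranked = sorted(wpm, key=wpm.get, reverse=True)
print(ranked[0])  # Hamilton tops the list by a wide margin
```

Hamilton works out to roughly 143 words per minute - nearly double Spring Awakening's ~77.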
And it works. Each time I listen to the Hamilton cast album, I'm astounded at the amazing word play and sheer brilliance of what Miranda was able to do. If you haven't listened yet, I highly encourage you to check it out and prepare to be wowed!

You Have Been Weighed, You Have Been Measured

When people find out I'm a psychologist, there are a few very predictable reactions, as predictable as the response I get when I tell people I'm originally from Kansas:

I'm of course asked if I'm analyzing them right now. I have many go-to responses for that one:

"I'm a social psychologist. I don't help people."

"I do research, so actually I'm analyzing your behavior."

"You think any self-respecting psychoanalyst would do that for free?"

But the next most common question is about dreams. I've had many people ask me if I can help them figure out the meaning of a strange dream they had. Even people who have known me for years sometimes ask me about dreams.

Dreams are pretty fascinating. It's like your own TV channel, except the shows all involve you walking around with no pants. (Or in my case, shoes - I definitely have the naked dream but probably more common is realizing I went all the way to work/school/somewhere random with no shoes.) There are certain dreams we all have. The naked dream is one example. But another common one was a recent subject of an article for the Washington Post:
We’ve signed up for a course that we never attend, or we forget we enrolled in it. When final-exam day approaches, we are panic-stricken because we never went to any of the lectures, never took notes and never did the readings or assignments.
The author, a journalism professor at the University of Maryland, did some digging to find out what psychologists had to say about the dream. I give her credit that the first place she looked was at the peer-reviewed literature. She was dismayed when she found nothing and expressed surprise that no psychologists had studied such a common dream. I'll be honest, I sat down and tried to figure out how I would study such a dream, and unfortunately, there don't seem to be a lot of objective ways to study any dream. You'd need to find people right after (or right before) they had the dream and try to measure various aspects of their lives to find any connections or explanations. But the problem with recruiting people right before is that 1) you don't know when or even if they'll have the dream and 2) you risk influencing their dreams by the simple act of studying them, perhaps causing people to have the school dream. In fact, just writing down your dreams has been shown to influence how you dream - I'm told that's the best way to get into lucid dreaming, where you are able to control the direction of a dream during the actual dream.

So with no peer reviewed literature to guide her, she asked various psychologists to offer their thoughts on the dream. Though many of the explanations crossed into the realm of psychoanalysis (see this blog post to find out what I really think about Freud and his ilk, a post that was also about dreams in which I briefly referenced the "going back to school" dream), there did seem to be a thread linking these explanations together that probably gets at the truth:
"I think those who have it tend to be professional and were successful students," says Judy Willis, a neurologist and teacher who lives in Santa Barbara, Calif., and who wrote about the dream in a 2009 Psychology Today blog post. "These are people who have demanded a high performance from themselves. The recurrence of the dream correlates with times of stress and pressure, when people feel they have a challenge to achieve."

Gemma Marangoni Ainslie, an Austin psychoanalyst, agrees. The final exam, she says, "is likely representative of an occasion when the dreamer feels he or she will be tested or measured, and the anxiety is about not measuring up. The dreamer's task in 'awake life' is to translate the final exam to a situation he or she is facing that stirs up concerns about potential failure."

But why school? Why don’t we dream about current pressures — grant proposals that are due, impending legal briefs or oral arguments, or newspaper deadlines?

"Emotional memories and impressions made during high-stress experiences are particularly strong, and are further strengthened each time they are recalled and become the place the brain goes when the emotion is evoked," Willis wrote in an email. "Since each new stress in the current day is 'new,' there is not a strong memory circuit that would hook to it in a dream. But there is that strong neural network of previous, similar 'achievement' stress. Since tests are the highest stressors. . . [it] makes sense as the 'go-to' memory when stressed about something equally high stakes in the 'now.'"
So the dream is really about fear of not measuring up, or of being measured and found wanting. We're concerned about failure. Though in the article, the psychoanalyst goes on to talk about manifest/latent content and how your brain is trying to shield you from the real truth, the neurologist's explanation is probably more accurate and certainly more supported. It's kind of the "neural-connectedness" theory of dreams. Information stored in your brain is connected through a neural network. Some connections - such as the connection between dog and leash - are stronger than others - such as the connection between dog and duck-billed platypus.

The internet, however, laughs in the face of your strong-weak connection dichotomy and says, "You want to see a dog dressed as a duck-billed platypus? Boom!"
When you sleep, your brain is consolidating memories, and building up the neural network. So if a stressful time in your current life reminds you of a stressful time in school, your resting brain will forge or strengthen that connection. I think, and I think Dr. Willis would agree, that dreams are just your brain making and testing connections, causing you to see elements of these memories and bits of information while you sleep. We see common themes and threads not because your subconscious brain is sending you coded messages, but because that's how neural connectedness works.

If you have had this dream, unfortunately you're probably going to continue having this dream, especially in times of stress. Dreams can be disconcerting but they're really just your brain testing out the wiring - and that's a good thing.

Monday, September 19, 2016

And Then There's This...

... via Funny or Die, a product to help women get men to look them in the eye... sort of. In the words of George Takei, "Oh myyyy":

Hillary Clinton, Complexity, and Likability

A college professor shared this great article that I want to pass on: To find Hillary Clinton likable, we must learn to view women as complex beings. The author begins by talking about many of the great characters in pop culture and literature, characters who weren't always likable and sometimes were complete villains. But we were interested in them, because they were complex. And by the way, they are predominantly white men. The author says we've been "trained to empathize with white men," whether we realize it or not.

It reminded me of a recent complex, and not always likable character, Jessica Jones (title character of the Netflix series). Jessica is a private investigator with super powers. But she's also a sexual assault victim, someone who was continuously victimized by her ex-boyfriend, who was able to make her stay and do horrible things with his own super powers (mind control). We meet her after she has gotten away from him, when she has become an alcoholic, violent, and tactless. I loved the show, and Jessica, right away. Finally, I thought, a victim who doesn't care about being likable. Who doesn't have to be so sweet and wonderful that of course we all sympathize with her, that we all ask how something so terrible could happen to her.

No, Jessica would say, f*** that. I don't care if you like me. I don't care if you think I deserved or didn't deserve what happened to me. Because at the end of the day, regardless of what I'm like as a human being, I didn't deserve to have my free will taken from me.

But I spoke to many people who couldn't get into Jessica Jones, or found themselves preferring other characters, because Jessica was so unlikable. And it's true, sometimes I would watch Jessica react to a situation and think she should have reacted differently - if she wanted things to go a certain way, that is. But I loved the character, perhaps because she wasn't always likable, and contradicted the stereotypical victim I have seen far too often. She was not a stereotype. She was a real person, with flaws. She was complex.

So I already knew a little of what to expect in the article about Hillary Clinton. Complex female characters are unusual, and hard to accept, so of course, that would extend to our relations with others, especially people we only view through media. But I think this paragraph really sums up the insanity of this election:
I try to wrap my head around the fact that Hillary Clinton is on one hand the most qualified human being to ever run for president of the United States, and, on the other, one of the most disliked presidential candidates of all time. In fact, Donald Trump is the only candidate who is more disliked than Clinton. And he’s not only overtly racist, sexist, and Islamophobic, but also unfit and unprepared for office. How can these two fundamentally dissimilar politicians possibly be considered bedfellows when it comes to popular opinion?
We lack archetypes for people like Clinton, and often demand perfection - even contradictory traits - from women in general. Even Clinton recognizes this; the article quotes an interview with Clinton, in which she says:
It’s hard work to present yourself in the best possible way. You have to communicate in a way that people say: ‘OK, I get her.’ And that can be more difficult for a woman. Because who are your models? If you want to run for the Senate, or run for the Presidency, most of your role models are going to be men. And what works for them won’t work for you. Women are seen through a different lens.

Sunday, September 18, 2016

My YouTube Rabbit Hole

I woke up super-early this morning, and unable to get back to sleep, I decided to get up and do some writing on my laptop.

Yeah, about that... I'm pretty sure my MacBook Air is dead. I tried to do some troubleshooting, but no luck. Since it was still crazy early, I decided to deal with it later, when people who can help are actually awake, and instead fell down a YouTube rabbit hole on my phone. Here's what I watched.

Jenna Marbles giving 30 life lessons she learned over her 30 years of life - everything from moving to new places whenever you can ("sometimes you just need to be in a place where you don't know anyone and you have to go figure your life out") to always doing what's right unless it doesn't feel right ("karma is real, but it's not always your job to make sure it gets served"):

John Oliver on Trump's sarcastic (but not really, but maybe) comment on Obama as the founder of Isis. Yes, John, Trump is indeed a "tremendous floater":

This amazing footage being streamed live from the International Space Station:

And finally, this, because reasons:

Saturday, September 17, 2016

Hipster Businesses

Walking to lunch today, I noticed a new place had opened up in the empty store front by the hipster coffee shop, Owl & Lark. The new place: Steak + Vine.

I immediately thought of this.

Friday, September 16, 2016

On Waterfowl, Learning, and Behavior

My new job is in a corporate plaza with a large pond. This pond has given me opportunities to observe many different kinds of waterfowl on a more regular basis, including swans, geese, ducks, and cormorants. During lunch yesterday, as I was eating outside, I watched a group of geese take off in flight, only to land about 10 feet away. Some of the geese continued to fly up and skid to a landing just a few feet away. I started wondering about what purpose this behavior might serve. Perhaps to stir up tasty fish?

But then it occurred to me that it might serve no purpose at all. After all, humans are not the only creatures to engage in meaningless behaviors, which is something I learned while taking a course on B.F. Skinner in college.

To back up a little bit, organisms have two broad ways of adapting to an environment. There's the long way, which occurs over multiple generations, through natural selection: evolution. And then there's the short way, which occurs within the lifespan of an organism. We call this way learning. Because we can learn, we don't need to wait for evolution to catch up to certain things. For instance, we don't need to wait for humans to evolve, say, thicker skin or an extra layer of body fat to keep warm in cold locations; we've just learned how to do things to stay warm, like light a fire or create clothing.

Animals can also learn. In fact, some research suggests even single-celled organisms can learn - specifically, a very simple form of learning known as habituation. (Simple, but still quite impressive, considering these organisms don't possess a nervous system.) Creatures with more developed nervous systems will, of course, be able to learn more complex tasks. And those tasks can help them to function optimally in the environment.
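As a toy illustration of habituation (not a biological model - the stimuli and decay rate here are invented), imagine the response to a repeated stimulus shrinking with each exposure, while a novel stimulus still gets a full-strength response:

```python
def habituate(stimuli, decay=0.5):
    """Toy habituation model: response to a repeated stimulus decays;
    a novel stimulus elicits a full-strength response."""
    responses = []
    seen = {}
    for s in stimuli:
        strength = seen.get(s, 1.0)   # novel stimuli start at full strength
        responses.append(strength)
        seen[s] = strength * decay    # weaker response next time
    return responses

print(habituate(["tap", "tap", "tap", "light"]))
# -> [1.0, 0.5, 0.25, 1.0]: the repeated "tap" fades; the novel "light" does not
```

Even an organism with no nervous system can pull off something like this - "stop responding to the thing that keeps happening and never matters" is about as cheap as learning gets.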

Or, as I said, the behaviors could be meaningless. According to behaviorists, behavior is shaped through reinforcements and punishment, through a process of association. If I do something and then receive a reward right after, I'll be more likely to do that thing again in the future. But what if the behavior and the reward have nothing to do with each other, and the two just happened to coincide? That part doesn't really matter - what matters is whether I associate the behavior with the reward.

I play pub trivia with a group of friends. Last time I played trivia with them (which was a couple of weeks ago), I wore a new hat. We won. So let's say I wear that hat to trivia again, and we win. What if I keep wearing it and we keep winning? I might come to think of it as my lucky hat and insist on wearing it to every trivia event in the future. We call this a superstition, and while many cognitive psychologists have explained it as an irrational belief formed through maladaptive thought, Skinner believed it was shaped through reinforcement. That is, he explained superstition in behaviorist terms instead of cognitive terms. And he set out to support his hypothesis with a genius study: he made pigeons superstitious.

The study started off as a typical animal learning study. Animals are first fed what's known as an ad lib diet - they can eat as much as they want whenever they want. Once their level of eating is determined from a period of ad lib eating, they are put on a diet, typically 75% of what they had been eating on the ad lib diet. So now they're hungry, meaning you can use food as an easy reward for training. Skinner conducted much of his animal training in boxes called operant conditioning chambers, or what is often called a "Skinner box."

These boxes were usually designed so that food could be delivered in one of two ways: through a lever inside the box that you would train the animal to press, or from outside the box with a switch. Typically researchers use the switch to shape the animal to get to the lever and press it, but for this study, the only way food was delivered was through the external switch.

So now the fun begins. The hungry pigeon was in the box, wondering (as much as pigeons can) when it would be fed again.

Oliver, starring pigeons
It was doing random pigeon things when suddenly, food was delivered through the dispenser. After quickly eating the food, the pigeon eagerly waited for more. When no food came, the pigeon went back to doing random pigeon things when suddenly, more food. Over time, the pigeon began to associate whatever it was doing with the food arriving. Skinner's pigeons ended up exhibiting all kinds of behaviors in this study, including turning counterclockwise, head bobbing, or even holding completely still. The thing is, the food was being delivered at regular intervals with no connection to the pigeon's behavior, but 75% of the pigeons in the study began repeating whatever behavior they happened to be doing when food arrived. Because food kept arriving on that schedule, if the pigeon kept up the behavior long enough, food would arrive while the behavior was occurring or shortly after it finished.
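This rich-get-richer loop can be sketched in a toy simulation (the behavior names, interval, and learning rule are all invented for illustration): food arrives on a fixed schedule regardless of behavior, yet whichever behavior happens to coincide with food gets strengthened, making it more likely to coincide again.

```python
import random

def superstitious_pigeon(steps=3000, interval=15, seed=1):
    """Toy model of Skinner's setup: food every `interval` ticks,
    independent of behavior; whatever the pigeon happens to be
    doing when food arrives gets reinforced."""
    random.seed(seed)
    behaviors = ["turn", "bob", "hold_still", "peck_corner"]
    strength = {b: 1.0 for b in behaviors}
    for t in range(1, steps + 1):
        # the pigeon picks a behavior in proportion to its learned strength
        weights = [strength[b] for b in behaviors]
        current = random.choices(behaviors, weights=weights)[0]
        if t % interval == 0:         # food on schedule...
            strength[current] += 1.0  # ...reinforces whatever coincided
    return strength

result = superstitious_pigeon()
print(max(result, key=result.get))  # one arbitrary behavior now dominates
```

Which behavior "wins" depends entirely on the random seed - just as which superstition a real pigeon developed depended on what it happened to be doing when the first few meals arrived.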

Obviously, the only way for food to be delivered in the highly controlled operant chamber is through the researcher's intervention, so in the lab, these superstitious behaviors had to be deliberately induced. In nature, however, things just happen, and it would be easy for something good (or bad) to occur at random. If an animal happened to be exhibiting some random behavior (like jumping up and flapping around) before a nice juicy meal appeared, it might come to associate the behavior with the reward, even if the two aren't actually connected. And because some animals can learn by watching others, certain behaviors could get passed on that actually serve no purpose.

Thursday, September 15, 2016

In Case You Want to Be "That Guy" at Parties...

... here's a list of common misconceptions, discovered recently on Wikipedia. Just a few examples:
  1. Putting "metal in the science oven" actually doesn't damage the electronics; you just might get some arcing.
  2. The beloved f-word is not actually an acronym for "Fornication Under Consent of the King" nor "For Unlawful Carnal Knowledge" but instead likely originated from German, Dutch, or Norwegian words.
  3. The vomitorium of ancient Rome is not a room where people went to vomit after eating too much food, but instead refers to the entryway to the stadium. (I've been the "well actually" person correcting this in the past, so I guess I'm already "that guy".) 

Stranger Inspiration

Recently, I started watching Stranger Things, a Netflix series that you've probably heard of, unless you live under a rock - but if you're able to get internet under that rock in order to read my blog, I'm not sure you have an excuse even then. The show takes place in 1980s small-town Indiana, and begins with the disappearance of Will Byers. But then things get, well, stranger. As a big fan of horror movies (for more evidence, look here), I'm of course loving the series. And probably the biggest reason for that is how the series pays homage to 1980s horror and sci-fi. Even though I'm only about halfway through the series, I wanted to sit down and gather my thoughts on all the different references and homages I've seen in the episodes I've watched so far. I'm going to try to keep this as spoiler-lite as possible.

The Title and the Town

First, I'd say the biggest inspiration for Stranger Things comes from the work of Stephen King. Though the show takes place in Indiana, on multiple occasions I've felt like I was in Castle Rock, the Maine town in and around which King sets many of his stories. In fact, the title of the show reminds me of a particular story by King:

Many of the characters also remind me of the characters from King's work, particularly Police Chief Jim Hopper, who reminds me especially of Sheriff Alan Pangborn from Needful Things.

Then there's Eleven, a girl with psychokinetic abilities. I had her linked to one of King's characters, but when Will's friends (who discovered Eleven while trying to find Will) dress her up to sneak her into school, that clinched the link in my mind.

In fact, the storyline around Eleven draws a lot from Firestarter, including the government agency element and the nosebleeds Eleven experiences when using her powers.

The Disappearance of Will

Then there's the storyline around Will's disappearance, the creature who took him, and the efforts of his mother, Joyce (played brilliantly by Winona Ryder), to find him and bring him home. This strongly reminds me of Poltergeist. In Poltergeist, Carol Anne is kidnapped from within her house and taken to another place that sits in a dimension parallel to our own; that is, she is essentially still in the house, just on a different plane. In Stranger Things, we are told by Eleven, and then shown later on, that Will is hiding in his own house. He's just over one dimension.

Just as Diane learns she can communicate with her daughter through the television, Joyce learns she can communicate with Will through lights.

The entrance to the creature's realm, which appears to be in the government lab (though there might be other doors), is also similar to the door opened in the children's closet in Poltergeist:

Nancy and her Friends

At the same time as the story around the disappearance of Will, we have the story of Nancy, older sibling of one of Will's friends. She starts dating a guy from a different crowd, and while Nancy is distracted by him, best friend Barb is taken by the creature. Nancy definitely reminds me of Nancy Thompson from A Nightmare on Elm Street, a normal teenager who is pulled into a terrifying mystery after the brutal murder of her best friend.

Just like Elm Street Nancy, Stranger Things Nancy is present while her friend is being attacked but unable to help, though the friend continues to call out to her.

Just like Elm Street Nancy, Stranger Things Nancy has a jockish boyfriend who really doesn't seem like Nancy's type.

And just like Elm Street Nancy, Stranger Things Nancy is unable to get any adults to believe her story and has to figure out the mystery on her own.

Will's Friends

Switching gears a little (that is, moving away from horror movies and on to different movies of the 80s), there's Will's group of friends, Mike, Dustin, and Lucas. Probably the biggest parallel is of the group of boys from the Goonies.

They're going to keep looking for Will. Because Goonies never say die.
I mean, the leader of the group who pushes his friends to find Will, is named Mike (Mikey, anyone?). Barb and Nancy, before the whole disappearance and mystery, could also be Andy and Stef from the Goonies. Of course, I can also see similarities to Explorers, especially when you factor in the boys' love of science and meddling with things a government agency isn't happy about.

There's definitely more, but I'm going to stop there for now. More when I finish watching the series!

Wednesday, September 14, 2016

Doggy Reception at the Cell

Last night was my first Sox game at US Cellular Field (the Cell). It turns out last night was also another momentous occasion: it was their annual Bark at the Park event, where fans attend the game with their dogs!

This year, they set a world record for most dogs at a sporting event:
The Sox set out to achieve the record for their annual “Bark at the Park” event, and a Guinness World Record adjudicator was on hand to verify the record and award the club with a certificate for the feat. The Sox needed a minimum of 1,000 dogs in attendance for the record, and the dogs had to remain in their outfield seats for a period of 10 minutes, starting at the top of the third inning, in order for the record to count.
The final count? 1,122 dogs! I pet probably half of them:

Tuesday, September 13, 2016

More Fun with Perception

Another fun demonstration of perception, this image of a grid and 12 black dots showed up on Facebook and Twitter yesterday:

You probably can't tell that there are 12 dots, because you can't actually see all of them at the same time. Many people report seeing only 1-2 at a time. I was able to squint and see 4 at a time, but that's the most I can manage. So what's going on in this crazy picture?

A little digging and I found out the image was originally uploaded by a Japanese psychology professor, Akiyoshi Kitaoka, and a similar puzzle was presented in this article, which contains many optical illusion demonstrations.

The reason you can't see all 12 dots is that you have to use peripheral vision to do so, and while our peripheral vision is fine for seeing big things, it's not so great with fine detail:
In this optical illusion, the black dot in the center of your vision should always appear. But the black dots around it seem to appear and disappear. That’s because humans have pretty bad peripheral vision. If you focus on a word in the center of this line you’ll probably see it clearly. But if you try to read the words at either end without moving your eyes, they most likely look blurry. As a result, the brain has to make its best guess about what’s most likely to be going on in the fuzzy periphery — and fill in the mental image accordingly. That means that when you’re staring at that black dot in the center of your field of view, your visual system is filling in what’s going on around it. And with this regular pattern of gray lines on a white background, the brain guesses that there’ll just be more of the same, missing the intermittent black dots. Those dots disappear and reappear as your eye jitters around "like a camera that’s not being held stably," [vision scientist, Derek] Arnold says.

From Behind the Zion Curtain

Thanks to The Daily Parker, I learned about this interesting article on the relationship between craft breweries per capita and religiosity by state - that is, states with more people identifying as "very religious" have fewer craft breweries, per capita, which they demonstrated with this lovely graph:

This relationship exists because religiosity shapes alcohol policy:
Local regulations determine the level of production much more than demographic characteristics such as income or education, says Bart Watson, chief economist of the Brewers Association, a craft-beer trade organisation. And religious legislators may get a bit overzealous. Utah, a state populated with many teetotaling Mormons, strictly limits the strength of draught beers and cocktails. Bartenders in the state must also mix drinks behind large barriers called “Zion curtains” (for the sake of the children, of course). No surprise then that it appears not to have caught on to the craft-brewing craze.

Monday, September 12, 2016

Citation Needed

As if on cue, Saturday Morning Breakfast Cereal provided this cartoon in my Facebook feed just now:

It has begun. The computers... they're smarter than we are.

Broken Windows Policing and Validity

Today, an article about the so-called "broken windows theory" of policing came through my Twitter feed. If you're unfamiliar with this paradigm, the article includes a nice overview:
Introduced in the March 1982 issue of The Atlantic magazine, “Broken Windows” was the brainchild of George L. Kelling, a criminologist, and James Q. Wilson, a political scientist. At its heart was the idea that physical and social disorder – a broken window, a littered sidewalk, public drunkenness – are inextricably linked to criminal behavior. By focusing on repairing the windows, cleaning up the streets, and dissuading crude behavior, Kelling and Wilson suggested, police departments can help to forestall more serious crimes from ever taking shape.
The implications of this approach are more arrests for misdemeanor crimes, as well as efforts to prevent minor crimes, like "stop and frisk." I'm sure you can imagine how this approach to policing can be (and already has been) abused. And a recent report from the New York City Office of the Inspector General suggests broken windows policing doesn't actually do what it's supposed to do:
The analysis examined five years of arrest and crime data in a hunt for some statistical relationship between quality-of-life arrests — those made for such offenses as public urination, disorderly conduct, and drinking alcohol in public — and a reduction in felony crimes. The results were undeniable: “OIG-NYPD’s analysis has found no empirical evidence demonstrating a clear and direct link between an increase in summons and misdemeanor arrest activity and a related drop in felony crime,” the report stated.
The NYPD countered with a study of their own. In fact, many studies have been performed on this topic, finding mixed results:
“That’s part of the problem with ‘broken windows’ literature,” said Dan O’Brien, an assistant professor at Northeastern University’s School of Criminology and Criminal Justice. “There’s just so many studies, that people will point to whatever study supports their argument.”
This situation is a great example of the nature of research on any topic. Choose a topic and there are likely multiple studies about it, and there will likely be contradictions (sometimes major) between those studies. That doesn't mean all of them are wrong - in fact, they may all be right to some degree. Whenever you conduct research on a topic, you have to make choices: which methods to use (for instance, analyze real-world data or simulate the real world in the lab with an experiment), what to measure and how to measure it, how to analyze the resulting data, and so on. It's impossible to do every possible iteration of study design. So even if you conduct the study in the most rigorous way possible, you might find very different results from another study also done in a highly rigorous way. That doesn't mean they are both wrong: the differences might be because of the choices you made.

By the same token, some approaches are better than others, though every approach has trade-offs. Which means some studies are going to be better (more valid) than others. This is why, when you evaluate a particular study, you have to evaluate it based on the methods, to recognize the various trade-offs and factors that limit how far you can apply the findings. If you evaluate a study based on the results, it ends up being based on opinion - what you think is true, regardless of whether it is actually true.

As I always informed my research methods students, learning how to evaluate studies is incredibly useful even for non-researchers, because you're otherwise at the mercy of others for what to believe. And that can have real-world implications, just like broken windows policing.

Saturday, September 10, 2016

Take It To the Limit (And Beyond)

You might be familiar with the historic Route 66, whose Illinois stretch connects Chicago to St. Louis. What you may not know is that a portion of Route 66, which runs along Joliet Road not far from me, is permanently closed due to overdigging at the Vulcan McCook Quarry. The road has been closed for many years, so I've only ever seen the west end of the closure with the "road closed" signs. But recently, someone shared a great video with me: a drone flight over the closed section, recorded as footage.

The result is breathtaking, with a cracked road and open air on either side. It's like a mountain road, tumbling off into the abyss (or rather the quarry, which you also get a nice view of). Check it out here:

Friday, September 9, 2016

Cheese, Gromit! Cheese!

Finally, a study to explain why I feel compelled to eat the cheese stuck to the pizza box, even when it ends up being 30% cardboard:
Cheese contains a chemical found in addictive drugs, scientists have found. The team behind the study set out to pinpoint why certain foods are more addictive than others. Using the Yale Food Addiction Scale, designed to measure a person’s dependence on certain foods, scientists found that cheese is particularly potent because it contains casein. The substance, which is present in all dairy products, can trigger the brain’s opioid receptors, which are linked to addiction.
Your body contains these receptors in the brain, spinal cord, and digestive system. Opioids have many effects, including pain relief and euphoria, which alone can be reinforcing (who doesn't like being happy and pain-free?), but they can also directly cause addiction through molecular changes in the brain and the involvement of naturally occurring (aka endogenous) opioids in the human body - chemicals like endorphins, dynorphins, and endomorphins. These chemicals can affect the release of other chemicals in the brain, which can result in behavioral impacts (such as drug-seeking behavior).

Obviously, cheese addiction isn't as severe as addiction to harder drugs (such as, say, morphine or cocaine), or else we'd hear about a lot more cheese addiction treatment programs.

On Psychometrics, Cognitive Ability, and Achievement in the STEM Fields

Yesterday, I stumbled across an article that on the surface (based on the title, at least) appears to be aimed at parents wanting to encourage genius in their children, but is in actuality a nice history of psychometrics, the measurement of cognitive ability, and the evolution of gifted programs. The article focuses on the research of psychometrician Julian Stanley, who conducted the Study of Mathematically Precocious Youth (SMPY):
As the longest-running current longitudinal survey of intellectually talented children, SMPY has for 45 years tracked the careers and accomplishments of some 5,000 individuals, many of whom have gone on to become high-achieving scientists. The study's ever-growing data set has generated more than 400 papers and several books, and provided key insights into how to spot and develop talent in science, technology, engineering, mathematics (STEM) and beyond.
The study was inspired in part by Lewis Terman's famous study of genetics and genius. What made Terman's study so interesting is that it limited inclusion to people with IQs in the genius range, yet found only modest levels of eminence in the sample - and it narrowly excluded, by only a few IQ points, two future Nobel laureates. This led to the conclusion that beyond a certain level, IQ does not necessarily correlate with greater success and achievement, a conclusion Malcolm Gladwell echoed in his book Outliers.

But Stanley saw some problems with these results, the most important being that Terman used overall IQ score - a measure of general intelligence - to try to identify people gifted in science and math, which are specific abilities. So Stanley skipped general cognitive ability tests and opted for the quantitative portion of the SAT. Later on, the researchers added an even more specific indicator of mathematical talent - spatial reasoning:
A 2013 analysis found a correlation between the number of patents and peer-refereed publications that people had produced and their earlier scores on SATs and spatial-ability tests. The SAT tests jointly accounted for about 11% of the variance; spatial ability accounted for an additional 7.6%.
Evidence from Stanley's work has resulted in changes in how gifted individuals are identified and educated, including the notion of skipping grades or allowing students to complete materials at their own pace and/or take advanced coursework while remaining in the same grade level (something I did during my time in school). What it comes down to is the importance of proper measurement - knowing what to measure to understand and predict certain outcomes.

Thursday, September 8, 2016

Fun with Stop Motion

And as an interlude for your Thursday afternoon, here's some fun with stop motion, featuring a little Alien versus Predator:

Be Here Now

As I mentioned in a previous blog post, I've accepted a new position outside of the VA. Earlier this week, I started my new job as a Psychometrician at Houghton Mifflin Harcourt. And I realized that I'm exactly where I've always wanted to be.

As a teenager, I became fascinated with tests - most of my exposure was through those stupid personality tests from magazines, which I didn't find nearly as stupid as I probably should have, but I also had the opportunity to take a cognitive ability test (what was once known as an intelligence test) in 4th grade. I thought the concept of solving puzzles to find out more about oneself was amazing.

Flash forward to college, when a fascination with the statistics I encountered in journal articles during General Psychology led me to switch my major from theatre to psychology. But I was even more excited when I discovered that one of the classes I could take was Testing & Measurement, which covered the various psychological tests used for diagnosis (such as the Minnesota Multiphasic Personality Inventory, or MMPI), for examining development (such as the House-Tree-Person task, which literally involves having a child draw a house, a tree, and a person), or for determining cognitive ability (such as the Wechsler Intelligence Scale for Children, or WISC - which, I realized during that class, is the test I had taken in 4th grade, once I recognized one of the subtests, which involves recreating designs with blocks).

I briefly considered going into clinical psychology, as someone who would administer these psychological tests, but social psychology drew me more strongly. I was in luck, though, when one of the classes I could take in grad school was structural equation modeling, which is frequently used within a test development paradigm called classical test theory. In that approach, you create a single test that measures a particular concept, then gather data to demonstrate that it measures what it is supposed to measure (validity, through comparisons with gold standards where possible, or with similar measures) and that it does so consistently (reliability, by comparing items from one half of the test to the other - called split-half reliability - or across administrations over time - called test-retest reliability). I didn't really do a lot with test development in grad school, unfortunately, but I was able to during my time at VA.
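Split-half reliability is simple enough to sketch in a few lines of code. The snippet below is purely illustrative (simulated data, and the variable names are my own, not from any particular textbook or library): it correlates examinees' scores on the odd-numbered items with their scores on the even-numbered items, then applies the Spearman-Brown correction, since each half is only half the length of the full test:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 test-takers answering 20 items scored 0/1.
# A latent "ability" drives every item, so items should correlate.
ability = rng.normal(size=(200, 1))
responses = (ability + rng.normal(size=(200, 20)) > 0).astype(int)

# Split-half reliability: correlate total scores on the odd items
# with total scores on the even items.
odd = responses[:, 0::2].sum(axis=1)
even = responses[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]

# Spearman-Brown correction: project the half-test correlation up to
# the reliability of the full-length test.
reliability = 2 * r_half / (1 + r_half)
```

Because the correction is 2r / (1 + r), the full-test estimate is always a bit higher than the raw half-test correlation, reflecting the fact that longer tests are more reliable.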

And the last piece of the puzzle was a fellowship I received while at VA, where I got additional training in a newer area of test development, item response theory, and a related but mathematically distinct approach, Rasch measurement. In these paradigms, responses to individual items are modeled as determined by two things: the difficulty level of the item (or, in the case of personality tests, the amount of a trait a person needs in order to respond in a certain way) and the ability level of the person (or the amount of the trait he/she possesses). This approach has many advantages over what I learned in grad school, the two biggest being: 1) you can more accurately estimate ability level based on how people respond to the items, and 2) it is no longer necessary to give every person all items, or even the exact same set of items. This opens the door for things like computer adaptive testing.
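The core of the Rasch model fits in a few lines. This is just a sketch (the function name is mine, not from any particular library): the probability of a correct response depends only on the gap between person ability and item difficulty, both expressed on the same logit scale:

```python
import math

def rasch_probability(theta, b):
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty (in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability exactly matches difficulty, success is a coin flip.
p_matched = rasch_probability(0.0, 0.0)   # 0.5
# A person well above the item's difficulty almost always succeeds.
p_above = rasch_probability(2.0, 0.0)
```

Because ability and difficulty sit on a common scale, any well-calibrated item informs the same ability estimate - which is why adaptive tests can give different people different items and still compare their scores.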

All the tools were there. And what am I doing now? I'll be working on batteries of cognitive ability tests, very similar to the WISC I took as a child. I get to do exactly what fascinated me as a child - working with tests - combined with a love of statistics I discovered in college to really dig into the tests, and make sure they work. I'll even be involved in writing the manuals that go along with the tests! It's a dream 30-some years in the making.