Monday, December 5, 2016
The 2016 Song
Since I'm not friends with all of you on Facebook, I wanted to share this post here as well - the 2016 Song, by two hilarious and sassy ladies. Amazing wordplay that is definitely NSFW. Enjoy!
Trust in a Post-Truth World
Oxford Dictionaries has announced the 2016 word of the year:
post-truth (adjective) - relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief
The concept of post-truth has been in existence for the past decade, but Oxford Dictionaries has seen a spike in frequency this year in the context of the EU referendum in the United Kingdom and the presidential election in the United States. It has also become associated with a particular noun, in the phrase post-truth politics.
A recent article in The Guardian tied this concept to the notion of trust, and discusses results of the Ipsos Mori 2016 Veracity Index, which assesses level of trust in different societal roles. Unsurprisingly, politicians are among the most distrusted, and saw their trust rating fall this year to 15%. Who are the most trusted? Nurses, at 93%, followed closely by physicians at 91%:
The stellar 93% rating for nurses was warmly welcomed by Janet Davies, chief executive and general secretary of the Royal College of Nurses. “Nurses are some of the most caring, hardworking staff in the UK and it is very encouraging to see their efforts reflected in the eyes of the public,” she said.

Here are the full results (and access more details here):
“A trusting relationship is absolutely essential in healthcare. As pressures on the health service rise, it’s particularly positive that the public have maintained their faith in the frontline staff working tirelessly for them throughout these difficult times. These results highlight the critical role nurses play in the lives of people in the UK.”
Sunday, December 4, 2016
Academic Dishonesty, Post-Peer Review and Debunking Research
As a researcher, publishing is a very important part of my job and ongoing career options. Though most researchers engage in research honestly, and if their results are incorrect, it's more likely due to error than malice, there are still cases in which researchers have fabricated data and even entire studies (for more, see here and here). Recently, a friend brought to my attention yet another instance of research dishonesty - a case that came to light last year, but I only learned about today. What is surprising to me, in this case, is that both the dishonest researcher and the one who debunked the research are (or were at the time) graduate students:
The exposure of one of the biggest scientific frauds in recent memory didn’t start with concerns about normally distributed data, or the test-retest reliability of feelings thermometers, or anonymous Stata output on shady message boards, or any of the other statistically complex details that would make it such a bizarre and explosive scandal. Rather, it started in the most unremarkable way possible: with a graduate student trying to figure out a money issue.

Michael LaCour, a graduate student at UCLA, talked to David Broockman, a grad student at UC Berkeley, about a multiphase study he had performed in which canvassers were able to change respondents' attitudes about gay marriage by revealing their own sexual orientation. Broockman, who was fascinated by the results, set out to replicate the study and encountered the first issue: LaCour's survey had included 10,000 respondents paid $100 apiece, requiring a rather large grant for a graduate student. So Broockman approached polling firms about the study idea - most said they couldn't carry out such a study, and even if they could, it wouldn't be feasible on the usual grants grad students could obtain.
So Broockman started talking to people - carefully, because he was informed by many, and suspected himself, that exposing another researcher could get him labeled as a troublemaker or incapable of coming up with his own research ideas. And in fact, LaCour had written the paper on the results with a well-respected political scientist at Columbia, Donald Green. Broockman even said that when he described the results to others, they would object that the results seemed to fly in the face of previous theory and research, but dropped those objections when they heard Green was involved. In fact, when Jon Krosnick of Stanford was contacted about the study, he said, "I see Don Green is an author. I trust him completely, so I’m no longer doubtful."
Broockman hit many snags along the way, not just because he was a busy grad student working on his own research and finishing his degree - he was cautioned about exploring these issues by nearly everyone he spoke to. An anonymous post on poliscirumors.com laying out his suspicions was deleted. And his analyses of the distribution of the data, which looked too clean to be real, failed to uncover major issues.
But still, there were hints that something was wrong. When he messaged LaCour with questions about methodology, the answers were vague and unhelpful. A similar study Broockman conducted with fellow grad student Josh Kalla showed response rates for the first wave around 1%, even though they were offering as much money as LaCour, who reported response rates of 12%. An email to the survey research firm that LaCour said he had worked with on the study revealed that not only had LaCour never worked with the firm, but the person he claimed to be in contact with (and had emails from) didn't exist. Then, they hit gold: a 2012 Cooperative Campaign Analysis Project dataset that was a perfect match for LaCour's "first wave data."
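Finding that a "new" dataset matches an existing public one almost perfectly is exactly the kind of thing a simple record-level correlation check can reveal: a noisy copy of another dataset stays almost perfectly correlated with its source, row by row, while a genuinely independent sample from the same population does not. The sketch below is purely illustrative (it is not the actual analysis Broockman, Kalla, and Aronow performed, and all names in it are mine):

```python
# Illustrative sketch: detect a dataset that looks like "public data + noise"
# by checking the record-by-record correlation with the suspected source.
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)
# Stand-in for an existing public dataset (e.g., a survey measure).
public = [random.gauss(50, 10) for _ in range(1000)]
# A fabricated dataset made by copying the public data and adding noise.
copied = [v + random.gauss(0, 1) for v in public]
# An honest, independently collected sample from the same population.
honest = [random.gauss(50, 10) for _ in range(1000)]

r_copy = pearson(public, copied)    # near 1: rows track the source exactly
r_honest = pearson(public, honest)  # near 0: no row-level relationship
```

The point of the row-level (unsorted) comparison is that two honest samples from the same population will share a distribution but not a record-by-record pairing; only a derived copy shows both.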
By the end of the next day, Kalla, Broockman, and Aronow had compiled their report and sent it to Green, and Green had quickly replied that unless LaCour could explain everything in it, he’d reach out to Science and request a retraction. (Broockman had decided the best plan was to take their concerns to Green instead of LaCour in order to reduce the chance that LaCour could scramble to contrive an explanation.)

So what happened to the grad student who was repeatedly cautioned that debunking research could be a career killer? The response he received was "uniformly positive" and, oh, by the way, he's now tenure track at Stanford University. About this issue, he wrote: "I think my discipline needs to answer this question: How can concerns about dishonesty in published research be brought to light in a way that protects innocent researchers and the truth — especially when it’s less egregious? I don’t think there’s an easy answer. But until we have one, all of us who have had such concerns remain liars by omission."
After Green spoke to Vavreck, LaCour’s adviser, LaCour confessed to Green and Vavreck that he hadn’t conducted the surveys the way he had described them, though the precise nature of that conversation is unknown. Green posted his retraction request publicly on May 19, the same day Broockman, Kalla, and Aronow posted their report. That was also the day Broockman graduated. “Between the morning brunch and commencement, Josh and I kept leaving the ceremonies to work on the report,” Broockman wrote in an email.
I think many of us in the research field have witnessed activities that were questionable, perhaps even clearly unethical. But rarely are we encouraged to bring our suspicions to light, and there are certainly no safe venues to bring up concerns that may or may not be accurate. While I've never been actively discouraged from reporting ethical issues, I'm sure there are many researchers who have, like Broockman. And for many grad students and post-docs, it's more likely they are working with more seasoned faculty than other grad students, so when ethical dilemmas come up, the power dynamic may discourage them from doing the right thing. While we certainly don't want witch hunts for data that looks "too good to be true," we need to find ways to protect fellow researchers and the public from bad science and false data. Because that hurts all of us.
Saturday, December 3, 2016
On Netflix Binges and Emily Dickinson
Yesterday I discovered a series called Black Mirror. It's an anthology show that started on Channel 4, before being picked up by Netflix for its newest season. The most similar show I can think of is The Twilight Zone, which I watched with my dad. Each episode features a stand-alone story with different actors and characters, but they clearly exist in the same universe. One bit of technology that exists across multiple episodes is an implanted device that allows the recording of memories, the ability to see the world through another's eyes through an uplink, and the power to block people completely - when you block someone, they appear as an outline filled in with gray:
You can't see the person; you can't speak to them, in person or otherwise. Even looking at pictures of the person, or remembering events with them, all you see is the gray figure. One character said it was like having your memories vandalized. It's a world that encourages impulsive reactions to others, and those impulsive reactions have lasting consequences.
The current episode I'm watching, Nosedive, explores how people establish their self-worth through social media. In this world, people rate each other - their posts and their interactions. Even total strangers have the ability to rate a person who makes them smile or pisses them off. Your overall rating establishes your place in society, and impacts where you live, what job you can hold, and what medical treatment you can receive. As we follow the main character, Lacie, who is trying to raise her rating so she can get the apartment she wants, she encounters a string of bad luck causing her rating to plummet and everyone around her to shun her. It's much like the block, except now people look at your rating to decide if they're worth your time. They see you, but not really.
Either way, if you're blocked or down-rated, you're nobody. It reminded me of a poem by my favorite poet, Emily Dickinson:
I'm nobody! Who are you?
Are you nobody, too?
Then there's a pair of us -- don't tell!
They'd banish us -- you know!

How dreary to be somebody!
How public like a frog
To tell one's name the livelong day
To an admiring bog!

The message is clear: It's time to stop basing our self-worth on people who don't value us. That may mean accepting that you're nobody to someone. But in the end, what makes the character - and us - happiest are real interactions that aren't based on a momentary up/down judgment.
Friday, December 2, 2016
21 Years
21 years.
That's how long it has been since Nie Shubin was executed for the crimes of rape and murder, crimes that the Supreme People's Court of China just ruled he did not commit:
Amid emotional scenes in the courtroom, judges ruled that Nie's original trial didn't "obtain enough objective evidence," saying there were serious doubts about the time of death, murder weapon and cause of death.

China executes more people each year than any other country, and until 2013, police could use torture to obtain confessions; it wasn't until that year that the Supreme People's Court banned that practice. Psychological research has demonstrated that even without physical torture, police can get people to confess to crimes they did not commit. And they can do it using interrogation tactics that are perfectly legal. Even after the banning of physical torture, how many people in China could have been convicted and executed based on false confessions?
Another man, Wang Shujin, confessed in 2005 to the crime for which Nie was executed -- 10 years after Nie's execution.
For years, it seemed no one would listen, but Zhang [Huanzhi, Nie's mother] later found an unlikely ally in the People's Daily -- the official newspaper of the ruling Communist Party. It ran a scathing commentary in September 2011 that asked: "In a case where someone was clearly wronged, why has it been so difficult to make it right?"
"Rehabilitation means little to the dead, but it means a lot to his surviving family and all other citizens," the paper said. "We can no longer afford to let Nie's case drag on."
Many have viewed Zhang's plight -- and the case involving her only son -- as an egregious example of widespread police torture, deficient due process and lax review of death sentences.
Thursday, December 1, 2016
Keep It Secret, Keep It Safe: Mobile Apps and Your Data
A new study out of Carnegie Mellon University suggests that how a mobile app claims it will use your personal data is not always aligned with what it actually does:
An analysis of almost 18,000 popular free apps from the Google Play store found almost half lacked a privacy policy, even though 71 percent of those appear to be processing personally identifiable information and would thus be required to explain how under state laws such as the California Online Privacy Protection Act (CalOPPA).

The automated system combines natural language processing and machine learning to analyze privacy policy text, then compares those results to the actual code for the app. It also flags anything in the code that would warrant a privacy policy for apps that don't already have one.
Even those apps that had policies often had inconsistencies. For instance, as many as 41 percent of these apps could be collecting location information and 17 percent could be sharing that information with third parties without stating so in their privacy policy.
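The core idea, comparing the set of data practices a policy declares against the set the code appears to perform, can be sketched in a few lines. This toy version is not the CMU system (which uses NLP over policy text and static analysis of the app, not keyword lists); the keyword tables, API signatures, and function names here are all my own illustrative assumptions:

```python
# Toy sketch of a policy-vs-code consistency check: flag data practices
# the code appears to use that the privacy policy never mentions.

# Hypothetical lookup tables; a real system would learn these from data.
POLICY_KEYWORDS = {
    "location": ["location", "gps"],
    "contacts": ["contacts", "address book"],
    "device_id": ["device identifier", "advertising id"],
}
CODE_SIGNATURES = {
    "location": ["getLastKnownLocation", "requestLocationUpdates"],
    "contacts": ["ContactsContract"],
    "device_id": ["getAdvertisingIdInfo"],
}

def declared_practices(policy_text):
    """Practices the privacy policy mentions (crude keyword match)."""
    text = policy_text.lower()
    return {p for p, words in POLICY_KEYWORDS.items()
            if any(w in text for w in words)}

def observed_practices(code_lines):
    """Practices suggested by API calls found in the app's code."""
    return {p for p, sigs in CODE_SIGNATURES.items()
            if any(sig in line for line in code_lines for sig in sigs)}

def inconsistencies(policy_text, code_lines):
    """Practices present in code but absent from the policy."""
    return observed_practices(code_lines) - declared_practices(policy_text)
```

For example, an app whose policy mentions only location data but whose code also reads the advertising ID would be flagged for the undeclared identifier collection, which is exactly the kind of mismatch the article describes.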
“Overall, each app appears to exhibit a mean of 1.83 possible inconsistencies and that’s a huge number,” said Norman Sadeh, professor of computer science in CMU’s Institute for Software Research. The number of discrepancies is not necessarily surprising to privacy researchers, he added, “but if you’re talking to anyone else, they’re likely to say ‘My goodness!’”
Sadeh’s group is collaborating with the California Office of the Attorney General to use a customized version of its system to check for compliance with CalOPPA and to assess the effectiveness of CalOPPA and “Do Not Track” legislation.
New Periodic Table, Who Dis?
Time to throw out your old periodic table and buy a new one - four elements have been added and now they even have names:
Get ready to ring in 2017 with a brand new Periodic Table, because four more elements have officially been added to the seventh row: nihonium (Nh), moscovium (Mc), tennessine (Ts), and oganesson (Og).

The elements were already approved and added to the table with temporary names, as you can see in the picture below. But a new table should be available soon with the actual names. Stay tuned!
We’ve been hearing about these four new elements since January, but the International Union of Pure and Applied Chemistry (IUPAC) has finally announced that the names have been officially approved, so we’ve got the go-ahead to tear down all our posters and find some new ones.
To get to know our four new friends a little better, nihonium is derived from "Nihon", a Japanese word for Japan, and moscovium honours the Russian capital city, Moscow.
Tennessine is named after the state of Tennessee, known for its pioneering research in chemistry, and it marks the second US state to be honoured on the periodic table. The first was California, referenced by californium (element 98).
Oganesson is named after 83-year-old Russian physicist Yuri Oganessian, and this is only the second time a new element has been named for a living scientist.
[Image: periodic table showing the four new elements under their temporary names. By DePiep, CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0), via Wikimedia Commons]