Saturday, June 24, 2017

Historical Children's Literature (And Why I'll Never Run Out of Reading Material)

Via a writer's group I belong to, I learned about the Baldwin Library of Historical Children's Literature, a digital collection maintained by the University of Florida. A past post from Open Culture provides some details:
Their digitized collection currently holds over 6,000 books free to read online from cover to cover, allowing you to get a sense of what adults in Britain and the U.S. wanted children to know and believe. Several genres flourished at the time: religious instruction, naturally, but also language and spelling books, fairy tales, codes of conduct, and, especially, adventure stories—pre-Hardy Boys and Nancy Drew examples of what we would call young adult fiction, these published principally for boys. Adventure stories offered a (very colonialist) view of the wide world; in series like the Boston-published Zig Zag and English books like Afloat with Nelson, both from the 1890s, fact mingled with fiction, natural history and science with battle and travel accounts.
The post highly recommends checking out the Book of Elfin Rhymes, one of many works of fantasy from the turn of the century. It reminds me of a childhood favorite of mine, the Oz book series by L. Frank Baum - a world I continue to visit in my adult life through antique book collecting and occasional rereading. The illustrations in Elfin Rhymes are similar to the detailed illustrations you'd find in a first edition (or reprinted vintage edition) of an Oz book:

And if you're looking for more classics (and beyond) to read for free, Open Culture shares a list of 800 free ebooks here. This is a good find, considering I'm spending my afternoon cleaning out my bookshelf, putting books I've read (and am unlikely to reread soon) into storage to make room for new ones. My reading list continues to grow...

Friday, June 23, 2017

Map From the Past

I'm finally home from Colorado. On my flight yesterday (my 8th flight in the last month), I listened to a podcast from Stuff You Should Know on How Maps Work.

On this podcast, I learned about an international incident from 7 years ago that I missed at the time - Google Maps almost started a war:
The frenzy began after a Costa Rican newspaper asked Edén Pastora, a former Sandinista commander now in charge of dredging the river that divides the two countries, why 50 Nicaraguan soldiers had crossed the international frontier and taken up positions on a Costa Rican island. The ex-guerrilla invoked the Google Maps defense: pointing out that anyone Googling the border could see that the island in the river delta was clearly on Nicaragua’s side.
This was one incident in a long line of border disputes between Costa Rica and Nicaragua dating back to the 1820s. The Cañas–Jerez Treaty of 1858 was meant to settle these tensions, and it seemed to work for a while. The International Court of Justice ruled on this small island in 2015, reaffirming that the disputed piece of land belongs to Costa Rica.

You can read an overview of this dispute here.

Tuesday, June 20, 2017

He's No Frank Underwood

Two special elections are happening today: one in the 6th Congressional district of Georgia - the race receiving the most attention - and one in the 5th Congressional district of South Carolina, which happens to be the home district of fictional politician Frank Underwood of Netflix's House of Cards. Democrat Archie Parnell seems to be having a great time highlighting this connection. Check out this campaign ad:


Harry Enten of FiveThirtyEight explains why this special election matters, despite receiving less attention:
Voters in the South Carolina 5th are choosing between Republican Ralph Norman, a former state representative, and Democrat Archie Parnell, a former Goldman Sachs managing director who has been using ads parodying Underwood to draw attention to his campaign.

[T]his is not the type of district where Democrats tend to be competitive. It’s not even the type of district where they need to be competitive to win the House next year. Democrats need a net gain of only 24 seats from the Republicans to do that. And there are 111 districts won by Republican House candidates in 2016 that leaned more Democratic than the South Carolina 5th.

There hasn’t been a lot of polling of the South Carolina race, but what we do have shows that Parnell is outperforming the district’s default partisan lean, just not by nearly enough.

Even if Norman wins, as expected, we will still learn something about the state of U.S. politics. As I’ve written before, when one party consistently outperforms expectations in special elections in the runup to a midterm election, that party tends to do well in those midterms.

So keep an eye on how much Parnell loses by (assuming he loses). The closer Norman comes to beating Parnell by 19 points (or more) — the default partisan lean of the district — the better for the Republican Party. A Parnell loss in the low double digits, by contrast, would be consistent with a national shift big enough for Democrats to win the House.

Monday, June 19, 2017

Alexa, Buy Whole Foods

Back in May, I shared a story from the Guardian that Whole Foods' sales were declining and the company would be downsizing. The explanation was a combination of high prices (it's called Whole Paycheck for a reason) and the increased availability of organic and specialty products at other grocery stores.

Friday, it was announced that Amazon would be buying Whole Foods:
Wall Street is betting Amazon (AMZN) could be as disruptive to the $800 billion grocery industry as it has already proved to be for brick-and-mortar retail businesses.

Amazon already had a relatively small grocery business of its own, Amazon Fresh, but its acquisition of Whole Foods is a much more ominous sign for competitors.

Traditional grocers are already struggling with fierce competition and falling prices. Amazon's war chest and online strength, coupled with Whole Foods' brand power, could force grocers to cut costs and spend heavily on e-commerce.

"For other grocers, the deal is potentially terrifying," Neil Saunders, managing director of GlobalData Retail, said in a report on Friday. "Amazon has moved squarely onto the turf of traditional supermarkets and poses a much more significant threat."
And of course, Twitter users had a lot to say about the deal:
Stock prices for other grocers fell Friday, wiping out about $22 billion in market value. Obviously this isn't trivial, but after finishing Nassim Taleb's Fooled by Randomness recently, in which he specifically discusses randomness in the market, I'd be more interested in seeing what happens long-term (I'm expecting some regression to the mean soon).

And then there's the big question: what will happen to Whole Foods? You can already buy groceries through Amazon, including more "mainstream" products you don't see in Whole Foods. Will Whole Foods become just another grocery store?

Sunday, June 18, 2017

Statistics Sunday: Past Post Round-Up

For today's post, I thought I'd share my favorite posts on statistics - in this case, favorite means either a topic I really love or a post I really enjoyed writing (and for certain posts, those two are the same thing). Here they are:

  • Alpha, one of the most important concepts in statistics, in which I also give a short introduction to probability
  • Error, which builds on probability information from previous posts, and starts to introduce the idea of explained and unexplained variance
  • N-1, a concept many of my students struggled to understand in introductory statistics - this post helped me solidify my thoughts on the topic, and I think I understand it much better for having written about it
  • What's Normal Anyway, my first Statistics Sunday post, which had the added bonus of proving to myself there is a way to explain skewness and kurtosis in a way people understand, and that these don't need to be considered advanced topics
  • Analysis of Variance, which used the movie theatre example I first came up with when I taught statistics for the first time - I remember overhearing my students during their final exam study sessions saying to each other, "Remember the movie theatre..."
I plan on getting back to writing regular posts soon, and have a list of statistics topics to sit down and write about. Stay tuned.

Friday, June 16, 2017

Updates

I haven't blogged in the last few days. Why? I'm back in Colorado again. (Sing that last line to the tune of Aerosmith's Back in the Saddle if you would.) A family health issue called me back, and I'm writing this post from a dingy motel room with a large no-smoking sign that I find hilarious, because the room reeks of smoke - but it was the only place with a room available not too far from the hospital. But hey, I'm in Colorado, so here's what I'm doing for fun:
  • Trying all the Colorado beer - I'm currently having New Belgium Voodoo Ranger IPA in my dingy motel room; but I've recently had: Breckenridge Mango Mosaic Pale Ale; a flight at Ute Pass Brewing Company that included their Avery IPA, High Point Amber, Sir Williams English Ale, and Kickback Irish Red plus a tap guest of Boulder Chocolate Shake Porter; and an Oskar Blues Blue Dream IPA
  • Listening to all the podcasts, including an excellent one about how beer works from Stuff You Should Know, as well as some of my favorite regular podcasts from Part-Time Genius, WaPo's Can He Do That?, FiveThirtyEight Politics, StarTalk, Overdue, and Linear Digressions
  • Enjoying three new albums: Spoon's Hot Thoughts, Lorde's Melodrama, and Michelle Branch's (She's still making music! My college self is thrilled!) Hopeless Romantic
  • Reading The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb, which Daniel Kahneman said "changed my view of how the world works"; Kahneman, BTW, is a psychologist with a Nobel Prize in Economics
  • Also reading (because one can never have too many books) Sports Analytics and Data Science: Winning the Game with Methods and Models by Thomas W. Miller - because I've been trying to beef up my data science skills and thought doing it with data I really enjoy (i.e., sports data) would help motivate me
  • Acquiring new skills such as hitching a fifth wheel (sadly I didn't discover or watch this video until long after hitching the fifth wheel), driving about 70 miles with said fifth wheel, and storing said fifth wheel - I'm considering adding these skills to my résumé
Tomorrow, I'm planning to spend a few hours checking out the Colorado Renaissance Festival. For now, here's a picture from the Garden of the Gods today:

Tuesday, June 13, 2017

What Democrats and Republicans Can Agree On

Yesterday, I listened to the FiveThirtyEight podcast in which they discussed "the base" - both Democratic and Republican - and they spent some time trying to operationally define what would be considered the base of these parties.

This is actually surprisingly difficult. As the hosts point out, ideology (a continuum from liberal to conservative) and party affiliation (e.g., Democrat, Republican) are two different things, and although they sometimes go together, they can also diverge. Determining whether a person is part of the Democratic or Republican base has to involve more than simply determining whether they're liberal or conservative. They also have to align with party activities and causes, and have a voting track record aligned with the party.

I highly recommend giving the podcast a listen.

In the podcast, they also talk about the parties more generally and even highlight some of the things Republicans and Democrats can agree on - specifically that the President should stay off of Twitter. So U.S. Representative Mike Quigley's COVFEFE (Communications Over Various Feeds Electronically for Engagement) Act is well-timed:
This bill codifies vital guidance from the National Archives by amending the Presidential Records Act to include the term “social media” as a documentary material, ensuring additional preservation of presidential communication and statements while promoting government accountability and transparency.

“In order to maintain public trust in government, elected officials must answer for what they do and say; this includes 140-character tweets,” said Rep. Quigley. “President Trump’s frequent, unfiltered use of his personal Twitter account as a means of official communication is unprecedented. If the President is going to take to social media to make sudden public policy proclamations, we must ensure that these statements are documented and preserved for future reference. Tweets are powerful, and the President must be held accountable for every post.”

In 2014, the National Archives released guidance stating its belief that social media merits historical recording. President Trump’s unprecedented use of Twitter calls particular attention to this concern. When referencing the use of social media, White House Press Secretary Sean Spicer has said, “The president is president of the United States so they are considered official statements by the president of the United States.”

Sunday, June 11, 2017

Statistics Sunday: Parametric versus Nonparametric Tests

In my posts about statistics, I've tried to pay some attention to the assumptions of different statistical tests. One of the key assumptions of many tests is that data are normally distributed. I should add that this is a key assumption for many of what we call 'parametric' tests.

Remember that in statistics lingo, parameter is the term we use for values that apply to populations, whereas statistics are values computed from samples. When we try to generalize back to the population, we want our sample data to follow a similar distribution as the population - this distribution is often normal, but not always. In any case, any time we make assumptions about the distribution of the data, we're using parametric tests that include those assumptions. The t-test is considered a parametric test because it includes assumptions about the sample (and hence, the population) distribution.

But if your data are not normally distributed, there are still many tests you can use, specifically ones known as distribution-free or nonparametric tests. During April A to Z, I talked about Frank Wilcoxon, who contributed two tests that are analogues of the t-test but make no assumptions about the distribution.

A test doesn't have to assume normally distributed data to be considered parametric - there are many distributions data can follow, and an assumption of normality is a sufficient but not necessary condition. What is necessary for a parametric test is some assumption about what the data should look like. If a test's assumptions make no mention of the data's distribution, it's considered a nonparametric test. One well-known nonparametric test is the chi-square, which I'll blog about in the near future.
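If you'd like to see the distinction in action, here's a minimal sketch in R with simulated data (the variable names are just for illustration):

set.seed(123)
normal_scores <- rnorm(30, mean = 50, sd = 10)  # roughly normal group
skewed_scores <- rexp(30, rate = 1/50)          # clearly non-normal group

# Parametric approach: the t-test, with its distributional assumptions
t.test(normal_scores, skewed_scores)

# Nonparametric analogue: the Wilcoxon rank-sum test (no normality assumption)
wilcox.test(normal_scores, skewed_scores)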

Saturday, June 10, 2017

Alan Smith on Why You Should Love Statistics

I happened upon this TED Talk from earlier in the year, in which Alan Smith explains why he loves (and why you should love) statistics - his reason is very similar to mine:

Friday, June 9, 2017

Catching Up on Reading

I've been on vacation (currently in Denver) and haven't made time to blog, although I'm sure I'll be blogging regularly again when we return to Chicago. I've been keeping up with reading my favorite blogs, but today will be spent squeezing in our last bit of Denver sightseeing before flying up to Montana to visit family for the weekend. So here's my reading list for when I get a little downtime at the airport:

Monday, June 5, 2017

Greetings from Colorado

I'm writing this post from my cabin in Woodland Park, CO, about 30 minutes from Colorado Springs. We flew in yesterday afternoon, and despite a forecast of rain for our full visit, the weather is sunny and clear. Here are some photo highlights, with more to come:

We'll have to pick up some of this excellently named jerky when we go back to the airport.

The castle rock in the aptly named Castle Rock, CO.

As we got closer to Woodland Park, we drove through these gorgeous tree-populated hills...

and red rocks. I'll get better pictures when we head back to Colorado Springs later today for lunch at a brewery.

Our home for the next couple days in Woodland Park, CO.

Our cute cabin...

and my parents' cute dog, Teddy, who came to greet us shortly after our arrival.

We had a nice view of Pikes Peak at dinner last night. We'll have an even better view when we take the tram up the mountain tomorrow.

And because it's Colorado:


The ashtray right outside our cabin is clearly marked "Cigarettes only." Hmm, what else would people be smoking in Colorado? ;)

Sunday, June 4, 2017

Statistics Sunday: Linear Regression

Back in Statistics in Action, I blogged about correlation, which measures the strength of a linear relationship between two variables. Today, I'd like to talk about a similar statistic that differs mainly in how you apply and interpret it: linear regression.

Recall that correlation ranges from -1 to +1, with 0 indicating no relationship and the sign indicating direction: positive when one variable goes up and the other goes up, negative when one goes up and the other goes down. That's because correlation is standardized: to compute a correlation, you have to convert values to Z-scores. Regression is essentially correlation, with a few key differences.
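As a quick aside, you can convince yourself of that Z-score connection in R - a minimal sketch with simulated data:

set.seed(42)
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)

z_x <- scale(x)  # convert x to z-scores
z_y <- scale(y)  # convert y to z-scores

sum(z_x * z_y) / (length(x) - 1)  # the average product of the z-scores...
cor(x, y)                         # ...matches Pearson's r exactly

Both commands print the same value.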

First of all, here's the equation for linear regression, which I'm sure you've seen some version of before:

y = bx + a

You may have seen it instead as y = mx + b or y = ax + b. It's a linear equation:


A linear equation is used to describe a line, using two variables: x and y. That's all regression is. The difference is that the line is used as an approximation of the relationship between x and y. We recognize that not every case falls perfectly on the line. The equation is computed so that it gets as close to the original data as possible, minimizing the (squared) deviations between the actual score and the predicted score. (BTW, this approach is called least squares, because it minimizes the squared deviations - as usual, we square the deviations so they don't add up to 0 and cancel each other out.)

As with so many statistics, regression uses averages (means). To dissect this equation (using the first version I gave above), b is the slope, or the average amount y changes for each 1-unit change in x. a is the constant, or the average value of y when x is equal to 0. Because we have one value for the slope, we assume there is a linear relationship between y and x - that is, the relationship is the same across all possible values. So regardless of which values we choose for x and y (within our possible ranges), we expect the relationship to be the same. There are other regression approaches we use if and when we think the relationship is non-linear, which I'll blog about later on.

Because our slope is the amount of change we expect to see in y and our constant is the average value of y for x=0, these two values are in the same units as our y variable. So if we were predicting how tall a person is going to grow in inches, y, the slope (b), and the constant (a) would all be in inches. If we use standardized values, which is an option in most statistical programs, our b would be equal to the correlation between x and y.
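To make that concrete, here's a small preview in R (simulated data; a fuller walkthrough deserves its own post):

set.seed(1)
x <- rnorm(50, mean = 10, sd = 2)
y <- 3 * x + 5 + rnorm(50, sd = 4)  # true slope of 3, true constant of 5

fit <- lm(y ~ x)  # least squares regression of y on x
coef(fit)         # the intercept is a, the x coefficient is b, both in y's units

# Standardized version: now the slope equals the correlation
fit_z <- lm(scale(y) ~ scale(x))
coef(fit_z)
cor(x, y)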

But what if we want to use more than one x (or predictor) variable? We can do that, using a statistic called multiple linear regression. We would just add more b's and x's to the equation above, giving each a subscript number (1, 2, ...). There are many cases where more than one variable would predict our outcome.

For instance, it's rumored that many graduate schools have a prediction (regression) equation they use to predict the grad school GPA of applicants, using some combination of test scores, undergraduate GPA, and strength of recommendation letters, among other predictors. They're not sharing what that equation is, but we're all very sure they use one. The problem when we use multiple predictors is that they are probably also related to each other. That is, they share variance and may predict some of the same variance in our outcome. (Using the grad school example, it's highly likely that someone with a good undergraduate GPA will also have, say, good test scores, making these two predictors correlated with each other.)

So when you conduct multiple linear regression, you're not only taking into account the relationship between each predictor and the outcome; you're also correcting for the fact that the predictors are correlated with each other. That means you want to check the relationships among your predictors. If two variables are so highly related that one could be used as a proxy for the other, your variables are collinear, meaning they predict the same variance in your outcome. Weird things happen when you have collinear variables. If the shared variance is very high (almost full overlap in a Venn diagram), a variable that should have a positive relationship with the outcome might end up with a negative slope. This happens because one variable is correcting for the other's overprediction; when it does, we call it suppression. The only way to deal with it is to drop one of the collinear variables.
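Here's what checking for collinearity might look like in R - a minimal sketch with simulated predictors, assuming you've installed the car package (which provides variance inflation factors):

set.seed(2)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.1)  # x2 is nearly a proxy for x1
y  <- x1 + rnorm(100)

cor(x1, x2)  # a correlation near 1 is a red flag

fit <- lm(y ~ x1 + x2)
summary(fit)  # watch for odd-signed or unstable slopes

library(car)
vif(fit)  # variance inflation factors; a common rule of thumb flags values over 10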

Obviously, it's unlikely that your regression equation will perfectly describe the relationship between or among variables. The equation will always be an approximation. So we measure how good our regression equation is at predicting outcomes using various metrics, including the proportion of variance in the outcome variable (y) predicted by the x('s), as well as how far the predicted y's (using the equation) fall from the actual y's - we call these differences residuals.

In a future post, I'll show you how to conduct a linear regression. It's actually really easy to do in R.

Saturday, June 3, 2017

In Good Taste

Several years ago, while I was still in grad school and teaching college classes regularly, I attended a workshop at the Association for Psychological Science Teaching Institute (which occurs right before the full APS conference). The workshop was a demonstration of different taste perception activities one could use in either an introductory psychology or sensation & perception course. One activity used paper that had been soaked in a bitter-tasting chemical (probably phenylthiocarbamide); you placed the paper on the tip of your tongue. This activity lets people identify whether they're a "super-taster," meaning they have a lot of bitter taste buds. My reaction to the bitter taste was immediate, meaning I'm a super-taster. I was also one of the youngest people in the room, and the person running the workshop went on to share that children have more bitter taste buds than adults, which may explain why they don't tend to like bitter-tasting foods, like Brussels sprouts or broccoli, as much as adults do.

Our tastes really do change over time, and there are also a lot of individual differences when it comes to taste, even among people from the same age group. This month's FiveThirtyEight Sparks podcast involves a discussion about differences in taste, as well as an interview with Bob Holmes, author of Flavor: The Science of Our Most Neglected Sense:


The group also does some flavor tripping. We did a little of that in the APS workshop, and a few years ago I attended a flavor tripping party with a few friends.

Friday, June 2, 2017

State Maps

You've probably seen the most recent state map making the rounds, which displays the most commonly misspelled word in each of the 50 states. XKCD had a brilliant response:


Wednesday, May 31, 2017

On Patents, Printers, and Consumer Behavior

Yesterday, the U.S. Supreme Court ruled on a very interesting case. Lexmark argued that third parties refilling its ink cartridges constituted a patent violation. The Supreme Court took this case as an opportunity to further define patent exhaustion - the point at which a patent holder can no longer control what happens to an individual instance of their patented product. According to the court, refilling ink cartridges is not a patent violation, because once an individual purchases a product, what happens to it afterward is no longer in the patent holder's control:
Lexmark’s rights to control the use of its patented refillable print cartridges would be “exhausted” when it sells those cartridges to retail buyers, even if Lexmark conditions the sale on the promise that the buyer will not refill the cartridge. That, at any rate, is the argument of Impression Products, which makes a business out of refilling Lexmark cartridges in violation of those agreements. Lexmark’s argument, by contrast, supported by a quarter-century of Federal Circuit precedent, is that modern commerce requires that innovators have the flexibility to devise contracting structures that segment the market into separate sectors, each of which gets a different price commensurate with the uses to which products will be put in that sector.

[T]he court concluded that “extending the patent rights beyond the first sale would clog the channels of commerce, with little benefit from the extra control that the patentees retain.” The court pointedly noted that “increasingly complex supply chains [well might] magnify the problem,” offering a citation to an amicus brief suggesting that a “generic smartphone … could practice an estimated 250,000 patents.”
In a sense, the case is about control - do companies have control over what happens to their products after someone has purchased them? Specifically, can companies control the behavior of consumers? You can understand where Lexmark is coming from: they're missing out on extra sales if people can simply buy a cartridge once and refill it. But printer cartridges can be expensive, so you can also understand the consumer's behavior here. This ruling will obviously have an impact on the behavior of companies as well as consumers. Third-party companies are liable to test the boundaries of this ruling. And I would imagine Lexmark (and other companies manufacturing printers) will look for ways to redesign printer cartridges so they can't be refilled.

Tuesday, May 30, 2017

B.F. Skinner: The Last Behaviorist

Via Advances in the History of Psychology, I learned today about an upcoming film, The Last Behaviorist, which is "an audio-visual portrait" of Skinner and his theories:
“If I am right about human behavior, I have written the autobiography of a nonperson.”
- B.F. SKINNER, A Matter of Consequences: Part Three of An Autobiography

The Last Behaviorist takes Skinner’s proposition as a conceptual point of departure - it is an audio-visual portrait, examining the biographical history, ideas, words, and representations of a non-person through raw footage of the subject and their environment.
No release date has been announced for the film, but you can sign up to be on the filmmakers' mailing list. As a recovering radical behaviorist, I'll definitely check out this film.

Gender Bias in Political Science

This morning, the Washington Post published a summary (written by the study authors) of a study examining gender bias in publications in the top 10 political science journals.
Our data collection efforts began by acquiring the meta-data on all articles published in these 10 journals from 2000 to 2015. Web-scraping techniques allowed us to gather information on nearly 8,000 articles (7,915), including approximately 6,000 research articles (5,970). The journals vary in terms of the level of information they provide about the nature of each article, but we were generally able to determine the type of article (whether a research article, book review, or symposium contribution), the names of all authors—from which we could calculate the number of authors—and often the institutional rank of each author (for example, assistant professor, full professor, etc.). In what follows, we describe the variable generation process for all types of articles in the dataset, but note that the findings we report stem from an analysis of authorship for research articles only, and not reviews or symposia.

Using an intelligent guessing technique (compared against a hand-coding method) we used authors’ first names to code author gender for all articles in the database. We also hand-coded the dominant research method employed by each research article. We were further able to generate women among authors (%) which is the share of women among all authors published in each journal, as well as other variables related to the gender composition for each article, which include information about whether each article was written by a man working alone, a woman working alone, an all-male team, an all-female team, or a co-ed team of authors. Because the convention in political science is generally to display author names alphabetically, we have not coded categories like “first author” or “last author” which are important in the natural sciences.
As you can see from the table below, there were low percentages of women among authors across all 10 journals:


One explanation people offer for underrepresentation of women is that there are simply fewer women in the field. But that's not the case here:
Women make up 31 percent of the membership of the American Political Science Association and 40 percent of newly minted doctorates. Within the 20 largest political science PhD programs in the United States, women make up 39 percent of assistant professors and 27 percent of tenure track faculty.
Instead, they offer two explanations:

1) Women aren't being offered as many opportunities for coauthorship:
The most common byline across all the journals we surveyed remains a single male author (41.1 percent); the second most common form of publication is an all-male “team” of more than one author (24 percent). Cross-gender collaborations account for only 15.4 percent of publications. Women working alone byline about 17.1 percent of publications, and all-female teams take a mere 2.4 percent of all journal articles.
2) The research methods most often used by women political scientists (qualitative methods) are less likely to be published in these top journals than studies using quantitative methods. As a mixed-methods researcher, I frequently use qualitative methods - this was especially true in my work for the Department of Veterans Affairs, where we studied topics that were not only complex and nuanced, but poorly studied and sometimes occurring in a small subset of the population. These are the perfect conditions for a well-done qualitative study to establish concepts that can later be studied quantitatively. But it's difficult to write a survey or create a measure without that basic knowledge. (That doesn't stop people from doing it, leading to bad research. But hey, it uses numbers, so it must be good, right? </sarcasm>) I frequently received snide remarks from other researchers and consumers of research who didn't believe qualitative methods were rigorous or even scientific. And, as I've blogged about before, I received similar comments in some of my peer reviews.

The authors recognize that perhaps the reason for low representation of women may be because they simply aren't submitting to these journals. But:
[I]f women are not submitting to certain journals in numbers that represent the profession, this is the beginning and not the end of the story. Why not?

Political scientists have helped forge crucial insights into the “second” and “third faces” of power — ideas that help explain that the effects of power can be largely invisible.

The second face of power refers to a conscious decision not to contest an outcome in light of limited prospects for success, as when congressional seats go uncontested in districts that are solidly red or blue.

The third face of power is more subtle and refers to the internalization of biases that operate at a subconscious level, as when many people assume, without thinking, that wives — and not husbands — will adjust their careers and even their expectations to accommodate family and spouse.

Let’s apply those insights to the findings from our study. If women aren’t submitting in proportional numbers to prestigious journals, that may result from conscious decisions based on the second face of power: They don’t expect their work to be accepted because they don’t see their type of scholarship being published by those journals. Or they may refrain from submitting because of a more internalized, third-face logic, taking it for granted that scholars like “me” don’t submit to journals like that.

Either way, publication patterns are self-enforcing over time, as authors come to see it as a waste of time to submit to venues whose past publications do not include the kind of work they do or work by scholars like them.

Monday, May 29, 2017

Sara's Week in Psychological Science: Conference Wrap-Up

I'm back from Boston and collecting my thoughts from the conference. I had a great time and made lots of new connections. While I didn't have a lot of visitors to my poster, I had some wonderful conversations with a few visitors and other presenters - quality over quantity. I'm also making some plans for the near future. Stay tuned: there are some big changes on the horizon I'll be announcing, starting later in the week.

In the meantime, I'm revisiting notes from talks I attended. One in particular presented a flip side of a concept I've blogged about a lot - the Dunning-Kruger effect. To refresh your memory, the Dunning-Kruger effect describes the relationship between actual and perceived competence: people who are low (or very high) in actual competence tend to rate their own competence more highly than people with a moderate level of competence do - an effect that has been observed for a variety of skills.

The reason for this effect has to do with knowing what competence looks like. You need a certain level of knowledge about a subject to know what true competence looks like. People with moderate competence know quite a bit but also know how much more there is to learn. But people with low competence don't know enough to understand what competence looks like - in short, they don't know what they don't know. (In fact, you can read a summary of some of this research here, which I co-authored several years ago with my dissertation director, Linda Heath, and a fellow graduate student, Adam DeHoek.)

The way to counteract this effect is to show people what competence looks like. But one presentation at APS this year showed a negative side effect of this tactic. Todd Rogers from the Harvard Kennedy School presented data collected through Massive Open Online Courses (MOOCs - such as those you'd find listed on Coursera). These courses have high enrollment but also high attrition - for instance, it isn't unusual for a course to enroll 15,000 students but have only 5,000 complete all assignments.

Even with 66.7% attrition, that's a lot of grading. So MOOCs deal with high enrollment using peer assessment: students are randomly assigned to grade other students' assignments. In his study, Dr. Rogers looked at the effect of the quality of those randomly assigned essays on course completion.

He found that students who received high-quality essays were significantly less likely to finish than students who received low-quality essays. A follow-up experiment, in which participants were randomly assigned to receive multiple high-quality or low-quality essays, confirmed these results. When people are exposed to competence, their self-appraisals go down, mitigating the Dunning-Kruger effect. But now they're also less likely to try. Depending on the skill, this might be the desired outcome, but not always. Usually when you try to get people to make more accurate self-assessments, you aren't trying to make them give up entirely, but rather to accept that they have more to learn.

So how can you counteract the Dunning-Kruger effect without also potentially reducing a person's self-efficacy? I'll need to revisit this question sometime, but share any thoughts you might have in the comments below!

In the meantime, I leave you with a photo I took while sightseeing in Boston:

Sunday, May 28, 2017

Statistics Sunday: Getting Started with R

For today's post, I'm going to get you started with using R. This will include installation, importing data from an external file, and running basic descriptives (and a t-test, because we're fancy).

But first, especially for statistics newbies, you may be asking - what the heck is R?

R is an open source statistical package, as well as the name of the programming language used to run analysis (and do some other fancy-schmancy programming stuff we won't get into now - but I highly recommend David Robinson's blog, Variance Explained, to see some of the cool stuff you can do with R). R comes with many statistical and programming commands by default, part of what's called the 'base' package. You can add to R's statistical capabilities by installing different libraries. Everything, including new libraries and documentation about these libraries, is open source, making R an excellent choice for independent scholars, students, and anyone else who can't blow lots of money on software.

R will run on multiple operating systems, so whether you're using Windows, Mac OS, or a distro of Linux, you'll be able to install and run R. To install R, navigate over to the Comprehensive R Archive Network (CRAN). Links to install are available at the top of the page. I just reinstalled R on my Mac, with the newest version (at the time of this writing) called "You Stupid Darkness" (aka 3.4.0). If and when you write up any statistical analysis you did in R, you'll want to report which version you used (this is true any time you use software to run analysis, not just when you use R).

After you install R, you'll also want to install R Studio. It's an excellent resource regardless of whether you're new to R or an advanced user.


R Studio organizes itself into four quadrants:
  1. Upper left - Any R scripts or markdown (for LaTeX lovers, like myself - future post!) files are displayed here. Code you write here can be saved for future use. Add comments (starting the line with #) to include notes with your code. This is great if you (or someone else) will revisit the code later, and it's also helpful for reminding yourself what you did if and when you write up your results. Highlight the code you want to run and click 'Run' to send it to the console.
  2. Lower left is the console. This is where active commands go. If you run code from a script above, it will appear here along with any output. You can also type code directly here but note that you can't save that code for later use/editing.
  3. Upper right records any variables, lists, or data frames that exist in the R workspace (that is, anything you've run that creates an object). There's also a history tab that displays any code you ran during your current session.
  4. Lower right is the viewer. You can view (and navigate through) folders and files on your computer, packages (libraries) installed, any plots you've created, and help/documentation.
The great thing about R Studio is that you can access many things by clicking instead of typing into the console, which is all you get if you open R directly instead of R Studio. For some things, you'll find typing code is faster - such as changing your working directory, or loading or installing libraries. In fact, when I first started using R regularly, I was installing 4-5 libraries a day, which I briefly considered (half-jokingly) using as a measure of productivity. Now that I've reinstalled R on my Mac (because I completely wiped the hard drive and reinstalled - long story), I could actually collect these data instead of just joking about doing so.

But when you have to go through multiple steps for certain things - such as viewing the help for a specific command within a specific library - you'll find R Studio makes it much easier.

R Studio will also do some auto-complete and pop-help when you type things into the script window, which is great if you can't quite remember what a command looks like. It can also tell when you're typing the name of a dataset or variable and will pop up a list of active data and variables. Super. Helpful.

Hopefully you were able to install these two programs (and if you haven't done so yet because you've been distracted by this love letter to R Studio - er, fantastically written post - do that now). Now, open R Studio; R will automatically load too, so you don't need to open both. By default, the whole left side of the screen will be the console. Create a new script (by clicking the icon that looks like a white page with a green plus and selecting R Script, or by clicking File -> New File -> R Script) and the console will move down to make room.

The first thing I always do in a new script is change the working directory - to whatever folder you'll be working with for your project, which can vary depending on what data you're using. For now, start by downloading the Caffeine study file (our fictional study about the effect of caffeine on test performance, first introduced here), save it wherever you want, then change the working directory to that folder by typing setwd("directory") into the script (replacing directory with wherever the file is saved - keep the quotes and change any \ to /). (If you really don't want to type that code, in the lower right viewer, navigate to where you saved the file, then click More -> Set As Working Directory. The code you want will appear in the console, so you can copy and paste it into the script for future use.)

Let's read this file into R to play with. The file is saved as a tab-delimited file. R base has a really easy command for importing a delimited file. You'll want to give the dataset a name so you can access it later. Here's what I typed (but you can name the first part whatever you'd like):

caffeine<-read.delim("caffeine_study.txt", header=TRUE, sep="\t")

You've now created your first object, which is a dataframe called "caffeine." The command that follows the object name tells R that the file has variable names in the first row (header=TRUE) and that the delimiter is a tab (sep="\t"). Now, let's get fancy and run some descriptive statistics and a t-test, recreating what you saw here. But let's make it easy on ourselves by installing our first package: the psych package*. Type this (either into your script, then highlight and Run, or directly into the console): install.packages("psych"). You just installed the psych package, which, among other things, lets you run descriptive statistics very easily. So type and Run this: 

library("psych") (which loads the library you need for...)
describe(caffeine) (or whatever you named your data)

You'll get output that lists the two variables in the caffeine dataset (group and score), plus descriptive statistics, including mean and standard deviation. This is for the sample overall. You can get group means like this:

describeBy(caffeine, group="group")

Now you'll get descriptives first for the control group (coded as 0) and then the experimental (coded as 1). It will still give you descriptives for the group variable, which is now actually a constant, because the describeBy function is separating by that variable. So the mean will be equal to the group code (0 or 1) and standard deviation will be 0. You should have group 0 M = 79.27 (SD = 6.4) and group 1 M = 83.2 (SD = 6.21). Now, let's run a t-test. R's base package can run a t-test: t.test(DV ~ IV, data=nameofdata). So with the caffeine dataset it would be:

t.test(score ~ group, data=caffeine)

One note on the output: R's t.test runs Welch's t by default, which shifts your degrees of freedom slightly to account for unequal variances between the groups (future post!), so that's the version you'll see here. And oh yeah, as I said previously, these data are fake, so don't try to publish or present any results.
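If you're comfortable assuming equal variances and want the classic Student's t instead, pass one extra argument to the same function:

t.test(score ~ group, data=caffeine, var.equal=TRUE)

Leaving var.equal out (it defaults to FALSE) gives you Welch's t.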

R can read in many different types of data, including fixed-width files and files created by other software (such as SPSS files). Look for future posts on that. And R can go both ways - not only can it read a tab-delimited file, it can write one too. For instance, if you're doing a lot of different transformations or computing new variables, you might want to save your new datafile for later use. I've also used this command to write results tables to a tab-delimited file I can then import into Excel for formatting. To write output to a file, you first need to save it as a named object. So to write your descriptives to a tab-delimited file, you'd name the object:

desc<-describe(caffeine)

Note that above, we just typed the describe command in directly, so you'll want to rerun it with a name and the arrow (<-). (This is, in my opinion, the easiest way for a new R user, but there is a way to do all of this in one step that we can get into later.) Now, write the descriptives to a tab-delimited file:

write.table(desc, "desc.txt", row.names=FALSE, sep="\t")

Without row.names=FALSE, R will add numbers to each row. This might be helpful when writing data to a tab-delimited file (it basically gives you case numbers), but I tend to suppress it, mostly because I almost always give my cases some kind of ID number from the beginning.

One note for any R-savvy readers of this post - the sep argument technically isn't needed in either the read.delim or write.table commands, because tab ("\t") is the default, but I include it to be clear about which delimiter I'm using, and so you get used to specifying one. After all, you might need to use a comma delimiter or something else in the future.
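Speaking of other file types: as a preview of those future posts, here's roughly what importing a couple of other formats looks like. The file names and widths below are made up for illustration; the foreign package ships with R:

library(foreign)
spss_data <- read.spss("mydata.sav", to.data.frame=TRUE)  # SPSS file (hypothetical name)

fwf_data <- read.fwf("mydata.txt", widths=c(2, 5, 3), header=FALSE)  # fixed-width file; widths are made up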

Hopefully this has given you enough to get started. You can view help files for different packages by going to the packages tab in the lower right, then clicking on the package name. Scroll through the different commands available in that package and click on one to see more info about it, including sample code. I hope to post some new R tutorials soon! And let me know in the comments if you have any questions about anything.

*Check out William Revelle's page for great resources about the psych package (which he created) and R in general.

Saturday, May 27, 2017

Different Distributions

As I was logging some recently watched movies in Letterboxd, I noticed something interesting: the ratings for Alien: Covenant are roughly normally distributed.


Get Out, on the other hand, is negatively skewed:


I'm still at the conference. More later. 

Friday, May 26, 2017

Sara's Week in Psychological Science: Conference Day #1

Today was my first full day at the conference - the annual meeting of the Association for Psychological Science. (Last night's post was hastily written on my phone while enjoying a beer and dessert, hence the lack of links.)

I'll be presenting a poster tomorrow afternoon. In the meantime, I've been sitting in interesting presentations today.

First up this morning was a panel on psychometric approaches. There was a lot of attention given to Bayesian approaches, and this just signals to me something I've suspected for a while - I should learn Bayesian statistics. I'll probably write more about this approach in a future Statistics Sunday post, but to briefly summarize, Bayesian statistics deal with probability differently than traditional statistics, mostly in the use of "priors" - prior information we have about the thing we're studying (such as results from previous studies) or educated guesses on what the distribution might look like (for very new areas of study). This information is combined with the data from the present study to form a "posterior" distribution. There are some really interesting combinations of Bayesian inference with item response theory (a psychometric approach, which I've blogged about before and should probably discuss in more detail at some point). One great thing about Bayesian approaches is that they don't require normally distributed data.
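To make the prior-to-posterior idea concrete, here's a toy example in R - my own sketch, not something from the talk - using the conjugate Beta-Binomial model:

# Prior belief about a success rate, expressed as a Beta distribution
prior_a <- 2
prior_b <- 2  # Beta(2, 2): gently centered on 0.5

# Hypothetical new data: 35 successes, 15 failures
successes <- 35
failures <- 15

# With a Beta prior and binomial data, the posterior is Beta in closed form
post_a <- prior_a + successes
post_b <- prior_b + failures

curve(dbeta(x, post_a, post_b), from=0, to=1,
      xlab="Success rate", ylab="Density")      # the posterior
curve(dbeta(x, prior_a, prior_b), add=TRUE, lty=2)  # dashed line: the prior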

The panel was devoted to the benefits and drawbacks of different kinds of psychometric models and the research situations in which you should use special models - here's one of my favorite slides of the panel:


I also attended a presentation for a brand new journal, Advances in Methods and Practices in Psychological Science, which will be publishing its first issue early next year:
The journal publishes a range of article types, including empirical articles that exemplify best practices, articles that discuss current research methods and practices in an accessible manner, and tutorials that teach researchers how to use new tools in their own research programs. An explicit part of the journal’s mission is to encourage discussion of methodological and analytical questions across multiple branches of psychological science and related disciplines. Because AMPPS is a general audience journal, all articles should be accessible and understandable to the broad membership of APS—not just to methodologists and statisticians. The journal particularly encourages articles that bring useful advances from within a specialized area to a broader audience.
I already have an idea for a paper I'd like to submit.

The last session of the day I attended was on implicit biases and how they impact real-world interactions between police and community members, doctors and patients, and employers and employees.

All that's left is a reception tonight. At the moment, I'm relaxing in my hotel room before heading out to try a German restaurant with an excellent beer selection.

Thursday, May 25, 2017

Psychological Science in Boston

I arrived in Boston earlier this afternoon to attend the annual meeting of the Association for Psychological Science. I'll have details on the conference tomorrow; while there were events and workshops earlier today, these were mostly pre-conference activities. I attended the official opening reception this evening. Here are some photo highlights of the day:

The view from my hotel

Some great psychology buttons I found at the opening reception 

A beautiful old church I walked by on my way back to the hotel

The hotel bar

And dessert

In fact I'm enjoying dessert right now. After this, I'll head back up to my room for some reading and/or Netflix before heading to bed.

Wednesday, May 24, 2017

Science, Uncertainty, and "The Hunt for Vulcan"

Today, I'm listening to a science podcast from earlier this month, "How the Planet Vulcan Changed Science Forever":
In the podcast, which runs in FiveThirtyEight’s What’s The Point feed, senior science writer Maggie Koerth-Baker, lead science writer Christie Aschwanden and senior editor Blythe Terrell talk through how science ideas evolve over time — and how challenging that process can be.

The second part of this month’s podcast features Christie interviewing [Thomas] Levenson about [his] book, [The Hunt for Vulcan].
I'll have to add Levenson's book to my reading list. And if you want to read ahead for next month's podcast, they'll be talking about "Flavor" by Bob Holmes.

Tuesday, May 23, 2017

Trump's Budget

Trump released his first budget, which FiveThirtyEight observes is built on fantasy:
President Trump’s first budget, released Tuesday, is not going to become law. First, because presidents’ budgets never become law, not the way they’re initially proposed. And second, because the specifics of Trump’s fiscal 2018 budget — enormous cuts to nearly every significant government program other than defense, Social Security and Medicare in order to pay for huge tax cuts that would go disproportionately to the wealthy — seem designed to alienate not just Democrats (at least a few of whom Trump needs to get his budget through the Senate) but also moderate Republicans and the public at large. Trump likely knows this; the White House released the budget while he is thousands of miles away on his first foreign trip as president.
Hmmm...

The fantastical part of his budget? He's basing it on accelerating economic growth, up to 3% by 2021. This is much higher than estimates from the Congressional Budget Office (1.9%), the Federal Reserve (1.8%), and what was observed last year (1.6%). There are also countless threats to economic growth, including limits on immigration and the retirements of baby boomers. Productivity is also slowing, and no one knows why, making it difficult to predict what the economy will look like.

The response from the White House basically chides the Obama administration for being so pessimistic about the nation's economic growth, and expresses faith that we can attain 3% growth. Hope and faith are important, but they're not something to build a budget on.

Monday, May 22, 2017

Would You Like Fyres With That?: A Psychological Analysis of Fraud Victims

What started as an over-the-top music festival in the Bahamas ended up as a social media joke. The Fyre Festival, which was supposed to take place in late April, was canceled - after guests had already started arriving:
On social media, where Fyre Festival had been sold as a selfie-taker’s paradise, accounts showed none of the aspirational A-lister excesses, with only sad sandwiches and free alcohol to placate the restless crowds. General disappointment soon turned to near-panic as the festival was canceled and attendees attempted to flee back to the mainland of Florida.

“Not one thing that was promised on the website was delivered,” said Shivi Kumar, 33, who works in technology sales in New York, and came with a handful of friends expecting the deluxe “lodge” package for which they had paid $3,500: four king size beds and a chic living room lounge. Instead Ms. Kumar and her crew were directed to a tent encampment. Some tents had beds, but some were still unfurnished. Directed by a festival employee to “grab a tent,” attendees started running, she said.

Now the organizers are under federal investigation for fraud. In hindsight, the whole thing is clearly a scam. Websites disappeared because designers weren't getting paid. Past customers of previous services complained that special offers never materialized. Not to mention the accounts of disgruntled former employees and contractors. In fact, it's so clearly a scam, it's surprising anyone fell for it.

It's very easy for us to look at all of this information now and conclude that it was a scam. The problem with hindsight is that it's always 20/20. The same cannot be said for foresight, but that doesn't stop people from claiming they would have known all along. This is called hindsight bias.

There's probably also some victim blaming going on here. How could these people not know any better? Had we been in the same situation, of course we would have known. We distance ourselves from the victims of this fraud because it helps us feel safer, more in control of our world. The same thing could never happen to us because we wouldn't let it.

It's easy to understand reactions after-the-fact. What's more interesting, I think, is to try to figure out what got the attendees and contractors to buy into this fraud to begin with. We ask incredulously, "What were they thinking?" But seriously - what were they thinking?

Human beings are social creatures. We have to be. In order for our species to survive in a hostile environment, it was necessary for us to band together. We formed groups, which became tribes, which became whole societies. And in order to survive in these social structures, it was necessary for us to have some trust in the people around us. You could argue that trust is an evolutionarily selected trait in humans. Let's face it: if you don't trust anyone else, it's really unlikely that you're going to reproduce. You have to trust at least one person to do that (at least, if you're reproducing on purpose).

So now we have a species pre-disposed toward trusting others. But we don't give our trust to just anyone - rather, to people we perceive as having certain traits. The more charismatic the leader, the more likely we are to trust them. And if everyone else in our social group trusts a certain person, we're more likely to trust that person too, at least externally.

Internally, we may be more skeptical. If we look at the results of the Milgram study, we find that many people reported after the fact feeling very uncomfortable with what they were doing. They even had doubts about whether they were doing the right thing. But they continued shocking the learner nonetheless. Why? Because somebody in a lab coat - somebody they perceived as having expertise - told them to. This person knows better than me, so I'm just going to keep doing what they say. It doesn't matter whether they actually have any expertise; it's the perception of expertise that counts. And that is something charismatic leaders can manufacture. They can convince you that they know more than they actually do, that they are an expert in something you are not. McFarland had people believing that he was an expert in entertainment, technology, and rubbing elbows with celebrities. He had people convinced that he could help them do the same.

I'm sure there are some people who didn't trust him. But they went along with him anyway, because there were people who did believe him, who believed that he could do exactly what he said he was going to do, despite instances in the past where he had simply wasted other people's money. But that's the nature of conformity. At the very least, if everyone else is doing it, that makes us more likely to question why we aren't doing it too. Maybe the rest of the group knows something that we don't. Maybe we're misreading the situation.

In the 1950s, Solomon Asch conducted what he told participants was a study of perception but was actually a study of conformity. Actors posing as fellow participants publicly selected what was clearly the wrong answer, to see if the true participant would go along with them. About a third of participants' responses on these critical trials conformed to the obviously wrong majority, and 75% of participants conformed at least once.

Obviously, there are some other cognitive fallacies occurring here and in similar scams. The sunk cost fallacy, for instance, would explain why people held onto the idea of the festival, especially if they kept paying into it over time. It's the same principle that keeps people pumping money into slot machines or staying in bad relationships - if I keep this up, eventually it will be worth it, and I've put in too much time, money, and/or effort to walk away now. That's what happens when something has a variable schedule of reinforcement. We learn from variable schedules that if you just keep it up, the reward will eventually come.

Combine the sunk cost fallacy with a charismatic leader, the promise of rubbing elbows with people we admire, and other members of our social group going along with it, and it's not surprising at all that people fell for this scam. The problem is that people are going to keep falling for scams like it. The people who were hurt this time will probably learn their lesson and stay far away from McFarland and his endeavors. But there will always be others who fall for it. And they're unlikely to learn anything from the negative experience of their peers - they'll blame the victims, they'll insist they would have known all along, and they'll distance themselves from those who were hurt. They'll think of them as the outgroup - people who aren't like them in the ways that matter - and ascribe negative characteristics to them.

There will always be people like McFarland. And there will always be people who fall for his song and dance.

Sunday, May 21, 2017

Statistics Sunday: The Analysis of Variance

In the t-test post and bonus post, I talked about how to use the t-test to compare two sample means and see if they are more different than we would expect by chance alone. This statistic is great when you have two means, but what if you have more than two?

Here's a more concrete example. Imagine you're going to a movie with three friends. You buy your tickets, get your popcorn and sodas, and go into the theatre. You turn to your friends to ask where they'd like to sit.

The first says, "The back. You don't have anyone behind you kicking your seat and you can see the whole screen no matter what."

"No," the second friend says, "that's way too far away. I want to sit in the front row. No one tall in front of you to block your view, and you can look up and see the actors larger-than-life."

"You're kidding, right?" asks the third friend. "And deal with the pain in neck from looking up the whole time? No, thank you. I want to sit farther back, but not all the way in the back row. The middle is the best place to sit."

How do you solve this dilemma? With research, of course! (Why? How do you solve arguments between friends?) You could pass out a survey to moviegoers, asking them to rate their experience of the movie based on where they sit - front, middle, or back - and then see which group, on average, has the best experience. You know a t-test will let you compare two groups, but how do you compare three?

Yes, you could do three t-tests: front v. middle, front v. back, and middle v. back. But remember that you inflate your Type I error with each statistical test you conduct (see the sketch below). You could correct your alpha for multiple comparisons, but doing that also increases your probability of a Type II error. As with so many issues in statistics, there's a better way.
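To put a rough number on that inflation: if each test uses an alpha of .05, the chance of at least one false positive across three tests climbs to about 14%. Here's a quick sketch of that arithmetic in Python (treating the tests as independent, which is a simplification):

alpha = 0.05
k = 3  # front v. middle, front v. back, middle v. back

# Probability of at least one false positive across k independent tests
familywise = 1 - (1 - alpha) ** k
print(round(familywise, 3))  # about 0.143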

Enter the analysis of variance, also known as ANOVA. This lets you test more than two means. And it does it, much like the t-test, by examining deviation from the mean. Under the null hypothesis, the best guess for any observation is the mean - in this case, what we call the grand mean, the mean across all 3+ groups. If seating location makes no difference, we would expect all three groups to share the same mean; that is, the grand mean would be the best descriptor for everyone. We're testing the hypothesis that the grand mean is not the best descriptor for everyone. So we need to see how far the group means fall from the grand mean, and whether that's more than we'd expect by chance alone.

But the mean is a balancing point: some groups will be above the grand mean, and some below it. If I took my grand mean, subtracted each group mean from it, and added those deviations together, they would add up to 0 or close to it. What do we do when we want to add deviations together without having them cancel each other out? We square them! Remember - this is how we get variance: the average squared deviation from the mean. So, to conduct an ANOVA, we look at the squared deviations from the grand mean. Analysis of variance - get it? Good.

Once you have your squared deviations from the grand mean - your between-group variance - you compare those values to the pooled variance across the three groups - your within-group variance, or how much variance you expect by chance alone. If your between-group variance is a lot larger than your within-group variance, the result will be significant. Just like the t-test, there's a table of critical values, based on your degrees of freedom - which reflect both your sample size and the number of groups you're comparing. If your ANOVA statistic (also known as an F test - here's why) is that large or larger, you conclude that at least two of the group means differ from each other.
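To make this concrete, here's a minimal sketch of the whole computation in Python - the enjoyment ratings are made up purely for illustration:

import numpy as np
from scipy import stats

# Hypothetical enjoyment ratings (1-10) by seating location
front = np.array([4, 5, 3, 6, 4])
middle = np.array([8, 7, 9, 8, 7])
back = np.array([6, 5, 7, 6, 5])

groups = [front, middle, back]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Between-group sum of squares: squared deviations of each group mean
# from the grand mean, weighted by group size
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1

# Within-group sum of squares: squared deviations of each score from
# its own group mean - the variance we expect by chance alone
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = len(all_scores) - len(groups)

F = (ss_between / df_between) / (ss_within / df_within)
print(F)  # about 16.2 for these made-up ratings

# scipy's built-in one-way ANOVA should give the same answer
F_check, p = stats.f_oneway(front, middle, back)
print(F_check, p)

The built-in scipy function agrees with the hand computation, which is a nice sanity check that the squared-deviations logic above really is all that's going on inside an ANOVA.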

You would need to probe further to find out exactly which comparison is different - it could be that only two are significantly different, or it could be all three. You have to run what are called post hoc tests to find out for certain. Except now, you're not fishing - like you would be with multiple t-tests. You know there's a significant difference in there somewhere; you're just hunting to find out which one it is. (Look for a future post about post hoc tests.)
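As a quick preview, here's a minimal sketch of one common post hoc test, Tukey's HSD, applied to the same made-up ratings (this assumes the statsmodels package; other post hoc tests exist, and the right choice depends on your design):

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Same made-up enjoyment ratings as the one-way ANOVA sketch above
scores = np.array([4, 5, 3, 6, 4,    # front
                   8, 7, 9, 8, 7,    # middle
                   6, 5, 7, 6, 5])   # back
labels = ["front"] * 5 + ["middle"] * 5 + ["back"] * 5

# Tukey's HSD tests every pairwise comparison while keeping the
# familywise Type I error rate under control
print(pairwise_tukeyhsd(scores, labels))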

The cool thing about ANOVA is you can use it with more than one variable. Remember there is a difference between levels and variables. A level is one of the "settings" of a variable. For our caffeine study, the levels are "experimental: receives caffeine" and "control: no caffeine." In the movie theatre example, the variable is seating location, and the levels are front, middle, and back. But what if you wanted to throw in another variable you think might affect the outcome? For instance, you might think gender also has an impact on movie enjoyment.

There's an ANOVA for that, called factorial ANOVA. You need to have a mean for each combination of the two variables: male gender-front seat, female gender-front seat, male gender-middle seat, female gender-middle seat, male gender-back seat, and female gender-back seat. Your ANOVA does the same kind of comparison as above, but it also looks at each variable separately (male v. female collapsed across seating location, and front v. middle v. back collapsed across gender) to tell you the effect of each (what's called a main effect). Then, it can also tell you if the combination of gender and seating location changes the relationship. That is, maybe the effect of seating location differs depending on whether you are a man or a woman. This is called an interaction effect.
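Here's a minimal sketch of a factorial ANOVA in Python, again with made-up ratings; the statsmodels formula interface expands the main effects and the interaction for us:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical enjoyment ratings for each gender x seat combination
df = pd.DataFrame({
    "seat": ["front", "middle", "back"] * 8,
    "gender": ["m"] * 12 + ["f"] * 12,
    "enjoyment": [4, 8, 6, 5, 9, 5, 3, 7, 6, 4, 8, 7,
                  5, 9, 5, 6, 7, 6, 4, 8, 5, 5, 8, 6],
})

# 'C(seat) * C(gender)' expands to both main effects plus the
# seat-by-gender interaction
model = ols("enjoyment ~ C(seat) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

If the seat-by-gender row of that table comes out significant, the effect of seating location differs for men and women - the interaction effect described above.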

On one of these Statistics Sundays, I might have to show an ANOVA in action. Stay tuned!

Saturday, May 20, 2017

Long Road Ahead

So a special counsel has been appointed to continue the Russia investigation. People on both sides of the political spectrum are pretty happy about this (after all, 78% of Americans in a recent poll supported it) - people who believe Trump is innocent of any wrongdoing can count on the investigation to exonerate him, while people who believe Trump is guilty can watch him move one step closer to removal from office.

But it's liable to be years before anything definitive comes out of this investigation. Clare Malone at FiveThirtyEight sat down with political scientist Brandon Rottinghaus to discuss the history of political scandals and when this particular investigation might end:
Number one, the president is insulated politically so that it’s hard to get the president’s staff and counsels to turn on the president.

Number two, presidents are often insulated legally; they have the ability to do a lot of things that staff or Cabinet members aren’t able to do.

The third thing is that independent counsels, special counsels and any other investigatory bodies are reluctant to challenge the president in a way that might lead to impeachment for fear that it looks like a non-democratic outcome to the legal process. Although obviously these things run into partisanship very quickly, people are less willing to remove a president unless the crisis is severe and the implications are egregious.

[Y]our standard investigation, even of a person who’s a Cabinet member or staff member, is probably between two and three years. For a president in particular, it tends to be longer because the amount of care to be taken is greater.
Regardless of how long the process will take, the fact that an investigation is underway will still have an impact on the current administration, long before any results are shared:
[T]hese kind of events often lead to legislative paralysis, and if you’re not producing legislation, the public tends to take it out on the incumbent party, especially the president. So it’s a kind of double whammy for presidents looking to keep those approval ratings above water.
It's a long road ahead:

[Photo by Glenn Nagel]