Monday, July 30, 2018
Awesome news first thing on a Monday! My paper, "Effect of the environment on participation in spinal cord injuries/disorders: The mediating impact of resilience, grief, and self-efficacy," published in Rehabilitation Psychology last year, was awarded the Harold Yuker Award for Research Excellence.
This paper was the result of a huge survey, overseen by my post-doc mentor, Sherri LaVela, and worked on by many of my amazing VA colleagues. The paper uses latent variable path analysis to examine how resilience, grief, and self-efficacy among individuals with spinal cord injuries/disorders mediate the effect of environmental barriers on the ability to participate. The message of the paper: while reducing environmental barriers is key to increasing participation, intervening to increase resilience and self-efficacy, and to decrease feelings of grief/loss over the injury/disorder, can also improve participation - even when we can't directly remove all of the environmental barriers.
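For readers curious what that mediation logic looks like in practice, here's a minimal sketch with simulated data and a single observed mediator - the paper itself used latent variable models with multiple mediators, and all variable names and coefficients below are made up for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: environmental barriers reduce resilience (path a),
# and both barriers and resilience affect participation (paths c' and b).
rng = np.random.default_rng(0)
n = 500
barriers = rng.normal(size=n)
resilience = -0.3 * barriers + rng.normal(size=n)
participation = -0.2 * barriers + 0.5 * resilience + rng.normal(size=n)
df = pd.DataFrame({"barriers": barriers, "resilience": resilience,
                   "participation": participation})

# Path a: barriers -> mediator
a = smf.ols("resilience ~ barriers", data=df).fit().params["barriers"]
# Paths b and c': mediator and barriers -> outcome
fit = smf.ols("participation ~ barriers + resilience", data=df).fit()
b, c_prime = fit.params["resilience"], fit.params["barriers"]

print(f"indirect effect through resilience (a*b): {a * b:.3f}")
print(f"direct effect of barriers (c'): {c_prime:.3f}")
```

The product of paths a and b estimates how much of the barriers-participation relationship runs through the mediator, which is exactly the quantity an intervention on resilience would target.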
So cool to get recognition that one of my favorite and most personally meaningful papers was also viewed as meaningful and important by experts in the field.
Monday, May 29, 2017
Sara's Week in Psychological Science: Conference Wrap-Up
I'm back from Boston and collecting my thoughts from the conference. I had a great time and made lots of new connections. While my poster didn't draw a big crowd, I had some wonderful conversations with the visitors who did stop by and with other presenters - quality over quantity. I'm also making some plans for the near future. Stay tuned: there are some big changes on the horizon that I'll start announcing later in the week.
In the meantime, I'm revisiting notes from talks I attended. One in particular presented a flip side of a concept I've blogged about a lot - the Dunning-Kruger effect. To refresh your memory, the Dunning-Kruger effect describes the mismatch between actual and perceived competence: people low in a skill tend to substantially overestimate their competence, while highly skilled people tend to underestimate theirs - and this effect has been observed for a wide variety of skills.
The reason for this effect has to do with knowing what competence looks like. You need a certain level of knowledge about a subject to know what true competence looks like. People with moderate competence know quite a bit but also know how much more there is to learn. But people with low competence don't know enough to understand what competence looks like - in short, they don't know what they don't know. (In fact, you can read a summary of some of this research here, which I co-authored several years ago with my dissertation director, Linda Heath, and a fellow graduate student, Adam DeHoek.)
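Just for fun, here's a toy simulation of that explanation - entirely my own illustration, with made-up numbers. If metacognitive insight grows with skill, low performers' self-estimates get pulled toward a flattering default guess, reproducing the classic pattern:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
actual = rng.uniform(0, 100, n)   # true skill, as a percentile

# Toy assumption: insight grows with skill, so low performers' self-estimates
# are pulled toward a flattering default guess (~65th percentile).
insight = actual / 100
perceived = insight * actual + (1 - insight) * 65 + rng.normal(0, 10, n)

for lo, hi in [(0, 25), (25, 50), (50, 75), (75, 100)]:
    in_band = (actual >= lo) & (actual < hi)
    print(f"actual {lo:>2}-{hi:<3}: mean perceived = {perceived[in_band].mean():.1f}")
```

The bottom quartile's mean self-estimate lands far above its actual skill, while the top quartile's lands slightly below - they don't know what they don't know.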
The way to counteract this effect is to show people what competence looks like. But one presentation at APS this year showed a negative side effect of this tactic. Todd Rogers from the Harvard Kennedy School presented data collected through Massive Open Online Courses (MOOCs - such as those you'd find listed on Coursera). These courses have high enrollment but also high attrition - for instance, it isn't unusual for a course to enroll 15,000 students but have only 5,000 complete all assignments.
Even with 66.7% attrition, that's a lot of grading, so MOOCs deal with high enrollment using peer assessment: students are randomly assigned to grade other students' assignments. In his study, Dr. Rogers looked at the effect of the quality of those randomly assigned essays on course completion.
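If you're wondering how that random assignment works mechanically, here's one simple hypothetical scheme (my sketch, not the procedure of any particular MOOC platform): shuffle the roster and have each student grade the next k students around the shuffled circle, which guarantees no self-grading and an even grading load:

```python
import random

def assign_peer_grading(students, k=3):
    """Give every student k essays to grade, never their own."""
    order = students[:]
    random.shuffle(order)
    n = len(order)
    # Each student grades the next k students around the shuffled circle,
    # so no one grades themselves and everyone is graded exactly k times.
    return {order[i]: [order[(i + j) % n] for j in range(1, k + 1)]
            for i in range(n)}

print(assign_peer_grading(["ana", "ben", "cai", "dee", "eli"], k=2))
```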
He found that students who received high-quality essays to grade were significantly less likely to finish the course than students who received low-quality essays. A follow-up experiment, in which participants were randomly assigned to receive multiple high-quality or low-quality essays, confirmed these results. When people are exposed to competence, their self-appraisals go down, mitigating the Dunning-Kruger effect - but they also become less likely to keep trying. Depending on the skill, that might be the desired outcome, but usually the goal of encouraging accurate self-assessment is to help people accept that they have more to learn, not to make them give up entirely.
So how can you counteract the Dunning-Kruger effect without also potentially reducing a person's self-efficacy? I'll need to revisit this question sometime, but share any thoughts you might have in the comments below!
In the meantime, I leave you with a photo I took while sightseeing in Boston:
Thursday, April 21, 2016
R is for (the Theory of) Reasoned Action
I've talked a lot this month about groups, how they form, and how they influence us. But a big part of social psychology, especially in its current cognitive focus, concerns attitudes and how they influence us. And as good social psychologists, we recognize that the formation and influence of attitudes are shaped by others and by our perceptions of what they expect from us.
Attitudes are tricky, though. They alone do not shape what we do. In fact, there is a great deal of research showing that attitudes are often poor predictors of behavior, a finding sometimes called the attitude-behavior gap or value-action gap. Other factors influence us as well, and these may interact with or even counteract our attitudes. So instead of predicting behavior directly, various forces, including attitudes, shape what is known as behavioral intention - what we intend to do in a given situation. That intention is then used to predict the behavior itself, with the recognition that situational forces may intervene between intention and behavior.
Two social psychologists, Fishbein and Ajzen (pronounced Ay-zen), developed the Theory of Reasoned Action to predict behavioral intention, and in turn behavior, from two factors: attitudes and norms. Attitudes can vary in strength - from very important to not important - and in evaluation - from positive to negative. Norms can range from very broad, such as societal norms, to more specific, such as norms within your social group. Within the norm factor there are two subconcepts: normative beliefs (what we think others expect of us) and motivation to comply (do we want to conform or be different?). Drawn as a path diagram, attitudes and subjective norms each feed into behavioral intention, which in turn predicts behavior.
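The model is often written in expectancy-value form: attitude is the sum of belief strengths times outcome evaluations, and the subjective norm is the sum of normative beliefs times motivation to comply, with the two weighted into an intention score. Here's a minimal sketch - the ratings and weights are made up for illustration, since in real applications the weights are estimated from data:

```python
def attitude(beliefs, evaluations):
    # sum of (belief strength x evaluation of that outcome)
    return sum(b * e for b, e in zip(beliefs, evaluations))

def subjective_norm(normative_beliefs, motivation_to_comply):
    # sum of (what we think each referent expects x our motivation to comply)
    return sum(nb * m for nb, m in zip(normative_beliefs, motivation_to_comply))

def behavioral_intention(att, sn, w_att=0.6, w_sn=0.4):
    # weights are illustrative; in practice they're estimated from data
    return w_att * att + w_sn * sn

# Example: intention to exercise, with made-up ratings on 1-3 scales
att = attitude(beliefs=[3, 2], evaluations=[3, 1])
sn = subjective_norm(normative_beliefs=[3, 1], motivation_to_comply=[2, 1])
print(behavioral_intention(att, sn))  # higher = stronger intention
```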
Not long after publishing this model, Ajzen decided to build on the theory to improve its predictive power. Thus the Theory of Planned Behavior was born. The new theory adds one component to the old one: perceived behavioral control. This concept was influenced by self-efficacy theory and represents a person's confidence in his or her ability to carry out the behavior in question. Perceived behavioral control is itself influenced by control beliefs - beliefs about the factors that may help or hinder carrying out the behavior. Each of these three factors not only influences behavioral intention; they also influence each other. For instance, your own attitude about something can color your judgment of what others think, the degree of control you believe you have over the behavior can influence your attitude, and so on.
When Ajzen drew the updated model, perceived behavioral control appeared as a third box alongside attitudes and subjective norms, each feeding into behavioral intention.
Because psychologists recognize that perception can be biased, he also included a box for "actual behavioral control." What we think may not be accurate, and what is actually true may still influence us, even if we fail to notice the truth. Humans are skilled at self-deception.
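Extending the earlier sketch to the Theory of Planned Behavior is straightforward: perceived behavioral control, built from control beliefs weighted by their perceived power, joins attitude and subjective norm as a third predictor of intention. Again, the numbers and weights are illustrative, not Ajzen's (the att and sn values reuse the scores from the sketch above):

```python
def perceived_behavioral_control(control_beliefs, perceived_power):
    # sum of (belief that a factor helps/hinders x its perceived power)
    return sum(c * p for c, p in zip(control_beliefs, perceived_power))

def tpb_intention(att, sn, pbc, w_att=0.4, w_sn=0.3, w_pbc=0.3):
    # perceived behavioral control joins attitude and subjective norm
    return w_att * att + w_sn * sn + w_pbc * pbc

pbc = perceived_behavioral_control(control_beliefs=[3, 1], perceived_power=[2, 2])
print(tpb_intention(att=11, sn=7, pbc=pbc))
```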
One important thing to keep in mind if you're trying to predict behavior from attitudes is that specific attitudes are more predictive than general attitudes. Asking someone their general attitude toward the legal system will be far less predictive of how they vote as a juror than their attitude about the specific case at hand. But even when you measure a specific attitude, you may not get the behavior you expect. For my dissertation research, I studied pretrial publicity - information seen in the media before a case goes to trial - and its influence on verdicts. Pretrial publicity is an interesting area of research, especially because no one has really found a good theory to explain it. That is, we know it biases people, but when researchers try to apply a theory, their studies still find pretrial publicity effects yet often fail to confirm the theory.
I decided to apply attitudes to the study - very specific attitudes. That is, I hypothesized that pretrial publicity is only biasing if a person holds a specific attitude that the publicized information is indicative of guilt. To put it more simply with one of the pieces of information I used in my study: finding out a person confessed is only biasing if you believe that only guilty people confess. I gave participants news stories containing one of four pieces of information: confession, resisting arrest, prior record, or no biasing information (the control condition).
Then I told them they would be reading a case and rendering a verdict, but first I asked them to complete a measure of attitudes. Such measures are sometimes used during voir dire, the process in which potential jurors are questioned to determine whether they should be seated on the jury. Embedded in this measure were questions about the specific pieces of information. Participants then read the case and selected a verdict.
The problem is that, like so many studies before it, mine found pretrial publicity effects, but attitudes were often unrelated. Even people who didn't believe a confession was indicative of guilt were more likely to select guilty when they encountered that information pretrial. I was able to apply some different theories to the results - ones related to thought suppression and psychological reactance, concepts I've blogged about before - but I was quite disappointed that I still couldn't fully explain what I was seeing.
Like I said, attitudes are tricky.
Saturday, April 2, 2016
B is for Bandura
Albert Bandura is perhaps one of the most well-known social psychologists - in fact, a few other subfields of psychology probably want to claim him as a member as well. This is in part because his theories are widely applicable and highly influential.
I blogged recently about one of his most well-known studies, the famous Bobo doll study he conducted in 1961 to examine how children learn aggressive behavior. In the study, he had children watch an adult playing with a Bobo doll aggressively (punching, kicking, etc.) or non-aggressively. The child was then given a chance to play with the Bobo doll, and observers recorded whether the child imitated the behavior s/he saw from the adult. Follow-up studies examined whether the model was equally influential if the children watched a film version of the behavior, or even a cartoon version of the behavior. Results were similar across these conditions.
Though this research has obvious, direct application to understanding the link between media violence and aggressive behavior, it also served as a building block for Bandura's social learning theory, which deals with learning by watching others - a concept called vicarious learning.
Previously, behaviorism was the main approach to understanding how people learn, focusing on the effects of rewards and punishments on our own behavior. Bandura's theory dramatically changed our understanding of learning by showing that it occurs in a social context, and that watching others receive rewards or punishments for their behavior can influence our own. Even more important, though the social context influences our behavior, we also influence the social context - a concept he called reciprocal determinism: the environment shapes us, but we also shape our environment.
As Bandura continued testing and updating his social learning theory, he identified and developed a concept he called self-efficacy - a topic I currently study as part of my job.
According to Bandura’s self-efficacy theory, a person who possesses certain skills and abilities carries out a behavior, which results in particular outcomes. However, the individual has certain expectations or beliefs about his or her ability to carry out a behavior, which affect whether the person initiates the behavior and whether s/he persists in carrying out that behavior in the face of obstacles.
That is, provided the individual has or can learn the skills necessary to carry out the behavior, the individual’s beliefs about whether he or she is capable of carrying out that behavior will determine what the individual does next. That collection of beliefs is called self-efficacy.
A variety of sources can be used to figure out one's self-efficacy: direct experience with the behavior, learning from watching others, verbal feedback from others, and physiological states, such as anxiety.
Additionally, self-efficacy is made up of three dimensions: strength, or the intensity of one’s efficacy beliefs (are you really certain you can do it, or just kind of certain?); generalizability, or the belief that one’s skills are applicable to multiple situations; and magnitude, which refers to the difficulty of tasks one feels able to accomplish.
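As a toy illustration of those three dimensions - my own sketch, not a validated measure, since real self-efficacy scales assess these through questionnaire items:

```python
from dataclasses import dataclass

@dataclass
class SelfEfficacyBelief:
    """Toy illustration of Bandura's three dimensions (not a validated measure)."""
    strength: float          # how certain am I that I can do it? (0-1)
    generalizability: float  # does the belief extend across situations? (0-1)
    magnitude: float         # how difficult a task do I feel able to attempt? (0-1)

# e.g., very sure I can give a talk to my own lab, but that belief
# doesn't extend far beyond familiar audiences or harder venues
lab_talk = SelfEfficacyBelief(strength=0.9, generalizability=0.3, magnitude=0.4)
print(lab_talk)
```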
Bandura is also a prolific author of journal articles - many of which you can find online for free - and books. One of my favorites is, of course, his book on self-efficacy, Self-Efficacy: The Exercise of Control.
Thursday, August 18, 2011
Celebrities and Weight Management
I didn't think, when I started this blog, that I would even bother responding to celebrity quotes. True, I could probably blog forever and a day about the things celebrities utter in interviews, on their Twitter pages, etc. - in fact, there are many successful blogs devoted to just that topic. In a recent interview, however, Mila Kunis talked about weight loss. Since weight management is one of my areas of research, I felt I needed to respond - plus, I was looking for a good reason to talk about weight management research here.
Essentially, Mila said that people who are “trying to lose weight” and are unsuccessful are simply not trying hard enough. What prompted her to reach this conclusion is the fact that she was able to lose 20 pounds for her role in Black Swan, a substantial amount, considering she normally is very thin. Of course, what Mila said is problematic for a few reasons.
Even when an individual is successful at losing weight through a program, weight gain in the time following the program is very common; most people will regain two-thirds of the weight within a year, and nearly all of it within 5 years. Why? Because sudden and drastic changes are difficult to maintain. That’s one reason you’ll find that, for many people, losing weight – especially with “fad diets” – is easy, but maintaining the weight loss is difficult. The approaches celebrities take to lose weight for a role definitely work over the short term, but they are rarely sustainable.

Consider celebrities who lost weight not for a role, but because their weight was unhealthy. Oprah Winfrey (whose weight has often been the target of comedians) lost 67 pounds on a liquid diet and unveiled her new look on her show while pulling a wagon of fat… only to regain much of that weight later. In fact, within a week of going off the diet, she had gained 10 pounds. Such low-calorie diets cannot be maintained for very long, and there’s a good reason for that: even when a doctor prescribes a very low calorie diet (an approach taken only for patients who are very obese), the patient has to – or is supposed to – undergo intense medical supervision.
Whatever changes you make to lose weight, whether it is diet, exercise, or some combination of both, they have to be changes you’re willing to maintain over the long-term, or your chances of regaining the weight are high.
My predominant concern when I see celebrities losing large amounts of weight and talking about how easy it is and how “anyone can do this” is that it creates unrealistic expectations. In fact, a lot of research shows that people entering weight loss programs already hold expectations far beyond what those programs typically deliver.
Furthermore, telling people they’re going about weight loss in the “wrong way” and to “try harder” doesn’t instruct them on how to lose weight effectively and in a healthy way. This is probably one reason that media coverage of celebrity weight loss and the constant messages about what people are “supposed to look like” can lead to disordered eating and other maladaptive behaviors. Figuring out how to lose weight is pretty intuitive – cut down on calorie intake and/or increase physical activity – but the approaches one needs to take to lose weight healthily are definitely not intuitive. Even if someone decides to do some research into losing weight, there are many sources of information, some teaching really unhealthy approaches. A lot of people don’t even realize what disordered eating means, thinking that, as long as they aren’t starving themselves or forcing themselves to vomit after eating, their behaviors (like “fasting” after large meals or exercising to the point of exhaustion) are perfectly normal and even healthy.
I’m certainly not attacking celebrities. I know that Mila probably meant her “no, you can” as an attempt to boost people’s confidence in themselves and in their ability to reach their goals (a concept psychologists call “self-efficacy”). It’s definitely a noble goal, because research suggests that people starting weight management programs often have low self-efficacy.
Even so, boosting confidence may lead people to try something to lose weight, but not necessarily the right thing, so increasing self-efficacy needs to happen in concert with teaching healthy weight management approaches. This is one of the many reasons that people trying to lose weight on their own so often fail.
Just once, rather than hearing a celebrity go on and on about how much he loves to eat fast food or how she keeps thin simply by “playing with her kids,” I would love to hear a celebrity say, “You know what, keeping thin is hard work! Here are all the things I do…” Okay, not as great a sound-bite, I know (and arguably not the celebrity’s responsibility), but it might help balance out some of the other celebrity sound-bites that I fear do more harm than good.
Of course, celebrities are not the only ones sending the wrong – or at least incomplete – message. Proposed policies to outlaw Happy Meals or add taxes to “junk food” are just as bad as simply saying, “What you’re doing is wrong”: they don’t teach what people should be doing instead. Rather than punishing people for making the “wrong” choices, we need to incentivize them to make the “right” choices.
Thoughtfully yours,
Sara