by Nedra Weinreich | May 24, 2009 | Blog, Research, Social Media
This post will be somewhat of a Twitter inside baseball topic – not what I usually write about – so if you are not interested in Twitter arcana, you might want to skip this one.
Those of us on Twitter could not have missed the uproar that happened a couple of weeks ago when Twitter management decided to make a “small settings update” to its service by eliminating an option they said was “undesirable and confusing.” This change removed the option to see one side of the conversations that people you follow are having with people you don’t. Now you can only see these “half conversations” (called @replies) if you follow both people. Sounds like a small thing, but those of us who had chosen to see all @replies are now missing out on interesting conversations, resources and the opportunity to discover new people.
Fairly quickly, word spread across Twitter about the change and a revolt took place as people started tagging their protests with “#fixreplies,” which became the top trending topic on Twitter for a few days. After first seeming simply clueless about how people use their service, Twitter offered a non-solution posing as a fix and then flat-out said they would not be bringing the option back, for technical reasons. The reason given in their post:
Even though only 3% of all Twitter accounts ever changed this setting away from the default, it was causing a strain and impacting other parts of the system.
Okay, given the millions of new users that have come on board in the past month or two in the wake of Oprah and Ashton Kutcher’s Twitter publicity stunts, it makes sense that the system is strained. But, having been on Twitter since around the end of 2007, I found it hard to believe that only 3% of the other users had touched their @reply settings. And given the extent of the outcry, either this was a very vocal 3% or a lot of people were jumping on the protest bandwagon even though the change did not affect them at all.
More likely, this was a disingenuous statistic chosen by Twitter to make their point, one that does not tell the whole story. I suspect that they are counting 3% of all accounts ever created on Twitter – including those of people who try it out for a day and never come back. A recent Nielsen study found that 60% of those who sign up do not return the following month (though this statistic does not take into account the many who sign up at Twitter.com but actively use TweetDeck or another client application). What if they looked at the percentage of active Twitter users (the people who should actually matter) — particularly those who have been on the service for a while? Would the percentage change?
This question nagged at me for a while until I decided to do a quick survey to see if my suspicions were right. I created a four-question survey, which asked the following questions:
- How long have you been actively using Twitter?
- How many people do you follow on Twitter?
- Before Twitter took away the option, how was your @Replies option set?
- How has the loss of the @Replies option affected your Twitter experience?
I sent out a tweet asking people to complete the survey and to retweet it (repost in Twitter parlance) to their followers as well. My objective was to send it far and wide on Twitter so that it was not just my own Twitter followers responding, but a wide swath of users across the service. The result was that people who were following my account retweeted the post 28 times, with a subsequent total of 118 retweets dispersed around different social circles. I ended up with 402 total responses to the survey.
I will be the first to admit that this sample may not necessarily be statistically representative of all active Twitter users (though if it were, a sample of this size would give us a margin of error of about 5% at a 95% confidence level). Respondents were not chosen randomly, and the people who decided to participate may be more likely to have a strong opinion on the topic. Nonetheless, I think it may be helpful to take a look at the results because this segment of Twitter users has been strongly impacted by the change. (The results for each question can be seen here. I’m happy to share my full statistical analyses as well if you’d like to see them.)
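For the curious, that figure comes from the standard margin-of-error formula for a simple random sample, using the most conservative assumption of p = 0.5:

```latex
\mathrm{MoE} \;=\; z_{0.975}\,\sqrt{\frac{p(1-p)}{n}}
\;=\; 1.96 \times \sqrt{\frac{0.5 \times 0.5}{402}}
\;\approx\; 0.049 \;\approx\; 5\%
```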
Most respondents had been actively using Twitter for 3-12 months (40%), with 36% on for more than a year and 24% for less than three months. I figured that the longer someone had been using Twitter, the more likely they were to have played around with the options to see what they prefer rather than leaving the default of only seeing @replies when they follow both people.
A majority of respondents (63%) follow between 50 and 500 people on Twitter. Next comes 501-5,000 at 24%, then fewer than 50 at 12%; only 2% follow more than 5,000. I hypothesized that those who were following more people would probably not notice much of a change in their cluttered feed.
Now, the kicker here is that before the option was taken away, 63% of the respondents had chosen to show all @replies for the people they followed — much higher than the 3% cited by Twitter. Those who had the default selected – to show only @replies between people they follow – were 19%, plus another 17% who said that they didn’t know what option was selected (and presumably hadn’t changed the default), for a total of 36%. And only 1% had chosen the option not to see any @replies unless directed at them.
Finally, 57% said that the loss of the @replies option had affected their Twitter experience for the worse. These were presumably people who had the option taken away from them, but they could also include people who did not want to see all @replies and are now seeing some anyway, because other users have started making their replies visible to everyone, such as by putting a character before the “@” symbol or embedding the @reply name within or at the end of the tweet. Only 5% said their experience was better and 39% reported no change (close to the 36% who were already set at the default option).
I also ran some chi-square stats to see how these variables affected each other and created some nifty charts at Chartle.net. Here’s what I found (only reporting the correlations that were statistically significant at p < .05): The number of people that respondents were following on Twitter correlated with how long they had been on the service, at the highest and lowest following numbers (chart: http://www.social-marketing.com/blog/uploaded_images/Time-Following-707772.png). But most people – no matter how long they had been on – were comfortably in the 50-500 range.
Users who had been on Twitter for a longer time were more likely to choose the “show all @replies” option: 72.2% of old-timers who had been on for at least a year did so, compared with 65.2% of those on for 3-12 months. Still, almost half (46.3%) of the newbies on for less than three months also selected that option.
Not surprisingly, given that time on Twitter and number following are correlated, the more people a respondent followed, the more likely they were to select the “show all @replies” option.
The quality of respondents’ experience on Twitter after the policy change, as you would expect, depended on which @reply option they had selected before everyone was defaulted to showing only mutual @replies. For those who had chosen the “show all” option, 78.2% said their experience was worse, the direct opposite of the pattern for the other two options (show only mutual = 72.9%, show none = 75.0%). The correlation between number following and quality of current experience on Twitter also mirrors the distribution of @replies option selected.
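If you’d like to run this kind of chi-square test of independence on your own survey export, here is a minimal sketch in Python using SciPy. The contingency table below is only reconstructed approximately from the percentages reported above; treat it as illustrative rather than my exact survey data.

```python
# Minimal sketch of a chi-square test of independence, as described above.
# Counts are approximate reconstructions from the reported percentages (n=402),
# NOT the survey's actual cross-tab.
from scipy.stats import chi2_contingency

# Rows: time actively on Twitter (<3 months, 3-12 months, >1 year)
# Columns: @replies setting before the change ("show all" vs. any other setting)
observed = [
    [45, 52],    # < 3 months   (approximate)
    [105, 56],   # 3-12 months  (approximate)
    [104, 40],   # > 1 year     (approximate)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Setting choice is not independent of time on Twitter (at the 5% level).")
```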
***
So what does this all mean? Even if this sample is not representative of all Twitter users, it does represent a substantial segment of users who are not as happy with their experience on the site since the option was taken away. Twitter would be smart to pay attention to this group, which is not made up only of crotchety old-timers and “power users.” To avoid losing these disgruntled users, Twitter needs to find a way to bring back the ability to see all @replies in a form it can live with technically. At the very least, Twitter needs to be honest about the percentage of its actual active users (not counting abandoned accounts) who were using the “show all @replies” option. Whether it’s closer to 3% or 63%, by dismissing those who were upset by the #fixreplies kerfuffle as a tiny group of whiners, Twitter increased user dissatisfaction and the likelihood of defection should a service come along that works harder to meet its users’ needs.
UPDATE (6/1/09):
New research just out from Harvard, based on a random sample of 300,000 Twitter users in May 2009, shows that the top 10% of Twitter users account for over 90% of tweets. And the median number of lifetime tweets per Twitter user is one. So there is a huge difference between the typical Twitter account and an active Twitter account.
This backs up my survey findings that many more active Twitter users were affected by the recent @replies option change than Twitter was willing to admit. To say that only 3% of users had selected the “see all @replies” option was extremely deceptive when it turns out that 90% or so of the total Twitter accounts are not even being actively used. Those who do use their accounts tend to opt to see all @replies. Twitter should not be able to so easily dismiss this loud, vocal majority.
Image Credit: monettenriquez
by Nedra Weinreich | Jan 23, 2007 | Blog, Communication, Research, Social Marketing
I may lose some friends out there, but I have to speak up about a phenomenon I’ve noticed over the past few years. It came to the fore for me with the recent story about the battle between the TV meteorologists over stripping the American Meteorological Society certification from any weatherman who expresses skepticism about the degree to which global warming can be blamed on human activity.
My intention here is not to do battle over the facts of global warming, so please don’t leave me comments listing all the reasons why it is or is not an environmental catastrophe. I am less a global warming skeptic than a global warming agnostic — I am not convinced yet either way, but I’m open to the data.
My concern is that global warming has become on par with religious dogma. When anyone, including legitimate scientists, dares to present contradictory data or a different interpretation of current data, they are attacked and harassed. It is assumed that they have evil intentions or are shills for the oil industry. Anyone who does not toe the global warming party line is considered akin to Holocaust deniers. Any data that deviates from the established doctrine is dismissed as biased or not worth looking at.
This is a problem. Science should not be politicized. A particular interpretation of the data should not be taken as the gospel from on high. Our knowledge of science evolves over time. Just a few decades ago, scientists were concerned about the catastrophic effects of global cooling and the coming Ice Age. Going even further back, to the 1630s, Galileo was convicted of heresy by the Church for supporting the radical Copernican theory that the Earth revolves around the sun, rather than the other way around. We should not be subjecting scientists to another Inquisition because they do not agree with commonly accepted ideas. Science does not advance without people who are willing to challenge the dominant paradigm.
While there is some consensus among scientists, there is a huge degree of uncertainty in the models that are being used to predict the future. Meteorologists can’t even predict the weather for next week accurately. To speak of global warming as something that is definitely happening is going way beyond the limits of the data. When everything that happens with the weather is attributed to man-made global warming, the credibility of the claims starts coming into doubt. But “maybes” don’t make good news stories.
I have no doubt that most people who are concerned about global warming are well-meaning individuals who want to do the right thing for the planet. I don’t intend this as an attack on those who believe that global warming is a problem we need to address, but rather those who “believe in” global warming as if it were a religious doctrine that cannot be challenged.
I see a parallel with the dogma around evolution — on both sides. Some fundamentalists who reject the scientific version of how life evolved accept as creed that the Earth is about 6000 years old and that dinosaurs lived at the same time as humans before the great flood. I’ll give them a pass on being dogmatic, though — this is their religion, after all. But many evolutionists cling just as tightly to Darwinism, despite the fact that there are holes in the fossil record and big gaps in our knowledge about exactly how life evolves. Until we understand better how evolution works and how to answer some of the remaining questions, we should not assume that Darwin is necessarily the final word on how life came to exist, though it might be the best model we have right now. And why can’t the Bible and science co-exist? MIT-trained nuclear physicist Gerald Schroeder has written some amazing books that use quantum physics and the theory of relativity to reconcile the two precisely.
Similarly, there are things people on both sides of the global warming debate should be able to agree on, even if they do so for different reasons. Changing our energy consumption habits and taking care of the environment are goals that most people can get behind. In any case, I don’t think that the specter of global warming is immediate or concrete enough to get most people to take action to prevent something that may or may not happen in a hundred years or more. It’s just too big of a problem for an individual to feel that they can make an impact. But show people how they can save money by conserving energy, reduce their dependence on foreign oil by driving a hybrid, keep humans and wildlife healthy by reducing pollutants… this could get people motivated to act.
Scaring the public and silencing dissenters is not the way to bring about effective change. If only our leaders could put the same energy into solving the problems people face right here and now in terms of disease, poverty, and violence, we would all be better off in the future whether or not the climate eventually changes for the worse.
One thing is certain: what we know about the science of climate can and will change over time. The most shortsighted thing would be to close our minds to evidence that might bring us closer to the objective truth, whatever it happens to be.
Technorati Tags: global warming, science, meteorology, religion, evolution, environment, research
by Nedra Weinreich | Jan 19, 2007 | Blog, Research, Technology
Downtown Los Angeles has the largest homeless population in the US. But until recently, the data on the problem had been spotty. Since November 2006, the Los Angeles Police Department (LAPD) has been surveying the streets of Downtown every two weeks to count the homeless people on the street and record their exact locations and some basic demographics. All this data ends up in an Excel spreadsheet. But what could they do with this raw data? Just looking at the numbers is almost meaningless, since there are so many data points to compare.
Enter Cartifact, a custom mapping firm based in Downtown LA. They offered to work with the LAPD to help them visualize the information in a meaningful way and to see changes over time. Together the LAPD and Cartifact have created the Downtown Los Angeles Homeless Map, which takes the information from the biweekly Excel spreadsheet and converts it into a GIS-based heatmap superimposed on a street map of Downtown that shows the density and location of homeless people on each day of data collection. The individual maps are animated together to show the changes between each two-week period.
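Cartifact hasn’t published its GIS workflow, but here is a rough sketch of the general approach in Python, assuming a spreadsheet with date, longitude, latitude and count columns (the file name, column names and plotting choices are my own illustrative assumptions, not theirs):

```python
# Rough sketch: turn biweekly point counts into per-date density heatmaps.
# File name and column names ('date', 'longitude', 'latitude', 'count') are assumed.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("homeless_counts.xlsx")

for i, (date, snapshot) in enumerate(df.groupby("date")):
    fig, ax = plt.subplots(figsize=(6, 6))
    # hexbin aggregates nearby points into cells, summing the count recorded at each location
    hb = ax.hexbin(snapshot["longitude"], snapshot["latitude"],
                   C=snapshot["count"], reduce_C_function=sum,
                   gridsize=40, cmap="hot")
    fig.colorbar(hb, ax=ax, label="People counted")
    ax.set_title(f"Downtown LA homeless density, {date}")
    fig.savefig(f"heatmap_{i:02d}.png", dpi=150)
    plt.close(fig)

# The saved frames can then be stitched together into an animation to show change
# between each two-week count.
```

A real GIS product would also overlay the street grid and other reference layers, but even this bare-bones version conveys the density pattern at a glance.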
Eric Richardson, who writes blogdowntown, is also the lead developer for Cartifact. He notes on his blog how the most recent data provided some immediate insights into what is happening with the homeless population:
Interesting to note, though, is the way in which temperature affects the number of people on the street. It’s cold outside, and has been for several days now. The count for January 15th (Monday) was down 271 people from January 2nd. It got cold and the people who could find somewhere to go did so.
And in the comments he explains why these maps are helpful:
But also this sort of visualization is vital because it tells us what trends are occurring over time. Since enforcement of Safer Cities began there has been a definite spread of homeless to areas outside of Skid Row, particularly into the Toy District, the Fashion District and into South Park. Anecdotally we see this every day, but visualizing hard data allows us to say it for certain. That sort of knowledge is important for planning strategy.
This type of mapping could be used very effectively as a basis for understanding many health and social problems in a particular geographic area. Imagine using this to map the spread of an infectious epidemic – you could easily see what direction it was moving in, what types of neighborhoods it hit the hardest, what the boundaries of a quarantine area might need to be. You could look at areas with high exercise density (where people running or walking for exercise tend to be found) and make sure there are sidewalks and crosswalks on those streets. Map out gang-related incidents to see where to concentrate your violence prevention billboards or locate your program’s youth drop-in center.
I’m sure some form of mapping is occurring in many programs. The advantage of this model is that the heatmap format conveys a lot of information in a quick glance, and that it is easy to visualize changes over time. As Jerry Maguire might have said, had he been a social marketer rather than a sports agent, “Show me the data!”
(via LAObserved)
Technorati Tags: maps, homeless, public health, Los Angeles, LAPD
by Nedra Weinreich | Nov 7, 2006 | Blog, Research, Social Marketing
Authors of a study just published by the American Journal of Public Health (available online now and in the Dec. issue of AJPH) say that youth anti-smoking television ads funded by tobacco companies are ineffective, and that the spots intended for parents may even have harmful effects. Among 10th and 12th graders, they say, higher exposure to the parent-targeted ads was associated with lower perceived harm of smoking, stronger approval of smoking, stronger intentions to smoke in the future, and a greater likelihood of having smoked in the past 30 days.
“Of course,” I can hear you saying to yourself, “we all know that Philip Morris is intentionally sabotaging the ad campaign so that it ends up bringing in more future smokers, or at least is just burnishing its reputation with this campaign as window-dressing.” I would have thought so myself. Except that in looking into the campaign, I found out that an old friend and colleague, Cheryl Olson, is on the advisory board for Philip Morris USA’s Youth Smoking Prevention initiative.
Cheryl and I met in grad school, and we have since worked on various projects together, including evaluating tobacco prevention programs. She, along with her husband, psychologist Larry Kutner (who is also the chair of the advisory board), founded and co-direct the Center for Mental Health and Media at the Harvard Medical School. I know that Cheryl is no tobacco industry patsy, and that she would not compromise her integrity if she suspected there were any nefarious strategies behind this campaign.
I got in touch with Cheryl and asked her for her take on the research results that were just published. She, not surprisingly, had a lot to say about why this study is flawed and may just be showing what the researchers wanted to find. I invited her to send me her thoughts to post on the blog, which I’ve reprinted here:
For the past couple of years, I have consulted to Philip Morris USA on smoking cessation and prevention. I had primary responsibility for the content of the QuitAssist cessation guide, and also review and contribute to materials aimed at parents from the Youth Smoking Prevention group.
I work with a group of researchers and clinicians who are affiliated with various universities and hospitals. (We do this work independently from our institutions.) Part of our mandate is to oversee the quality of material content and evaluation, and be vigilant for any unintended negative effects.
Collaborating with a tobacco company can be an awkward and uncomfortable experience for a public health researcher who worked in tobacco control. But since Philip Morris USA is voluntarily committing 100s of millions of dollars to prevention and cessation – going well beyond the requirements of the Master Settlement Agreement – it’s important that a group of independent researchers and clinicians be part of this process to ensure that the resulting materials are honest, research-based, and effective.
My work has included: interviewing parents and former smokers and choosing quotes to use in print and web materials; incorporating research and advice from experts (selected by me) who work with smokers and parents; writing print and web content; and observing focus groups to help formulate and test content (including groups of Spanish speakers with simultaneous translation). I am proud of the brochures and guides I’ve helped develop. Their quality is apparent to anyone who reads them – which may explain why I have received at most a half-dozen phone calls or emails from academics, health workers or reporters asking why I got involved in this.
It is also exciting to be part of a project with such a huge reach. To date, PMUSA has distributed 70 million parent brochures, and hundreds of thousands of QuitAssist guides. The main role of the guide is to encourage smokers to connect with useful government and nonprofit cessation resources; I have heard that the PMUSA web site is the most visited cessation site in the US, and refers more traffic to government web resources than any other source.
I am not involved in developing PMUSA’s TV, radio or magazine ads on smoking cessation or prevention. But I do have some concerns about the article in December’s AJPH. There are some serious flaws in this study’s methodology that make it hard to draw any conclusions about the effects of the ads in question. To describe just two:
1) The “Talk, They’ll Listen” campaign that supposedly harmed children (the one aimed at parents) is based on an estimated exposure to an average of 1.13 thirty-second ads over a four-month period. Let’s take a closer look at this measure of ad exposure.
According to the recent Kaiser Family Foundation national survey, kids between the ages of 8-18 spend an average of 3 hours and 4 minutes per day watching broadcast television. Let’s call it three hours for simplicity’s sake.
There are 122 days in a 4-month period. So kids watch an average of 366 hours or 21,960 minutes of television in a 4-month period. A parent-oriented commercial lasts 30 seconds – or 1/43,920 of their viewing time. If they see 1.13 parent-oriented commercials in 4 months, that means that the commercials comprise 0.000026 (twenty-six one-millionths) of their television viewing content and time. Does it make sense to assume that such an extremely rare event would have the levels of influence on behaviors and attitudes that the authors claim? Based on this exposure issue alone, it’s hard to take the article seriously.
2) The authors should have used a 99% confidence interval (not 95%) with such a big data set [n=103,172]. That is a standard approach to avoid getting significant results just due to a large sample size. The use of 95% CIs raises questions about the odds ratios.
This is pretty disappointing. It’s hard not to think that the authors were determined to find something negative to say. The dramatic statements in the abstract are hedged a lot in the actual paper text, but the abstract is all that many researchers – and most journalists – will read.
This could have been an opportunity to get some useful lessons that could be applied to future media campaigns, and to model state-of-the-art methods for evaluating media-based behavior change materials – methods that could be used by academics and industry alike.
Among other things, it’s too bad that the authors lumped together ads aimed at youth from two companies that used very different approaches. This doesn’t tell us anything about what aspects might have been particularly helpful or harmful.
Given their well-documented past behavior, tobacco companies (and their anti-smoking media materials) must receive ongoing scrutiny from the public health community. But that shouldn’t mean checking our common sense at the door.
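For anyone who wants to double-check the exposure arithmetic in Cheryl’s first point, it works out as she states:

```latex
122 \text{ days} \times 3\,\tfrac{\text{h}}{\text{day}} \times 3600\,\tfrac{\text{s}}{\text{h}} = 1{,}317{,}600 \text{ s of viewing time},
\qquad
\frac{30 \text{ s}}{1{,}317{,}600 \text{ s}} = \frac{1}{43{,}920}
```

```latex
\frac{1.13 \times 30 \text{ s}}{1{,}317{,}600 \text{ s}} \approx 2.6 \times 10^{-5} = 0.000026
```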
Cheryl raises several important points. As much as many of us would like to pillory the tobacco industry, we can’t let that cloud our desire for the truth (as best we can find it statistically). When you see (or do) research that confirms your preconceptions, you still need to look at it critically and make sure that you are not letting your assumptions guide your conclusions.
Thanks to Cheryl for taking the time to share her valuable perspective. I would love to see research on whether parents who have seen the ads and read PMUSA’s materials have spoken with their children about not smoking and whether their kids are less likely to smoke as a result. But even if that part of the campaign were found to be effective, I have a feeling there are many who would not believe it in any case.
Technorati Tags: smoking, philip morris, ajph, tobacco, advertising, youth