All About Content

Media 2.0 Day: Pew Internet & American Life

Posted by Melanie Phung on Tuesday, June 15, 2010 at 11:08 pm

This is Part 3 of a series of notes from Media 2.0 Day, part of the Digital Capital Week conference. Part 1 covered a session called “Social and Traditional Media: How News and Media Organizations are Getting Social and Why They Need To Do It”; Part 2 covered a talk by Jeff Pulver. (These are basically my raw notes. I’ll be going back and cleaning up typos and formatting as time allows.)

… Lack of connectivity in the Media 2.0 venue was driving me crazy, so I sneaked out to grab some wifi time at the Caribou Coffee around the corner. I ended up being a little late to this session, in which Pew Research discussed its latest findings.

I walk in as Lee Rainie (@lrainie), Director of the Pew Internet & American Life Project, is talking about the rise of the “internet of things”.

Next, he shows a graphic from the Metaverse Roadmap that segments the different directions in which the internet could evolve (or is evolving):

DC Week

We now get into surveys that asked experts to predict the future of the internet. Rainie says that previous survey respondents (who were asked to predict where we’d be in 2010) got some things right but there were also some misses.

Before going into the results of the latest survey, Rainie explains that there were some methodology changes. Among them: the new survey included new participants, not just the previously questioned “experts”. Furthermore, the questions were set up as tension pairs; rather than posing a prediction and asking respondents to agree or disagree, each question presented options A and B, respondents had to pick which they agreed with more, and then they were asked to explain why.

First question was based on Nick Carr’s argument: The internet makes us dumber vs. internet makes it easier to connect and get smarter. Most said it doesn’t make us dumber.

Themes arising out of the free-form answers:

  • Cognitive capacities will shift. Different skills are necessary in the new world. Not necessary to remember stuff, but critical thinking becomes more important.  People who can sift through info will do better.
  • There are new types of literacy. The fourth “R” is retrieval (reading, writing, ’rithmetic and now retrieval). “Extreme Googlers” was a term mentioned to describe a new skillset. As networked individuals make decisions, we need to adapt and learn to search out info.
  • People are people. Internet applies same tendencies. If you are lazy/distracted, the internet helps you be you. If you are an info omnivore, technology lets you do that better. Tech isn’t the problem; it’s inherent character traits of people.
  • Performance of “information markets” is a big unknown, especially in age of social media and junk information. There will be pressure on technologists to filter good vs. bad stuff.

What’s around the corner? The tension pair was: Hot gadgets are pretty evident today (no surprises) vs. hot new tech is not anticipated by many of today’s savviest innovators. Most answered that the hot new thing is not something we know about yet.

Common themes in responses:

  • Look what the iPhone did – an example of something we couldn’t predict.
  • Tech people aren’t very good at anticipating the marketplace and social dynamics.
  • Innovation ecosystems will change (bandwidth, processing). The ecosystem will be different, so it’s hard to anticipate what will work there or how the marketplace will function.

What about the future of online anonymity? Which is more likely: Anonymous online activity will be sharply curtailed vs. by 2020 it will still be easy to communicate anonymously? Results to this question were pretty evenly split.

Themes in the free-form answers:

  • Anonymity will be a different thing by then. New definitions of anonymity.
  • New laws/regs will give people some privacy protections even though they are required to disclose more. (i.e., more feeling of anonymity, even if less anonymity in reality)
  • There will still be work-arounds: “pseudonymity” will be available to people. Public disclosure will be separate from registration requirements.
  • Anonymity isn’t the same as confidentiality and autonomy. The latter will replace the yearning for anonymity.
  • Rise of social media is as much a challenge to anonymity as authentication requirements. Reputation management and “information responsibility” will emerge.  Being part of SM, showing some part of yourself and your social graph, will allow people to figure out who you are… it’s not the tech itself that discloses who you are, it’s the social practices and people’s ability to just look at info YOU are sharing.

Next question deals with impact of internet on institutions. Will institutions change/become more responsive? Most experts agreed they would change.

Themes:

  • Pressures for transparency are powerful
  • The “future” is unevenly distributed – businesses will change most; governments least.
  • Data will be platform for change.
  • Even if institutions don’t change, social media will facilitate work-arounds. Tools in consumers’ hands will help figure out ways around the barriers erected by institutions. Citizen engagement/crowdsourcing will force change in the marketplace.
  • Efficiency and responsiveness aren’t the same thing.
  • More people responded anonymously when saying they are worried about corporate power. Institutions will resist.

Rainie shared that there were quite a few criticisms of this question, as lumping different types of “organizations” (nonprofits, governments, businesses) into a single category didn’t make sense. He concedes this point.

Next question deals with impact of internet on reading, writing, rendering of knowledge.

He points out that young people don’t think of texting as “writing”… it’s just conversation.  So it’s not fair to use “text speak” as evidence that literacy is suffering.

More experts agree that the internet will improve reading, writing, rendering of knowledge.

Themes:

  • People are doing more reading and writing now, so it has to get better. Participation breeds engagement.
  • Pressure to get better driven by concerns about reputation, etc.
  • Reading/writing will be different in 10 years.  “Screen” literacy will become important. Content creation will be done in public. It’s not better or worse, just different. These are public acts, so feedback will compel people to get better.
  • Networked information models are changing creation and consumption process. So metrics of consumption will change (become richer/broader more complex).

Next question: Will internet continue to be dominated by end-to-end principle? Most of the respondents think it’ll remain the same.

Themes:

  • Openness has its own virtues and it’s served us well so far.
  • Those who disagreed weren’t arguing for the end of this paradigm. It wasn’t a value judgment; rather they were predicting that there will be pressures to regulate (including from users who want to avoid bad experiences).

Next question was about the semantic web. Answers were fairly evenly split. Comments were along the lines that the semantic web won’t take off until there’s a killer app for it.

The speaker is now rushing through slides at breakneck speed, and it’s hard to catch any details.

Next two slides are about the internet’s influence on human relationships and something about the millennials. The latter dealt with opinions that millennials will continue to be very enthusiastic about information sharing even as they move on to other phases of their lives.

At this point, I make a note to refer to the Pew Internet site to follow up on these data. Lots of interesting stuff, but Rainie doesn’t have the time to cover all the material in his deck and he wants to move on to Q&A. (Note: his slides can be viewed here. In fact, his slides probably have all the info I typed out above, but with fewer typos. Sigh.)

DC Week: Pew Internet & American Life

Audience Questions:

Q: Is there a correlation between literacy and broadband adoption?

Rainie says this is a really interesting question. No direct studies of correlation were done but he throws out some related questions: What are the issues? Is it access/price versus no perceived need/interest?

Sometimes it’s a knowledge issue – people who don’t have internet only know what media says about the internet (it’s a dangerous place full of scams, etc). They don’t want/need it because they don’t know what it is.

Others think it’s a tech issue. They are afraid of the technology/computers.

What’s needed to get next increment of new users may be combo of tech support/hand holding and public education about what the internet is.

What is the internet? Rainie says it’s personal, participatory, pervasive.

Rainie expands to discuss some hard data about the Digital Divide:

  • 79% of adults use the internet. (i.e., 21% don’t)
  • Of those who self-report as using the internet, 93% have email. This percentage of email users has stayed pretty constant. Even when only 50% of population had internet, 90% used email.
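Combining those two figures gives a rough sense of email’s reach among all adults. This is simple arithmetic on the percentages cited above, not a number reported in the talk itself:

```python
# Figures as cited in Rainie's talk
internet_share = 0.79        # share of U.S. adults who use the internet
email_share_of_users = 0.93  # share of internet users who have email

# Implied share of ALL adults who have email
adults_with_email = internet_share * email_share_of_users
print(f"About {adults_with_email:.0%} of all adults have email")  # -> About 73%
```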

He goes on to say that maybe we need to rethink how we define the Digital Divide and access issues. How do you count people who use only mobile web? Does having an internet-enabled mobile device lead to the same level of access?

Q. What’s the future of the Web?

Biggest challenge is the business model itself (e.g., compelling people to pay for access to content).

Rainie thinks Chris Anderson is onto something with the freemium idea/model.

He says the media world frequently gets slammed for being slow to embrace the internet, but that this isn’t a fair characterization. It’s the advertising world that hasn’t figured it out; the editorial side of things has innovated tremendously. It’s not fair to knock the editorial side of publishing for not getting it or for not jumping on opportunities fast enough.

This presentation about what the “experts” predict will be the future of the internet is a great segue into the next panel, which is supposed to cover “The Future of Media”… stay tuned. (Although if you were at the event, you know the next session didn’t go so well.)

Comments Off

Category: Data

What Information Sources Consumers Trust

Posted by Melanie Phung on Monday, December 15, 2008 at 12:01 pm

Newly released survey data from Forrester sheds light on some things that reinforce what we already suspected (consumers don’t trust company blogs) and some things that I find quite surprising: Who Do Consumers Trust?

Forrester graph via the Groundswell blog.

According to the survey, people trust email from people they know and consumer ratings – not a surprise. The third most trusted source is search engine results (mwuahahaha… just kidding), with half of Forrester’s respondents putting a high level of trust in the likes of Google. Somewhat ironically, only a third of people trust Wikipedia as an information source even though Wikipedia.org tends to be at the top of Google’s search results practically by default.

What I find flabbergasting as you go down the list is that more people trust things like Facebook’s Friend Feed than trust online content sites like the New York Times’ website! I mean, come on.

I get that people are leery of corporate blogs (only 16% said they trusted company blogs as information resources), but more people place trust in message boards (which are open to manipulation and spammers) and personal blogs (ditto) than company blogs, which at least have a brand to protect and generally tend to be fully transparent by virtue of being part of the company’s own site.

This confirms my suspicion that the average consumer of information is simultaneously paranoid and naive about which information sources are trustworthy. I mean, sure, most corporate blogs aren’t very good and tend to lack personality or worthwhile content, but does the fact that they tend to rehash press releases make these blogs inherently untrustworthy sources of information? Less trustworthy than, say, a message board?

Rohit Bhargava, the authors of Groundswell, and a few others have some thoughts on how corporate bloggers can win consumer trust. (But maybe someone else can address how NYTimes.com can improve its trust factor.)

Comments (3)

Category: Data

Have You Found Jesus on My Blog?

Posted by Melanie Phung on Monday, November 24, 2008 at 11:54 am

We all know that panel data can produce some odd results when sample size is really small — exaggerating trends that in reality might not signify anything at all or missing some data altogether. In the case of reporting search traffic to my site, however, the panel data from Compete.com seems to be pointing at something that really doesn’t exist.
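The small-sample effect described above can be sketched with a toy simulation. All numbers here are hypothetical; the point is only that when a panel observes a handful of visits, one stray observation gets extrapolated into a huge share:

```python
import random

# Hypothetical numbers: a site gets 10,000 search visits, and the
# term in question truly drives none of them.
all_visits = ["some other term"] * 10_000

# A panel service observes only a tiny sample of those visits.
panel_sample = random.sample(all_visits, 4)

# One stray, mis-attributed observation sneaks into the sample...
panel_sample[0] = "jesus christ"

# ...and the panel extrapolates it to the whole site.
estimated_share = panel_sample.count("jesus christ") / len(panel_sample)
print(f"Panel-estimated share: {estimated_share:.0%}")  # 1 of 4 visits -> 25%
```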

Check it out:

Compete.com says that the term “Jesus Christ” is responsible for one quarter of my search traffic.

Obviously I don’t expect Compete’s free data to match my (also free) analytics program perfectly, but I can say with a lot of confidence that this data appears to be sampling something that can’t possibly exist.

Now before you go around decrying me as a heathen and a heretic, my point here isn’t that Jesus Christ doesn’t exist … simply that he certainly does not exist on this blog.

I’ve never, ever used the phrase “Jesus Christ” on this site. Until now, of course. No, in general I tend to favor exclamations like Jeebus! or Good Gawd! or Sweet Lawd Almighty!

Nor have I, to the best of my knowledge, ever been Googlebombed with that term.

In short: I do not rank, and there’s no reason for me to rank, for the search term [jesus christ] — And showing up in search results would seem to be a prerequisite for driving search traffic.

Here are the terms that drive search engine visits according to my analytics program (although no single term drives anywhere close to 25% of my search traffic):

See? No Jesus.

While I do use Compete.com for research and competitive intelligence, I’m going to be taking their data with an even larger grain of salt. Their data isn’t just skewed; in some cases it’s patently wrong.

Post Script

The idea that someone looking for Jesus Christ would find Him on my blog struck me, frankly, as insane. But I hear He works in mysterious ways, so I’m just going to go ahead and go with it.

Jesus Christ Loves All-About-Content.com

Tagged: sacrilegious

Comments (14)

Category: Data,Navel-Gazing

Insights Into Signal and Noise

Posted by Melanie Phung on Wednesday, August 6, 2008 at 9:32 pm

Google just released a service called Google Insights for Search, which is basically data porn for marketers. Good-bye WordTracker, comScore, Compete and whatever other hodgepodge of free tools we’ve made do with over the years; now we can be even more dependent on the GOOG.

Google Insights compares the volume of search traffic (normalized against a baseline, not in absolute terms) over any period of time, maps it against news items, lets you break data out by state and city, and even gives you related search terms.

You can compare search volume of individual terms in various locations or compare two time periods. Like Zeitgeist, it also shows you the top 10 most popular searches of any time period and those rising in popularity.

And since search terms can be so ambiguous depending on what topic you’re looking at, Google Insights lets you filter ALL this info by categories. If you’re logged into your Google account, you get numerical scores (because you’re already giving them info on what you search for, what sites you own, how much traffic they get, what they’re about, how you’re advertising them, what terms are most profitable for you… you might as well tell them what keyword terms you’re researching).
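The “normalized against a baseline, not in absolute terms” point is worth unpacking. Google doesn’t publish its actual scaling method, so the function below is purely illustrative: it shows the general idea of an index where only relative volume survives, with the busiest period pegged at 100.

```python
def normalize(volumes):
    """Scale raw weekly search counts to a 0-100 index, where 100 is
    the busiest week in the period; only relative volume survives."""
    peak = max(volumes)
    return [round(100 * v / peak) for v in volumes]

# Made-up weekly counts for one query
weekly_searches = [1200, 3400, 2900, 5100, 4800]
print(normalize(weekly_searches))  # -> [24, 67, 57, 100, 94]
```

This is why two terms can only be compared relative to each other within one query, not in absolute search counts.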

Signal versus Noise

To test drive this sucker, I chose a topic that’s been of particular interest to me lately: signal vs. noise. I limited the query to U.S. users only.

Google Insights indicates that there’s been a huge spike in searches for both terms in recent weeks, but searches for noise continue to outnumber searches for signal. However, the silver lining is that interest in signal appears to be at a three-year high.

Of those interested in signal, residents of these cities are the most interested:

  1. Los Angeles
  2. Irvine
  3. Washington
  4. St Louis
  5. Austin

The most interested in the popular subject of noise were residents of:

  1. San Francisco
  2. Pleasanton
  3. Boston
  4. New York
  5. San Diego

When comparing interest in both terms in a single city, Google Insights reveals that within Washington DC, searchers are more interested in noise than they are in signal, but their interest in signal is high relative to the rest of the country.

In terms of subregions, only California shows up in the Top 10 states for searches on both signal and noise, but interest in noise does edge out signal by a little bit (I blame it on the Southern Californians).

There are many more ways to break these data down, but the big picture is pretty clear. Plain as day.

Google shows quantitative proof that Americans consistently seek out fluff over substance. Except Tennessee… God Bless Tennessee.

Comments (5)

Category: Data

More Searching, Less Communicating

Posted by Melanie Phung on Monday, February 18, 2008 at 2:15 pm

According to Nielsen NetRatings and the Online Publishers Association, the proportion of time users are spending on search-related activities increased noticeably at the end of 2007, at the expense of communication activities like email and IM.

Comments (2)

Category: Data,User Behavior