"Leaders of the nation's biggest technology firms warned President Obama ... at the White House on Tuesday that National Security Agency spying programs are damaging their reputations," the Washington Post reported Dec. 17. The executives — who included representatives from AT&T, Google and Facebook — "also pressed the need for transparency and for limits on surveillance," the Post disclosed.
I know what you're thinking:
Facebook? Wants limits on surveillance? And worries spying may damage its reputation? FACEBOOK?
While government surveillance makes front-page news, Americans yield personal data to the private sector every day. We worry about Big Brother reading our mail, while leaving our diaries right where our little brothers can read them.
But there are limits to that trust, too ... as a Carnegie Mellon University doctoral student found out this winter.
Earlier this year, Ph.D. student Sauvik Das co-wrote a paper titled "Self-Censorship on Facebook" with Adam Kramer, a Facebook employee. In the study, Facebook staffers tracked 3.9 million users over 17 days to see how often they engaged in "last-minute self-censorship" — typing a message but then deleting it before posting.
People self-censor on Facebook, it seems, far more often than you'd guess from what is posted there. Das and Kramer found that some 71 percent of users engaged in the practice: Men were more likely than women to do so, while "users with more politically and age diverse friends censor less."
In April, those findings received some gee-whiz online coverage from The Atlantic magazine and the Huffington Post. Then all went quiet ... until a couple of days before that White House meeting. That's when Slate.com posted an article whose headline warned, "Facebook wants to know why you didn't publish that status update you started writing."
Writer Jennifer Golbeck was less interested in Das' findings than in his methodology. Facebook researchers could tell if you'd self-censored, she noted, by accessing code in your browser that tracks what you type. (A similar function allows Gmail to store your email drafts automatically.) Das' paper stressed that Facebook "record[ed] only the presence or absence of text entered, not the ... content," but Golbeck seemed unconvinced: "The same code Facebook uses to check for self-censorship can tell the company what you typed," she warned.
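To see what such tracking might look like, consider a minimal sketch, in TypeScript, of the kind of browser-side event listener Golbeck describes. The element IDs, the logging endpoint and the five-character cutoff are my own illustrative assumptions, not Facebook's code; the paper itself says only that the presence or absence of typed text was logged.

```typescript
// Hypothetical sketch only: the IDs, endpoint and threshold below are
// assumptions for illustration, not Facebook's actual implementation.

const MIN_CHARS = 5;       // assumed cutoff for what counts as "entered text"
let textEntered = false;   // the only state kept: a flag, never the words
let posted = false;

const composer = document.querySelector<HTMLTextAreaElement>("#status-composer");
const postButton = document.querySelector<HTMLButtonElement>("#post-button");

if (composer && postButton) {
  composer.addEventListener("input", () => {
    // Record WHETHER enough text was typed, not WHAT was typed -- though
    // the listener plainly has composer.value in hand, which is Golbeck's point.
    if (composer.value.length >= MIN_CHARS) {
      textEntered = true;
    }
  });

  postButton.addEventListener("click", () => {
    posted = true; // the draft became a real post, so it wasn't self-censored
  });

  // On leaving the page, send a single presence/absence bit back home:
  // the user typed something but never published it.
  window.addEventListener("beforeunload", () => {
    if (textEntered && !posted) {
      navigator.sendBeacon(
        "/log/self-censorship",
        JSON.stringify({ selfCensored: true })
      );
    }
  });
}
```

What the sketch makes plain is the unsettling detail: even when only a boolean leaves the browser, the listener necessarily holds the full draft, which is exactly why Golbeck argues the same hook could transmit what you typed.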
Golbeck cited a line in Das' paper — the observation that if people self-censor, Facebook "loses value from the lack of content generation" — to suggest that its business model depended on prying open our craniums. Facebook, she added, might be even worse than the NSA. After all, the government "is monitoring things we have actually put online," while Facebook "is analyzing thoughts that we have intentionally chosen not to share."
Das was traveling when I tried to reach him. But his faculty adviser, Professor Jason Hong, says, "We were pretty surprised" by Golbeck's article. While Hong didn't work directly on the Facebook paper, he says that Golbeck "misinterpreted parts of the study."
For one thing, he says, Das' work need not lead to Big Brother demanding that we dish about last night's date. It could lead to better tools for managing social circles on the front end: "'Worlds colliding' is a well-known problem in Facebook: You have college friends, family friends, people from work all reading the same post," Hong says. "If a lot of people are self-censoring because of that, maybe we need better ways to help manage the problem."
But as someone who studies social media and privacy for a living, Hong is sympathetic to the concerns underlying Golbeck's piece. For one thing, he says, Facebook probably could access the text we type without posting it. "It's not that people are evil," he says, but "once your business model is advertising-based, you have an incentive to collect as much information as possible. There's so much data that can be collected, and this is an ongoing arms race," with users trying to protect their information while developers try to circumvent those efforts.
But on any social-media platform, he adds, "The same information can be used either to help us, or to harm us." (Hong himself has investigated using smartphones to help guard against depression, by tracking such factors as a sudden drop in phone conversations or late-night use that could reflect sleep problems.) Everything depends on striking "the right balance between the end-user's privacy, sustainable business models, and making sure people have informed consent."
The problem is that while sites like Facebook encourage users to be an open book, the sites are often guarded about their own agendas. Frequently, Hong says, "When people don't know why [personal information] is being used, they start getting paranoid."
Which might be why Golbeck's story has gotten far more attention than Das' findings received last spring. At least 53,000 Facebook users have "liked" the Slate piece so far. (Take that, Zuckerberg!) And as Hong says, "In social media, when people talk about this paper, it's often, 'Facebook is monitoring things you aren't submitting.'"
Which goes to show: If you're Facebook, maybe a bit of online self-censorship isn't so bad after all.