Tuesday, 4 July 2017

Twitter Said to Be Looking for Ways to Let Users Flag Fake News


HIGHLIGHTS

  • New feature would let users flag tweets with false or harmful information
  • Part of the company's uphill battle against rampant abuse on its platform
  • Twitter also said it was building new tools to remove hate speech
Twitter is exploring adding a feature that would let users flag tweets that contain misleading, false, or harmful information, according to two people familiar with the company's thinking.
The feature, which is still in a prototype phase and may never be released, is part of the company's uphill battle against rampant abuse on its platform. It could look like a tiny tab appearing in a drop-down menu alongside tweets, according to the people, who spoke on the condition of anonymity because they were not authorized to release details of the effort.
Twitter has been plagued by problems such as fake accounts, which can be purchased outright for pennies and used to spread automated messages and false stories. Extremists use the service as a recruiting tool, and hate-spewing trolls have threatened women and minorities.
These longstanding problems gained new urgency in the aftermath of the presidential election, when critics and researchers pointed to the toxic effect of social media on public debate. Two-thirds of American adults say fabricated news stories that spread on social media have caused a "great deal of confusion" about basic facts and public events, according to a December 2016 poll from the Pew Research Center. One estimate from the service Twitter Audit found that 59 percent of President Trump's followers are bots or fake accounts, while 66 percent of Hillary Clinton's are (Twitter does not comment on third-party estimates).
Twitter spokeswoman Emily Horne said the company had "no current plans to launch" the feature but would not comment on whether it was being tested. "There are no current plans to launch any type of product along these lines," she said.
But Horne insisted that the company has been addressing the problem. Earlier this month, Twitter said that it was expanding personnel, adding resources, and building new tools, but shared very few details about the effort.
Twitter is "working hard to detect spammy behaviors," Vice President of Policy Colin Crowell said in a blog post earlier this month. Such behaviors include automated accounts that retweet the same message over and over or all at once in a concerted effort to manipulate trending topics, he noted. "We've been doubling down on our efforts," Crowell said.
Facebook is also crowd-sourcing the fake news fight. In March, the company rolled out a tool that lets users flag content they think might be false by clicking a tab to dispute it. If enough people click "dispute," the story is sent to independent fact checkers that Facebook has partnered with.
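As a rough illustration of how such a crowd-flagging pipeline might be wired up, here is a minimal Python sketch. The threshold value, function names, and hand-off mechanism are assumptions for illustration only; Facebook has not published its exact logic.

```python
# Illustrative sketch of threshold-based dispute routing.
# DISPUTE_THRESHOLD and send_to_fact_checkers are hypothetical,
# not Facebook's actual values or API.
DISPUTE_THRESHOLD = 25  # assumed number of user flags before escalation

dispute_counts: dict[str, int] = {}

def send_to_fact_checkers(story_id: str) -> None:
    # Placeholder for the hand-off to partner fact-checking organizations.
    print(f"Story {story_id} queued for independent fact checking")

def record_dispute(story_id: str) -> None:
    """Count one user's 'dispute' click; escalate when the threshold is crossed."""
    dispute_counts[story_id] = dispute_counts.get(story_id, 0) + 1
    if dispute_counts[story_id] == DISPUTE_THRESHOLD:
        send_to_fact_checkers(story_id)

# Usage: each click on the "dispute" tab calls record_dispute("story-123").
```

The key design point the article describes is that the platform itself never rules on truth; crossing the threshold merely routes the story to independent fact checkers.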
Google has also asked the public to help spot pages that are misleading or offensive.
It was not clear how Twitter's crowd-sourcing feature would function, and the company is still researching how to design it, one person said. The process is moving slowly in part because of concerns that people could use the new button to game the system, the way other aspects of Twitter have been manipulated. Twitter's process for testing new features usually begins with prototyping. Employees then test the product internally, after which it is released to a small subset of the public before being launched across the service.
Another aspect of Twitter's developing efforts includes a focus on machine learning, a method in which software attempts to detect micro-signals from accounts to determine whether they are fake. For example, if an account tweeting political messages in English consistently came from an IP address in Russia, the company might take notice. The company could also look at whether certain accounts are frequently retweeted by people associated with credible or verified accounts, such as reporters at mainstream news organisations, or whether a news site has previously been associated with false information.
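To make the idea of combining such micro-signals concrete, here is a hedged Python sketch. The signal names, weights, and threshold below are hypothetical and chosen only to mirror the examples in the paragraph above; they are not Twitter's actual model.

```python
# Hypothetical sketch: combining simple account signals into a fake-account
# score. All signals, weights, and cutoffs are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    tweet_language: str              # dominant language of the account's tweets
    ip_country: str                  # country most login IPs resolve to
    verified_retweeter_ratio: float  # share of retweets coming from credible/verified accounts
    linked_domain_flagged: bool      # links to a site previously tied to false information

def fake_account_score(s: AccountSignals) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    # Language/location mismatch, e.g. English political tweets from a Russian IP.
    if s.tweet_language == "en" and s.ip_country == "RU":
        score += 0.4
    # Credible accounts rarely amplify fakes, so near-zero verified engagement is suspicious.
    if s.verified_retweeter_ratio < 0.01:
        score += 0.3
    # History of linking to domains associated with false stories.
    if s.linked_domain_flagged:
        score += 0.3
    return min(score, 1.0)

suspect = AccountSignals("en", "RU", 0.0, True)
print(fake_account_score(suspect))  # 1.0 -> flag the account for review
```

A production system would learn these weights from labeled data rather than hand-coding them, which is what the machine-learning framing in the article implies.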
Given the sheer scale of social media, curtailing this type of abuse is a formidable challenge even for wealthy tech companies. Twitter has more than 300 million monthly users, and Facebook announced earlier this week that it had reached 2 billion users. There is also a fine line between abuse and free speech, and between false and merely sensational content, and technology companies have struggled to define the problem. Philosophically, they have also been reluctant to accept their newfound responsibilities, as they do not want to be in the business of policing their users' free expression.
"We, as a company, should not be the arbiter of truth," Crowell wrote earlier this month, and emphasized that Twitter users "journalists, experts, and engaged citizens" tweet side by side to correct public discourse every day in real time.
Still, critics point out that Twitter and other social media companies are already arbiters, and, however reluctantly, have steadily been increasing their roles in policing content. A ProPublica investigation of Facebook's policies toward removing content shows decision-making that is becoming increasingly elaborate and sometimes contradictory. For example, the company decided that Facebook posts denouncing white men as racist would be removed while posts calling for the killing of "radical Muslims" would stay up because radical Muslims are a subset of a protected class. The policy only calls for protecting individuals who are part of a protected class, but not a subset, according to the ProPublica report.
