Government in the Fake News Business
Fake news is the promotion and propagation of false or misleading news articles via social media. These articles are promoted in such a way that they appear to be spread organically by other users, as opposed to being paid-for advertising. The stories distributed are designed to influence or manipulate users’ opinions on a certain topic toward certain objectives.
The term “fake news” became increasingly common during the past year. While this concept has many synonyms—disinformation campaigns, cyber propaganda, cognitive hacking, and information warfare—it’s just one facet of the bigger problem: the manipulation of public opinion to affect the real world. Thanks to the connectivity and digital platforms that make it possible to share and spread information, traditional challenges such as physical borders and the constraints of time and distance no longer exist. Unfortunately, that same connectivity also makes it easier to manipulate the public’s perception of reality and thought processes, resulting in the proliferation of fake news that affects our real, non-digital environment. Each new incident shows how much impact the technological manipulation of public opinion can have on people’s daily lives. (TrendMicro)
[Image: How an operator employs or abuses underground, gray, and legitimate marketplaces to disseminate fake news]
Volumes of articles have been written on the dangers of a draconian censorship regime that allows the government to regulate free speech, and the consequences have been disastrous each time that freedom has been challenged. Opinions vary with the context being discussed, but one fact remains irrefutable:
The First Amendment in the Bill of Rights guarantees FREE SPEECH
“The First Amendment (Amendment I) to the United States Constitution prevents Congress from making any law… abridging the freedom of speech, the freedom of the press…”
History has proven how dangerous it is to have the government regulate fake news. That the deliberate dissemination of false information is a menace to any society is not being argued. But the strategy used to fight false statements may be even worse than the lies themselves. Do we want the government to get involved in the process of verifying information and removing what it defines as “fake news”? Media companies that do not comply being fined? Citizens sued for libel for expressing an opinion? World agencies have addressed this very issue as well:
“…as such does not prohibit discussion or dissemination of information received even if it is strongly suspected that this information might not be truthful. To suggest otherwise would deprive persons of the right to express their views and opinions about statements made in the mass media and would thus place an unreasonable restriction on the freedom of expression.”
The current issue has arisen from a proposal published by Berkeley lecturer Ann Ravel, ex-chair of the Federal Election Commission, that has strong Democratic support. She, along with co-author Abby Wood and others, believes there is support for expanded regulation in the wake of reports that foreign governments spent $100,000 on 2016 political ads on Facebook. Under the proposal, political content on the internet, whether paid or not, would face substantial federal regulation to eliminate undefined “disinformation,” and users of platforms and news feeds (from Facebook, to Twitter, to the Drudge Report, and even The New York Times) could be punished for sharing “fake news” from those sites.
This would undermine the use of the Internet as a forum for free speech.
The proposal would subject “fake news,” not just paid ads, to regulation, though the term is never defined beyond the authors’ description of “disinformation.” And anybody who shares or retweets such content could face a libel suit. The regulation would also target people who share stories deemed fake or disinformation by government regulators.
If adopted, a “social media user” would be flagged for sharing anything deemed false by regulators:
“after a social media user clicks ‘share’ on a disputed item (if the platforms do not remove them and only label them as disputed), government can require that the user be reminded of the definition of libel against a public figure. Libel of public figures requires ‘actual malice,’ defined as knowledge of falsity or reckless disregard for the truth. Sharing an item that has been flagged as untrue might trigger liability under libel laws.”
“I have been writing about the threat to free speech coming increasingly from the left, including Democratic politicians. The implications of such controls are being dismissed in the pursuit of new specters of ‘fake news’ or ‘microaggressions’ or ‘disinformation.’ The result has been a comprehensive assault on free speech from college campuses to the Internet to social media. What is particularly worrisome is the targeting of the Internet, which remains the single greatest advancement of free speech of our generation. Not surprisingly, some governments see the Internet as a threat while others seek to control its message.” (Jonathan Turley)
EXCERPTS from Proposal: Fool Me Once: The Case for Government Regulation of “Fake News”
Fake News is not “news”; it is native political advertising. Disinformation, spread under the guise of “news”, is particularly confusing to voters. Fake news, or, as we call it, “disinformation advertising”, undermines voter competence, or voters’ ability to make the choice that is right for them. Regulations to address it should aim to improve voter competence in three ways: (1) reduce the cognitive burden on the voter by reducing the amount of disinformation to which they’re exposed; (2) educate and nudge social media users in order to inoculate voters from the negative effects of the disinformation and teach them how to avoid unintentionally spreading it; and (3) improve transparency to facilitate speech that counters disinformation after it is spread, creating the possibility that voters will receive “corrected” information. The transparency improvements we recommend will conform disclosure requirements for online advertising to requirements for broadcast, cable, satellite, and radio political ads.
Government regulation is particularly powerful and useful when it solves informational deficits… We also recommend ways government can “nudge” and educate social media users to help stop the spread of disinformation.
Create a Single Repository for all Versions of Online Ads. Government should require all platforms to save and post every version of every political ad placed online, whether placed “for a fee” or not. The ad should be placed in a dedicated repository, and all advertisers must provide a link to the repository on their home pages and social media pages.
Educate social media users. Social media users can unintentionally spread disinformation when they interact with it in their newsfeeds. Depending on their security settings, their entire online social network can see items that they interact with (by “liking” or commenting), even if they are expressing their opposition to the content. Social media users should not interact with disinformation in their feeds at all…
Platform self-regulation is important but insufficient (and) there are many other things the platforms themselves can do to help reduce the quantity of disinformation and help reduce the decay in voter competence caused by disinformation advertising. They can start with enforcing their terms of service and identifying and labeling disinformation advertising and those who spread it.
TrendMicro offers these tips:
What can Readers Do to Combat Fake News?
Ultimately, users are the first line of defense against fake news. In a post-truth era where news is easy to manufacture but challenging to verify, it’s essentially up to the users to better discern the veracity of the stories they read and prevent fake news from further proliferating.
Here are some signs users can look out for to determine whether the news they’re reading is fake:
• Hyperbolic and clickbait headlines
• Suspicious website domains that spoof legitimate news media
• Misspellings in content and awkwardly laid-out websites
• Doctored photos and images
• Absence of publishing timestamps
• Lack of author, sources, and data
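The red flags above can be approximated programmatically. The following is a minimal, illustrative Python sketch, not a real fake-news detector: the heuristics, thresholds, word lists, and the `article` field names are all assumptions chosen for illustration.

```python
import re

# Assumed, illustrative signal lists; a real system would use far richer data.
CLICKBAIT_WORDS = {"shocking", "unbelievable", "destroys", "you won't believe"}
SPOOF_SUFFIXES = (".com.co", ".co.cc")  # domains that mimic legitimate outlets


def red_flags(article: dict) -> list:
    """Return the red flags found in an article represented as a dict with
    (assumed) keys: 'headline', 'domain', 'body', 'author', 'published'."""
    flags = []

    headline = article.get("headline", "").lower()
    if headline.endswith("!") or any(w in headline for w in CLICKBAIT_WORDS):
        flags.append("hyperbolic/clickbait headline")

    if article.get("domain", "").endswith(SPOOF_SUFFIXES):
        flags.append("suspicious domain spoofing a legitimate outlet")

    # Crude proxy for garbled/misspelled content: unusually long "words".
    words = re.findall(r"[A-Za-z]+", article.get("body", ""))
    if words and sum(len(w) > 20 for w in words) / len(words) > 0.01:
        flags.append("garbled or misspelled content")

    if not article.get("published"):
        flags.append("no publishing timestamp")
    if not article.get("author"):
        flags.append("no author or sources listed")

    return flags
```

Such heuristics can only flag articles for closer human review; they cannot judge truthfulness, which is precisely why the reader-side diligence below still matters.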
Apart from identifying red flags, readers should also exercise due diligence such as:
• Reading beyond the headline
• Cross-checking the story with other media outlets to see whether it is also reported elsewhere
• Scrutinizing the links and sources the article uses to back up its story, and confirming those aren’t spreading misinformation themselves
• Researching the author, or where and when the content is published
• Cross-referencing the content’s images to see if they’ve been altered
• Reviewing the comments, checking their profiles (if they’re real or bots), and observing the timestamps between comments (i.e. see if a paragraph can be written and posted in a minute or less, or if previous comments were posted verbatim, etc.)
• Reading the story thoroughly to confirm it isn’t satire, a prank, or a hoax
• Consulting reputable fact checkers
• Getting out of the “filter bubble” by reading news from a broader range of reputable sources; stories that don’t align with your own beliefs don’t necessarily mean they’re fake
That leaves the targets of fake news: the general public. Ultimately, the burden of differentiating truth from untruth falls on the audience [not the government]. The pace of change means the public’s acquired knowledge and experience are less useful in finding the truth. Our hope is that by becoming aware of the techniques used in opinion manipulation, the public will become more resistant to these methods. Awareness of these techniques can also help institutions such as governments and credible media outlets determine how best to counteract them. Applied critical thinking is necessary not only to find the truth, but to keep civil society intact for future generations.
Featured Image credit to TrendMicro