The 2020 Election Will Take Place on Two Different Internets
There are now more people online than ever before, using the internet more heavily than ever before, and the internet is no longer a sideshow or afterthought for political campaigning. Though old platforms getting usurped by newer, shinier ones with younger, more engaged users is not a new phenomenon — Facebook supplanted MySpace, after all — the balkanization of platform identity has never been on greater display than it will be in the upcoming election.
If Facebook represents a barely regulated arena of “angry boomer” politics, younger-skewing counterparts like Instagram, or Snapchat, or TikTok (all of which are mostly free from the feature bloat and confusing maze of services that Facebook offers, and which — as a result — do not suffer misinformation at the scale Facebook does) represent simplicity and clarity. If we assume that younger people tend to skew liberal, as studies have shown, then we can relatively safely assume younger-skewing app user bases will likely skew liberal as well. And if we assume younger people are less susceptible to misinformation, then we can assume the platforms they prefer are, too. It’s difficult to say how these platforms will affect political discourse. Instagram and Snapchat are arguably more about private messaging than algorithmic publishing, and that makes tracking sentiment and deploying programmatic advertising difficult, if not useless. A platform like TikTok, born out of teen-favorite app Musical.ly, is loaded with first-time voters who get their information almost exclusively from online sources. Whatever shape candidate messaging takes there will be interesting to see.
Our conception of filter bubbles in the past has been as balkanized communities within a single online platform, carved up by algorithmic recommendations. Red feed, blue feed. Going forward, we might see online communities separated into even stricter online buckets: different platforms entirely. This situation makes supposed cure-alls for discursive problems — exposing oneself to opposing viewpoints, civil debate, whatever — far more difficult to execute. How would you imagine an older person might encounter a younger person’s views on an app like Snapchat, where private messaging is the main form of communication, or TikTok, where … well, who knows what the hell Gen Z is doing on TikTok? For many boomers, Facebook might as well be the entire internet, and these sorts of demographic divides entrench Trump’s ability to activate older, passionate voters on Facebook, a group whose members are more likely to be conservative, without any counter.
Granted, it’s not like the platforms that appeal to all ages are any help. In the middle of these two poles, forming a sort of age-demographic no-man’s-land, are Twitter (a platform that’s loud and fast-moving, where fanatics talk past each other and boomers and millennials fight for supremacy) and YouTube (which has never met an audience it won’t cater to, short of literal Nazism). Those are hardly solutions for people who worry that the upcoming election, and online self-separation, will only make people angrier and more entrenched. For Trump, however, that’s part of a winning strategy.
New York Mag - Brian Feldman
WaPo - Brian Klaas
Deepfakes are coming. We’re not ready.
If 2016 was the election of “fake news,” 2020 has the potential to be the election of “deepfakes,” the new phenomenon of bogus videos created with the help of artificial intelligence. It’s becoming easier and cheaper to create such videos. Soon, those with even a rudimentary technical knowledge will be able to fabricate videos that are so true to life that it becomes difficult, if not impossible, to determine whether the video is real.
In the era of conspiracy theories, disinformation and absurd denials by politicians staring down seemingly indisputable facts, it is only a matter of time before deepfakes are weaponized in ways that poison the foundational principle of democracy: informed consent of the governed. After all, how can voters make appropriate decisions if they aren’t sure what is fact and what is fiction? Unfortunately, we are careening toward that moment faster than we think.
Deepfakes are created by something called a “generative adversarial network,” or GAN. GANs are technically complex, but operate on a simple principle. There are two automated rivals in the system: a forger and a detective. The forger tries to create fake content while the detective tries to figure out what is authentic and what is forged. Over each iteration, the forger learns from its mistakes. Eventually, the forger gets so good that it is difficult to tell the difference between fake and real content. And when that happens with deepfakes, those are the videos that are likely to fool humans, too.
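The forger-and-detective loop described above is easier to see with a toy numerical sketch. Everything below is illustrative and hypothetical (a one-dimensional "dataset," hand-rolled Forger and Detective classes, simplistic update rules), not a real GAN implementation; but the adversarial dynamic, each side improving against the other until the fakes pass inspection, is the same idea in miniature:

```python
import random

# Toy adversarial training in one dimension.
# Real samples cluster near REAL_MEAN; the forger emits numbers, and the
# detective judges a sample "real" if it falls close to where the detective
# currently believes real data lives. All names here are illustrative.

REAL_MEAN = 5.0

def real_sample():
    # A genuine data point: the target value plus a little noise.
    return REAL_MEAN + random.uniform(-0.5, 0.5)

class Detective:
    def __init__(self):
        self.estimate = 0.0  # current belief about where real data lives

    def is_real(self, x):
        return abs(x - self.estimate) < 1.0

    def learn(self, x):
        # Nudge the estimate toward each genuine sample it sees.
        self.estimate += 0.1 * (x - self.estimate)

class Forger:
    def __init__(self):
        self.guess = 0.0  # the forger's current fake output

    def fake_sample(self):
        return self.guess

    def learn(self, detective):
        # Try small perturbations and keep whichever the detective likes
        # best. (A real GAN's generator only sees the discriminator's
        # verdict via gradients; peeking at the estimate is a shortcut.)
        candidates = [self.guess + d for d in (-0.2, 0.0, 0.2)]
        self.guess = min(candidates,
                         key=lambda c: abs(c - detective.estimate))

random.seed(0)
d, f = Detective(), Forger()
for _ in range(200):        # each iteration, both rivals improve
    d.learn(real_sample())  # detective studies genuine data
    f.learn(d)              # forger adapts to fool the detective

print(d.is_real(f.fake_sample()))  # True: the forgery now passes as real
```

After a couple hundred iterations the forger's output has drifted from 0 to roughly the real data's neighborhood, and the detective can no longer tell it apart from a genuine sample, which is the same convergence the article describes for video-generating GANs, just stripped to a single number.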
Of course, fakes and forgeries are not new. Whether it was the Soviets airbrushing out “undesirables” or Hollywood special effects, convincing imitations of reality have been around for a while. But in both instances, there were only a few masters of the trade who could pull off a convincing fake. Deepfakes, on the other hand, require little technical expertise, meaning that virtually anyone with the right software will be able to make a fake video of just about any person seemingly saying whatever they want.
That democratization of forgery is just around the corner. “I would say within another 18 to 24 months, that technology is going to get to a point where the human brain may not be able to decipher it,” Hany Farid, a professor of computer science at Dartmouth College, recently told me. Soon, the forger will consistently fool us.
Given how poorly our democracies performed with easily debunked fake-news articles about, say, the pope endorsing President Trump, the prospect of having to question videos that we can see with our own eyes is even more harrowing.
Mark Zuckerberg personally made the decision that Facebook would keep running political ads, even though the ads were weaponized in 2016
As the US gears up for its next presidential election in 2020, citizens can expect to keep seeing political ads on Facebook.
The company will continue to run political ads, The Wall Street Journal reported on Thursday, even as it tries to move past the scandals from its social-networking platforms being used to manipulate voters in major elections worldwide in recent years.
Facebook is making some changes to protect its political ads from being exploited by Russia or other bad actors. The company will no longer pay commissions to salespeople for selling political ads, The Journal reported.
That's a big about-face from 2016, when Facebook not only paid commissions but also embedded its staffers into campaigns to help them with their Facebook targeting strategies. It offered such white-glove service to both the Trump campaign and the Clinton campaign. Since Trump's campaign was smaller and less digitally savvy, it used this service heavily. Brad Parscale, Trump's 2020 campaign manager, even praised Facebook for helping it raise money, Wired reported in 2016. And Trump's digital-advertising director, Gary Coby, called one of Facebook's staffers his MVP.
Political ads were once viewed as a promising growth area for Facebook, but now, as Facebook faces increasing scrutiny over how it handles user data, such ads have fallen out of favor internally, one employee told The Journal.
Doctored Pelosi videos offer a warning: The internet isn't ready for 2020
The Pelosi videos and their narratives were not the product of advanced technology, nor did they take a different route to prominence than previous misinformation efforts.
The 2020 election is set to face some very 2016 challenges when it comes to the spread of misinformation.
The emergence of distorted videos of House Speaker Nancy Pelosi, edited to make her appear to have trouble speaking, has provided a stark reminder that technology often remains an enemy of truth in politics, just as it was four years ago. The core issues of social media virality, confirmation bias and the fringe internet-to-conservative media pipeline have endured from 2016 and do not even need particularly sophisticated techniques to do real mischief.
The videos also offer a warning that concerns about election interference from foreign countries should not overshadow the ability of domestic actors to influence what people see, hear and think. President Donald Trump himself distributed one of the carefully edited videos on Twitter on Friday morning, and though he denied knowing that they were altered, he continued pushing their underlying theme that Pelosi is somehow impaired.
Alex Stamos, Facebook’s former chief security officer, said that the Pelosi video illustrated two of the biggest risks ahead of the 2020 election: “shallow fakes,” which use real, often only lightly edited footage to manipulate the discourse, and the power of domestic misinformation efforts.
“A video that is slightly deceptively edited can be the source of very divisive and possibly false narratives,” said Stamos, who is an NBC News contributor and adjunct professor at Stanford University. “With all of our focus on the Russians, we’ve forgotten that domestic actors are the most likely to have insight into the kinds of narratives that will become viral and will have a big impact on people’s beliefs in American institutions and individuals.”
Why it’s so alarming that Trump shared an edited video of Pelosi
DID PRESIDENT TRUMP share a fake clip of Nancy Pelosi? The seemingly simple question is a vexing one to answer. Passing judgment on his behavior is less challenging.
Conservative accounts on social media circulated a clip this week deliberately distorted to make it seem as if the speaker of the House was slurring her speech: “Drunk as a skunk,” commentators declared. The video, which some declared a “deepfake,” employed much too simple technology to merit that term. Deepfakes use artificial intelligence to synthesize human images into a reality that is entirely fabricated; the smear of Ms. Pelosi merely slowed down parts of an existing interview and modified her pitch.
The clip Mr. Trump tweeted alongside the words “PELOSI STAMMERS THROUGH NEWS CONFERENCE” was part of the same narrative, but it was not distorted, or even doctored, so much as it was edited. The clip splices together short segments of Ms. Pelosi (D-Calif.) stuttering in a lowlight reel that offered a misleading impression of a perfectly coherent 21-minute news conference. Mr. Trump did not make this video, or pull it from the right-wing fever swamps of social media. He took it instead from the fever swamp of Fox Business Network.
The clamor for firms such as YouTube, Facebook and Twitter to remove or limit the distribution of these clips as misinformation invites a vexing debate about what counts as fake in the first place. The “slurring” video, accompanied by manufactured accusations of drunkenness, may fall on one side of the line. The stammering video may fall on the other. But drawing that line at all has far-reaching implications. People edit videos all the time, sometimes for fun and sometimes to prove a political point. When does editing become doctoring, and when does doctoring become distorting? Is distorting always impermissible, or does it depend on intent, effect or something else altogether?
These difficulties both are caused by and contribute to the erosion of trust in today’s America, where it is hard to say what there is more of: false cries of “fake news,” or viral “news” that is actually fake. Technology certainly has helped this issue along, providing both an easy means to craft propaganda and an easy means to promote it. The increasing sophistication of image editing that creates the threat of actual deepfakes filling the Web will make that worse.
DNC urges 2020 campaigns to 'immediately' delete Russian-developed FaceApp from their phones
Don't expect to see any FaceApp memes coming from the 2020 Democrats anytime soon.
The Democratic National Committee on Wednesday urged every 2020 campaign not to use FaceApp, the popular app that allows users to apply filters to photos and has recently been used on social media to age-up pictures, noting it was "developed by Russians," CNN reports. The app was created by Wireless Lab, which is based in Russia.
The DNC's chief security officer, Bob Lord, told the 2020 campaigns the organization has "significant concerns about the app (as do other security experts) having access to your photos, or even simply uploading a selfie." Concerns were previously raised about the app, which notes in its privacy terms that by using it, you "consent to the processing, transfer and storage of information about you in and to the United States and other countries," The Washington Post reports. The company said on Wednesday that "the user data is not transferred to Russia."
Lord said in his warning to 2020 campaigns that "it's not clear at this point what the privacy risks are," per CNN, but that "the benefits of avoiding the app outweigh the risks."
well, why should they have to transfer your data to Russia when they have all the trolls and Russian intelligence they need right here in the good ole US of A?
"I know that human being and fish can coexist peacefully"
--- George W Bush
Insert signature here: ____________________________________________________
A Trump social network readies for launch
Sometimes Twitter isn’t enough.
After spending years alleging anti-conservative bias on social media, President Donald Trump will soon have another way to get his message out how he wants.
Trump’s reelection campaign plans to launch a smartphone app this fall to encourage supporters to donate, volunteer and reel in like-minded voters — all while providing the president more unfiltered access to his followers. Supporters who download the all-in-one app are expected to be able to sign up for a Make America Great Again rally, canvass a neighborhood or call voters, maybe even register to vote as the campaign looks to turn passive supporters into activists.
Perhaps the most important feature will be the app’s use of prizes — maybe VIP seats or a photo with Trump — to persuade the most fervent supporters to recruit their friends, rewarding them as campaigns have been doing for top donors for years, according to people familiar with the plans.
The upcoming launch is the latest sign of how Trump’s team, which ran a ragtag operation in 2016, is using its huge coffers to drive a more professional and data-driven operation. While campaigns have had apps for years, the Trump app is expected to let the president’s team track followers in a more comprehensive way than ever before in its bid to secure the president another White House term.
That reminds me of those marks who signed up with Trump University, where one of the perks was supposed to be a photo taken with Trump so you would meet him and such. What they actually got was a photo taken with a cardboard stand-up of Trump. You think Fuckface (nothing against faces) would stand there for a couple of hours to meet a few thousand of his biggest fans?
The difference between the Middle Ages, and the Age of the Internet, is that in the Middle Ages no-one thought the Earth was flat.
Inside Elizabeth Warren's Selfie Strategy
In late March, Jocelyn Roof, a sophomore at the University of Iowa, picked up a call from an unknown number and heard Senator Elizabeth Warren’s voice on the other end of the line. Warren asked her what got her “in this fight”—Roof said she was very concerned about income inequality—and thanked Roof for her $25 donation.
As soon as she got off the phone, Roof took a selfie of her shocked face. She posted it to Snapchat with the caption “MY WHOLE LIFE WAS MADE,” then screenshotted it and posted it to Twitter. Warren retweeted the selfie, with the comment “I’m so glad we got to talk!”
Before the call, Roof said she was undecided about who she would support in the Iowa caucuses. Afterwards, her enthusiasm for Warren “skyrocketed,” she says. “I couldn’t imagine voting for anyone else.” She started donating $5 to the campaign every month and buying snacks for volunteers at field offices. In the first three weeks of September, she registered more than 1,000 new voters on her University of Iowa campus, independent of the Warren campaign.
On Sept. 20, Roof was wearing a “Women for Warren” shirt as she waited in line to take a selfie with the Senator after a rally on her campus. The call and the tweet had made a young undecided Iowa voter into an avid supporter, grassroots donor and potential volunteer.
As Russia makes 2020 play, Democratic campaigns say they are in the dark, and experts fear U.S. elections are vulnerable
Several Democratic presidential campaigns targeted by a Russia-based operation on Facebook’s popular Instagram app said they had been unaware of the new foreign disinformation efforts until the tech giant announced them publicly last week, raising alarms that American democracy remains vulnerable to foreign interference even after three years of investigations into the Kremlin’s attack on the 2016 election.
The lack of advance notice to the apparent victims of the first-known attempts by Russians to interfere directly in the 2020 race has heightened fears that campaigns are largely on their own when it comes to guarding against attacks from foreign interests.
Campaign officials, security experts and Democratic lawmakers said the latest material served as a warning sign that the Trump administration and the tech industry are still struggling to coordinate their response as threats to the U.S. political system intensify. In particular, the threats now emanate from multiple countries, including Iran and China, where malicious actors have adopted Russia’s playbook in a bid to manipulate social media to their political advantage.
Some said they were unnerved by the nature of the recent Instagram posts, which seemed to target battleground states and demonstrated a nuanced understanding of the dynamics at play in the 2020 Democratic primary. They appeared, for instance, to stoke African American resentment of former vice president Joe Biden while tapping into themes designed to undercut Sens. Elizabeth Warren (Mass.) and Kamala D. Harris (Calif.), as well. The Russian network appeared to be relatively small and in an audience-building mode, analysts said.
“The Russians are repeating the same tactics they used during the 2016 election but only growing more strategic in identifying divides and capitalizing on those divides to create fault lines in society and distrust between people and institutions,” said Ali Soufan, a former longtime FBI agent who wrote a report in May for the Department of Homeland Security that warned, “To date, the United States has no national strategy to counter foreign influence.”
Opponents of Elizabeth Warren spread a doctored photo on Twitter. Her campaign couldn’t stop its spread.
The reaction to the tweet, viewed hundreds of times, offers a hard lesson: Homespun disinformation campaigns on social media represent a rising threat
A tweet from liberal activists touting Elizabeth Warren drew what seemed like a typical response from one of the Democratic presidential candidate’s fans this September:
“Thank you for endorsing Elizabeth Warren!!!” the user wrote, sharing a photo of black women holding “African Americans with Warren” signs.
The post gained only a single retweet at the time. But it found new life this past weekend, making its way to sharp-eyed Twitter users who realized it was fake, with the campaign placards photoshopped over Black Lives Matter signs.
Twitter users seized on a side-by-side comparison of the doctored version and the original, assailing the Warren campaign for the apparent misrepresentation. What they did not realize was that the account that had propagated the photo has been identified by the Warren campaign as a “troll,” only feigning support for the Massachusetts Democrat as it pushed out falsified content in an apparent effort to undermine her candidacy. ...
As the image solidified negative views of Warren among some who favor other Democratic candidates, the incident offered a fresh lesson about political disinformation: Homespun operations on social media represent a rising threat, capable of inciting conflict among voters and turning unwitting users into agents of online deception.
Steyer buys 'Keep America Great' domain name
SAN FRANCISCO — Tom Steyer's presidential campaign says the Democratic billionaire candidate landed a special Cyber Monday deal — nabbing the www.keepamericagreat.com domain name under his campaign's own branding.
The Steyer campaign, in a release Monday, said, "Trump’s campaign prides itself on hoarding websites of political opponents, but they forgot to pick up the URL for their signature re-election slogan, 'Keep America Great.'"
The result: A visit to the website with the Trump slogan now reveals the headline, "Trump is a fraud and a failure." It offers the opportunity to purchase a bumper sticker which the campaign says "highlights what a majority of Americans already know about Donald Trump": that he's "borrowed billions of dollars to bankrupt businesses."
It's unclear how much Steyer paid for the domain name. Web records indicate that keepamericagreat.com was created on June 25, 2015, nine days after Trump formally launched his 2016 presidential campaign with the "Make America Great Again" slogan. The records also show the domain registration was updated Sunday.
Bernie Sanders Unveils Sweeping 'High Speed Internet For All' Proposal
Sen. Bernie Sanders (I-Vt.), a 2020 Democratic presidential hopeful, unveiled on Friday a plan that would provide funding to municipalities, states, and others to build their own publicly owned broadband networks in an attempt to bridge the digital divide in the United States.
The plan also calls for using “existing antitrust authority” to break up internet and cable companies.
In the plan, called “High-Speed Internet For All,” Sanders essentially proposes treating broadband like a public utility. As part of this, Sanders said, if elected, he would provide $150 billion in grants to localities to build “publicly owned and democratically controlled, co-operative, or open access broadband networks.”
“It’s time to take this critical 21st century utility out of the hands of monopolies and conglomerates and bring it to the people while creating good-paying, union jobs at the same time,” the plan reads. “This is not a radical idea. Cities across the country deliver municipality-owned, high-speed internet to their residents, from Chattanooga, Tennessee, to Lafayette, Louisiana.”