We’re fighting fake news AI bots by using more AI. That’s a mistake.

Any time you log on to Twitter and look at a popular post, you’re likely to find bot accounts liking or commenting on it. Click through and you can see they’ve tweeted many times, often in a short time span. Sometimes their posts are selling junk or spreading digital viruses. Other accounts, especially the bots that post garbled vitriol in response to particular news articles or official statements, are entirely political.

It’s easy to assume this entire phenomenon is powered by advanced computer science. Indeed, I’ve talked to many people who think algorithms driven by machine learning or artificial intelligence are giving political bots the ability to learn from their surroundings and interact with people in sophisticated ways.

During events in which researchers now believe political bots and disinformation played a key role—the Brexit referendum, the Trump-Clinton contest in 2016, the Crimea crisis—there is a widespread belief that smart AI tools allowed computers to pose as humans and help manipulate the public conversation. 

Pundits and journalists have fueled this perception: there have been extremely provocative stories about the rise of a “weaponized AI propaganda machine” and stories claiming that “artificial intelligence conquered democracy.” Even my own research into how social media is used to mold public opinion, hack truth, and silence protest—what is known as “computational propaganda”—has been quoted in articles that suggest our robot overlords are already here.

The reality is, though, that complex mechanisms like artificial intelligence have played little role in computational propaganda campaigns to date. All the evidence I’ve seen on Cambridge Analytica suggests the firm never launched the “psychographic” marketing tools it claimed to possess during the 2016 US election—though it said it could target individuals with specific messages based on personality profiles derived from its controversial Facebook database.

When I was at the Oxford Internet Institute, meanwhile, we looked into how and whether Twitter bots were used during the Brexit debate. We found that while many were used to spread messages about the Leave campaign, the vast majority of the automated accounts were very simple: bots built to boost likes and follows, to spread links, to game trends, or to troll the opposition. Online conversation was gamed by small groups of human users who understood the magic of memes and virality, of seeding conspiracies online and watching them grow. Conversations were blocked by basic bot-generated spam and noise, purposefully attached to particular hashtags in order to demobilize online discussion. Links to news articles that showed a politician in a particular light were hyped by fake or proxy accounts made to post and repost the same junk over and over and over. These campaigns were wielded quite bluntly: the bots were not designed to be functionally conversational. They did not harness AI.

Dumb no more

There are, however, signals that AI-enabled computational propaganda and disinformation are beginning to be used. Hackers and other groups have already begun testing the effectiveness of more dangerous AI bots over social media. A 2017 piece from Gizmodo reported that two data scientists taught an artificial intelligence to design its own phishing campaign: “In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a substantially better conversion rate.”

Problematic content is not spread only by machine-learning-enabled political bots. Nor are problematic uses or designs of technology being generated only by social-media firms. Researchers have pointed out that machine learning can be tainted by poison attacks—malicious actors influencing “training data” in order to change the results of a given algorithm—before the machine is even made public. 
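To make the mechanism concrete, here is a minimal, hypothetical sketch of data poisoning. The texts, labels, and scikit-learn workflow are purely illustrative and not drawn from any real attack: the point is simply that an attacker who can inject mislabeled examples into a classifier’s training data can flip the finished model’s judgments without ever touching its code.

```python
# Illustrative sketch only: a label-flipping "poisoning" attack on training data.
# The texts, labels, and model choice are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

clean_texts = ["free crypto giveaway click now", "cheap pills wire money",
               "meeting notes attached", "see you at the conference"]
clean_labels = [1, 1, 0, 0]  # 1 = junk, 0 = legitimate

# The attacker slips in copies of junk-style text mislabeled as legitimate.
poison_texts = ["free crypto giveaway click now"] * 20
poison_labels = [0] * 20

def train(texts, labels):
    vec = TfidfVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(texts), labels)
    return vec, model

for name, (texts, labels) in {
    "clean":    (clean_texts, clean_labels),
    "poisoned": (clean_texts + poison_texts, clean_labels + poison_labels),
}.items():
    vec, model = train(texts, labels)
    pred = model.predict(vec.transform(["free crypto giveaway click now"]))[0]
    print(name, "model labels the junk post as:", "junk" if pred == 1 else "legitimate")
```

The clean model flags the junk post; the poisoned model is likely to wave the same post through, even though the attacker never touched the model itself, only its training data.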

Kalev Leetaru, a senior fellow at George Washington University, suggests that the first attacks driven by AI bots may not be aimed at social media but instead would involve what’s known as a distributed denial-of-service attack, which involves shutting down targeted web servers by flooding them with traffic.

“Imagine for a moment that you handed that botnet over to the control of a deep learning system and gave that AI algorithm complete control over every knob and dial of that botnet,” Leetaru writes.

“You also give it live feeds of global internet status information from major cybersecurity and monitoring vendors around the world so it can observe second-by-second how the victim and the rest of the internet at large is responding to the attack. Perhaps this all comes after you’ve had the algorithm spend several weeks monitoring the target in exquisite detail to understand the totality and nuance of its traffic patterns and behaviors and burrow its way through its outer layers of defenses.”

Beyond defense

In April 2018 Mark Zuckerberg appeared before Congress: he was under the political microscope for the mishandling of user information during the 2016 election. In his two-part testimony he mentioned artificial intelligence more than 30 times, suggesting that AI was going to be the solution to the problem of digital disinformation by providing programs that would combat the sheer volume of computational propaganda. He predicted that in the next decade, AI would be the savior for the massive problems of scale that Facebook and others come up against when dealing with the global spread of junk content and manipulation. 

So is there a way we could use AI or automated bot technology to tackle the manipulation of public opinion online? Can we use AI to fight AI? 

The Observatory on Social Media at Indiana University has built public tools that harness machine learning to detect bots, examining around 1,200 features of an account to determine whether it is more likely to be operated by a human or a bot.
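As a deliberately tiny, hypothetical illustration of how feature-based bot detection works (the features, labels, and random-forest choice here are mine, not the Observatory’s actual tool), each account is represented as a vector of behavioral signals and a supervised classifier estimates the probability that the account is automated:

```python
# Hypothetical sketch of feature-based bot detection; not the Observatory's tool.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per account: [tweets per day, followers / following, account age in days]
X = np.array([
    [400.0, 0.01,   20],   # high volume, few followers, brand-new account
    [350.0, 0.05,   45],
    [  6.0, 1.20, 2100],   # modest volume, balanced graph, years-old account
    [  3.0, 0.90, 3300],
])
y = np.array([1, 1, 0, 0])  # toy labels: 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new account: the classifier returns an estimated probability, not a verdict.
new_account = np.array([[280.0, 0.02, 10]])
print("estimated probability of being a bot:", clf.predict_proba(new_account)[0][1])
```

The Indiana tools score roughly 1,200 such features per account and still return a probability rather than a verdict, which is one reason human review remains part of the loop.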

And Facebook product manager Tessa Lyons said in a 2018 announcement that “Machine learning helps us identify duplicates of debunked stories. For example, a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim.” 

In such cases, social-media firms can harness machine learning to pick up, and even verify, fact-checks from around the globe and use these evidence-driven corrections to flag bogus content.
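As a rough sketch of how that kind of duplicate-spotting can work (my own illustration, not Facebook’s system), a simple baseline compares new posts against already-debunked claims using cosine similarity over TF-IDF vectors and flags anything above a threshold:

```python
# Illustrative baseline for spotting near-duplicates of a debunked claim.
# The texts and the 0.3 threshold are arbitrary choices for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = "you can save a person having a stroke by pricking their finger with a needle"
candidates = [
    "doctors confirm pricking the finger with a needle saves stroke victims",
    "local council announces new bus timetable for the city centre",
]

vec = TfidfVectorizer().fit([debunked] + candidates)
scores = cosine_similarity(vec.transform([debunked]), vec.transform(candidates))[0]

for text, score in zip(candidates, scores):
    print(f"similarity={score:.2f} flagged={score > 0.3}  {text}")
```

A production system would need to handle paraphrases, other languages, and images, but the underlying idea is the same: once one version of a claim is debunked, its relatives become much cheaper to find.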

There is a big debate in the academic community, however, as to whether passively identifying potentially false information for social-media users is actually effective. Some researchers suggest that fact-checking efforts both online and offline do not work very effectively in their current form. In early 2019, the fact-checking website Snopes, which had partnered with Facebook in such corrective efforts, broke off the relationship. In an interview with the Poynter Institute, Snopes’s vice president of operations Vinny Green said, “It doesn’t seem like we’re striving to make third-party fact checking more practical for publishers—it seems like we’re striving to make it easier for Facebook.” 

Organizations like Facebook continue to rely on small outside groups, usually nonprofits, to vet content. Potentially false articles or videos are often passed to these groups with no background information on how or why they were flagged in the first place.

These efforts aren’t geared toward helping news organizations vet the heaps of content or leads they receive each day to help under-resourced reporters do better work. Rather, they help a multibillion-dollar company keep its own house clean in a post hoc fashion. It is time for Facebook to take responsibility internally for fact-checking, rather than passing off the task of verifying or debunking news reports to other groups. Facebook and other social-media companies must also stop relying on fact-checks after the fact—that is, only after a false article has gone viral. These companies need to generate some kind of early warning system for computational propaganda.

Facebook, Google, and others like them employ people to find and take down content that contains violence or information from terrorist groups. They are much less zealous, however, in their efforts to get rid of disinformation. The plethora of different contexts in which false information flows online—everywhere from an election in India to a major sporting event in South Africa—makes it tricky for AI to operate on its own, absent human knowledge. But in the coming months and years it will take hordes of people across the world to effectively vet the massive amounts of content in the countless circumstances that will arise.

There simply is no easy fix to the problem of computational propaganda on social media. It is the companies’ responsibility, though, to find a way to fix it. So far Facebook seems far more focused on public relations than on regulating the flow of computational propaganda or graphic content. According to The Verge, the company spends more time celebrating its efforts to get rid of particular pieces of vitriol or violence than on systematically overhauling its moderation processes.

Beyond fact-checking

It will be some combination of human labor and AI that eventually succeeds in combating computational propaganda, but how this will happen is simply not clear. AI-enhanced fact-checking is only one route forward. Machine learning and deep learning, in concert with human workers, can combat computational propaganda, disinformation, and political harassment in several other ways. 

Jigsaw, the Google-based technology incubator where I served a one-year term as a research fellow, designed and built an AI-based tool called Perspective to combat online trolling and hate speech. This tool (which I didn’t work on myself) is an API that allows developers to automatically detect toxic language. 

It’s controversial because it not only runs the risk of false positives—flagging posts that don’t actually contain trolling or abuse—but also moderates speech. According to Wired, the tool was trained using machine learning, but any such tool is also trained using inputs from humans, who have their own biases. So could a tool built to detect racist or hateful language itself fail because of flawed training?
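For developers who want to see what using such a tool looks like, here is a hedged sketch of a call to the Perspective API. The request and response shapes follow Perspective’s public documentation as I understand it, and the API key and example comment are placeholders, so check the current docs before relying on the exact fields.

```python
# Hedged sketch of a Perspective API call; verify fields against current docs.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "you are an idiot and nobody wants you here"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

resp = requests.post(URL, json=payload).json()
score = resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"toxicity score: {score:.2f}")  # a probability-like value between 0 and 1
```

The score is only a probability; deciding what to do with it, whether to hide, flag, or simply rank a comment lower, is a human policy choice, which is exactly where the bias concerns above come in.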

In 2016 Facebook launched DeepText, an AI tool similar to Google’s Perspective. The company says it helped delete over 60,000 hateful posts a week. Facebook admitted, however, that the tool still relied on a large pool of human moderators to actually get rid of harmful content. Twitter, meanwhile, finally made moves at the end of 2017 to ban similarly threatening or violent posts more systematically. But while it has started curbing this problematic material—and is also deleting hordes of political bot accounts—Twitter has given no clear indication of how it detects and deletes these accounts. My research collaborators and I continue to find massive manipulative botnets on Twitter nearly every month.

Beyond the horizon

It’s unsurprising that a technologist like Zuckerberg would propose a technological fix, but AI is not perfect on its own. The myopic focus of tech leaders on computer-based solutions reflects the naïveté and arrogance that caused Facebook and others to leave users vulnerable in the first place.

There are not yet armies of smart AI bots working to manipulate public opinion during contested elections. Will there be in the future? Perhaps. But it’s important to note that even armies of smart political bots will not function on their own: they will still require human oversight to manipulate and deceive. We are not facing an online version of The Terminator here. Luminaries from the fields of computer science and AI, including Turing Award winner Ed Feigenbaum and Geoff Hinton, the “godfather of deep learning,” have argued strongly against fears that “the singularity”—the unstoppable age of smart machines—is coming anytime soon. In a survey of American Association for Artificial Intelligence fellows, over 90% said that super-intelligence is “beyond the foreseeable horizon.” Most of these experts also agreed that when and if super-smart computers do arrive, they will not be a threat to humanity.

Stanford researchers working to track the state of the art in AI suggest that our “machine overlords,” at present, “still can’t exhibit the common sense or the general intelligence of even a 5-year-old.” So how will these tools subvert human rule or, say, solve exceedingly human social problems like political polarization and a lack of critical thinking? The Wall Street Journal put it succinctly in 2017: “Without Humans, Artificial Intelligence Is Still Pretty Stupid.” 

Grady Booch, a leading expert on AI systems, is also skeptical about the rise of super-smart rogue machines, but for a different reason. In a TED talk in 2016, he said that “to worry now about the rise of a superintelligence is in many ways a dangerous distraction because the rise of computing itself brings to us a number of human and societal issues to which we must now attend.” 

More important, Booch stressed, current AI systems can do all sorts of amazing things, from conversing with humans in natural language to recognizing objects—but these capabilities are decided upon by humans and encoded with human values. Such systems are not so much programmed as taught how to behave.

“In scientific terms, this is what we call ground truth,” Booch says, “and here’s the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same, if not more, as a human who is well trained.”

I would take Booch’s idea even further. To address the problem of computational propaganda we need to zero in on the people behind the tools. 

Yes, ever-evolving technology can automate the spread of disinformation and trolling. It can let perpetrators operate anonymously and without fear of discovery. But this suite of tools as a mode of political communication is ultimately focused on achieving the human aim of control. Propaganda is a human invention, and it’s as old as society. As an expert on robotics once told me, we should not fear machines that are smart like humans, so much as humans who are not smart about how they build machines.

Excerpted from The Reality Game: How the Next Wave of Technology Will Break the Truth, by Samuel Woolley. Copyright © 2020. Available from PublicAffairs, an imprint of Hachette Book Group, Inc.
