Stanford experts discuss the Internet, Social Media and Democracy


When we started talking at
CDDRL about the Internet, it was with such hope and I’d say, I would confess since I
started the project and initially framed it, naive optimism. And we called the program
Liberation Technology. And we defined it as an effort
to understand and document and maybe help nurture along,
the ways that information technology was being used to do a lot of the work
that you people are trying to do. To defend human rights,
to improve the quality of governance, to deter and expose electoral fraud through election monitoring, to fight and track corruption, to monitor and expose government wrongdoing, to promote political participation, level the playing field in terms of campaign finance, empower the poor, promote economic development, protect the environment, educate consumers. We documented, and in a way helped incubate, interesting applications for improving public health. Wow, right! Liberation technology in all respects,
political, social, economic, and so on. And then, in the term that has become common in our discussion of this,
the empire struck back. Authoritarian regimes are of
course fighting for their lives, they’re not gonna be passive
in the face of this. And we saw a number of modes
of authoritarian adaptation. One was repression,
like the great firewall in China, and other not quite so
comprehensive but increasingly technologically empowered
efforts to control, filter, censor the Internet and
really keep people from seeing a lot of what was on cyberspace to the point now
where some people are worrying that we may be entering a world where
we don’t have an Internet, a globally accessible
interoperable Cyber sphere, but a series of national intranets that are
gonna break apart this global phenomenon. Then we had the Empire Strikes Back in
the way that the Russian government, and many others, have been leaning in, using and usurping the Internet for political ends, both in their own countries and abroad, to advance their strategic aims, to subvert and stigmatize opposition, and to mobilize support. And through the use of robots,
Internet bots, and trolls, and armies of Internet agents,
create a kind of fake reality and a fake presence of non-human,
or non-authentic actors, who are supposedly presenting
grassroots and real opinions. We also saw, and of course Tim struggles with this in his work, the dark side of the Internet in terms of pornography and exploitation of children, hate speech, incitement to violence and
support for terrorism. And we’ve seen what Nate
addresses in his article and may discuss here the way that
social media can become venues for severe, intense, relentless,
political polarization. So here are some of the issues
we’re struggling with. How do we balance the right to free speech
in the digital world with other values: security, civility, privacy, and, obviously, fighting terrorism and sexual exploitation? I think this is a good part of what Tim's mission is in this extremely innovative and ambitious online project. Secondly, how do we ensure that the contribution of the Internet and social media to our democratic politics is gonna be, on balance, enriching, constructive, politically empowering and democratizing, and not fragmenting, polarizing and creating a world in which compromise becomes really impossible? And third, how do we strike a good and
democratic balance among these competing values, in terms of government
policy that seeks to regulate and facilitate the digital ecosystem? So I think those are going to be
the challenges that we take up as we move from one speaker to the next, though
I know they’re gonna engage one another. Tim, over to you.>>Well, thank you very much Larry, and it’s great to be back here
with this terrific group. I’ve been writing about dissidence
in various contexts for 40 years. I’ve been working on this book and
this website for ten years and I now have ten minutes in which to
distill all the nuggets of this work.>>[LAUGH]
>>You have twelve.>>And so I’m gonna essentially just make five points. The first one takes straight
off from what Larry just said. The Internet in and of itself,
never set anyone free, and will never set anyone free.>>[LAUGH]
>>The Internet in and of itself never oppressed anyone. This is the fallacy of
technological determinism, like all technology, it is double edged. The first human being
who discovered a knife could use it to cut their meat or
to murder their neighbor. And that’s true of all technology,
through history, all information and communication technologies
because this is a big one. The upside is enormous: anyone who has a smartphone has the possibility of communicating, in theory directly, with roughly half of humankind. That's amazing. The downside is equally large. Larry already mentioned it: a death threat issued in one place, carried out in another; harassment, hate speech, pedophilia, pornography, and surveillance. Ruth, a great expert on
cyber security says, surveillance is the business
model of the Internet. What Google, Facebook, and Twitter know about you is beyond your wildest dreams. If you put that together we will
[INAUDIBLE], so it’s double edged. And the basic question
before us is always how do I maximize the opportunity and
minimize the risk. Second point follows from this. Incidentally, just when I’m talking
about the dangers, I was amused to read in The Guardian today, talking about the dangers of fake news, that software developed at Stanford University is able to manipulate video footage of public figures to allow a second person to put words in their mouth. Even as we sit here talking about the danger
of fake news, somewhere else on Stanford campus, somebody’s
inventing the tools for fake news. Second point. Because of the nature of this
connected world, a world in which as a result of mass migration and the
Internet, it’s a combination of those two, we are all tendentially becoming neighbors
with each other, physically and virtually. The classic modern way of
thinking about free speech, namely, primarily in terms of the state,
what the state allows and what the state forbids, should or
should not, is actually not totally outmoded but radically incomplete. I argue that your effective freedom
of expression at any one time, freedom of information, too,
is actually determined by four forces. Number one, international treaties,
organizations, and networks, formal and informal, the most obvious being the ICCPR. Nearly all countries have signed up, in theory, to Article 19, and there is a list of those from which you can actually go to the UN Human Rights Committee. Then you have what I call the big dogs,
the big cats, and the mice. The big dogs are the governments. The big cats are Facebook, Google,
Twitter, Amazon, and Apple. And the mice are you and me, right?>>[LAUGH]
>>And the question is what do we do about it? So, first of all,
the big dog’s the government. It is a cyber utopian illusion to believe that the Internet has ended
old-fashioned territorial sovereignty. In the year 2000 Bill Clinton said for China to try to control the Internet would
be like trying to nail jello to the wall. And you know what, the Chinese
communist party turned around and said Bill, just watch us. And over the last 15 years,
the Chinese Communist party has made a pretty good stab
at nailing jello to the wall, although only, it has to be said, look at this, by developing what a Harvard study considers to be the largest apparatus of censorship in human history. Then we have the big cats. And the position here is extraordinary. Here is the world map of social networks, showing the leading social network by country. Blue is of course Facebook. As you can see,
with the exception of Russia, China and a couple of other places, Facebook is the
leading social network across the world. The map of the world is painted blue. We have something utterly unprecedented
which is a global public sphere which is privately owned. And in this case owned by one company. And then we have us, the mice. And here, you know much more than I
do about what it means to try and use the opportunity of the Internet and
face the dangers of the Internet. I argue in the book that
actually networked mice have fantastic opportunities of impact. But the one quick point I would make here
is it’s never just the online world. It’s always in combination with
some more traditional media. Al-Jazeera TV in the case
of the Arab Spring. The physical courage and presence of people on the streets,
and old fashioned politics. So ACTA, and I can tell you the story of that, was eventually brought down by a combination of civic activism and parties in the European Parliament. So I just put that point
briefly on the table. It's never online on its own. It's always in some three or
four dimensional combination. Third point. It follows from this that
if we are living in such a connected world where we are all
becoming neighbors with everybody else, and where our effective freedom of speech
is determined by all those four forces, talking in terms of the law and constitutional tradition of any one
country will not get you so far. It’ll get you some way,
but not all the way. What I argue is that we need to go
back to first principles and try and work out in the simplest
possible terms what it is we really want to try to achieve with
free speech in all these key areas, balance, knowledge, journalism,
religion, privacy, secrecy, and so on. And then, work out how we get it. Who is the key addressee, right? So, sometimes it will still be the state,
the government. But quite often it won’t be. It will be an international organization
or it will be Google and Facebook. But first, you have to know
what you are trying to achieve. These 10 principles are presented
in 13 languages on the website. I do urge you to go to it and
use it, but I’m not gonna go through all ten because
I have only ten minutes. So my fourth point is
to focus in one area, which is media and journalism. And this goes, cuz our subject is,
after all, social media and democracy. And our principle here is we require
uncensored, diverse, trustworthy media so we can make well-informed decisions and
participate fully in political life. So, a strict to the essential statement of
the classic argument of free speech for democratic self-government. Now, that may seem like motherhood and
apple pie, but actually the terms
are very carefully chosen. Uncensored, diverse, trustworthy. Most of the time,
even in authoritarian regimes, the problem now is not so much explicit
censorship, as it is in China. It is most typically control through other
means, notably ownership of the media. And I see many of you nodding around the
table, cuz you can see at once it applies. This is a piece we have about the Turkish
media during the 2013 Gezi Park protests. One of the biggest civic protests, civic
movements, in recent Turkish history. What was TV showing? CNN Türk was showing
a documentary about penguins. Why? Well, the piece argues persuasively, because the main TV channels
are owned by conglomerates, which are dependent on the state for
advertising revenue and have many other interests which
also depend on the Erdogan regime. So you have the biggest protest for
years going on in your capital and you’re showing a documentary
about penguins, right? So you have to go after somewhat different targets. And that, I think, is true in so many different places. In Hungary, for example, a would-be authoritarian regime, there is no censorship. But there's massive control through ownership, in particular, and informal pressure. This brings me on to the key
point about trustworthy. And this brings us to what we’re gonna
spend a lot of time talking about, I think, what Larry already mentioned,
everybody is talking about. Fake news, alternative facts,
post-fact, post-truth, echo chambers, silos, disinformation and misinformation, which, by the way, I would distinguish between, stipulatively. I think disinformation is false information deliberately circulated for political reasons, Putin's Russia being the prime example. Misinformation is the Macedonian teenagers: false information being circulated to make money, or for other reasons. Disinformation or misinformation. Now I wanna make a very simple,
very boring point here. Which is something that academics say far
too often, which is we need more research. But we really do, because you only have to open the New York Times and there's another op-ed making sweeping generalizations about fake news and echo chambers. Actually, we don't really
know what is going on. There is relatively little
good empirical research. This is a paper which I commend
to all of you by my colleague Jonathan Bright who looks at
political posting on Twitter. And what he finds is slightly more
complicated than the usual picture. What he finds is that people who are in the middle, broadly defined, in the political center, and that is quite a large political center in the case of, say, Germany or Canada, much smaller in other countries, are actually getting more diverse news and views through the Internet and social media. But for people with strong ideological views, at the extremes, there you see the really strong echo chamber effect and polarization effect. And so it's a somewhat more
differentiated finding. It’s the ideologically fired up, the crusaders who really
are in the echo chambers. There’s a rather similar finding
in a Stanford study which I also commend to you. There’s one other finding which
I think is well established, which is that false information, whether disinformation or
misinformation is as likely or more likely to go viral
than accurate information. That I think is a reasonably
well established finding. And this takes me to my fifth and
final point in introduction. Which is to ask the question, which I think we should be asking. One of the questions we should be asking. We’ve all thought a lot about what
we should ask of the state and what government should do; of course, there's a vast literature on it. There are international organisations,
there are treaties covering it. But we have this whole new,
incredibly important player, The Big Cats. Google, Facebook, Twitter, Amazon,
Apple, Tencent, Baidu in China, what I call the private superpowers. And the question we should be asking is,
what should we together be asking of them? What are the most important
things to ask of them? And when you talk to the people at
the top of these companies, many of whom, by the way, are good American liberals and
readers of the New York Times. They say: our problem is, from where we sit, we're being bombarded from all sides,
360 degrees. And many of these demands
are contradictory. So the free speech advocacy groups
are saying leave more content up. But the women’s rights and LGBTQI and minority rights groups are saying take
more content down because it's hate speech. Privacy and data protection advocates
are saying do more to protect our privacy. Security agencies and governments are
saying share more private information with us to help to fight terrorism. So there, I think quite reasonably say,
what is it you want us to do? Get together and work out some set of demands which everybody broadly interested in liberal democracy, human rights and free speech would want of us. Let me finally just mention, and this is just to open up the discussion, a few things I think we could talk about as asks of the private superpowers. Number one, a major reason why we don't really know what is going on
is that we don’t have the data. The only people who have the data
are Google and Facebook, and on the whole they’re not sharing it. So my colleague’s study was done with
Twitter because Twitter is an exception, it does share its data. So one demand I think we can reasonably
make is: give us enough data so we can actually work out what's going on. Because that is absolutely in the public interest, since what you have is a privately owned public sphere. Secondly, and I put it to you for discussion, I think we need to go back
to Facebook on its real name policy.>>Mm-hm.>>Anonymity is a double-edged sword. It’s the protection for
the pedophile and the terrorist. But as you all know,
it’s also the protection for the dissident and
the human rights campaigner. The Electronic Frontier Foundation
went to Facebook a few years ago, and said why don’t we agree to this? You have a real name policy,
but you acknowledge that there are circumstances in which people
absolutely legitimately need anonymity. For example, when you’re working
in an authoritarian regime or an oppressive community. Allow anonymity on the condition
that there is a real email address behind it, which can be requested. That seems to be quite a reasonable
compromise, Facebook said no. I think we should be demanding
anonymity for those who really need it. Thirdly, I think in the whole
area of disinformation, and Stanford has been doing a lot of work on this, one thing we could ask them to look at is taking down bad bots. Because if you identify a massive bot operation, it's a pretty good bet that it's a bad actor of one kind or
another, okay? So I think that’s something
that we could at least discuss. I don’t think we should be doing what the
Germans are doing, which is to take down fake news, because we don't want the state getting into the business of deciding what is true and what is false. But I think: take down the bots. I also think that on Facebook News Feed, we can reasonably ask them to show us roughly what the algorithm is doing, and to give us greater diversity of news and views, because that is actually a public good, vital to our democracy. At the moment, what we know about the algorithm is that it privileges the viral above everything else: what is shared. And so in the balance between what I might call virality and veracity, which is the key journalistic and democratic value, I think we could reasonably ask of them to put more weight in the newsfeed algorithm on veracity, and not just on the virality by which it's the fake news that gets shared. Those are just a few suggestions
To open up the discussion, which I very much look forward to-
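The virality-versus-veracity re-weighting discussed above can be illustrated with a toy ranking function. This is only a sketch under stated assumptions: Facebook's actual News Feed ranking is not public, and the `Story` fields, the saturation constant, and the `veracity_weight` blend below are all hypothetical illustrations, not the real algorithm.

```python
# Toy model of a feed-ranking score: blending virality (shares) with an
# estimated veracity score. Every name and number here is hypothetical.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    shares: int        # proxy for virality
    veracity: float    # hypothetical 0.0-1.0 accuracy estimate (e.g. from fact-checkers)

def rank_score(story: Story, veracity_weight: float) -> float:
    """Blend normalized virality with veracity.

    veracity_weight = 0.0 reproduces a purely viral feed;
    raising it pushes accurate stories above merely shareable ones.
    """
    virality = story.shares / (story.shares + 1000)  # saturating normalization
    return (1 - veracity_weight) * virality + veracity_weight * story.veracity

stories = [
    Story("Pope endorses candidate (fabricated)", shares=50_000, veracity=0.05),
    Story("Fact-checked election report", shares=3_000, veracity=0.95),
]

for w in (0.0, 0.6):
    ranked = sorted(stories, key=lambda s: rank_score(s, w), reverse=True)
    print(f"veracity_weight={w}: top story -> {ranked[0].title}")
```

With `veracity_weight=0.0` the heavily shared fabricated story ranks first; at `0.6` the fact-checked report overtakes it — the virality/veracity trade-off in miniature.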
>>Okay, Nate, if you’re with us, you’re on.>>So, I’m gonna pick up to some extent
where Tim left off, because I’m here in New York at a meeting that’s organized
by Facebook, with social scientists, in order for us to advise them on what
they should be studying, and what data they should make available to us, and
so this is the first step, at least, that Facebook is taking in order to do
some of the things that Tim mentioned. I'm sorry I'm not there, but it's for
reasons associated with this conference. I should say just the way I
became interested in this topic, of social media and democracy,
was from the lens of campaign finance, because before the 2016 election, for the most part when we talked
about the internet and democracy, we were holding up the model of
Obama’s digital campaign geniuses, or the success of micro targeting,
or the fact that tv was no longer gonna be the primary
medium of political campaigning, and it was, as Larry suggested,
more a liberation technology story than the sort of dark cast
that has come since the 2016 [INAUDIBLE] associated with
events in Europe and elsewhere. I was thinking about this issue from
the standpoint of campaign finance, because when we talked about the campaign finance problem, it was with television as the sort of primary mode of communication, and so I wrote an article a few years ago, in Frank Fukuyama's journal, saying the campaign revolution wouldn't be televised. It was all about the implications of moving
from television to the Internet for campaign finance law as well as regulating
political communication, and so we in the US had a very significant
Supreme Court case, maybe one of the most notorious in the last 20 years, called Citizens United v. FEC, which gave corporations the right to basically spend as much money as they want advocating for the election and defeat of candidates.
about television commercials, it was about a movie called,
Hillary, The Movie, that a non-profit corporation
had put on demand, sort of like HBO On Demand, or other kinds
of on demand programming, and the case, while it’s been sort of talked about
in the context of giving corporations Human Rights or First Amendment rights,
was really a case about changing technology, and about what happens
when you move from linear television programming to on demand nonlinear
programming through the Internet and through other types of technology, and
what the implications would be for them. And so that was my sort of entry point,
and since then, we’ve had conferences at Stanford, and I’ve been working
somewhat with the platforms. Like I said,
I’m here with Facebook in New York, and I’ve done some work with Google
over the last few years as well. We often identify the Internet and campaigning with things like fake news or hate speech and the like, but those are, in some ways, just manifestations of old problems that we're seeing online. What I try to focus on is what is unique about the technology of the Internet that threatens democracies in particular. And so, I've come up with
six different uniquenesses of the Internet that have particular
potential for threats to democracy. Some of them Tim mentioned, and
I won't go into detail on them, but I'll just give you the list upfront, and
I don’t think I’ll have time to talk about solutions, but that would be something
that we could talk about in the Q&A. So, the first is the velocity of
communication, the fact that now, with the Internet, we speak more quickly,
and you can get your information and communication out there more quickly
than at any point in history, to a large audience. Second, as Tim mentioned,
the virality of communication, the unmediated form of communication that
is predominant because of social media. Third, anonymity, that Tim also mentioned, which facilitates both the bot activity
and the hate speech that he mentioned. Fourth is this, what we call homophily,
or echo chambers, and filter bubble, the ability to opt into even more
narrow selected communities online, which you really couldn’t
in the pre-Internet world. I’m gonna talk a little bit about that. Fifth is the lack of sovereignty
in regulating the Internet. So it used to be that you could more easily control the communication environment, or electoral communication; we in the US have a Federal Communications Commission, a Federal Election Commission. But the worldwide web is worldwide after all, and so that is why, among other things, you can't protect your democracy from foreign influence today in the way you could when you were regulating the airwaves. And finally, and
cuz I think Tim dealt with this very well, is the problem of monopoly. So, while it was true that we had,
say three television networks, or we had major newspapers, there’s never
been anything like Facebook, Google, and Twitter, in terms of the power that
they have to shape communication across so many domains. All right, so that’s the big point. So let me just try to flesh
them out a little bit. So if you look on the Internet,
you’ll find this quote, a lie can travel halfway around the world,
while the truth is putting on its shoes. I don’t know if you’ve
ever heard that quote. It’s attributed on the Internet,
I saw, to Mark Twain in 1919. It turns out Mark Twain was dead by 1919.>>[LAUGH]
>>So that’s just, even fake news about Internet velocity,
but the point of the quote,
which has a sort of interesting pedigree, is that, in relation to the topic we're talking about, the cat is much more quickly out of the bag in the Internet age than in the pre-Internet age, and that it is easier for false beliefs, sensationalist language, or for that matter any communication, to get widely transmitted than it was previously. That is important in particular when it
comes to democracy and elections, because, as we saw in this past election in the US,
timing is crucial in an election. So that if you can skillfully
transmit information, communication, lies, in the lead-up to an election, it's extremely difficult to claw that back, and to try to have speech combat speech, true speech counter false speech, in the time period before people vote. Related to that also is this problem
of virality, and I should say, as Tim mentioned before, the Internet is a tool,
a double-edged sword, it can be used for good or ill, and this is again a strength and weakness of the Internet: it lacks the sort of mediation that we had in the pre-Internet world. In the US, prior to the Internet, and certainly prior to cable programming, we had three main television networks.
that would watch the evening news. We had a famous television personality,
Walter Cronkite, who would literally end each broadcast by saying, and that's the way it is. Okay? Now there is no one in America
these days in the media who can say that’s just the way it is and
people will actually believe that person. People have opted into
their own news sources. And, more importantly, they’re getting so much more of their news from their
friends, through their Facebook feeds, through their Twitter contacts and
the like. And so that has facilitated the kind
of polarization and media and information systems that we see. One consequence of that is the problem
of fake news that both Larry and Tim were discussing. Fake news, everyone hates the term
fake news who talks about this, but that’s the way we talk about it,
so I’ll adopt that. As Tim mentioned in distinguishing
between disinformation and misinformation, there’s all
kinds of fake news, right? There is the fake news for profit
like the Macedonian teenager problem. There is the problem of reckless
reporting, problem of conspiracy theories that we saw in the US election, even
leading to some violence in the famous, I can talk about this afterward,
the Pizzagate incident in Washington DC. But as Tim said, there is considerable
debate about how much of this false reporting is actually having an effect
on people’s knowledge and attitudes. We do know that there is
a widespread belief in false facts. Whether it’s that President Obama
was born in another country or that weapons of mass
destruction were found in Iraq. Or any number of things dealing with
the personalities in this past campaign. We know from the studies of Twitter
in the run-up to the 2016 election that the URLs of fake news sites were competitive
with the mainstream media. So roughly 15% of the Tweets
in the runup to the election in the US that had news links
were linking to fake news sites. Engagement on Facebook with false
stories was as high, if not higher than engagement with stories from the New
York Times, Washington Post and the like. But as Tim mentioned and put up on the
screen from our colleague Matt Gentzkow, it’s very hard to actually
figure out the actual effect of fake news on people’s voting behavior and
the like. It may be that people who opt in to some
a lot of these false stories are ones who, if you get a story that says
the pope has endorsed Donald Trump, you might be someone who’s already very
likely to support Donald Trump, right? Next problem, as I said before,
is the problem of anonymity. And again, as Tim mentioned, anonymity
has its benefits as well as its costs. I’ll just focus on two problems,
hate speech and bots. Because the internet, this is again,
another area where the internet has facilitated widespread
unaccountable speech that, sometimes it’s not
even done by humans. In the case of bots, we, for
example, know that roughly 15%, well roughly 8% to 10% of all
Twitter accounts are bots. In Russia, by the way it’s 45% of
Twitter accounts are bots now. In the US, roughly 15% of
the election-related conversation in the run-up to the debates or so was done by bots. Something like a third of
Donald Trump’s followers, maybe something a little bit less for
Hillary Clinton, were bots. And so you had that problem; it's more, I should say, a problem on Twitter than on Facebook. Facebook has been more aggressive in weeding out the bots. Bots, I should say, are also one of
these double edged swords you could have bots that do good things as well. You can have a bot that tells
you the weather everyday, right? And so banning all bots is
a sort of hard thing to do. But what you wanna do is try and get at these problematic bots that
are spreading misinformation. So in the run up to the election,
as I said, roughly 15% of all election-related
stories on Twitter came from bots. And there were over 400,000 bots that
produced 4 million election related tweets just in one of the months
preceding the election. As for hate speech, I'd say this is
one area that we’ve got some really good research by my colleague
Josh Tucker here at NYU. And one of the things he finds,
is that it’s not clear that hate speech has actually risen online,
at least over the last year or so. There’s no question that there’s been
an increase in hate speech directed toward journalists. And you can see that if you follow any
prominent journalists who've been on the receiving end of this. But he has a very sort of creative way of trying to measure hate speech, and of looking at what's
been happening on Twitter. And he’s specifically looking on Twitter,
and doesn’t see a kind of secular rise in hate
speech over the course of the election. It does seem like there was a rise
in white nationalist rhetoric as well as misogynist rhetoric in
the month or so after the US election, but we’ll see whether that continues. Next, talking about homophily or
echo chambers, there is no question that the Internet facilitates community
building for good or ill, right? And so, if you go on Reddit,
you can find a community that’s dedicated to certain types of dog species or
other kinds of pets, right, as well as finding a group of neo-nazis,
if you wanted to associate with them. And so the kinda cauldrons of hate,
that you can find on the Internet are facilitated by the echo chambers
that certain websites make available. But one of the critical questions in
thinking about this problem of echo chambers is to ask the question
as compared to what? And by that I mean to what extent
is your social media feed really different than the types of people
you would meet offline, right? And so there is good arguments that
suggest that the people that you see on [INAUDIBLE] are actually more
politically diverse than, for example, if you were to walk outside right
now down University Avenue in Palo Alto, or if I were to walk outside here and
go into Greenwich Village, right. And there’s no question that there’s
increased polarization online. There’s increased polarization in
terms of extremists opting into different news sources and the like, but
that is true in the offline world as well. And so this is another area where Matt Gentzkow's done some formative work, finding that the Internet is at least as politically diverse as your offline life, say in your workplace or in your neighborhood, and it's probably more politically diverse than a lot of those places. Finally, the last two points, sovereignty and monopoly, are somewhat related. But in thinking about sovereignty
we know what happened with the, well we are continuing to learn
what’s happened in terms of the Russian intervention
into the US election. But many of you come from countries where
this is a problem that you’ve known for some time. And we saw we’re gonna sort of continue to
get more information about what happened in the US election. But now that the model has been suggested, or that people have seen that what
may have happened in this election, it’s gonna I think become more prevalent. And I think the US would take more
defensive tactics around the world, and I should say, of course,
this in in the pre-Internet age. Obviously, the US tried to influence
elections around the world, to tell you this is not a new thing. But the point is that, when you think
about the communication ecosystem, and how it used to be that nations
were able to regulate their sort of political communications in ways that
were a little bit more powerful, now with the advent of the Internet
you essentially cannot tell whether speech is coming from Russia, coming
from Macedonia, or coming from Tennessee. Last, I’ll just mention monopoly, and
I’ll suggest sort of the irony here, which is that, on the one hand we have this
extremely fragmented media environment. We no longer have the big television stations having
the power that they once did, and yet we have this concentration of content delivery
through the main platforms: Google, Facebook, and Twitter. Each one of those platforms, though,
sort of behaves in a different way, and has different pathologies
that are associated with it. I hope
we’ll talk a little bit about that. Let me just end by throwing out there what
I think the reform universe looks like, what I call the sort
of six D’s of reform. I didn’t intentionally name them this way; they
all happen to start with D, so here it goes. The first is thinking about reforms
based on disclosure, trying to find out the identities of people who
are engaging in media, and the like. That’s one of the things
that Tim was mentioning. That, of course, again is a double-edged
sword, because you worry about what would happen worldwide if we
forced people not to be anonymous. Second is demotion: making sure that
certain types of speech, whether false or hateful, are demoted in
content feeds, search results, and the like. The third is delay. This is one of the things that the former
head of Google News has suggested, which is that when there’s certain speech that
is achieving a certain level of virality, that the platforms could delay
the content from gaining that kind of popularity, at least until it
could be checked for truth and the like. Fourth is dilution. This is actually one of the things
that you have in European democracies, where there’s real public broadcasting, where they can try to combat bad speech
with an onslaught of good speech. But also, this is related to the next
point, which is distraction and diversion. This is something that our
colleague Jennifer Pan at Stanford has researched with China. It’s not just that they ban speech,
but it’s also that they try to divert speech on certain
topics in different directions, through sort of government intervention,
into social media environments, and through trying to essentially
pollute hashtags, and to distract from the kinds of speech
that they see as threatening. Then finally is deletion,
which is just basic censorship, and while we recoil at the idea of that, we should recognize that the social
media platforms do this already. They regulate an enormous
amount of content, whether it’s for intellectual property
reasons, incitement, hate speech, certain types of advertising like for
guns or for drugs, and the like. Let alone obscenity and
other characteristic areas that have sort of First Amendment controversy,
and I should say that if, in the United States, we legislated
into law the terms of service of Google, Facebook, or Twitter,
they would all violate the Constitution. Okay, and this is by way of saying
that to think about these platforms as being just the same as governmental
entities and obeying the same types of constitutional rules is a mistake,
because they don’t follow those rules already, and the real question is whether
the rules that they’ve developed for commerce and the like should be the same
ones that they apply for politics, and I will leave you with that question and
look forward to your questions. Thank you so much, Nate. You obviously love democracy so much that every reform idea you
get also begins with the letter D. So that’s very inspiring. [LAUGH] Now we go to
the Global Digital Policy Incubator, which is actually an effort to try and translate some of these ideas and
concerns into policy, reconciliation, and innovation,
so Eileen, over to you.>>And I will just take it up right
there with what Tim said was his goal, which is that we’re all looking to capitalize on
the upside benefits of technology while protecting against the downside risks. That’s the optimization
that we’re trying to do. Let me just say,
I’m in the same boat as everybody else. This is the most fascinating,
complex, topic. It is very difficult to
distill into ten minutes. It also happens to be somewhat terrifying. I come at this as a human
rights advocate and someone who is trying to hold on to a basic
optimism about the potential of digital technology for society,
whether it’s for economic development, freedom of expression, and
the work of civil society generally. The part that I am most concerned about is that we started with this great
moment of optimism, as Larry said, and many of us were very naive in thinking the good
guys were ahead of the curve and were gonna be able to use technology to
run away from authoritarian governments. We’ve seen that authoritarians
have caught up and they actually use the Internet very
effectively to control information, and they are also now flooding
social media with propaganda. There is this sense that anti-democratic
forces are using technology much more effectively than the good guys,
and we also have this extraterritorial reach of authoritarian governments
across geographic boundaries to disrupt democratic processes in other
places, and it’s just terrifying. There’s this whole spectrum of ills
that Tim and Nate talked about, but what we have is this bizarre
interplay globally between foreign and domestic anti-democratic forces. The key question, I think, is what should democratically
inclined governments do about it? Nate brought up, I think you had a list
of six different characteristics that make governing, with respect
to digital technology, challenging. I would highlight three, and
these are characteristics that have been challenging from the beginning
of the Internet, they’re not just related
to this dark moment. The first is simply that
the extraterritorial mode of operation is inherently challenging
to an international order based on the concept of
sovereign nation-states with jurisdiction over territorial
boundaries and the people within them. As Tim said,
it doesn’t mean the end of sovereignty whatsoever, in fact we’ve
seen a total retrenchment in that regard. But it is certainly challenging
democratic governments because these platforms are global, and
there are big jurisdictional questions about how far
a government’s reach should extend. The second characteristic
that I would highlight, of what’s made democratic governance difficult,
is digitization itself. The digitization of everything is
inherently challenging to democracy, whether it’s surveillance capitalism or
surveillance by the state. And the very simple idea is that, obviously,
it’s an erosion of privacy in the extreme. And it turns out privacy is very
important to freedom of expression, freedom of assembly and association. The simple idea being that if everything
you say and do is tracked and monitored, it will have a chilling effect on
what you feel free to say, where you feel free to go, what you feel free to do,
what you feel free to search for. And also, in the big picture, it risks inverting
the democratic order itself. The idea that the people are sovereign. The people watch the government. We now have this flipped. And it’s the combination of government and
private sector. The cats and the dogs together
are monitoring everything we do. The third characteristic that
I would highlight is this basic trend toward
privatization of governance. Which is just happening under
our feet by virtue of the fact that the private sector owns,
operates, and secures most of the critical
Internet infrastructure that we use. And in addition, the platforms have become
effectively the public square for society. And those platforms and the boundaries of
freedom of expression are being controlled by terms of service,
community guidelines, and algorithms. And so, that is a dramatic change. And I think
part of what’s so challenging about that is that the private
sector is not democratically accountable. In the democratic arrangement,
the idea is that the people are sovereign, they enter into a social
contract with government. The government provides liberty and
security under the rule of law. And what we have seen, is that those governing responsibilities
have shifted to the private sector. And yet we don’t have a system
of democratic accountability. Similarly, the other thing that’s been
disrupted, very important in democracy, is journalism and the professional media. And they’ve all been disrupted, they’re
struggling to find a business model. And yet this journalism has functioned as
the watchdog of governments in democracy. And so we’ve lost that source
of accountability, and I think those are big issues for
us to grapple with. We’ve seen a lot of tension over
time since we’ve had the Internet within democratic governments and
between democratic governments. How do you do democratic
governance with the Internet? So the prototypical case of
tension within a democratic system was the FBI versus Apple case. That question of how do you
optimize simultaneously for privacy of citizens,
digital security of citizens and law enforcement interest in access to
information and national security needs. That tension is still unresolved. We’ve seen tensions between democracies,
most notably the transatlantic tensions over whether the interest
of protecting the privacy of citizens in one place
warrants taking down content and potentially undermining freedom of
expression for people around the world, with the right to be forgotten case,
again unresolved. It’s still under debate in Europe. I think that what’s going
on now is new though. And the new tension is between freedom
of expression, pure unadulterated free speech, the more-speech-to-counter-bad-speech
approach, on the one hand, and the quality of discourse that’s
necessary to sustain a democracy. And the trend from the Internet
of democratizing the means of distributing content which we thought
was nothing but a force for good now turns out to be eroding the quality
of discourse necessary in democracy. And I think if you add this to
the [INAUDIBLE] problem, where especially the Russians are reaching into the
US election and the European elections, I think it’s just brought us to
this breaking point, causing democratically inclined governments
to want to regulate, but without necessarily being
very thoughtful about it. And so let me just turn to
my biggest concern I would say is that democratically inclined governments risk
doing even greater damage to democracy than the disinformation they’re
trying to protect against. And I would highlight the case of the German network enforcement
law that was passed on June 30th. This is just like a primo
example of what not to do. This law effectively requires networks
to delete, quote unquote, “evidently unlawful”
content within 24 hours. It does not provide a definition of what
“evidently unlawful” is, or criteria for that assessment. This move effectively handed over judicial
authority to the private sector for what should be a government
responsibility. It’s a democratic government; they’ve
defined what’s criminal in their society. They have handed over
judicial authority for assessing when third party speech meets
those criteria to the private sector. They simultaneously in that
same move have imposed liability on platforms for
the speech of third parties. And they’ve really incentivized
over-censorship by making the fines so great: 50 million euros for failure to take
down criminal speech within 24 hours. So this move undermines the core concept
of platform immunity from liability for third party speech. Which has very much been the linchpin of
the free flow of information globally, and the democratization of discourse. So, I would say bottom line, the German government has essentially
thrown freedom of expression and the free flow of information under the bus
in the name of security and democracy. And that’s a big mistake. The question is what should be
the responsibility of platforms? What should the roles and
responsibilities be? I very much agree with what Tim
brought up: effectively, I don’t think you used the word, but
we need scrutability of algorithms. Diversity of content should be baked
into the algorithms as a support for freedom of expression. I completely agree with that. I think the simple idea
is platforms should not be liable for
third party speech on their platforms. They should be required to take
it down when instructed by a judicial authority if
something is criminal. But instead, and this is the
focus, platforms should be responsible
for what they are intentionally pushing with their algorithms. And as Nate said, demoting bad content or
fake content. Fake news in and of itself, whether
it’s misinformation or disinformation, is not technically illegal, and
should not be illegal, in a democracy. And the reason for this is that we’ve been
reminded by civil society actors from around the world that authoritarian
governments like to control what information is
perceived as legitimate. And if the government gets in the business
of defining what is legitimate info and what you can and cannot say,
that in and of itself erodes democracy. And we do not want democratic
governments moving in that direction. I wanna highlight last, I’d say, a couple of things the platforms
have already done, which I think are very important moves. One is that
Google recently banned 200 publishers for impersonating real news sites. It was not based on the content so much as the manipulation and
the impersonation. And I think there is a really
interesting vein for study here about encouraging
platforms to think about manipulation through various mechanisms,
rather than evaluating content of speech, and taking it down on
the basis of content. Obviously, Facebook and Google have worked
hard to advance journalism projects, helping journalists find
sustainable business models. They’ve created a global forum on
counter-terrorism to share knowledge and research about how to
identify terrorist content. Very importantly, Facebook has committed
to combat information operations. And this again is a really interesting
move because it’s not content based. It’s based on inauthentic amplifiers and
mechanisms of manipulation. And they feel very justified in
going after those mechanisms. And I think that’s a great move. Two other really interesting moves
that the platforms have made. One: in April, Google announced that
it was tweaking its algorithm to surface higher-quality content. And I think this goes to Nate’s,
your point about demotion. The idea here is
demoting bad content. So the idea in the Google
algorithm has been that there’s a semi-subjective
quality built in which is what does this person want when they
enter this search, what are they seeking. And they’re trying to
give you what you want. There’s also been a semi-objective strand,
which is more like crowdsourcing what do most people
think is the right answer to this query, like a majoritarian kind of view. Those things have existed. They have also now added what might
be viewed as a more objective qualitative assessment of
the information itself. And they are factoring in
indicia of authority: indicia that the information was
garnered through fact checking or source credibility. Things that journalists
would have normally done. And so they’re trying to bake
in that qualitative assessment. And also Facebook made a change
to its mission statement which was based on building community. Which I think points to enhancement
of quality of discourse, not just allowing users to share. I think time’s up. The most basic point I can make is that democratic
governance has always been difficult. Optimizing for freedom,
security, and democracy is hard. And I think democratically oriented
governments should not throw freedom of expression under the bus in the name of
protecting either security or democracy.>>Great, well, what a rich and
interestingly reinforcing, though not in any way
identical, set of comments. So thanks to the three of you,
just awesome.
