Can we tell truth from fiction in the age of social media?

[MUSIC] So, welcome everybody. I’m Jennifer Widom,
I’m the Dean of the School of Engineering. Thanks everyone for coming. I also wanted to thank the Stanford
Alumni Association who is co-hosting this event with the School of Engineering
and the School of Humanities and Sciences. In addition to all of you here,
the audience of graduate and undergraduate alumni, we’re live streaming to thousands
of alumni and friends all over the world. This is the fourth in our
series called Intersections. And the purpose of this series is to bring together faculty from engineering and humanities or social sciences to share insights on a common theme. And tonight's theme is trust. And one interesting question is whether we will discover if we have the same definition of trustworthiness across engineering and the social sciences. We hope we will find that out. At Stanford in general, we recognize
that as we address the world’s biggest challenges, they can’t be
addressed by scholars in isolation, but working together across disciplines,
across fields. And the panelists we have today
are exemplars of that philosophy. So I’m just gonna briefly introduce the
panelists and then turn it over to them. First, Sharad Goel. Sharad is an Assistant Professor in
the Department of Management Science and Engineering, here in
the School of Engineering. He's an expert in computational social science, which is an emerging discipline at the intersection, really, of computer science, statistics, and the social sciences. He founded and directs the Stanford Computational Policy Lab, which uses data and computation to research hot-button issues. For example, stop and frisk,
racial bias, voter fraud, filter bubbles, algorithmic bias,
and online privacy. Sharad’s work is relevant to a lot
of other researchers on campus and he has courtesy appointments in
the departments of computer science, sociology, and the law school. Perfect panelist today. Second perfect panelist, Jeff Hancock. Jeff is the Harry and Norman
Chandler Professor of Communication. He’s in the School of Humanities and
Sciences, but we see him now and
again around engineering. His research relies on computational
linguistics and experiments to understand how the words we use can
reveal psychological and social dynamics. For example, deception and
trust, emotional dynamics, intimacy in relationships,
and social support. Jeff founded Stanford’s Social Media Lab
where researchers try to understand psychological and
interpersonal processes in social media. He’s also an expert on lies. So, he has a TED Talk on deception
that has been seen over 1.3, yeah, 1.3 million times,
and that is the truth.>>[LAUGH]
>>So, second perfect panelist. And our moderator for
this evening is Janine Zacharia. She is the Carlos Kelly McClatchy visiting
lecturer in Stanford’s Department of Communication. Between 2005 and 2009 she worked as
the chief diplomatic correspondent for Bloomberg News, based in Washington, DC. During that time she travelled to more
than 40 countries with then US Secretary of State, Condoleezza Rice, and
other senior administration and military officials. From December 2009 through April 2011, she was the Jerusalem Bureau Chief and Middle East
correspondent for the Washington Post. She reported widely throughout
the Middle East during that time, including the uprisings in Egypt and
Bahrain as they began in early 2011. She appears regularly on cable news shows and radio programs. So, for a journalist, at a time when
a large number of Americans believe that media outlets are reporting fake
news, this topic tonight of trust, and how we establish trust,
must be personal and important for Janine. So please join me in welcoming Sharad,
Jeff, and Janine for what I’m sure will be
an interesting panel.>>[APPLAUSE]
>>Thank you Dean Widom, and thank you to all of you for coming out this evening and to everybody
who’s watching us on the live stream. I think they may be carrying it
on the Stanford Facebook page so hello to everybody. Let me just briefly explain
the format tonight. I’m gonna just frame our topic very
briefly, lead a discussion with our two distinguished panelists and
then open it up to questions. You all should have cards,
I believe, on your seats. So, as we’re talking, if you don’t
have a card, raise your hand and maybe one of the staff can bring you one. We’ll start collecting them in about a
half hour. And I see a couple people here saying they don't have any index cards, so they're coming around, maybe some pencils or something. And we'll get to as many as possible, and we'll wrap by 8:30. So, we're gathered here
to discuss the future of trust at a time when that question of, are we living in a post-truth
world seems unavoidable. We’re in a post-truth world with
eroding trusts and accountability. It can’t end well, that was the headline
in the Guardian last November. An Atlantic magazine headline in January
read, trust is collapsing in America. When truth itself feels uncertain
how can a democracy be sustained? But then in August the online courts
publication came to the rescue with the headline,
a philosopher of truth says, we’re not living in
a post-truth world after all.>>[LAUGH]
>>That was not you, Jeff.>>[LAUGH]
>>It was Cambridge University Philosopher, Simon Blackburn
who wrote On Truth and who argues that the truth has always been twisted by politicians. A topic we'll get to,
I’m sure, this evening. Still, the current White House
seems content to feed this notion that there’s no ground truth. President Trump has said,
quote, what you’re seeing and what you’re reading is
not what’s happening. White House Advisor Kellyanne Conway
famously described alternative facts. And Trump’s lawyer Rudy Giuliani,
is he still the lawyer?>>Yes.>>[LAUGH]
>>Rudy Giuliani said not too long ago, quote, ready? Truth isn’t truth. Worrisomely, as Dean Widom pointed out, a fake news system is still thriving,
the real fake news. I’m not talking about the fake news
the President calls fake news. Real fake news on the eve of another
national election just a few days away. Alongside that we have President Trump's
attacks on credible fact-based news organizations that he
continues to describe as fake. And we have these disturbing statistics,
three quarters of Americans say mainstream media outlets report fake news,
at least occasionally. 63% of registered voters believe
in at least one conspiracy theory. And the Edelman Trust Barometer which
looks at trust in business, government, NGOs, and the media internationally,
reported a few months back that, quote, no market saw a steeper decline than the United States, with a 37-point aggregate drop in
trust across all these institutions. So with this as a backdrop, what I hope
we could do tonight, Jeff and Sharad, is step back from those headlines and go
a bit deeper in order to figure out what we can learn from social science and
computer science, a perfect pairing, about what's happening to trust in all of these institutions. And I'd like it if we could leave here
tonight, having developed some better understanding of a few questions that I
bet the audience is interested in hearing. I know that I am. First. Is there more fraud or
lying and deception today or does it just seem that way because of
the way the internet amplifies things? Second, why is trust declining? What are the factors? And what does the digital revolution, spearheaded maybe in this room, right here in this part of Silicon Valley, have to do with it? And third,
are we actually conflating two problems? Growing distrust in institutions,
and growing political polarization. So Jeff,
let's just start with you to kick us off. You research trust and deception in your Stanford Social Media Lab in our department, the Communication Department. But before we talk about the decline in trust, can you just explain the fundamentals of trust to us, and how research has shown that our default, psychologically, is actually to trust?>>Right, well, thank you Janine,
and good to be with you, Sharad. So as Janine said, I'm a psychologist,
I think a lot about trust and deception. And a couple questions in there. First one, what is trust? Typically most people
associate it with risk. So, I have to take some sort of risk in
order to decide whether Janine will be nice to me tonight, or something like that. Or I take some risk to decide
whether that person will pay me back. And so risk is a pretty common concept. The German philosopher Luhmann calls it confidence in one's expectations. So what do you think is gonna happen in the world? And how confident are you or not? My take, and my research's take, is that it's like a promise. So if Sharad promises to be here at this time to be on the panel with us, I feel confident that
he’s going to do that. So those are some sort of core
concepts of trust, how much do we believe in our expectations that other
people are going to do what they say. And it matters a lot, so in business for
example, will I take that leap to trust that I can get in that Uber
car with a total stranger? Will I take that leap to believe that someone will stay in my house and not destroy it when they rent it through Airbnb? So those are some of the main concepts of trust. In terms of that question, are we lying to or trusting each other more? There's not great data, in the sense that we haven't been asking this question of people throughout decades or centuries. But there's lots of
evidence that this concern that we’re living in a deceptive time is
actually very old for a human society. Pretty much every generation
thinks that the current generation is more deceptive than the previous one. And I could give you some examples from
yellow journalism from 100 years ago where fake news was a major concern
in the US and in Canada. But we can go way back
to the Greeks, to Diogenes. Do any of you remember Diogenes? Any philosophers? All right? Great, what was he looking for?>>An honest man.>>A single honest man, and of course he dies without finding one. It's a very uplifting story.>>[LAUGH]
>>But that whole cohort of Greeks was worried, philosophically, really deeply, that the current generation of Greeks was too deceptive, too dishonest. And when you read their writings, it feels very much like what we're experiencing now. It feels like there's
too much deception and mistrust.>>So, let me press you on this, cuz I do feel, despite what Jeff just said, that there's something distinctly different about the moment we're living in. You tell me, Janine, you're conflating the warranted and the unwarranted worlds. Talk a little bit about that, about when you get stuff from people you know versus people you don't.>>Right, right, right. So a lot of people ask me about
the same kind of thing, about trust. And I think when we think about the online
world, or social media, or online communication, whatever you want to call
it, we conflate two worlds in one space. And I call them the unwarranted world and
the warranted. Unwarranted is all those people
that can now get in touch with you. Can communicate with you that you’ve
never met, you never expect to meet, you have no relationship with. These are the Russians
trying to contact you. That Nigerian guy with all the money,
that if you just give him a little bit, he’ll give you lots. So that’s from the unwarranted world,
spam.>>He’s not real?>>He’s not real.
I'm sorry, Sharad. I needed to tell you. I meant to tell you earlier, it's me.>>[LAUGH]
>>So there is this world where it’s
advertising, this is how a lot of the influence campaigns that we hear
about in the news, that's how it occurs. But when Sharad and Janine contact me
either through texting, or email, or Facebook, or on Instagram,
you name the platform, that’s fine. I know these guys, and I care about them,
I do business with them. I collaborate with them. And I might not have even
known Janine now, but I might want to know in the future,
cuz she might be a future sales contact or someone that will hire me. This is the unwanted world where
I don’t know anybody know, I don’t expect to know them. And here’s the world of family, and
friends, and business associates. And maybe, somebody I want to date,
somebody I want to have a relationship. Those two worlds kind of
get conflated online. And in the unwarranted world, I will tell you, there is a massive amount of deception. We are being bombarded with deception and manipulation, and companies like Facebook, and Twitter, and Google, the major platforms, are defending against these attacks constantly. But in the warranted world, where it's sort of Sharad, and Janine, and I talking to each other, communicating and deciding what we're gonna talk about, I see a lot of honesty. I don't see more deception in studies of that world. In fact, a lot of times online communication between known people
is more honest than if it's face to face. One of the reasons for that is recordability. We leave a record of everything we say, and
when it’s a relationship we care about that actually changes the way
we lie to one another.>>Maybe we can come back
to what that honesty or that trust does to actually perpetuate lies, in a way. When your uncle, who you trust, forwards
you something, you’re gonna trust it.>>Right.
>>May not be true though, right?>>Right.
>>Sharad, so a big question, and there seem to be conflicting views on this, is the role that social media and the digital revolution have played in perpetuating this distrust of institutions. So whether or not, fundamentally,
the Internet has led to filter bubbles or echo chambers that have siloed us, or
actually a wider array of opinions. And I know you’ve published
a paper on this back in 2016. So can you talk a little bit about what your research has demonstrated on this.>>Yes, so first, a little bit of context. In 1970, how did people get the news? There were three major networks, and they had a combined viewership of 80 million people. And so, more or less, there was this point-counterpoint journalism. And for better or for worse, people had access to roughly the same information. And I would say, to the extent that this is objective, it was more or less accurate information. Now if you fast forward 50 years,
and where are we? We’re getting the news from the Internet. And when we did our study, we tracked the browsing behavior
of a sample of Americans. And it's not particularly surprising, but there are differences in how Republicans and Democrats get their news. So what's the number one news site for
Republicans, any guesses?>>Fox.>>Fox, that’s an easy one. What’s number one news site for Democrats?>>CNN [CROSSTALK].>>No, [LAUGH].>>New York Times.>>No, you would want that to be. It’s Huffington Post, right?>>[LAUGH]
>>And so it's like, sorry, we're done.>>[LAUGH]
>>What is the punchline here? Both Democrats and Republicans are getting their information from these sites that have advertised themselves to, and cater to, audiences looking for partisan information. Neither Fox News nor Huffington Post is really going out there and saying, this is the truth. And maybe there is some kind of version of this out there. But really there's a lot of partisan commentary on these sites. And these are the number one sites for Democrats and for Republicans. And this is growing over time; when we look back five, even ten years, we do see this rift expanding. Now the flip side is, and this is why this
is a complicated story, is that a lot of times online, people are getting information from the other side as well. So if you go onto Facebook, again, for better or for worse, you often see views that you wouldn't see in your everyday life. If I am sitting here,
really, what is Stanford? Stanford’s a bubble inside of a bubble,
inside of a bubble. It’s the ultimate filter bubble, and
how much do I interact with people who really fundamentally disagree with what I'm saying? Almost never. At least online,
you can get that type of experience. And so this is the funny thing: we think both of these facts are actually accurate. The Internet is polarizing people in the sense of where they're actually getting most of their information. But at the same time it's giving people opportunities to get outside of their geographic and socioeconomic bubbles.>>Mm-hm.>>But there are conflicting findings here. Do any of them relate to this rapid decline in people's trust in the media broadly? The fact that they're only reading certain things?>>So this is a really great question,
we don't know the answer to it. It's very hard to say what the effect of these things is. So, a couple of data points here. One is that we looked at this during the election, we looked at the first couple of months before the election, and we tried to determine whether people who go to these sites were becoming more polarized. We were expecting that, with all this exposure to Fox News and all this exposure to Huffington Post, people would spread out. In fact, we don't see evidence of that. Now there are a couple of theories;
the one is that this was such a polarizing election that everyone has already
selected into whatever news media source they were going to go to, that if you’re
watching Fox News, then that's it, you've already made up your mind; if you're reading Huffington Post, you've already made up your mind. It's not gonna have any more of an effect. Another is that this isn't really what's fundamentally causing the polarization in the first place. Now, we don't know what's going on when you unravel this. At some point it breaks down: it does feel like the media is giving us these perspectives on the world, so at some point it does feel like it has to be the media. At the same time, maybe we're already in this equilibrium where the media can't actually push us. How many people would say that if only they had read this New York Times article, they would change their view on Trump? That seems farcical at this point. That's not actually gonna happen. And so what is the role of the media? It's unclear.>>So I have a lot to say
on the role of media, but I’m gonna hold off because the trust
question goes beyond the media, and so we can circle back to that
I think in the Q and A. So Sharad, I can't open up the New York Times on a Sunday without seeing like 18 articles on artificial intelligence and machine learning. And you teach a class called Law, Order, and Algorithms, and I know that your research has shown that algorithms can actually, for example, help eliminate bias in things like sentencing decisions if they're used properly. But there still seems to be a lot of
concern about how bias is being built into algorithms that are becoming an
increasingly prevalent part of our lives. How do you see this whole question?>>Yeah, so I think this again is
one of these complicated issues. There’s no clear answer but
let me give you one story, one recent incident. You've probably seen this: Amazon was in the news recently because of this hiring algorithm that it was using to screen applicants. And what was it doing? Well, we don't know exactly the details, but roughly, it was training this algorithm to identify applicants who were similar to their current hires. And one thing that came out is that this algorithm was down-weighting the word women in CVs. And so women's chess team, women's college, things like that, were down-weighted. So this, I think, many people would interpret as bias. And I think that's probably right. We don't know the details, but it seems like one of these high-impact algorithms was actually having a direct impact on people's lives and probably recreating these types of bias. Now the headline on that story was,
the algorithm is biased, and algorithms are gonna kill us. Now what is the backstory? Well, where are these algorithms coming from? They are trained on human decisions. And so it's not that we have this choice between these imperfect, biased, terrible algorithms and these great humans. That's not the choice. The algorithms don't have intent; we shouldn't ascribe it to them. They're basically just reflections of us. And in this circumstance, again, this got a little bit buried in the story, probably Amazon's human hiring was doing something funny, maybe implicitly. It's unclear what's going on, but probably that's what was driving these types of algorithmic effects. So I think it's very
important to understand that, yes, algorithms can recreate bias, they can exacerbate bias, they can do
all sorts of these types of things. Now the flip side is, in many cases we
can actually get around human bias. And so in one of the examples that I study
in my work is in judicial decisions. And here, and this sounds a little bit scary, but algorithms are being used to guide judicial decisions all across the country. It's not fictional, it's happening right now. And honestly, I would personally rather be evaluated by an algorithm than by a human judge. And so this is the trade-off that we need to understand: what is the counterfactual? What is it that we're trying to decide between? And in many cases, if we design these algorithms well, then we can get around some of these human biases that are creeping into these decisions, and that will always be there.>>Jeff, you want to weigh in on all that? You agree with Sharad?>>You know, I do. I mean, we see-
>>Why do you sound so surprised?>>[LAUGH]
>>Normally, I don't, Mr. Sharad.>>Yeah, I mean, we have been looking at this a lot. So Stanford is getting really big, obviously, into AI, and in particular the human-centered aspects of AI. Can we create AI that is going to help humanity, that's gonna make things better rather than just optimizing for whatever outcome? I think what Sharad was saying is that algorithms are often a reflection of us. And many times, with biases, for humans they're really hard to overcome. So in hiring decisions, women will often be just as biased against other women as men, all right? There's lots of evidence of that. But when it comes to algorithms, we can sort of look at them; it's not us, it's a mirror of us. But we can look at that mirror and make some decisions. So, I'm actually pretty optimistic around that kind of thinking.>>So Sharad,
another example of intersections between the Communication Department and computer science was your work with our colleague Cheryl Phillips on what's called the Stanford Open Policing Project, where you all gathered traffic stop data from across the country. Talk about what you found from that research, the disparities in how the criminal justice system impacts people of different races, and how it ties to declining trust in another institution, namely the police.>>Yeah, so
we looked at traffic stops all across the country. We collected data over several years, over I think 250 million traffic stop records now, and we analyzed these for evidence of racial bias. And, unsurprisingly to many I suspect, we found evidence of racial bias. And one interesting thing about this line of work, and how it relates to trust, is that there is this kind of reflexive adage that transparency builds trust. And in the short term, I don't think that's right. In the short term, it's like, yes, we're being transparent, we're showing what's actually happening. We really do believe, and we have strong statistical evidence to suggest, that there is bias in police decision-making. We also see these videos that have changed the way that we perceive the police. And again, this is a form of transparency; we're getting to see what's going on. But if what's actually going on is not particularly fair, is not particularly what we wanna see, that degrades trust. And at least anecdotally, I think what is contributing to our current perception of policing, and many people's lack of trust in policing, is that we actually get to see what's going on. And so there's this tension. In the long run, I kind of have to believe that transparency is good, that sunlight is the ultimate disinfectant; I think in the long run that is true. But in the short run there's this uncomfortable place where you're seeing how it's made, you're seeing how actions are being taken, and that, I think, degrades trust, at least in the short term.>>So Jeff, I know you and I have talked
about this, that you think the decline in trust in institutions has to do with seeing how the sausage is made.>>That's right, yeah, and I've never talked to Sharad about this, so you know you can trust us, these are two independent opinions.>>[LAUGH]
>>I think, if you think back to most of the political and corporate scandals over the last five, ten years, a lot of times it's because something digital allowed us to see the sausage being made in an institution, from the Catholic Church, to the Volkswagen Dieselgate scandal, to the Panama Papers. The fact that we're digitizing everything means that we can apply big data techniques, including machine learning and AI, and we can share results with colleagues instantaneously. I think these are some of the things that, in the short term, and Sharad, I like the way you put it, in the short term are undermining our trust, cuz it's like, wait a minute, all these institutions that I've trusted for so long, government, media, not-for-profits, they're not as trustworthy as I thought. We're seeing that in part because of this digital transformation. I think in the long term this is good; this is gonna make those organizations and institutions realize that they can't be hiding some of the sausage-making that they have done in the past. But there is a short-term price to pay for that, and that's these scandals.>>And
hopefully it changes the incentives too.>>Right.
>>So, it's like, when we're transparent, when you know you're gonna be audited, it's uncomfortable at first, but hopefully that makes it so that you're actually doing the right thing in the end.>>All right, so,
I’m gonna ask a Trump question. There’s a Canadian reporter named
Daniel Dale who live tweets a lot of President Trump’s rally’s. And he’s been covering him quite
consistently every day and so he tweeted on,
I’m guessing directing this at you Jeff. On October 22,
after certain day in the news, this is quote, I fact checked every word
Trump has uttered for two full years. This is one of his most dishonest weeks
in political life, he’s lying about so many different things at once,
and in big ways. Not exaggerating or stretching,
completely making stuff up.>>[LAUGH]
>>And then a few minutes later, Harbor Hound, whoever that is,
tweeted this question. How did we get to this point where
a self promoting con-man is able to impose his own reality
on the entire nation? So whether or not you’re a Trump supporter
or not, there’s this perception, and there’s this, I mean we know from Glenn Kessler at
the Washington Post that he’s lied. 5000 times or whatever the number’s
up to since taking office, how did we get to this point do you think?>>Yeah, so for
somebody that studies deception, Trump has been really amazing, and->>[LAUGH]>>I like the way Janine said it: this is about thinking about deception, not left or right, Democrat or Republican. So I'm actually a Canadian, so I have some->>Like a Canadian journalist too.>>Yeah, a Canadian journalist, exactly. So, one concern that a lot of
my Republican colleagues that I meet, say, in the hockey locker room. I play hockey, and so it's a great place to meet people, cuz here on campus we don't see a lot of Republicans.>>And I think I spotted one the other day.>>Yeah, did you? Over at Hoover, right?>>So
I talked to my Republican friends, and we sort of talk about why they support
Trump, and it's actually not unreasonable. So, one of their concerns is, what about these fact-checking sites? They're all from these liberal organizations. And so I took that to heart, and what we started to do was look at how fact-checkers have compared Trump to other Republicans. So you can go back to McCain, which is one of the first times when fact-checking was really done at scale; they looked at almost every one of his statements on the campaign. And you can look at Romney, okay? He's also been carefully documented for all of his deceptions. And most politicians, those two included, are in the 20 to 30% range, using a relatively stringent definition of what is deceptive, meaning there's some deception in the claim. For most Republican politicians, 20 to 30% of all the claims they make on the trail have some deception. Democratic candidates are in
the same range. We can look at Rubio and Cruz, who made it pretty far in the Republican primary against Trump, so we're kind of controlling for time; they're also in this 20 to 30% range. Trump at that time was in the 55 to 60% range, and it's actually hard to lie that often. We've looked at lots of students; we track their lies in text messaging, and very rarely do even the most prolific liars get above 30%. It's actually hard.>>[LAUGH]
>>So we’re comparing Trump against other Republicans, and we’re comparing
him against contemporary Republicans. And then you can also look at
people that have worked with Trump, you can also look at Trump’s own words. He doesn’t deny that
he’s into hyperbole and saying things that aren’t necessarily
true, he doesn’t deny it. People that work with him
also recognize this and a lot of his supporters
are also fine with it. So we thought initially this was a real puzzle: why is it that he can lie all this time and be sort of open about it, and his followers are like, yeah, he lies, and still this trust level never declines with his base? He remains around a 37% trust level in the US, which is around the same as Clinton, for example. And so Janine and I have been talking about
this a lot, and I think that rather than being a crisis of trust in somebody
like Trump, it’s a polarization issue. And we have a colleague in
communication that studies politics and polarization, his argument is like,
Jeff, this is no puzzle. If you’re a republican these days, and
that’s your guy, you will vote for him, and you will trust him. And if you’re a Democrat and
she’s your person, then you’re gonna vote and trust them,
and it's as simple as that. And he has this great and horrifying story, where he's like, in the 70s, if we asked people, how would you feel if your son or daughter married somebody of a different race? And Americans at the time were like, I'm not so comfortable with that; about 80% would say, I'm not comfortable with that. Now, if you were to ask that question, the good news is most Americans are totally fine with that; over 80% are like, yeah, that's totally fine. The bad news is, if you ask, what if they marry somebody of the political ideology that's the opposite of yours, 80% say, I'm not okay with that.>>[LAUGH]
>>Right, to the point of, like, I might not talk to my daughter or son. Polarization is like as bad as racism was in the 60s and 70s.>>Let me just try and push back for
a second here because I do think there’s something unique about the frequency and
the spread. The way these things spread,
the way the media covers it, and all these things, and
the lack of accountability. I mean, I think there is accountability
journalism being done about this, but I guess the lack of the impact. And so, is it something that someone
who’s not like I am who follows the news every second, it’s the President,
or is it that they just don’t care? He doesn’t mean it, there’s become this
complacency, he doesn’t mean it, it’s okay.>>Yeah, so initially when we
were looking at Trump, and this is before he was elected, we were
relying on this guy Harry Frankfurt’s philosophical definition of bullshit,
I’ll say BS, okay? And BS is when somebody says something
that isn’t necessarily deception, the person doesn’t even
know what the truth is. They’re just unconcerned with Facts and
I did some work initially on this and thought that it was an interesting
framework cuz a lot of times Trump will just say things and it’s not clear
whether he knows what the truth is. He just doesn’t care. And over the last two years, it’s become
clear, I think, that that was a mistake, because BS trivializes what happens when someone occupying the presidential office,
or any office of power, lies, okay? It can be any place, not just the US. The rational thing, if you are a citizen,
okay, and you’re living your life, you’re working, you check in on
the news every once in a while. But the rational thing is
to believe your leadership. That person has been voted in by you and, roughly, the majority in your country-
>>[LAUGH]>>You should believe what your leader has to say. And this is something
that I should clarify. There’s only one effect in all deception
detection research over 60 years, hundreds of studies. There’s only one thing that ever
replicates every single time. That’s the truth bias. That is,
we by default believe other people. It’s what language is built on,
it’s what society is built on. So the natural thing, the rational thing,
is to believe what your leader, your politician,
your political leadership says. And so the danger here, and I don’t know
if we have the Hannah Arendt quote. But the danger is that when a president,
somebody occupying that position lies, he or she is actually changing reality for a
lot of people, at least in the short term. Because those people are rationally
trusting what he or she is saying. And that’s the danger: when somebody
that’s unconcerned with the truth is put in a position where they
can create that reality. I have a Hannah Arendt quote from
The Origins of Totalitarianism, I’ll just read a portion of it. Quote, in an ever-changing,
incomprehensible world, the masses had reached the point where they would, at the
same time, believe everything and nothing. Think that everything was possible and
that nothing was true. Mass propaganda discovered that its
audience was ready at all times to believe the worst, no matter how absurd. And did not particularly
object to being deceived, because it held every
statement to be a lie anyhow.>>Right.>>And it goes on and on, but-
>>And so, because we’re so polarized, I think what
happens then is that okay, so my leadership said something that
has been shown to me to be false. And we normally think that
would be a violation of trust, but in Hannah Arendt’s analysis,
she says at this point when there’s two parties that really
hate each other, they’re at war. When a political leader does that,
they say well, I knew that he was lying. It’s a weapon, it’s a tactic. It’s clever,
he is trying to beat that other side. And so when Trump’s followers say,
yeah, of course, he was lying. He has to do that in order
to beat the liberals. He has to do that, it’s a tactical thing,
and so it’s not irrational. I think that’s really important. It’s upsetting, and
it’s really difficult to deal with, but it’s not necessarily irrational.>>So Sharad, it’s not only that
when people go online they’re confronted with
an impossibly hard landscape to navigate, of real news,
fake news from the Russians. All these different kinds of things. Lies from the president sometimes, all these things hitting them at once,
conspiracy theories. They’re soon gonna be facing deep fakes. And I know I said we’d go behind
the headlines, but just the other day, in the New York Times: will deep-fake
technology destroy democracy? Namely the idea that you can take,
this audience will know, but for those who don’t, take a video of President Obama and make
it look like he’s saying something else. Do I have to worry about that, too?>>Yes.
>>[LAUGH]>>Be very scared.>>Be very scared.>>Yes,
have people seen these deep-fake videos?>>How many people have seen them? About half or a little more than half.>>Okay.>>Yeah, you won’t sleep after you’ve
watched one, it’s extremely disturbing. The only comfort I take is that people
already don’t care too much about the truth, so.>>That’s your comfort?>>Yeah.
>>[LAUGH]>>Maybe we’ve already bottomed out. We have like half of Republicans
who still think Obama was born in Kenya. That wasn’t a deep fake,
that was like a pretty shallow fake. And so it’s like Jeff was saying, that there’s this partisanship here
that goes well beyond the facts. And I do think that one of the big
problems with this technology is it raises the cost of identifying the truth. It doesn’t make it astronomical. And so I do think that the mainstream
media will be able to discover what is true and what is not. Maybe it will take them 12 hours now. But the problem is in those 12 hours lots of people will already
have formed an opinion. And it’s very hard once they
formed that opinion to switch it. And so I do think that there’s
a real cost here, not for people who genuinely want to understand
what’s going on, but by adding noise. By making it just much, much harder for the regular average person to figure
out what’s real and what’s not.>>So in this climate where everybody has
become a publisher of anything they want, a news story, a fake news story, a fake
video, manipulating President Obama, whose responsibility is it ultimately then to police this to make sure that we
actually can still have our democracy? Cuz what you’re saying is that,
well, people are gonna buy it. I mean, we’ve had this discussion a lot. Is it the platforms? Is it the government? Is it the news organizations? Who has to take the lead here for
either of you?>>Go ahead, Sharad.
>>Is it us in the academy?>>I trust Jeff.>>[LAUGH]
>>So in some ways, I don’t know if this is the right
question, in that we can’t just decide who’s gonna police it and then we’re done. It’s not clear this can be policed. This is just happening, and
sites have already banned this type of content when they can
identify it, which is not always the case. And so it’s not clear to me that you can
just say, okay, we’re just gonna ban it, and then we’re good. You know, the media, the platforms, what exactly are any of these actors
actually gonna do, concretely? And I’m not sure what we
actually can do except keep on doing what we’re doing and
saying this is true and this is not. And hope that at the end of the day
people actually trust these sources that have a reputation for
telling us the truth. But can we actually curb this technology? I’m not sure.>>Well, that’s the solution. Well, your colleague, Minisha,
is working on this, right? I mean, whether we can promise. Every video’s been manipulated, even
like a professional video in some way, right, Jeff? Yeah.>>Yeah, I mean, I have a similar
take on this issue, but I think there’s two reasons that I remain an optimist, above
and beyond the fact that I’m Canadian.>>[LAUGH]
>>The first is, it’s often like an arms race. When spam took over our inboxes in the
’90s, it felt like email was dead, right? A dead technology, because
it had been overwhelmed by spam. And yet email is thriving. It’s still used as much
as any other application. There were solutions to it. It wasn’t just technological,
it was also financial. We had to remove the incentives
that spammers have. But there’s still billions and
billions of pieces of spam every day. We just never have to see them anymore. So I think there’s always
this arms race component, and our other colleagues here at Stanford
are fighting back against deep fakes. That’s totally right. It won’t always be perfect, and
there will be some that get through and some that don’t. But I never feel like there’s
a technology that will just wipe us out. I just don’t buy
technological determinism. That brings me back to my
second one which is us. We, I think,
will adjust to our new media ecology. It’s gonna take a little while. I mean,
if we think about it, World War II is when more people on
earth were literate than were not; that was the tipping point for literacy. And since that time, the amount of
change in our world has been radical. How many people today
have written something?>>[LAUGH]
>>Everybody, everybody has written something today. So in 1940, only half of the world could
even imagine writing something in a day. So we’re undergoing massive change
in our information environment. And it will take time to adjust. It will be painful. There will be mistakes made. There will be casualties. But I just fundamentally believe
that we as humans will adjust to the technologies that we’ve created.>>As the American, I’ll take
the more pessimistic view on this.>>Let’s do it.>>[LAUGH]
>>I think the fundamental difference between spam and
deep fakes is people want deep fakes. No one actually wants spam, well, maybe some
people want that Nigerian letter, but for the most part I think we
don’t actually want it. With deep fakes,
people really want to see it. They might even intellectually
know maybe it’s not quite true but they like seeing it. And I think this is a real problem.>>Yeah.>>And so whenever people want it, I think
the market is going to provide some avenue for getting this type of information.>>Yeah.>>And maybe these mainstream,
maybe Facebook will ban it, maybe Twitter will ban it. But there’s always gonna be these fringe,
and this is both the benefit and the fear of the Internet, is that
people can do whatever they wanna do. And there are going to be these
sites that cater to this audience. And this is a way of polarizing attitudes.>>You know, as depressing as that is,
I like that framing. Because it actually,
it brings us back to us, right? It’s not like the technology is causing
us to want to see deep fakes that are like revenge porn, really
horrible horrific effects on people. It’s human motivations in a way, right? Like I think that point
is a good one which is, I want to see Obama say
something really stupid right? Or I want to see Trump say
something really stupid. So to me then it’s us in a way. And then it gets back to education,
values, norms, all of these issues we’ve
spent millennia working on. And what do we as humans think
is important and valuable, and what is wrong? But it doesn’t hand everything
over to technology and say, well, because it is here, because it can
do this, can do that, we are doomed. As pessimistic as it is,
I actually like that approach. It puts it back on us.>>We don’t have the tools, Jeff. How are we going to do that? People don’t even know what they’re
reading, what they’re looking at. They don’t even believe it.
You know, I was at an event and a guy was trying to convince me that the New York
Times had made up the opening anecdotal lede. I mean, the whole notion that people
are going to be able to figure this out on their own.>>Mm-hm.>>It’s too much of a burden,
but I recognize the problems. I mean, the research shows if
you label things as fake, then they’ll believe everything
else that is not labelled. So you have to label an infinite
amount of material, right?>>Yeah.
>>And so I think, how do we restore respect for
institutions, like government agency data?>>Yeah.>>Which our own president has doubted. But not only the media; the media, I mean,
there’s gotta be some shift, right? I mean, something’s gotta
happen to restore that faith.>>Mm-hm, well hey, so one thing that’s
concrete in that Edelman Trust report. So this is the Edelman Trust Barometer,
it’s been going on for what, 25 years, asks people how much they
trust different things. Lots of countries, over 50 countries. And one thing I don’t know if we’ve talked
about, Jeanine, but there is one big plus. So one group of individuals gets a 12%
boost from this year over last year, and that’s journalists. So even though there is-
>>In the past year in the US?>>Yes, yes,
even though there’s a decline in media.>>Cross check?>>Yeah, right, exactly.>>[LAUGH]
>>I know this one very well. Academic experts, we’re not plus ones, so. But no, so I think that people
are starting to recognize that journalists play a valuable role as a gatekeeper. I do believe there’s a turn. So that’s one concrete piece. Here’s something more speculative. Jeanine, you might be right that we,
all of us in this room, are maybe somewhat unequipped to deal
with the world that we’ve created. But I do believe that our children,
younger people, who are much maligned these days,
wrongly I believe. I think that they’re developing
the skills to be able to determine what is something that’s genuine,
versus something that’s not. I don’t have a lot of hard
evidence about this, but we’ve been working with a really great
not-for-profit that works in the Bay Area. All kinds of kids, very diverse,
all kinds of socioeconomic statuses. They’re going in and
meeting with these kids, doing some interventions, and
learning how they deal with these adult problems. And they’re like, no,
I would just check on this. I would never believe one video. And it’s like, you adults do that? You guys believe one video?>>Yeah.>>And so I kinda think that they’re
developing, because they’re children, they’re developing a way of understanding
the world that we simply can’t. So for example, my favorite
mode of communication is email. And I think it’s kinda like, you know, your
favorite music is the music that you heard when you were in your teens,
early twenties, right?
Even though we’re old, we still like that. It’s the same with technology. I like email cuz that’s what
I kinda grew up with, right? None of my students use email except to
communicate with old people like me and Sharad and Jeanine.>>Not me, not me.>>Not even you!>>I don’t use email.>>So I think there’s a certain, I have a deep sort of confidence in
younger people that will kinda come in. And the problems that we’re like, these
are unfathomable, to them will be like, we’ve been dealing with this
ever since we were five.>>You really are an optimist.>>I know.>>[LAUGH]
>>It’s true, it’s true.>>The young people will save us.>>[LAUGH] Yeah,
the young people will save us.>>It does help having an office next
to Jeff when you’re worried about these issues.>>[LAUGH]
>>We can start taking, if anyone wants to bring me the questions from the audience,
I can start integrating them. But while they’re being brought to me, what about your own trust, for
the two of you? I mean, in what you read,
say in the press or in academic articles. Sharad, I think you told me that you’re
always skeptical of reading any scientific study on discrimination because it’s
often tinged with partisanship. I mean, do each of you look at peer-reviewed
academic articles differently? I do, I feel like we’re now falling
into the optimist-pessimist roles [LAUGH].>>Yeah.>>But yeah, I’m pretty skeptical. And I do think that partisanship
has to me very surprisingly entered into the academic sphere. And again, this is me being naive. But five years ago, I would have though
that computer science, statistics, we sit above this sort of debate that
is happening in social sciences. It’s like no, this is affecting our work. We think of this as very mathematical, we
think of this as objective, in fact there is all sorts of attitudes through which
the results that we report are filtered. And I see that a lot and
I find that highly disturbing. And like you were saying, this means
that I am particularly skeptical when I read papers about these polarizing issues.>>Right, yeah, yeah. So Sharod’s right,
like in social sciences, there’s been concerns about partisanship
influencing academics and social science. My colleague who’s here, Jeremy Belenson,
had a Congressperson visit, and he was showing him some
climate change stuff. And afterwards Jeremy said, what do you
think of this climate change stuff? And he’s like, it’s good, it’s
liberal science, but it’s pretty good. And to me it was just like whoa,
wait a minute. We’re like dividing science now,
by what your political party is? And that just feels really wrong and
that is upsetting. For me I just finished
this giant meta analysis, 256 studies of how social media
affects your psychological wellbeing. And boy, reading some of them, you just see
that people came to the research with an idea, I think Facebook is bad or
I think Facebook is good, and you just see it in the paper. And that’s been a bit depressing,
I have to admit.>>Okay.>>Finally, I got you down.>>Yeah, finally, there it is.>>I broke you down.>>You broke me.>>[LAUGH]
>>So a couple questions on Blockchain here. Someone working in
the Blockchain space asks about the idea of being able to do
trustless transactions. How would trustless personal transactions
affect social trust? Where are you guys on Blockchain?>>Bye.>>[LAUGH]
>>[LAUGH]>>Where do you? Deep thoughts on this?>>Some. So Blockchain’s idea is that I don’t
need to trust an institution anymore, I can trust a sort of network-defined set of trust. I think it’ll be the same,
an adaptation into this sort of space. For me, one thing that’s exciting there
is we think about something called folk theory, which is how we
think about complex systems. And so Blockchain is a complex system, and
most of us, including me, don’t know the specific
mathematics of it. And so
whatever folk theory people bring to it is gonna play a big role in
whether they trust it or not. So the actual nature of the Blockchain
is one thing; people’s psychological understanding and perceptions of
it will be a whole other thing.>>Okay, people are concerned. I guess we’re striking
the worry tone here.>>Sharad.>>It’s probably me.>>[LAUGH]
>>What can we do here? If someone believes a lie, maybe Jeff,
if someone believes a lie, what can anyone or any institution
do to change that person’s mind?>>Yeah, so I’ll take two approaches. Anybody remember Columbo? Sorta like the Columbo thing? Yeah, okay. So Columbo, he didn’t go about trying
to learn things by looking. Like, I’m not gonna look into Sharad’s
or Jeanine’s eyes to try and see whether they’re telling the truth,
because there is no Pinocchio’s nose, there is no reliable cue that
tells you something’s going on. What Columbo did was ask questions. He formed relationships with everybody,
I could say, Sharad, where were you last night,
he would tell me, then I’d ask Jeanine and I’d ask Brian, people that know him. And by asking questions, I ultimately learn as good a version of the truth as
I might want, and so that’s the Columbo approach. We live in this amazing era,
where we have massive tools for
asking questions of giant amounts of
information. So to me, one answer is these ways of asking questions that we
it depends if you are in their tribe, if I’m in Democrat and I’m trying to
convince a Democrat about something that they have been misled on,
I gonna do really well. If I’m doing that across, tribes right
now in the US, again cuz I think polarization is maybe one of the top
problems in the space, good luck. It’s gonna be really difficult. And so I think the tribal aspect
is gonna be really important.>>[INAUDIBLE]
>>I think this is all predicated on the assumption that people
want to know the truth, and it’s not at all clear to
me that that’s the case. So this question of how
you persuade someone presupposes that they actually want
to know what’s going on, and this tribalism makes it very unclear that
that is fundamentally what’s at play; maybe people just
wanna be comfortable. And that doesn’t necessarily
have to play out by understanding what’s really
going on in the world. And that’s, to me, one of the most
disturbing aspects of this, and also of the normalization of these
lies in this political climate: it brings to the surface this idea
that it’s not necessarily about the truth. It’s about how people
decide to view reality.>>And I’ll just follow up on that,
because that rings very true, but then you get outside
the political space. If you talk to corporate leaders, or you talk to people in the military,
they have none of this, right? They’re like, no, I need to know what
the facts are, because I’ve got people out there that will die if I don’t. I’ve got a thousand people
that I’m in charge of. And if I’m running this
corporation based on some facts that
aren’t really real, I’m gonna hurt a lot of people. So I think in the political
space, that’s exactly right. But in a lot of other parts of
human life, facts really do matter and people really care and
they do wanna know the truth.>>Medicine.>>What’s that?>>Medicine.>>Medicine, yes, exactly. My dad and
I disagree a lot about climate change, but when it comes to medical
issues he doesn’t wanna hear what some crank that has no
background in science has to say. But when it comes to climate change, he’s very interested in the crank
with no actual science.>>[LAUGH]
>>But I think that’s right. There’s domains of our life where
facts and truth are super important. And there’s other ones where I
don’t want to know the truth. I want to feel good.>>I think that’s exactly right. And again, one example from my own work
is looking at police discrimination, which is a highly polarizing topic. When I
talk to general audiences about this work, it’s clear that this is dividing
people on partisan lines. The first time I went to
talk to a police department, I was terrified that this was
not gonna be well received. And in fact, much to my surprise they were extremely
receptive because they want to know. Now, they fundamentally need to know how
their organization is operating, and how to make it better. Lives are at stake, the community
really will react to their policies. And when you have something on
the line, when you actually have
some skin in the game, then I think that changes the balance. And this is actually something
that we are currently doing, is we’re trying to pay people to
tell us what they really think. And so if you ask them,
where do you think Obama was born, maybe half of them are gonna say Kenya. Well, maybe not so
many are gonna say that, if we tell them we’re gonna pay
them based on whether they’re right or wrong.>>Right.
>>And so if they actually have something on the line, maybe they are much more
willing to interrogate the information.>>It’s kind of scary what they’re saying.>>[LAUGH]
>>Because I thought it was gonna be an issue of storytelling, or how do I get
my students to want to read the news. Credible, fact-based news, and how do
I take that to the broader population. What you seem to be saying, both of you, is that the population in the United
States of America right now prides itself on not caring whether something’s a lie.>>I don’t think that’s what we’re saying. I do think that in some domains of life,
people that are highly partisan, which is a chunk. Not all Americans are highly partisan, are prepared to believe what their
leaders in their tribe are saying. I do think that’s true. But I don’t think that,
that means that it’s the end. I don’t think that’s
a permanent state of being. I think that young people are completely
fed up with the way that politics works right now, and
aren’t falling into these partisan lines. But no, I think political
life is one sphere of life. It’s a really important one,
it’s one that we think about a lot, but it’s not everything, and I do think there are a lot of parts
of life that people want the truth.>>But again,
I’m gonna be the downer here and say that politics is not everything but
it’s a lot.>>Yeah.
>>And so the sorta traditional view in political science is
that once you have a stable democracy, a relatively wealthy country,
you’re just there, that’s it. You just stay there forever. But the early 21st century
is showing that that’s not the case, if you look at countries
like Hungary, Poland, Turkey, Brazil. There is this phenomenon
of democratic backsliding: when you lose trust in the judiciary,
the rule of law. When you lose trust in the media. When you lose trust in
electoral integrity, all of which we’re seeing right now, it is a very real possibility that we will
migrate to these authoritarian regimes. So I think this isn’t just,
politics is one form of life. I think you’re right, that when somethings on the line people
actually will seek out the truth. The problem is all these small
changes to the way that our institutions operate can actually have
dramatic effects, maybe it takes 20, 30 years, and it is not tanks on
the street, that’s not how it ends, it ends through the gradual erosion
of these democratic institutions.>>And that reminds me of some work.
I got really interested in income inequality for a while. I was interested in
the psychological aspects and started looking at the history of it. And to Sharad’s point, we find
that when you look through history, when income inequality in
whatever society you wanna look at reaches some point, and it varies for each
society, there’s some sort of breaking point where it’s like,
the social contract has been broken. You see trust in institutions
decline very quickly. So Florence for
example was this amazing society. And at some point the rich
people were like, you know what we’re not letting
other people in on this. And they sort of like prevented other
people from joining the market. And that’s the beginning
of the decline of Florence. And I think it’s a really complex problem. It’s not necessarily anything to do
with technology. It’s not just polarization. I think the income inequality
part of it is really important too. Because it’s like I’m giving my life
in part to this social contract but I’m not getting anything back, but
a few people are getting everything. So I don’t disagree with you there, but
I don’t think that it’s like technology is causing it, or
necessarily polarization is everything. I think it’s a really complex thing. It’s not that it’s without danger.>>It’s not without danger.>>Sharod, just shifting gears a bit,
going back to the algorithm discussion. How much is it really
untrustable algorithms quote, unquote, is due to coders not
understanding the physics of the problem? How much mistrust is just due to, I can’t
it’s a curse word, bad programming?>>[LAUGH]
>>I think these are related issues,
and so what is bad programming? I think a lot of it isn’t
malicious programming. I think a lot of it is uneducated
programming or inexperienced programming. And that’s exactly what
we’re trying to do on this campus is basically
every student codes now. But we haven’t actually given them
the thorough curriculum that’s necessary to make these ethical
decisions, to understand the implications
of the systems that they design. In part because we as a community,
as a computer science community, we haven’t actually thought of this
ourselves as deeply as we need to until relatively recently. And so I do think that these
are unforced errors in some cases. I suspect, anecdotally,
that that’s a lot of it, that there is probably some
element of malicious intent. But my guess is most of
it is just inexperience. And there I am optimistic that with
more education, we actually can get these algorithms to perform
better than they currently do.>>And we actually, as an aside,
have interesting classes partnering computer science with
the journalism program, where the coders are working together
with the journalists on some of the solutions around making sure people
get the news they need to be smart citizens. Back to trust online and social media. This is a smart question: Facebook
began with a robust web of trust among users authenticated by
virtue of .edu email addresses. But we no longer have
this authentication necessarily, though I think they are trying to go back to it. Would that work, digital ID on a larger scale,
identification to restore trust?>>Yeah, I certainly think that’s a major trend that a lot
of the platforms are moving towards. They want to be able to
authenticate people more. I think there’s almost like
a banking model we’ll start to see, where you have to authenticate
who your customer is. Because one of the major problems
right now is I could go, right now, for $179 and buy about 1,000 followers for
whatever page I want. And I just checked that yesterday. And so,
cuz I buy fake followers all the time.>>[LAUGH]
>>I’m very popular online.>>[LAUGH]
>>Who knows. So there’s this huge market, and
how do they do that? It’s bots, it’s fake accounts. And so Twitter recently estimated
about 15% of its accounts are bots; it’s probably a little bit higher. So no, I think this idea of who we are, that warranted notion that I
mentioned before, is important. And we’re gonna see
probably a splintering. So there’ll be platforms where you have
authenticated IDs and accounts, and then there’ll be more wild-west ones, like this platform Gab that’s getting
a lot of attention, where it’s like you can be anonymous on it and say whatever
you want and things like that. So I do see there’ll be
some bifurcation in that.>>Sharad, do you have a thought
on the authenticity question? I know when you talk to the human
rights people internationally, they’ll say this is terrible because it
undermines all the things that social media is actually used for,
for good, for rallying people in places where social media is used as
a tool to crack down on dissidents. But I guess that’s a cost-benefit
analysis there right?>>Yeah, I mean I think a lot of
social media use is tradeoffs. And people always ask me,
is it good for me or bad for me? And if I never hear that question again
I think it would be good, because it’s everything.>>I’ll see if it’s in here, I’ll skip it.>>Yeah, our colleague in the department,
Byron Reeves, often talks about media and technology as like fire. It’ll warm your house and
it’ll burn it down. But what are you going to do? Not use heat anymore? Are you not going to cook? And so in the big giant
meta-analysis I mentioned, we see overall there is no effect on
psychological wellbeing, okay? When you look across all 256 studies. But when you look inside the kinds of
wellbeing, we see if you use it a lot, you get a little bit of bump
up in anxiety and depression. It’s significant, but it’s really small. You also get a bump up, it’s a little
bit bigger, but still pretty small, in relational wellbeing. That’s how well-connected I feel
with my friends and family. So there’s a trade-off. You use social media a lot,
you’re gonna be a little bit more anxious, a little more depressed, and you’re gonna feel more connected with
your social network, your people. So I think tradeoffs are just
like a part of life. And if we focus on one or
the other, we make mistakes.>>This is getting back to the question
of where I said, we can’t do it. Please speak to the correlation
of trust and distraction. It seems there is no time or
energy to decide on trust when there is so much distraction. Can our brains
even process all of this at once?>>Hm, yeah, yeah, well, so-
>>Now I need a neuroscientist to be my next guest.>>I feel like that all the time. But again, we are not the first
humans to feel that. So I have this really great slide from
the 16th century, so it’s after printing press comes along and it’s a big giant
wheel, almost like a water wheel. And monks would put books into it, so
it could hold eight books at a time. And they could move it around and look
at different pages in different books. And what was it for? It was to deal with
the information overload. Because there were like 10 or
11 books being published a year. It was insane, they couldn’t take it. So they were developing
technology to deal with that. And now we look at it, now of course,
it’s orders of magnitude different here. But again I just think like we are not
the first generation to feel like we are overloaded with information at all.>>But it’s possible that it’s
just getting worse over time? That it’s actually true-
>>Right, right, it’s true, we were worried before, but
now we’re really worried.>>And I feel the urge to check my email,
if I were to use email.>>Yes, yes.>>[LAUGH]
>>Well, how many of you have been looking at your Screen Time reports that Apple
sends out once a week now, is anybody? And it is shocking how
often we check our phones. Again, Byron, who I just mentioned,
we’re checking our phones well over 100, sometimes 200 or 300 times a day,
and usually it’s for seconds. And so
that might be really problematic for sure. But we don’t know, we don’t know
if that’s just like a new norm. And our brains are like really
happy, you know, finally I’m getting enough
information and stimulation. But we don’t know, we don’t know.>>We might have a guess.>>Yeah.
>>[LAUGH] I wanna apologize to the audience because there’s
100 amazing questions here, so I’m trying to get
through as many as I can. So I apologize if I don’t get to yours. So one person says, do you think
the Internet’s always gonna be like this, filter bubbles, confirmation
bias accelerator, fake news, or is this just the current set of business
models, technology, and cultural patterns?>>So
I think it’s easy to blame the Internet. We’ve had filter bubbles forever,
at least echo chambers forever, offline echo chambers. And you just think about
neighborhood segregation, and even like workplace segregation,
who you’re interacting with. And so again I think it’s somewhat
misplaced to say that it’s the Internet that’s the problem, so
all we have to do is fix the Internet. It’s not either the problem or the solution to any of those issues
that we’ve been talking about. And so
I think the Internet is going to evolve. And in some ways it’s gonna get better,
and in some ways it’s gonna get worse. But I think fundamentally
we’re not going to, and maybe this is me being pessimistic,
have this fluid exchange of ideas,
where everyone is perfectly informed and operating with these
perfectly rational thought processes. I don’t think that’s gonna happen. We’ve never had that before. We’re not gonna have it in the future. The Internet is gonna play some role
in this, but I don’t think it’s actually the proximate cause for
the situation that we’re in right now.>>Yeah.>>And I like that the question
mentioned the business model. So Sharad, neither of us does a lot of
economics or business thinking, but I do feel like the business model
is a big problem in a lot of this. The advertising is the core thing. And when you think about fake news or
you think about computational propaganda, all these things that we’re
worried about influencing us, and it’s usually because advertising
is at the core of it.>>So when Sheryl Sandberg went to Capitol
Hill a while back, and what’s the name? Trey Gowdy, the congressman from,
where’s he from?>>South Carolina.>>South Carolina,
he was trying to understand some of this in terms of the incentives of one
portion of the Internet, okay, Facebook, a big portion of it. And he said, so if I write that it’s Tuesday,
I’m paraphrasing, obviously. If I write that it’s Tuesday but
it’s really Wednesday, that’s okay?>>And she said something like, yes, that’s not a violation of our
terms of service, Congressman. And he was kinda hung up on this idea of,
so basically, to make the point that anybody can write
just about anything and it can go viral, and then we’ve got these problems. So is there a way to, and whenever I
talk to people, to companies, about this, they say, you’re for censorship. But is there a way to
change the incentives, apropos the question to prioritize
the information that’s actually true? Like for example, Apple News. There was a big story in the business
section of The Times this weekend. They’re curating. They’re saying we’re picking,
we’re not leaving it to the, no offense, Sharad, to the algorithms,
but we’re gonna pick it. We’re gonna have a human doing it. So are there ways to, without censoring, uplift credible information on the
Internet, and push down the less reliable?>>Well, what Facebook is doing is
not lifting up, but pushing down. So when things get flagged as fake,
or hate speech, what they’re trying to do now, one of
their main tactics is called demotion. So if you’re using something like fake news as a method,
usually you have two motivations. One is persuasion, convincing somebody
that something is true; the other is profit. And for both of those to work I need my post,
whatever it is, to spread widely. That’s what you’re always looking for. So Facebook’s primary tool right now, and
there’s some evidence that it’s working, is you can say it, but they’re just
not gonna share it with anybody, or share it with as few
people as possible. And so that’s currently one
of their main techniques. It’s not gonna solve everything, but
that is one of the main ideas. So they’ll let people say something, just
trying to limit how far it can spread.>>I think one of the problems here is
there’s not a clean divide between the real news and fake news.>>Yeah.>>That there’s this
whole gray area between, I actually almost never see fake news,
like truly fake news. But what I do see a lot
of is things that are like, is that headline really accurate? Was it clearly
designed to get that click? And you see this even in these
well-regarded mainstream media outlets. You see this in the New York Times and
the Washington Post. Or this is like yeah,
that’s kind of true, certainly not fake. But it’s really designed to
get me to click on that link. And this is what really bothers me, I think that getting away from
that incentive is very hard to do. And people want their stories read. And if it’s in that gray area,
what are you gonna do? Because I think that you
then worry about censorship. You can say okay,
that’s slightly misleading, but now how can we really decide? If it’s fake, I think that’s a class of news where
we really say that shouldn’t be spread. But what about all that gray area, which I think actually occupies the vast
majority of what we see online.>>Sharad, they’re not putting
clickbait out only because they want their stories to be read. They gotta drive traffic to the sites so the advertisers will pay enough money,
cuz people aren’t buying subscriptions. So if people bought more subscriptions,
then we wouldn’t need as much clickbait, theoretically, right? Just to weigh in quickly on this cuz we’ll
be coming back to the media question. I think when people are asking,
what’s a trustworthy media, I’ll tell you what’s not a good idea. Making Facebook your main domain for
getting your news. It’s not built for it. It’s just not built for it, right?>>Completely.
>>So go old school. All of us who email, go to a site and
read it there, perhaps. So, people are coming back to these
biased-algorithm questions. In the 1970s, major symphony orchestras
began holding auditions behind the curtain so as not to see the performer. This resulted in considerably
more women being selected. Could algorithms be today’s
equivalent of the orchestra audition curtain by eliminating the unconscious
bias many people might have when seeing the job applicant or
the resume? Might interviews be de-emphasized
in the hiring process? What do you think, Dean Whitham?
Would it be a good one?>>[LAUGH]
>>I mean, it’s interesting, no?>>So I think it’s yeah,
it’s a good question.>>Yeah.>>[LAUGH]
>>Yes, so I think it’s interesting, one thing that’s,
let’s notice that there’s this idea that you want the algorithms not to
consider things like gender and race. And that feels super intuitive. But what we found in our research is that
that’s not always bad to consider these protected characteristics. Let me give you one example, if you
look at these risk determinations that judges use when deciding whether or
not to detain or release somebody. If you look at just the basic
features like someone’s age or their criminal history, well,
women tend to recidivate, to re-offend, much less often than men
with exactly the same characteristics. And so if you are gender neutral,
again, some jurisdictions do this, then what are you gonna end up doing? You’re gonna end up
overestimating the risk of women, underestimating the risk of men. And if you make your decision
based on that alone, you’re gonna end up incarcerating
relatively low risk women. And so that’s one reason that some
jurisdictions have actually decided to use something like gender in their risk
determinations to make it more fair. And so again, this pushes against our
intuition of what does it mean to be fair. In some cases, we actually want
to look behind the curtain to make sure that we’re making
the most equitable decisions.>>So Sharad,
just following up on the AI, a little related, in terms of how is trust
represented in AI today, in software?>>Yeah, so this is, it’s a hard,
it’s a complicated question. So one way that we use trust, again, in these types of high stake decisions
is making the algorithm interpretable so you can look at it, you can inspect it,
it’s not a black box. So traditional machine learning, how
did it work, we’re just looking at that epsilon improvement and
we didn’t care how it got there. We just took lots and lots of
information, and we had some binary decision, show this ad, don’t show this ad,
and that was machine learning. And it really didn’t matter
what that mechanism was, and no one was interrogating how
that algorithm actually worked. So we discovered that in a lot
of these high stake decisions, that’s not good enough. I can’t just walk into a courthouse and
say, I have this algorithm, and say okay, detain this person,
release that person. People won’t trust that system. And so now what we’re moving to is a Is
a white box, a clear box that we can look into and say this is how exactly
how the algorithms is working. It’s looking at this factor,
this factor, and this factor. It’s giving you this weight and
you can interrogate it. So that’s just one
solution that I think for trying to build up trust in these
types of high stakes algorithms.>>But a lot of people wanna
know some solutions here, and we’ve touched on some of them. But how can we use, for
example, information technology to, I like this phrase someone wrote, to
reorient people back towards the truth? It sort of relates to
some of these questions of how we can use AI and all these other things to do that. Are there things that come to
mind that aren’t being tried yet that are coming down the pike?>>Janine, I think in the long run,
I am an optimist, and I feel like with enough time,
with enough data, with enough analysis, we will find out what’s going on. And this will replace our anecdotal
evidence thinking, going back to police discrimination. For a long time,
we had to rely on anecdotal evidence, and that, I think, serves us well
in some circumstances. But there is also a place for data and
rigorous statistical analysis. And so I do believe that at
least some decision makers, some sub set of decision makers
fundamentally care about the truth. And in some of these circumstances
we can use advances in AI, advances in data collection and data analysis, to uncover those truths
that we previously couldn’t.>>So in other words, the people we need
to convince to go back to data and
real information are the decision makers, you said.>>Yeah.
>>Maybe we don’t need to worry about everybody.>>I do think we need to, there is this PR issue here-
>>Yeah.>>That we do need to get people
who care about the facts to elect politicians who care about the facts. I think that’s a very real constraint. But again when we work with people
who are actively making the decisions I don’t see this type of neglect of
the facts that I encounter much more often.>>When there’s nothing on the line.>>Well, I guess we could argue about
whether that’s happening when they discuss climate change policy though for
example, right? Are they actually considering the full
range of facts when they’re rolling back, in the Trump administration,
environmental regulations? Or are they cherry-picking certain things? Or are they just getting rid of
certain data from the websites? You know what I mean?>>Yeah
>>How do we know that they’re actually->>So I’m optimistic in the long run.>>The long run.
>>[LAUGH]>>Yeah, the long run.>>Yeah, I think it’s pretty clear
that people are cherry picking. And they’re trying to,
here’s what my goal is, deregulation, and I’m gonna do whatever
it takes to get there.>>So I think that’s one of those examples
of it’s a political space where they’re just trying to get to some ends. And they’ve decided In their
value system that, perhaps, not finding the truth is fine. But to your question, every other week, I have somebody that wants to meet with
me because they’re in the valley, and they’re starting a new company and
they wanna improve news. We’re going to do crowdsourcing,
something will happen, right? There’s innovation taking place on this, there’s a solution here somewhere
I don’t know where it is. But also I think there’s current
systems that are very powerful. So we’ve been looking at
Google search in particular. And you may remember that Trump said, okay,
it’s biased, it’s against me. We’ve been doing a project for,
what, two years now, looking at how Google search results, were they biasing people out
towards the fringes, left versus right? Or emphasizing websites
that were more centrist? And really cool results, from one of my
students here in computer science: Google’s results
are very much based on authority. So, does the site that this
is linking to have authority? And it was really eye-opening to see that
Google search is much less biased than if you just went on Twitter or
Facebook, something like that. You get good information through Google. So, there is an existing
established system.>>So we only have about seven minutes
left so we don’t have enough time to get through all these Trump,
there’s a lot of Trump questions.>>[LAUGH]
>>And I want you to have time for some final thoughts and
I wanna be respectful of people’s time. It seems that Trump wants to use up all
the oxygen in the room by telling outrageous lies. The media is playing into his
game by giving him airtime. Should the press just say Trump lied in
this case and move on to another topic? Or is Trump like a kid who takes more
outrageous actions to get attention if ignored? Any thoughts?>>[LAUGH]
>>Yes.>>The answer is yes. Well, so if anybody is interested in more, there’s a group called Data & Society,
they’re out of New York, and they just released a really great report
last month on media manipulation. So it’s not specifically about Trump,
but one of their case studies is the alt-right, so this is the far right,
white nationalists, white supremacists. And they sorta outline how they
have developed techniques, based on some of the counterculture that
came out of Silicon Valley in the 60s even, on how to manipulate journalists. How to get more airtime, how to get people talking about it,
how to mainstream your crazy idea. It’s really worrisome, but the cool thing
is it’s starting to be documented now. And I know in Communication we have a really
great journalism program that Janine is part of. And that is now starting to be taught. Journalists are now starting to be made
aware of the fact that they are being manipulated with these
sort of media techniques. So again, not to sound too repetitive but
I feel like with education like this, though, we’ll start to understand
how to deal with that. And with Trump, I
don’t know if you can ignore him. He’s the president of the United States. Can we treat him differently? Yes, I think so, but
I think journalists are sort of bound by the fact that there’s
a normative relationship. We’re aware he’s the president; we have
to take what he says seriously and react to that. And I think they’re just now
adjusting to the fact that he’s not following presidential norms.>>I would like to say a quick
word on this if it’s okay, that I think that’s right Jeff. That the White House media and the media
in Washington in general are trying to cover him like they
covered every president.>>Right.
>>The president said something, you got to cover it. That’s sort of the impulse reaction. And I know there’s a lot
of debates going on, say, within the editorial boards of the top
newspapers, about do we cover what he says or do we cover what he does? What do we focus on? Big debate,
people are fighting about it every day. And I think we wanna try and draw
a distinction between the print newspapers and the cable news outlets, right? Because the cable news outlets are
strictly profit-, ratings-driven, right? So they’re just gonna carry it, they’re gonna bring on panels to talk
about the 14th Amendment all day today. Can he, what is it today? I mean, every day is something else. Today it was, he’s gonna take away
the right to citizenship, and then it’s your whole day on that. Is that the most
newsworthy story of the day? Does it merit that much coverage? I agree with the person who wrote
the question that we should be able to say, you said this, and then move on, right? Do both, but not at such length. So we’re almost out of time, so I just wanna end with a really easy
question of, what if we fail?>>[LAUGH]
>>What if all this stuff we’ve been talking about, the technology, and how we’re gonna find the solutions? I know you’re an optimist,
I can never persuade you not to be. Sharad, you’re pretty optimistic,
I got to say.>>Me?
[LAUGH]>>Yeah, you are.>>Okay.>>But what if we fail? I mean what happens if we can’t
restore trust in this nation, in all these institutions? What are the consequences?>>I think democratic backsliding
is a very real possibility. We see this in other countries around
the world, and we see it happen from, sort of, the decline of all these institutions
that you just cited, the media, the electoral system. And so I think the stakes are enormous, and I think there is actually a pretty
good chance we are gonna fail. So we can’t have this sort of American
exceptionalism, saying, yeah, no, we have this stable democracy, and we don’t have to worry about
these things happening around us. This is very real, and
I think there is a lot on the line.>>Jeff, final word?>>Yeah, I actually don’t
disagree with Sharad on this one. And I’m gonna-
>>You don’t seem so surprised.>>I know, right? I’m an optimist because
I believe in people, and I believe that it’s certainly
true that there are those people who are willing to believe things
that seem false to them. But they’re gonna believe anyways because
of this tribal, this polarizing thing. And so the risk of failure is real. I’m not saying that, like, no
matter what, we’re gonna be fine. But I just believe in people’s
desire to really understand what is going on in their world and
ultimately make choices based on evidence. And so I guess my optimism is
based in my belief in people, not in technology and all those things. But-
>>I wanna meet some of these people.>>Yeah.>>And take them out, these Canadians.>>[LAUGH] Only in Canada,
I say hello to everybody. But there are worrying things. The election in Brazil, the Italian election. So when you look at the political space
there’s definitely alarm bells going off, absolutely. And I’ll just leave it at that. Optimist in the long term.>>We’ve got a lot of work to do here
in the social sciences and H&S, and in engineering. And I want you to please join
me in thanking our panelists.>>[APPLAUSE]
