The Tech Humanist
Sep 11, 2020
The Tech Humanist Show: Episode 8 – John C. Havens
Play episode · 59 min

About this episode’s guest:

John C. Havens is Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He is also executive director of the Council on Extended Intelligence (CXI). He previously served as an EVP at a top-ten global PR firm, where he counseled clients like Gillette, HP, and Merck on emerging and social media issues. John has authored the books Heartificial Intelligence and Hacking Happiness and has been a contributing writer for Mashable, The Guardian, and The Huffington Post. He has been quoted on issues relating to technology, business, and well-being by USA Today, Fast Company, BBC News, Mashable, The Guardian, The Huffington Post, Forbes, INC, PR Week, and Advertising Age.
John was also a professional actor in New York City for over 15 years, appearing in principal roles on Broadway, television, and film.

He tweets as @JohnCHavens.

This episode streamed live on Thursday, September 3, 2020. Here’s an archive of the show on YouTube:

About the show:

The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O’Neill.

Subscribe to The Tech Humanist Show channel on YouTube, hosted by Kate O’Neill, for updates.

Transcript

01:44
because something’s screwing up
01:46
the live stream
08:48
hello everybody hello humans
08:53
uh glad to see some of you turning out i
08:56
think
08:56
uh everybody’s kind of excited we have a
08:59
good show lined up for you today
09:01
let me hear from the from those of you
09:02
who are already online
09:04
where are you tuning in from uh who’s
09:07
out there
09:08
say hi let’s get some audience
09:10
interaction going because we’re gonna
09:12
want that audience interaction i want
09:13
you guys asking questions
09:15
of our guest today i will have some nice
09:17
interactions some fun
09:19
uh we’ll have a good time so go ahead
09:21
and start commenting let me know who’s
09:23
out there and
09:24
and where you’re where you’re dialing in
09:26
from
09:28
hope you’re not dialing in we’ve uh
09:31
we’ve moved past the dial-in days on the
09:33
internet
09:34
uh thank goodness so uh if you’re just
09:37
joining
09:38
for the first time this is the tech
09:40
humanist show
09:42
and it is a multimedia format program
09:44
exploring how data and technology shape
09:47
the human experience
09:48
we’ve got sam lau from southern
09:51
california hi sam
09:52
welcome um and
09:56
go back here so i’m your host obviously
09:58
kate o’neill and
09:59
uh oh i’m georgia from chicago
10:04
that’s my mom my mom is tuned in that’s
10:07
fun
10:08
we’ve got mark bernhard from wisconsin
10:11
yay hi mark
10:13
and davia davia uh tuning in from
10:16
jamaica glad to have you
10:19
we’re we’re truly cosmopolitan now we’re
10:21
all over the place
10:23
so hopefully you’re all following i see i see
10:26
um
10:27
sam and mark are tuned in from linkedin
10:30
and
10:30
uh my mom georgia is tuned in from
10:32
youtube
10:33
and davia is tuned in from facebook so
10:35
good we’re getting good
10:36
good uh reach across all the different
10:38
channels we stream across
10:40
youtube facebook linkedin twitter and
10:43
twitch although no one watches on
10:45
twitch if you’re watching on twitch
10:47
give a special shout out because i don’t
10:50
think we’ve had
10:51
any viewers from twitch so far um
10:54
all right so i’m going to go ahead and
10:56
introduce our guest because i know
10:57
that’s why
10:58
a lot of you are tuned in today it’s
11:00
really exciting
11:02
today we are talking with john c havens
11:05
who is executive director of the ieee
11:08
global initiative on
11:09
ethics of autonomous and intelligent
11:11
systems he is
11:12
also executive director of the council
11:14
on extended intelligence or cxi
11:17
he previously served as an evp at a top
11:20
10 global pr firm
11:22
where he counseled clients like gillette
11:24
hp and merck
11:25
on emerging and social media social
11:28
media issues
11:29
he has authored the books heartificial
11:31
intelligence if you caught that it’s
11:33
heartificial intelligence and hacking
11:36
happiness
11:37
and has been a contributing writer for
11:38
mashable the guardian and the huffington
11:40
post he’s been quoted on issues related
11:42
to technology
11:43
business and well-being by usa today
11:46
fast company bbc news
11:48
mashable the guardian the huffington
11:50
post
11:51
forbes inc pr week and
11:54
advertising age it just goes on and on
11:56
wait there’s more this is the best line
12:00
in the entire bio you ready
12:02
john was also a professional actor in new york
12:05
city for over 15 years
12:07
appearing in principal roles on broadway
12:09
television and film so please audience
12:11
start getting your questions ready for
12:13
our fantastic
12:14
guest and please welcome the obviously
12:18
multi-talented
12:19
john c havens john you are live on the
12:22
tech humanist show
12:24
yeah kate o’neill hey
12:28
thank you so much for being here it is
12:31
an honor to be here seriously i’m stoked
12:33
to be on your show thank you
12:34
i’m stoked to have you you have a fan
12:37
following
12:39
i announced the show as i typically do over
12:42
the weekend and
12:43
all of a sudden there was just this
12:45
mountain of of
12:46
uh streaming in
12:49
notifications that people were super
12:51
excited and and
12:52
i got a bunch of outreach from linkedin and
12:54
everywhere going like oh my gosh i’m so
12:56
glad you’re having john on the show so
12:58
your your audience is very excited right
13:01
now
13:03
well thank you very much and again honor
13:05
to be here and by the way you were
13:06
rocking the glasses
13:07
headset thing i mean it’s like you
13:10
should look like
13:11
really good i could probably just like
13:13
glue the top of them on to the top of my
13:15
headset
13:18
i’m never gonna be seen without the
13:20
sunglasses on my head it’s one of my
13:21
please
13:23
i’d be sad one of my brand things i
13:25
don’t know so naturally john i gotta
13:27
start off with
15:28
the question about the part of your bio
13:30
that’s the least relevant to the topic
13:32
of the show
13:33
but the most colorful so you were a
13:35
professional actor tell us all about it
13:38
yeah i moved to new york in 1992 um
13:41
when i was like four i’m kidding anyway
13:44
i moved to new york and
13:46
yeah i was in the screen actors guild uh
13:48
equity had a great agent
13:49
for about 15 years i did small parts but
13:53
like in law and order
13:54
law and order svu i did a part in a
13:56
broadway show
13:57
and uh yeah i was an actor for 15 years
14:00
that’s fantastic
14:01
i i love so it just sort of reinforces
14:04
my
14:05
long-standing theory that people who
14:07
have lived multiple lives within their
14:08
lifetime
14:09
are the most interesting that’s very
14:13
cool
14:14
and so what what’s the trajectory there
14:17
though like
14:18
when do you go from acting to advising
14:21
companies on social media to
14:23
leading ethical guidance for the world’s
14:25
largest association of technical
14:26
professionals
14:28
sure that’s a normal trajectory um
14:32
i think a lot of it comes from
14:33
introspection um my dad was a
14:35
psychiatrist he’s passed away
14:37
my mom is a minister so apparently i was
14:40
just raised in a household where like
14:41
examining your feelings was kind of a
14:43
thing
14:44
um and then acting most of your work
14:46
being the human condition
14:48
and then i was on sets a lot of times my
14:50
parts were comedic
14:51
roles and i did a lot of really bad
14:53
industrial films like bob i don’t think
14:55
the photocopier works that way
14:58
and they’d say can you make this funny
15:00
and so i went from writing scripts
15:02
uh to then working in pr and that’s
15:05
where i got the pr
15:06
job back when i got that pr job my
15:08
friend was like
15:10
come help me run the new york office of
15:11
this big pr firm and i was like i don’t
15:13
know about pr
15:14
no one here understands twitter and i was like
15:17
um
15:17
and then i found out about ieee when i
15:20
was writing a book on ai ethics
15:21
and i pitched them about this idea so
15:23
that’s the fast version of the
15:25
trajectory
15:26
well how did you get to writing the book
15:27
on ai ethics was that part of the work
15:29
you were doing with the pr firm or was
15:31
that
15:31
on your own somehow pure unbridled fear
15:35
fear matter seriously it was about
15:38
six years ago i was writing a series for
15:40
mashable all those articles are still
15:42
live and um what i was finding is that
15:45
even back six years ago there were there
15:48
were only the extremes
15:49
here’s the dystopian aspect of ai here’s
15:52
the utopian and i just kept calling
15:55
people
15:55
and saying okay is there a code of
15:57
ethics for ai because i’d like to know
15:59
and that will kind of help balance
16:01
things out and more and more no one knew
16:02
of one
16:03
like like here’s the here’s the code of
16:05
ethics for ai from the yada yada you
16:07
know
16:08
doc you know and so i was like that
16:10
seems like a good thing to have
16:11
yeah and you have
16:14
helped create what is one of the most
16:18
useful and informative sets of design
16:21
ethics but
16:22
or design guidelines i should say but
16:24
we’ll come to that because
16:26
i want to make sure that we also build
16:28
into you know
16:29
your uh your job now your your multiple
16:32
roles there are a lot of words
16:36
i wonder if you could briefly explain
16:38
your various roles for us
16:41
so many words i’m going to take the rest
16:43
of this episode
16:45
enjoy uh well first of all i should say
16:47
this i am deeply honored to work at ieee
16:50
i love my job
16:51
on the show i’m john so i’m speaking
16:54
as john not all of my
16:56
statements formally obviously um you
16:58
know represent ieee’s beliefs so
16:59
disclaimer alert retweets are not
17:01
endorsements yeah exactly so you know
17:04
but i just want to say that um so one
17:06
job
17:07
what happened is that this book about uh
17:09
heartificial intelligence was really
17:11
saying
17:12
what is it about our own values as
17:14
individuals we may not know
17:16
because if we don’t ask we won’t know
17:18
and then if we don’t know them we can’t
17:20
live to them
17:21
so it’s pretty basic right people often
17:23
wonder why am i
17:24
unhappy and if you don’t actually know
17:26
your values
17:27
maybe you’re not living to your values
17:28
it’s not the only reason you’ll be
17:30
unhappy but it’s one of them
17:32
so this is a big jump so enjoy anyone
17:34
technical on the phone
17:35
but when you come to like data
17:37
scientists human uh
17:38
hci human computer interaction values
17:42
alignment it’s a technical term but it’s
17:44
similar right
17:45
what are you building who’s going to use
17:47
it what are their values how can you
17:49
align it
17:50
so i was writing this book um really
17:52
thinking about
17:53
how is our personal data related to our
17:55
values and then how are all these
17:57
beautiful machines and technologies kind
17:59
of in one sense looking back at us
18:01
and i just had the really good fortune
18:03
there were some senior people from ieee
18:05
in the audience it was at south by
18:06
southwest they’d asked me to come speak
18:09
and i pitched them and i got really
18:10
lucky there’s this
18:12
guy named konstantinos karachalios he’s
18:14
the managing director of ieee standards
18:16
association
18:18
and he and so many people at ieee had
18:20
already been planning something along
18:21
these lines
18:22
so i was really a catalyst and then
18:24
there’s hundreds of people who’ve
18:26
actually really done
18:27
the work to create all the work perfect
18:30
because you know
18:30
i think a lot of people for a long time
18:33
have thought of
18:34
ieee as a sort of a a dry
18:37
organization concerned primarily with
18:39
standards and whatnot i mean that that’s
18:41
kind
18:42
of the impression that i had when i
18:44
first came into tech
18:45
25 years ago so it’s interesting to
18:48
change and
18:49
and you know to know the origin of that
18:52
change but
18:53
how did how did you come to hold roles
18:55
that are so clearly focused on human
18:57
impacts was it that the
18:58
shape was already being created or did
18:59
you bring that that
19:01
uh the vision of that to the role
19:05
um well first of all tribally uh the
19:08
tagline itself is one of the reasons i
19:10
actually wanted to work
19:12
with the organization it’s advancing
19:13
technology for humanity
19:15
i actually i genuinely love that yeah
19:17
when i when i first pitched this idea
19:19
and konstantinos
19:21
it resonated with him he built on it i can’t
19:23
say enough good things about him
19:25
he’s kind of a mentor and he’s brilliant
19:27
but that word for
19:28
f-o-r right advancing technology for
19:31
humanity it goes back to values
19:34
what is the success you’re trying to
19:36
build you can’t just be like yay
19:38
we’re advancing technology for humanity
19:40
how are you doing that
19:41
what does that mean so he and so many
19:44
other people within ieee
19:46
and then ieee is volunteer driven so 700
19:49
people
19:50
wrote ethically aligned design uh
19:52
konstantinos and the team kind of helped
19:54
shape how it started
19:55
but then it was really the experts who
19:57
wrote the different sections
19:58
in consensus that created the document
20:01
it got
20:01
pages of feedback and it had three
20:03
versions and a lot of that feedback also
20:06
came from people
20:07
the first version was like americans
20:08
people from the eu created it
20:10
but then we got feedback from south
20:12
korea mexico city and japan which was
20:15
awesome
20:15
because many of them said this seems
20:18
really good but it feels non-west
20:19
or yeah it feels very western you need
20:21
more non-western views and so
20:24
that always to me and this is like the
20:25
favorite part of my job is like huh
20:28
feedback that means you want to join a
20:30
committee awesome
20:34
well that’s great i i think it’s really
20:36
uh it’s important that you were open to
20:38
that feedback that you you know you got
20:40
that kind of feedback
20:41
it it shows a lot of trust that your
20:44
constituency
20:45
came back and said can we incorporate
20:48
more of a viewpoint that
20:49
that that deviates from you know this
20:52
kind of western
20:53
standard agreed
20:57
yeah we so i you know i think what’s so
21:00
interesting to me is you know
21:01
i i read through your book heartificial
21:05
intelligence
21:06
and you have a couple of uh quotes in
21:09
there that that really stood out so
21:10
one is that you said i am not anti-ai
21:14
i am pro-human which you know that
21:16
resonates with me
21:17
um but also i feel like it ties into
21:20
what you were just talking about it with
21:21
the slogan of
21:22
ieee but what to you what does it mean
21:24
to be
21:25
pro-human yeah by the way i owe you a cup
21:29
of coffee for reading the whole
21:30
book it was wonderful
21:33
oh thank you thank you um i think
21:35
especially from a media narrative
21:37
standpoint there’s a lot of
21:39
us versus them titles that we’re often
21:42
working against in both ieee and the
21:44
work and the council on extended
21:45
intelligence
21:46
where you read and i’m doing this for
21:48
effect right you know
21:50
x new ai whatever new ai program
21:53
playing a sport or something destroyed
21:56
this human in soccer right these
21:59
extreme hyperbolic terms like
22:01
eviscerated a human
22:04
and it’s like how do you read that and
22:05
as a human just as anybody
22:07
not feel kind of like like crap you’re
22:10
like
22:11
and it it makes the technology and the
22:14
human
22:14
feel devalued and more importantly
22:18
the pro human thing means it’s okay to
22:20
recognize
22:22
that humans are inherently different
22:25
than the machines and the tools that
22:27
we’re building and to honor both
22:29
you can say here’s where they’re
22:31
different it doesn’t mean you’re saying
22:32
this is bad
22:33
but for instance and i won’t go into
22:35
this unless you want to because it gets
22:37
very philosophy geeky
22:38
right but i’m all about the philosophy
22:40
geeky
22:41
all right two hours later
22:46
a lot of western ethics is built on
22:48
rationality right and rationalities a
22:50
lot of like democratic ideas come from
22:52
awesome right but the yes and to
22:55
rationality
22:56
is things like relationality how do you
22:58
and i interact as people with our
23:00
emotion
23:01
and then the systems how do we interact
23:03
with nature
23:04
so if you kind of look through one
23:07
lens
23:08
uh at only one feature of who humans are it
23:11
can be easy to say well humans are only
23:13
about what’s in our brain
23:15
and once john’s just information the
23:17
cognitive
23:18
sort of stuff i have in my hard drive is
23:20
kind of spilled out
23:22
that’s all i am but i’m a musician i’m
23:24
an actor i’m a dad
23:26
i’m a friend of kate honored to be here
23:28
right
23:29
and that means those those ephemera
23:32
are not minor in terms of how it means
23:35
we relate to each
23:36
other and we relate to the world and
23:37
then when you go especially to
23:38
non-western traditions like the shinto
23:40
tradition in japan
23:42
or many indigenous traditions around the
23:44
world we cannot assume
23:46
anybody the royal we making technology
23:49
that unless we know how others frame
23:52
these ethical questions about how they
23:54
look at humans
23:55
that the systems we’re building are going to be
23:57
applicable to them we have to know what
23:59
they are
24:00
and work together um you know towards
24:02
consensus so say all that
24:04
what i want to get more at a in a
24:08
in a minute into you know that sort of
24:12
compilation of ethical views and all the
24:15
all the philosophical viewpoints that
24:17
sort of cobble together
24:19
to inform that but but i still want to
24:21
stay with this pro-human
24:22
idea because i feel like also what
24:24
you’re talking about there
24:26
you know you talked about the human
24:27
condition earlier and it feels like
24:30
some of what you’re saying is this
24:32
multi-dimensionality is a really
24:33
important facet
24:34
of humanity and of being pro-human is
24:37
that fair is that a fair
24:38
characterization of what you’re saying
24:40
yeah i think it’s easy and i sympathize
24:42
or should say understand
24:44
a lot of times people are like well let
24:46
usually it’s ai but let this technology
24:48
take over because humans have screwed
24:50
everything up
24:51
right it’s a it’s the sentiment is
24:55
understandable right people make
24:57
mistakes we’re all flawed
24:58
but i’m not quite sure what someone yeah
25:01
i tend to get frustrated too sometimes
25:02
with those statements because i’m like
25:03
well
25:04
a people build the systems
25:07
for you so you know like guess what
25:10
secondly it’s the systems underneath the
25:13
technology that need to be addressed
25:15
right and then third i have this sitting
25:17
by my desk
25:18
um i’ll read this to you it’s this
25:20
japanese adage
25:22
in japan broken objects are often
25:24
repaired with gold
25:25
the flaw is a unique piece of the object’s
25:28
history which adds to its beauty
25:30
consider this when you feel broken right
25:33
like what are we supposed to be
25:34
perfect what does that mean and what’s a
25:37
perfect man what’s a perfect woman
25:38
what’s a perfect american what’s a
25:40
sense not who cares right we’re asking
25:43
to understand our values
25:45
but the starting point for me as a
25:47
person and a lot of work that we’re
25:48
doing and i
25:49
focused on well-being is to say
25:52
inherently all humans have
25:53
worth simply because they exist and so
25:56
to start to frame the humanness being
25:58
worthwhile
26:00
because of what’s up here immediately means
26:03
we’re saying
26:04
we’re willing to design technology that
26:06
is in one sense only for a very small
26:09
portion of the planet
26:10
which is not the case with me is not the
26:12
case with ieee
26:14
so if that makes sense that’s the deeper
26:16
human stuff it makes great sense to me
26:18
i also want to remind the audience feel
26:20
free to start
26:21
funneling in any kind of questions if if
26:23
you’re hearing what john is saying and
26:25
you have questions about what we’re
26:27
talking about please go ahead and ask
26:28
them but you know here’s one that i have
26:30
is
26:30
another excerpt from heartificial
26:32
intelligence is you wrote
26:34
if machines are the natural evolution of
26:36
humanity we owe it to ourselves to take
26:38
a full measure of who we are right now
26:40
so we can program these machines with
26:42
the ethics and values we hold dear
26:44
and here’s a question i get asked all
26:46
the time and i’d love to pass it along
26:48
to you
26:48
whose ethics and whose values are we
26:51
programming
26:52
and how can we be sure we’re getting
26:53
that decision right
26:56
uh my ethics
26:59
john’s way or the highway
27:02
you know it’s not aggressive it’s just
27:04
it’s the way to go no
27:06
question i mean first of all um applied
27:09
ethics right there’s a lot of discussion
27:10
around
27:11
ai ethics and i’m using air quotes maybe too
27:14
much
27:14
but it’s a huge phrase what do we mean
27:17
by ai
27:18
is it machine learning is it you know
27:20
inverse reinforcement learning what do
27:22
we mean by ethics
27:23
is it just philosophy or is it
27:25
compliance
27:27
but the basic idea is applied ethics is
27:30
essentially design right a form of
27:32
design it’s saying we want to build a
27:34
technology
27:35
who are we building it for what is the
27:38
definition of value
27:39
for what we’re building oftentimes the
27:42
value is framed in
27:43
exponential growth right not just profit
27:45
i want to be clear
27:46
we all need money to pay bills and and
27:48
profit is what sustains an organization
27:51
but exponential growth is an ideology
27:55
that it’s not just about getting some
27:57
profit or speed
27:58
it’s about doing this well when you when
28:01
you maximize
28:02
any one thing other things by definition
28:05
empirically take less of a focus
28:09
and especially with humans that can be
28:11
things like mental health
28:12
right i got to kick out this technology
28:15
to the world
28:16
because i’m pressured because of market
28:18
needs
28:19
this is not bad or evil this is why the
28:21
term ethics can be so confusing
28:24
but it is a decision and in this case
28:26
it’s a key performance indicator
28:27
decision
28:28
where there may be pressure the priority
28:30
is to get something say to market
28:32
versus how can we get something to
28:34
market that best
28:35
honors end user values in the context of
28:38
the region where they are
28:40
kind of to your last question and then
28:42
also how do we understand
28:44
what risk and harm is in the algorithmic
28:47
era
28:48
because one thing i’ll say quickly here
28:49
is a lot of times people are like ai is
28:51
just the new tech
28:52
you know and i’m like sorry it’s just
28:54
not here’s why
28:57
data right is key 100 years ago like the
29:00
first car
29:00
or whatever didn’t have data that would
29:03
measure us and then go to the cloud
29:06
so human data being measured and the
29:08
ability to immediately go to the cloud
29:10
is utterly different and how that data
29:13
is translated back to us about who we
29:15
are is deeply affecting human agency
29:18
identity and emotion
29:19
yeah it’s almost like the the earlier
29:21
example the car
29:23
is deciding where to drive us or at
29:25
least recommending
29:26
well you’re saying you want to drive to
29:28
chicago but detroit is nicer this time
29:31
of year
29:32
you should really go to detroit
29:35
well what do we do about all of the
29:38
human bias that’s already encoded into
29:40
data sets and algorithms and business
29:42
logic and
29:43
and all of that i think the easiest
29:46
thing is just hate
29:47
everyone universally right just pure
29:49
irrational
29:50
yeah no not relational but rational yes
29:54
um i think first of all for me is
29:57
there’s different levels i’m learning
29:59
about bias and again i want to be clear
30:01
i’m speaking here as john
30:02
not for the whole
30:03
organization um and if i have the book
30:05
here i’ll show it
30:06
yeah i have the book here so one thing
30:08
that you know everyone
30:10
assumedly in the industry the ai
30:12
industry is focused on is things like
30:14
eradicating bias
30:16
and here a personal hero of mine joy
30:18
buolamwini
30:19
uh has done some phenomenal work with
30:21
respect to um
30:23
you know any device that won’t measure
30:25
brown or dark skin or or black skin
30:27
tones in the same way as white tones
30:29
she’s also done some amazing work with
30:31
the actual terminology
30:33
and i’m blanking on the term but like
30:35
the taxonomy of how
30:37
different data sets are created around
30:40
the framing of those skin colors anyway
30:42
joy buolamwini
30:43
awesome the thing i think i’m just
30:46
learning
30:47
and i’ll hold up the book that’s called
30:49
race after technology by dr ruha
30:51
benjamin from princeton
30:52
and forgive me if you i know you’ve
30:54
interviewed people i think you’ve talked
30:55
about this type of stuff
30:56
um and i heard about her from the
30:58
radical ai podcast
30:59
shout out to my friends dylan and jess
31:01
they have a great show
31:02
um benjamin was on the show she gave the
31:05
example about bias and i’m going to
31:07
paraphrase this wrong so please
31:09
read her book but the logic is for
31:11
people creating tools
31:13
looking for data anyone creating ai they
31:16
might go to say like i live in new
31:18
jersey right so there’s an area of new
31:19
jersey where there’s a hundred thousand
31:21
citizens
31:22
who have been measured by one metric
31:24
which is the census data
31:25
right so a hundred thousand people live
31:27
here then there’s data about
31:29
something health or medical oriented all
31:31
these hundred thousand people
31:33
x amount did whatever in terms of i
31:35
don’t know cardiology
31:37
now that that insight or that data about
31:40
that data set
31:41
is now what’s being used hypothetically
31:44
or in reality but i’m giving an example
31:46
by everyone creating ai and then they’re
31:49
saying
31:50
we’re saying let’s make sure that that
31:52
ai
31:53
is accountable and transparent fair and
31:55
all those things which is we should
31:57
but she made the key point to me which
31:58
blew my mind and i’m
32:00
frankly a little embarrassed i hadn’t
32:01
thought of it before is
32:03
the assumption is that of those hundred
32:06
thousand
32:07
all hundred thousand citizens have
32:09
access to the health and medical data
32:12
when in fact whether it’s marginalized
32:14
populations whether it’s people that
32:16
just didn’t have you know
32:17
they weren’t able whatever the number
32:19
may be significantly lower
32:21
so underlying and analyzing the systems
32:24
by the way is a design
32:25
thing i know the term marginalized
32:27
obviously can be very
32:29
heated and whatever else for me let’s
32:32
move some of those terms out they’re
32:33
critically important
32:34
but the point is as people who want to
32:36
design this technology holistically well
32:38
for everyone
32:40
especially dr benjamin’s ideas really
32:42
helped me think about
32:43
we have to be thinking about building
32:45
for all not just those who we are
32:47
building for not realizing who we’re
32:49
missing
32:49
in the process well and you’re speaking
32:52
about
32:52
layers of design right it’s it’s the
32:56
important thing about a term like
32:57
marginalization and what it implies
32:59
is that there are systems and we can
33:01
recognize that there are
33:02
but you know i think a lot of times the
33:05
the
33:06
design of technology or of technological
33:08
experiences
33:10
is focused on the technology and not on
33:13
the sociological
33:14
and cultural implications that are
33:16
wrapped in and around that technology
33:18
and i think
33:19
so much of what’s important about the
33:21
work that you’ve been doing and the work
33:22
of some of the people that you’ve
33:23
mentioned
33:24
is to unpack a lot of those those
33:26
assumptions and say
33:28
it’s not just going to exist in a void
33:31
or vacuum
33:32
it’s going to be used in culture and
33:34
these things create
33:36
experiences that scale our culture and
33:38
we need to be able
33:39
to understand you know what the
33:41
implications of of those design
33:42
decisions are
33:44
yeah exactly so i i want to ask you too
33:48
about automation because we have
33:50
actually had a pretty good amount of
33:51
discussion with some of the guests who
33:53
have been on
33:54
the show past episodes of the show so
33:55
far about ai ethics
33:58
and less about automation per se
34:01
and obviously i realize that a lot of
34:03
what needs unpacking about automation
34:05
does have to do with intelligence
34:07
but there are still questions about what
34:09
we automate and how
34:11
and who is affected so do you anticipate
34:14
ever being a discourse on the ethics of
34:16
automation that that gets much attention
34:18
that’s
34:19
separate or related to the ethics of ai
34:23
well i’m really glad you asked that we
34:24
for ethically aligned design
34:26
uh we actually use the term uh
34:28
autonomous and intelligent systems
34:30
because to your point you know if we
34:32
want to define artificial intelligence
34:34
we’d be here for seven hours
34:35
you know when you get in a room of
34:37
anybody defining it
34:39
it’s it’s very challenging so at least
34:41
to your point or i’m saying
34:42
i agree with you we said let’s talk
34:44
about automation
34:46
versus air quote intelligence without
34:48
being anthropomorphic
34:49
but the term intelligent systems is a a
34:52
classifier of say like certain types of
34:56
uh learning and what have you automation
34:59
you know everyone uses this example and
35:01
i always forget what it’s called but in
35:03
a car for the like the last 30 years
35:05
uh cruise control right you’re driving
35:07
at 60 miles an hour and you push a
35:09
button that’s already automation
35:10
and then we’re used to with simple tools
35:13
i don’t know
35:14
spell check things like that that’s probably a
35:16
good example
35:18
a lot of my book was focused on what are
35:20
the things either that we don’t
35:22
ever want to automate we want to make
35:25
sure that we have the option to be in
35:26
the midst of
35:28
that process and not always automate
35:31
so a good example i give there say like
35:33
a dating app
35:34
right the tools and this is like
35:36
e-harmony and some of the other services
35:38
use
35:39
really complex and frankly very
35:41
impressive
35:42
machine learning uh algorithms to help
35:45
you choose who you’d want to be with
35:47
and by the way some of the some of these
35:49
things there’s not a moral or ethical
35:50
issue it’s like
35:51
do you live in denver colorado yes do
35:54
you want to date someone in nome alaska
35:56
no thank you you know so it’s not like
35:59
this complex thing
36:01
but the thing is at some point there may
36:03
be aspects of a decision someone else
36:05
has made
36:07
where you now aren’t in the mix and
36:09
maybe you won’t meet a person
36:10
who you would have met under different
36:12
circumstances by the way that happens in
36:14
real life as well
36:16
point is is if we have upfront
36:18
disclosure
36:19
about those tools access to data and
36:22
most
36:22
importantly we have a choice and we know
36:25
that we have that choice and we make the
36:27
choice
36:28
this is where for instance in my life i
36:30
don’t want anything
36:31
to quote automate my decision around
36:35
parenting right or whatever it is it’s
36:37
not that it’s wrong or right it’s just
36:38
that that reflects my
36:40
values is hey look at this parenting app
36:43
i don’t have to do anything hey son
36:45
i’m talking to you here apparently you’re mad
36:47
you know
36:47
yada yada what do i do okay i’m supposed
36:49
to spank you
36:50
right that is totally fictional but my
36:53
point is is like
36:54
how easy it could be to avoid any choice
36:57
this is not about the technology
36:58
technology is astoundingly beautiful and
37:00
amazing
37:02
but us not being in the mix means that
37:05
we don’t learn
37:06
ourselves or train ourselves or focus on
37:08
our own values well there was just that
37:09
article in i don’t remember if it was
37:11
the new york times or what but that was
37:13
about
37:14
parents sort of offloading the the
37:17
dictation of terms of various kinds to
37:20
their children
37:21
to their alexas and smart speakers so if
37:25
they need to tell
37:26
a kid what to do it’s like have the
37:29
smart speaker tell the kid what to do
37:31
or something like that so the job of
37:34
discipline
37:34
is already i think being automated in
37:36
some sense but i think
37:38
to some of what you’re saying there’s
37:40
there’s the distinction between
37:42
automating away versus automating around
37:45
like you know when you talk about
37:47
automating parenting
37:49
i think you know it’s implied that
37:50
you’re saying you don’t want to automate
37:51
away
37:52
parenting but you could certainly make
37:55
some
37:55
seamlessness or some conveniences around
37:57
parenting through automation and it
37:59
wouldn’t be
38:00
uh it wouldn’t necessarily be a moral uh
38:03
controversy right
38:05
no not at all and i’m glad you brought
38:06
it up and i will say though the metrics
38:09
are key here right like as a parent i
38:11
have two kids who were young i would
38:13
have given
38:14
a good amount of money to sleep through
38:16
the night when they were getting sleep
38:17
trained for instance
38:18
right and and a lot of these a lot of
38:21
these tools can read bedtime stories etc
38:23
but i wrote an article about this um i
38:26
have to i’ll send you the link for the
38:27
show notes um
38:29
where the question that i ask about is
38:31
take something to the nth degree and
38:33
again i want to be clear this is not about
38:35
the tech
38:35
right this is about societal choices but
38:38
what happens if you use like six
38:39
different parenting
38:40
apps or tools and eventually your kid
38:43
says
38:43
you know i’m good you know dad i know
38:45
you wanted to go on a walk with me or
38:47
you wanted to talk about whatever but
38:49
i’m going to go through my six or seven
38:50
different things and
38:52
thanks so much i i don’t really i don’t
38:54
want you to read me a story i don’t want
38:55
you to take me on a walk i’ve got a robot
38:58
i don’t need your advice about girls
38:59
great 21st century cat’s cradle
39:02
story and i’m not trying to judge any
39:05
family or or a kid
39:07
right like balance of i’m not telling a
39:09
kid what they should or shouldn’t do but
39:10
i think
39:10
if we sort of usurp i’m sorry if we
39:13
eschew
39:14
and give away that and what i’m saying
39:17
give away is more like the ultimate
39:19
sense of
39:20
of why looking at our values and this in
39:23
case for parenting
39:24
are so critical the answer is outside of
39:27
the technology
39:28
right or policy you may wake up one day
39:31
and be like
39:31
what did i just give away i don’t know
39:34
but the metrics are critical here
39:36
right because mostly a lot of times um
39:39
and this is outside of like gdp and
39:41
exponential growth
39:43
um we tend to focus on
39:46
what can i do to get from now to be
39:48
happy and a lot of times that’s
39:50
productivity i can be more productive
39:52
so we ignore the now and a lot of my
39:54
work has been in positive
39:56
psychology my last two books focused a
39:58
lot most of gratitude is just being able
40:00
to look at what you have now
40:02
and say this is stuff i really treasure
40:04
and value
40:05
and then that’s where you’d be able to
40:06
make that decision if parenting for
40:08
instance is one of those
40:10
things well then i’m going to allocate
40:12
time with my kids
40:14
even though that half an hour or hour at
40:16
dinner i could be doing more work
40:19
so it’s actually very pragmatic and
40:20
practical and that’s most of what my
40:22
last two books are focused on
40:24
is please think about this so you can
40:27
make these choices
40:28
so you’re not 10 years later like you
40:30
know to your point cat’s cradle
40:32
weeping in your beer like why don’t my
40:34
kids talk to me anymore
40:35
you know they talk to the smart speakers
40:37
still
40:38
[Laughter]
40:40
it’s also it reminds me in in my own
40:43
work one of the things that i talk about
40:44
is
40:45
with with the concept of meaning uh
40:47
being a very human-centric
40:48
concept and that meaningful experiences
40:52
the meaningfulness is one of the the
40:54
great
40:55
sort of characteristics of experiences
40:57
that we should be
40:58
trying to design through technology and
41:00
beyond but
41:01
that one of the things that happens with
41:03
automation it feels like is that we
41:05
focus a lot on as you say productivity
41:08
and we try to automate the things that
41:10
are
41:10
mundane or repetitive or that that feel
41:13
like they take away
41:15
our cognitive focus and yet
41:18
i feel like if you take that to scale
41:21
and you
41:22
only have automated the things that are
41:24
mundane and meaningless
41:25
then you end up in a horrible dystopia
41:28
when that
41:28
is what surrounds us and so there’s this
41:31
kind of counterpoint where i feel like
41:33
we need to be infusing more meaning and
41:35
i think it comes back to your idea of
41:36
infusing the values into into the
41:39
discussion and making sure that what’s
41:41
automated reflects
41:42
meaning and reflects values but it isn’t
41:45
automating the meaningful
41:47
things that you’re doing is that would
41:49
you agree with that
41:51
yeah i would and i i think also i’ve
41:53
read a lot of
41:54
media where there’s a lot of assumptions
41:57
that i would even call
41:58
if not arrogant certainly dismissive if
42:00
not wildly rude
42:02
so you know there’s you’ll read an
42:04
article that’s like well this machine
42:05
does x it shovels because no one wants
42:07
to shovel
42:08
for a living right i’m just bringing
42:10
this up and no that’s good it’s a good
42:12
point
42:12
on the tech right if like there’s a john
42:14
deere automated shoveler
42:16
i’m sure it’s fantastic the point is to say we’ve
42:18
all done jobs
42:20
of any kind uh that elements of it you
42:24
really don’t like
42:25
and you wish could be automated but
42:27
usually that’s because you do the job
42:29
long enough to realize this part of my
42:31
job i wish
42:32
would be automated right things like
42:34
shoveling i don’t know
42:36
yeah a lot of people would not be like
42:38
give me 40 years of shoveling
42:39
i’ve done a lot of like especially when
42:42
i was a younger person i did a lot of
42:44
like
42:44
you know camp counselor jobs for the
42:46
summer where i was outside
42:48
you know i was doing physical labor it
42:49
was awesome that said
42:51
i knew okay this was great for what it
42:53
was i kind of don’t want to do this for
42:55
my whole
42:56
life but the other thing there which i
42:58
really get upset about when i read some
42:59
of those articles
43:01
is what if whatever the job is insert
43:03
job x
43:04
which could be automated is how someone
43:06
makes their living
43:07
right then it’s not just a value
43:09
judgment about the nature of the actual
43:11
labor itself
43:13
but is sort of saying like really what
43:15
someone says there
43:16
is from the economic side of it it’s
43:19
justified to automate anything that can
43:21
be automated
43:22
because someone can make money from it
43:24
outside of
43:26
what that person needs to do to make
43:28
money for them and their family
43:29
and again a company having a cool idea
43:32
to build something that’s automation
43:34
oriented
43:34
that’s awesome but we have to have the
43:36
discussion about
43:38
what jobs you know might go away where
43:40
again the metrics are
43:42
if it’s exponential growth ultimately
43:45
then i don’t see why anything that
43:47
humans do would not be automated
43:50
period like i have not been to a policy
43:52
meeting or whatever yet where someone’s
43:53
like hold on
43:54
we need to not build x because some
43:57
humans w…
