Should we leave Africa alone? | Big Think
Author: Big Think
Uploaded: 2012-04-23
Views: 289
New videos DAILY: https://bigth.ink
Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge
----------------------------------------------------------------------------------
Joanna Bryson thinks that people confuse artificial intelligence with human clones, mostly because of Hollywood movies like Blade Runner and Steven Spielberg's A.I., both of which feature very humanoid beings. Take away the somewhat cuddly ideas the movies have given us about artificial intelligence and you are left with this: hyper-smart machines with virtually no limit to their knowledge. She posits that giving artificial intelligence the same rights a human has could have dire consequences, because AI has already proven that it picks up negative human characteristics when those characteristics are in the data. It is therefore not far-fetched to think that an AI could scan all of Twitter in one afternoon and absorb all the negativity we've unloaded there. If AI has already proven that it is not only capable of making the wrong decision but will eventually make it when it comes to data mining and implementation, why give it the same powers as us in the first place?
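The claim that AI "picks up" human negativity from data echoes published work on machine prejudice, including the word-embedding bias studies Bryson co-authored (see the bio below, "machine prejudice deriving from human semantics"). As a minimal sketch of that idea, not anything shown in the video: embeddings trained on ordinary human text end up encoding human associations, which can be read off with simple vector arithmetic. The words, 3-D vectors, and scores here are made-up stand-ins for real embeddings such as GloVe.

```python
# Toy illustration of association bias in word embeddings.
# The vectors are invented for this sketch; real studies derive
# them from co-occurrence statistics over large text corpora.
import numpy as np

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point the same way.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vecs = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def pleasantness_bias(word):
    # Positive score = the word sits closer to "pleasant" than "unpleasant".
    return cosine(vecs[word], vecs["pleasant"]) - cosine(vecs[word], vecs["unpleasant"])

for w in ("flower", "insect"):
    print(f"{w}: pleasantness bias = {pleasantness_bias(w):+.3f}")
```

With real embeddings, the same arithmetic reproduces human-like associations, which is the mechanism behind the description's point: the prejudice comes from the data, not from any intent in the machine.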
----------------------------------------------------------------------------------
JOANNA BRYSON:
Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath, and an affiliate of Princeton's Center for Information Technology Policy (CITP). She has broad academic interests in the structure and utility of intelligence, both natural and artificial. Venues for her research range from Reddit to Science. She is best known for her work in systems AI and AI ethics, both of which she began during her Ph.D. in the 1990s, but she and her colleagues publish broadly, in biology, anthropology, sociology, philosophy, cognitive science, and politics. Current projects include “The Limits of Transparency for Humanoid Robotics” funded by AXA Research, and “Public Goods and Artificial Intelligence” (with Alin Coman of Princeton University’s Department of Psychology and Mark Riedl of Georgia Tech) funded by Princeton’s University Center for Human Values. Other current research includes understanding the causality behind the correlation between wealth inequality and political polarization, generating transparency for AI systems, and research on machine prejudice deriving from human semantics. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. At Bath, she founded the Intelligent Systems research group (one of four in the Department of Computer Science) and heads their Artificial Models of Natural Intelligence.
----------------------------------------------------------------------------------
TRANSCRIPT:
Joanna Bryson: First of all, there's the whole question of why we assume in the first place that we have obligations towards robots.
So we think that if something is intelligent, then that's their special sauce, that's why we have moral obligations. And why do we think that?
Because with most of our moral obligations, the most important thing to us is each other.
So basically morality and ethics are the way that we maintain human society, including by doing things like keeping the environment okay, you know, making it so we can live.
So, one of the ways we characterize ourselves is as intelligent, and so when we see something else, we say, "Oh, it's more intelligent, well then maybe it needs even more protection."
In AI we call that kind of reasoning heuristic reasoning: it’s a good guess that will probably get you pretty far, but it isn’t necessarily true.
I mean, again, how you define the term “intelligent” will vary. If you mean by “intelligent” a moral agent, you know, something that’s responsible for its actions, well then, of course, intelligence implies moral agency.
When will we know for sure that we need to worry about robots? Well, there’s a lot of questions there, but consciousness is another one of those words. The word I like to use is “moral patient”. It’s a technical term that the philosophers came up with, and it means, exactly, something that we are obliged to take care of.
So now we can have this conversation.
If you just mean “conscious means moral patient”, then it’s no great assumption to say “well then, if it’s conscious then we need to take care of it”. But it’s way more cool if you can say, “Does consciousness necessitate moral patiency?” And then we can sit down and say, “well, it depends what you mean by consciousness.” People use consciousness to mean a lot of different things.
So one of the things that we did last year, which was pretty cool, the headlines, becau...
For the full transcript, check out https://bigthink.com/videos/joanna-br...
