AI is a tool. The choice about how it gets deployed is ours.
Our highways and our roads are underutilized because of the allowances we have to make for human drivers.
Automation has emerged as a bigger threat to American jobs than globalization and immigration combined.
A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly, AI computers are tools for us to perform tasks too difficult or expensive for us to do on our own, such as analyzing large data sets or keeping up to date on medical research.
It's much more likely that an asteroid will strike the Earth and annihilate life as we know it than that AI will turn evil.
Deep learning is a subfield of machine learning, which is a vibrant research area in artificial intelligence, or AI.
Sooner or later, the U.S. will face mounting job losses due to advances in automation, artificial intelligence, and robotics.
Understanding natural language is what is sometimes called 'AI-complete,' meaning if you can really do that, you can probably solve artificial intelligence.
I became interested in AI in high school because I read 'Gödel, Escher, Bach,' a book by Douglas Hofstadter. He showed how all their work in some ways fit together, and he talked about artificial intelligence. I thought 'Wow, this is what I want to be doing.'
Science is going to be revolutionized by AI assistants.
What are we going to do as automation increases, as computers get more sophisticated? One thing that people say is we'll retrain people, right? We'll take coal miners and turn them into data miners. Of course, we do need to retrain people technically. We need to increase technical literacy, but that's not going to work for everybody.
We don't want AI to engage in cyberbullying, stock manipulation, or terrorist threats; we don't want the F.B.I. to release AI systems that entrap people into committing crimes. We don't want autonomous vehicles that drive through red lights, or worse, AI weapons that violate international treaties.
There are many valid concerns about AI, from its impact on jobs to its uses in autonomous weapons systems and even to the potential risk of superintelligence.
The biggest reason we want autonomous cars is to prevent accidents.
The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals and have its own will and will use its faster processing abilities and deep databases to beat humans at their own game.
It's hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I'd have to guess that talking about black holes gets boring after a while - it's a slowly developing topic.
I'm not a big fan of self-driving cars where there's no steering wheel or brake pedal. Knowing what I know about computer vision and AI, I'd be pretty uncomfortable with that. But I am a fan of a combined system - one that can brake for you if you fall asleep at the wheel, for example.
In the past, much power and responsibility over life and death was concentrated in the hands of doctors. Now, this ethical burden is increasingly shared by the builders of AI software.
To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations.
The mechanical loom and the calculator have shown us that technology is both disruptive and filled with opportunities. But it would be hard to find a decent argument that we would have been better off without these inventions.