To chat with Andrew Ng, I almost had to tackle him. He was getting off stage at Re:Work’s Deep Learning Summit in San Francisco when a mob of adoring computer scientists descended on (clears throat) the Stanford deep learning professor, former “Google Brain” leader, Coursera founder, and now chief scientist at Chinese web giant Baidu.
[snipped]
Um, can you elaborate on studying time?
By moving your head, you see objects in parallax. (The idea being that you’re viewing the relationships between objects over time.) Some move in the foreground, some in the background. Do children learn to segment out objects, and to recognize distances between objects, because of that parallax? I have no idea. I don’t think anyone does.
There have been ideas dancing around some of the properties of video that feel fundamental, but there just hasn’t yet been that result. My belief is that none of us have come up with the right idea yet, the right way to think about time.
Animals see a video of the world. If an animal were only to see still images, how would its vision develop? Neuroscientists have run experiments raising cats in a dark environment with a strobe light, so they could only see still images, and those cats’ visual systems actually underdeveloped. So motion is important, but what is the algorithm? And how does [a visual system] take advantage of that?
I think time is super important but none of us have figured out the right algorithms for exploring it.
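[To make the parallax idea concrete: the geometry Ng alludes to is the same as stereo disparity. When a camera, or a head, translates sideways, a point’s image shifts in inverse proportion to its depth, so nearby objects sweep across the view while distant ones barely move. A minimal sketch of that relationship, with purely illustrative numbers that are not from the interview:]

```python
def depth_from_parallax(baseline_m: float, focal_px: float, shift_px: float) -> float:
    """Depth of a point from its image shift under a sideways camera move.

    baseline_m: how far the camera (head) translated, in meters
    focal_px:   camera focal length, in pixels (an assumed value below)
    shift_px:   how far the point's image moved, in pixels
    """
    # Pinhole-camera parallax: depth = baseline * focal / image shift.
    return baseline_m * focal_px / shift_px

head_move = 0.05   # a 5 cm sideways head movement (illustrative)
focal = 800.0      # assumed focal length, in pixels

# A foreground object shifts a lot; a background object barely moves.
print(depth_from_parallax(head_move, focal, shift_px=40.0))  # 1.0  -> about 1 m away
print(depth_from_parallax(head_move, focal, shift_px=4.0))   # 10.0 -> about 10 m away
```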
[That was all we had time for at the Deep Learning Summit. But I did get to ask Ng a follow-up via email.]
Do you see AI as a potential threat?
I’m optimistic about the potential of AI to make lives better for hundreds of millions of people. I wouldn’t work on it if I didn’t fundamentally believe that to be true. Imagine if we could just talk to our computers and have them understand “please schedule a meeting with Bob for next week.” Or if each child could have a personalized tutor. Or if self-driving cars could save all of us hours of driving.
I think the fears about “evil killer robots” are overblown. There’s a big difference between intelligence and sentience. Our software is becoming more intelligent, but that does not imply it is about to become sentient.
The biggest problem that technology has posed for centuries is the challenge to labor. For example, there are 3.5 million truck drivers in the US, whose jobs may be affected if we ever manage to develop self-driving cars. I think we need government and business leaders to have a serious conversation about that, and I think the hype about “evil killer robots” is an unnecessary distraction.
Read the full interview via “Google Brain’s Co-Inventor Tells Why He’s Building Chinese Neural Networks” (Backchannel, Medium).