Google Brain’s Quoc Le speaks about how Deep Learning could revolutionize Healthcare

Dr. Quoc Le. Credit: Biotechin.Asia

Dr. Quoc Viet Le is a research scientist at Google Brain known for his path-breaking work on deep neural networks (DNNs). He is especially well known for his Ph.D. work in image processing under Andrew Ng, one of the pioneers of the DNN revolution. Le's and Ng's work demonstrated how computers could learn complicated features and patterns in a way similar to how the mammalian brain learns, with better performance than earlier neural network technology. One of their first breakthroughs was training a large neural network to detect cats in YouTube videos.

This breakthrough reignited interest in DNNs and set the giants of the computer industry, such as Google, Facebook and Microsoft, racing to incorporate AI techniques into their software. Recently, Google announced a cloud machine-learning platform to encourage more people into the area. DNNs have now become a buzzword among tech enthusiasts. They perform effectively in tasks such as image processing, handwriting recognition and game-playing, and are being explored as solutions to other problems such as self-driving cars, robotics, medical diagnosis, and environmental and social challenges.

Quoc Le was named one of the top tech innovators under 35 by MIT Technology Review. At EmTech Asia, we asked him a few questions about his take on neural networks: their development, philosophy, challenges and future role in enabling or threatening humanity.

In part 1 of our interview, we ask Le about the inspirations behind the development of neural networks and their various applications (Read part 2 here).

Q: During your development of deep neural networks, were you able to draw inspiration from detailed knowledge of the brain's workings through neuroscientific findings? How much did these insights shape the techniques you developed?

Le: Actually, the brain is so complicated that we don't know much about it. We know something about the 'hardware structure', but not much about the 'software' and how it works. It is hard to find out more, because you would have to dissect brains and let animals or humans die in the process. That makes learning about it complicated.

DNNs are a first-order approximation of what seems to happen in the brain. What we do in our field of simulating the brain with DNNs is to mimic some of the hardware architecture. We learnt that the brain is hierarchical, and that neurons are organised into layers with different functionalities, so these were aspects of the 'hardware architecture' that we could mimic in developing DNNs. Beyond that, the structure of the brain is just an inspiration for DNN development.
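As a rough illustration of the layered, hierarchical architecture Le describes, here is a minimal sketch of a small feed-forward network in Python/NumPy. The layer sizes, random weights and input are all invented for illustration; this is not Google Brain's code, just the general stacked-layer idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied after each layer
    return np.maximum(0.0, x)

# A toy input: one "image" flattened into 64 pixel values (made up).
x = rng.random(64)

# Three stacked layers, each transforming the previous layer's output.
# The hierarchy (pixels -> edges -> parts -> objects) is the brain-inspired
# idea; the sizes and random weights here are purely illustrative.
layer_sizes = [64, 32, 16, 10]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

activation = x
for W in weights:
    activation = relu(activation @ W)  # each layer builds on the one below it

print("final layer output:", activation)
```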

Q: Do you have any shocking examples where AI has outclassed humans in pattern recognition tasks?

Le: One is image recognition. A researcher at Stanford, Dr. Andrej Karpathy, was working on the problem of sifting through images and labeling them. He realized that, in a head-to-head comparison, the machine learning algorithm was not far behind humans, and sometimes even better.

A lot of progress has also been made in face recognition. Handwriting recognition I would consider a (computer science) problem that is now solved, unless it's really bad handwriting! But these are only particular narrow areas where AI outclasses humans, because scientists have worked on them intensively and were able to make progress. We don't have anything like a general algorithm that can outdo humans across many categories.

Q: What about other areas like robotics? Is it hard to train machines to move and balance things?

Le: Yes, it's hard. My friend at UC Berkeley in Silicon Valley used DNNs to train a robotic arm to grasp objects, move back and forth and things like that. He had some early success. I'd say this field is a good investment, but a long-term one.

It's hard to speculate how broad a range of other activities DNNs can be used for, but healthcare, for one, will greatly benefit from AI. Also, smart transportation. Right now, human drivers still cause accidents. But if an AI can help a car recognize objects, routes and threats, it can help it drive better.

Dr. Quoc Le from Google Brain, speaking at the MIT Innovators Under 35 forum. Credit: Biotechin.Asia

Q: How can AI be used to revolutionize healthcare? Can you explain?

Le: An example is medical diagnosis. In my home country, Vietnam, I never had access to a good doctor in my youth. But now we can train AIs to take over the task of a good doctor, or to help doctors. I imagine it could be an algorithm on a phone, for example, though we don't have one like that yet. Your phone could monitor your body in terms of measurable quantities: your temperature, heart rate, pulse, weight, skin color and so on. Then an AI could use this data to diagnose that maybe you have a particular sickness or condition, such as a cold, a flu or some skin disease.

For this, the AI must be trained on the best experience available, based on previous labelled records. These labelled records would be cases where a human doctor examined the data and classified it into some diagnosis: 'these symptoms mean the patient has a cold', 'this indicates so-and-so disease', and so on. The AI can learn to do the same from these records.
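To make the idea of learning from labelled records concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The feature set, toy measurements and diagnosis labels are invented for illustration; a real diagnostic system would need far more data and clinical validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one (hypothetical) patient record:
# [temperature_celsius, heart_rate_bpm, cough (0/1), rash (0/1)]
X = np.array([
    [36.8, 70, 0, 0],
    [38.5, 95, 1, 0],
    [39.1, 100, 1, 0],
    [37.0, 72, 0, 1],
    [36.9, 68, 0, 1],
    [38.9, 98, 1, 0],
])

# Diagnoses assigned by a doctor in past cases: the "labelled records".
y = np.array(["healthy", "flu", "flu", "skin_condition", "skin_condition", "flu"])

# Supervised learning: fit a simple classifier to the doctor-labelled data.
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# A new, unseen set of phone measurements.
new_patient = np.array([[38.7, 92, 1, 0]])
print(model.predict(new_patient))  # e.g. ['flu']
```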

As an extension, it would be great if we could do this using unlabelled data as well, which is called unsupervised training (more on unsupervised training in part two of this interview).
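For contrast, a hypothetical sketch of the unsupervised case: the same kind of measurements, but with no doctor-assigned labels, so the algorithm can only group similar records together. Again, the data and cluster count are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# The same style of measurements as before, but with no diagnosis labels.
X_unlabelled = np.array([
    [36.8, 70, 0, 0],
    [38.5, 95, 1, 0],
    [39.1, 100, 1, 0],
    [37.0, 72, 0, 1],
    [36.9, 68, 0, 1],
    [38.9, 98, 1, 0],
])

# Unsupervised learning: discover structure (clusters) without any labels.
# It can group similar patients, but it cannot name the groups "flu" or "healthy".
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_unlabelled)
print(clusters)  # e.g. [0 1 1 2 2 1] -- group indices, not diagnoses
```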

Q: What about cyber security?

Le: Yes, many use AI for cyber security. However, it is such a tough task that AIs cannot manage or take over cyber security fully by themselves yet. What companies currently do is hire security experts, code up a set of decision rules for security applications, and then use a small AI on top to select or enable those decisions. As far as I know, that is the only work being done there, and it is not so much AI. But that's second-hand knowledge; I'm not an expert on it.
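As a rough, hypothetical illustration of the "expert rules with a small AI on top" setup Le describes (not any particular company's system), the rules below are hand-written and a small model only decides which rule-flagged events to escalate. All rule names, thresholds and data are invented.

```python
from sklearn.tree import DecisionTreeClassifier

# Hand-written expert rules: each returns 1 if the event looks suspicious.
def rule_many_failed_logins(event):
    return int(event["failed_logins"] > 5)

def rule_odd_hour(event):
    return int(event["hour"] < 5)

def rule_new_country(event):
    return int(event["new_country"])

RULES = [rule_many_failed_logins, rule_odd_hour, rule_new_country]

def rule_features(event):
    # The "small AI" only ever sees the rules' verdicts, not raw data.
    return [rule(event) for rule in RULES]

# Toy history: features are rule outputs, labels are whether an analyst escalated.
past_events = [
    {"failed_logins": 8, "hour": 3, "new_country": 1},
    {"failed_logins": 1, "hour": 14, "new_country": 0},
    {"failed_logins": 6, "hour": 22, "new_country": 0},
    {"failed_logins": 0, "hour": 2, "new_country": 0},
]
escalated = [1, 0, 1, 0]

clf = DecisionTreeClassifier(random_state=0)
clf.fit([rule_features(e) for e in past_events], escalated)

new_event = {"failed_logins": 7, "hour": 4, "new_country": 1}
print(clf.predict([rule_features(new_event)]))  # [1] -> escalate to an analyst
```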

Q: A more quirky question for you.

Technology has been progressing rapidly in the last decade. With the current advances in synthetic biology, scientists are exploring how you might be able to grow almost anything in a lab. Coupling this with deep learning, do you think that in the next 50 years or so there's a possibility we could do something like download our brain onto a hard disk, transplant it, and continue to live on?

Le: Well, it’s not so easy. Not in the next 10 years I think, but I don’t know. The thing is, technologies face a lot of surprises in the course of their development. The rapid improvement in accuracy that was achieved by DNNs in image processing took us by surprise. After that, people anticipated similar progress in tasks like robots being able to collect objects, but now we are seeing that such a thing is still far away. So while DNNs have made breakthroughs in some areas, extrapolating the success likewise to other areas is non-trivial. We have to wait for the tech to catch up in other aspects.

(Read more about the bottlenecks and future of AI technology in part 2 of this interview.)
