Seems there is a lot of harping on the details of the @DeepMind protein folding solution. Some of the concerns are legitimate, but in my mind even those are overblown. Yes, there is still a lot of work to be done, and edge cases to be solved, but the main solution is there. 1/n
And if there is one lesson to be learned from the past @DeepMind successes, especially with the way they solved Go, it's that even more powerful solutions and techniques are ahead. 2/n
Another often-cited parallel is with #ImageNet: it seems that history is repeating itself here, and once the basic framework is in place, progress will only accelerate dramatically over the next few years. 3/n
Another concern is with the understanding of the folding mechanisms, and here I have two things to say: 1. Despite what has been tirelessly echoed in some quarters, deep learning algorithms are not black boxes. We have always had a high degree of understanding of what they do 4/n
And this understanding has only grown over the years. Neural networks are very, very, very well understood algorithms, and their outputs can be interpreted in great depth. 5/n
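To make that concrete, here is a minimal sketch of gradient-based input attribution ("saliency"), one standard technique for interpreting a network's output; the toy two-layer network and its weights are made up for illustration and have nothing to do with AlphaFold itself.

```python
# Minimal sketch: gradient-based saliency on a hypothetical toy network.
# The input gradient tells us which input features the output is most
# sensitive to -- one simple way a network's prediction can be interpreted.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network: x -> tanh(W1 @ x) -> w2 . h (scalar output)
W1 = rng.normal(size=(4, 8))   # hidden weights (4 hidden units, 8 inputs)
w2 = rng.normal(size=4)        # output weights

def forward(x):
    h = np.tanh(W1 @ x)
    return w2 @ h

def input_saliency(x):
    # Closed-form gradient of the scalar output w.r.t. each input:
    #   d out / d x = W1^T (w2 * (1 - tanh(W1 x)^2))
    # Entries with large magnitude mark the most influential inputs.
    h = np.tanh(W1 @ x)
    return W1.T @ (w2 * (1.0 - h ** 2))

x = rng.normal(size=8)
print("output:  ", forward(x))
print("saliency:", input_saliency(x))
```

That gradient is exact and cheap to compute, which is part of why "black box" undersells how inspectable these models are; richer attribution methods build on the same idea.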
2. And now for a small philosophical detour: there is probably a paradigm shift going on right now in science about how we understand what it means to understand something. The shift is most pronounced in the natural ("hard") sciences. 6/n
For most of the past few centuries, the gold standard of understanding a physical phenomenon has been our ability to describe it in terms of mathematical equations *and* their solutions. 7/n
There will still be a place for those two, but increasingly the gold standard of understanding nature is the ability to create a powerful predictive ML model that describes it. 8/n