KotlinConf 2019: Gradient Descent: The Ultimate Optimizer by Erik Meijer
Recording brought to you by American Express. https://americanexpress.io/kotlin-jobs
Working with any gradient-based machine learning algorithm requires the tedious task of tuning its hyper-parameters, such as the learning rate.
There exist more advanced techniques for automated hyper-parameter optimization, but they themselves introduce even more hyper-parameters to control the optimization process.
We propose to learn the hyper-parameters by gradient descent, and furthermore to learn the hyper-hyper-parameters by gradient descent as well, and so on.
As these towers of optimizers grow, they become significantly less sensitive to the choice of top-level hyper-parameters, hence decreasing the burden on the user to search for optimal values.
Best of all, we illustrate all of this with sweet & simple Kotlin code that you could easily have written yourself.
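The idea can be sketched in a few lines. The following is a minimal illustration (not the talk's actual code) of one well-known instance of the technique, hypergradient descent: while gradient descent updates a weight `w` on a toy loss f(w) = (w − 3)², a second gradient-descent step tunes the learning rate `alpha` itself, using the sign agreement of consecutive gradients. The names `grad`, `optimize`, `alpha`, and `kappa` are all assumptions chosen for this sketch, and `kappa` is the hyper-hyper-parameter a further tower level would in turn learn.

```kotlin
// Toy loss f(w) = (w - 3)^2, whose derivative is 2 * (w - 3).
fun grad(w: Double) = 2.0 * (w - 3.0)

// Gradient descent on w, with alpha (the learning rate) itself
// adjusted by gradient descent: if the last two gradients point the
// same way, alpha grows; if they disagree (overshoot), it shrinks.
// Returns the final (w, alpha) pair after `steps` iterations.
fun optimize(steps: Int = 100): Pair<Double, Double> {
    var w = 0.0          // parameter being optimized
    var alpha = 0.01     // hyper-parameter: learning rate
    val kappa = 1e-4     // hyper-hyper-parameter: learning rate for alpha
    var prevGrad = 0.0
    repeat(steps) {
        val g = grad(w)
        // Hypergradient step: d(loss)/d(alpha) = -g * prevGrad
        alpha += kappa * g * prevGrad
        w -= alpha * g
        prevGrad = g
    }
    return Pair(w, alpha)
}

fun main() {
    val (w, alpha) = optimize()
    println("w = $w, alpha = $alpha")  // w approaches 3.0; alpha has grown
}
```

Note how little machinery is involved: the "tower" is just the same update rule applied one level up, which is why the top-level choice (`kappa` here) matters far less than `alpha` would in plain gradient descent.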
KotlinConf website: https://jb.gg/fyaze5
KotlinConf on Twitter: https://twitter.com/kotlinconf
Kotlin website: https://jb.gg/pxrsn6
Kotlin blog: https://jb.gg/7uc7ow
Kotlin on Twitter: https://twitter.com/kotlin
#KotlinConf19 #Kotlin #JetBrains
About the Presenter:
Erik Meijer has been trying to bridge the ridge between theory and practice for most of his career. He is perhaps best known for his work on the Haskell, C#, Visual Basic, and Dart programming languages, among others, as well as for his contributions to LINQ and the Reactive Framework (Rx). Most recently, he has been on a quest to make uncertainty a first-class citizen in mainstream programming languages.