Welcome. You're with Ankur Bhatia!

Status

Software Engineer - ML at Gan.ai (June 2021 - Present). Having fun with videos and building generative video models.




Proof: Floyd's Cycle-Finding Algorithm (Tortoise and Hare algo)

Floyd's cycle-finding algorithm is a pointer algorithm that uses only two pointers, which move through the sequence at different speeds. It can detect a cycle in a linked list in O(n) time and O(1) space, and can also find the node where the cycle starts. This video tutorial explains the concept and the proof of the algorithm.
Video Link: https://youtu.be/iQmFtH0kj2c
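
A minimal Python sketch of the two-phase algorithm (the Node class and function name are mine, not from the video):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def cycle_start(head):
    """Return the first node of the cycle, or None if the list is acyclic."""
    slow = fast = head
    # Phase 1: move the pointers at speeds 1 and 2; they meet iff a cycle exists
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            break
    else:
        return None  # fast ran off the end of the list: no cycle
    # Phase 2: reset one pointer to the head; advancing both one step at a
    # time, they meet exactly at the start of the cycle (see proof in video)
    slow = head
    while slow is not fast:
        slow = slow.next
        fast = fast.next
    return slow
```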

Paper Review on RankNet

In this video, we walk through RankNet. Learning to Rank using RankNet (by Microsoft) is a ranking algorithm used to rank the results of a query. The ranking comparison is performed pairwise, so no mapping to particular rank values is required and no rank boundaries are needed; hence, the paper removes the need to perform ordinal regression. The paper also presents a probabilistic cost function and trains the model using gradient descent.
Video Link: https://www.youtube.com/watch?v=MuAhhikIm2U
Paper Link: ICML_RankNet
Code: ranknet.py
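
As a rough sketch of the paper's probabilistic cost (independent of the linked ranknet.py; names are mine), the model maps each pair of scores to a probability with a sigmoid and applies cross-entropy:

```python
import numpy as np

def ranknet_cost(s_i, s_j, p_bar):
    """Pairwise cross-entropy cost from the RankNet paper.

    s_i, s_j : model scores for items i and j
    p_bar    : target probability that i should rank above j
               (typically 1.0, 0.5, or 0.0 for pairwise labels)
    """
    # Modelled probability that i beats j: P_ij = sigmoid(s_i - s_j)
    p_ij = 1.0 / (1.0 + np.exp(-(s_i - s_j)))
    # Cross entropy between target and modelled pair probabilities;
    # differentiable in the scores, so it trains with gradient descent
    return -p_bar * np.log(p_ij) - (1.0 - p_bar) * np.log(1.0 - p_ij)

print(ranknet_cost(2.0, 0.5, 1.0))  # small cost: i scored above j, as desired
```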

Deploy a Flask app on Amazon AWS EC2 and keep it running while you are offline.

This blog is published under the Analytics Vidhya publication. Flask is a micro web framework in Python. It is designed to make getting started quick and easy, with the ability to scale up to complex applications, making it easy to deploy Python scripts on the web…
Read More
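
A minimal sketch of such an app (the file name, route, and port are my choices, not necessarily the blog's):

```python
# app.py - a minimal Flask app, assuming Flask is installed (pip install flask)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from EC2!"

if __name__ == "__main__":
    # host="0.0.0.0" exposes the app on the instance's public IP;
    # the port must also be opened in the EC2 security group
    app.run(host="0.0.0.0", port=8080)

# One way to keep it running after the SSH session closes:
#   nohup python app.py > app.log 2>&1 &
```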

Some Helpful Blogs:

  1. Getting Started with Docker - https://championshuttler.github.io/docker-basicLearning/
  2. Bayesian Inference for Parameter Estimation - Medium Article
    • This includes details about Bayes' theorem and the prior, posterior, likelihood, and marginal (evidence) distributions (i.e. all parts of the Bayes formula)
    • Maximum a posteriori probability estimate, or simply the MAP estimate - the most important statistic calculated from the posterior distribution (it is the mode of that distribution)
    • The fact that the posterior and prior are both from the same distribution family (they are both Gaussians) means that they are called conjugate distributions. In this case the prior distribution is known as a conjugate prior (a toy conjugate-update sketch in Python follows this list).
    • Includes a great blog on Latent Dirichlet Allocation (LDA), an unsupervised learning algorithm for finding topics in text corpora. Blog
    • Markov Chain Monte Carlo methods - if the prior and/or likelihood are not Gaussian, the product in Bayes' rule is difficult to compute in closed form. Blog
    • Continuously updating Bayesian inference as new data points arrive, with applications in Kalman filters. Blog
    • How priors can act as regularizers.
  3. How to Use t-SNE Effectively - Distill pub: t-SNE
  4. Stratifying the data during Train Test Split - https://towardsdatascience.com/3-things-you-need-to-know-before-you-train-test-split-869dfabb7e50
    • Using a seed is important so that results are reproducible (seed numpy, tf, cuda)
    • Stratification is necessary so that the class distribution remains the same in the train set and the test set; it should not happen that, say, most of the data from 3 classes lands in the train set and the other 2 classes in the test set. This is easily done using the stratify argument of train_test_split() from sklearn (see the sketch after this list).
  5. Self-Supervised Representation Learning - https://lilianweng.github.io/lil-log/2019/11/10/self-supervised-learning.html
  6. Deep Learning Based Text Classification: A Comprehensive Review (2020) - arXiv pdf
  7. Deep Learning for Single Image Super-Resolution: A Brief Review (2019) - arXiv pdf
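
Since the Gaussian-Gaussian case in item 2 is fully conjugate, the posterior update has a closed form. Below is a toy Python sketch of that update (the function name and numbers are my own); for a Gaussian posterior, the MAP estimate simply equals the posterior mean:

```python
import numpy as np

def gaussian_posterior(x, mu0, tau0, sigma):
    """Conjugate update for a Gaussian mean with known noise std sigma.

    Prior:      theta ~ N(mu0, tau0^2)
    Likelihood: x_i | theta ~ N(theta, sigma^2)
    The posterior is again Gaussian because the Gaussian prior is
    conjugate to the Gaussian likelihood.
    """
    n = len(x)
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)  # precisions add
    post_mean = post_var * (mu0 / tau0**2 + np.sum(x) / sigma**2)
    return post_mean, np.sqrt(post_var)

# Toy data: 20 noisy observations of an unknown mean around 2.0
x = np.random.default_rng(0).normal(2.0, 1.0, size=20)
map_estimate, post_std = gaussian_posterior(x, mu0=0.0, tau0=10.0, sigma=1.0)
print(map_estimate, post_std)  # mean == mode == MAP for a Gaussian posterior
```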
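
And a short sketch of the stratified split from item 4, using only sklearn's train_test_split (the toy data is mine):

```python
import numpy as np
from sklearn.model_selection import train_test_split

SEED = 42
np.random.seed(SEED)  # seed numpy; also seed tf/cuda in a full pipeline

# Toy imbalanced dataset: 100 samples over 5 classes
X = np.arange(100).reshape(-1, 1)
y = np.repeat([0, 1, 2, 3, 4], [40, 30, 15, 10, 5])

# stratify=y keeps each class's proportion identical in train and test,
# and random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=SEED
)
print(np.bincount(y_train) / len(y_train))  # same class ratios as y
print(np.bincount(y_test) / len(y_test))
```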

To Read about:

  1. Soap Bubble Effect in High-Dimensional Gaussians - much of the density of a high-dimensional Gaussian lies close to the surface of a hypersphere (a quick numpy check follows this list): Stack Exchange Question
  2. One of the best explanations of Gaussian Processes - a 2019 Distill publication: visual-exploration-gaussian-processes/
  3. Deep NN’s as Gaussian Processes: Paper
  4. Fast and Easy Infinite Wide Networks - Google ICML 2020: Google AI Blog
  5. Infinitely Deep Infinite Width Networks - ICLR 2019: Paper
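
A quick numpy check of the soap bubble effect from item 1 (the script is my own illustration, not from the linked question):

```python
import numpy as np

# Norms of standard d-dimensional Gaussian samples concentrate near sqrt(d):
# almost all of the density sits in a thin shell, not near the origin.
rng = np.random.default_rng(0)
for d in (2, 100, 10_000):
    x = rng.standard_normal((1_000, d))
    norms = np.linalg.norm(x, axis=1)
    print(f"d={d:6d}  mean(norm)/sqrt(d)={norms.mean() / np.sqrt(d):.3f}  "
          f"std(norm)={norms.std():.3f}")
# As d grows, mean(norm)/sqrt(d) -> 1 while std(norm) stays O(1), so the
# mass lives in a shell of radius ~sqrt(d) whose relative width shrinks.
```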