Meeting Minutes from AI Lab session on Saturday 1 June in Bengaluru
#CellStratAILab #disrupt4.0 #WeCreateAISuperstars
CellStrat AI Lab members met for another intense AI session last Saturday in Bengaluru.
First, Anand Mahalingam presented an intuitive session on how to use Alexa to develop a speech recognition app, demonstrating it with the help of an Echo device. Alexa is a leading NLP and speech platform developed by Amazon, and it acts like an App Store for developers to publish speech recognition and speech generation apps.
Then Deepti Gupta presented an interesting session on recognizing toxicity (rudeness or disrespect) in conversations using NLP. Toxicity detection is important, but one must take care to remove unintended bias against identity groups that are frequently attacked in conversations: a model trained on an imbalanced dataset can learn to flag mere mentions of those identities as toxic. Balancing the data lets one remove such unfair biases while still detecting toxicity.
Deepti demonstrated a solution that balances the classes in the conversation data, pre-processes the data to make it ready for model training, tokenizes the text, and then trains the model using Keras and Scikit-Learn. The model uses a bi-directional LSTM and word embeddings to make toxicity predictions.
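A minimal sketch of such a pipeline in Keras, assuming a toy two-sentence corpus (the layer sizes, vocabulary limit, and sequence length here are illustrative, not the values from Deepti's solution):

```python
import numpy as np
import tensorflow as tf

texts = tf.constant(["you are wonderful", "you are an idiot"])  # toy corpus
labels = np.array([0.0, 1.0])                                   # 1 = toxic

# Tokenize the text and pad every sequence to a fixed length
vectorize = tf.keras.layers.TextVectorization(max_tokens=10000,
                                              output_sequence_length=50)
vectorize.adapt(texts)
X = vectorize(texts)

# Word embeddings feeding a bi-directional LSTM, as described above
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # toxicity probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, labels, epochs=2, verbose=0)
preds = model.predict(X, verbose=0)
```

In a real run one would train on a balanced split of a labelled dataset and evaluate bias across identity subgroups, but the model structure is the same.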
Then came an in-depth presentation by Anshumaan Dash on how to reduce the size of trained models. Two papers were reviewed that discuss various ways to reduce the size of a trained model on disk. Novel approaches like vector quantization and the Drop-Neuron algorithm were explained lucidly, along with their implementations.
The Drop-Neuron logic works by simplifying the structure of deep neural networks. Regularization (using li_regularizer on a neuron's incoming connections and lo_regularizer on its outgoing connections) makes the weight space sparse, so connections can be pruned. Pruning individual connections alone does not shrink the model at inference time, since the zeroed weights are still stored and the model occupies the same amount of memory during a forward pass. By dropping entire neurons whose connections have all been pruned, however, the network itself becomes smaller, so the size on disk is reduced and the resultant model also occupies less RAM during a forward pass.
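The neuron-dropping idea can be illustrated with a small NumPy sketch. This is not the paper's li/lo regularized training, only the pruning step afterwards: weights driven near zero are pruned, and any neuron (column) left with no live incoming weights is removed entirely, shrinking the layer itself rather than just storing zeros.

```python
import numpy as np

# Toy dense layer: 8 inputs -> 6 neurons, with deterministic weights
W = np.arange(1, 49, dtype=float).reshape(8, 6) / 10.0
W[:, [1, 4]] = 1e-6   # pretend regularization drove two neurons' weights to ~0

W_sparse = np.where(np.abs(W) < 1e-3, 0.0, W)  # prune near-zero weights
keep = np.any(W_sparse != 0.0, axis=0)         # neurons with any live input
W_pruned = W_sparse[:, keep]                   # drop dead neurons entirely

print(W.shape, "->", W_pruned.shape)           # (8, 6) -> (8, 4)
```

The pruned matrix is genuinely smaller, which is why dropping whole neurons saves memory in the forward pass while connection-level sparsity alone does not.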
The weights of a neural network take up most of the space assigned to the model in memory. These weights are all slightly different floating point numbers, so generic compression formats like zip don't compress them well. The second approach therefore quantizes the weights, using uniform quantization, non-uniform quantization, and K-means clustering, so the space they take is reduced significantly.
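A hedged sketch of the K-means variant: cluster the weights into k centroids, then store one small integer index per weight plus a tiny codebook of centroids, instead of a full float per weight (the array sizes and k=16 below are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)  # stand-in for a layer's weights

k = 16  # 16 centroids, so each index fits in 4 bits
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(weights.reshape(-1, 1))
codebook = km.cluster_centers_.ravel()   # k centroid values (the codebook)
indices = km.labels_.astype(np.uint8)    # one small index per weight

reconstructed = codebook[indices]        # lossy decompression at load time
print("mean abs error:", np.abs(weights - reconstructed).mean())
```

Storing 1000 4-bit indices plus 16 floats takes a small fraction of the space of 1000 float32 values, at the cost of a small reconstruction error in each weight.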
Interested in world-class AI training as well as deep AI research? Join us for the first AI Lab Hackathon on 8th June 2019 at BLR:
Register: https://www.meetup.com/Disrupt-4-0/events/jvfhvqyzjblb/
Hackathon Topic: Object Detection and Localization in Images
Date: 8th June 2019, 10:30 AM – 5:00 PM
Location: WeWork, Embassy Tech Village, ORR, BLR
See you on 8th June for the AI Lab hackathon!
PS: Amazon Gift Certificates for the hackathon winners!
Questions? Call me at +91-9742800566!
Co-Founder & Chief Data Scientist, CellStrat