Natural Language Processing using Deep Learning
CATEGORY: Machine Learning Course
DURATION: 2 days
SKILL LEVEL: Associate
LECTURES: 7 lessons
PRICE: 840 €
Course description:
The Natural Language Processing using Deep Learning course is suited for those who want to learn how to process and understand text. It covers the most popular architectures, including Recurrent Neural Networks and Hidden Markov Models.
Students are advised to take the Basic Machine Learning course beforehand so that they are already familiar with the vocabulary and the main frameworks.
Prerequisites for the Natural Language Processing course:
– Basic Deep Learning: Neurons, Types of Layers, Networks, Loss Functions, Optimizers, Overfitting, TensorFlow
– Basic Natural Language Processing: Tokenization, Bag-of-words, TF-IDF, Stemming, Lemmatization, Language models, Sentiment analysis
Module 2: Word vectors
– What are vectors?
– Word analogies
– TF-IDF and t-SNE
– NLTK
– GloVe
– Word2vec
– Text classification using word vectors
Hands-on Lab:
– Perform a basic text classification using multiple word-vector models
– Improve it with basic text preprocessing and language models to get the data ready for machine learning (see the sketch below)
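A minimal sketch of the word-vector classification idea from this lab, assuming gensim and scikit-learn are installed; the tiny labelled dataset, vector size, and epoch count are invented purely for illustration:

```python
# Sketch: classify documents by averaging word2vec vectors (toy data, illustrative only).
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

docs = [
    ("the movie was wonderful and moving", 1),
    ("a truly great and enjoyable film", 1),
    ("the plot was dull and the acting poor", 0),
    ("boring script and terrible pacing", 0),
]
tokenized = [text.split() for text, _ in docs]
labels = [label for _, label in docs]

# Train small word2vec embeddings on the toy corpus itself.
w2v = Word2Vec(sentences=tokenized, vector_size=32, window=3, min_count=1, epochs=50)

def doc_vector(tokens):
    # Represent a document as the average of its word vectors.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0)

X = np.stack([doc_vector(t) for t in tokenized])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([doc_vector("great wonderful film".split())]))
```

In practice the lab would swap the toy corpus for pre-trained GloVe or word2vec vectors and a real dataset; the averaging-plus-classifier pipeline stays the same.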
Module 3: Language modeling
– Bigrams
– Language models
– Neural Network Bigram Model
Hands-on Lab:
– Perform text classification using neural networks based on language models
– Understand the probabilistic modeling behind language models, how to enrich the context of a word, how synonyms can be generated, and how basic neural networks yield powerful language models (see the sketch below)
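A minimal sketch of the count-based bigram model that motivates this module; the toy corpus is invented for illustration:

```python
# Sketch: a maximum-likelihood bigram language model from raw counts.
from collections import defaultdict, Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count bigram frequencies, with a start-of-sentence marker.
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = ["<s>"] + sentence.split()
    for prev, cur in zip(tokens, tokens[1:]):
        bigram_counts[prev][cur] += 1

def bigram_prob(prev, cur):
    # P(cur | prev) = count(prev, cur) / count(prev).
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][cur] / total if total else 0.0

print(bigram_prob("the", "cat"))            # P(cat | the)
print(bigram_counts["sat"].most_common(1))  # most likely word after "sat"
```

The neural bigram model covered in the module replaces these count tables with a network that learns the same conditional probabilities.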
Module 4: Word Embeddings
– CBOW
– Skip-Gram
– Negative Sampling
Hands-on Lab: Understand advanced language-modeling techniques such as Skip-Gram and Negative Sampling by implementing them, and learn to predict the most likely next word in a conversation (see the sketch below)
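A minimal NumPy sketch of skip-gram with negative sampling; the toy corpus, window size, and hyperparameters are chosen only for illustration, and negatives are drawn uniformly rather than from the unigram^0.75 distribution used by word2vec:

```python
# Sketch: skip-gram with negative sampling, trained with plain SGD.
import numpy as np

rng = np.random.default_rng(0)
tokens = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(tokens))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 16

W_in = rng.normal(scale=0.1, size=(V, D))   # center-word vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, n_neg = 0.05, 2, 3
for epoch in range(200):
    for pos, word in enumerate(tokens):
        c = idx[word]
        lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos == pos:
                continue
            o = idx[tokens[ctx_pos]]
            negs = rng.integers(0, V, size=n_neg)  # uniform negatives (simplified)
            # Positive pair: push sigmoid(u_o . v_c) toward 1.
            g = sigmoid(W_out[o] @ W_in[c]) - 1.0
            grad_c = g * W_out[o]
            W_out[o] -= lr * g * W_in[c]
            # Negative pairs: push sigmoid(u_k . v_c) toward 0.
            for k in negs:
                gk = sigmoid(W_out[k] @ W_in[c])
                grad_c += gk * W_out[k]
                W_out[k] -= lr * gk * W_in[c]
            W_in[c] -= lr * grad_c

print(W_in[idx["fox"]][:4])  # a slice of the learned embedding for "fox"
```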
Module 5: NLP techniques
– What is POS Tagging?
– POS Tagging Recurrent Neural Network
– POS Tagging Hidden Markov Model (HMM)
– Named Entity Recognition (NER)
– POS vs. NER
Hands-on Lab: Use NLTK and SciPy to improve your classification with grammar rules and POS tags, use NER to highlight the most valuable content of a phrase, and then implement summarization (see the sketch below)
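A minimal NLTK sketch of POS tagging and NER; the example sentence is invented, and the download calls fetch the tagger and chunker models on first run (data package names can vary across NLTK versions, so both old and new identifiers are requested):

```python
# Sketch: POS tagging and named entity recognition with NLTK.
import nltk

for pkg in ("averaged_perceptron_tagger", "averaged_perceptron_tagger_eng",
            "maxent_ne_chunker", "maxent_ne_chunker_tab", "words"):
    nltk.download(pkg, quiet=True)

tokens = "Barack Obama visited Paris last summer".split()

# Part-of-speech tagging: each token gets a Penn Treebank tag.
tagged = nltk.pos_tag(tokens)
print(tagged)  # e.g. [('Barack', 'NNP'), ('Obama', 'NNP'), ...]

# Named entity recognition: chunk the tagged tokens into entity subtrees.
tree = nltk.ne_chunk(tagged)
for subtree in tree:
    if hasattr(subtree, "label"):
        entity = " ".join(tok for tok, tag in subtree.leaves())
        print(subtree.label(), "->", entity)  # e.g. PERSON -> Barack Obama
```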
Module 6: Recurrent Neural Networks
– LSTM
– GRU
– Text Generation
Hands-on Lab:
– Implement a basic RNN architecture in Keras for word prediction, using the word embeddings studied earlier (see the sketch below)
– Benchmark the performance of LSTM against GRU and BiLSTM
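A minimal Keras sketch of next-word prediction with an LSTM, assuming TensorFlow is installed; the toy corpus, context length, and hyperparameters are chosen only for illustration:

```python
# Sketch: predict the next word from a 3-word context with an embedding + LSTM.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

text = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}
seq_len = 3

# Build (3-word context -> next word) training pairs from the toy text.
X = np.array([[idx[w] for w in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = Sequential([
    Embedding(input_dim=len(vocab), output_dim=16),
    LSTM(32),  # swap in GRU(...) or Bidirectional(LSTM(...)) for the benchmark
    Dense(len(vocab), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=200, verbose=0)

probe = np.array([[idx["sat"], idx["on"], idx["the"]]])
print(vocab[int(model.predict(probe, verbose=0).argmax())])  # "mat" or "rug"
```

The lab's benchmark amounts to replacing the single LSTM layer and comparing accuracy and training time across the three variants.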
Module 7: Generative Neural Networks
Hands-on Lab:
– Implement in Keras your own generative model that produces text in the style of Shakespeare (see the sketch below)
– Learn to apply transfer learning to text
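A minimal Keras sketch of character-level text generation, the core of this lab; the tiny training snippet stands in for a full Shakespeare corpus, and the sampling temperature is illustrative:

```python
# Sketch: character-level LSTM language model with temperature sampling.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

text = "shall i compare thee to a summers day thou art more lovely "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
seq_len = 10

X = np.array([[idx[c] for c in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = Sequential([
    Embedding(len(chars), 16),
    LSTM(64),
    Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=100, verbose=0)

def sample(seed, n=60, temperature=0.8):
    # Generate n characters by repeatedly sampling from the softmax output.
    out = seed
    for _ in range(n):
        window = np.array([[idx[c] for c in out[-seq_len:]]])
        probs = np.asarray(model.predict(window, verbose=0)[0], dtype="float64")
        probs = np.exp(np.log(probs + 1e-9) / temperature)
        probs /= probs.sum()
        out += chars[int(np.random.choice(len(chars), p=probs))]
    return out

print(sample("shall i co"))
```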
Note:
Every student is assigned their own dedicated virtual lab environment.
Additional details:
To attend this course, you need to have:
• PC/Laptop with internet access
• Updated web browser