Leveraging Long Short-Term Memory (LSTM) to generate novel jazz solos.
The dataset is a corpus of jazz music, with generation performed over 78 values. In this case, values can be thought of as musical notes.
This model was designed to learn musical patterns, so the LSTM was set up with 64-dimensional hidden states. Input and output layers were then created for the network.
Next, a djmodel() function was created, which:
a. Creates a custom Lambda layer.
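Only the 78 values and the 64-dimensional hidden state come from the text above; the sequence length (Tx = 30) and the exact layer wiring below are assumptions. A minimal Keras sketch of what such a djmodel() might look like, including the custom Lambda layer that slices out one time step:

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, Lambda, Reshape
from tensorflow.keras.models import Model

n_values = 78   # number of distinct musical "values" (notes), per the article
n_a = 64        # LSTM hidden-state dimension, per the article
Tx = 30         # assumed sequence length (not stated in the article)

# Shared layers, reused at every time step so weights are tied across time
reshaper = Reshape((1, n_values))
lstm_cell = LSTM(n_a, return_state=True)
densor = Dense(n_values, activation="softmax")

def djmodel(Tx, n_a, n_values):
    X = Input(shape=(Tx, n_values))
    a0 = Input(shape=(n_a,), name="a0")   # initial hidden state
    c0 = Input(shape=(n_a,), name="c0")   # initial cell state
    a, c = a0, c0
    outputs = []
    for t in range(Tx):
        # Custom Lambda layer: slice out the input at time step t
        x = Lambda(lambda z, t=t: z[:, t, :])(X)
        x = reshaper(x)
        a, _, c = lstm_cell(x, initial_state=[a, c])
        outputs.append(densor(a))         # distribution over the 78 values
    return Model(inputs=[X, a0, c0], outputs=outputs)

model = djmodel(Tx, n_a, n_values)
```

The Lambda layer is what makes the per-time-step loop possible: it turns the plain slicing expression into a Keras layer so the functional API can track it.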
This project was inspired by a problem statement on Upwork, though not this exact one: an advert was placed for a data scientist who could analyze the interconnections between the people in one's network. It gave me the idea to analyze my own LinkedIn growth, so I searched online and found this repo by Guillaume Chevalier.
I navigated to my LinkedIn profile and went to Settings, then Privacy settings, to request my data. It took 13 minutes before I was notified that it was ready. The file I got was a .txt, and I had to use an online extractor for it, probably because I requested more than just my connections dataset.
Using the COVID-19 dataset from Kaggle, I built sentiment analysis models using logistic regression and an LSTM.
I imported the needed libraries to make the preprocessing possible.
I loaded the CSV file using latin1 encoding, as reading it as UTF-8 (or without specifying an encoding) returns errors. I used the .info() function to get information about the dataset.
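The file name and contents below are illustrative, not the actual Kaggle data, but they reproduce the encoding issue described: a byte that is valid latin1 but not valid UTF-8.

```python
import pandas as pd

# Write a tiny CSV containing a latin1-only byte (é) to mimic the issue
sample = "text,sentiment\ncafé closed due to covid,negative\n"
with open("covid_sample.csv", "w", encoding="latin1") as f:
    f.write(sample)

# Reading as UTF-8 fails on the latin1 byte...
try:
    pd.read_csv("covid_sample.csv", encoding="utf-8")
except UnicodeDecodeError:
    pass  # this is the error the article mentions

# ...while latin1 decodes every byte, so the load succeeds
df = pd.read_csv("covid_sample.csv", encoding="latin1")
df.info()  # column names, non-null counts, dtypes
```

latin1 maps every possible byte to a character, which is why it never raises a decode error on files like this.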
When building a model, data collection is the most essential part of the machine learning lifecycle. All around the internet there are many datasets one can use to build and train a model, complete with class names and labels. But what happens when one wants to create a custom dataset with custom classes? This question is posed by many beginners, myself included. This is where data scraping comes in.
What is data scraping? According to Wikipedia, data scraping is a technique in which a computer program extracts data from human-readable output coming from another program. Data scraping comes in form…
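As a minimal illustration of that definition, the sketch below pulls image URLs out of a page's HTML using only the standard library; the HTML snippet and the goal (collecting images for a custom dataset) are illustrative assumptions, not from the article.

```python
from html.parser import HTMLParser

# A minimal scraper: extract image URLs from a page's HTML,
# e.g. to build a custom image dataset with custom classes.
class ImageSrcParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "src" in attrs:
                self.sources.append(attrs["src"])

html = '<html><body><img src="cat1.jpg"><p>hi</p><img src="cat2.jpg"></body></html>'
parser = ImageSrcParser()
parser.feed(html)
print(parser.sources)  # ['cat1.jpg', 'cat2.jpg']
```

Real projects typically pair a parser like this with an HTTP client to fetch the pages, but the extraction step is the same.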
Tech is the new oil, and everyone wants a huge chunk of the cake. That is good, but the problem is that most people do not slow down to figure out what they want, the route they want to follow, or the language to learn. This issue is often said to stem from a lack of mentorship, which I agree with to an extent. I went through it myself because I didn't know where to start or what I wanted to do. I wanted something fast and easy to learn, but this field isn't something you rush into.
Text generation is one of the many breakthroughs of machine learning. This breakthrough comes in handy in the world of creative arts through songwriting, poems, short stories and even novels. I decided to channel my inner Shakespeare by building and training a neural network that generates poems by predicting the next set of words from a seed text using an LSTM. LSTMs are the go-to models for text generation and are preferred over vanilla RNNs because of RNNs' vanishing and exploding gradient problems.
I sourced the data from a combination of dialogues from Shakespearean plays spanning over 2500 lines of…
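Before an LSTM can predict the next word from a seed text, the corpus lines have to be turned into next-word training pairs. A common way to do this (an assumed preprocessing step, and the corpus lines below are placeholders, not the actual dataset) is to build growing n-gram sequences from each line:

```python
# Build next-word training pairs from corpus lines, the standard
# preprocessing step before feeding an LSTM text generator.
lines = [
    "shall i compare thee to a summers day",
    "thou art more lovely and more temperate",
]

# Word-level vocabulary; id 0 is reserved for padding
words = sorted({w for line in lines for w in line.split()})
word_to_id = {w: i + 1 for i, w in enumerate(words)}

# n-gram sequences: for "a b c d" emit [a b], [a b c], [a b c d]
sequences = []
for line in lines:
    ids = [word_to_id[w] for w in line.split()]
    for i in range(2, len(ids) + 1):
        sequences.append(ids[:i])

# Each sequence's last token is the word the model must predict
X = [s[:-1] for s in sequences]
y = [s[-1] for s in sequences]
```

After padding X to a uniform length, these pairs are exactly what an Embedding + LSTM + softmax model trains on.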
Detection of the presence or absence of cardiovascular disease based on: age, height, weight, gender, smoking, alcohol intake, physical activity, systolic blood pressure, diastolic blood pressure, cholesterol, and glucose.
The dataset used for this project was sourced from Kaggle.
Importing libraries: I imported the following libraries: pandas as pd, numpy as np, seaborn as sns, matplotlib.pyplot as plt, and tensorflow as tf.
Using the pandas read_csv function, I loaded the dataset, and using the .head() function, I got the first five rows of the dataset.
I dropped the id column and converted the age into years as the given age in the…
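A small sketch of those two preprocessing steps; the toy rows below only mimic the Kaggle cardio dataset's layout, where age is recorded in days:

```python
import pandas as pd

# Toy rows mimicking the dataset layout: age is given in days
df = pd.DataFrame({
    "id": [0, 1],
    "age": [18393, 20228],   # days
    "height": [168, 156],
    "weight": [62.0, 85.0],
    "cardio": [0, 1],
})

df = df.drop(columns=["id"])               # the id column carries no signal
df["age"] = (df["age"] / 365).astype(int)  # convert days -> whole years
print(df.head())
```

Converting age to years makes the feature far easier to interpret in the exploratory plots that follow.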
A transformer model is an architecture used in natural language processing for transforming one sequence into another using its two parts: the encoder and the decoder.
In this article, I will be creating a transformer model that translates from English to Deutsch. Deutsch, otherwise known as German, is, according to Wikipedia, a West Germanic language mainly spoken in Central Europe. It is the most widely spoken and official or co-official language in Germany, Austria, Switzerland, South Tyrol in Italy, the German-speaking Community of Belgium, and Liechtenstein. …
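The operation shared by both the encoder and the decoder is scaled dot-product attention; a minimal numpy sketch of it, with illustrative sizes (3 tokens, model dimension 4) that are not from the article:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends over all keys; the resulting softmax
    weights mix the value vectors. Used in both encoder and decoder."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity, scaled
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 tokens, d_model = 4 (illustrative sizes only)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
```

In the full model this is wrapped in multi-head attention, with the decoder additionally attending over the encoder's outputs to align German tokens with English ones.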
BERT (Bidirectional Encoder Representations from Transformers) is a model designed to pre-train deep bidirectional representations from unlabeled text.
This article is on how to use BERT for sentiment analysis.
After I imported the libraries and loaded the dataset from the file, I started cleaning the data. This involves removing symbols that may interfere during tokenization.
Next I tokenized the data; to do this, I had to create a BERT layer using the Keras layer from TensorFlow Hub.
Then I encoded the cleaned sentences.
Next, I created padded batches, this way I added the minimum…
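The padding step can be sketched as follows; the token ids are illustrative placeholders, not real BERT vocabulary ids, and the batch contents are assumed:

```python
# Pad each batch of token-id sequences to the length of its longest
# member, so the model receives rectangular inputs.
PAD_ID = 0

def pad_batch(batch):
    max_len = max(len(seq) for seq in batch)
    return [seq + [PAD_ID] * (max_len - len(seq)) for seq in batch]

batch = [[101, 2023, 102], [101, 2307, 3185, 999, 102]]
padded = pad_batch(batch)
print(padded)  # [[101, 2023, 102, 0, 0], [101, 2307, 3185, 999, 102]]
```

Padding per batch rather than across the whole dataset keeps sequences close to their natural length, which saves computation.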
Trax is an end-to-end library for deep learning that focuses on clear code and speed. It is actively used and maintained in the Google Brain team.
I decided to use Trax for sentiment analysis with a Twitter dataset. For more about Trax, the repo is found here.
I used a conda environment with Python 3.6, as Python 3.8 isn't fully compatible with some Trax module imports.
I started by importing the required libraries and dependencies plus the tweet dataset from utils.py.
I split both the negative and positive datasets into train and validation sets, then printed out the length and number…
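The split can be sketched in plain Python; the tweets below are placeholders for the dataset loaded from utils.py, and the 80/20 ratio is an assumption:

```python
# Split positive and negative tweet lists into train/validation sets.
def split(examples, train_frac=0.8):
    cut = int(len(examples) * train_frac)
    return examples[:cut], examples[cut:]

positives = [f"pos tweet {i}" for i in range(10)]  # placeholder data
negatives = [f"neg tweet {i}" for i in range(10)]  # placeholder data

train_pos, val_pos = split(positives)
train_neg, val_neg = split(negatives)

train_x = train_pos + train_neg
val_x = val_pos + val_neg
# Labels: 1 for positive, 0 for negative
train_y = [1] * len(train_pos) + [0] * len(train_neg)
val_y = [1] * len(val_pos) + [0] * len(val_neg)
```

Splitting the positive and negative lists separately keeps the class balance identical in the train and validation sets.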