pandas

15 Dec, 2016

Using Category Encoders library in Scikit-learn

I recently found a relatively new library on GitHub for handling categorical features, named categorical_encoding, and decided to give it a spin. As a reminder, categorical features are variables in your data that have a finite (ideally small) set of possible values, for example months of the year or hair color. You can't feed these into predictive models as raw text, so some conversion is necessary to make these variables usable. Typically, you create a new, separate column for each possible value (or, depending on the intended model, n-1 values), and each of these new [...]
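
As a rough illustration of the idea, here is a minimal sketch of one-hot encoding with the library, assuming it installs under the package name category_encoders; the DataFrame and column here are hypothetical, not from the post:

```python
import pandas as pd
import category_encoders as ce

df = pd.DataFrame({'hair_color': ['brown', 'black', 'blonde', 'brown']})

# OneHotEncoder creates a new binary column for each distinct value.
encoder = ce.OneHotEncoder(cols=['hair_color'], use_cat_names=True)
encoded = encoder.fit_transform(df)
print(encoded.head())
```

The encoders follow the scikit-learn fit/transform convention, so they can drop into a Pipeline alongside other preprocessing steps.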

18 Nov, 2016

TF-IDF Basics with Pandas and Scikit-Learn

In a previous post we took a look at some basic approaches for preparing text data to be used in predictive models. In this post, we'll use pandas and scikit-learn to turn the product "documents" we prepared into a tf-idf weight matrix that can be used as the basis of a feature set for modeling. What is tf-idf? Tf-idf is a very common technique for determining roughly what each document in a set of documents is "about". It cleverly accomplishes this by looking at two simple metrics: tf (term frequency) and idf (inverse document frequency). Term frequency is [...]
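
A minimal sketch of building such a weight matrix with scikit-learn's TfidfVectorizer is below; the sample documents are hypothetical stand-ins for the prepared product text, and get_feature_names_out assumes a recent scikit-learn version:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

docs = pd.Series([
    'angle bracket steel',
    'steel wood screw',
    'wood deck stain',
])

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)  # sparse matrix: rows = documents, columns = terms

# Wrap the result in a DataFrame to inspect the per-term weights.
weights = pd.DataFrame(tfidf.toarray(),
                       columns=vectorizer.get_feature_names_out())
print(weights.round(2))
```

Terms that appear in every document (like "steel" or "wood" here) receive lower weights than terms unique to one document, which is the idf part doing its job.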

24 Jun, 2016

A Shiny New Python Data Science Sandbox in 30 Minutes Or Less

This post will give beginners a full walkthrough to go from nothing to a fully functional Linux/Python/pandas/scikit-learn environment with Jupyter as a front end. For exploratory work, I really like this stack. My native OS is Windows, but since we're using VMs I would imagine the setup for OS X is very similar and probably won't need any modification (other than the steps for configuring the VM). If you have a solid internet connection, we should be able to get this all done in under 30 minutes startiiiinnnnnng NOW... 1. Download an Ubuntu Desktop version of your choice. I like [...]
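
Once the environment is up, a quick sanity check like the following (my own sketch, not a step from the post) confirms the core stack imports cleanly and reports the installed versions:

```python
import sys
import pandas
import sklearn

# If all three imports succeed, the sandbox is ready for exploratory work.
print(sys.version)
print('pandas', pandas.__version__)
print('scikit-learn', sklearn.__version__)
```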

10 May, 2016

Text Pre-processing Basics with Pandas

In this post, we'll take a look at the data provided in Kaggle's Home Depot Product Search Relevance challenge to demonstrate some techniques that may be helpful in getting started with feature generation for text data. Dealing with text data is considerably different from dealing with numerical data, so there are a few basic approaches that are an excellent place to start. As always, before we start creating features we'll need to clean and massage the data! In the Home Depot challenge, we have a few files which provide attributes and descriptions of each of the products on their website. The [...]
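
To give a flavor of the approach, here is a minimal sketch of basic cleaning with pandas string methods; the column name product_description and the sample rows are hypothetical, not taken from the Kaggle files:

```python
import pandas as pd

df = pd.DataFrame({'product_description': [
    'Angle Bracket, 3 in. Steel!',
    '  Wood Deck Stain (1 gal.) ',
]})

cleaned = (df['product_description']
           .str.lower()                                   # normalize case
           .str.replace(r'[^a-z0-9\s]', ' ', regex=True)  # strip punctuation
           .str.split()                                   # tokenize on whitespace
           .str.join(' '))                                # collapse repeated spaces
print(cleaned.tolist())
```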

1 Dec, 2014

Kaggle Titanic Competition Part VII – Random Forests and Feature Importance

In the last post we took a look at how to reduce noisy variables in our data set using PCA, and today we'll actually start modeling! Random Forests are one of the easiest models to run, and they're highly effective as well. A great combination for sure. If you're just starting out with a new problem, this is a great way to quickly build a reference model. There aren't a whole lot of parameters to tune, which makes it very user friendly. The primary parameters include how many decision trees to include in the forest, how much data to include in [...]
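
A minimal sketch of fitting a Random Forest and reading off feature importances with scikit-learn is below; the synthetic data stands in for the Titanic features, which aren't shown here:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: 200 rows, 5 numeric features.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f'feature_{i}' for i in range(5)])

# n_estimators is the number of decision trees in the forest.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```

The feature_importances_ attribute gives a quick, model-based ranking of which variables are doing the most work, which is handy for pruning a feature set.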

10 Nov, 2014

Kaggle Titanic Competition Part V – Interaction Variables

In the last post we covered some ways to derive variables from string fields using intuition and insight. This time we'll cover derived variables that are a lot easier to generate. Interaction variables capture the effects of relationships between variables. They are constructed by performing mathematical operations on sets of features. The simple approach we use in this example is to apply basic operators (add, subtract, multiply, divide) to each pair of numerical features. We could also get much more involved and include more than two features in each calculation, and/or use other operators (sqrt, ln, trig functions, [...]
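
A minimal sketch of the pairwise approach is below; the DataFrame and column names are hypothetical stand-ins for the Titanic features, and only the four basic operators are applied:

```python
import itertools
import pandas as pd

df = pd.DataFrame({'age': [22.0, 38.0, 26.0],
                   'fare': [7.25, 71.28, 7.92],
                   'sibsp': [1.0, 1.0, 2.0]})

# Snapshot the original columns so the loop doesn't see the new ones.
numeric_cols = df.columns.tolist()

# Apply each basic operator to every pair of numeric features.
for a, b in itertools.combinations(numeric_cols, 2):
    df[f'{a}_plus_{b}'] = df[a] + df[b]
    df[f'{a}_minus_{b}'] = df[a] - df[b]
    df[f'{a}_times_{b}'] = df[a] * df[b]
    df[f'{a}_div_{b}'] = df[a] / df[b]

print(df.columns.tolist())
```

Note that this grows the feature count quickly (four new columns per pair), so some downstream pruning, for example via the feature importances above, is usually warranted.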
