Auto-populate Twitter Lists in 5 minutes with Python

2018-01-20 00:00:00 +0000

We are going to create a Python script that will automatically search Twitter for individuals who use the #FreeCodeCamp hashtag and add them to a Twitter list of “FreeCodeCampers”. Twitter lists are a way to curate a group of individuals on Twitter and collect all of their tweets in a stream, without having to follow each individual account. Twitter lists can contain up to 5,000 individual Twitter accounts.

We can accomplish this by installing a couple of Python packages, registering an application with Twitter, accessing our Twitter credentials, making Twitter Search API calls and Twitter list API calls.

1. Installing necessary Python packages

First we need to create a Python file where we will write our script and then import two Python modules into this file:

Import Config

In the same directory as our script we will create a file named config.py that stores our top secret Twitter API credentials. We are going to import our API credentials from that file into our script by including the line import config. Twitter requires a valid API key, API secret, access token and token secret for all API requests.

Import Twython

Twython is a Python wrapper for the Twitter API that makes it easier to programmatically access and manipulate data from Twitter using Python. We can import Twython with the following line from twython import Twython, TwythonError.

Your script should now look like this:

import config
from twython import Twython, TwythonError

2. Twitter Authentication

Second, we need to authenticate our application in order to access the Twitter API. You need to have a Twitter account in order to access Twitter’s Application Management site. The Application Management site is where you can view/edit/create API keys, API secrets, access tokens and token secrets.

  1. In order to create these credentials we need to create an application by going to the Application Management site and clicking “Create new app”, which should direct you to a page that looks similar to the one below.

  2. You should fill out all of the required fields and then click “Create your Twitter application”. You will then be redirected to a page with details about your application.

  3. Click on the tab that says “Keys and Access Tokens”. Copy the Consumer Key (API Key) and Consumer Secret (API Secret) into the config.py file.

  4. Scroll down and click to create access tokens. Copy the generated Access Token and Access Token Secret into the config.py file.

For reference, I recommend formatting your config.py similar to the file below:

Above is my recommended format for a Twitter API config.py file if you are committing the rest of your project to GitHub.
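The recommended format was shown as an image in the original post; here is a minimal config.py sketch along those lines (the placeholder strings are assumptions you replace with your own credentials):

```python
# config.py - keep this file out of version control
api_key = "YOUR_API_KEY"
api_secret = "YOUR_API_SECRET"
access_token = "YOUR_ACCESS_TOKEN"
token_secret = "YOUR_TOKEN_SECRET"
```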

Right now all of our Twitter credentials live inside of our config.py file. We’ve imported config into our script file; however, we have not actually passed any information between the files.

Let’s change that by creating a Twython object and passing in the necessary secret passwords from our config.py with the following:

twitter = Twython(config.api_key, config.api_secret, config.access_token, config.token_secret)

The script file should now look similar to this:

    import config
    from twython import Twython, TwythonError

    # create a Twython object by passing the necessary secret passwords
    twitter = Twython(config.api_key, config.api_secret, config.access_token, config.token_secret)

3. Search Twitter via the API and add users to a list

  • Let’s make an API call to search Twitter and return the 100 most recent, original tweets that contain “#FreeCodeCamp”:

        # return tweets containing #FreeCodeCamp
        response ='"#FreeCodeCamp" -filter:retweets', result_type='recent', count=100)

  • Look at the tweets returned from our search:

        # for each tweet returned from search of #FreeCodeCamp
        for tweet in response['statuses']:
            print(tweet)  # print tweet info if needed for debugging

A single tweet returned by this API call looks like this in JSON:

    {'created_at': 'Sun Dec 24 00:23:05 +0000 2017', 'id': 944725078763298816, 'id_str': '944725078763298816', 'text': 'Why is it so hard to wrap my head around node/express. Diving in just seems so overwhelming. Templates, forms, post…
'truncated': True, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [], 'urls': [{'url': '', 'expanded_url': '', 'display_url': '…', 'indices': [117, 140]}]}, 'metadata': {'iso_language_code': 'en', 'result_type': 'recent'}, 'source': '<a href="" rel="nofollow">Twitter Web Client</a>', 'in_reply_to_status_id': None, 'in_reply_to_status_id_str': None, 'in_reply_to_user_id': None, 'in_reply_to_user_id_str': None, 'in_reply_to_screen_name': None, 'user': {'id': 48602981, 'id_str': '48602981', 'name': 'Matt Huberty', 'screen_name': 'MattHuberty', 'location': 'Oxford, MS', 'description': "I'm a science and video game loving eagle scout with a Microbio degree from UF. Nowadays I'm working on growing my tutoring business at Ole Miss. Link below!", 'url': '', 'entities': {'url': {'urls': [{'url': '', 'expanded_url': '', 'display_url': '', 'indices': [0, 23]}]}, 'description': {'urls': []}}, 'protected': False, 'followers_count': 42, 'friends_count': 121, 'listed_count': 4, 'created_at': 'Fri Jun 19 04:00:44 +0000 2009', 'favourites_count': 991, 'utc_offset': -28800, 'time_zone': 'Pacific Time (US & Canada)', 'geo_enabled': False, 'verified': False, 'statuses_count': 199, 'lang': 'en', 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': '', 'profile_background_image_url_https': '', 'profile_background_tile': False, 'profile_image_url': '', 'profile_image_url_https': '', 'profile_banner_url': '', 'profile_link_color': '1DA1F2', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': True, 'default_profile_image': False, 'following': False, 'follow_request_sent': False, 'notifications': False, 'translator_type': 'none'}, 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 
'retweet_count': 1, 'favorite_count': 0, 'favorited': False, 'retweeted': False, 'lang': 'en'}


  • Add Tweet-ers to our Twitter list

In order to add the author of the tweet to our Twitter list we need the username associated with the tweet: tweet['user']['screen_name'].

Let’s try to add the users from these tweets to our FreeCodeCamp list. I created my Twitter list on Twitter’s website, which means for my script the slug is freecodecampers and the owner_screen_name is mine, waterproofheart.

You can create your own Twitter list by navigating to your Twitter profile on desktop and clicking on the left-hand side to “Create a list”. View the official Twitter List documentation for more information.

    for tweet in response['statuses']:
            # try to add each user who has tweeted the hashtag to the list
            twitter.add_list_member(slug=YOUR_LIST_SLUG, owner_screen_name=YOUR_USERNAME,
        # if for some reason Twython can't add the user to the list, print the exception message
        except TwythonError as e:

You can test your script by running it with python in the terminal.

My working script looks like this:

This script can be set to run automatically, locally or remotely, via a cron job, which allows tasks to be performed on a server on a set schedule.
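For reference, a crontab entry like the following would run the script at the top of every hour (the interpreter and script paths here are hypothetical placeholders):

```
0 * * * * /usr/bin/python /path/to/ >> /tmp/twitter_list.log 2>&1
```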

A version of this article was originally published by Monica Powell on FreeCodeCamp on January, 17th, 2018

Conquering the Command Line

2018-01-02 00:00:00 +0000

Output on Mac OS terminal after typing: telnet

When I was first introduced to the command line I really had to adjust to navigating my computer in a black box with just text. So I avoided the command line as much as possible. I was accustomed to the visual cues and feedback that a computer usually provides. In many ways it felt like I was re-learning how to use a computer via the command line.

Yet, since first learning how to navigate my computer using UNIX commands, I’ve learned that the command line doesn’t have to be a scary thing. For example, there is no visual feedback when typing a password on the command line: as a security measure, nothing shows up as you type your password to indicate that any characters have been entered.

What is the command line?

The command line is software that executes commands, or instructions, for a computer to manipulate or interact with its file system.

What is UNIX?

UNIX is a family of computer operating systems, dating back to the late 1960s, whose conventions (including many of the commands covered below) underpin macOS and Linux.

Why Use the Command Line?

In order to get started on the command line you should navigate to your applications and open the Terminal application.

Above is the Terminal icon on Mac.

Create a Basic Website Folder on the Command Line


Folder structure of sample project

A folder with the above structure can be created on the command line by typing the following commands inside of an empty directory:


We start inside of an empty directory!

  • Make a directory (also known as a folder) called personal-website
    mkdir personal-website


We’ve created a folder named personal-website

  • Navigate to inside of the directory called personal-website
    cd personal-website
  • create a directory, inside of the personal-website folder called assets
    mkdir assets


We’ve created a folder inside of personal-website to contain all of our assets

  • Navigate inside of the assets folder which is inside of the personal-website folder
    cd assets
  • create a directory, inside of the assets folder named images
    mkdir images
  • create a directory, inside of the assets folder named js
    mkdir js
  • create a directory, inside of the assets folder named css
    mkdir css


We’ve created folders inside of personal-website/assets to store our project’s assets


Woops! We forgot to create an index.html file :(

We are in the assets folder and want an index.html file in our main personal-website folder. Typing cd .. will move us out of the assets folder and into the directory above which is personal-website. Now that we are in the personal-website folder if we type touch index.html a blank index.html file will be created.
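The directory tree above can also be created in a single pass with mkdir’s -p flag, which creates any missing intermediate directories:

```shell
mkdir -p personal-website/assets/images personal-website/assets/js personal-website/assets/css
touch personal-website/index.html
```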


Some frequently used terminal commands are:

Commands to Navigate/Manipulate the Filesystem

ls - list the contents of a directory

pwd - print working directory; displays the directory you are currently working in

touch - create an empty file, or update an existing file’s timestamp without changing its contents
very handy when wanting to create empty files without leaving the command line

sudo - this allows you to run commands as a super user

mv - move a file or directory this can be used to move or rename a file by updating the file path

cd - change the current directory you are working in so that you can access files on a different part of the system
cd with no arguments moves you to your home directory (usually the current user’s folder)
cd . refers to the current directory
cd .. navigates to the directory one level up

mkdir - make a new directory (or a folder)

Commands to Install Software

You can install some software from the command line using the following commands:

  • in Python pip install <package name>.
    Pip is a software package manager for Python.
  • in JavaScript npm install <package name>
NPM is a package manager for JavaScript packages.

Commands to Run Software

In order to run a script on the command line you need to provide a command and a file name. Some examples are:

  • in Java, javac compiles Java source files and then java ClassName runs them.
  • in Python python filename runs python scripts.

If you find you are repeating a lot of commands, you can scroll through your recent commands with the up/down arrows, edit them, and re-run one by navigating to it and pressing enter.

Additional Resources to Get Started with Command Line Prompts

Decorating the Command Line

You can completely customize the colors and outputs on the command line to better suit your visual and aesthetic needs.

I made my command line appear prettier by installing the theme Tomorrow Night. Check out this site for instructions on installing the theme Tomorrow Night.

A version of this article was originally published by Monica Powell on FreeCodeCamp on December, 5th, 2017

How to Add Author Bio to Posts in Jekyll

2017-10-02 00:00:00 +0000

The above image is a preview of how the author bio will appear at the end of this tutorial.

Datalogues is powered by Jekyll, a static-site generator. The theme I selected for the site did not support authors out of the box; however, it is easy to implement author functionality in Jekyll.

1) Edit/create appropriate folders and files in Jekyll project

I edited the following existing files:

  • front matter of individual blog posts where the author should be included
  • _layouts/post.html

and created the following folders/files:

  • _data/authors.yml
  • _includes/author_bio.html

2) Store Author Data

I have stored my author data in a folder called _data that contains a file authors.yml. The author information associated with monica_powell is pulled into my post from the authors.yml data file.

    monica_powell:
      name: Monica Powell
      bio: Monica Powell is a web technologist that cares about increasing the visibility of underestimated individuals in technology. In 2015, she received the &#35;GIRLBOSS award from Sophia Amoruso’s Girl Boss Foundation. She’s currently focusing on making tech more enjoyable & accessible and is always up to chat data visualizations, web development or &#35;BlackGirlMagic.

3) Reference relevant authors in the front matter of individual blog posts

In the front matter of each blog post in Jekyll you should reference authors in YAML (YAML Ain’t Markup Language) using the following format: author: NAME OF AUTHOR. The name of the author should exactly match one of the keys in your authors.yml file.

The front matter in Jekyll sets the metadata for a post and is key to properly building posts. YAML is a human friendly data serialization standard for all programming languages.

Here is an example of the front matter for this particular post.

---
layout: post
title: How to Add Author Bio in Jekyll
description: A guide to adding author bios in Jekyll
image: assets/images/author-bio.png
permalink: adding-author-bios-in-jekyll
author: monica_powell
comments: true
---

4) Define HTML for author bio

In the folder _includes, create a file called author_bio.html to define the HTML for how author bios should be displayed.
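The original post showed author_bio.html as an image; here is a minimal sketch of what it might contain (the markup structure and class name are assumptions, not the post’s actual file):

```
{% assign author =[] %}
<div class="author-bio">
  <h3>{{ }}</h3>
  <p>{{ }}</p>
</div>
```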

5) Add author bios to the post layout

Add a line in post.html where author bio should appear and pull in the HTML as defined above in author_bio.html. The logic is set so that it will only call that HTML template if there is author information associated with this particular post.

  {% if %}
      {% include author_bio.html %}
  {% endif %}

All done! Feel free to comment below or tweet me if you have any questions!

How to Use the TMDB API to Find Films with the Highest Revenue

2017-05-28 00:00:00 +0000

Get Out has been one of the most talked about films in 2017 and as of April 2017 the highest grossing debut film based on an original screenplay in history. We want to programmatically find out how Get Out ranked amongst other 2017 American films and which films have earned the most revenue in 2017. This tutorial assumes most readers have basic working knowledge of Python.


  • Install the following Python packages, ideally in a virtualenv:
    • requests
    • pandas
    • matplotlib

    (config refers to the local credentials file we will create and locale is part of the Python standard library, so neither needs to be installed with pip.)
  • In addition to installing the above dependencies we will need to request an API key from The Movie DB (TMDB). TMDB has a free API to programmatically access information about movies.

    • In order to request an API key from TMDB:
      1. Create a free account
      2. Check your e-mail to verify your account.
      3. Visit the API Settings page in your Account Settings and request an API key
      4. You should now have an API key and be ready to go!
import config # to hide TMDB API keys
import requests # to make TMDB API calls
import locale # to format currency as USD
locale.setlocale( locale.LC_ALL, '' )

import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter # to format currency on charts axis

api_key = config.tmdb_api_key # get TMDB API key from file

If you plan on committing your project to GitHub or another public repository and need help setting up config you should read this article about using config to hide API keys.

Part 1: Determine the highest earning American films of 2017

In this section we will request 2017 data from TMDB, store the JSON data we receive in a dataframe and then use matplotlib to visualize it.

Make API Call to TMDB to return the data of interest

In order to get the highest earning films from TMDB an API request needs to be constructed to return films with a primary_release_year of 2017 sorted in descending order by revenue.

response = requests.get('' + api_key + '&primary_release_year=2017&sort_by=revenue.desc')
highest_revenue = response.json() # store parsed json response

# uncomment the next line to get a peek at the highest_revenue json structure
# highest_revenue

highest_revenue_films = highest_revenue['results']

Create dataframe from JSON returned from TMDB API call

Let’s store the JSON data returned from our API call in a dataframe to store each film and its associated revenue.

# define column names for our new dataframe
columns = ['film', 'revenue']

# create dataframe with film and revenue columns
df = pd.DataFrame(columns=columns)

Now to add the data to our dataframe we will need to loop through the data.

# for each of the highest revenue films make an api call for that specific movie to return the budget and revenue
for film in highest_revenue_films:
    # print(film['title'])
    film_revenue = requests.get('' + str(film['id']) + '?api_key=' + api_key + '&language=en-US')
    film_revenue = film_revenue.json()
    #print(locale.currency(film_revenue['revenue'], grouping=True ))
    df.loc[len(df)]=[film['title'],film_revenue['revenue']] # store title and revenue in our dataframe    

Below is what the dataframe head (top 5 lines) looks like after iterating through the films our API call returned.

film revenue
0 Beauty and the Beast 1221782049
1 The Fate of the Furious 1212583865
2 Guardians of the Galaxy Vol. 2 744784722
3 Logan 608674100
4 Kong: Skull Island 565151307

Let’s actually see the data with matplotlib

We will create a horizontal bar chart using matplotlib to display the revenue earned for each film.

# use the ggplot style and plot the dataframe'ggplot')
fig, ax = plt.subplots()
df.plot(kind="barh", y='revenue', color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'], x='film', ax=ax)

# format x-axis values in terms of currency
def currency(x, pos):
    # helper used by FuncFormatter to render axis values as currency
    return locale.currency(x, grouping=True)

formatter = FuncFormatter(currency)

avg = df['revenue'].mean()

# Add a line for the average
ax.axvline(x=avg, color='b', label='Average', linestyle='--', linewidth=1)

ax.set(title='American Films with Highest Revenue (2017)', xlabel='Revenue', ylabel='Film')


Part 2: Determine the highest earning American films of all-time

In this section we will request all-time data from TMDB, store the JSON data we receive in a dataframe and then use matplotlib to visualize it. Our API call will be similar to the one we used in the previous section but sans &primary_release_year=2017.

Requesting, formatting and storing API data

response = requests.get('' + api_key + '&sort_by=revenue.desc')
highest_revenue_ever = response.json()
highest_revenue_films_ever = highest_revenue_ever['results']

columns = ['film', 'revenue', 'budget', 'release_date']
highest_revenue_ever_df = pd.DataFrame(columns=columns)

for film in highest_revenue_films_ever:
    # print(film['title'])

    film_revenue = requests.get('' + str(film['id']) + '?api_key=' + api_key + '&language=en-US')
    film_revenue = film_revenue.json()
    # print(film_revenue)

    # print(locale.currency(film_revenue['revenue'], grouping=True ))

    # A Lord of the Rings duplicate with bad data was being returned.
    # Its budget was $281, which is way too low for a top-earning film. Therefore, in order to be
    # added to the dataframe, a film's budget must be greater than $281.

    if film_revenue['budget'] > 281:
        # print(film_revenue['budget'])
        # add film title, revenue, budget and release date to the dataframe
        highest_revenue_ever_df.loc[len(highest_revenue_ever_df)]=[film['title'],film_revenue['revenue'], (film_revenue['budget'] * -1), film_revenue['release_date']]


film revenue budget release_date
0 Avatar 2781505847 -237000000 2009-12-10
1 Star Wars: The Force Awakens 2068223624 -245000000 2015-12-15
2 Titanic 1845034188 -200000000 1997-11-18
3 The Avengers 1519557910 -220000000 2012-04-25
4 Jurassic World 1513528810 -150000000 2015-06-09

Calculate the gross profit

We can calculate the gross profit by subtracting the amount spent from the total revenue. Earlier we made the budget values negative, so we add the revenue to the (negative) budget to get the gross profit, which is effectively subtraction.

highest_revenue_ever_df['gross'] = highest_revenue_ever_df['revenue'] + highest_revenue_ever_df['budget']
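As a quick sanity check of that sign trick, here is a self-contained sketch using just the Avatar row from the table above:

```python
import pandas as pd

# one-row dataframe using the Avatar figures from the table above
sample_df = pd.DataFrame({"film": ["Avatar"], "revenue": [2781505847], "budget": [-237000000]})

# budget is stored as a negative number, so adding it subtracts it from revenue
sample_df["gross"] = sample_df["revenue"] + sample_df["budget"]
print(sample_df["gross"].iloc[0])  # 2544505847
```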

What does the dataframe look like now?

film revenue budget release_date gross
0 Avatar 2781505847 -237000000 2009-12-10 2544505847
1 Star Wars: The Force Awakens 2068223624 -245000000 2015-12-15 1823223624
2 Titanic 1845034188 -200000000 1997-11-18 1645034188
3 The Avengers 1519557910 -220000000 2012-04-25 1299557910
4 Jurassic World 1513528810 -150000000 2015-06-09 1363528810

Plotting data in matplotlib with horizontal bar charts and a scatter plot

fig, ax = plt.subplots()
highest_revenue_ever_df.plot(kind="barh", y='revenue', color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'], x='film', ax=ax)
formatter = FuncFormatter(currency)
ax.set(title='American Films with Highest Revenue (All Time)', xlabel='Revenue', ylabel='Film')


fig, ax = plt.subplots()
highest_revenue_ever_df.plot(kind="barh", y='gross', color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'], x='film', ax=ax)
formatter = FuncFormatter(currency)
ax.set(title='Gross Profit of the American Films with Highest Revenue (All Time)', xlabel='Gross Profit', ylabel='Film')


fig, ax = plt.subplots()
highest_revenue_ever_df.plot(kind='scatter', y='gross', x='budget', ax=ax)
formatter = FuncFormatter(currency)
ax.set(title='Profit vs Budget of the American Films with Highest Revenue (All Time)', xlabel='Budget', ylabel='Gross Profit')



# Adding release year to dataframe
# highest_revenue_ever_df['year'] = pd.DatetimeIndex(highest_revenue_ever_df['release_date']).year
# print(highest_revenue_ever_df)


The above data and graphs do not account for inflation (the TMDB API returns revenue unadjusted for inflation), therefore the earnings of more recent films are weighted more heavily than those of their earlier counterparts. When looking at all-time data, inflation should be adjusted for; over a shorter time period, adjusting for inflation might not be necessary. Older films would appear above if inflation were taken into account; as it is now, the oldest film on the list is Titanic from 1997.

Cover photo is Chris Washington, played by Daniel Kaluuya, from Get Out. Universal Pictures

How to Hide Your API Keys in Python

2017-05-27 00:00:00 +0000

Protect your application’s API Keys while committing to Git.

If you plan on programming any applications and storing your code in a public GitHub repository then it is important that you protect your API keys 🔑 by ensuring that they are not searchable or otherwise publicly accessible.

What’s an API?

An application programming interface (API) is a structured set of instructions for building applications. If you want to leverage data from services such as Twitter, The New York Times, Slack, Spotify etc., then you should read their API documentation to figure out how to structure your queries to receive data from their service or to post on their service.

What are API keys?

API keys allow developers to access APIs and are unique keys associated with that particular developer and/or application. Just like you shouldn’t share your passwords, you should never share your API keys. It is important to protect your API keys so that others cannot take actions as you, which could result in your API key being revoked due to somebody else exceeding rate limits or abusing/violating an API’s terms of service. A rate limit is when an application limits the number of API calls that a specific application or user can make during a specified period of time.

How do I protect my API keys on Github?

Here’s how to hide API keys in Python from GitHub by using a config file to store your sensitive API keys and tokens separately from your main script. I used similar code when accessing the Twitter Search API for my blackgirlmagic twitter bot.

Create 3 Files in Your Application

config.py: This file will store your API keys. You just need to update the placeholder portion in the strings with your API keys; depending on the service you may or may not need all four types of API keys. These four in particular are required to create a Twitter application.

Your main script file: this will store the main script that needs to access the API keys. This file can be named whatever you like.


.gitignore: A .gitignore file tells Git to ignore the noted files, directories or files that end in specific extensions when committing files to GitHub. **This step is crucial to ensure that your config.py file does not end up viewable on GitHub!** Here’s a collection of useful .gitignore templates.
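For example, a one-line .gitignore is enough to keep the credentials file out of the repository:

```
```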

Originally published at Black Tech Diva.

How to Change Repo Language in GitHub

2017-05-20 00:00:00 +0000

I recently started working on a Weather app in Flask that auto-detects a user’s location based on their IP address. After committing some updates to GitHub, my app switched from being labeled as predominantly Python to 98.9% CSS, even though it was a Flask application in which most of the code I had written was Python and HTML. Every now and again, I do not agree with how GitHub classifies the languages in my repositories, so I set out to figure out how to fix this issue.


Before: My Flask App Appeared in GitHub as 98.9% CSS.

Pro-tip: Help GitHub properly detect your repositories main language(s).

GitHub has a linguist library that auto-detects the language within every repository. Upon researching how to resolve GitHub misclassifying the language of your projects, I found out the solution is as simple as telling GitHub which files to ignore. While you still want to commit these files to GitHub and therefore can’t use a .gitignore, you can tell GitHub’s linguist which files to ignore in a .gitattributes file. (Side note: Check out my piece on “Hiding API Keys from GitHub” if you are interested in learning about .gitignore).

the solution is as simple as telling GitHub which files to ignore!

Upon examining the documentation for the linguist library I learned that adding just one line to a .gitattributes file would resolve my language issues for this particular repo.

My .gitattributes file:
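The file itself appeared as an image in the original post; based on the description below and the linguist documentation, it likely contained a single line along these lines (assuming the vendored assets live in static/):

```
static/* linguist-vendored
```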

This one-line file told GitHub to ignore all of the files in my static/ folder, which is where CSS and other assets are stored for a Flask app. Vendored files can sometimes take up a lot of relative space, so I am telling the linguist to just ignore them (since they were accounting for 98.9% of my project)!


After: My Flask App appears in GitHub now as 56.2% Python and 43.8% HTML. Here’s a repository with sample .gitattributes files for you to try the next time you disagree with the linguist ;). Note: If the linguist truly is wrong, GitHub encourages you to report it as an issue.

I hope this article was helpful! I would love to hear some of your tricks for GitHub and am happy to answer any questions you may have.

Also published on Medium.

How to Import CSV and XLS Data into Pandas

2017-05-20 00:00:00 +0000

Pandas is a Python Data Analysis Library. It allows you to play around with data and perform powerful data analysis.

In this example I will show you how to read data from CSV and Excel files in Pandas. You can then save the read output as a Pandas dataframe. The sample data used in the below exercise was generated with an online mock-data generator.

import pandas as pd
csv_data_df = pd.read_csv('data/MOCK_DATA.csv')

Preview the first 5 lines of the data with csv_data_df.head() to ensure that it loaded.

id first_name last_name email gender ip_address
0 1 Ross Ricart Male
1 2 Jenn Pizer Female
2 3 Delainey Sulley Male
3 4 Nessie Feirn Female
4 5 Noami Flanner Female
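If you’d like to experiment without a file on disk, read_csv also accepts any file-like object; below is a self-contained sketch (the inline rows mirror the first columns of the mock dataset):

```python
import io
import pandas as pd

# a small in-memory CSV standing in for data/MOCK_DATA.csv
mock_csv = io.StringIO("id,first_name,last_name\n1,Ross,Ricart\n2,Jenn,Pizer\n")
csv_data_df = pd.read_csv(mock_csv)
print(len(csv_data_df))  # 2
```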

In order to import data from Excel, you will need to pip install xlrd if you haven’t already.

import xlrd
excel_data_df = pd.read_excel('data/MOCK_DATA.xlsx')
excel_data_df.head()
id first_name last_name email gender ip_address
0 1 Chloris Antliff Female
1 2 Brion Gierok Male
2 3 Fleur Skells Female
3 4 Dora Privost Female
4 5 Annabella Hucker Female

Image Courtesy of jballeis (Own work) CC BY-SA 3.0, via Wikimedia Commons

