An SQLite database can be read directly into Python pandas (a data analysis library). In this article we'll demonstrate loading data from an SQLite database table into a pandas DataFrame. We'll also briefly cover creating the SQLite database table using Python.
Related course
Data Analysis with Python Pandas
We create a simple dataset using this code:
import sqlite3 as lite
import sys

con = lite.connect('population.db')
with con:
    cur = con.cursor()
    cur.execute("CREATE TABLE Population(id INTEGER PRIMARY KEY, country TEXT, population INT)")
    cur.execute("INSERT INTO Population VALUES(NULL,'Germany',81197537)")
    cur.execute("INSERT INTO Population VALUES(NULL,'France',66415161)")
    cur.execute("INSERT INTO Population VALUES(NULL,'Spain',46439864)")
    cur.execute("INSERT INTO Population VALUES(NULL,'Italy',60795612)")
This creates the SQLite database (population.db) containing one table with dummy data. Using the connection as a context manager (with con:) commits the transaction automatically.
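To confirm the rows were written, a quick sketch along the same lines. It uses an in-memory database so it runs standalone; point connect() at 'population.db' instead to inspect the file created above. The executemany call with placeholders is an alternative to the repeated INSERT statements, not the article's original code.

```python
import sqlite3 as lite

# Sketch: create the table, insert the same dummy data, and count the rows.
# ':memory:' keeps the example self-contained; swap in 'population.db'
# to check the database file created above.
con = lite.connect(':memory:')
with con:
    cur = con.cursor()
    cur.execute("CREATE TABLE Population(id INTEGER PRIMARY KEY, country TEXT, population INT)")
    cur.executemany(
        "INSERT INTO Population(country, population) VALUES (?, ?)",
        [('Germany', 81197537), ('France', 66415161),
         ('Spain', 46439864), ('Italy', 60795612)],
    )
    cur.execute("SELECT COUNT(*) FROM Population")
    row_count = cur.fetchone()[0]
    print(row_count)  # 4
```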
SQLite to Pandas DataFrame

An SQL query result can be stored directly in a pandas DataFrame:
import pandas as pd
import sqlite3

conn = sqlite3.connect('population.db')
query = "SELECT country FROM Population WHERE population > 50000000;"

df = pd.read_sql_query(query, conn)

for country in df['country']:
    print(country)
We connect to the SQLite database using the line:
conn = sqlite3.connect('population.db')
The line that converts the SQLite data to a pandas DataFrame is:

df = pd.read_sql_query(query, conn)

where query is a standard SQL query. The DataFrame (df) contains the actual data.