Beginner’s Guide to Data Analysis with Polars

Image by Author | Ideogram

 

Introduction

 
When you’re new to analyzing data with Python, pandas is usually the first library most analysts learn and use. But Polars has become very popular as a faster, more memory-efficient alternative.

Written in Rust, Polars handles data processing tasks that would slow down other tools. It is designed for speed, memory efficiency, and ease of use. In this beginner-friendly article, we’ll spin up fictional coffee shop data and analyze it to learn Polars. Sound interesting? Let’s begin!

🔗 Link to the code on GitHub

 

Installing Polars

 
Before we dive into analyzing data, let’s get the installation out of the way. First, install Polars and NumPy:

!pip install polars numpy

 

Now, let’s import the libraries and modules:

import polars as pl
import numpy as np
from datetime import datetime, timedelta

 

We use pl as the standard alias for Polars and np for NumPy; datetime and timedelta will help us generate transaction dates.
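
If you want to confirm which Polars version you’re running (this guide doesn’t depend on a specific one), you can print it:

# Optional: check the installed Polars version
print(pl.__version__)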

 

Creating Sample Data

 
Imagine you’re managing a small coffee shop, say “Bean There,” and have hundreds of receipts and related data to analyze. You want to understand which drinks sell best, which days bring in the most revenue, and related questions. So yeah, let’s start coding! ☕

To make this guide practical, let’s create a realistic dataset for “Bean There Coffee Shop.” We’ll generate data that any small business owner would recognize:

# Set up for consistent results
np.random.seed(42)

# Create realistic coffee shop data
def generate_coffee_data():
    n_records = 2000
    # Coffee menu items with realistic prices
    menu_items = ['Espresso', 'Cappuccino', 'Latte', 'Americano', 'Mocha', 'Cold Brew']
    prices = [2.50, 4.00, 4.50, 3.00, 5.00, 3.50]
    price_map = dict(zip(menu_items, prices))

    # Generate dates over 6 months
    start_date = datetime(2023, 6, 1)
    dates = [start_date + timedelta(days=np.random.randint(0, 180))
             for _ in range(n_records)]

    # Randomly select drinks, then map the correct price for each selected drink
    drinks = np.random.choice(menu_items, n_records)
    prices_chosen = [price_map[d] for d in drinks]

    data = {
        'date': dates,
        'drink': drinks,
        'price': prices_chosen,
        'quantity': np.random.choice([1, 1, 1, 2, 2, 3], n_records),
        'customer_type': np.random.choice(['Regular', 'New', 'Tourist'],
                                          n_records, p=[0.5, 0.3, 0.2]),
        'payment_method': np.random.choice(['Card', 'Cash', 'Mobile'],
                                           n_records, p=[0.6, 0.2, 0.2]),
        'rating': np.random.choice([2, 3, 4, 5], n_records, p=[0.1, 0.4, 0.4, 0.1])
    }
    return data

# Create our coffee shop DataFrame
coffee_data = generate_coffee_data()
df = pl.DataFrame(coffee_data)

 

This creates a sample dataset with 2,000 coffee transactions. Each row represents one sale with details like what was ordered, when, how much it cost, and who bought it.

 

Looking at Your Data

 
Before analyzing any data, you need to understand what you’re working with. Think of this like looking at a new recipe before you start cooking:

# Take a peek at your data
print("First 5 transactions:")
print(df.head())

print("\nWhat types of data do we have?")
print(df.schema)

print("\nHow big is our dataset?")
print(f"We have {df.height} transactions and {df.width} columns")

 

The head() method shows you the first few rows, the schema attribute tells you what type of information each column contains (numbers, text, dates, and so on), and the height and width attributes give the number of rows and columns.

First 5 transactions:
shape: (5, 7)
┌─────────────────────┬────────────┬───────┬──────────┬───────────────┬────────────────┬────────┐
│ date                ┆ drink      ┆ price ┆ quantity ┆ customer_type ┆ payment_method ┆ rating │
│ ---                 ┆ ---        ┆ ---   ┆ ---      ┆ ---           ┆ ---            ┆ ---    │
│ datetime[μs]        ┆ str        ┆ f64   ┆ i64      ┆ str           ┆ str            ┆ i64    │
╞═════════════════════╪════════════╪═══════╪══════════╪═══════════════╪════════════════╪════════╡
│ 2023-09-11 00:00:00 ┆ Cold Brew  ┆ 5.0   ┆ 1        ┆ New           ┆ Cash           ┆ 4      │
│ 2023-11-27 00:00:00 ┆ Cappuccino ┆ 4.5   ┆ 1        ┆ New           ┆ Card           ┆ 4      │
│ 2023-09-01 00:00:00 ┆ Espresso   ┆ 4.5   ┆ 1        ┆ Regular       ┆ Card           ┆ 3      │
│ 2023-06-15 00:00:00 ┆ Cappuccino ┆ 5.0   ┆ 1        ┆ New           ┆ Card           ┆ 4      │
│ 2023-09-15 00:00:00 ┆ Mocha      ┆ 5.0   ┆ 2        ┆ Regular       ┆ Card           ┆ 3      │
└─────────────────────┴────────────┴───────┴──────────┴───────────────┴────────────────┴────────┘

What types of data do we have?
Schema({'date': Datetime(time_unit="us", time_zone=None), 'drink': String, 'price': Float64, 'quantity': Int64, 'customer_type': String, 'payment_method': String, 'rating': Int64})

How big is our dataset?
We have 2000 transactions and 7 columns
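
Two other checks worth running at this stage (not used elsewhere in this guide) are describe(), which gives summary statistics per column, and null_count(), which counts missing values:

# Quick statistical summary of every column
print(df.describe())

# Count missing values per column (all zeros for our generated data)
print(df.null_count())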

 

Adding New Columns

 
Now let’s start extracting business insights. Every coffee shop owner wants to know their total revenue per transaction:

# Calculate total sales amount and add useful date information
df_enhanced = df.with_columns([
    # Calculate revenue per transaction
    (pl.col('price') * pl.col('quantity')).alias('total_sale'),

    # Extract useful date components
    pl.col('date').dt.weekday().alias('day_of_week'),
    pl.col('date').dt.month().alias('month'),
    pl.col('date').dt.hour().alias('hour_of_day')
])

print("Sample of enhanced data:")
print(df_enhanced.head())

 

Output (your exact numbers may vary):

Sample of enhanced data:
shape: (5, 11)
┌─────────────┬────────────┬───────┬──────────┬───┬────────────┬─────────────┬───────┬─────────────┐
│ date        ┆ drink      ┆ price ┆ quantity ┆ … ┆ total_sale ┆ day_of_week ┆ month ┆ hour_of_day │
│ ---         ┆ ---        ┆ ---   ┆ ---      ┆   ┆ ---        ┆ ---         ┆ ---   ┆ ---         │
│ datetime[μs ┆ str        ┆ f64   ┆ i64      ┆   ┆ f64        ┆ i8          ┆ i8    ┆ i8          │
│ ]           ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
╞═════════════╪════════════╪═══════╪══════════╪═══╪════════════╪═════════════╪═══════╪═════════════╡
│ 2023-09-11  ┆ Cold Brew  ┆ 5.0   ┆ 1        ┆ … ┆ 5.0        ┆ 1           ┆ 9     ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
│ 2023-11-27  ┆ Cappuccino ┆ 4.5   ┆ 1        ┆ … ┆ 4.5        ┆ 1           ┆ 11    ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
│ 2023-09-01  ┆ Espresso   ┆ 4.5   ┆ 1        ┆ … ┆ 4.5        ┆ 5           ┆ 9     ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
│ 2023-06-15  ┆ Cappuccino ┆ 5.0   ┆ 1        ┆ … ┆ 5.0        ┆ 4           ┆ 6     ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
│ 2023-09-15  ┆ Mocha      ┆ 5.0   ┆ 2        ┆ … ┆ 10.0       ┆ 5           ┆ 9     ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
└─────────────┴────────────┴───────┴──────────┴───┴────────────┴─────────────┴───────┴─────────────┘

 

Here’s what’s happening:

  • with_columns() adds new columns to our data
  • pl.col() refers to existing columns
  • alias() gives our new columns descriptive names
  • The dt accessor extracts parts from dates (like getting just the month from a full date)

Think of this like adding calculated fields to a spreadsheet. We’re not changing the original data, just adding more information to work with.
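
As a small optional extension of the same idea (a sketch; the is_weekend column isn’t used later in this guide), you can chain another with_columns call. Since dt.weekday() returns 1 for Monday through 7 for Sunday, flagging weekends is a one-liner:

# Flag weekend transactions (weekday 6 = Saturday, 7 = Sunday)
df_weekend = df_enhanced.with_columns(
    pl.col('day_of_week').is_in([6, 7]).alias('is_weekend')
)
print(df_weekend.select(['date', 'day_of_week', 'is_weekend']).head())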

 

Grouping Data

 
Let’s now answer some interesting questions.

// Question 1: Which drinks are our best sellers?

This code groups all transactions by drink type, then calculates totals and averages for each group. It’s like sorting all your receipts into piles by drink type, then calculating totals for each pile.

drink_performance = (df_enhanced
    .group_by('drink')
    .agg([
        pl.col('total_sale').sum().alias('total_revenue'),
        pl.col('quantity').sum().alias('total_sold'),
        pl.col('rating').mean().alias('avg_rating')
    ])
    .sort('total_revenue', descending=True)
)

print("Drink performance ranking:")
print(drink_performance)

 
Output:

Drink performance ranking:
shape: (6, 4)
┌────────────┬───────────────┬────────────┬────────────┐
│ drink      ┆ total_revenue ┆ total_sold ┆ avg_rating │
│ ---        ┆ ---           ┆ ---        ┆ ---        │
│ str        ┆ f64           ┆ i64        ┆ f64        │
╞════════════╪═══════════════╪════════════╪════════════╡
│ Americano  ┆ 2242.0        ┆ 595        ┆ 3.476454   │
│ Mocha      ┆ 2204.0        ┆ 591        ┆ 3.492711   │
│ Espresso   ┆ 2119.5        ┆ 570        ┆ 3.514793   │
│ Cold Brew  ┆ 2035.5        ┆ 556        ┆ 3.475758   │
│ Cappuccino ┆ 1962.5        ┆ 521        ┆ 3.541139   │
│ Latte      ┆ 1949.5        ┆ 514        ┆ 3.528846   │
└────────────┴───────────────┴────────────┴────────────┘
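
Because an aggregation inside an expression broadcasts across the whole frame, you can also add a revenue-share column to this ranking. A quick sketch (the column name revenue_share_pct is just illustrative):

# Add each drink's share of total revenue, in percent
drink_share = drink_performance.with_columns(
    (pl.col('total_revenue') / pl.col('total_revenue').sum() * 100)
    .round(1)
    .alias('revenue_share_pct')
)
print(drink_share)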

 

// Question 2: What do the daily sales look like?

Now let’s find the number of transactions and the corresponding revenue for each day of the week. In Polars, dt.weekday() follows the ISO convention: 1 is Monday and 7 is Sunday.

daily_patterns = (df_enhanced
    .group_by('day_of_week')
    .agg([
        pl.col('total_sale').sum().alias('daily_revenue'),
        pl.len().alias('number_of_transactions')
    ])
    .sort('day_of_week')
)

print("Daily business patterns:")
print(daily_patterns)

 
Output:

Daily business patterns:
shape: (7, 3)
┌─────────────┬───────────────┬────────────────────────┐
│ day_of_week ┆ daily_revenue ┆ number_of_transactions │
│ ---         ┆ ---           ┆ ---                    │
│ i8          ┆ f64           ┆ u32                    │
╞═════════════╪═══════════════╪════════════════════════╡
│ 1           ┆ 2061.0        ┆ 324                    │
│ 2           ┆ 1761.0        ┆ 276                    │
│ 3           ┆ 1710.0        ┆ 278                    │
│ 4           ┆ 1784.0        ┆ 288                    │
│ 5           ┆ 1651.5        ┆ 265                    │
│ 6           ┆ 1596.0        ┆ 259                    │
│ 7           ┆ 1949.5        ┆ 310                    │
└─────────────┴───────────────┴────────────────────────┘
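
The weekday numbers are a bit terse. One way to make the table more readable, shown here only as a sketch, is to derive day names directly from the date column with dt.strftime and group by those instead:

# Group by weekday name instead of the numeric code
daily_named = (df_enhanced
    .with_columns(pl.col('date').dt.strftime('%A').alias('day_name'))
    .group_by('day_name')
    .agg(pl.col('total_sale').sum().alias('daily_revenue'))
    .sort('daily_revenue', descending=True)
)
print(daily_named)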

 

Filtering Data

 
Let’s find our high-value transactions:

# Find transactions over $10 (multiple items or expensive drinks)
big_orders = (df_enhanced
    .filter(pl.col('total_sale') > 10.0)
    .sort('total_sale', descending=True)
)

print(f"We have {big_orders.height} orders over $10")
print("Top 5 biggest orders:")
print(big_orders.head())

 
Output:

We have 204 orders over $10
Top 5 biggest orders:
shape: (5, 11)
┌─────────────┬────────────┬───────┬──────────┬───┬────────────┬─────────────┬───────┬─────────────┐
│ date        ┆ drink      ┆ price ┆ quantity ┆ … ┆ total_sale ┆ day_of_week ┆ month ┆ hour_of_day │
│ ---         ┆ ---        ┆ ---   ┆ ---      ┆   ┆ ---        ┆ ---         ┆ ---   ┆ ---         │
│ datetime[μs ┆ str        ┆ f64   ┆ i64      ┆   ┆ f64        ┆ i8          ┆ i8    ┆ i8          │
│ ]           ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
╞═════════════╪════════════╪═══════╪══════════╪═══╪════════════╪═════════════╪═══════╪═════════════╡
│ 2023-07-21  ┆ Cappuccino ┆ 5.0   ┆ 3        ┆ … ┆ 15.0       ┆ 5           ┆ 7     ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
│ 2023-08-02  ┆ Latte      ┆ 5.0   ┆ 3        ┆ … ┆ 15.0       ┆ 3           ┆ 8     ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
│ 2023-07-21  ┆ Cappuccino ┆ 5.0   ┆ 3        ┆ … ┆ 15.0       ┆ 5           ┆ 7     ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
│ 2023-10-08  ┆ Cappuccino ┆ 5.0   ┆ 3        ┆ … ┆ 15.0       ┆ 7           ┆ 10    ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
│ 2023-09-07  ┆ Latte      ┆ 5.0   ┆ 3        ┆ … ┆ 15.0       ┆ 4           ┆ 9     ┆ 0           │
│ 00:00:00    ┆            ┆       ┆          ┆   ┆            ┆             ┆       ┆             │
└─────────────┴────────────┴───────┴──────────┴───┴────────────┴─────────────┴───────┴─────────────┘
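
Filters can also be combined. For instance, here’s a sketch that looks for big orders paid by card on weekends, chaining conditions with & (each condition wrapped in parentheses):

# Big weekend orders paid by card
big_weekend_card = df_enhanced.filter(
    (pl.col('total_sale') > 10.0)
    & (pl.col('day_of_week').is_in([6, 7]))
    & (pl.col('payment_method') == 'Card')
)
print(f"Big weekend card orders: {big_weekend_card.height}")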

 

Analyzing Customer Behavior

 
Let’s look into customer patterns:

# Analyze customer behavior by type
customer_analysis = (df_enhanced
    .group_by('customer_type')
    .agg([
        pl.col('total_sale').mean().alias('avg_spending'),
        pl.col('total_sale').sum().alias('total_revenue'),
        pl.len().alias('visit_count'),
        pl.col('rating').mean().alias('avg_satisfaction')
    ])
    .with_columns([
        # Calculate revenue per visit
        (pl.col('total_revenue') / pl.col('visit_count')).alias('revenue_per_visit')
    ])
)

print("Customer behavior analysis:")
print(customer_analysis)

 

Output:

Customer behavior analysis:
shape: (3, 6)
┌───────────────┬──────────────┬───────────────┬─────────────┬──────────────────┬──────────────────┐
│ customer_type ┆ avg_spending ┆ total_revenue ┆ visit_count ┆ avg_satisfaction ┆ revenue_per_visi │
│ ---           ┆ ---          ┆ ---           ┆ ---         ┆ ---              ┆ t                │
│ str           ┆ f64          ┆ f64           ┆ u32         ┆ f64              ┆ ---              │
│               ┆              ┆               ┆             ┆                  ┆ f64              │
╞═══════════════╪══════════════╪═══════════════╪═════════════╪══════════════════╪══════════════════╡
│ Regular       ┆ 6.277832     ┆ 6428.5        ┆ 1024        ┆ 3.499023         ┆ 6.277832         │
│ Tourist       ┆ 6.185185     ┆ 2505.0        ┆ 405         ┆ 3.518519         ┆ 6.185185         │
│ New           ┆ 6.268827     ┆ 3579.5        ┆ 571         ┆ 3.502627         ┆ 6.268827         │
└───────────────┴──────────────┴───────────────┴─────────────┴──────────────────┴──────────────────┘
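
group_by also accepts several columns at once, which is handy when you want to slice behavior two ways. As a sketch, here’s revenue broken down by customer type and payment method together:

# Revenue by customer type and payment method
payment_by_customer = (df_enhanced
    .group_by(['customer_type', 'payment_method'])
    .agg(pl.col('total_sale').sum().alias('revenue'))
    .sort(['customer_type', 'revenue'], descending=[False, True])
)
print(payment_by_customer)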

 

Putting It All Together

 
Let’s create a comprehensive business summary:

# Create a complete business summary
business_summary = {
    'total_revenue': df_enhanced['total_sale'].sum(),
    'total_transactions': df_enhanced.height,
    'average_transaction': df_enhanced['total_sale'].mean(),
    'best_selling_drink': drink_performance.row(0)[0],  # First row, first column
    'customer_satisfaction': df_enhanced['rating'].mean()
}

print("\n=== BEAN THERE COFFEE SHOP - SUMMARY ===")
for key, value in business_summary.items():
    if isinstance(value, float) and key != 'customer_satisfaction':
        print(f"{key.replace('_', ' ').title()}: ${value:.2f}")
    else:
        print(f"{key.replace('_', ' ').title()}: {value}")

 

Output:

=== BEAN THERE COFFEE SHOP - SUMMARY ===
Total Revenue: $12513.00
Total Transactions: 2000
Average Transaction: $6.26
Best Selling Drink: Americano
Customer Satisfaction: 3.504
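
If you want to keep these results around, Polars can write DataFrames straight to CSV or Parquet. The file names below are just examples:

# Save the enhanced transactions and the drink ranking for later use
df_enhanced.write_parquet('bean_there_transactions.parquet')
drink_performance.write_csv('drink_performance.csv')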

 

Conclusion

 
You’ve just completed a comprehensive introduction to data analysis with Polars! Using our coffee shop example, (I hope) you’ve learned how to transform raw transaction data into meaningful business insights.

Remember, becoming proficient at data analysis is like learning to cook — you start with basic recipes (like the examples in this guide) and gradually get better. The key is practice and curiosity.

Next time you analyze a dataset, ask yourself:

  • What story does this data tell?
  • What patterns might be hidden here?
  • What questions could this data answer?

Then use your new Polars skills to find out. Happy analyzing!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.