How I Built a Data Cleaning Pipeline Using One Messy DoorDash Dataset

October 17, 2025
Image by Editor

 

# Introduction

 
According to CrowdFlower’s survey, data scientists spend 60% of their time organizing and cleaning data.

In this article, we’ll walk through building a data cleaning pipeline using a real-life dataset from DoorDash. It contains nearly 200,000 food delivery records, each of which includes dozens of features such as delivery time, total items, and store category (e.g., Mexican, Thai, or American cuisine).

 

# Predicting Food Delivery Times with DoorDash Data

 
 
DoorDash aims to accurately estimate how long it takes to deliver food, from the moment a customer places an order to the time it arrives at their door. In this data project, we’re tasked with creating a model that predicts the total delivery duration based on historical delivery data.

However, we won’t do the whole project, i.e., we won’t build a predictive model. Instead, we’ll use the dataset provided in the project and create a data cleaning pipeline.

Our workflow consists of two main steps.

 
 

 

# Data Exploration

 
 

Let’s begin by loading and viewing the first few rows of the dataset.

 

// Load and Preview the Dataset

import pandas as pd

# Load the dataset and preview the first few rows
df = pd.read_csv("historical_data.csv")
df.head()

 

Here is the output.

 
 

This dataset includes datetime columns that capture the order creation time and the actual delivery time, which can be used to calculate delivery duration. It also contains other features such as store category, total item count, subtotal, and minimum item price, making it suitable for various kinds of data analysis. We can already see that there are some NaN values, which we’ll explore more closely in the following step.

 

// Explore the Columns With info()

Let’s check all the column names with the info() method. We’ll use this method throughout the article to see the changes in column non-null counts; it’s a good indicator of missing data and overall data health.
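
The call itself, given the df we loaded above, is a single line:

# Show column names, non-null counts, and data types
df.info()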

 

Here is the output.

 
 

As you can see, we have 15 columns, but the number of non-null values differs across them. This means some columns contain missing values, which could affect our analysis if not handled properly. One last thing: the created_at and actual_delivery_time data types are objects; these should be datetime.

 

# Building the Data Cleaning Pipeline

 
In this step, we build a structured data cleaning pipeline to prepare the dataset for modeling. Each stage addresses common issues such as timestamp formatting, missing values, and irrelevant features.
 
 

// Fixing the Date and Time Columns’ Data Types

Before doing any analysis, we need to fix the columns that store the time. Otherwise, the calculation we mentioned (actual_delivery_time - created_at) will go wrong.

What we’re fixing:

  • created_at: when the order was placed
  • actual_delivery_time: when the food arrived

These two columns are stored as objects, so to be able to do calculations correctly, we have to convert them to the datetime format. To do that, we can use the datetime functions in pandas. Here is the code.

import pandas as pd

df = pd.read_csv("historical_data.csv")

# Convert timestamp strings to datetime objects (invalid values become NaT)
df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
df["actual_delivery_time"] = pd.to_datetime(df["actual_delivery_time"], errors="coerce")
df.info()

 

Here is the output.

 
 

As you can see from the screenshot above, created_at and actual_delivery_time are now datetime objects.
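
Now that the timestamps are proper datetimes, the duration calculation mentioned earlier works as expected. Here is a minimal check, computed as a standalone series rather than a new column, since it isn’t part of the cleaning pipeline itself:

# Delivery duration in minutes, derived from the fixed timestamps
delivery_duration_min = (
    df["actual_delivery_time"] - df["created_at"]
).dt.total_seconds() / 60
delivery_duration_min.describe()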

 
 

Among the key columns, store_primary_category has the fewest non-null values (192,668), which means it has the most missing data. That’s why we’ll focus on cleaning it first.

 

// Data Imputation With mode()

One of the messiest columns in the dataset, evident from its high number of missing values, is store_primary_category. It tells us what kind of food stores are available, like Mexican, American, and Thai. However, many rows are missing this information, which is a problem. For instance, it can limit how we can group or analyze the data. So how can we fix it?

We’ll fill these rows instead of dropping them. To do that, we’ll use smarter imputation.

We build a mapping from each store_id to its most frequent category, and then use that mapping to fill in missing values. Let’s see the dataset before doing that.

 
 

Here is the code.

import numpy as np

# Global most-frequent category as a fallback
global_mode = df["store_primary_category"].mode().iloc[0]

# Build a store-level mapping to the most frequent category (fast and robust)
store_mode = (
    df.groupby("store_id")["store_primary_category"]
      .agg(lambda s: s.mode().iloc[0] if not s.mode().empty else np.nan)
)

# Fill missing categories using the store-level mode, then fall back to the global mode
df["store_primary_category"] = (
    df["store_primary_category"]
      .fillna(df["store_id"].map(store_mode))
      .fillna(global_mode)
)

df.info()

 

Here is the output.

 
 

As you can see from the screenshot above, the store_primary_category column now has a higher non-null count. But let’s double-check with this code.

df["store_primary_category"].isna().sum()

 

Here is the output showing the number of NaN values. It’s zero; we got rid of all of them.

 
 

And let’s see the dataset after the imputation.

 

 

// Dropping Remaining NaNs

In the previous step, we corrected store_primary_category, but did you notice something? The non-null counts across the columns still don’t match!

This is a clear sign that we’re still dealing with missing values in some parts of the dataset. Now, when it comes to data cleaning, we have two options:

  • Fill the missing values
  • Drop them

Given that this dataset contains nearly 200,000 rows, we can afford to lose some. With smaller datasets, you’d have to be more careful. In that case, it’s advisable to analyze each column, establish standards (decide how missing values will be filled, using the mean, median, most frequent value, or domain-specific defaults), and then fill them.
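
We won’t apply that here, since the dataset is large enough to simply drop the remaining rows, but a minimal sketch of the fill-based alternative might look like the following. Selecting columns by dtype and using median/mode fills are our own assumptions, not part of this project; the sketch works on a copy so the pipeline below is unaffected.

# Fill-based alternative for smaller datasets (shown on a copy)
df_filled = df.copy()

# Numeric columns: fill with the median
for col in df_filled.select_dtypes(include="number").columns:
    df_filled[col] = df_filled[col].fillna(df_filled[col].median())

# Categorical (object) columns: fill with the most frequent value
for col in df_filled.select_dtypes(include="object").columns:
    if df_filled[col].isna().any():
        df_filled[col] = df_filled[col].fillna(df_filled[col].mode().iloc[0])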

To remove the NaNs, we’ll use the dropna() method from the pandas library. We’re setting inplace=True to apply the changes directly to the DataFrame without needing to assign it again. Let’s see the dataset at this point.

 
 

Here is the code.

# Drop the remaining rows that still contain NaN values
df.dropna(inplace=True)
df.info()

 

Here is the output.

 
 

As you can see from the screenshot above, every column now has the same number of non-null values.

Let’s see the dataset after all the changes.

 
 

 

// What Can You Do Next?

Now that we have a clean dataset, here are a few things you can do next:

  • Perform EDA to understand delivery patterns.
  • Engineer new features, like delivery hours or the busy dashers ratio, to add more meaning to your analysis (see the sketch after this list).
  • Analyze correlations between variables to improve your model’s performance.
  • Build different regression models and find the best-performing one.
  • Predict the delivery duration with the best-performing model.
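
As a starting point for that feature engineering, here is a minimal sketch. It reuses the datetime columns we fixed earlier and assumes the dataset includes total_busy_dashers and total_onshift_dashers columns, as in the original DoorDash project data; adjust the names to whatever your copy of the dataset actually contains.

# Hour of day the order was placed (uses the fixed created_at column)
df["order_hour"] = df["created_at"].dt.hour

# Busy dashers ratio (assumed column names); rows with zero on-shift dashers
# would produce inf values and should be handled separately
df["busy_dashers_ratio"] = df["total_busy_dashers"] / df["total_onshift_dashers"]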

 

# Final Thoughts

 
In this article, we cleaned a real-life dataset from DoorDash by addressing common data quality issues, such as fixing incorrect data types and handling missing values. We built a simple data cleaning pipeline tailored to this data project and explored potential next steps.

Real-world datasets can be messier than you think, but there are also many methods and techniques to solve these issues. Thanks for reading!
 
 

Nate Rosidi is a data scientist working in product strategy. He is also an adjunct professor teaching analytics, and is the founder of StrataScratch, a platform helping data scientists prepare for their interviews with real interview questions from top companies. Nate writes on the latest trends in the career market, gives interview advice, shares data science projects, and covers everything SQL.


