It's a paradigm shift from pandas. In polars, you define a pipeline, or a set of instructions, to perform on a dataframe, and only execute them all at once at the end of your transformation. In other words, it's lazy. Pandas is eager, meaning every part of the transformation happens sequentially and in isolation. Polars also has an eager API, but you likely want the lazy API in a production script.
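A minimal sketch of the difference (the file and column names here are made up for illustration):

```python
import pandas as pd
import polars as pl

# Eager (pandas): each step runs immediately and materializes a result.
pdf = pd.read_csv("events.csv")
pdf = pdf[pdf["status"] == "ok"]
result_pd = pdf.groupby("user_id")["duration"].mean()

# Lazy (polars): you only build up a description of the work.
lf = (
    pl.scan_csv("events.csv")              # returns a LazyFrame, reads nothing yet
      .filter(pl.col("status") == "ok")
      .group_by("user_id")                 # group_by in recent polars versions
      .agg(pl.col("duration").mean())
)
result_pl = lf.collect()                   # the whole plan executes here, at once
```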
Because it's lazy, Polars performs query optimization, like a database does with a SQL query. At the end of the day, if you're using polars for data engineering or in a pipeline, it'll likely run much faster and use memory more efficiently. Polars also executes operations in parallel.
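You can even inspect the optimized plan before running anything, e.g. with `LazyFrame.explain()` (again, the file and columns are hypothetical):

```python
import polars as pl

lf = (
    pl.scan_csv("events.csv")
      .filter(pl.col("status") == "ok")
      .select("user_id", "duration")
)
# Prints the optimized query plan: the filter and the column
# selection get pushed down into the CSV scan itself, much like
# a database query planner would do.
print(lf.explain())
```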
What kind of query optimization can it do for data that's already in memory?
A big feature of polars is loading only the applicable data from disk. But during exploratory data analysis (EDA) you often have the whole dataset in memory, and in that case filter pushdown won't help much. Polars has a good page in their docs about all the optimizations it is capable of: https://docs.pola.rs/user-guide/lazy/optimizations/
One I see off the top is projection pushdown, which selects only the columns relevant to the final transformation. In pandas, if you perform a group by with aggregation, then only look at a few columns, you still perform the aggregation across all the data. In polars' lazy API, you define the entire process upfront, so it knows not to aggregate the columns you never use.
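Roughly, projection pushdown looks like this (hypothetical file and column names; assume `sales.parquet` has many more columns than the two used):

```python
import polars as pl

lf = (
    pl.scan_parquet("sales.parquet")       # nothing is read yet
      .group_by("region")
      .agg(pl.col("revenue").sum())        # only 'region' and 'revenue' are used
)
# Projection pushdown: the optimizer sees that only two of the
# file's columns are ever referenced, so only those two columns
# are read from disk and aggregated.
df = lf.collect()
```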
Hm, that's kind of interesting
But my first reaction is that optimizations only at the "Python processing level" are going to be pretty limited, since it won't have metadata/statistics, and it'd depend heavily on the source data layout, e.g. CSV vs. Parquet.
You are correct. Some data sources like Parquet include metadata that helps with this, but it's not as robust as what databases have, I don't think. And of course, CSVs have no metadata (aside from a header row, I guess).
The actually specification for how to efficiently store tabular data in memory that also permits quick execution of filtering, pivoting, i.e. all the transformations you need...is called apache arrow. It is the backend of polars and is also a non-default backend of pandas. The complexity of the format I'm unfamiliar with.
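For what it's worth, you can see the shared Arrow backend from both libraries (a sketch; the `dtype_backend` argument needs pandas >= 2.0, and the file name is made up):

```python
import pandas as pd
import polars as pl

# pandas can use Arrow as its in-memory backend instead of NumPy:
pdf = pd.read_parquet("sales.parquet", dtype_backend="pyarrow")

# polars is built on Arrow, so crossing between the two can be cheap:
pldf = pl.from_pandas(pdf)   # conversion; can avoid copies for Arrow-backed data
table = pldf.to_arrow()      # expose the underlying data as a pyarrow.Table
```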