How can I improve my pandas efficiency when there are many selections?

I have a large dataframe with two million rows. There are 60,000 unique (store_id, product_id) pairs.


I need to select the rows for each (store_id, product_id) pair and run some calculations on them, e.g. resampling to H (hourly) with sum and avg. Finally, I concatenate everything into one new dataframe.
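
Per pair, the computation is roughly like this (a sketch only; pair_df and the column names are placeholders):

hourly = pair_df.set_index('datetime_create').resample('H').agg(
    {'count': 'sum', 'price': 'mean'})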


The problem is that it is very, very slow, and it keeps getting slower while it runs.


The main code is:


import pandas as pd
from tqdm import tqdm

def process_df(df, func, *args, **kwargs):
    '''Apply func to each (store_id, product_id) subset of df and concat the results.'''
    product_ids = df.product_id.unique()
    store_ids = df.store_id.unique()

    # uk = df.drop_duplicates(subset=['store_id','product_id'])
    # for idx, item in uk.iterrows():

    all_df = list()

    with tqdm(total=product_ids.shape[0] * store_ids.shape[0]) as t:
        for store_id in store_ids:
            sdf = df.loc[df['store_id'] == store_id]
            for product_id in product_ids:
                new_df = sdf.loc[sdf['product_id'] == product_id]

                # skip pairs with too few rows
                if new_df.shape[0] < 14:
                    continue

                new_df = func(new_df, *args, **kwargs)
                new_df.loc[:, 'store_id'] = store_id
                new_df.loc[:, 'product_id'] = product_id

                all_df.append(new_df)
                t.update()

    all_df = pd.concat(all_df)
    return all_df



def process_order_items(df, store_id=None, product_id=None, freq='D'):
    if store_id and "store_id" in df.columns:
        df = df.loc[df['store_id'] == store_id]

    if product_id and "product_id" in df.columns:
        df = df.loc[df['product_id'] == product_id]

    # convert millisecond epochs to naive Shanghai local time
    df.loc[:, "datetime_create"] = (pd.to_datetime(df.time_create, unit='ms')
                                      .dt.tz_localize('UTC')
                                      .dt.tz_convert('Asia/Shanghai')
                                      .dt.tz_localize(None))
    df = df[["price", "count", "fee_total", "fee_real", "price_real",
             "price_guide", "price_change_category", "datetime_create"]]

    df.loc[:, "has_discount"] = (df.price_change_category > 0).astype(int)
    df.loc[:, "clearance"] = df.price_change_category.apply(lambda x: x in (10, 20, 23)).astype(int)
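
A side note on the time conversion above: time_create holds millisecond epoch timestamps, and the chain parses them as UTC, shifts them to Asia/Shanghai, then drops the timezone. A minimal sketch with a made-up timestamp:

import pandas as pd

s = pd.Series([1546300800000])  # made-up sample: 2019-01-01 00:00:00 UTC in ms
dt = (pd.to_datetime(s, unit='ms')
        .dt.tz_localize('UTC')
        .dt.tz_convert('Asia/Shanghai')
        .dt.tz_localize(None))
print(dt[0])  # 2019-01-01 08:00:00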



I think the problem is that there are too many redundant selections.


Maybe I could use groupby(['store_id','product_id']).agg to avoid the repeated selection, but I don't know how to use process_order_items with it and merge the results back together.
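
Roughly, what I imagine is something like the sketch below (assuming process_order_items can be applied to a single group as-is):

result = (df.groupby(['store_id', 'product_id'])
            .apply(lambda g: process_order_items(g, freq='D')))
# ...and then somehow merge the per-pair results back into one dataframe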


HUH函数
1 Answer

一只萌萌小番薯

I think you can change:

df.loc[:, "clearance"] = df.price_change_category.apply(lambda x: x in (10, 20, 23)).astype(int)

to Series.isin:

df["clearance"] = df.price_change_category.isin([10, 20, 23]).astype(int)

Resampler.aggregate is also a solution:

d = {'has_discount': 'sum',
     'clearance': 'sum',
     'count': ['count', 'sum'],
     'price_guide': 'max'}

df1 = df.resample(freq).agg(d)
df1.columns = df1.columns.map('_'.join)

d1 = {'has_discount_sum': 'discount_order_count',
      'clearance_sum': 'clearance_order_count',
      'count_count': 'order_count',
      'count_sum': 'day_count',
      'price_guide_max': 'price_guide'}
df1 = df1.rename(columns=d1)

Another idea is not to convert the boolean masks to integers, but to keep them as columns and filter with them, e.g.:

df["has_discount"] = df.price_change_category > 0
df["clearance"] = df.price_change_category.isin([10, 20, 23])

discount_sale_count = df.loc[df.has_discount, 'count'].resample(freq).sum()
clearance_sale_count = df.loc[df.clearance, 'count'].resample(freq).sum()

# to filter for == 0, invert the boolean mask columns with ~
no_discount_price = df.loc[~df.has_discount, 'price'].resample(freq).sum()
no_clearance_price = df.loc[~df.clearance, 'price'].resample(freq).sum()

The first function should be simplified with GroupBy.apply instead of the loops; the concat is then unnecessary:

def f(x):
    print(x)

df = df.groupby(['product_id', 'store_id']).apply(f)
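
Putting these ideas together, a sketch of what the loop-free version could look like (assuming datetime_create is set as the index before resampling, and reusing the question's skip-small-groups guard):

import pandas as pd

def per_pair(g, freq='D'):
    # mirror the `shape[0] < 14` guard from the question: skip sparse pairs
    if len(g) < 14:
        return None
    g = g.set_index('datetime_create')
    d = {'has_discount': 'sum',
         'clearance': 'sum',
         'count': ['count', 'sum'],
         'price_guide': 'max'}
    res = g.resample(freq).agg(d)
    res.columns = res.columns.map('_'.join)
    return res

# one pass over the data; groups that return None are dropped automatically
out = df.groupby(['store_id', 'product_id']).apply(per_pair)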
