If I understand correctly, you want to filter the columns first and then pass the filtered list to the list comprehension. For example, you have a df like the one below, where column c is NaN-free:

```python
from pyspark.sql.functions import isnan, count, when
import numpy as np

df = spark.createDataFrame([(1.0, np.nan, 0.0), (np.nan, 2.0, 9.0),
                            (np.nan, 3.0, 8.0), (np.nan, 4.0, 7.0)],
                           ('a', 'b', 'c'))
df.show()
# +---+---+---+
# |  a|  b|  c|
# +---+---+---+
# |1.0|NaN|0.0|
# |NaN|2.0|9.0|
# |NaN|3.0|8.0|
# |NaN|4.0|7.0|
# +---+---+---+
```

Your current solution produces this:

```python
df.select([count(when(isnan(c), c)).alias(c) for c in df.columns]).show()
# +---+---+---+
# |  a|  b|  c|
# +---+---+---+
# |  3|  1|  0|
# +---+---+---+
```

But you want this:

```
+---+---+
|  a|  b|
+---+---+
|  3|  1|
+---+---+
```

To get that output, you can try this:

```python
rows = df.collect()

# Column filtering based on your NaN condition: a column qualifies
# if any collected row holds a NaN in it.
nan_columns = [key for row in rows
               for key, val in row.asDict().items() if np.isnan(val)]
nan_columns = sorted(set(nan_columns))  # deduplicate; sorting keeps the order stable

# nan_columns
# ['a', 'b']

df.select([count(when(isnan(c), c)).alias(c) for c in nan_columns]).show()
# +---+---+
# |  a|  b|
# +---+---+
# |  3|  1|
# +---+---+
```
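As a side note (not part of the original answer): `df.collect()` pulls every row to the driver, which gets expensive on large DataFrames. A minimal sketch of an alternative, assuming the same `df`, imports, and an active `spark` session as above, is to run the aggregation you already have once and keep only the columns whose NaN count is non-zero:

```python
# Sketch: derive nan_columns from the aggregated counts instead of df.collect().
# Assumes the same df and imports as above.
counts = df.select(
    [count(when(isnan(c), c)).alias(c) for c in df.columns]
).first().asDict()                      # {'a': 3, 'b': 1, 'c': 0}

nan_columns = [c for c, n in counts.items() if n > 0]  # ['a', 'b']

df.select([count(when(isnan(c), c)).alias(c) for c in nan_columns]).show()
# +---+---+
# |  a|  b|
# +---+---+
# |  3|  1|
# +---+---+
```

This way only a single row of counts ever reaches the driver, and the column filter reuses the same `count(when(isnan(...)))` expression the answer already relies on.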