I am migrating some data in a table and trying to change the values of its "date" column, but PySpark seems to delete the data while reading it.
I am performing the following steps:
Read the data from the table
Change the values of the column
Overwrite the same table with the data
When I check the data after these steps, my table is empty.
Here is my code:
from pyspark.sql import DataFrameWriter
from pyspark.sql import functions as F
from pyspark.sql.functions import col

table = "MY_TABLE"
data_input = sqlContext.read.format("jdbc").options(url=JDBCURL, dbtable=table).load()
print("data_input.count()=", data_input.count())
print("'2019' in data_input:", data_input.where(col("date").contains("2019")).count())
print("'YEAR' in data_input:", data_input.where(col("date").contains("YEAR")).count())
# data_input.count()= 1000
# '2019' in data_input: 1000
# 'YEAR' in data_input: 0
data_output = data_input.withColumn("date", F.regexp_replace("date", "2019", "YEAR"))
print("data_output.count()=", data_output.count())
print("'2019' in data_output:", data_output.where(col("date").contains("2019")).count())
print("'YEAR' in data_output:", data_output.where(col("date").contains("YEAR")).count())
# data_output.count()= 1000
# '2019' in data_output: 0
# 'YEAR' in data_output: 1000
So far so good. Now let's overwrite the table:
df_writer = DataFrameWriter(data_output)
df_writer.jdbc(url = JDBCURL, table=table, mode="overwrite")
# Let's check the data now
print("data_input.count()=", data_input.count())
print("'2019' in data_input:", data_input.where(col("date").contains("2019")).count())
print("'YEAR' in data_input:", data_input.where(col("date").contains("YEAR")).count())
# data_input.count()= 0
# '2019' in data_input: 0
# 'YEAR' in data_input: 0
# huh, weird
print("data_output.count()=", data_output.count())
print("'2019' in data_output:", data_output.where(col("date").contains("2019")).count())
print("'YEAR' in data_output:", data_output.where(col("date").contains("YEAR")).count())
# data_output.count()= 0
# '2019' in data_output: 0
# 'YEAR' in data_output: 0
# Still weird
The query SELECT * FROM MY_TABLE returns 0 rows.
Why does [Py]Spark do this? How can I change this behavior? Is it caching? Is this explained anywhere in the documentation?
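For what it's worth, my current understanding is that the DataFrame is lazy: it stores a recipe ("read table X over JDBC"), not the rows, so mode="overwrite" truncates the table before the read is ever executed, and the read then sees an empty table. The following is a minimal pure-Python analogy of that pitfall (hypothetical names, not the Spark API), including the cache() workaround I am considering:

```python
# Pure-Python analogy of Spark's lazy evaluation (hypothetical names,
# NOT the Spark API). A LazyFrame stores only the table name; the
# actual read happens each time the frame is evaluated.

database = {"MY_TABLE": [{"date": "2019-01-01"}] * 3}

class LazyFrame:
    def __init__(self, table):
        self.table = table          # only the recipe is stored, no rows
        self._cached = None

    def rows(self):
        if self._cached is not None:
            return self._cached
        return database[self.table]  # read happens at evaluation time

    def count(self):
        return len(self.rows())

    def cache(self):
        # Materialize the rows now, before any later truncate.
        self._cached = list(database[self.table])
        return self

def overwrite(frame, table):
    database[table] = []                   # mode="overwrite" truncates first...
    database[table] = list(frame.rows())   # ...then evaluates the lazy read

# Without cache: the lazy read runs after the truncate, so data is lost.
df = LazyFrame("MY_TABLE")
overwrite(df, "MY_TABLE")
n_without_cache = df.count()
print("without cache:", n_without_cache)   # 0

# With cache: rows were materialized before the overwrite and survive.
database["MY_TABLE"] = [{"date": "2019-01-01"}] * 3   # restore the data
df = LazyFrame("MY_TABLE").cache()
overwrite(df, "MY_TABLE")
n_with_cache = df.count()
print("with cache:", n_with_cache)         # 3
```

If this analogy is right, calling data_input.cache() (and forcing it with a count()) before the jdbc write, or writing to a temporary table first, should avoid the data loss, but I would like confirmation.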