I have seen similar posts, but none with a complete answer, so I am posting here.
I am using TF-IDF in Spark to find the word with the highest tf-idf value in each document, using the code below.
from pyspark.ml import Pipeline
from pyspark.ml.feature import HashingTF, IDF, Tokenizer, CountVectorizer, StopWordsRemover

# Split the cleaned document text into tokens.
tokenizer = Tokenizer(inputCol="doc_cln", outputCol="tokens")
# First pass: remove the default English stop words.
remover1 = StopWordsRemover(inputCol="tokens",
                            outputCol="stopWordsRemovedTokens")
# Second pass: remove a custom stop-word list.
stopwordList = ["word1", "word2", "word3"]
remover2 = StopWordsRemover(inputCol="stopWordsRemovedTokens",
                            outputCol="filtered", stopWords=stopwordList)
# Hash the filtered tokens into a fixed number of buckets, then weight by IDF.
hashingTF = HashingTF(inputCol="filtered", outputCol="rawFeatures", numFeatures=2000)
idf = IDF(inputCol="rawFeatures", outputCol="features", minDocFreq=5)

pipeline = Pipeline(stages=[tokenizer, remover1, remover2, hashingTF, idf])
model = pipeline.fit(df)
results = model.transform(df)
results.cache()
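For reference, df is assumed to have a single string column doc_cln holding the cleaned document text (the tokenizer's inputCol); a minimal stand-in with made-up documents would be:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical input: one cleaned document string per row.
df = spark.createDataFrame(
    [("a8g4i9g5y hwcdn some cleaned text",),
     ("another short cleaned document",)],
    ["doc_cln"],
)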
The result I get is
|[a8g4i9g5y, hwcdn] |(2000,[905,1104],[7.34977707433047,7.076179741760428])
where
filtered: array (nullable = true)
features: vector (nullable = true)
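As I understand it, features is a pyspark.ml.linalg.SparseVector, so its bucket indices and tf-idf values can be pulled out with a UDF. A minimal sketch of that idea (the column names max_idx and max_val are mine, and it assumes every document has at least one non-zero entry):

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType, DoubleType

# Bucket index holding the largest tf-idf value in the SparseVector.
max_index = udf(lambda v: int(v.indices[v.values.argmax()]), IntegerType())
# The largest tf-idf value itself.
max_value = udf(lambda v: float(v.values.max()), DoubleType())

scored = results.withColumn("max_idx", max_index("features")) \
                .withColumn("max_val", max_value("features"))
scored.select("filtered", "max_idx", "max_val").show(truncate=False)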
How can I extract the array from "features"? Ideally, I would like to get the word corresponding to the highest tf-idf value, like this:
|a8g4i9g5y|7.34977707433047
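One complication I am aware of: HashingTF only keeps hash buckets, so there is no built-in way to map index 905 back to a8g4i9g5y. If I swap it for the CountVectorizer already imported above, the fitted model exposes a vocabulary list that maps indices back to words. A rough sketch of what I have in mind (the stage position and the top_word name are my assumptions):

# Replace HashingTF with CountVectorizer so indices map back to words.
cv = CountVectorizer(inputCol="filtered", outputCol="rawFeatures", vocabSize=2000)
pipeline = Pipeline(stages=[tokenizer, remover1, remover2, cv, idf])
model = pipeline.fit(df)
results = model.transform(df)

# The fitted CountVectorizer (stage index 3) carries the learned vocabulary.
vocab = model.stages[3].vocabulary

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# Map the index of the largest tf-idf entry back to its word.
top_word = udf(lambda v: vocab[int(v.indices[v.values.argmax()])], StringType())
results.withColumn("top_word", top_word("features")) \
       .select("top_word", "features").show(truncate=False)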
Thanks in advance!