You may want to consider using the Natural Language Toolkit, or nltk. Try this:

import nltk

sentence = "Punctuations to be included as its own unit."
tokens = nltk.word_tokenize(sentence)
print(tokens)

Output:

['Punctuations', 'to', 'be', 'included', 'as', 'its', 'own', 'unit', '.']
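Note that nltk.word_tokenize relies on NLTK's Punkt tokenizer models, which are downloaded separately from the library itself. A minimal one-time setup, assuming a standard NLTK install (newer releases may ask for the 'punkt_tab' resource instead):

import nltk

# One-time download of the Punkt tokenizer models used by word_tokenize.
nltk.download('punkt')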
The snippet below uses a regular expression to separate the words and punctuation marks in the sentence into a list.

import re
import string

# Match either a run of word characters or a single punctuation character.
# re.escape keeps regex metacharacters in string.punctuation from breaking
# the character class.
punctuations = string.punctuation
regularExpression = r"\w+|[" + re.escape(punctuations) + "]"

content = "Punctuations to be included as its own unit."
splittedWords_Puncs = re.findall(regularExpression, content)
print(splittedWords_Puncs)

Output:

['Punctuations', 'to', 'be', 'included', 'as', 'its', 'own', 'unit', '.']
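If you prefer not to build the character class from string.punctuation, a single raw-string pattern can produce the same split; this is only a sketch that treats every non-word, non-space character as punctuation:

import re

content = "Punctuations to be included as its own unit."
# \w+ matches runs of word characters; [^\w\s] matches any single character
# that is neither a word character nor whitespace (i.e. punctuation/symbols).
tokens = re.findall(r"\w+|[^\w\s]", content)
print(tokens)
# ['Punctuations', 'to', 'be', 'included', 'as', 'its', 'own', 'unit', '.']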