
Tokenizing and labeling HTML source code with Python

I have some annotated HTML source code, where the code is similar to what you would get with requests, and the annotations are labels carrying the character indices that mark the start and end of each labelled item.


For example, the source code could be:


<body><text>Hello world!</text><text>This is my code. And this is a number 42</text></body>

The labels could be, for example:


[{'label': 'salutation', 'start': 12, 'end': 25},
 {'label': 'verb', 'start': 42, 'end': 45},
 {'label': 'size', 'start': 75, 'end': 78}]

These refer to the words "Hello world", "is" and "42" respectively. We know in advance that the labels do not overlap.
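
As a quick sanity check on those character offsets (a minimal sketch; with these offsets, content[start:end - 1] recovers each annotated span):

content = '<body><text>Hello world!</text><text>This is my code. And this is a number 42</text></body>'
labels = [{'label': 'salutation', 'start': 12, 'end': 25},
          {'label': 'verb', 'start': 42, 'end': 45},
          {'label': 'size', 'start': 75, 'end': 78}]

# Prints: salutation 'Hello world!', verb 'is', size '42'
for ann in labels:
    print(ann['label'], repr(content[ann['start']:ann['end'] - 1]))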


I want to process the source code and the annotations to produce a list of tokens appropriate to the HTML format.


For example, it could produce something like this:


['<body>', '<text>', 'hello', 'world', '</text>', '<text>', 'this', 'is', 'my', 'code', 'and', 'this', 'is', 'a', 'number', '[NUMBER]', '</text>', '</body>']

In addition, it must map the annotations onto the tokenization, producing a sequence of labels of the same length as the tokenization, for example:


['NONE', 'NONE', 'salutation', 'salutation', 'NONE', 'NONE', 'NONE', 'verb', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'size', 'NONE', 'NONE']

What is the simplest way to accomplish this in Python?


慕妹3146593
2 Answers

UYOU

You can use recursion with BeautifulSoup to produce a list of all the tags and their contents, which can then be used to match the labels:

from bs4 import BeautifulSoup as soup
import re

content = '<body><text>Hello world!</text><text>This is my code. And this is a number 42</text></body>'

def tokenize(d):
    # Emit the opening tag, recurse into child tags, split plain text into words,
    # then emit the closing tag.
    yield f'<{d.name}>'
    for i in d.contents:
        if not isinstance(i, str):
            yield from tokenize(i)
        else:
            yield from i.split()
    yield f'</{d.name}>'

data = list(tokenize(soup(content, 'html.parser').body))

Output:

['<body>', '<text>', 'Hello', 'world!', '</text>', '<text>', 'This', 'is', 'my', 'code.', 'And', 'this', 'is', 'a', 'number', '42', '</text>', '</body>']

Then, to match the labels:

labels = [{'label': 'salutation', 'start': 12, 'end': 25},
          {'label': 'verb', 'start': 42, 'end': 45},
          {'label': 'size', 'start': 75, 'end': 78}]

# Attach the annotated words to each label.
tokens = [{**i, 'word': content[i['start']:i['end'] - 1].split()} for i in labels]

# For each token, an iterator over the character spans at which it occurs in the source.
indices = {i: iter([[c, c + len(i) + 1] for c in range(len(content))
                    if re.findall(r'^\W' + i, content[c - 1:])])
           for i in data}

# Pair every token with its next occurrence span.
new_data = [[i, next(indices[i], None)] for i in data]

# A token gets a label when its span falls inside that label's start/end range.
result = [(lambda x: 'NONE' if not x else x[0])(
              [c['label'] for c in tokens if b and c['start'] <= b[0] and b[-1] <= c['end']])
          for a, b in new_data]

Output:

['NONE', 'NONE', 'salutation', 'salutation', 'NONE', 'NONE', 'NONE', 'verb', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'size', 'NONE', 'NONE']
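
The token list above keeps the original casing and punctuation, while the question's desired output lowercases words, strips punctuation and replaces numbers with a '[NUMBER]' placeholder. A minimal post-processing sketch (the normalize helper is an illustration, not part of the answer above) could be:

import re

def normalize(token):
    # Markup tokens such as '<text>' or '</body>' pass through unchanged.
    if token.startswith('<') and token.endswith('>'):
        return token
    word = re.sub(r'[^\w]', '', token).lower()      # 'world!' -> 'world', 'code.' -> 'code'
    return '[NUMBER]' if word.isdigit() else word   # '42' -> '[NUMBER]'

data = ['<body>', '<text>', 'Hello', 'world!', '</text>', '<text>', 'This', 'is', 'my',
        'code.', 'And', 'this', 'is', 'a', 'number', '42', '</text>', '</body>']
print([normalize(t) for t in data])
# ['<body>', '<text>', 'hello', 'world', '</text>', '<text>', 'this', 'is', 'my',
#  'code', 'and', 'this', 'is', 'a', 'number', '[NUMBER]', '</text>', '</body>']

Because normalization maps one token to one token, the label sequence from the matching step still lines up with the normalized tokens.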

胡说叔叔

For now I have done this with HTMLParser:

from html.parser import HTMLParser
from tensorflow.keras.preprocessing.text import text_to_word_sequence

class HTML_tokenizer_labeller(HTMLParser):
    def __init__(self, annotations, *args, **kwargs):
        super(HTML_tokenizer_labeller, self).__init__(*args, **kwargs)
        self.tokens = []
        self.labels = []
        self.annotations = annotations

    def handle_starttag(self, tag, attrs):
        self.tokens.append(f'<{tag}>')
        self.labels.append('OTHER')

    def handle_endtag(self, tag):
        self.tokens.append(f'</{tag}>')
        self.labels.append('OTHER')

    def handle_data(self, data):
        print(f"getpos = {self.getpos()}")
        tokens = text_to_word_sequence(data)
        # Column offset of this text block within the current line of the source.
        pos = self.getpos()[1]
        # Pick the first annotation whose range contains the block's start offset.
        for annotation in self.annotations:
            if annotation['start'] <= pos <= annotation['end']:
                label = annotation['label']
                break
        else:
            label = 'OTHER'
        self.tokens += tokens
        self.labels += [label] * len(tokens)
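
A usage sketch for the class above, assuming annotation dicts in the question's format. Note that handle_data chooses a single label per text block from the block's starting offset, so all tokens inside one block share that label:

content = '<body><text>Hello world!</text><text>This is my code. And this is a number 42</text></body>'
annotations = [{'label': 'salutation', 'start': 12, 'end': 25},
               {'label': 'verb', 'start': 42, 'end': 45},
               {'label': 'size', 'start': 75, 'end': 78}]

parser = HTML_tokenizer_labeller(annotations)
parser.feed(content)

# tokens and labels stay aligned: one label per token.
print(parser.tokens)
print(parser.labels)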
