Web scraping: how do I extract the link that corresponds to a keyword matched in another HTML tag, when the URL itself contains no keyword?

I am trying to extract job descriptions matching certain keywords from a web page, and that part works. However, I also want to extract the link that corresponds to each matched description in the HTML. The problem is that the link appears before the keyword in the description, and the URL itself does not contain the keyword being searched for. How can I extract the link that belongs to a job description found via keywords?


Here is my code:


import re, requests, time, os, csv, subprocess
from bs4 import BeautifulSoup


def get_jobs(url):
    keywords = ["KI", "AI", "Big Data", "Data", "data", "big data", "Analytics", "analytics", "digitalisierung", "ML",
                "Machine Learning", "Daten", "Datenexperte", "Datensicherheitsexperte"]
    headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36'}

    html = requests.get(url, headers=headers, timeout=5)
    time.sleep(2)
    soup = BeautifulSoup(html.text, 'html.parser')

    jobs = soup.find_all('p', text=re.compile(r'\b(?:%s)\b' % '|'.join(keywords)))
    # links = jobs.find_all('a')

    jobs_found = []
    for word in jobs:
        jobs_found.append(word)

    with open("jobs.csv", 'a', encoding='utf-8') as toWrite:
        writer = csv.writer(toWrite)
        writer.writerows(jobs_found)
        # subprocess.call('./Autopilot3.py')
        print("Matched Jobs have been collected.")


get_jobs('https://www.auftrag.at//tenders.aspx')
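For context, the keyword filter above hinges on a single alternation regex, `\b(?:kw1|kw2|…)\b`. A minimal standalone sketch of how that pattern behaves (the sample strings below are invented for illustration; note that if a keyword ever contained a regex metacharacter, `re.escape` would be needed):

```python
import re

# Subset of the keyword list from the question.
keywords = ["KI", "AI", "Big Data", "Machine Learning", "Daten"]

# Same construction as in the question: one alternation wrapped
# in word boundaries, so "AI" matches only as a whole word.
pattern = re.compile(r'\b(?:%s)\b' % '|'.join(keywords))

samples = [
    "Senior AI Engineer (m/w/d)",      # "AI" matches as a whole word
    "Projektleiter Daten-Management",  # "-" is a non-word char, so \b holds
    "Bauleiter Hochbau",               # no keyword present
]
matches = [bool(pattern.search(s)) for s in samples]
print(matches)
```

The word boundaries are what keep substrings such as the "ai" in a longer word from producing false positives.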


holdtom

1 Answer

炎炎设计

Looking at the page, the link is always two levels above the description, so you can use the find_parent() function to get the a tag for each job found. Your code has:

jobs = soup.find_all('p', text=re.compile(r'\b(?:%s)\b' % '|'.join(keywords)))

Right after that, add:

for i in jobs:
    print(i.find_parent('a').get('href'))

This will print the links. Note that they are relative links, not absolute ones; you need to prepend the site root to get a working page. For example, if one of the links you find is:

ETender.aspx?id=ed60009c-8d64-4759-a722-872e21cf9ea7&action=show

you must prepend https://www.auftrag.at/ to it, giving the final link:

https://www.auftrag.at/ETender.aspx?id=ed60009c-8d64-4759-a722-872e21cf9ea7&action=show

If you want, you can collect the links in a list just like the job descriptions. The full code (without saving the links to the CSV) would be:

import re, requests, time, os, csv, subprocess
from bs4 import BeautifulSoup


def get_jobs(url):
    keywords = ["KI", "AI", "Big Data", "Data", "data", "big data", "Analytics", "analytics", "digitalisierung", "ML",
                "Machine Learning", "Daten", "Datenexperte", "Datensicherheitsexperte"]
    headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36'}

    html = requests.get(url, headers=headers, timeout=5)
    time.sleep(2)
    soup = BeautifulSoup(html.text, 'html.parser')

    jobs = soup.find_all('p', text=re.compile(r'\b(?:%s)\b' % '|'.join(keywords)))
    # links = jobs.find_all('a')

    jobs_found = []
    links = []
    for word in jobs:
        jobs_found.append(word)
        links.append(word.find_parent('a').get('href'))

    with open("jobs.csv", 'a', encoding='utf-8') as toWrite:
        writer = csv.writer(toWrite)
        writer.writerows(jobs_found)
        # subprocess.call('./Autopilot3.py')
        print("Matched Jobs have been collected.")

    return soup, jobs


soup, jobs = get_jobs('https://www.auftrag.at//tenders.aspx')

If you want the full URL instead, just change the line:

links.append(word.find_parent('a').get('href'))

to:

links.append("//".join(["//".join(url.split("//")[:2]), word.find_parent('a').get('href')]))
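As a more robust alternative to splitting the page URL on "//", the standard library's urllib.parse.urljoin resolves a relative href against the page URL directly. A minimal sketch, using the example link from the answer (the doubled slash in the original page URL is normalized to a single slash here for clarity):

```python
from urllib.parse import urljoin

# Page the scraper fetched (single slash before the path for clarity).
base = "https://www.auftrag.at/tenders.aspx"

# Relative href as it appears in the page's <a> tag.
href = "ETender.aspx?id=ed60009c-8d64-4759-a722-872e21cf9ea7&action=show"

# urljoin replaces the last path segment of the base with the relative href.
absolute = urljoin(base, href)
print(absolute)
```

Unlike manual string splitting, urljoin also handles hrefs that start with "/" (root-relative) or that are already absolute, so the same line works for every link the scraper encounters.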