猿问

How do I get a nested href in Python?

Goal

(I need to repeat this search hundreds of times):

1. Search "https://www.ncbi.nlm.nih.gov/ipg/" (e.g. for "WP_000177210.1")

(i.e. https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1)

2. Select the first record in the second column, "CDS Region in Nucleotide", of the table

(i.e. "NC_011415.1 1997353-1998831 (-)", https://www.ncbi.nlm.nih.gov/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2)

3. Select "FASTA" under that sequence's name

4. Get the FASTA sequence

(i.e. "NC_011415.1:c1998831-1997353 Escherichia coli SE11, complete sequence ATGACTTTATGGATTAACGGTGACTGGATAACGGGCCAGGGCGCATCGCGTGTGAAGCGTAATCCGGTATCGGGCGAG......")
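As an aside, once the accession and coordinates from step 2 are known, step 4 does not need any HTML scraping: NCBI's E-utilities `efetch` endpoint returns the FASTA text directly. A minimal sketch that only builds the request URL (endpoint and parameter names are from NCBI's public E-utilities documentation; fetching it with `requests.get(efetch_url).text` would return the sequence):

```python
from urllib.parse import urlencode

# NCBI E-utilities efetch endpoint (public, documented API)
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

# Parameters for the example record "NC_011415.1 1997353-1998831 (-)"
params = {
    "db": "nuccore",
    "id": "NC_011415.1",
    "seq_start": 1997353,
    "seq_stop": 1998831,
    "strand": 2,          # 2 = minus strand
    "rettype": "fasta",
    "retmode": "text",
}

efetch_url = f"{EFETCH}?{urlencode(params)}"
print(efetch_url)
```

This sidesteps the JavaScript-rendered IPG page entirely for the final download step.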

Code

1. Search "https://www.ncbi.nlm.nih.gov/ipg/" (e.g. for "WP_000177210.1")


import requests
from bs4 import BeautifulSoup

url = "https://www.ncbi.nlm.nih.gov/ipg/"
# params must be a dict (or list of tuples), not a bare string,
# to produce ?term=WP_000177210.1
r = requests.get(url, params={"term": "WP_000177210.1"})
if r.status_code == requests.codes.ok:
    soup = BeautifulSoup(r.text, "lxml")
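To sanity-check what that request should send, the query string that a dict of parameters produces can be reproduced with the standard library alone, no network needed:

```python
from urllib.parse import urlencode

base = "https://www.ncbi.nlm.nih.gov/ipg/"

# A dict of parameters encodes to the query string the site expects
query = urlencode({"term": "WP_000177210.1"})
expected_url = f"{base}?{query}"
print(expected_url)  # https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1
```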

2. Select the first record in the second column, "CDS Region in Nucleotide", of the table (in this case "NC_011415.1 1997353-1998831 (-)") (i.e. https://www.ncbi.nlm.nih.gov/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2)


# try 1 (wrong)
## I tried this first, but it seemed to reach only the first level of hrefs?!
for a in soup.find_all('a', href=True):
    if a['href'][:8] == "/nuccore":
        print("Found the URL:", a['href'])


# try 2 (not sure how to access a nested href)
## Based on the markup I saw in the browser's Developer Tools, I think I need
## the href inside the following nested structure. However, it didn't work.
soup.select("html div #maincontent div div div #ph-ipg div table tbody tr td a")

I'm stuck at this step now......
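For what it's worth, nesting by itself is not what blocks try 1: anchor extraction sees `<a>` tags at any depth, provided the links actually exist in the downloaded HTML (the answer below uses Selenium, which suggests the IPG results table is filled in client-side by JavaScript and so never appears in the plain `requests` response). A stdlib-only sketch against a hypothetical static snippet shaped like the Developer Tools markup:

```python
from html.parser import HTMLParser

# Hypothetical static HTML in the nested shape seen in Developer Tools
SAMPLE = """
<div id="maincontent"><div><table><tbody><tr><td>
  <a href="/nuccore/NC_011415.1?from=1997353&amp;to=1998831&amp;strand=2">NC_011415.1</a>
</td></tr></tbody></table></div></div>
"""

class NuccoreLinks(HTMLParser):
    """Collect hrefs starting with /nuccore/, however deeply nested."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href", "")
        if tag == "a" and href.startswith("/nuccore/"):
            self.links.append(href)

parser = NuccoreLinks()
parser.feed(SAMPLE)
print(parser.links)
```

So the fix is not a deeper selector but getting HTML that actually contains the table rows.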


P.S.

This is my first time working with the HTML format, and also my first time asking a question here. I may not have expressed the problem clearly; if anything is unclear, please let me know.


叮当猫咪
1 Answer

ibeautiful

Without using NCBI's REST API:

import time
from bs4 import BeautifulSoup
from selenium import webdriver

# Opens a Firefox web browser for scraping purposes
browser = webdriver.Firefox(executable_path=r'your\path\geckodriver.exe')  # Put your own path here

# Allows the page to load completely (with all of the JS)
browser.get('https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1')

# Delay turning the page into a soup so the newly fetched data is collected
time.sleep(3)

# Create the soup ("html.parser" is a valid parser name; plain "html" is not)
soup = BeautifulSoup(browser.page_source, "html.parser")

# Get all the links, keeping hrefs that include '/nuccore' but
# filtering out the bare '/nuccore' link itself
links = [a['href'] for a in soup.find_all('a', href=True)
         if '/nuccore' in a['href'] and a['href'] != '/nuccore']

Notes: you need the selenium package, and you need to install GeckoDriver.
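Once links in that shape are collected, the accession and coordinates can be split out with the standard library. A sketch, assuming the scraped href has the same form as the example in the question:

```python
from urllib.parse import urlparse, parse_qs

# Example link in the shape the scraper is assumed to collect
link = "/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2"

parsed = urlparse(link)
accession = parsed.path.rsplit("/", 1)[-1]        # 'NC_011415.1'
qs = parse_qs(parsed.query)
start = int(qs["from"][0])
stop = int(qs["to"][0])
strand = int(qs.get("strand", ["1"])[0])          # default to plus strand

print(accession, start, stop, strand)
```

These four values are exactly what a follow-up FASTA download (e.g. via NCBI efetch) needs.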
