How to dynamically get URLs one by one in Selenium Python?

I am new to Selenium with Python. I want to search for a keyword on Google, and in the results section click the first URL, grab its data, go back, click the second link, grab its data, and so on for the first 10 URLs. I have done this in the code below using hard-coded XPaths/link texts, but I want to do it dynamically, without writing a specific locator for each link. PS - I tried using a for loop but could not get it to work. In short, I want the same result as the code below, but with the URLs picked up dynamically for any keyword.


from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome(executable_path=r"E:\Sahil\selenium\chromedriver\chromedriver.exe")

driver.get("https://www.google.com/")
print(driver.title)
driver.maximize_window()
time.sleep(2)

driver.find_element(By.XPATH, "//input[@name='q']").send_keys('selenium')
driver.find_element(By.XPATH, "//div[@class='FPdoLc tfB0Bf']//input[@name='btnK']").send_keys(Keys.ENTER)
# time.sleep(5)

# 1>>>
driver.find_element(By.PARTIAL_LINK_TEXT, "Selenium Web Driver").click()
a = driver.find_elements(By.TAG_NAME, "p")
for data in a:
    print(data.text)
driver.back()

# 2>>>
driver.find_element(By.PARTIAL_LINK_TEXT, "The Selenium Browser Automation Project :: Documentation ...").click()
b = driver.find_elements(By.TAG_NAME, "p")
for data in b:
    print(data.text)
driver.back()

# 3>>>
driver.find_element(By.PARTIAL_LINK_TEXT, "Selenium Tutorial for Beginners: Learn WebDriver in 7 Days").click()
c = driver.find_elements(By.TAG_NAME, "p")
for data in c:
    print(data.text)
driver.back()

# 4>>>
driver.find_element(By.PARTIAL_LINK_TEXT, "Selenium with Python — Selenium Python Bindings 2 ...").click()
d = driver.find_elements(By.TAG_NAME, "p")
for data in d:
    print(data.text)
driver.back()

# 5>>>
driver.find_element(By.PARTIAL_LINK_TEXT, "Selenium: Definition, How it works and Why you need it ...").click()
e = driver.find_elements(By.TAG_NAME, "p")
for data in e:
    print(data.text)
driver.back()

当年话下
1 Answer

江户川乱折腾

Try this:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome("chromedriver.exe")
driver.get("https://www.google.com/")
print(driver.title)
driver.maximize_window()
time.sleep(2)

driver.find_element(By.XPATH, "//input[@name='q']").send_keys('selenium')
driver.find_element(By.XPATH, "//div[@class='FPdoLc tfB0Bf']//input[@name='btnK']").send_keys(Keys.ENTER)

a = driver.find_elements(By.XPATH, "//div[@class='r']/a")
links = []
for x in a:  # this loop collects all the result links and stores them in the 'links' list
    links.append(x.get_attribute('href'))

link_data = []
for new_url in links:  # open every webpage and store its page source in the link_data list
    print('new url : ', new_url)
    driver.get(new_url)
    link_data.append(driver.page_source)
    driver.back()

# print('link data len : ', len(link_data))
# print('link data [0] : ', link_data[0])  # print the first webpage's source

This code fetches the data from every link and saves it in the link_data list. For the p tags you can use the following code:

from bs4 import BeautifulSoup as bs

page = bs(link_data[0], 'html.parser')
p_tag = page.find_all('p')
print(p_tag)
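Note that Google changes its search-results markup from time to time, so the //div[@class='r']/a XPath above may need adjusting for the current layout. As a minimal follow-up sketch (my own addition, assuming link_data has been filled exactly as in the answer above), the BeautifulSoup step can be extended to print the p-tag text of the first 10 saved pages instead of only the first:

from bs4 import BeautifulSoup as bs

# assumes link_data was populated by the loop in the answer above
for i, source in enumerate(link_data[:10]):   # at most the first 10 result pages
    page = bs(source, 'html.parser')
    paragraphs = page.find_all('p')           # same <p> extraction as in the answer
    print('page', i + 1, ':')
    for p in paragraphs:
        print(p.get_text())

Because the page sources are already stored in link_data, this post-processing runs without any further browser round trips.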