
Scraping an additional field from a linked page

I'm trying to scrape some posts from the main page, where almost everything I need is available. But I also need a date field that only appears on each post's linked page. I tried the following callback:


from scrapy.spider import BaseSpider
from macnn_com.items import MacnnComItem
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.loader import XPathItemLoader
from scrapy.contrib.loader.processor import MapCompose, Join
from scrapy.http.request import Request


class MacnnSpider(BaseSpider):
    name = 'macnn_com'
    allowed_domains = ['macnn.com']
    start_urls = ['http://www.macnn.com']

    posts_list_xpath = '//div[@class="post"]'
    item_fields = { 'title': './/h1/a/text()',
                    'link': './/h1/a/@href',
                    'summary': './/p/text()',
                    'image': './/div[@class="post_img"]/div[@class="post_img_border"]/a/img/@original' }

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        # iterate over posts
        for qxs in hxs.select(self.posts_list_xpath):
            loader = XPathItemLoader(MacnnComItem(), selector=qxs)

            # define processors
            loader.default_input_processor = MapCompose(unicode.strip)
            loader.default_output_processor = Join()

            # skip posts with empty titles
            if loader.get_xpath('.//h1/a/text()') == []:
                continue

            # iterate over fields and add xpaths to the loader
            for field, xpath in self.item_fields.iteritems():
                loader.add_xpath(field, xpath)

            request = Request(loader.get_xpath('.//h1/a/@href')[0],
                              callback=self.parse_link, meta={'loader': loader})
            yield request
            #loader.add_value('datums', request)
            yield loader.load_item()



But I get an error like this:


ERROR: Spider must return Request, BaseItem or None, got 'XPathItemLoader' in <GET http://www.macnn.com/articles/13/06/14/sidebar.makes.it.easier.to.jump.between.columns/>


What am I doing wrong here?
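
The error says the spider yielded the XPathItemLoader object itself rather than a Request or an item, which suggests the parse_link callback (not shown above) returns the loader instead of the result of load_item(). Purely as an illustration, a minimal sketch of such a callback is below; the 'date' field name and its XPath are assumptions, not taken from the original code:

    # Hypothetical parse_link method of MacnnSpider (HtmlXPathSelector is
    # already imported above). It retrieves the loader passed via meta,
    # adds a date value scraped from the linked page, and returns the
    # finished item rather than the loader itself.
    def parse_link(self, response):
        hxs = HtmlXPathSelector(response)
        loader = response.meta['loader']
        # assumed XPath for the date on the article page
        date = hxs.select('//div[@class="post_date"]/text()').extract()
        loader.add_value('date', date)
        # returning load_item() (an item), not the loader, avoids the
        # "got 'XPathItemLoader'" error
        return loader.load_item()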

