A few questions about Scrapy

I've recently been learning to scrape web pages with Scrapy.
I want to scrape the news on a page of People's Daily Online (people.com.cn), with next-page handling, and wrote the spider below. But every run just logs `(referer: None)` and I can't tell where the problem is.
Also, when I follow the official example to scrape a Chinese page, the output always comes out as unicode escape codes; `print` doesn't help, and exporting to a CSV file is even worse. How can I get the Chinese characters to display directly?
Any pointers would be appreciated.
Thanks for replying~

Python code

#!/usr/bin/python
# -*- coding: UTF-8 -*-

from urlparse import urljoin
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.spider import BaseSpider
from renmin.items import RenminItem

class PeopleSpider(BaseSpider):
    name = 'people'
    allowed_domains = ['military.people.com.cn']
    start_urls = ['http://military.people.com.cn/GB/1077/52987/index.html']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@class="c_c"]/a/@href').extract()
        texts = hxs.select('//div[@class="c_c"]/a/text()').extract()
        print texts  # debug: a list's repr shows u'\uXXXX' escapes
        for site in sites:
            # the hrefs here are relative, but Request() needs an
            # absolute URL, so join them against the current page
            yield Request(urljoin(response.url, site),
                          callback=self.parse_post)
        # follow the pagination link whose text is u'下一页' ("next page")
        page_links = hxs.select('//div[@class="c_c"]/table/tr/td/a')
        for link in page_links:
            if link.select('text()').extract() == [u'\u4e0b\u4e00\u9875']:
                url = link.select('@href').extract()[0]
                yield Request(urljoin(response.url, url),
                              callback=self.parse)

    def parse_post(self, response):
        hxs1 = HtmlXPathSelector(response)
        i = RenminItem()
        i['title'] = hxs1.select("//h1[@id='p_title']/text()").extract()
        i['link'] = response.url
        i['cont'] = hxs1.select("//div[@id='p_content']/text()").extract()
        # multi-page articles: follow the link wrapping the "next" arrow
        # image, guarding against links that contain no <img> at all
        page_links = hxs1.select("//div[@id='p_content']/center/table/tr/td/a")
        for link in page_links:
            img = link.select('img/@src').extract()
            if img and img[0] == u'/img/next_b.gif':
                url = link.select('@href').extract()[0]
                yield Request(urljoin(response.url, url),
                              callback=self.parse_post)
        yield i

SPIDER = PeopleSpider()
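
On the second question, a likely explanation (independent of Scrapy): `extract()` returns a list, and printing a *list* prints each element's `repr()`, which under Python 2 shows `u'\uXXXX'` escapes. Printing the strings themselves shows the characters, assuming the terminal's encoding can represent Chinese. A minimal sketch:

```python
# -*- coding: utf-8 -*-
# Printing a list shows each element's repr() -- under Python 2 that is
# u'\uXXXX' escapes. Printing the string itself shows the characters
# (assuming the terminal encoding supports them).
texts = [u'\u4e0b\u4e00\u9875']   # the kind of list extract() returns

print(texts)       # list repr: escapes under Python 2
print(texts[0])    # the characters themselves

# the escaped form and the literal form are the same string
assert texts[0] == u'下一页'
```

For CSV export the data is usually fine as UTF-8 bytes; if it looks garbled, check that whatever program opens the file (e.g. a spreadsheet) is reading it as UTF-8.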

Author: whitebill2004   Posted: 2011-05-12

Add a Referer header to the request headers...
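
For example, a sketch of attaching the header (the `with_referer` helper is my own name, not a Scrapy API; Scrapy's `Request` does accept a plain dict via its `headers=` keyword):

```python
def with_referer(headers, referer):
    # Merge a Referer header into a plain header dict; the result can
    # be passed to Scrapy's Request via its headers= keyword.
    merged = dict(headers or {})
    merged['Referer'] = referer
    return merged

# Inside parse(), follow-up requests would then carry the current page
# as their Referer:
#   yield Request(url, callback=self.parse_post,
#                 headers=with_referer(None, response.url))
```

Note also that the requests made from `start_urls` will always log `(referer: None)` -- no page linked to them, so there is no referer to report; that line by itself is not an error.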

Author: mrshelly   Posted: 2011-05-14