
Scraping Toutiao Street-Photo Galleries by Analyzing Ajax Requests

 

Code:

 

import os
import re
import json
import time
from hashlib import md5
from multiprocessing import Pool

import requests
from requests.exceptions import RequestException
from pymongo import MongoClient

# Configuration
OFFSET_START = 0   # index of the first page to crawl
OFFSET_END = 20    # index of the last page to crawl (exclusive)
KEYWORD = '街拍'   # search keyword ("street snap")

# MongoDB settings
MONGO_URL = 'localhost'
MONGO_DB = 'toutiao'   # database name
MONGO_TABLE = 'jiepai'  # collection name

# folder the downloaded images are saved to
IMAGE_PATH = 'images'

headers = {
    "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
}

client = MongoClient(host=MONGO_URL)
db = client[MONGO_DB]
jiepai_table = db[MONGO_TABLE]

if not os.path.exists(IMAGE_PATH):
    os.mkdir(IMAGE_PATH)


def get_html(url, params=None):
    try:
        response = requests.get(url, params=params, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException as e:
        print("Request for %s failed:" % url, e)
        return None

# Fetch the search-index page
def get_index_page(offset, keyword):
    basic_url = 'http://www.toutiao.com/search_content/'
    params = {
        'offset': offset,
        'format': 'json',
        'keyword': keyword,
        'autoload': 'true',
        'count': 20,
        'cur_tab': 3
    }
    return get_html(basic_url, params)


def parse_index_page(html):
    '''
    Parse the index page.
    Yields: every detail-page url contained in the index page.
    '''
    if not html:
        return
    data = json.loads(html)
    if 'data' in data:
        for item in data['data']:
            article_url = item['article_url']
            if 'toutiao.com/group' in article_url:
                yield article_url


# Fetch a detail page
def get_detail_page(url):
    return get_html(url)

# Parse a detail page


def parse_detail_page(url, html):
    '''
        Parse a detail page.
        Returns its title, url and the image urls it contains.
    '''
    title_reg = re.compile('<title>(.*?)</title>')
    title_match = title_reg.search(html)
    # guard against pages without a <title>, which would raise AttributeError
    title = title_match.group(1) if title_match else ''
    gallery_reg = re.compile('var gallery = (.*?);')
    gallery = gallery_reg.search(html)
    if gallery and 'sub_images' in gallery.group(1):
        images = json.loads(gallery.group(1))['sub_images']
        image_list = [image['url'] for image in images]
        return {
            'title': title,
            'url': url,
            'images': image_list
        }
    return None


def save_to_mongodb(content):
    # insert() is deprecated in pymongo 3.x; use insert_one()
    jiepai_table.insert_one(content)
    print("Saved to MongoDB:", content)


def download_images(image_list):
    for image_url in image_list:
        try:
            response = requests.get(image_url)
            if response.status_code == 200:
                save_image(response.content)
        except RequestException as e:
            print("Failed to download image:", e)


def save_image(content):
    '''
        Hash the binary content of the image to build its file path,
        so identical images are stored only once.
    '''
    file_path = '{0}/{1}/{2}.{3}'.format(os.getcwd(),
                                         IMAGE_PATH, md5(content).hexdigest(), 'jpg')
    # skip images that were already saved
    if not os.path.exists(file_path):
        with open(file_path, 'wb') as f:
            f.write(content)


def jiepai(offset):
    html = get_index_page(offset, KEYWORD)
    if html is None:
        return
    page_urls = list(parse_index_page(html))

    for page in page_urls:
        print('get detail page:', page)
        html = get_detail_page(page)
        if html is None:
            continue
        content = parse_detail_page(page, html)
        if content:
            save_to_mongodb(content)
            download_images(content['images'])
            time.sleep(1)
    print('-------------------------------------')


if __name__ == '__main__':
    # Toutiao pages its results in steps of 20, so turn page indices into offsets
    offset_list = [x * 20 for x in range(OFFSET_START, OFFSET_END)]
    pool = Pool()
    pool.map(jiepai, offset_list)
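The de-duplication trick in `save_image` above stands on its own: the file name is the md5 of the image bytes, so the same image downloaded twice maps to the same path and is written only once. A minimal sketch of just that idea (the `image_path` helper and folder name are illustrative, not part of the script):

```python
from hashlib import md5

def image_path(content, folder='images'):
    # File name is the md5 of the bytes, so identical images
    # collapse to a single path - the dedup used in save_image.
    return '{0}/{1}.jpg'.format(folder, md5(content).hexdigest())

a = image_path(b'same bytes')
b = image_path(b'same bytes')
c = image_path(b'other bytes')
print(a == b, a == c)
# → True False
```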

Note:

In fact, the JSON returned by the search URL already contains the image list:

import requests


basic_url = 'http://www.toutiao.com/search_content/?offset={}&format=json&keyword=%E8%A1%97%E6%8B%8D&autoload=true&count=20&cur_tab=3'
url = basic_url.format(0)
data = requests.get(url).json()
for item in data['data']:
    title = item['media_name']
    image_list = [image_detail['url'] for image_detail in item['image_detail']]
    print(title, image_list)
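The `%E8%A1%97%E6%8B%8D` in the hard-coded URL above is simply the UTF-8 percent-encoding of the keyword '街拍'. A quick check, and a reminder that `requests` applies the same encoding automatically when you pass a `params` dict (as the main script does), so hand-encoding the keyword is unnecessary:

```python
from urllib.parse import quote, urlencode

# '街拍' percent-encoded, exactly as it appears in the URL above
print(quote('街拍'))
# → %E8%A1%97%E6%8B%8D

# Building the same query string from a plain dict; requests does
# this internally when given params=..., so no manual encoding is needed.
params = {'offset': 0, 'format': 'json', 'keyword': '街拍',
          'autoload': 'true', 'count': 20, 'cur_tab': 3}
print(urlencode(params))
```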

 
