How can I optimize retrieval of the 10 most common words from JSON data objects?

I am looking for ways to make this code more efficient (in both runtime and memory complexity). Should I use something like a Max-Heap? Is the poor performance caused by string concatenation, by dictionary sorting not being in-place, or by something else? Edit: I replaced the dictionary/map object by applying Counter to the list of all retrieved names (with duplicates); see the sketch below.
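For what it's worth, an explicit Max-Heap should not be needed on top of Counter: in CPython, Counter.most_common(n) already delegates to heapq.nlargest internally, which keeps a heap of only n entries instead of fully sorting all distinct words. A minimal sketch with fabricated sample names (not real API output):

from collections import Counter
import heapq

# Fabricated sample data standing in for the retrieved names.
names = ["Dr. Willis Lang IV", "Lily Purdy Jr.", "Willis Lang"]
all_words = [w for name in names for w in name.split()]

counts = Counter(all_words)
# most_common(10) uses heapq.nlargest under the hood: roughly O(k log 10)
# over k distinct words, instead of an O(k log k) full sort.
print(counts.most_common(10))

# Hand-rolled equivalent, shown only for comparison:
print(heapq.nlargest(10, counts.items(), key=lambda kv: kv[1]))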


Minimum requirement: the script should take less than 30 seconds. Current runtime: it takes 54 seconds.


# Try to implement the program efficiently (running the script should take less than 30 seconds)

import requests

# Requests is an elegant and simple HTTP library for Python, built for human beings.
# Requests is the only Non-GMO HTTP library for Python, safe for human consumption.
# Requests is not a built-in module (it does not come with the default Python installation), so you will have to install it:
# http://docs.python-requests.org/en/v2.9.1/
# Installing it for PyCharm is not so easy and takes a lot of troubleshooting (problems with pip's main version);
# use conda/pip install requests instead.

import json

# dict subclass for counting hashable objects
from collections import Counter

# import heapq

import datetime

url = 'https://api.namefake.com'

# a "global" list object. TODO: try to make it "static" (local to the file)
words = []

#####################################################################################
# Calls the site http://www.namefake.com 100 times and retrieves random names
# Examples of the name format from this site:
#   Dr. Willis Lang IV
#   Lily Purdy Jr.
#   Dameon Bogisich
#   Ms. Zora Padberg V
#   Luther Krajcik Sr.
#   Prof. Helmer Schaden            etc....
#####################################################################################
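The posted snippet ends here, before the loop that actually calls the API. A minimal completion sketch, under two assumptions that are not in the original: that the endpoint returns a JSON object with a 'name' field holding the generated name (hypothetical field name), and that the 54-second runtime is dominated by the 100 sequential HTTP round trips rather than by the counting. If so, issuing the requests concurrently matters far more than the choice of counting data structure:

import datetime
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

import requests

URL = 'https://api.namefake.com'
NUM_REQUESTS = 100

def fetch_name(_):
    # Assumption: the endpoint answers with a JSON object that has a 'name' field.
    return requests.get(URL).json()['name']

def main():
    start = datetime.datetime.now()
    # 100 sequential round trips dominate the wall-clock time; running them
    # in a thread pool overlaps the network waits.
    with ThreadPoolExecutor(max_workers=10) as pool:
        names = list(pool.map(fetch_name, range(NUM_REQUESTS)))
    # One pass over all words, duplicates included, as in the edit above.
    counter = Counter(word for name in names for word in name.split())
    print(counter.most_common(10))
    print('elapsed:', datetime.datetime.now() - start)

if __name__ == '__main__':
    main()

With ten workers, the network phase shrinks to roughly the duration of the slowest batch of requests, while the Counter pass over a few hundred words is negligible either way; a shared requests.Session could additionally reuse connections, but its thread-safety is not formally guaranteed, so the sketch uses plain requests.get.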

