Basically, I have a script that combs through a dataset of nodes/points to remove the ones that overlap. The actual script is more complicated, but I pared it down to essentially a simple overlap check that does nothing with the result, purely for demonstration.
I have tried a few variants: locks, queues, pools, adding jobs one at a time rather than in bulk. Some of the worst offenders were slower by a couple of orders of magnitude. In the end, what follows is the fastest version I could get.
The overlap-check algorithm that is sent out to the individual processes:
def check_overlap(args):
    # each job is a single dict so it can be handed straight to the pool
    tolerance = args['tolerance']
    this_coords = args['this_coords']
    that_coords = args['that_coords']
    overlaps = False
    # skip the square root when an axis delta already exceeds the tolerance
    distance_x = this_coords[0] - that_coords[0]
    if distance_x <= tolerance:
        distance_x = pow(distance_x, 2)
        distance_y = this_coords[1] - that_coords[1]
        if distance_y <= tolerance:
            distance = pow(distance_x + pow(distance_y, 2), 0.5)
            if distance <= tolerance:
                overlaps = True
    return overlaps
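Each job is a single dict, which is what lets the same function be used both directly and via the pool. A made-up example call:

# two points ~0.71 apart with tolerance 1 -> overlapping
overlaps = check_overlap({
    'tolerance': 1,
    'this_coords': (0.0, 0.0),
    'that_coords': (0.5, 0.5)
})  # True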
The processing function:
def process_coords(coords, num_processors=1, tolerance=1):
    import multiprocessing as mp
    import time

    if num_processors > 1:
        pool = mp.Pool(num_processors)
        start = time.time()
        print "Start script w/ multiprocessing"
    else:
        num_processors = 0
        start = time.time()
        print "Start script w/ standard processing"

    total_overlap_count = 0

    # outer loop through nodes
    start_index = 0
    last_index = len(coords) - 1
    while start_index <= last_index:
        # nature of the original problem means we can process all pairs of a single node at once, but not multiple, so batch jobs by outer loop
        batch_jobs = []

        # inner loop against all pairs for this node
        start_index += 1
        count_overlapping = 0
        for i in range(start_index, last_index + 1, 1):
            if num_processors:
                # add job
                batch_jobs.append({
                    'tolerance': tolerance,
                    'this_coords': coords[start_index],
                    'that_coords': coords[i]
                })
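The rest of the loop isn't shown above; simplified (not character-for-character what I'm running), the single-process branch just checks each pair inline, and once the inner loop finishes the batched jobs for that node are handed to the pool and the results are tallied:

            else:
                # single-process path: check the pair inline
                overlaps = check_overlap({
                    'tolerance': tolerance,
                    'this_coords': coords[start_index],
                    'that_coords': coords[i]
                })
                if overlaps:
                    count_overlapping += 1

        if num_processors:
            # hand the whole batch for this node to the pool in one call
            results = pool.map(check_overlap, batch_jobs)
            count_overlapping += sum(1 for r in results if r)

        total_overlap_count += count_overlapping

    if num_processors:
        pool.close()
        pool.join()

    print total_overlap_count
    print "  time: {0} s".format(time.time() - start)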
Regardless, the non-multiprocessing version consistently runs in under 0.4 seconds, while the best I can get with multiprocessing is just under 3.0 seconds. I know the algorithm here is probably too simple to really see a benefit, but given that the case above involves nearly half a million iterations, and the real case has significantly more, it seems odd to me that multiprocessing is an order of magnitude slower.
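For reference, the timings come from a driver along these lines; the exact point count and values here are made up, but roughly a thousand random points gives the ~500k pair checks mentioned above:

# hypothetical driver -- point count and values are illustrative
import random

if __name__ == '__main__':
    random.seed(0)
    # ~1,000 points -> n*(n-1)/2, i.e. roughly 500,000 pair checks
    coords = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(1000)]

    process_coords(coords, num_processors=1, tolerance=1)  # standard processing
    process_coords(coords, num_processors=4, tolerance=1)  # multiprocessing, 4 workers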
What am I missing, and what can I do to improve this?