I'm a beginner and started coding in Python a few months ago. I have a script that takes a proteome (an 800 KB file of 2,850 strings) and checks each individual protein (protein_string) against a large dataset (an 8 GB file of 23 million strings, held in the code as an id:protein_string dictionary), reporting the IDs of all identical strings (up to 8,500 IDs can be reported per query string). The current script takes 4 hours to run. What can be done in general to speed this up, and how would I convert the comparison part of my script to multiprocessing or multithreading (I'm not sure of the difference)?
import sys
from Bio import SeqIO
import time
start_time = time.time()
databasefile = sys.argv[1]
queryfile = sys.argv[2]
file_hits = "./" + sys.argv[2].split("_protein")[0] + "_ZeNovo_hits_v1.txt"
file_report = "./" + sys.argv[2].split("_protein")[0] + "_ZeNovo_report_v1.txt"
format = "fasta"
output_file = open(file_hits, 'w')
output_file_2 = open(file_report,'w')
sequences_dict = {}
output_file.write("{}\t{}\n".format("protein_query", "hits"))
for record in SeqIO.parse(databasefile, format):
    sequences_dict[record.description] = str(record.seq)
print("processed database in --- {:.3f} seconds ---".format(time.time() - start_time))
processed_counter = 0
for record in SeqIO.parse(queryfile, format):
    query_seq = str(record.seq)
    count = 0
    output_file.write("{}\t".format(record.description))
    for id, seq in sequences_dict.items():
        if seq == query_seq:
            count += 1
            output_file.write("{}\t".format(id))
    processed_counter += 1
    output_file.write("\n")
    print("processed protein " + str(processed_counter))
    output_file_2.write(record.description + '\t' + str(count) + '\t' + str(len(record.seq)) + '\t' + str(record.seq) + '\n')
output_file.close()
output_file_2.close()
print("Done in --- {:.3f} seconds ---".format(time.time() - start_time))
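For context on what I've considered: since the inner loop scans all 23 million dictionary entries for every query, I understand the total work is roughly 2,850 × 23 million comparisons. One idea I've seen mentioned is inverting the dictionary once (sequence string → list of IDs) so each query becomes a single hash lookup. Here is a minimal sketch of that idea with tiny made-up data (`db`, `queries`, and the IDs are just placeholders, not my real files):

```python
from collections import defaultdict

# Placeholder stand-in for the real id:protein_string dictionary that the
# script builds from SeqIO.parse(databasefile, "fasta").
db = {
    "id1": "MKV",
    "id2": "MKV",
    "id3": "AAA",
}

# Build the inverted index once: sequence -> list of IDs with that sequence.
seq_to_ids = defaultdict(list)
for seq_id, seq in db.items():
    seq_to_ids[seq].append(seq_id)

# Each query is now one dictionary lookup instead of a full scan
# over every database entry.
queries = ["MKV", "CCC"]
for q in queries:
    hits = seq_to_ids.get(q, [])
    print(q, len(hits), hits)
```

Would this kind of one-time preprocessing make multiprocessing unnecessary here, or is parallelizing the comparison loop still worthwhile?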