
0 votes
293 views
in Technique by (71.8m points)

How to run Python scripts in parallel for bioinformatics

I wish to use Python to read in a FASTA sequence file and convert it into a pandas DataFrame. I use the following script:

from Bio import SeqIO
import pandas as pd

def fasta2df(infile):
    records = SeqIO.parse(infile, 'fasta')
    seqList = []
    for record in records:
        desp = record.description
        # print(desp)
        seq = list(record.seq._data.upper())
        seqList.append([desp] + seq)
        seq_df = pd.DataFrame(seqList)
        print(seq_df.shape)
        seq_df.columns=['strainName']+list(range(1, seq_df.shape[1]))
    return seq_df


if __name__ == "__main__":
    path = 'path/to/the/fasta/file'
    input = path + 'GISAIDspikeprot0119.selection.fasta'
    df = fasta2df(input)

The 'GISAIDspikeprot0119.selection.fasta' file can be found at https://drive.google.com/file/d/1F5Ir5S6h9rFsVUQkDdZpomiWo9_bXtaW/view?usp=sharing

The script runs on my Linux workstation with only one CPU core. Is it possible to run it with more cores (multiple processes) so that it finishes much faster? What would the code for that look like?

with many thanks!

Question from: https://stackoverflow.com/questions/65877758/how-to-parallel-running-of-python-scripts-on-bioinformatics


1 Reply

0 votes
by (71.8m points)

Before throwing more CPUs at your problem, you should invest some time in inspecting which parts of your code are slow.
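
For example, Python's built-in cProfile module will show you where the time goes. A minimal sketch, assuming fasta2df and your FASTA file are available as in your question:

import cProfile
import pstats

# Profile one call to fasta2df and dump the stats to a file.
cProfile.run("fasta2df('GISAIDspikeprot0119.selection.fasta')", 'fasta2df.prof')

# Print the ten entries with the largest cumulative time.
pstats.Stats('fasta2df.prof').sort_stats('cumulative').print_stats(10)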

In your case, you are executing the expensive conversion seq_df = pd.DataFrame(seqList) in every loop iteration. That wastes CPU time, since each intermediate seq_df is overwritten on the next iteration.

Your code took over 15 minutes on my machine. After moving pd.DataFrame(seqList) and the print statement out of the loop, it is down to ~15 seconds:

def fasta2df(infile):
    records = SeqIO.parse(infile, 'fasta')
    seqList = []
    for record in records:
        desp = record.description
        seq = list(record.seq._data.upper())
        seqList.append([desp] + seq)
    # build the DataFrame once, after the loop, instead of on every iteration
    seq_df = pd.DataFrame(seqList)
    seq_df.columns = ['strainName'] + list(range(1, seq_df.shape[1]))
    return seq_df

In fact, almost all of the time is spent in the line seq_df = pd.DataFrame(seqList), about 13 seconds for me. By setting the dtype explicitly to string, we can bring it down to ~7 seconds:

def fasta2df(infile):
    records = SeqIO.parse(infile, 'fasta')
    seqList = []
    for record in records:
        desp = record.description
        seq = list(record.seq._data.upper())
        seqList.append([desp] + seq)
    # an explicit dtype skips pandas' per-column type inference
    seq_df = pd.DataFrame(seqList, dtype="string")
    seq_df.columns = ['strainName'] + list(range(1, seq_df.shape[1]))
    return seq_df
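
Calling the function is unchanged from your script; a quick check of the result (using the placeholder path from your question):

if __name__ == "__main__":
    path = 'path/to/the/fasta/file'
    df = fasta2df(path + 'GISAIDspikeprot0119.selection.fasta')
    print(df.shape)  # one row per sequence, one column per sequence position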

With this new performance, I highly doubt that you can improve the speed much further through parallel processing.
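
For completeness, since you asked what the parallel code would look like: below is a minimal sketch using multiprocessing.Pool to convert records in worker processes (the helper names are mine, not from your script). Note that the DataFrame itself is still built in a single process, which is where most of the remaining time is spent, so do not expect much from this:

from multiprocessing import Pool

from Bio import SeqIO
import pandas as pd

def record_to_row(record):
    # turn one SeqRecord into one row: [description, residue1, residue2, ...]
    # str(record.seq) is the public equivalent of the private record.seq._data
    return [record.description] + list(str(record.seq).upper())

def fasta2df_parallel(infile, processes=4):
    records = list(SeqIO.parse(infile, 'fasta'))
    with Pool(processes) as pool:
        # convert records to rows in parallel, in batches of 64
        seqList = pool.map(record_to_row, records, chunksize=64)
    # the DataFrame construction still runs in the parent process
    seq_df = pd.DataFrame(seqList, dtype="string")
    seq_df.columns = ['strainName'] + list(range(1, seq_df.shape[1]))
    return seq_df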

