Article Citation

Y. Fournier, J. Bonelle, C. Moulinec, Z. Shang, A. G. Sunderland and J. C. Uribe. Optimizing Code_Saturne computations on Petascale systems. Computers & Fluids, 2011, 45: 103-108.

Cited by the following article:

  • Title: Performance Analysis of Large Scale Parallel Matrix Multiplication

    Authors: 尚智 (Z. Shang), 陈硕 (S. Chen)

    Keywords: Large Scale Computing; Massive Data; Parallel Computing; Distributed Computing; Matrix Multiplication

    Journal: Software Engineering and Applications, Vol. 2, No. 1, 2013-02-26

    Abstract: Large-scale computing has become unavoidable in modern scientific research and practical engineering applications, and it typically involves the processing of massive data sets. Parallel computing is therefore employed to address both fast large-scale computation and massive data handling. MPI-based parallel computing readily supports distributed computation: the massive data are scattered across a cluster supercomputer so that each processor (CPU) handles a small portion of the data, achieving fast, large-scale computation. In this work, large-scale matrix multiplication was implemented with MPI parallel programming, and the parallel performance of standard point-to-point communication was tested under different communication mechanisms (blocking, non-blocking, and mixed). For large matrix multiplication, a complete scheme of fast standard communication that prevents deadlock was established. The results lay a foundation and provide a useful reference for further practical applications.
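The abstract describes distributing a large matrix multiplication by scattering data across processors so each handles a small portion. The sketch below illustrates one common form of that decomposition: partitioning the rows of A into blocks, multiplying each block by the full B, and gathering the partial results. It is a hypothetical illustration in plain Python; the paper itself uses MPI, where each row block would be delivered to a rank via blocking (`MPI_Send`/`MPI_Recv`) or non-blocking (`MPI_Isend`/`MPI_Irecv`) point-to-point calls, and the function names here (`split_rows`, `local_matmul`, `parallel_matmul`) are invented for this example.

```python
def split_rows(matrix, nprocs):
    """Partition the rows of `matrix` into nprocs nearly equal blocks.
    In an MPI program, block `rank` would be sent to processor `rank`."""
    n = len(matrix)
    base, extra = divmod(n, nprocs)
    blocks, start = [], 0
    for rank in range(nprocs):
        size = base + (1 if rank < extra else 0)
        blocks.append(matrix[start:start + size])
        start += size
    return blocks

def local_matmul(block_a, b):
    """Each worker multiplies its row block of A by the full matrix B."""
    cols = len(b[0])
    inner = len(b)
    return [[sum(row[k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for row in block_a]

def parallel_matmul(a, b, nprocs=4):
    """Concatenate the partial products, as a root rank would do after
    receiving each worker's result block."""
    result = []
    for block in split_rows(a, nprocs):
        result.extend(local_matmul(block, b))
    return result

# Small check: a 3x2 matrix times a 2x2 matrix, split over 2 "processors".
a = [[1, 2], [3, 4], [5, 6]]
b = [[7, 8], [9, 10]]
print(parallel_matmul(a, b, nprocs=2))  # [[25, 28], [57, 64], [89, 100]]
```

Because each row block of A pairs with the full B, the workers need no communication with one another during the multiply; only the initial scatter and final gather involve point-to-point messages, which is where the blocking versus non-blocking trade-offs tested in the paper arise.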
