BMDFM
BMDFM (Binary Modular Dataflow Machine) is software that enables an application to run in parallel on shared-memory symmetric multiprocessors (SMP), using the multiple processors to speed up the execution of a single application.
BMDFM automatically identifies and exploits parallelism through static and, mainly, dynamic scheduling of the dataflow instruction sequences derived from the formerly sequential program.
The BMDFM dynamic scheduling subsystem performs an SMP emulation of a tagged-token dataflow machine to provide transparent dataflow semantics for applications. No directives for parallel execution are required.
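The core idea behind a tagged-token dataflow machine is that an instruction may execute as soon as all of its operand tokens have arrived, independently of program order. The following is a minimal sketch in C of that firing rule; the struct layout, field names, and `deliver_token` helper are illustrative assumptions for this sketch, not BMDFM's actual data structures or API.

<syntaxhighlight lang="c">
/*
 * Minimal sketch (not BMDFM code) of the tagged-token dataflow firing rule:
 * an instruction becomes ready to execute as soon as all of its operand
 * tokens have arrived, regardless of the original program order.
 */
#include <stdio.h>

#define MAX_OPERANDS 2

typedef struct {
    const char *name;                 /* hypothetical instruction label      */
    int         needed;               /* operand tokens required to fire     */
    int         arrived;              /* operand tokens received so far      */
    double      operands[MAX_OPERANDS];
} Instruction;

/* Deliver one operand token; fire the instruction once all have arrived. */
static void deliver_token(Instruction *insn, double value)
{
    insn->operands[insn->arrived++] = value;
    if (insn->arrived == insn->needed) {
        /* Firing rule: all inputs are present, so the instruction may run
         * now, on any free processor, independent of source-code order.    */
        double result = insn->operands[0] + insn->operands[1];
        printf("%s fired: %.1f\n", insn->name, result);
    }
}

int main(void)
{
    Instruction add = { "add_a_b", 2, 0, { 0.0, 0.0 } };

    /* Tokens may arrive in any order and at unpredictable times, e.g. when
     * produced by other instructions running on other CPUs.                */
    deliver_token(&add, 3.0);   /* nothing fires yet: one operand missing   */
    deliver_token(&add, 4.0);   /* both operands present -> fires           */
    return 0;
}
</syntaxhighlight>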
==Background==

Today's parallel shared-memory symmetric multiprocessors (SMP) are complex machines, where a large number of architectural aspects have to be addressed simultaneously in order to achieve high performance. Recent commodity SMP machines for technical computing can have many tightly coupled cores (good examples are SMP machines based on Intel multi-core processors or IBM POWER multi-core processors). The number of cores per SMP node will probably double over the next couple of years, according to the plans announced by computer manufacturers.
Multi-core processors are intended to exploit thread-level parallelism identified by software. Hence, the most challenging task is to find an efficient way to harness the power of multi-core processors for processing an application program in parallel. The existing OpenMP paradigm of static parallelization with a fork-join runtime library (a sketch of this style follows the list below) works well only for loop-intensive, regular, array-based computations; compile-time parallelization methods are weak in general and almost inapplicable for irregular applications:
* There are many operations that take a non-deterministic amount of time making it difficult to know exactly when certain pieces of data will become available.
* A memory hierarchy with multi-level caches has unpredictable memory access latencies.
* In multi-user mode, other people's code can use up resources or slow down part of the computation in a way that the compiler cannot account for.
* Compile-time inter-procedural and cross-conditional optimizations are hard (very often impossible) because compilers cannot figure out which way a conditional will go or cannot optimize across a function call.
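For contrast with BMDFM's dynamic scheduling, the sketch below shows the static fork-join OpenMP style the text refers to, applied to the kind of regular, loop-intensive array computation it handles well. The array names and sizes are arbitrary assumptions for illustration; this is not code from the BMDFM sources.

<syntaxhighlight lang="c">
/*
 * Minimal sketch of static fork-join parallelization with OpenMP: the
 * iteration space of a regular array loop is split among threads at the
 * fork, and all threads synchronize at the join after the loop.
 * Compile with: cc -fopenmp openmp_sketch.c
 */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {   /* regular, predictable initialization  */
        a[i] = (double)i;
        b[i] = 2.0 * i;
    }

    /* Fork: threads are spawned and the loop iterations are partitioned
     * statically; join: all threads wait at the end of the loop.           */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[10] = %.1f\n", c[10]);
    return 0;
}
</syntaxhighlight>

This style works because the loop bounds and data accesses are known statically; for the irregular cases listed above, the work split cannot be decided at compile or fork time, which is where dynamic dataflow scheduling applies.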

Excerpt source: Wikipedia, the free encyclopedia.
Read the full "BMDFM" article on Wikipedia.