
MPI Wrapper Class API

class mpiBase.MsgrBoost(commObj, verbose=False)[source]

Bases: mpiBase.ParallelMsgr

Derived class implementing the interface in the base class using the Boost MPI libraries

allReduceSum(data)[source]

Add data elements in localData across processors and put on all world procs

barrier()[source]

Barrier for world comm

bcast(data, root=0)[source]

Broadcast from root proc to world comm

gatherList(data, root=0)[source]

Gather data in list from world comm on root proc and return as a continuous list

reduceSum(data, root=0)[source]

Add data elements in localData across processors and put result on root proc
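
The reduction calls follow the same pattern in every derived messenger. Below is a minimal sketch of the difference between reduceSum and allReduceSum; it assumes a messenger obtained from mpiBase.getMPIObject() (documented at the end of this page) and that both calls return the summed value, which should be checked against the source.

    import mpiBase

    p = mpiBase.getMPIObject(verbose=False, localVerbose=False)

    # Each processor contributes its own local value (here, its rank).
    localData = p.getRank()

    # reduceSum: the summed result is placed on the root processor only.
    rootSum = p.reduceSum(localData, root=0)

    # allReduceSum: every processor in the world communicator receives the sum.
    worldSum = p.allReduceSum(localData)

    if p.getRank() == 0:
        print("sum on root:", rootSum)
    print("sum everywhere:", worldSum)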

class mpiBase.MsgrMpi4py(commObj, verbose=False)[source]

Bases: mpiBase.ParallelMsgr

Derived class implementing the interface in the base class using the mpi4py library

allReduceSum(localData)[source]

Add data elements in localData across processors and put on all world procs

barrier()[source]

Barrier for world comm

bcast(localData, root=0)[source]

Broadcast from root proc to world comm

gatherList(localData, root=0)[source]

Gather data in list from world comm on root proc and return as a continuous list

reduceSum(localData, root=0)[source]

Add data elements in localData across processors and put result on root proc
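
A sketch of a broadcast-then-gather round trip through the same interface; the parameter dictionary and the local values are purely illustrative, and the assumption that bcast returns the broadcast data (rather than only distributing it) should be checked against the source.

    import mpiBase

    p = mpiBase.getMPIObject(verbose=False, localVerbose=False)
    rank = p.getRank()

    # Only the root processor knows the parameters; the others hold a placeholder.
    params = {"nsteps": 100} if rank == 0 else None

    # After bcast, every processor in the world communicator has the root's value.
    params = p.bcast(params, root=0)

    # Each processor builds a local list; gatherList concatenates them on root.
    localList = [rank, rank + 0.5]
    fullList = p.gatherList(localList, root=0)

    if rank == 0:
        print(fullList)  # one continuous list on the root processor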

class mpiBase.MsgrSerial(verbose=False)[source]

Bases: mpiBase.ParallelMsgr

Derived class implementing the interface in the base class using no MPI libraries (serial)

allReduceSum(data)[source]

Add data elements in localData across processors and put on all world procs

barrier()[source]

Barrier for world comm

bcast(data, root=0)[source]

Broadcast from root proc to world comm

gatherList(data, root=0)[source]

Gather data in list from world comm on root proc and return as a continuous list

reduceSum(data, root=0)[source]

Add data elements in localData across processors and put result on root proc
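
Because MsgrSerial implements the same interface, code written against the messenger API runs unchanged on a single processor with no MPI installed. A minimal sketch, assuming the collective calls reduce to identity operations in the serial case:

    import mpiBase

    p = mpiBase.MsgrSerial(verbose=False)

    p.barrier()                            # trivially returns with one processor
    total = p.allReduceSum(7)              # assumed to hand back the local value
    chunk = p.splitListOnProcs([1, 2, 3])  # the single processor gets the whole list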

class mpiBase.ParallelMsgr(rk, sz, verbose=False)[source]

Base class defining the interface to parallel communication methods

allReduceSum(data)[source]

Must be implemented in a derived class

barrier()[source]

Must be implemented in a derived class

gatherList(data, root=0)[source]

Must be implemented in a derived class

getCommSize()[source]

Number of processors in world communicator

getRank()[source]

ID of current processor

Returns:
integer rank of current processor

reduceSum(data, root=0)[source]

Must be implemented in a derived class

splitListOnProcs(data)[source]

Split up the global input list as equally as possible and return the 'chunk' of the list belonging to each processor. Note that no communication is performed; the input data is known globally.

The split guarantees one chunk per processor rank, but the local chunks will not in general all be the same length.

NOTE: If the number of processors exceeds len(data), some chunks of the split list will have zero length, so the local list on some processors can be empty. The calling program should account for this.

Args:
data: list of data [..]
Returns:
list of data for this local processor

tupleOfLists2List(tup)[source]

Take a tuple of lists and convert it to a single list, e.g. ([a,b],[c,d],[e,f],...) -> [a,b,c,d,e,f]
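
splitListOnProcs and gatherList combine into the usual split-compute-gather pattern. A sketch, assuming a messenger from getMPIObject(); the squaring step stands in for any real per-item work:

    import mpiBase

    p = mpiBase.getMPIObject(verbose=False, localVerbose=False)

    # The full work list is known on every processor; no communication happens here.
    globalWork = list(range(10))
    myWork = p.splitListOnProcs(globalWork)  # may be empty if procs outnumber items

    # Each processor handles only its own chunk.
    myResults = [x * x for x in myWork]

    # Collect every chunk back on the root processor as one continuous list.
    results = p.gatherList(myResults, root=0)

    if p.getRank() == 0:
        print(results)

tupleOfLists2List provides the same flattening for a raw tuple of per-processor lists, as in the ([a,b],[c,d],...) -> [a,b,c,d,...] example above.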

mpiBase.getMPIObject(verbose=True, localVerbose=True)[source]

Driver for the comm classes. Selects an MPI comm module if one is found and builds the appropriate derived class

Returns: an mpi object of the appropriate derived class
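
A sketch of the typical entry point: the same script runs whether the Boost MPI bindings, mpi4py, or no MPI module is available, because getMPIObject() selects the derived class at run time.

    import mpiBase

    p = mpiBase.getMPIObject(verbose=True, localVerbose=True)

    print("processor %d of %d" % (p.getRank(), p.getCommSize()))
    p.barrier()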

mpiBase.getMPISerialObject(verbose=True)[source]

Driver for the comm classes. Forces the serial comm module

Returns: an mpi serial object
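
Forcing the serial messenger is useful for debugging or single-processor runs even when an MPI module is installed; a minimal sketch, with the single-processor communicator size stated as an assumption:

    import mpiBase

    p = mpiBase.getMPISerialObject(verbose=False)

    # Assumed: the serial messenger reports one processor, rank 0.
    print(p.getRank(), p.getCommSize())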