| From | blmblm@myrealbox.com <blmblm.myrealbox@gmail.com> |
|---|---|
| Newsgroups | comp.parallel.mpi |
| Subject | Re: Running "MPI" program on "Cluster" |
| Date | 2013-09-26 16:45 +0000 |
| Organization | None |
| Message-ID | <baj6ltFfmi1U1@mid.individual.net> |
| References | <320c9a76-fe5c-4ff1-b674-5248db10759d@googlegroups.com> |
In article <320c9a76-fe5c-4ff1-b674-5248db10759d@googlegroups.com>,
Meenal Chougule <meenal.chougule@gmail.com> wrote:
> Hello everyone,
>
> I have a program having Master and Slave kind of nature. I want to execute those on a cluster.
>
> for cluster there is a master and 2 slaves. cluster master does decomposition of work and slave executes that.
>
> i know IP's of both slave but i want to know the command by which i can execute the or options in mpirun.
>
I'm not sure what "the or options in mpirun" means here (maybe a
typo?), and I'm not sure I understand your situation, but some
comments/questions that might clarify:
Traditionally (MPI 1.x) MPI programs were strictly SPMD ("single
program, multiple data"). In this model you would have a single
executable, compiled from code that includes the processing for
both master and slave; you would launch three copies of this
executable with "mpirun", and each copy would decide (based on the
rank reported by MPI_Comm_rank) whether it should behave as the
master or as a slave.
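A minimal sketch of what the SPMD version might look like (the
actual decomposition and work are elided; only the rank-based
branching is shown):

```c
/* SPMD sketch: one executable; rank 0 acts as master, the rest as slaves. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my rank: 0 .. size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    if (rank == 0) {
        /* master: decompose the work and hand pieces to ranks 1 .. size-1
           (e.g., with MPI_Send / MPI_Recv) */
        printf("master running; %d slaves available\n", size - 1);
    } else {
        /* slave: receive a piece of work from rank 0 and execute it */
        printf("slave %d running\n", rank);
    }

    MPI_Finalize();
    return 0;
}
```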
MPI 2.x adds other options -- a running program can spawn new
processes (MPI_Comm_spawn), and mpirun can launch more than one
executable.
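Concretely, the launch commands might look something like the
following (flags are in the Open MPI style and vary somewhat between
implementations; the hostfile and program names here are made up):

```shell
# myhostfile lists the machines, one per line -- hostnames or IP
# addresses of the master and the two slaves.

# MPI 1.x style: three copies of a single executable
mpirun -np 3 --hostfile myhostfile ./myprog

# MPI 2.x style: separate master and slave executables
mpirun -np 1 ./master : -np 2 ./slave
```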
Does your program fit the MPI 1.x model, or does it use some of the
MPI 2.x features?
--
B. L. Massingill
ObDisclaimer: I don't speak for my employers; they return the favor.