[CIG-SEISMO] Some questions about SPECFEM3D_V2.0.1

Dimitri Komatitsch komatitsch at lma.cnrs-mrs.fr
Fri Oct 21 07:00:26 PDT 2011


Dear Jenny,

"need at least one receiver" means that you have no seismic station (no 
seismic receiver / recording station) located in the mesh. Just make 
sure you have at least one station in DATA/STATIONS located inside the 
region that you want to study.
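For reference, each line of DATA/STATIONS describes one receiver with six columns: station name, network code, latitude, longitude, elevation, and burial depth. A minimal sketch of such a file (the station name, network code, and coordinates below are made up for illustration; use a point that actually lies inside your mesh region):

```
STA01  XX  33.5000  -116.5000  0.0  0.0
```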

The code stops in this case because it could run all the calculations, but it would record nothing at the end of the run; it would therefore only waste CPU time.

I do not understand your first question (about NPROC being different 
from NPROC_XI * NPROC_ETA). In principle NPROC is computed as NPROC_XI * 
NPROC_ETA, as you guessed; hopefully other developers (Daniel maybe?) 
can answer (and cc us).
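As a quick sanity check before launching the runs, one can verify that the NPROC value in the Par_file matches NPROC_XI * NPROC_ETA from the Mesh_Par_file. A small sketch in Python (the "name = value" line layout is an assumption about the parameter-file format; the parameter names come from the discussion above):

```python
import re

def read_param(text, name):
    """Extract an integer parameter value from a 'name = value' line."""
    m = re.search(rf"^\s*{name}\s*=\s*(\d+)", text, re.MULTILINE)
    if m is None:
        raise ValueError(f"parameter {name} not found")
    return int(m.group(1))

def check_nproc(mesh_par_text, par_text):
    """Return True if NPROC equals NPROC_XI * NPROC_ETA across the two files."""
    nproc_xi = read_param(mesh_par_text, "NPROC_XI")
    nproc_eta = read_param(mesh_par_text, "NPROC_ETA")
    nproc = read_param(par_text, "NPROC")
    return nproc == nproc_xi * nproc_eta
```

For example, `check_nproc("NPROC_XI = 2\nNPROC_ETA = 2", "NPROC = 4")` returns True, while a mismatched NPROC returns False.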

Thank you,
Dimitri.

On 10/21/2011 10:36 AM, Jenny Yan wrote:
> Hello:
> In recent days I have been using SPECFEM3D_V2.0.1, but I have run into
> some difficulties, so I have some questions about this software.
> From the manual of SPECFEM3D_V2.0.0, I have learned that "xmeshfem3D"
> needs to read "Mesh_Par_file" and runs on "NPROC_XI*NPROC_ETA" MPI
> processes, and that "xgenerate_databases" needs to read "Par_file" and
> runs on NPROC MPI processes. But I find that NPROC_XI*NPROC_ETA does not
> equal NPROC in most of the examples in the code. Why?
> So I always modify the parameter NPROC, setting it equal to
> NPROC_XI*NPROC_ETA; then go_mesher and go_generate_databases work fine.
> But go_solver always fails like this: "
>   need at least one receiver
>   Error detected, aborting MPI... proc            0
>   need at least one receiver
>   Error detected, aborting MPI... proc            1
>   need at least one receiver
>   Error detected, aborting MPI... proc            2
>   need at least one receiver
>   Error detected, aborting MPI... proc            3
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
> with errorcode 30.
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 3 with PID 19509 on
> node node0975 exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> [node0975:19475] 3 more processes have sent help message
> help-mpi-api.txt / mpi-abort
> [node0975:19475] Set MCA parameter "orte_base_help_aggregate" to 0 to
> see all help / error messages
> "
> That is all. Thank you.
> Best regards,
> Jenny
>
>
>
> _______________________________________________
> CIG-SEISMO mailing list
> CIG-SEISMO at geodynamics.org
> http://geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo

-- 
Dimitri Komatitsch - komatitsch aT lma.cnrs-mrs.fr
CNRS Research Director (DR CNRS), Laboratory of Mechanics and Acoustics,
UPR 7051, Marseille, France  http://www.univ-pau.fr/~dkomati1

