From michael.gineste at ntnu.no  Thu Feb  1 03:05:03 2018
From: michael.gineste at ntnu.no (Michael Gineste)
Date: Thu, 1 Feb 2018 12:05:03 +0100
Subject: [CIG-SEISMO] bug fixes
Message-ID: <5579cc81-0e65-11f1-4aeb-d9bbed75692e@ntnu.no>

Hi SpecFem2D team,

I encountered some issues which must be bugs and which have simple
solutions. The diffs that fix these are attached.

The first is in get_simulation_domains.f90 and, from reading the
surrounding code, the line in question must be wrong. It got triggered
when using model=default and supplying an external tomography model.

The second, in locate_receivers.f90, is a bit more mysterious and I do not
know the exact cause, but the modification in the attached diff file
seemingly fixes it. It occurred when IMAIN=42 and not =ISTANDARD_OUTPUT,
i.e. output got written to a file instead of the screen.

I don't know whether this information is useful for anything, but I used
gfortran v. 5.4.0 and openmpi 1.10.2.

Best regards,
Michael Gineste
-------------- next part --------------
diff --git a/src/specfem2D/locate_receivers.F90 b/src/specfem2D/locate_receivers.F90
index 1004803..54a8390 100644
--- a/src/specfem2D/locate_receivers.F90
+++ b/src/specfem2D/locate_receivers.F90
@@ -166,6 +166,12 @@
 ! thus, please do *NOT* remove this statement (it took us a while to discover this nasty compiler problem).
 ! GNU gfortran and all other compilers we have tested are fine and do not have any problem even if this
 ! statement is removed. Some releases of Intel ifort are also OK.
+
+  ! Without this line write before the flush, the following error occurs when std. out is to a file:
+  ! At line 94 of file src/shared/exit_mpi.F90 (unit = 42)
+  ! Fortran runtime error: Specified UNIT in FLUSH is not connected
+  write(IMAIN,*) ' '
+  call flush_IMAIN()

  distmin_squared = dist_squared
-------------- next part --------------
diff --git a/src/specfem2D/get_simulation_domains.f90 b/src/specfem2D/get_simulation_domains.f90
index 586c39b..5ad302c 100644
--- a/src/specfem2D/get_simulation_domains.f90
+++ b/src/specfem2D/get_simulation_domains.f90
@@ -148,7 +148,7 @@
 ispec_is_elastic(ispec) = .true.
 ispec_is_acoustic(ispec) = .false.
 else if (vsext(i,j,ispec) < TINYVAL) then
- ispec_is_acoustic(ispec) = .false.
+ ispec_is_acoustic(ispec) = .true.
 endif
 endif

From komatitsch at lma.cnrs-mrs.fr  Fri Feb  2 06:05:11 2018
From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch)
Date: Fri, 2 Feb 2018 15:05:11 +0100
Subject: [CIG-SEISMO] bug fixes
In-Reply-To: <5579cc81-0e65-11f1-4aeb-d9bbed75692e@ntnu.no>
References: <5579cc81-0e65-11f1-4aeb-d9bbed75692e@ntnu.no>
Message-ID: 

Hi Michael,

Thanks a lot. Very useful. I will commit the two changes to Git in a few
minutes.

Thanks again for your bug report,
Best regards,
Dimitri.

On 02/01/2018 12:05 PM, Michael Gineste wrote:
> Hi SpecFem2D team,
>
> I encountered some issues which must be bugs and which has simple
> solutions. The diffs that fixes these are attached.
>
> The first is in get_simulation_domains.f90 and from reading the
> surrounding code, the line in question must be wrong. It got triggered
> when using model=default and supplying an external tomography model.
>
> The second in locate_receivers.f90 is bit more mysterious and I do not
> know the exact cause, but the modification in the attached diff file
> seemingly fixes it. It occured when IMAIN=42 and not =ISTANDARD_OUTPUT,
> i.e. output got written to file instead of screen.
>
> Don't know if these information are useful for anything, but I used
> gfortran v. 5.4.0 and openmpi 1.10.2.
> > Best regards, > Michael Gineste > > > > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr From komatitsch at lma.cnrs-mrs.fr Fri Feb 2 06:51:53 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Fri, 2 Feb 2018 15:51:53 +0100 Subject: [CIG-SEISMO] bug fixes In-Reply-To: References: <5579cc81-0e65-11f1-4aeb-d9bbed75692e@ntnu.no> Message-ID: <3c156785-deea-b08b-0b12-7d45c2e4fe02@lma.cnrs-mrs.fr> Hi Michael, I tried to fix the bug but noticed it was already fixed (at least in the "devel" branch of the code): else if (vsext(i,j,ispec) < TINYVAL) then ispec_is_acoustic(ispec) = .true. endif Thus, if you are using a slightly older version of the code, you can switch to the "devel" branch and then type git pull. On my side I am going to merge the "devel" branch into the main branch, to make sure that bug is fixed there as well. Thanks, Best regards, Dimitri. On 02/02/2018 03:05 PM, Dimitri Komatitsch wrote: > > Hi Michael, > > Thanks a lot. Very useful. I will commit the two changes to Git in a few > minutes. > > Thanks again for your bug report, > Best regards, > Dimitri. > > On 02/01/2018 12:05 PM, Michael Gineste wrote: >> Hi SpecFem2D team, >> >> I encountered some issues which must be bugs and which has simple >> solutions. The diffs that fixes these are attached. >> >> The first is in get_simulation_domains.f90 and from reading the >> surrounding code, the line in question must be wrong. It got triggered >> when using model=default and supplying an external tomography model. >> >> The second in locate_receivers.f90 is bit more mysterious and I do not >> know the exact cause, but the modification in the attached diff file >> seemingly fixes it. It occured when IMAIN=42 and not >> =ISTANDARD_OUTPUT, i.e. output got written to file instead of screen. >> >> Don't know if these information are useful for anything, but I used >> gfortran v. 5.4.0 and openmpi 1.10.2. >> >> Best regards, >> Michael Gineste >> >> >> >> >> _______________________________________________ >> CIG-SEISMO mailing list >> CIG-SEISMO at geodynamics.org >> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >> > -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr From ljhwang at ucdavis.edu Mon Feb 5 11:21:58 2018 From: ljhwang at ucdavis.edu (Lorraine Hwang) Date: Mon, 5 Feb 2018 11:21:58 -0800 Subject: [CIG-SEISMO] 10th ACES Workshop - SAVE THE DATE September 2018 Message-ID: <40FAB321-03EB-4E48-8C27-1A710C52440F@ucdavis.edu> We are pleased to announce that the 10th ACES International Workshop will be held at Minami Awaji Royal Hotel , Awaji Island, Japan from September 25 to September 28, 2018. Awaji Island is a charming island located 40 km southwest of Kobe. The ACES (ACES ) (APEC Cooperation for Earthquake Simulation) is a multi-lateral grand challenge science research cooperation of APEC. The project is supported by Australia, China, Japan, and USA (core members) as well as Canada, Chinese Taipei, New Zealand, and South Korea (members). It involves leading international earthquake science research groups. 
ACES aims to develop realistic simulation models, thus providing a
"virtual laboratory" to probe earthquake behavior and to assess its
potential hazards. This capability will offer a new opportunity to gain an
improved understanding of the earthquake process and related phenomena,
and of earthquake science in general. In this workshop we will focus on
earthquake physics, including strain accumulation, earthquake nucleation
and propagation, and redistribution of stress and healing. The workshop
consists of the following plenary scientific sessions.

Session 1: Stress accumulation and earthquake preparation
Session 2: Earthquake nucleation process
Session 3: Earthquake dynamic rupture
Session 4: Seismic wave propagation and tsunamis
Session 5: Static deformation, stress redistribution and viscoelastic relaxation
Session 6: Aftershocks and seismic activity
Session 7: Earthquake cycles

Workshop Website: http://quaketm.bosai.go.jp/~shiqing/ACES2018/index_aces.html
Abstracts Submission and Registration: Opens April 18
Travel Support: We anticipate limited travel support will be available for US participants.

Organizing Committee:
Eiichi Fukuyama (Executive Director, Japan)
Huilin Xing (Australia)
Kristy Tiampo (Canada)
Yongxian Zhang (China)
How-wei Chen (Chinese Taipei)
Charles Williams (New Zealand)
Tae-Seob Kang (South Korea)
Tony Song (United States of America)
John Rundle (Executive Director Emeritus, United States of America)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fabien.dubuffet at univ-lyon1.fr  Thu Feb  8 06:33:45 2018
From: fabien.dubuffet at univ-lyon1.fr (Fabien Dubuffet)
Date: Thu, 8 Feb 2018 15:33:45 +0100
Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers
Message-ID: 

Dear Seismo team,

I've got some problems when I run xspecfem3D compiled with the Intel
compiler 17.0.4.

I have created the mesh with xmeshfem3D (compiled with the Intel
compilers) without any problems (apparently), and when I run xspecfem3D
on 96 cores on our new cluster, I've got the following errors:

forrtl: severe (67): input statement requires too much data, unit 40, file ./DATABASES_1Dprem/proc000031_reg1_solver_data.bin
Image              PC                Routine            Line     Source
libifcoremt.so.5   00002AC49AF6998F  for__io_return     Unknown  Unknown
libifcoremt.so.5   00002AC49AFA901F  for_read_seq_xmit  Unknown  Unknown
libifcoremt.so.5   00002AC49AFA6712  for_read_seq       Unknown  Unknown
xspecfem3D         000000000092A9E1  read_arrays_solve  130      read_arrays_solver.f90
xspecfem3D         0000000000951030  read_mesh_databas  252      read_mesh_databases.F90
xspecfem3D         000000000097130E  read_mesh_databas  72       read_mesh_databases.F90
xspecfem3D         0000000000A0FEAB  MAIN__             465      specfem3D.F90
xspecfem3D         00000000004036EE  Unknown            Unknown  Unknown
libc-2.17.so       00002AC49CC4FB35  __libc_start_main  Unknown  Unknown
xspecfem3D         00000000004035F9  Unknown            Unknown  Unknown

I've got this error for most of the 96 proc***_reg1_solver_data.bin files.

I don't have these errors when I compile using the GNU compilers. Do you
have any ideas?
Best regards, Fabien From komatitsch at lma.cnrs-mrs.fr Thu Feb 8 07:12:03 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Thu, 8 Feb 2018 16:12:03 +0100 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: References: Message-ID: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> Hi Fabien, Very likely an internal bug in the Intel compiler (again...). Just revert to Intel v16 and you should be all set. Best regards, Dimitri. On 02/08/2018 03:33 PM, Fabien Dubuffet wrote: > Dear Seismo team, > > I've got some problems when I run xspecfem3D compiled with the intel > compiler 17.0.4. > > I have created the mesh with xmeshfem3D (compiled with the Intel > compilers) without any problems (apparently), and when I run the > xspecfem3D on 96 cores on our new cluster, I've got the following errors: > > forrtl: severe (67): input statement requires too much data, unit 40, > file ./DATABASES_1Dprem/proc000031_reg1_solver_data.bin > Image              PC                Routine Line        Source > libifcoremt.so.5   00002AC49AF6998F  for__io_return Unknown  Unknown > libifcoremt.so.5   00002AC49AFA901F  for_read_seq_xmit Unknown  Unknown > libifcoremt.so.5   00002AC49AFA6712  for_read_seq Unknown  Unknown > xspecfem3D         000000000092A9E1  read_arrays_solve 130 > read_arrays_solver.f90 > xspecfem3D         0000000000951030  read_mesh_databas 252 > read_mesh_databases.F90 > xspecfem3D         000000000097130E  read_mesh_databas 72 > read_mesh_databases.F90 > xspecfem3D         0000000000A0FEAB  MAIN__ 465  specfem3D.F90 > xspecfem3D         00000000004036EE  Unknown Unknown  Unknown > libc-2.17.so       00002AC49CC4FB35  __libc_start_main Unknown  Unknown > xspecfem3D         00000000004035F9  Unknown Unknown  Unknown > > I've got this error for most of the 96 proc***_reg1_solver_data.bin files. > > I don't have these errors when i compile using the GNU compilers. Do you > have any ideas ? > > Best regards, > > Fabien > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr From komatitsch at lma.cnrs-mrs.fr Thu Feb 8 07:13:21 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Thu, 8 Feb 2018 16:13:21 +0100 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> References: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> Message-ID: <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> Hi again Fabien, To be sure, you can also configure the code with --enable-debug and try again (make realclean ; make clean ; make all). Best regards, Dimitri. On 02/08/2018 04:12 PM, Dimitri Komatitsch wrote: > > Hi Fabien, > > Very likely an internal bug in the Intel compiler (again...). > Just revert to Intel v16 and you should be all set. > > Best regards, > Dimitri. > > On 02/08/2018 03:33 PM, Fabien Dubuffet wrote: >> Dear Seismo team, >> >> I've got some problems when I run xspecfem3D compiled with the intel >> compiler 17.0.4. 
>> >> I have created the mesh with xmeshfem3D (compiled with the Intel >> compilers) without any problems (apparently), and when I run the >> xspecfem3D on 96 cores on our new cluster, I've got the following errors: >> >> forrtl: severe (67): input statement requires too much data, unit 40, >> file ./DATABASES_1Dprem/proc000031_reg1_solver_data.bin >> Image              PC                Routine Line        Source >> libifcoremt.so.5   00002AC49AF6998F  for__io_return Unknown  Unknown >> libifcoremt.so.5   00002AC49AFA901F  for_read_seq_xmit Unknown  Unknown >> libifcoremt.so.5   00002AC49AFA6712  for_read_seq Unknown  Unknown >> xspecfem3D         000000000092A9E1  read_arrays_solve 130 >> read_arrays_solver.f90 >> xspecfem3D         0000000000951030  read_mesh_databas 252 >> read_mesh_databases.F90 >> xspecfem3D         000000000097130E  read_mesh_databas 72 >> read_mesh_databases.F90 >> xspecfem3D         0000000000A0FEAB  MAIN__ 465  specfem3D.F90 >> xspecfem3D         00000000004036EE  Unknown Unknown  Unknown >> libc-2.17.so       00002AC49CC4FB35  __libc_start_main Unknown  Unknown >> xspecfem3D         00000000004035F9  Unknown Unknown  Unknown >> >> I've got this error for most of the 96 proc***_reg1_solver_data.bin >> files. >> >> I don't have these errors when i compile using the GNU compilers. Do >> you have any ideas ? >> >> Best regards, >> >> Fabien >> _______________________________________________ >> CIG-SEISMO mailing list >> CIG-SEISMO at geodynamics.org >> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr From fabien.dubuffet at univ-lyon1.fr Thu Feb 8 09:16:29 2018 From: fabien.dubuffet at univ-lyon1.fr (Fabien Dubuffet) Date: Thu, 8 Feb 2018 18:16:29 +0100 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> References: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> Message-ID: <83cd4d03-36e5-dcf3-8ccb-551a5257a6d5@univ-lyon1.fr> Hello Dimitri, Thanks for your help. I Have tried with the --enable-debug flag, and 'make realclean ; make clean ; make all', same problem. As you suggested, i have used a previous version of the intel compilers and now it seems OK. Thanks a lot, Best Regards, Fabien From komatitsch at lma.cnrs-mrs.fr Thu Feb 8 09:24:12 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Thu, 8 Feb 2018 18:24:12 +0100 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: <83cd4d03-36e5-dcf3-8ccb-551a5257a6d5@univ-lyon1.fr> References: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> <83cd4d03-36e5-dcf3-8ccb-551a5257a6d5@univ-lyon1.fr> Message-ID: Hello Fabien, Thanks! So I guess the internal compiler bug is confirmed :-( Thanks, Best wishes, Dimitri. On 02/08/2018 06:16 PM, Fabien Dubuffet wrote: > Hello Dimitri, > > Thanks for your help. > I Have tried with the --enable-debug flag, and 'make realclean ; make > clean ; make all', same problem. > As you suggested, i have used a previous version of the intel compilers > and now it seems OK. 
> Thanks a lot, > > Best Regards, > Fabien > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr From carene at lanl.gov Thu Feb 8 15:49:15 2018 From: carene at lanl.gov (Larmat, Carene) Date: Thu, 8 Feb 2018 23:49:15 +0000 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: References: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> <83cd4d03-36e5-dcf3-8ccb-551a5257a6d5@univ-lyon1.fr> Message-ID: <46F1B12F-B0FF-4CDB-931E-4C7AFA554650@lanl.gov> I had some success bypassing some intel compiler issues using -assume nobuffered_io for those of us who are imposed to work with new versions. Cheers! Carene Larmat, EES-17, MS D452 carene at lanl.gov, 505 667 2074 On Feb 8, 2018, at 10:24 AM, Dimitri Komatitsch > wrote: Hello Fabien, Thanks! So I guess the internal compiler bug is confirmed :-( Thanks, Best wishes, Dimitri. On 02/08/2018 06:16 PM, Fabien Dubuffet wrote: Hello Dimitri, Thanks for your help. I Have tried with the --enable-debug flag, and 'make realclean ; make clean ; make all', same problem. As you suggested, i have used a previous version of the intel compilers and now it seems OK. Thanks a lot, Best Regards, Fabien _______________________________________________ CIG-SEISMO mailing list CIG-SEISMO at geodynamics.org http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr _______________________________________________ CIG-SEISMO mailing list CIG-SEISMO at geodynamics.org http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabien.dubuffet at univ-lyon1.fr Fri Feb 9 02:04:48 2018 From: fabien.dubuffet at univ-lyon1.fr (Fabien Dubuffet) Date: Fri, 9 Feb 2018 11:04:48 +0100 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: <46F1B12F-B0FF-4CDB-931E-4C7AFA554650@lanl.gov> References: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> <83cd4d03-36e5-dcf3-8ccb-551a5257a6d5@univ-lyon1.fr> <46F1B12F-B0FF-4CDB-931E-4C7AFA554650@lanl.gov> Message-ID: <5ae9aa70-e8ce-d98e-b4a4-d9badb40c6c1@univ-lyon1.fr> Hello Carene, I have tried your solution, and the '-assume nobuffered_io' flag solves this problem. Thanks a lot, Cheers, Fabien Le 09/02/18 à 00:49, Larmat, Carene a écrit : > I had some success bypassing some intel compiler issues using > -assume nobuffered_io for those of us who are imposed to work with new > versions. > Cheers! > > Carene Larmat, EES-17, MS D452 > carene at lanl.gov , 505 667 2074 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From komatitsch at lma.cnrs-mrs.fr Fri Feb 9 05:24:55 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Fri, 9 Feb 2018 14:24:55 +0100 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: <5ae9aa70-e8ce-d98e-b4a4-d9badb40c6c1@univ-lyon1.fr> References: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> <83cd4d03-36e5-dcf3-8ccb-551a5257a6d5@univ-lyon1.fr> <46F1B12F-B0FF-4CDB-931E-4C7AFA554650@lanl.gov> <5ae9aa70-e8ce-d98e-b4a4-d9badb40c6c1@univ-lyon1.fr> Message-ID: <18d08a8a-71b5-d4f6-bf22-e8a6b6dc54fd@lma.cnrs-mrs.fr> Hi Carène and Fabien, Hi all, Thanks a lot! Since several users have reported problems with this Intel compiler option, in particular with ifort version 17, let me keep it but turn it off by default (in 2D, 3D, and 3D_GLOBE). When it works fine it does speed up I/Os significantly though, thus worth trying it. I will mention that in the users manual (of 2D, 3D, and 3D_GLOBE). Thanks! Best regards, Dimitri. On 02/09/2018 11:04 AM, Fabien Dubuffet wrote: > Hello Carene, > > I have tried your solution, and the '-assume nobuffered_io' flag solves > this problem. > Thanks a lot, > > Cheers, > > Fabien > > Le 09/02/18 à 00:49, Larmat, Carene a écrit : >> I had some success bypassing some intel compiler issues using >> -assume nobuffered_io for those of us who are imposed to work with new >> versions. >> Cheers! >> >> Carene Larmat, EES-17, MS D452 >> carene at lanl.gov , 505 667 2074 > > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr From chenzou at uchicago.edu Fri Feb 9 06:54:45 2018 From: chenzou at uchicago.edu (Chen Zou) Date: Fri, 9 Feb 2018 08:54:45 -0600 Subject: [CIG-SEISMO] Reasonable problem size for one machine Message-ID: Hi all, I am trying to understand the cache performance of a single machine in the cluster/supercomputer running sw4 job. To make the results reasonable, it is necessary to find a reasonable problem size for each machine to solve. I am aware that there is the SCEC test suite and sample input like LOH.1-h100.in . But what is rule of thumb to determine the number of machines to run the job to get reasonable performance? Say 1000 DOF per machine? There is also another thing I would like to confirm. To my understanding, the job running at each node is like a batch job for multiple DOFs. The job for each DOF is similar and there is limited data reuse between the job. If this understanding is correct, does this mean that the cache performance won't vary a lot by problem size as long as the machine gets a reasonably large number of DOF to solve? My current experiments with different problem size do support this statement. Thanks in advance. Regards, Chen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rama.kishan.v.malladi at intel.com Fri Feb 9 21:50:41 2018 From: rama.kishan.v.malladi at intel.com (Malladi, Rama Kishan V) Date: Sat, 10 Feb 2018 05:50:41 +0000 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: <18d08a8a-71b5-d4f6-bf22-e8a6b6dc54fd@lma.cnrs-mrs.fr> References: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> <83cd4d03-36e5-dcf3-8ccb-551a5257a6d5@univ-lyon1.fr> <46F1B12F-B0FF-4CDB-931E-4C7AFA554650@lanl.gov> <5ae9aa70-e8ce-d98e-b4a4-d9badb40c6c1@univ-lyon1.fr> <18d08a8a-71b5-d4f6-bf22-e8a6b6dc54fd@lma.cnrs-mrs.fr> Message-ID: Hi Dimitri, All, I will check what changed in the recent version of the Intel compiler and see if this can be reported to the compiler team as a bug report. Thanks -Rama -----Original Message----- From: CIG-SEISMO [mailto:cig-seismo-bounces at geodynamics.org] On Behalf Of Dimitri Komatitsch Sent: Friday, February 9, 2018 6:55 PM To: cig-seismo at geodynamics.org; Fabien Dubuffet ; Larmat Carene Subject: Re: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers Hi Carène and Fabien, Hi all, Thanks a lot! Since several users have reported problems with this Intel compiler option, in particular with ifort version 17, let me keep it but turn it off by default (in 2D, 3D, and 3D_GLOBE). When it works fine it does speed up I/Os significantly though, thus worth trying it. I will mention that in the users manual (of 2D, 3D, and 3D_GLOBE). Thanks! Best regards, Dimitri. On 02/09/2018 11:04 AM, Fabien Dubuffet wrote: > Hello Carene, > > I have tried your solution, and the '-assume nobuffered_io' flag > solves this problem. > Thanks a lot, > > Cheers, > > Fabien > > Le 09/02/18 à 00:49, Larmat, Carene a écrit : >> I had some success bypassing some intel compiler issues using -assume  >> nobuffered_io for those of us who are imposed to work with new >> versions. >> Cheers! >> >> Carene Larmat, EES-17, MS D452 >> carene at lanl.gov , 505 667 2074 > > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr _______________________________________________ CIG-SEISMO mailing list CIG-SEISMO at geodynamics.org http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo From komatitsch at lma.cnrs-mrs.fr Sat Feb 10 10:16:25 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Sat, 10 Feb 2018 19:16:25 +0100 Subject: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers In-Reply-To: References: <02a74af9-3431-149f-5811-61a3cf93f5e8@lma.cnrs-mrs.fr> <359c24ba-09e5-3b62-7434-0e0dfb32a3a8@lma.cnrs-mrs.fr> <83cd4d03-36e5-dcf3-8ccb-551a5257a6d5@univ-lyon1.fr> <46F1B12F-B0FF-4CDB-931E-4C7AFA554650@lanl.gov> <5ae9aa70-e8ce-d98e-b4a4-d9badb40c6c1@univ-lyon1.fr> <18d08a8a-71b5-d4f6-bf22-e8a6b6dc54fd@lma.cnrs-mrs.fr> Message-ID: Hi Rama, Hi all, Thanks a lot. That would be very useful. It seems option -assume buffered_io has a problem in ifort v17, but it would be a pity to get rid of it because when it works it does speed up the code significantly. 
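For anyone who needs to stay on a recent ifort in the meantime, the practical workaround reported in this thread is simply to rebuild without buffered I/O. Something along these lines at configure time should work - this is only a sketch, since the exact flag list is machine-dependent and the variable name may differ between versions (check flags.guess in your copy of the source for what your version actually uses):

# example only - adapt the flag list (and the FLAGS_CHECK variable name) to your flags.guess
./configure FC=ifort MPIFC=mpif90 FLAGS_CHECK="-O3 -xHost -assume byterecl -assume nobuffered_io"
make realclean ; make clean ; make all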
I attach another bug we have detected with v17 (maybe the same bug?), see the comment at line 494 in the attached file (that file is src/shared/read_value_parameters.f90 in the Git version), which is from SPECFEM2D (to get it: git clone --recursive --branch devel https://github.com/geodynamics/specfem2d.git ) Thank you very much, Best regards, Dimitri. On 02/10/2018 06:50 AM, Malladi, Rama Kishan V wrote: > Hi Dimitri, All, > I will check what changed in the recent version of the Intel compiler and see if this can be reported to the compiler team as a bug report. > > Thanks > -Rama > > -----Original Message----- > From: CIG-SEISMO [mailto:cig-seismo-bounces at geodynamics.org] On Behalf Of Dimitri Komatitsch > Sent: Friday, February 9, 2018 6:55 PM > To: cig-seismo at geodynamics.org; Fabien Dubuffet ; Larmat Carene > Subject: Re: [CIG-SEISMO] Some problems with xspecfem3D and the Intel compilers > > > Hi Carène and Fabien, Hi all, > > Thanks a lot! Since several users have reported problems with this Intel compiler option, in particular with ifort version 17, let me keep it but turn it off by default (in 2D, 3D, and 3D_GLOBE). > > When it works fine it does speed up I/Os significantly though, thus worth trying it. I will mention that in the users manual (of 2D, 3D, and 3D_GLOBE). > > Thanks! > Best regards, > Dimitri. > > On 02/09/2018 11:04 AM, Fabien Dubuffet wrote: >> Hello Carene, >> >> I have tried your solution, and the '-assume nobuffered_io' flag >> solves this problem. >> Thanks a lot, >> >> Cheers, >> >> Fabien >> >> Le 09/02/18 à 00:49, Larmat, Carene a écrit : >>> I had some success bypassing some intel compiler issues using -assume >>> nobuffered_io for those of us who are imposed to work with new >>> versions. >>> Cheers! >>> >>> Carene Larmat, EES-17, MS D452 >>> carene at lanl.gov , 505 667 2074 >> >> >> _______________________________________________ >> CIG-SEISMO mailing list >> CIG-SEISMO at geodynamics.org >> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >> > > -- > Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr -------------- next part -------------- A non-text attachment was scrubbed... Name: read_value_parameters.f90 Type: text/x-fortran Size: 13901 bytes Desc: not available URL: From Moritz.Fehr at dmt-group.com Mon Feb 12 08:25:40 2018 From: Moritz.Fehr at dmt-group.com (Fehr, Moritz) Date: Mon, 12 Feb 2018 16:25:40 +0000 Subject: [CIG-SEISMO] CIG-SEISMO Digest, Vol 120, Issue 5 In-Reply-To: References: Message-ID: Hi Daniel, thanks a lot for adjusting the receiver detection routine. First the routine seems to work fine, but after finishing the routine the calculation immediately breaks up with CUDA memory errors (see error files). I have used two Tesla K40m GPUs (2 x 12 gig) and the model consists of 1.500.000 elements. I think the GPU memory should be enough for that model size. The same problem has occurred with the V100 GPUs. Do you have any idea? 
Thanks Mo -----Ursprüngliche Nachricht----- Von: CIG-SEISMO [mailto:cig-seismo-bounces at geodynamics.org] Im Auftrag von cig-seismo-request at geodynamics.org Gesendet: Donnerstag, 25. Januar 2018 11:02 An: cig-seismo at geodynamics.org Betreff: CIG-SEISMO Digest, Vol 120, Issue 5 Send CIG-SEISMO mailing list submissions to cig-seismo at geodynamics.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo or, via email, send a message with subject or body 'help' to cig-seismo-request at geodynamics.org You can reach the person managing the list at cig-seismo-owner at geodynamics.org When replying, please edit your Subject line so it is more specific than "Re: Contents of CIG-SEISMO digest..." Today's Topics: 1. Re: SPECFEM3D: GPU memory usage limited (Daniel B. Peter) 2. SPECFEM3D: GPU memory usage limited (Fehr, Moritz) ---------------------------------------------------------------------- Message: 1 Date: Wed, 24 Jan 2018 21:02:53 +0000 From: "Daniel B. Peter" To: "cig-seismo at geodynamics.org" Subject: Re: [CIG-SEISMO] SPECFEM3D: GPU memory usage limited Message-ID: <5EC00584-31D0-4CFE-8708-4142FEC8D106 at kaust.edu.sa> Content-Type: text/plain; charset="utf-8" hi Moritz, is there also an error output? I’m not aware that there should be such an issue with SPECFEM3D on this newest GPU hardware. running simulations on Pascal GPUs with multiple GB memory usage works just fine. so I would expect that the run exits because of another issue, not because of the GPU memory part. your output_solver.txt stops at the receiver detection. based on your setup which uses about 13,670 stations, i expect it to be a receiver detection issue. this routines works fine for a few hundred station, but becomes very slow for more than a few thousand on a single process. this issue has been addressed in the global version, let me see if i can implement it in a similar way in the SPECFEM3D devel version. many thanks for pointing out, daniel On Jan 24, 2018, at 6:35 PM, Fehr, Moritz > wrote: Hallo, I have a problem simulating a CUBIT meshed model (1.000.000 elements) on a Tesla GPU V100-SXM2 (amazon cloud CPU / GPU cluster). I am using SPECEM3D Cartesian (V3.0) and the newest CUDA 9 lib. I want to share this issue: It seems that the GPU memory usage is limited to a value of 420 MiB although the maximum GPU memory is 16000 Mib. Do you have any idea about the origin of this limitation? Thanks Mo ___________________________________________________________________________________________________ Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP ___________________________________________________________________________________________________ Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. 
Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. _______________________________________________ CIG-SEISMO mailing list CIG-SEISMO at geodynamics.org http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo ________________________________ This message and its contents including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Thu, 25 Jan 2018 10:07:18 +0000 From: "Fehr, Moritz" To: "cig-seismo at geodynamics.org" Subject: [CIG-SEISMO] SPECFEM3D: GPU memory usage limited Message-ID: Content-Type: text/plain; charset="utf-8" Hi Daniel, thanks for your rapid response. Sorry, I do not have any error output file, because of breaking up the simulation by myself (The simulation remains in the process of receiver detection). But I tried the same simulation with just one receiver and it works fine. Please let me know if you can fix the problem. Thanks Mo ----------------------------------------------------------------------------------------------------------------------------------- hi Moritz, is there also an error output? I’m not aware that there should be such an issue with SPECFEM3D on this newest GPU hardware. running simulations on Pascal GPUs with multiple GB memory usage works just fine. so I would expect that the run exits because of another issue, not because of the GPU memory part. your output_solver.txt stops at the receiver detection. based on your setup which uses about 13,670 stations, i expect it to be a receiver detection issue. this routines works fine for a few hundred station, but becomes very slow for more than a few thousand on a single process. this issue has been addressed in the global version, let me see if i can implement it in a similar way in the SPECFEM3D devel version. many thanks for pointing out, daniel On Jan 24, 2018, at 6:35 PM, Fehr, Moritz > wrote: Hallo, I have a problem simulating a CUBIT meshed model (1.000.000 elements) on a Tesla GPU V100-SXM2 (amazon cloud CPU / GPU cluster). I am using SPECEM3D Cartesian (V3.0) and the newest CUDA 9 lib. I want to share this issue: It seems that the GPU memory usage is limited to a value of 420 MiB although the maximum GPU memory is 16000 Mib. Do you have any idea about the origin of this limitation? Thanks Mo _______________________________________________ CIG-SEISMO mailing list CIG-SEISMO at geodynamics.org http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo ________________________________ This message and its contents including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. 
Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: ___________________________________________________________________________________________________ Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP ___________________________________________________________________________________________________ Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. ------------------------------ Subject: Digest Footer _______________________________________________ CIG-SEISMO mailing list CIG-SEISMO at geodynamics.org http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo ------------------------------ End of CIG-SEISMO Digest, Vol 120, Issue 5 ****************************************** ___________________________________________________________________________________________________ Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP ___________________________________________________________________________________________________ Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: error_message_000004.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: error_message_000005.txt
Name: error_message_000011.txt
Name: error_message_000010.txt
Name: error_message_000014.txt
Name: error_message_000017.txt
Name: error_message_000006.txt
Name: error_message_000008.txt
Name: error_message_000007.txt
Name: error_message_000001.txt
Name: error_message_000009.txt
Name: output_solver.txt
Name: gpu_device_info.txt
Name: output_generate_databases.txt

From daniel.peter at kaust.edu.sa  Mon Feb 12 14:29:20 2018
From: daniel.peter at kaust.edu.sa (Daniel B. Peter)
Date: Mon, 12 Feb 2018 22:29:20 +0000
Subject: Re: [CIG-SEISMO] CIG-SEISMO Digest, Vol 120, Issue 5
In-Reply-To: 
References: 
Message-ID: 

Hi Moritz,

again good point - you're pushing the limits :)

concerning the mesh, the GPU memory is just fine for that number of elements. 1M elements will take about 10GB memory for an elastic simulation. so you could even add more if you like for those two K40s.

in your setup, it's likely a problem with the seismogram allocations on the GPUs. this changed recently such that the full seismogram arrays are allocated on the GPU as well to speed up simulations even further. unfortunately, this comes with a hit on memory consumption.

in your setup, you have about 10,000 stations and 111,111 time steps. depending on the number of receivers in a single slice and how they are shared between the 2 GPUs, a rough estimate for your case shows up to about 6.3GB additional memory for a single 3-component seismogram (e.g. displacement) per GPU (assuming you have an even split with 5,000 local stations per GPU). given the ~7.5GB from the mesh, this exceeds a single GPU memory of 12GB.

as a quick workaround, you would split up the stations and run several simulations for different station setups. this can limit the seismogram memory allocation. the other workarounds involve some changes in the code: (i) we could add checkpointing (again, the global version has it, so in principle easy to add to the Cartesian version) but this means you will have to run multiple simulations as well, (ii) we reduce the seismogram allocations. well, both would be nice, let me focus on (ii) first since we already have a parameter in the Par_file, NTSTEP_BETWEEN_OUTPUT_SEISMOS, which could be used here to limit the array allocation size.
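to make that estimate explicit (back-of-the-envelope only, assuming an even split and single-precision, 3-component records): 5,000 local stations x 111,111 time steps x 3 components x 4 bytes is roughly 6.7 GB (about 6.2 GiB) per wavefield, i.e. the ~6.3GB quoted above, on top of the ~7.5GB for the mesh.

once (ii) is in place, a Par_file setting along these lines (the value 5,000 is only an illustration, pick whatever chunk length suits your run) would then bound the on-GPU seismogram buffer to one output chunk instead of the full 111,111-step record:

# write seismograms (and, with the planned change, allocate their buffers) in chunks of 5000 time steps
NTSTEP_BETWEEN_OUTPUT_SEISMOS   = 5000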
best wishes, daniel > On Feb 12, 2018, at 7:25 PM, Fehr, Moritz wrote: > > Hi Daniel, > > thanks a lot for adjusting the receiver detection routine. First the routine seems to work fine, but after finishing the routine the calculation immediately breaks up with CUDA memory errors (see error files). I have used two Tesla K40m GPUs (2 x 12 gig) and the model consists of 1.500.000 elements. I think the GPU memory should be enough for that model size. The same problem has occurred with the V100 GPUs. > Do you have any idea? > > Thanks > Mo > > > > > > > > -----Ursprüngliche Nachricht----- > Von: CIG-SEISMO [mailto:cig-seismo-bounces at geodynamics.org] Im Auftrag von cig-seismo-request at geodynamics.org > Gesendet: Donnerstag, 25. Januar 2018 11:02 > An: cig-seismo at geodynamics.org > Betreff: CIG-SEISMO Digest, Vol 120, Issue 5 > > Send CIG-SEISMO mailing list submissions to > cig-seismo at geodynamics.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > or, via email, send a message with subject or body 'help' to > cig-seismo-request at geodynamics.org > > You can reach the person managing the list at > cig-seismo-owner at geodynamics.org > > When replying, please edit your Subject line so it is more specific than "Re: Contents of CIG-SEISMO digest..." > > > Today's Topics: > > 1. Re: SPECFEM3D: GPU memory usage limited (Daniel B. Peter) > 2. SPECFEM3D: GPU memory usage limited (Fehr, Moritz) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 24 Jan 2018 21:02:53 +0000 > From: "Daniel B. Peter" > To: "cig-seismo at geodynamics.org" > Subject: Re: [CIG-SEISMO] SPECFEM3D: GPU memory usage limited > Message-ID: <5EC00584-31D0-4CFE-8708-4142FEC8D106 at kaust.edu.sa> > Content-Type: text/plain; charset="utf-8" > > hi Moritz, > > is there also an error output? > > I’m not aware that there should be such an issue with SPECFEM3D on this newest GPU hardware. running simulations on Pascal GPUs with multiple GB memory usage works just fine. so I would expect that the run exits because of another issue, not because of the GPU memory part. > > your output_solver.txt stops at the receiver detection. based on your setup which uses about 13,670 stations, i expect it to be a receiver detection issue. this routines works fine for a few hundred station, but becomes very slow for more than a few thousand on a single process. this issue has been addressed in the global version, let me see if i can implement it in a similar way in the SPECFEM3D devel version. > > many thanks for pointing out, > daniel > > > > On Jan 24, 2018, at 6:35 PM, Fehr, Moritz > wrote: > > Hallo, > > I have a problem simulating a CUBIT meshed model (1.000.000 elements) on a Tesla GPU V100-SXM2 (amazon cloud CPU / GPU cluster). I am using SPECEM3D Cartesian (V3.0) and the newest CUDA 9 lib. I want to share this issue: It seems that the GPU memory usage is limited to a value of 420 MiB although the maximum GPU memory is 16000 Mib. > Do you have any idea about the origin of this limitation? > > Thanks > Mo > > > > > > ___________________________________________________________________________________________________ > Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. 
KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP ___________________________________________________________________________________________________ > Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. > > This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. > > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > > > ________________________________ > This message and its contents including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Thu, 25 Jan 2018 10:07:18 +0000 > From: "Fehr, Moritz" > To: "cig-seismo at geodynamics.org" > Subject: [CIG-SEISMO] SPECFEM3D: GPU memory usage limited > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi Daniel, > > thanks for your rapid response. Sorry, I do not have any error output file, because of breaking up the simulation by myself (The simulation remains in the process of receiver detection). But I tried the same simulation with just one receiver and it works fine. > Please let me know if you can fix the problem. > > Thanks > Mo > > > > ----------------------------------------------------------------------------------------------------------------------------------- > hi Moritz, > > is there also an error output? > > I’m not aware that there should be such an issue with SPECFEM3D on this newest GPU hardware. running simulations on Pascal GPUs with multiple GB memory usage works just fine. so I would expect that the run exits because of another issue, not because of the GPU memory part. > > your output_solver.txt stops at the receiver detection. based on your setup which uses about 13,670 stations, i expect it to be a receiver detection issue. this routines works fine for a few hundred station, but becomes very slow for more than a few thousand on a single process. this issue has been addressed in the global version, let me see if i can implement it in a similar way in the SPECFEM3D devel version. 
> > many thanks for pointing out, > daniel > > > > On Jan 24, 2018, at 6:35 PM, Fehr, Moritz > wrote: > > Hallo, > > I have a problem simulating a CUBIT meshed model (1.000.000 elements) on a Tesla GPU V100-SXM2 (amazon cloud CPU / GPU cluster). I am using SPECEM3D Cartesian (V3.0) and the newest CUDA 9 lib. I want to share this issue: It seems that the GPU memory usage is limited to a value of 420 MiB although the maximum GPU memory is 16000 Mib. > Do you have any idea about the origin of this limitation? > > Thanks > Mo > > > > > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > > > ________________________________ > This message and its contents including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ___________________________________________________________________________________________________ > Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP ___________________________________________________________________________________________________ > Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. > > This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > > ------------------------------ > > End of CIG-SEISMO Digest, Vol 120, Issue 5 > ****************************************** > > ___________________________________________________________________________________________________ > Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany > Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 > Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen > Registergericht/County Court: Amtsgericht Essen * HRB 20420 > Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. 
Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux > Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach > TÜV NORD GROUP > ___________________________________________________________________________________________________ > Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. > > This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo From komatitsch at lma.cnrs-mrs.fr Mon Feb 12 15:10:00 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Tue, 13 Feb 2018 00:10:00 +0100 Subject: [CIG-SEISMO] CIG-SEISMO Digest, Vol 120, Issue 5 In-Reply-To: References: Message-ID: Hi Daniel and Moritz, Thanks! Another option that we frequently use here is to subsample the seismograms, i.e. allocate seismograms arrays that are e.g. 10 times smaller and save one every 10 time steps. This is very often OK because the CFL time step is much smaller than the Nyquist frequency of the source. In the 2D code we have this flag: # subsampling of the seismograms to create smaller files (but less accurately sampled in time) subsamp_seismos = 1 Not sure if it is available in 3D as well (if not, very easy to implement; if someone does it, please let us know and/or commit that to Git, it would be very useful). thanks Best, Dimitri. On 02/12/2018 11:29 PM, Daniel B. Peter wrote: > Hi Moritz, > > again good point - your pushing the limits :) > > concerning the mesh, the GPU memory is just fine for that number of elements. 1M elements will take about 10GB memory for an elastic simulation. so you could even add more if you like for those two K40s. > > in your setup, it’s likely a problem with the seismogram allocations on the GPUs. this changed recently such that the full seismogram arrays are allocated on the GPU as well to speed up simulations even further. unfortunately, this comes with a hit on memory consumption. > > in your setup, you have about 10,000 stations and 111,111 time steps. depending on the number of receivers in a single slice and how they are shared between the 2 GPUs, a rough estimate for your case shows up to about 6.3GB additional memory for a single 3-component seismogram (e.g. displacement) per GPU (assuming you have an even split with 5,000 local stations per GPU). given the ~7.5GB from the mesh, this exceeds a single GPU memory of 12GB. > > as a quick workaround, you would split up the stations and run several simulations for different station setups. this can limit the seismogram memory allocation. the other workarounds involve some changes in the code: (i) we could add checkpointing (again, the global version has it, so in principle easy to add to the Cartesian version) but this means you will have to run multiple simulations as well, (ii) we reduce the seismogram allocations. 
well, both would be nice, let me focus on (ii) first since we already have a parameter in the Par_file, NTSTEP_BETWEEN_OUTPUT_SEISMOS, which could be used here to limit the array allocation size. > > best wishes, > daniel > > > > >> On Feb 12, 2018, at 7:25 PM, Fehr, Moritz wrote: >> >> Hi Daniel, >> >> thanks a lot for adjusting the receiver detection routine. First the routine seems to work fine, but after finishing the routine the calculation immediately breaks up with CUDA memory errors (see error files). I have used two Tesla K40m GPUs (2 x 12 gig) and the model consists of 1.500.000 elements. I think the GPU memory should be enough for that model size. The same problem has occurred with the V100 GPUs. >> Do you have any idea? >> >> Thanks >> Mo >> >> >> >> >> >> >> >> -----Ursprüngliche Nachricht----- >> Von: CIG-SEISMO [mailto:cig-seismo-bounces at geodynamics.org] Im Auftrag von cig-seismo-request at geodynamics.org >> Gesendet: Donnerstag, 25. Januar 2018 11:02 >> An: cig-seismo at geodynamics.org >> Betreff: CIG-SEISMO Digest, Vol 120, Issue 5 >> >> Send CIG-SEISMO mailing list submissions to >> cig-seismo at geodynamics.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >> or, via email, send a message with subject or body 'help' to >> cig-seismo-request at geodynamics.org >> >> You can reach the person managing the list at >> cig-seismo-owner at geodynamics.org >> >> When replying, please edit your Subject line so it is more specific than "Re: Contents of CIG-SEISMO digest..." >> >> >> Today's Topics: >> >> 1. Re: SPECFEM3D: GPU memory usage limited (Daniel B. Peter) >> 2. SPECFEM3D: GPU memory usage limited (Fehr, Moritz) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Wed, 24 Jan 2018 21:02:53 +0000 >> From: "Daniel B. Peter" >> To: "cig-seismo at geodynamics.org" >> Subject: Re: [CIG-SEISMO] SPECFEM3D: GPU memory usage limited >> Message-ID: <5EC00584-31D0-4CFE-8708-4142FEC8D106 at kaust.edu.sa> >> Content-Type: text/plain; charset="utf-8" >> >> hi Moritz, >> >> is there also an error output? >> >> I’m not aware that there should be such an issue with SPECFEM3D on this newest GPU hardware. running simulations on Pascal GPUs with multiple GB memory usage works just fine. so I would expect that the run exits because of another issue, not because of the GPU memory part. >> >> your output_solver.txt stops at the receiver detection. based on your setup which uses about 13,670 stations, i expect it to be a receiver detection issue. this routines works fine for a few hundred station, but becomes very slow for more than a few thousand on a single process. this issue has been addressed in the global version, let me see if i can implement it in a similar way in the SPECFEM3D devel version. >> >> many thanks for pointing out, >> daniel >> >> >> >> On Jan 24, 2018, at 6:35 PM, Fehr, Moritz > wrote: >> >> Hallo, >> >> I have a problem simulating a CUBIT meshed model (1.000.000 elements) on a Tesla GPU V100-SXM2 (amazon cloud CPU / GPU cluster). I am using SPECEM3D Cartesian (V3.0) and the newest CUDA 9 lib. I want to share this issue: It seems that the GPU memory usage is limited to a value of 420 MiB although the maximum GPU memory is 16000 Mib. >> Do you have any idea about the origin of this limitation? 
>> >> Thanks >> Mo >> >> >> >> >> >> ___________________________________________________________________________________________________ >> Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP ___________________________________________________________________________________________________ >> Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. >> >> This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. >> >> >> _______________________________________________ >> CIG-SEISMO mailing list >> CIG-SEISMO at geodynamics.org >> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >> >> >> ________________________________ >> This message and its contents including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: >> >> ------------------------------ >> >> Message: 2 >> Date: Thu, 25 Jan 2018 10:07:18 +0000 >> From: "Fehr, Moritz" >> To: "cig-seismo at geodynamics.org" >> Subject: [CIG-SEISMO] SPECFEM3D: GPU memory usage limited >> Message-ID: >> >> Content-Type: text/plain; charset="utf-8" >> >> Hi Daniel, >> >> thanks for your rapid response. Sorry, I do not have any error output file, because of breaking up the simulation by myself (The simulation remains in the process of receiver detection). But I tried the same simulation with just one receiver and it works fine. >> Please let me know if you can fix the problem. >> >> Thanks >> Mo >> >> >> >> ----------------------------------------------------------------------------------------------------------------------------------- >> hi Moritz, >> >> is there also an error output? >> >> I’m not aware that there should be such an issue with SPECFEM3D on this newest GPU hardware. running simulations on Pascal GPUs with multiple GB memory usage works just fine. so I would expect that the run exits because of another issue, not because of the GPU memory part. >> >> your output_solver.txt stops at the receiver detection. based on your setup which uses about 13,670 stations, i expect it to be a receiver detection issue. this routines works fine for a few hundred station, but becomes very slow for more than a few thousand on a single process. 
this issue has been addressed in the global version, let me see if i can implement it in a similar way in the SPECFEM3D devel version. >> >> many thanks for pointing out, >> daniel >> >> >> >> On Jan 24, 2018, at 6:35 PM, Fehr, Moritz > wrote: >> >> Hallo, >> >> I have a problem simulating a CUBIT meshed model (1.000.000 elements) on a Tesla GPU V100-SXM2 (amazon cloud CPU / GPU cluster). I am using SPECEM3D Cartesian (V3.0) and the newest CUDA 9 lib. I want to share this issue: It seems that the GPU memory usage is limited to a value of 420 MiB although the maximum GPU memory is 16000 Mib. >> Do you have any idea about the origin of this limitation? >> >> Thanks >> Mo >> >> >> >> >> >> _______________________________________________ >> CIG-SEISMO mailing list >> CIG-SEISMO at geodynamics.org http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >> >> >> ________________________________ >> This message and its contents including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: >> >> ___________________________________________________________________________________________________ >> Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP ___________________________________________________________________________________________________ >> Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. >> >> This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. >> >> >> ------------------------------ >> >> Subject: Digest Footer >> >> _______________________________________________ >> CIG-SEISMO mailing list >> CIG-SEISMO at geodynamics.org >> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >> >> ------------------------------ >> >> End of CIG-SEISMO Digest, Vol 120, Issue 5 >> ****************************************** >> >> ___________________________________________________________________________________________________ >> Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. 
KG * Am Technologiepark 1 * 45307 Essen * Deutschland/Germany >> Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID DE 253275653 >> Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, Essen >> Registergericht/County Court: Amtsgericht Essen * HRB 20420 >> Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux >> Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Jürgen Himmelsbach >> TÜV NORD GROUP >> ___________________________________________________________________________________________________ >> Diese Nachricht enthält vertrauliche Informationen und ist nur für den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail von Ihrem System. >> >> This message contains confidential information and is intended only for the recipient. If you are not the recipient you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. >> >> _______________________________________________ >> CIG-SEISMO mailing list >> CIG-SEISMO at geodynamics.org >> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr From komatitsch at lma.cnrs-mrs.fr Mon Feb 12 15:13:36 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Tue, 13 Feb 2018 00:13:36 +0100 Subject: [CIG-SEISMO] CIG-SEISMO Digest, Vol 120, Issue 5 In-Reply-To: References: Message-ID: Hi all, PS: only works (directly) for forward runs. For FWI / imaging / adjoint runs, the adjoint sources are needed at each time step. Interpolating is possible, but more involved. Best, Dimitri. On 02/13/2018 12:10 AM, Dimitri Komatitsch wrote: > > Hi Daniel and Moritz, > > Thanks! > > Another option that we frequently use here is to subsample the > seismograms, i.e. allocate seismograms arrays that are e.g. 10 times > smaller and save one every 10 time steps. > This is very often OK because the CFL time step is much smaller than the > Nyquist frequency of the source. > > In the 2D code we have this flag: > > # subsampling of the seismograms to create smaller files (but less > accurately sampled in time) > subsamp_seismos                 = 1 > > Not sure if it is available in 3D as well (if not, very easy to > implement; if someone does it, please let us know and/or commit that to > Git, it would be very useful). > > thanks > Best, > Dimitri. > > On 02/12/2018 11:29 PM, Daniel B. Peter wrote: >> Hi Moritz, >> >> again good point - your pushing the limits :) >> >> concerning the mesh, the GPU memory is just fine for that number of >> elements. 1M elements will take about 10GB memory for an elastic >> simulation. so you could even add more if you like for those two K40s. >> >> in your setup, it’s likely a problem with the seismogram allocations >> on the GPUs. 
this changed recently such that the full seismogram >> arrays are allocated on the GPU as well to speed up simulations even >> further. unfortunately, this comes with a hit on memory consumption. >> >> in your setup, you have about 10,000 stations and 111,111 time steps. >> depending on the number of receivers in a single slice and how they >> are shared between the 2 GPUs, a rough estimate for your case shows up >> to about 6.3GB additional memory for a single 3-component seismogram >> (e.g. displacement) per GPU (assuming you have an even split with >> 5,000 local stations per GPU). given the ~7.5GB from the mesh, this >> exceeds a single GPU memory of 12GB. >> >> as a quick workaround, you would split up the stations and run several >> simulations for different station setups. this can limit the >> seismogram memory allocation. the other workarounds involve some >> changes in the code: (i) we could add checkpointing (again, the global >> version has it, so in principle easy to add to the Cartesian version) >> but this means you will have to run multiple simulations as well, (ii) >> we reduce the seismogram allocations. well, both would be nice, let me >> focus on (ii) first since we already have a parameter in the Par_file, >> NTSTEP_BETWEEN_OUTPUT_SEISMOS, which could be used here to limit the >> array allocation size. >> >> best wishes, >> daniel >> >> >> >> >>> On Feb 12, 2018, at 7:25 PM, Fehr, Moritz >>> wrote: >>> >>> Hi Daniel, >>> >>> thanks a lot for adjusting the receiver detection routine. First the >>> routine seems to work fine, but after finishing the routine the >>> calculation immediately breaks up with CUDA memory errors (see error >>> files). I have used two Tesla K40m GPUs (2 x 12 gig) and the model >>> consists of 1.500.000 elements. I think the GPU memory should be >>> enough for that model size. The same problem has occurred with the >>> V100 GPUs. >>> Do you have any idea? >>> >>> Thanks >>> Mo >>> >>> >>> >>> >>> >>> >>> >>> -----Ursprüngliche Nachricht----- >>> Von: CIG-SEISMO [mailto:cig-seismo-bounces at geodynamics.org] Im >>> Auftrag von cig-seismo-request at geodynamics.org >>> Gesendet: Donnerstag, 25. Januar 2018 11:02 >>> An: cig-seismo at geodynamics.org >>> Betreff: CIG-SEISMO Digest, Vol 120, Issue 5 >>> >>> Send CIG-SEISMO mailing list submissions to >>> cig-seismo at geodynamics.org >>> >>> To subscribe or unsubscribe via the World Wide Web, visit >>> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >>> or, via email, send a message with subject or body 'help' to >>> cig-seismo-request at geodynamics.org >>> >>> You can reach the person managing the list at >>> cig-seismo-owner at geodynamics.org >>> >>> When replying, please edit your Subject line so it is more specific >>> than "Re: Contents of CIG-SEISMO digest..." >>> >>> >>> Today's Topics: >>> >>>    1. Re: SPECFEM3D: GPU memory usage limited (Daniel B. Peter) >>>    2.  SPECFEM3D: GPU memory usage limited (Fehr, Moritz) >>> >>> >>> ---------------------------------------------------------------------- >>> >>> Message: 1 >>> Date: Wed, 24 Jan 2018 21:02:53 +0000 >>> From: "Daniel B. Peter" >>> To: "cig-seismo at geodynamics.org" >>> Subject: Re: [CIG-SEISMO] SPECFEM3D: GPU memory usage limited >>> Message-ID: <5EC00584-31D0-4CFE-8708-4142FEC8D106 at kaust.edu.sa> >>> Content-Type: text/plain; charset="utf-8" >>> >>> hi Moritz, >>> >>> is there also an error output? >>> >>> I’m not aware that there should be such an issue with SPECFEM3D on >>> this newest GPU hardware. 
running simulations on Pascal GPUs with >>> multiple GB memory usage works just fine. so I would expect that the >>> run exits because of another issue, not because of the GPU memory part. >>> >>> your output_solver.txt stops at the receiver detection. based on your >>> setup which uses about 13,670 stations, i expect it to be a receiver >>> detection issue. this routines works fine for a few hundred station, >>> but becomes very slow for more than a few thousand on a single >>> process. this issue has been addressed in the global version, let me >>> see if i can implement it in a similar way in the SPECFEM3D devel >>> version. >>> >>> many thanks for pointing out, >>> daniel >>> >>> >>> >>> On Jan 24, 2018, at 6:35 PM, Fehr, Moritz >>> > wrote: >>> >>> Hallo, >>> >>> I have a problem simulating a CUBIT meshed model (1.000.000 elements) >>> on a Tesla GPU V100-SXM2 (amazon cloud CPU / GPU cluster). I am using >>> SPECEM3D Cartesian (V3.0) and the newest CUDA 9 lib. I want to share >>> this issue: It seems that the GPU memory usage is limited to a value >>> of 420 MiB although the maximum GPU memory is 16000 Mib. >>> Do you have any idea about the origin of this limitation? >>> >>> Thanks >>> Mo >>> >>> >>> >>> >>> >>> ___________________________________________________________________________________________________ >>> >>> Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am >>> Technologiepark 1 * 45307 Essen * Deutschland/Germany >>> Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID >>> DE 253275653 Komplementär/Fully Liable Partner: DMT >>> Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: >>> Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: >>> Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich >>> Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of >>> the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP >>> ___________________________________________________________________________________________________ >>> >>> Diese Nachricht enthält vertrauliche Informationen und ist nur für >>> den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten >>> Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail >>> kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie >>> diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail >>> von Ihrem System. >>> >>> This message contains confidential information and is intended only >>> for the recipient. If you are not the recipient you should not >>> disseminate, distribute or copy this e-mail. Please notify the sender >>> immediately by e-mail if you have received this e-mail by mistake and >>> delete this e-mail from your system. >>> >>> >>> _______________________________________________ >>> >>> CIG-SEISMO mailing list >>> CIG-SEISMO at geodynamics.org >>> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >>> >>> >>> ________________________________ >>> This message and its contents including attachments are intended >>> solely for the original recipient. If you are not the intended >>> recipient or have received this message in error, please notify me >>> immediately and delete this message from your computer system. Any >>> unauthorized use or distribution is prohibited. Please consider the >>> environment before printing this email. >>> -------------- next part -------------- >>> An HTML attachment was scrubbed... 
>>> URL: >>> >>> >>> >>> ------------------------------ >>> >>> Message: 2 >>> Date: Thu, 25 Jan 2018 10:07:18 +0000 >>> From: "Fehr, Moritz" >>> To: "cig-seismo at geodynamics.org" >>> Subject: [CIG-SEISMO]  SPECFEM3D: GPU memory usage limited >>> Message-ID: >>> >>> Content-Type: text/plain; charset="utf-8" >>> >>> Hi Daniel, >>> >>> thanks for your rapid response. Sorry, I do not have any error output >>> file, because of breaking up the simulation by myself (The simulation >>> remains in the process of receiver detection).  But I tried the same >>> simulation with just one receiver and it works fine. >>> Please let me know if you can fix the problem. >>> >>> Thanks >>> Mo >>> >>> >>> >>> ----------------------------------------------------------------------------------------------------------------------------------- >>> >>> hi Moritz, >>> >>> is there also an error output? >>> >>> I’m not aware that there should be such an issue with SPECFEM3D on >>> this newest GPU hardware. running simulations on Pascal GPUs with >>> multiple GB memory usage works just fine. so I would expect that the >>> run exits because of another issue, not because of the GPU memory part. >>> >>> your output_solver.txt stops at the receiver detection. based on your >>> setup which uses about 13,670 stations, i expect it to be a receiver >>> detection issue. this routines works fine for a few hundred station, >>> but becomes very slow for more than a few thousand on a single >>> process. this issue has been addressed in the global version, let me >>> see if i can implement it in a similar way in the SPECFEM3D devel >>> version. >>> >>> many thanks for pointing out, >>> daniel >>> >>> >>> >>> On Jan 24, 2018, at 6:35 PM, Fehr, Moritz >> dmt-group.com> wrote: >>> >>> Hallo, >>> >>> I have a problem simulating a CUBIT meshed model (1.000.000 elements) >>> on a Tesla GPU V100-SXM2 (amazon cloud CPU / GPU cluster). I am using >>> SPECEM3D Cartesian (V3.0) and the newest CUDA 9 lib. I want to share >>> this issue: It seems that the GPU memory usage is limited to a value >>> of 420 MiB although the maximum GPU memory is 16000 Mib. >>> Do you have any idea about the origin of this limitation? >>> >>> Thanks >>> Mo >>> >>> >>> >>> >>> >>> _______________________________________________ >>> >>> CIG-SEISMO mailing list >>> CIG-SEISMO at geodynamics.org >>> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >>> >>> >>> ________________________________ >>> This message and its contents including attachments are intended >>> solely for the original recipient. If you are not the intended >>> recipient or have received this message in error, please notify me >>> immediately and delete this message from your computer system. Any >>> unauthorized use or distribution is prohibited. Please consider the >>> environment before printing this email. >>> -------------- next part -------------- >>> An HTML attachment was scrubbed... >>> URL: >>> >>> >>> >>> ___________________________________________________________________________________________________ >>> >>> Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am >>> Technologiepark 1 * 45307 Essen * Deutschland/Germany >>> Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID >>> DE 253275653 Komplementär/Fully Liable Partner: DMT >>> Verwaltungsgesellschaft mbH, Essen Registergericht/County Court: >>> Amtsgericht Essen * HRB 20420 Geschäftsführer/Board of Directors: >>> Prof. Dr. Eiko Räkers (Vorsitzender/CEO), Dr. 
Maik Tiedemann, Ulrich >>> Pröpper, Jens-Peter Lux Vorsitzender des Aufsichtsrates/Chairman of >>> the Supervisory Board: Jürgen Himmelsbach TÜV NORD GROUP >>> ___________________________________________________________________________________________________ >>> >>> Diese Nachricht enthält vertrauliche Informationen und ist nur für >>> den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten >>> Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail >>> kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie >>> diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail >>> von Ihrem System. >>> >>> This message contains confidential information and is intended only >>> for the recipient. If you are not the recipient you should not >>> disseminate, distribute or copy this e-mail. Please notify the sender >>> immediately by e-mail if you have received this e-mail by mistake and >>> delete this e-mail from your system. >>> >>> >>> ------------------------------ >>> >>> Subject: Digest Footer >>> >>> _______________________________________________ >>> CIG-SEISMO mailing list >>> CIG-SEISMO at geodynamics.org >>> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo >>> >>> ------------------------------ >>> >>> End of CIG-SEISMO Digest, Vol 120, Issue 5 >>> ****************************************** >>> >>> ___________________________________________________________________________________________________ >>> >>> Sitz der Gesellschaft/Headquarters: DMT GmbH & Co. KG * Am >>> Technologiepark 1 * 45307 Essen * Deutschland/Germany >>> Registergericht/County Court: Amtsgericht Essen * HRA 9091 * USt-ID >>> DE 253275653 >>> Komplementär/Fully Liable Partner: DMT Verwaltungsgesellschaft mbH, >>> Essen >>> Registergericht/County Court: Amtsgericht Essen * HRB 20420 >>> Geschäftsführer/Board of Directors: Prof. Dr. Eiko Räkers >>> (Vorsitzender/CEO), Dr. Maik Tiedemann, Ulrich Pröpper, Jens-Peter Lux >>> Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: >>> Jürgen Himmelsbach >>> TÜV NORD GROUP >>> ___________________________________________________________________________________________________ >>> >>> Diese Nachricht enthält vertrauliche Informationen und ist nur für >>> den Empfänger bestimmt. Wenn Sie nicht der Empfänger sind, sollten >>> Sie die E-Mail nicht verbreiten, verteilen oder diese E-Mail >>> kopieren. Benachrichtigen Sie bitte den Absender per E-Mail, wenn Sie >>> diese E-Mail irrtümlich erhalten haben und löschen dann diese E-Mail >>> von Ihrem System. >>> >>> This message contains confidential information and is intended only >>> for the recipient. If you are not the recipient you should not >>> disseminate, distribute or copy this e-mail. Please notify the sender >>> immediately by e-mail if you have received this e-mail by mistake and >>> delete this e-mail from your system. 
>>>
>>> _______________________________________________
>>> CIG-SEISMO mailing list
>>> CIG-SEISMO at geodynamics.org
>>> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo
>>
>> _______________________________________________
>> CIG-SEISMO mailing list
>> CIG-SEISMO at geodynamics.org
>> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo
>

--
Dimitri Komatitsch, CNRS Research Director (DR CNRS)
Laboratory of Mechanics and Acoustics, Marseille, France
http://komatitsch.free.fr

From yingzi.ying at me.com  Thu Feb 15 02:50:51 2018
From: yingzi.ying at me.com (Yingzi Ying)
Date: Thu, 15 Feb 2018 11:50:51 +0100
Subject: [CIG-SEISMO] source term definition in acoustic simulation
Message-ID: <607da3bc-4caf-6d05-3ef8-fba95c5667db at me.com>

Hi Guys,

My initial purpose is to use a loudspeaker to play an audio wavelet, recorded by a microphone (a sound pressure signal), in order to reproduce a sound field; i.e. I need a system with a sound pressure signal in and a sound pressure signal out.

I took a look at the source term definition in Daniel's 2011 paper "Forward and adjoint simulations of seismic wave propagation on fully unstructured hexahedral meshes". It says "The source f may be expressed in terms of pressure P" as in Eq. (12), but intentionally denotes it with the capital letter "P".

I did a simple test with Specfem2d, in which I put a co-located source and receiver. I see that the scaled horizontal or vertical displacement waveforms are exactly the same as the input signal waveform. So I guess the source signal term represents an omni-directional volume injection rather than a pressure.

As the homogeneous PDEs governing the pressure and the displacement potential have the same form, to meet my initial purpose I input a signal and record the negative displacement potential as output. The output signal (the reproduced sound) is very similar to the input sound wavelet. The difference may be due to the 2D simulation (2D Green's function response).

Could you please let me know whether this trick may cause any problem? Or do you have any suggestion for a safer/better way to define the source signal as sound pressure?

Many thanks and regards,
Yingzi

From mathias.pilch at tu-dortmund.de  Fri Feb 16 03:49:04 2018
From: mathias.pilch at tu-dortmund.de (Mathias Pilch)
Date: Fri, 16 Feb 2018 12:49:04 +0100
Subject: [CIG-SEISMO] Bug? Check functionality of force direction
Message-ID:

Dear developers of Specfem3D,

there appears to be a bug in the latest devel version when using a point force source.

The line in DATA/FORCESOLUTION

component dir vect source Z_UP: -1.d0

does not seem to have any effect. I tested it with a Gaussian with hdur = 0.06 s, using both

component dir vect source Z_UP: -1.d0

and

component dir vect source Z_UP: 1.d0

The resulting seismograms (semv), however, look the same, while in reality those calculated with the opposite sign should yield the inverted seismograms.

Could you please check that?

Thank you in advance,
Mathias Pilch

From daniel.peter at kaust.edu.sa  Fri Feb 16 11:14:11 2018
From: daniel.peter at kaust.edu.sa (Daniel B. Peter)
Date: Fri, 16 Feb 2018 19:14:11 +0000
Subject: [CIG-SEISMO] Bug? Check functionality of force direction
In-Reply-To:
References:
Message-ID:

hi Mathias,

you're right. it doesn't work correctly for elastic point forces. i just came across the same bug which was introduced recently and will submit a bug fix soon to the devel branch.

many thanks for spotting!
daniel > On Feb 16, 2018, at 19:46, Mathias Pilch wrote: > > Dear developers of Specfem3D, > > there appears to be a bug in the latest devel version when using a point force source. > > The line in DATA/FORCESOLUTION > > component dir vect source Z_UP: -1.d0 > > does not seem to have any effect. I tested it with a Gaussian with hdur = 0.06 s, and > both > > component dir vect source Z_UP: -1.d0 > > and > > component dir vect source Z_UP: 1.d0 > > However, the resulting seismograms (semv) look the same, while in reality those calculated with > the opposite sign should yield the inverted seismograms. > > Could you please check that? > > Thank you in advance, > Mathias Pilch > > _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo ________________________________ This message and its contents including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. From komatitsch at lma.cnrs-mrs.fr Fri Feb 16 11:53:32 2018 From: komatitsch at lma.cnrs-mrs.fr (Dimitri Komatitsch) Date: Fri, 16 Feb 2018 20:53:32 +0100 Subject: [CIG-SEISMO] Bug? Check functionality of force direction In-Reply-To: References: Message-ID: <414514d7-d052-cfbb-f89c-435ebb08d5c9@lma.cnrs-mrs.fr> Hi, Thanks for the bug report, and great if Daniel can fix it! By the way, at some point we should also check this old issue: https://github.com/geodynamics/specfem3d/issues/842 (I will try to find time to do it next month) Best wishes, Dimitri. On 02/16/2018 08:14 PM, Daniel B. Peter wrote: > hi Mathias, > > you’re right. it doesn’t work correctly for elastic point forces. i just came across the same bug which was introduced recently and will submit a bug fix soon to the devel branch. > > many thanks for spotting! > daniel > > > >> On Feb 16, 2018, at 19:46, Mathias Pilch wrote: >> >> Dear developers of Specfem3D, >> >> there appears to be a bug in the latest devel version when using a point force source. >> >> The line in DATA/FORCESOLUTION >> >> component dir vect source Z_UP: -1.d0 >> >> does not seem to have any effect. I tested it with a Gaussian with hdur = 0.06 s, and >> both >> >> component dir vect source Z_UP: -1.d0 >> >> and >> >> component dir vect source Z_UP: 1.d0 >> >> However, the resulting seismograms (semv) look the same, while in reality those calculated with >> the opposite sign should yield the inverted seismograms. >> >> Could you please check that? >> >> Thank you in advance, >> Mathias Pilch >> >> _______________________________________________ >> CIG-SEISMO mailing list >> CIG-SEISMO at geodynamics.org >> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > > ________________________________ > This message and its contents including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. 
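As a quick sanity check once the fix is in, one can verify that flipping the sign of the force direction flips the seismograms. A minimal sketch (plain Python; the file names are hypothetical and the usual two-column time/amplitude ASCII trace format is assumed):

import numpy as np

# two runs that differ only in the FORCESOLUTION line
# "component dir vect source Z_UP:  1.d0"  vs.  "component dir vect source Z_UP: -1.d0"
up   = np.loadtxt("run_Zup/DB.X1.BXZ.semv")     # columns: time, vertical velocity
down = np.loadtxt("run_Zdown/DB.X1.BXZ.semv")

# with the bug fixed, the two traces should be exact mirrors of each other
print(np.allclose(down[:, 1], -up[:, 1], atol=1e-6 * np.abs(up[:, 1]).max()))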
> _______________________________________________ > CIG-SEISMO mailing list > CIG-SEISMO at geodynamics.org > http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-seismo > -- Dimitri Komatitsch, CNRS Research Director (DR CNRS) Laboratory of Mechanics and Acoustics, Marseille, France http://komatitsch.free.fr From jfontiela at uevora.pt Sun Feb 18 11:21:10 2018 From: jfontiela at uevora.pt (=?utf-8?Q?Jo=C3=A3o_Fontiela?=) Date: Sun, 18 Feb 2018 19:21:10 +0000 Subject: [CIG-SEISMO] [SW4] difficulties to implement material model Message-ID: <2715F897-D7AA-43F7-B3EA-0C01DA951D38@uevora.pt> Dear all, I’m trying implement material model using command block, but SW4 only take in consideration the first line of the block. In consequence the error is the same for all points of computation grid. In the following example I used the block model of example file named artie-coarse.in and I got the same error with my material model: Point (i,j,k)=(9, 152, -1) in grid g=0 with (x,y,z)=(5.333333e+02,1.006667e+04,-1.333333e+02) and depth=-1.333333e+02 is outside the block domain: -1.000000e+05<= x <= 2.000000e+05, -1.000000e+05 <= y <= 2.000000e+05, 2.100000e+04 <= depth <= 8.000000e+04 The block bounds are: ********Parsing source command********* Cartesian coordinates of source at (lon, lat)=(-1.552100e+02, 1.933000e+01) is (x,y)=(31959.8, 72480.5) Moment source at x=3.195983e+04, y=7.248048e+04, z=8.000000e+03 is centered at grid point i=480, j=1088, k=121, in grid=0 ********Done parsing source command********* *** Ignoring command: 'absdepth=0' Block 1 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 2.10000000e+04 8.00000000e+04 Block 2 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 0.00000000e+00 2.00000000e+00 Block 3 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 2.00000000e+00 6.00000000e+00 Block 4 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 6.00000000e+00 1.20000000e+01 Block 5 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 1.20000000e+01 2.00000000e+01 Block 6 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 2.00000000e+01 3.00000000e+01 Block 7 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 3.00000000e+01 1.00000000e+02 Block 8 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 1.00000000e+02 3.00000000e+02 Block 9 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 3.00000000e+02 5.00000000e+02 Block 10 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 5.00000000e+02 7.00000000e+02 Block 11 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 7.00000000e+02 1.00000000e+03 Block 12 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 1.00000000e+03 3.00000000e+03 Block 13 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 3.00000000e+03 5.00000000e+03 Block 14 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 5.00000000e+03 6.00000000e+03 Block 15 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 6.00000000e+03 1.10000000e+04 Block 16 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 1.10000000e+04 1.60000000e+04 Block 17 has bounds -1.00000000e+05 2.00000000e+05 -1.00000000e+05 2.00000000e+05 1.60000000e+04 2.10000000e+04 ********Done reading the input file********* Receiver INFO for station hawaii: initial location 
(x,y,z) = 7.98272135e+04 3.87629303e+04 0.00000000e+00 zTopo= 0.00000000e+00 nearest grid point (x,y,z) = 7.98000000e+04 3.87333333e+04 0.00000000e+00 h= 6.66666667e+01 with indices (i,j,k)= 1198 582 1 in grid 0 Execution time, reading input file 2 minutes 1.58231549e+01 seconds Using Bjorn's fast (parallel) IO library Assuming a PARALLEL file system Writing images from (up to) 8 procs Setting up SBP boundary stencils Info: Grid #0 min z-coordinate: 0.00000000e+00 Info: mGridSize[0]=6.66666667e+01 The input parameters are the following ones: # SW4 # input RUN:teste # fileio path=teste pfs=1 verbose=5 # # Grid coords grid x=100.0e3 y=100.0e3 z=40.0e3 nx=1501 lat=19.042900 lon=-155.900000 az=0 # supergrid gp=60 # source event source lat=19.33 lon=-155.21 depth=8.0e3 strike=175 dip=60 rake=-30 m0=3.13e16 mxx=4.17e12 myy=9.36e12 mzz=-1.35e13 mxy=-2.43e13 t0=1.0 freq=6.28 type=Gaussian # #time steps=0 time t=60 # #globalmaterial vpmin=866 vsmin=500 # supergrid gp=30 # # Earth Model # block model 1D absdepth=0 block z1=21000.0 vs=3800.0 vp=6800.0 rho=2950.0 qs=190.0 qp=380.0 block z1=0.0 z2=2.0 vs=450.0 vp=1700.0 rho=2000.0 qs=22.5 qp=45.0 block z1=2.0 z2=6.0 vs=650.0 vp=1800.0 rho=2100.0 qs=32.5 qp=65.0 block z1=6.0 z2=12.0 vs=850.0 vp=1800.0 rho=2100.0 qs=42.5 qp=85.0 block z1=12.0 z2=20.0 vs=950.0 vp=1900.0 rho=2100.0 qs=47.5 qp=95.0 block z1=20.0 z2=30.0 vs=1150.0 vp=2000.0 rho=2200.0 qs=57.5 qp=115.0 block z1=30.0 z2=100.0 vs=1200.0 vp=2400.0 rho=2200.0 qs=60.0 qp=120.0 block z1=100.0 z2=300.0 vs=1400.0 vp=2800.0 rho=2300.0 qs=70.0 qp=140.0 block z1=300.0 z2=500.0 vs=1600.0 vp=3100.0 rho=2400.0 qs=80.0 qp=160.0 block z1=500.0 z2=700.0 vs=1800.0 vp=3400.0 rho=2450.0 qs=90.0 qp=180.0 block z1=700.0 z2=1000.0 vs=2100.0 vp=3700.0 rho=2500.0 qs=105.0 qp=210.0 block z1=1000.0 z2=3000.0 vs=2400.0 vp=4400.0 rho=2600.0 qs=120.0 qp=240.0 block z1=3000.0 z2=5000.0 vs=2800.0 vp=5100.0 rho=2700.0 qs=140.0 qp=280.0 block z1=5000.0 z2=6000.0 vs=3150.0 vp=5600.0 rho=2750.0 qs=157.5 qp=315.0 block z1=6000.0 z2=11000.0 vs=3600.0 vp=6150.0 rho=2825.0 qs=180.0 qp=360.0 block z1=11000.0 z2=16000.0 vs=3650.0 vp=6320.0 rho=2850.0 qs=182.5 qp=365.0 block z1=16000.0 z2=21000.0 vs=3700.0 vp=6550.0 rho=2900.0 qs=185.0 qp=370.0 # # restricted output rec lat=19.76 lon=-155.53 depth=0 sta=POHA file=hawaii usgsformat=0 sacformat=1 variables=velocity # # Image output z=0 #image z=0 mode=hmax file=image time=59.4 #image z=0 mode=hmax file=image timeInterval=0.2 #image z=0 mode=vmax file=image time=59.4 Sincerely yours, João Fontiela ********************* ********************* João Fontiela PhD Student skype: fontiela email: fontiela at gmail.com mobile phone +351 917 862 830 Profissional contacts: email: jfontiela at uevora.pt Tel: (+351) 266 745 354 ext: 5395 or 5413 Fax: (+351) 266 745 394 Profissional address: Universidade de Évora Instituto de Ciências da Terra Colégio Luís Verney Rua Romão Ramalho, 59 7000-671 Évora - Portugal ResearchGate: https://www.researchgate.net/profile/Joao_Fontiela -------------- next part -------------- An HTML attachment was scrubbed... URL: From petersson1 at llnl.gov Thu Feb 22 15:27:47 2018 From: petersson1 at llnl.gov (Petersson, Anders) Date: Thu, 22 Feb 2018 23:27:47 +0000 Subject: [CIG-SEISMO] [SW4] difficulties to implement material model Message-ID: <5B070270-A304-4B5E-A303-FBBC7FA65E53@llnl.gov> Hi Joao, The input file in examples/rupture/artie-coarse.in runs without problems on my machine. 
The reason for your difficulties is that you don't specify the material model at the ghost point, which is located just above the free surface.

When you use block commands, it is a good idea to put them in order of increasing depth. Furthermore, the first block command should not specify z1=..., and the last should not specify z2=.... That way, the first block command applies to all grid points where z <= z2, and the last applies to all grid points where z >= z1. The intermediate block commands can specify both z1 and z2, although you would get the same material model if you only specified z1, as long as the commands are in order of increasing depth. This is because they are parsed from top to bottom, so later block commands supersede earlier ones.

In your input file, you should first remove the spurious line "absdepth=0", which is not a legal sw4 command. Then, specify the material model with

block vs=450.0 vp=1700.0 rho=2000.0 qs=22.5 qp=45.0
block z1=2.0 vs=650.0 vp=1800.0 rho=2100.0 qs=32.5 qp=65.0
block z1=6.0 vs=850.0 vp=1800.0 rho=2100.0 qs=42.5 qp=85.0
...
block z1=21000.0 vs=3800.0 vp=6800.0 rho=2950.0 qs=190.0 qp=380.0

Hope this helps,
Anders

----------------------------------------------------------------------

Message: 1
Date: Sun, 18 Feb 2018 19:21:10 +0000
From: João Fontiela
To: cig-seismo at geodynamics.org
Subject: [CIG-SEISMO] [SW4] difficulties to implement material model
Message-ID: <2715F897-D7AA-43F7-B3EA-0C01DA951D38 at uevora.pt>
Content-Type: text/plain; charset="utf-8"

Dear all,

I'm trying implement material model using command block, but SW4 only take in consideration the first line of the block. In consequence the error is the same for all points of computation grid. In the following example I used the block model of example file named artie-coarse.in and I got the same error with my material model:

Point (i,j,k)=(9, 152, -1) in grid g=0 with (x,y,z)=(5.333333e+02,1.006667e+04,-1.333333e+02) and depth=-1.333333e+02 is outside the block domain: -1.000000e+05<= x <= 2.000000e+05, -1.000000e+05 <= y <= 2.000000e+05, 2.100000e+04 <= depth <= 8.000000e+04

...

From maurya2278satish at gmail.com  Fri Feb 23 12:01:13 2018
From: maurya2278satish at gmail.com (satish maurya)
Date: Fri, 23 Feb 2018 12:01:13 -0800
Subject: [CIG-SEISMO] Strain Green's tensor!
Message-ID:

Dear SPECFEM developers,

1) I started to use SPECFEM_GLOBE to compute synthetic seismograms for a FORCESOLUTION source. Can you tell me whether the most recent version allows saving strain? If not, can you point me to the routine where these variables are?

# option to save strain seismograms
# this option is useful for strain Green's tensor
# this feature is currently under development
SAVE_SEISMOGRAMS_STRAIN         = .false.

2) Also, I am trying to import a synthetic 3D model (e.g. a plume structure). Is there a gridded model-reading routine to import such a model into SPECFEM_GLOBE? Any other suggestion for importing a synthetic 3D model is most welcome.

Regards
Satish Maurya
Post-Doc
University of California, Berkeley
Seismological Laboratory, Dept. of Earth & Planetary Science
201 The Tower Room, McCone Hall
Berkeley, CA 94720

From INCHINP at my.erau.edu  Fri Feb 23 08:36:48 2018
From: INCHINP at my.erau.edu (Inchin, Pavel)
Date: Fri, 23 Feb 2018 16:36:48 +0000
Subject: [CIG-SEISMO] MINEOS near-field fluctuations
Message-ID: <9C63A604-4DC6-4AD4-A4C6-A5D539C802F6 at my.erau.edu>

Good day,

I use MINEOS to reproduce surface waves (particularly Rayleigh waves).
On the plot below, I computed synthetic seismograms (vertical component, in the frequency range 0-20 mHz) every 2 km along a particular longitude, going north from the earthquake epicenter out to a distance of 3000 km. I clearly see the propagation of the Rayleigh wave (at ~2000 km on the plot), but in all simulations I also see persistent fluctuations near the epicenter which decay very slowly. Do these represent some real effect, or are they just a modeling artifact? To what extent does MINEOS reproduce the correct fluctuations due to wave propagation near the epicenter?

[attachment: PastedGraphic-1.png - record section of the vertical-component synthetics]

Thank you.
Paul Inchin
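Since the attached figure does not survive in the text archive, here is a minimal sketch of how such a record section (one vertical-component trace every 2 km, out to 3000 km) can be plotted; the file names and the plain two-column time/amplitude format are assumptions:

import numpy as np
import matplotlib.pyplot as plt

distances_km = np.arange(0, 3001, 2)

fig, ax = plt.subplots(figsize=(6, 8))
for d in distances_km[::50]:   # every 100 km, to keep the figure readable
    t, v = np.loadtxt("synthetics/trace_%04dkm.txt" % d, unpack=True)
    ax.plot(t, d + 40.0 * v / np.abs(v).max(), "k", lw=0.5)   # normalize and offset by distance

ax.set_xlabel("Time (s)")
ax.set_ylabel("Distance from epicenter (km)")
ax.set_title("Vertical-component synthetics, 0-20 mHz")
plt.savefig("record_section.png", dpi=150)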