From marine.lasbleis at gmail.com  Mon Jul 10 17:00:40 2017
From: marine.lasbleis at gmail.com (Marine Lasbleis)
Date: Tue, 11 Jul 2017 09:00:40 +0900
Subject: [CIG-MC] Installing ASPECT on Cray XC30
Message-ID: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com>

Hi all,

This is my first message here, I hope it's OK.
I have started working on ASPECT and have already installed it on a desktop computer (Debian, 8 cores), but I would like to install it on the available clusters. (I have access to 3 different clusters and I'm not sure which one is best for this. There is also no real admin for the clusters; they are "self-organised", which is not always for the best.)

I'm trying to install ASPECT on the ELSI cluster, which is a Cray XC30, and while running into problems I found that you may have done the same a couple of weeks ago (I saw this conversation: http://dealii.narkive.com/jCU1oGdB/deal-ii-get-errors-when-installing-dealii-on-opensuse-leap-42-1-by-using-candi ).

For now, what we have done (before seeing the candi installation):
- switch to PrgEnv-gnu
- try to install p4est. But it seems that we need to use "ftn" and not gfortran or other Fortran compilers, so the configure script can't do anything and stops very early. I tried to modify the configure file by hand (adding ftn wherever the system was looking for fortran or mpif77), but I guess that is definitely not a good idea, and I am obviously still missing a couple of calls because I still get the same error.

So, based on that conversation, I guessed that https://github.com/dealii/candi can actually install everything for me.
Since I'm using a slightly different cluster (Cray XC30), I will try to give you updates on my progress.
I'm not familiar with candi, but I decided to give it a try, so please excuse me if I am making obvious mistakes.

I changed the configuration as requested, loaded the required modules, and defined new variables with the compiler information.
On this particular cluster we need to be careful about the installation path (the default one is on a drive that is very slow to access, and compilation takes forever), so I had to use the -p path option. Also, I think I first used too many cores to compile and got a memory error (an internal compiler error was raised, which seems to be related to available memory).
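Put together, the steps described in this paragraph might look roughly like the following; the module names, install path, and the -j job count are assumptions for this kind of Cray system, not values taken from the message:

    # Switch from the default Cray programming environment to the GNU one
    # (the exact module names may differ on this cluster).
    module swap PrgEnv-cray PrgEnv-gnu
    # Point candi at the Cray compiler wrappers (cc/CC/ftn).
    export CC=cc CXX=CC FC=ftn F77=ftn
    # Install to a fast filesystem via -p and limit parallel build jobs;
    # the -j option is assumed to be supported by this candi version.
    ./candi.sh -p /scratch/$USER/dealii-candi -j 4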
So, from my day trying to install:
- I finished the candi.sh script; apparently everything installed correctly.
- I built ASPECT. (On this particular cluster, be careful with cmake: by default the available cmake is not up to date, and in particular, even after the installation with candi.sh, the cmake you get by default is not the one that was installed.)
I got a couple of warnings, mostly about PETSc, that I thought were only warnings and not problems. Most of them were along the lines of this one, for either PETSc or Trilinos:
warning: 'dealii::PETScWrappers::MPI::Vector::supports_distributed_data' is deprecated [-Wdeprecated-declarations]
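A minimal sketch of the build step with the cmake caveat above taken into account; the directory layout and the deal.II and cmake version numbers are placeholders, not values from the message:

    # Put the candi-installed cmake first in the PATH, then configure
    # ASPECT against the candi-built deal.II (paths are hypothetical).
    export PATH=/scratch/$USER/dealii-candi/cmake-3.9.1/bin:$PATH
    mkdir build && cd build
    cmake -DDEAL_II_DIR=/scratch/$USER/dealii-candi/deal.II-v8.5.0 ..
    make -j 4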
- I've run a couple of examples from the cookbooks. None are working.

I got this from running ASPECT using aprun -n4 ../aspect burnman.prm

-----------------------------------------------------------------------------
-- This is ASPECT, the Advanced Solver for Problems in Earth's ConvecTion.
-- . version 1.5.0
-- . running in DEBUG mode
-- . running with 4 MPI processes
-- . using Trilinos
-----------------------------------------------------------------------------

[0]PETSC ERROR: [1]PETSC ERROR: [3]PETSC ERROR: [2]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ------------------------------------------------------------------------
------------------------------------------------------------------------
[2]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: [3]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
[1]PETSC ERROR: [3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
Try option -start_in_debugger or -on_error_attach_debugger
[1]PETSC ERROR: [3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[1]PETSC ERROR: [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[1]PETSC ERROR: [3]PETSC ERROR: to get more information on the crash.
configure using --with-debugging=yes, recompile, link, and run
[3]PETSC ERROR: to get more information on the crash.
[1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
Caught signal number 8 FPE: Floating Point Exception,probably divide by zero

Any idea where this could come from? (Any additional files I should show you?)

Thanks! (And many thanks to the person who did the candi.sh script for the Cray XC40 :-) )
Marine


From maxrudolph at ucdavis.edu  Mon Jul 10 19:52:51 2017
From: maxrudolph at ucdavis.edu (Max Rudolph)
Date: Mon, 10 Jul 2017 19:52:51 -0700
Subject: [CIG-MC] Installing ASPECT on Cray XC30
In-Reply-To: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com>
References: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com>
Message-ID:

Marine,
You may want to post this on the aspect-devel mailing list if you haven't done so already.

Max

On Mon, Jul 10, 2017 at 5:00 PM, Marine Lasbleis wrote:
> [...]

--
---------------------------------
Max Rudolph
Assistant Professor, Earth and Planetary Sciences, UC Davis
From knepley at mcs.anl.gov  Mon Jul 10 18:45:39 2017
From: knepley at mcs.anl.gov (Matthew Knepley)
Date: Mon, 10 Jul 2017 18:45:39 -0700
Subject: [CIG-MC] Installing ASPECT on Cray XC30
In-Reply-To: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com>
References: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com>
Message-ID:

On Mon, Jul 10, 2017 at 5:00 PM, Marine Lasbleis wrote:
> [...]
> [1]PETSC ERROR: [3]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
> [...]
> Any idea where this could come from?

This does not appear to actually be a PETSc error. It appears that ASPECT calls PetscInitialize even when 'using Trilinos'. This installs a signal handler (unless you unload it), which caught the FPE signal generated somewhere in the ASPECT code. I suggest you run this under valgrind. I also suggest not debugging in parallel before the serial case works.

Thanks,

Matt
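A minimal sketch of the two suggestions above, reproducing the problem with a single process first and then running the same case under valgrind; the parameter file is a placeholder and the exact launcher options depend on the cluster's batch setup:

    # Reproduce the crash with one process before debugging in parallel.
    aprun -n 1 ./aspect cookbooks/convection-box.prm
    # Re-run the single-process case under valgrind to look for memory errors.
    aprun -n 1 valgrind --track-origins=yes ./aspect cookbooks/convection-box.prm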
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener


From marine.lasbleis at gmail.com  Mon Jul 10 23:13:00 2017
From: marine.lasbleis at gmail.com (Marine Lasbleis)
Date: Tue, 11 Jul 2017 15:13:00 +0900
Subject: [CIG-MC] Installing ASPECT on Cray XC30
In-Reply-To:
References: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com>
Message-ID:

Hi Max,

Thank you, I was not aware of that mailing list! I'll send it there now. (And sorry for using the wrong mailing list.)
Marine

> On 11 Jul 2017, at 11:52, Max Rudolph wrote:
>
> Marine,
> You may want to post this on the aspect-devel mailing list if you haven't done so already.
>
> Max
> [...]
From bangerth at tamu.edu  Tue Jul 11 11:10:30 2017
From: bangerth at tamu.edu (Wolfgang Bangerth)
Date: Tue, 11 Jul 2017 12:10:30 -0600
Subject: [CIG-MC] Installing ASPECT on Cray XC30
In-Reply-To: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com>
References: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com>
Message-ID: <382f25d3-a112-1804-b43e-c7e1ed2d0c80@tamu.edu>

On 07/10/2017 06:00 PM, Marine Lasbleis wrote:
> [1]PETSC ERROR: [3]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [...]

ASPECT uses signaling floating point NaNs to track down errors, but as Matt already mentioned, the error is only caught -- not generated -- by PETSc.

Do you get the same error if you run with one processor? If so, do you know how to generate a backtrace in a debugger to figure out where this is happening?

(I'll mention that on some systems with some compilers, we have seen these sorts of floating point exceptions come out of the standard C library -- that's highly annoying because it means that the system is defeating our ways of debugging problems. In that case, you can just disable floating point exceptions when you run ASPECT's cmake script. I think there is a blurb about that in the readme file. But before you do that, I'd still be interested to see whether you can get a backtrace :-) )

Best
 W.

--
------------------------------------------------------------------------
Wolfgang Bangerth          email: bangerth at colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/
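One way to produce the backtrace asked for above, assuming the crash also happens in a single-process run; the parameter file is a placeholder, and whether gdb can be launched directly or needs an interactive aprun session depends on the cluster:

    # Run one process under gdb: the DEBUG build stops at the FPE signal,
    # and "backtrace" then prints the call stack at the offending line.
    gdb -ex run -ex backtrace --args ./aspect cookbooks/convection-box.prm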
From marine.lasbleis at elsi.jp  Wed Jul 12 00:38:37 2017
From: marine.lasbleis at elsi.jp (Marine Lasbleis)
Date: Wed, 12 Jul 2017 16:38:37 +0900
Subject: [CIG-MC] Installing ASPECT on Cray XC30
In-Reply-To: <382f25d3-a112-1804-b43e-c7e1ed2d0c80@tamu.edu>
References: <1A73664D-51BD-45AF-BC60-B41AB2F08780@gmail.com> <382f25d3-a112-1804-b43e-c7e1ed2d0c80@tamu.edu>
Message-ID:

Dear Timo and Wolfgang,

I tried with the flag -D ASPECT_USE_FP_EXCEPTIONS=OFF, and ASPECT is indeed running!
(By the way, which would be the best example from the cookbooks to check how well ASPECT is running? I tried the convection-box. I want to use ASPECT for the inner core, so I'll also be running the inner-core cookbook soon.)

Amongst other things, we also tried:
- the same as before, but with aprun -n 1 (so a single process): same error.
- a static installation (following subsection 3.3.5, "Compiling a static ASPECT executable"). It did not work on our cluster (not sure why, but the build of Trilinos failed).

So, is there any reason the -D ASPECT_USE_FP_EXCEPTIONS=OFF flag should not be used?
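The reconfigure step implied above would look roughly like this, run from the existing ASPECT build directory; the flag only turns off the floating-point-exception debugging checks that Wolfgang described:

    # Rebuild ASPECT with floating point exceptions disabled.
    cmake -D ASPECT_USE_FP_EXCEPTIONS=OFF .
    make -j 4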
"Do you get the same error if you run with one processor? If so, do you know how to generate a backtrace in a debugger to figure out where this is happening?"

Yes, same error. No, I don't know how to do a backtrace. If you think I should try, please let me know! (Either with an explanation of how to do it, or I'll have a look on the internet.)

I will investigate a little more how well the runs go, and try a couple of examples to make sure everything works. I'll install it on the other sessions of the cluster as well.

Thank you! And thank you so much for the candi.sh script adapted to Cray :-) I tried installing ASPECT on the cluster (and on my Mac) a couple of years ago, but with no success.
Also, as there are a couple of people here at ELSI this summer wanting to use ASPECT, I installed it with Docker on another colleague's Mac, and it seems to be working correctly (at first, I gave her my old virtual machine from a summer workshop in 2014). It's likely that you'll get questions from people in Nantes (France) who want to use ASPECT on their own cluster as well.

Best,
Marine

Marine Lasbleis
=====
ELSI Research Scientist
Earth Life Science Institute
Tokyo Institute of Technology
Tokyo, Japan
+81 70 1572 5070
marine.lasbleis at elsi.jp
https://members.elsi.jp/~marine.lasbleis/
http://elsi.jp/en/

> On 12 Jul 2017, at 03:10, Wolfgang Bangerth wrote:
> [...]
From legendcit at gmail.com  Sat Jul 15 06:32:54 2017
From: legendcit at gmail.com (Lijun Liu)
Date: Sat, 15 Jul 2017 08:32:54 -0500
Subject: [CIG-MC] AGU session DI002 on seismic anisotropy
Message-ID:

Dear all,

We invite you to submit to the following AGU session:

DI002: Advances in Understanding Earth's Dynamic Processes using Seismic Anisotropy

Session Description:
Understanding the process and evolution of mantle dynamics represents a fundamental goal of Earth science. However, this has also been a major challenge due to the inaccessible nature of the solid Earth and the various uncertainties in observation and modeling. Recent advances in measuring seismic anisotropy using different geophysical methods are generating new constraints on deformation of the Earth's interior associated with active subduction, lithosphere deformation, mantle circulation, core-mantle boundary processes, as well as the growth of the inner core. Meanwhile, novel analog and numerical modeling techniques and capabilities are starting to capture the Earth's dynamic processes at unprecedented levels of resolution and physics. Consequently, new perspectives are emerging on the evolution of the crust, lithosphere, mantle, and core. We therefore invite submissions with a focus on the deformation and structures of the solid Earth from seismic and MT measurements, laboratory simulation, theoretical calculation, and geodynamic modeling.

Confirmed invited presenters:
Thorsten Becker, University of Texas, Austin
Kelly Liu, Missouri University of Science & Technology

Conveners:
Lijun Liu, University of Illinois at Urbana-Champaign, Urbana, IL, United States
Maureen D Long, Yale University, Department of Geology and Geophysics, New Haven, CT, United States
Margarete Ann Jadamec, University of Houston, Houston, TX, United States
Manuele Faccenda, University of Padova, Padova, Italy

-Lijun

---------------------------------------------------
Lijun Liu
Associate Professor
Lincoln Excellence Scholar
University of Illinois at Urbana-Champaign
Email: ljliu at illinois.edu
Web: https://www.geology.illinois.edu/cms/One.aspx?portalId=127672&pageId=230146