Parallel output: time and processor blocks
Revision as of 06:35, 5 May 2013
SELFE writes its binary state output to a directory called /outputs. This directory must exist before the run starts, or the model will stop immediately with a warning. Depending on your MPI configuration, /outputs may be a single shared location (the more common setup), or each processor may have its own local instance, in which case you need to collect the contents together into a central location afterwards.
An example file name is 9_0000_elev.61. More generally, file names follow the pattern: [time_block]_[processor_no]_[variable_name].[fortran_unit]
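The naming convention above can be sketched as a small parser. This is a hypothetical helper, not part of the SELFE toolchain; the function and regular expression are illustrative assumptions based on the pattern described in the text.

```python
import re

# Hypothetical parser for SELFE per-processor output names such as
# "9_0000_elev.61": [time_block]_[processor_no]_[variable_name].[fortran_unit]
NAME_RE = re.compile(r"^(\d+)_(\d+)_(\w+)\.(\d+)$")

def parse_output_name(filename):
    """Split a per-processor output file name into its four parts."""
    m = NAME_RE.match(filename)
    if m is None:
        raise ValueError(f"not a per-processor output name: {filename}")
    block, proc, var, unit = m.groups()
    return {"time_block": int(block),
            "processor": int(proc),
            "variable": var,
            "fortran_unit": int(unit)}

print(parse_output_name("9_0000_elev.61"))
# {'time_block': 9, 'processor': 0, 'variable': 'elev', 'fortran_unit': 61}
```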
- Variable name
The variable name is self-explanatory and covered in the documentation.
- Time block
The time blocks start from 1 and are sequential. The model buffers output and writes it periodically; every ihfskip time steps it closes the current block and opens a new one. For instance, with a time step of 120 seconds and ihfskip = 10080, each block is 14 days long.
- "Neat" block lengths that suit meaningful analysis (e.g. 14 days) are usually easiest to work with later when you postprocess.
- Some of the post-processing scripts run much better when the simulation length is an even multiple of ihfskip. You can arrange this by altering either ihfskip or the simulation length; at the risk of lengthening the simulation a bit, the latter often produces a neater result.
- If your simulation length is not an even multiple of the time block length, the last block will be truncated. This causes some minor errors and warnings in the post-processing tools. In addition, if you then restart the run it is best to repeat and overwrite the truncated block, because the post-processing tools do not cope well with blocks that grow and shrink in the middle of a run.
- Even if the output blocks match the end of the simulation exactly, the model (at the time of writing) will open one extra block that remains unused. This is useful for autocombine_MPI_elfe.pl, which always waits for a new block to appear before combining the previous one (it would hang if the last block were never opened).
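The block arithmetic above can be checked with a few lines. The values are the example from the text (dt = 120 s, ihfskip = 10080); the total step count is a hypothetical run length chosen for illustration.

```python
# Time-block arithmetic for SELFE output, using the example values
# from the text (these are not defaults).
dt = 120.0          # model time step in seconds
ihfskip = 10080     # time steps per output block

block_seconds = dt * ihfskip
block_days = block_seconds / 86400.0
print(block_days)   # 14.0

# Check whether a run length (in steps) fills its last block exactly;
# a nonzero remainder means the final block will be truncated.
nsteps = 302400     # hypothetical total number of time steps
full_blocks, leftover = divmod(nsteps, ihfskip)
print(full_blocks, leftover)  # 30 0  -> 30 full blocks, no truncation
```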
- Processor number
The mpi_processor number starts at 0 and is the MPI rank of the task that wrote the output. If every processor writes to a shared directory, the files for the different processors end up collocated; but clusters are not always set up that way, and when each process writes locally you must gather the files yourself. The per-processor outputs are then combined into a global binary by the combine_output*.f90 programs (a simple Perl script, autocombine_MPI_elfe.pl, exists to combine all available outputs transparently). Once this is done, you will have a binary file named something like 9_elev.61: the time block and variable name remain, and the processor number is gone. There is no utility for concatenating the outputs in time; instead, most post-processing tools are able to work with multiple files.
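The gathering step can be sketched as follows: group the per-processor files by (time_block, variable) so that each group can be merged into one global binary such as 9_elev.61. This is only an illustration of the bookkeeping that combine_output*.f90 / autocombine_MPI_elfe.pl perform, not their actual implementation; the directory layout and helper name are assumptions.

```python
import re
from collections import defaultdict
from pathlib import Path

def group_outputs(outdir):
    """Group per-processor SELFE output files by (time_block, variable.unit).

    Each returned group corresponds to one combined global binary,
    e.g. files 9_0000_elev.61, 9_0001_elev.61, ... -> 9_elev.61.
    """
    groups = defaultdict(list)
    pat = re.compile(r"^(\d+)_(\d+)_(\w+\.\d+)$")
    for f in sorted(Path(outdir).iterdir()):
        m = pat.match(f.name)
        if m:                      # skip non-output files
            block, proc, var = m.groups()
            groups[(int(block), var)].append(f.name)
    return groups
```

In a real combine step, each group would then be read rank by rank and written to a single global file; here only the grouping is shown.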
- Combine outputs of side-centered variables
Outputs for the barotropic pressure gradient force (bpgr.65) and wave forces (wafo.67) are currently located at side centers. To combine these variables, a ".gr3"-type file named "sidecenters.gr3" is needed. It can be generated by these steps:
- Run the model with ipre = 0 (in param.in); a build point file "sidecenters.bp" will be generated.
- Triangulate "sidecenters.bp" with xmgredit5 or Aquaveo SMS (SMS is recommended, as xmgredit5 sometimes produces a very ugly triangulation) to generate a ".gr3" file.
- Put "sidecenters.gr3" in your working directory and use the Perl script autocombine_MPI_elfe.pl to combine the outputs.
- Visualize the combined outputs with xmvis6.
Triangulating "sidecenters.gr3" with xmgredit5 or SMS will leave residual elements outside the mesh domain. Removing these extra elements is not necessary for combining and visualizing, but a cleaner "sidecenters.gr3" makes for better visualization.