Parallel I/O in Flexible Modelling System (FMS) and Modular Ocean Model 5 (MOM5)

File type: PDF, 7.06 MB



Details:

  • Journal Title:
    Geoscientific Model Development
  • Description:
    We present an implementation of parallel I/O in the Modular Ocean Model (MOM), a numerical ocean model used for climate forecasting, and determine its optimal performance over a range of tuning parameters. Our implementation uses the parallel API of the netCDF library, and we investigate the potential bottlenecks associated with the model configuration, the netCDF implementation, the underpinning MPI-IO implementation, and the Lustre filesystem. We investigate the performance of a global 0.25 degree resolution model using 240 and 960 CPUs. The best performance is observed when we limit the number of contiguous I/O domains on each compute node and assign one MPI rank per node to aggregate and write that node's data, while ensuring that all nodes participate in writing to our Lustre filesystem. These best-performing configurations are then applied to a higher-resolution 0.1 degree global model using 720 and 1440 CPUs, where we observe even greater performance improvements. In all cases, the tuned parallel I/O implementation achieves much faster write speeds than serial single-file I/O, up to 60 times faster at the higher resolution. Under the constraints outlined above, performance scales with the number of compute nodes and I/O aggregators, ensuring the continued scalability of I/O-intensive MOM5 model runs in our next-generation higher-resolution simulations.
  • Source:
    Geosci. Model Dev., 13, 1885–1902, 2020. https://doi.org/10.5194/gmd-13-1885-2020
  • Rights Information:
    CC BY
  • Compliance:
    Submitted
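The abstract's key tuning choice — one aggregating MPI rank per compute node, so that every node participates in writing — can be illustrated with a small sketch. This is a hypothetical illustration, not the authors' code: the function name `choose_aggregators` and the assumption that ranks are placed on nodes in contiguous blocks are ours, not from the paper.

```python
def choose_aggregators(n_ranks, ranks_per_node):
    """Return the ranks (one per node) that act as I/O aggregators.

    Hypothetical sketch of the strategy described in the abstract.
    Assumes block placement: ranks 0..k-1 on node 0, k..2k-1 on node 1, etc.,
    a common but not universal MPI layout.
    """
    n_nodes = -(-n_ranks // ranks_per_node)  # ceiling division
    # The lowest rank on each node aggregates and writes that node's data,
    # so all nodes participate in writing to the filesystem.
    return [node * ranks_per_node for node in range(n_nodes)]

# Example: the paper's 240-CPU configuration with (assumed) 48 ranks per node
# yields 5 aggregator ranks, one per node.
print(choose_aggregators(240, 48))  # [0, 48, 96, 144, 192]
```

Limiting writers to one rank per node keeps the number of simultaneous Lustre clients bounded while still spreading the write load across all nodes, which is the balance the abstract reports as optimal.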
