4. Advanced Tutorial
Learn how to install CMAQ software and underlying libraries, copy input data, and run CMAQ.
- 4.1. Create CycleCloud CMAQ Cluster
- 4.1.1. Configure the CycleCloud Application Host using the Azure Portal
- 4.1.2. Customize your Host Virtual Machine for the CycleCloud Application
- 4.1.3. Connect to the CycleCloud Web Interface
- 4.1.4. Upgrade the number of processors available to the CycleCloud Cluster (only needed if you want to modify the number of nodes in the HPC queue)
- 4.2. Modify CycleCloud CMAQ Cluster
- 4.3. Install CMAQv533 and prerequisite libraries on Linux
- 4.3.1. Login to updated cluster
- 4.3.2. Change the login shell to tcsh
- 4.3.3. Log out and then log back in to activate the tcsh shell
- 4.3.4. Optional Step to allow multiple users to run on the CycleCloud Cluster
- 4.3.5. Check to see if the group is added to your user ID
- 4.3.6. Make the /shared/build directory
- 4.3.7. Change ownership to your username
- 4.3.8. Make the /shared/cyclecloud-cmaq directory
- 4.3.9. Change ownership to your username
- 4.3.10. Install git
- Clone the cyclecloud-cmaq git repo into the /shared directory
- Optional - Change the group to cmaq recursively for the /shared/build directory
- Check what modules are available on the cluster
- Load the openmpi module
- Load the gcc compiler - note that it may have been automatically loaded by the openmpi module
- Verify the gcc compiler version is greater than 8.0
- Change directories to install and build the libraries and CMAQ
- Build the netCDF-C and netCDF-Fortran libraries - these scripts work for the gcc 8+ compiler
- A .cshrc script that sets LD_LIBRARY_PATH was copied to your home directory; enter the shell again and check the environment variables that were set
- If the .cshrc wasn’t created, use the following command to create it
- Execute the shell to activate it
- Verify that you see the following setting
- Build I/O API library
- Build CMAQ
- 4.4. Configuring selected storage and obtaining input data
- 4.5. Copy the run scripts from the CycleCloud repo
- 4.6. Run the CONUS Domain on 180 PEs
- 4.7. Check the status in the queue
- 4.8. Check the timings while the job is still running using the following command
- 4.9. When the job has completed, use tail to view the timing from the log file.
- 4.10. Check whether the scheduler thinks there are CPUs or vCPUs
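
A few of the steps above lend themselves to short command sketches. For the shell change in sections 4.3.2 and 4.3.3, the following is a minimal sketch that assumes tcsh is already installed at /bin/tcsh; the path may differ on your cluster image.

```csh
# Hypothetical sketch for sections 4.3.2-4.3.3: switch the login shell to tcsh.
which tcsh            # confirm tcsh is installed and note its path
chsh -s /bin/tcsh     # change the login shell (use the path reported above)
# Log out and back in, then confirm the active login shell:
echo $SHELL
```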
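
For the module and compiler checks under section 4.3.10, a sketch is below; the exact OpenMPI module name varies by cluster image, so "mpi/openmpi" is only a placeholder.

```csh
# Hypothetical sketch for the module and compiler checks under section 4.3.10.
module avail            # list the modules available on the cluster
module load mpi/openmpi # placeholder name; load the OpenMPI module reported by "module avail"
module list             # confirm what is loaded (gcc may have come in with openmpi)
gcc --version           # verify the gcc version is greater than 8.0
```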
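
For the monitoring steps in sections 4.7 through 4.9, the sketch below assumes the cluster's HPC queue is managed by Slurm and that the CCTM log file names follow the patterns shown; both are assumptions to adjust for your run script.

```csh
# Hypothetical sketch for sections 4.7-4.9 (assumes Slurm; log names are examples).
squeue -u $USER                            # 4.7: check the job status in the queue
grep 'Processing completed' CTM_LOG_001*   # 4.8: per-timestep timings while the job runs
tail -n 30 run_cctm*.log                   # 4.9: overall timing after the job completes
```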
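
Section 4.10 compares the hardware the operating system reports with what the scheduler is configured to use; this sketch again assumes Slurm, and `<nodename>` is a placeholder.

```csh
# Hypothetical sketch for section 4.10: physical cores vs. what Slurm schedules.
lscpu | grep -E 'CPU\(s\)|Thread|Core|Socket'  # OS view: sockets, cores, threads per core
sinfo -N -o '%N %c'                            # Slurm view: CPUs per node
scontrol show node <nodename> | grep -i cpu    # detail for one node; replace <nodename>
```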