10  MPlus, Linstat, and Slurm

The SSCC has a limited number of MPlus licenses. On the Linux servers, Slurm tracks these licenses, so all MPlus jobs on Linstat must be run through Slurm in order to obtain one.

10.1 Writing MPlus Code

MPlus for Linux does not have all the interactive features available with MPlus for Windows - there is no HTML version of output, and no model diagrammer or plot viewer. However, our Linux servers have many more processors and much more memory available, and using them can substantially speed up your work.

Typically you’ll write your .inp file using a text editor (like VS Code, or MPlus for Windows on Winstat) and then use Slurm on a Linux server to run it.

10.2 Submitting MPlus Jobs to Slurm

To submit a regular MPlus job to Slurm, log in to Linstat and type:

> ssubmit -L mplus --cores=C --mem=Mg "mplus filename.inp"

where C should be replaced by the number of cores your job will use, M should be replaced by the number of gigabytes of memory your job will use, and filename.inp should be replaced by the name of your MPlus file.

You are also welcome to write your own sbatch files.
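If you prefer an sbatch file over ssubmit, a minimal script might look like the sketch below. The script name and input file name are hypothetical, and the license name here simply mirrors the `-L mplus` flag used with ssubmit; check with SSCC staff if your job behaves unexpectedly.

```shell
#!/bin/bash
#SBATCH --licenses=mplus      # request an MPlus license token (matches ssubmit -L mplus)
#SBATCH --cpus-per-task=8     # cores; should match PROCESSORS in your .inp file
#SBATCH --mem=2G              # memory in gigabytes

# Run MPlus on the input file (hypothetical file name)
mplus mymodel.inp
```

You would then submit this with `sbatch myjob.sh` from Linstat.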

Input file names cannot contain spaces when used with MPlus on Linux.
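If you copied input files over from Windows, where spaces are common, you can rename them before submitting. A minimal sketch (the file name `my model.inp` is just an illustration):

```shell
# Create an example .inp file with a space in its name (illustration only)
touch "my model.inp"

# Rename any .inp files containing spaces, replacing spaces with underscores
for f in *" "*.inp; do
    [ -e "$f" ] || continue                          # skip if nothing matched
    mv "$f" "$(printf '%s' "$f" | tr ' ' '_')"
done

ls my_model.inp
```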

10.2.1 Cores

MPlus is capable of using multiple cores (cpus, processors). Pick a number of cores that is large enough to speed up your job, but small enough that you do not have to wait for a server to become free.

In order for MPlus to use the number of cores you specify for Slurm, your input file must also specify the processor count in the ANALYSIS section:

ANALYSIS:
  TYPE = [complex, mixture, etc.];
  PROCESSORS = C;

where again C is the number of cores your MPlus job will use.

MPlus jobs that use MONTECARLO simulation, or that use random STARTS to search for a solution, are good candidates for multiple processors. It is efficient to pick a number of STARTS or NREPS that is a multiple of the number of processors, e.g. PROCESSORS = 16 and STARTS = 64.
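Putting the two settings together, an ANALYSIS section might look like the following sketch. The TYPE and start counts are illustrative; in MPlus, STARTS takes two numbers (initial-stage starts and final-stage optimizations), and here the first is a multiple of PROCESSORS:

```
ANALYSIS:
  TYPE = MIXTURE;
  STARTS = 64 16;     ! 64 random starts, a multiple of the 16 processors
  PROCESSORS = 16;
```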

See MPlus User’s Guide (8e) (2017, pp.708-710) for more suggestions about models that benefit from multiple processors.

10.2.2 Memory

Specify enough memory to both load your data and process it. The minimum Slurm specification of 1GB is often enough.

10.2.3 Output

A successful job submission should produce two files:

  • the usual MPlus output file, named something like filename.out, and
  • a Slurm log file, named something like slurm-[jobid].out.

If your job did not run, you may or may not have an MPlus output file, and the Slurm log will contain an error message. Keep in mind that sometimes MPlus produces output files with errors or warnings in the middle of the output: just because you have MPlus output does not guarantee that you can trust it!
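One way to catch errors or warnings buried in the middle of a long output file is to scan it from the command line. In this sketch the output file is fabricated for illustration; normally MPlus writes it for you, and the file name is hypothetical:

```shell
# Stand in for an MPlus output file (normally written by MPlus itself)
cat > filename.out <<'EOF'
INPUT READING TERMINATED NORMALLY
*** WARNING
  Data set contains cases with missing values.
MODEL ESTIMATION TERMINATED NORMALLY
EOF

# MPlus flags problems in capital letters; show them with line numbers
grep -n -E "WARNING|ERROR" filename.out
```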

If the MPlus job ran, the Slurm log file will contain the output that MPlus normally sends to the terminal. For a simple problem this might be something like

    MPlus VERSION 8.10 (Linux)
     MUTHEN & MUTHEN

     Running input file 'Regression.inp'...

     Beginning Time:  11:50:10
        Ending Time:  11:50:11
       Elapsed Time:  00:00:01

     Output saved in 'Regression.out'.

If your MPlus job ran but produced warnings, these are noted at the bottom of the Slurm log. Because the log captures everything MPlus would normally print to the terminal, including the iteration log (TECH8), it can be lengthy.

10.3 Checking your MPlus Job Efficiency

Because MPlus jobs cannot be run in the background (in the technical, Linux sense), you cannot check what resources your MPlus task is using while it runs. Instead, make your best guess at what resources (cores, memory) will be sufficient, and then adjust for subsequent runs.

The email you receive from ssubmit includes an efficiency report (you can also use the Slurm command seff [jobid] - the jobid is part of the Slurm log file name).

10.3.1 Example

Look at the statistics on your job’s efficiency to see what resource requests need adjusting. For example:

Job ID: 1526763
Cluster: sscc_Slurm
User/Group: hemken/hemken-upg
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 8
CPU Utilized: 00:21:24
CPU Efficiency: 83.59% of 00:25:36 core-walltime 
Job Wall-clock time: 00:03:12 
Memory Utilized: 135.84 MB 
Memory Efficiency: 13.27% of 1.00 GB

10.3.2 CPU Efficiency

This was a complex estimation with random starts that spent most of its time using multiple cores - the CPU Efficiency is therefore high. I could use more cores to reduce the overall time (Job Wall-clock) from three minutes to just seconds, although this would reduce the efficiency. An efficiency of 50% might be a good compromise between time spent and computational efficiency.

10.3.3 Memory Utilized

The data set was just 8MB, so I see from the report that MPlus actually needed over 100MB of memory just for the estimation algorithm. But the minimum Slurm memory request is 1GB, so there is nothing to adjust there.