MATLAB Answers

How to define MCR_CACHE_ROOT for different nodes on a cluster

MikeStein on 29 Dec 2016
Commented: Paul Dankoski on 8 Sep 2019
Hello,
I'm trying to deploy a compiled simulation to a cluster. I set the environment variable MCR_CACHE_ROOT to the location where the batched jobs unpack the MCR, but I can't figure out a way to give each node a unique path. As I understand it, this environment variable is shared among all the nodes: if I change it on one, the others see the change too (is that right?).
Maybe I can initially define MCR_CACHE_ROOT in my PBS script and then change it with getenv/setenv within the compiled MATLAB code, or is that too late in the execution stream?
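Concretely, something like the following in the PBS script is what I had in mind (a sketch only; the scratch location and launcher name are illustrative, and it assumes /tmp is node-local on your cluster):

```shell
#!/bin/sh
# Build a cache path that is unique per node by embedding the hostname,
# under a scratch area assumed to be node-local (adjust for your cluster).
export MCR_CACHE_ROOT="/tmp/${USER:-mcr}/mcr_cache_$(hostname)"
mkdir -p "$MCR_CACHE_ROOT"

# Then launch the compiled simulation; run_mysim.sh is illustrative.
# ./run_mysim.sh /path/to/MCR arg1 arg2
echo "MCR cache for this node: $MCR_CACHE_ROOT"
```

Because each node evaluates $(hostname) to its own name, every node unpacks the MCR into its own directory even though the script is identical everywhere.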
I'm finding it hard to locate details on how MCR_CACHE_ROOT is actually used. In particular, how is MCR_CACHE_ROOT used (if at all) when a program is compiled with the -C flag? Any insights/suggestions are welcome!
FYI: I'm using bcMPI to batch out the jobs.
Thank you, Michael


Answers (2)

Sid Jhaveri on 3 Jan 2017
Batch processing in MATLAB is done using a master-slave architecture, so all the workers share the same file location. I would therefore not advise creating multiple MCR_CACHE_ROOT values.
To answer your second question: if the application is compiled with the "-C" flag, the CTF archive is not embedded in the application; it is generated as a separate file that must be deployed alongside the executable.
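For reference, a sketch of how such a build might be invoked ("myapp.m" is illustrative; the mcc call is commented out since it requires a MATLAB Compiler installation):

```shell
#!/bin/sh
# -m builds a standalone application; -C keeps the CTF archive out of the
# executable, producing a separate myapp.ctf next to the binary.
CMD="mcc -m -C myapp.m"
# $CMD   # would produce: myapp (executable) and myapp.ctf (archive)
echo "$CMD"
```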

  1 Comment

Paul Dankoski on 8 Sep 2019
Sid,
Can you please comment on where the MCR looks for the .deploy_lock.* files, and how MCR_CACHE_ROOT affects that search?
What I see is that it seems to be trying to create .deploy_lock.0 in the current working directory, then it seems to look in the folder where the CTF has been unpacked ("myapp_mrc/.matlab"), then it seems to switch to $HOME/.matlab/mcr_v85/.deploy_lock.0. (yes, an older version).
I would have expected it to follow $MCR_CACHE_ROOT if it is defined.
Thanks,
Paul D.



Walter Roberson on 3 Jan 2017
Are you encountering a difficulty with it being the same for all nodes?
I can imagine a situation where a shared cache could be inefficient: namely, each node has a disk that is more "local" (faster access), such as an SSD. To the extent that we ever did something like that, we relied on the fact that different nodes can mount different disks at the same mount point, so the same path can reach a different disk depending on which node resolves it. For example, on OS X these days /tmp is a link to /private/tmp, where /private is typically mounted on a "local" disk; on Linux, /tmp is likewise usually on node-local storage. If you need to use different disks, setting MCR_CACHE_ROOT to a path under /tmp might be all that is needed.
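In that case no per-node value is needed at all. A minimal sketch, assuming /tmp (or a similar mount point) really is node-local on every node:

```shell
#!/bin/sh
# The path is the SAME on every node, but because /tmp is assumed to be
# mounted on node-local disk, each node gets its own physical cache.
export MCR_CACHE_ROOT="/tmp/mcr_cache_${USER:-mcr}"
mkdir -p "$MCR_CACHE_ROOT"
echo "MCR_CACHE_ROOT=$MCR_CACHE_ROOT"
```

The same script can then be submitted unchanged to every node, and the "shared environment variable" problem disappears because the sharing is only of the path string, not of the underlying disk.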

