RUSH RENDER QUEUE
(C) Copyright 1995,2000 Greg Ercolano. All rights reserved.
V 102.14 11/12/00
Strikeout text indicates features not yet implemented


Command Reference


 Submit Command Reference

Submit Commands
AutoDump      Dump job on completion
Criteria      Criteria for matching hosts
Command       Render script to execute
Cpus          Hosts (or hostgroups) to use for rendering
DependOn      Wait for frames in other jobs to get done
DoneCommand   Command to run when job done
DoneMail      Send mail when job done
Frames        Frame ranges to render
LogDir        Directory for log files
LogFlags      Controls logfile behavior
NeverCpus     Cpus to never use for rendering
Notes         Job notes
Priority      Default priority
Ram           Ram job expects to use (max)
State         Initial state for job
Title         Title for job
WaitFor       Wait for other jobs to complete
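As an illustrative sketch only (paths, hostnames, and frame ranges below are hypothetical; check exact submit-script syntax against the examples shipped with your Rush installation), a minimal submit script using several of the commands above might look like:

```
title   TESTJOB
ram     100
frames  1-10
logdir  /job/test/logs
command /job/test/render_script
cpus    tahoe=1@100
```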

AutoDump
(rush -autodump)

Command
(rush -command)

Cpus
(rush -ac/-rc)

Criteria
(rush -criteria)


[erco@howland] % rush -lac
IP               Hostname   Ram  Cpus Pri Criteria
192.168.10.3     rotwang    100  2    0   +any,linux,linux6.0,intel,+dante
192.168.10.2     how        256  2    0   +any,sgi,irix,irix6.2
192.168.10.1     nt         256  1    0   +any,winnt,+dante
 
criteria ( linux | ( irix6 & octane ) )   # Use linux machines OR irix6 octanes.
criteria ( linux | irix6.2 )              # Use only linux machines OR irix6.2 machines.
criteria ( linux & !alpha )               # Use only linux machines that are NOT dec-alphas.
criteria ( linux & alpha & carrera )      # Use only linux dec-alphas built by Carrera.
criteria ( +any )                         # Use all available machines.
criteria ( !intel )                       # Use all machines that are NOT intel based.

DependOn

DoneCommand
(rush -donecommand)

DoneMail
(rush -donemail)

Frames
(rush -af/-rf)

LogDir

LogFlags

NeverCpus
(rush -an/-rn)

Notes
(rush -notes)

Priority
(rush -priority)

Ram
(rush -ram)

State
(rush -pause/-cont)

Title
(rush -title)

WaitFor


Rush Command Line


Rush Command Line Arguments
-ac -af -an -autodump -checkconf -checkhosts
-command -cont -criteria -dcatlog -deltaskfu -dependon
-dexit -dexitnow -dlog -done -donecommand -donemail
-down -dump -end -fail -fu -getoff
-hold -jobnotes -lac -lacf -laj -lajf
-lc -lcf -lf -lff -lfi -lj
-ljf -notes -offline -online -pause -ping
-priority -que -ram -rc -reserve -reorder
-rf -rn -rotate -status -submit -tasklist
-title -trs -try -tss -uping -waitfor

rush -ac <cpuspec..> [jobid..]

rush -af <framerange..> [jobid..]

rush -an <hostname|+group..> [jobid..]

rush -autodump <off|done|donefail> [jobid..]

rush -checkconf <filename>

rush -checkhosts <filename>

rush -command [jobid..] <command>

rush -cont [jobid..]

rush -criteria 'criteria strings' [jobid..]

rush -dcatlog [host..]

rush -deltaskfu [..]

rush [jobid] -dependon [-|depjobid,[..,..]]

rush -dexit [remotehost..]

rush -dexitnow [remotehost..]

rush -dlog [remotehost..]

rush -done <framerange|framestate..> [jobid..]

rush -donecommand [jobid..] <command|->

rush -donemail user[@domain.com[,user..]] [jobid..]

rush -dump [jobid|user@host..]

rush -end [jobid..]

rush -fail <framerange|framestate..> [jobid..]

rush [operation] -fu

rush -getoff [remotehost..]

rush -hold <framerange|framestate..> [jobid..]

rush -jobnotes '<notes>' [jobid..]

rush -lac [hostname..]

rush -lacf [hostname..]

rush -laj

rush -lajf

rush -lc [jobid..]

rush -lcf [jobid..]

rush -lf [jobid..]

rush -lff [jobid..]

rush -lfi [jobid..]

rush -lj [remotehost..]

rush -ljf [jobid..]

rush -notes <framerange>:'notes..' [jobid..]

rush -offline [remotehost|+group..]

rush -online [remotehost|+group..]

rush -pause [jobid..]

rush -ping [remotehost|+group..]

rush -priority [jobid..]

rush -que <framerange|framestate..> [jobid..]

rush -ram <ramval> [jobid..]

rush -rc <cpuspec|tidspec|hostname> [jobid..]

% rush -ac tahoe@100                       # Add a cpu.
% rush -rc tahoe@100                       # Now try to remove it
'tahoe@100' no such cpu specification      # FAILED: need to use spec shown in 'rush -lc'

% rush -lc                                 # Look at 'rush -lc' report
CPUSPEC     STATE FRM  PID   ELAPSED  .. 
tahoe=1@100 Run   0002 26747 00:00:11 ..   # More complete specification in report.

% rush -rc tahoe=1@100                     # Remove using spec shown in report
'tahoe=1@100' removed.                     # It works

rush -reserve <cpuspec> [ramval]

rush -reorder <framerange..> [jobid..]

rush -rf <framerange..> [jobid..]

rush -rn <hostname|+group..> [jobid..]

rush -rotate [remotehost|+group..]

rush -status [-s secs] [-c count] [remhost ..]

rush -submit [remotehost]

rush -tasklist [remotehost..]

rush -title <text> [jobid..]

rush -trs

rush -try <count> <framerange..> [jobid..]

rush -tss

rush -uping [-c count] [remotehost..]

rush [jobid] -waitfor [-|waitjobid,[..,..]]


Configuration File
$RUSH_DIR/etc/rush.conf




Hosts File
$RUSH_DIR/etc/hosts




Cpu Accounting File
$RUSH_DIR/var/cpu.acct


The cpu accounting file is configured with the rush.conf file's CpuAcctPath command. Each time a frame finishes executing, a new entry is appended to the cpu accounting file, logging the name of the job, how long the frame ran, etc.

Cpu Accounting File Example

u  948242700 53
p  948242783 tahoe-798    WERNER/C33 erco     0106  superior 100k  122  0   0	0
p  948242783 tahoe-798    WERNER/C33 erco     0107  superior 100k  122  0   0	0
p  948242865 tahoe-797    KILLER     erco     0504  superior 200   121  0   0	0
u  948246300 5
u  948249900 0

Process Entries


p  948242783 tahoe-798 WERNER/C33 erco  0106  superior  100k  122  0   0   0
p  948242783 tahoe-798 WERNER/C33 erco  0107  superior  100k  122  0   0   0
p  948242865 tahoe-797 KILLER     erco  0504  superior  200   121  0   0   0
-  --------- --------- ---------- ----  ----  --------  ----  ---  -   -   -
|      |         |          |      |     |       |       |     |   |   |   |
|      |         |          |      |     |       |       |     |   |   |   Exit code
|      |         |          |      |     |       |       |     |   |   |
|      |         |          |      |     |       |       |     |   |   #Secs User Time
|      |         |          |      |     |       |       |     |   |                 
|      |         |          |      User  |       |       |     |   #Secs System Time
|      |         |          |            |       |       |     |
|      |         |          Title of job |       |       |     #Secs Wall Clock Time
|      |         Jobid                   |       |       |
|      |                                 |       |       Priority
|      time(2) process started           |       |
|                                        |       Host that ran the process
'p' indicates 'process entry'            |
                                         Frame that ran

Utilization Entries


u  948242700 53
u  948246300 5
-  --------- --
|      |      |
|      |      Percent of time processor(s) were busy rendering. (0-100)
|      |
|      time(2) utilization recorded
|
'u' indicates 'utilization entry' 
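Given the field layout above, the accounting file is easy to post-process. As a sketch (not part of rush itself), a small Bourne shell/awk function can total wall-clock render seconds per user from the process entries:

```shell
#!/bin/sh
# Total wall-clock render seconds per user from 'p' entries.
# Field layout (see diagram above): $1=entry type, $2=start time,
# $3=jobid, $4=job title, $5=user, $6=frame, $7=host, $8=priority,
# $9=wall clock secs, $10=system secs, $11=user secs, $12=exit code.
sum_wallclock() {
    awk '$1 == "p" { wall[$5] += $9 }
         END { for (u in wall) print u, wall[u] }' "$1"
}

# Typical usage:
#   sum_wallclock $RUSH_DIR/var/cpu.acct
```

Run against the example file above, this would report 365 seconds for user 'erco' (122+122+121).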

CAVEATS

  • 'Exit code' is normally a positive number representing the actual exit code of the process. The value will be negative if the process was signaled, the absolute value being the signal number. A negative value usually means the process was killed, segfaulted, or was bumped by a higher priority process. Commonly, the 'Exit code' will be one of:
    
      -15 - process killed with SIGTERM; someone probably manually killed it
       -9 - process killed with SIGKILL; probably bumped in a priority battle
       -3 - process killed with SIGINT; someone sent it a ^C
        0 - process did an exit(0); frame Done
        1 - process did an exit(1); frame Fail
        2 - process did an exit(2); frame Requeue
    

  • Although tempting, it is not recommended to use process execution times for cpu billing purposes. Wall clock time includes time the process may have spent waiting on network load. User and System times report only the time spent by the Render Script itself, not its sub-processes (eg. the renderer).

    To properly bill for cpu time, you would either need to enable full-on unix process accounting to obtain accumulated cpu time for all sub-processes in the user's render script, or create wrapper scripts that use programs like timex(1) to monitor the execution time of the critical render/compositor processes.

    Tools like timex(1) indicate in their documentation that they must have unix process accounting enabled to show sub-process totals. This is usually prohibitive on production machines, due to the disk resources used by the unix process accounting system.


  • FAQ - Frequently Asked Questions



    TD Questions


    How can I use padded frame numbers (0000) in my render script?
    Use $RUSH_PADFRAME, which is created for you automatically to do 4-digit padding.

    To do your own custom frame number padding, use this unix technique:

        set padframe = `perl -e 'printf("%04d",$ENV{RUSH_FRAME});'`
    
    To use different padding widths, just change the '4' (in '%04d') to a different number.
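As an alternative sketch that avoids perl, printf(1) can do the same padding from a Bourne shell script (the frame number below is just an example value; rush normally sets RUSH_FRAME for you):

```shell
#!/bin/sh
# Pad the frame number to 4 digits using printf(1).
# Change the '4' in '%04d' for a different padding width.
RUSH_FRAME=7                                # example; rush sets this for you
padframe=`printf "%04d" "$RUSH_FRAME"`      # yields 0007
```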

    My renders are coming up 'FAIL'. How do I figure out what's wrong?
    Check the frame logs being generated by your render script.

    Frame logs contain the error messages from each rendered frame which should help you determine the problem. Make sure your submit script has logdir pointing to a valid directory, which is where your frame logs can be found.

    Also, make sure your render script is returning the proper exit code. The most common problem is a render script that does not properly handle returning exit codes. Your render script must 'exit 0' for a frame to show up 'DONE' in the frame list. Make sure your script is properly checking the error returns from your renderer, and translating them into the codes rush expects. See Render Scripts for more.
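As a sketch of that translation (this is not rush's own code, and 'myrenderer' and the scene path are placeholders), a Bourne shell render script might map the renderer's status to the codes rush expects like this:

```shell
#!/bin/sh
# Map a renderer's exit status to the codes rush expects:
#   0 = Done, 1 = Fail.
rush_status() {
    if [ "$1" -eq 0 ]; then
        echo 0          # frame will show up 'Done'
    else
        echo 1          # frame will show up 'Fail'
    fi
}

# In a real render script you would run the renderer, then exit
# with the translated code so rush marks the frame correctly:
#   myrenderer -frame $RUSH_FRAME /job/scene.file
#   exit `rush_status $?`
```

A script could likewise 'exit 2' to requeue the frame, per the exit codes listed under the Cpu Accounting caveats.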


    How do I have rush automatically retry frames? How do I set the number of retries?
    See Retrying Frames.

    My job isn't starting renders on my cpus. What's going on?
    Use 'rush -lc' and check the Notes column for messages.

    If you know the remote cpus aren't just busy with other jobs, then list your cpus and check the 'NOTES' column to see if the system is giving you reasons why your cpus are getting rejected. 

    The job might be in Pause, there may be no more frames to render, none of the available machines may have as much ram as your job needs, etc. Here are some typical situations:

    [erco@howland]% rush -lc
    CPUSPEC[HOST]        STATE       FRM  PID     JOBTID  ELAPSED  NOTES
    placid=3@100k        Idle        -    -       1       00:04:37 Job state is 'Pause'
    tahoe=1@1            Idle        -    -       2       00:02:08 No more frames
    superior=1@1         Idle        -    -       3       00:02:08 Not enough ram
    waccubuc=1@1         Idle        -    -       4       00:02:08 This is a 'neverhost'
    ontario=1@1          Idle        -    -       5       00:02:08 Failed 'criteria' check

    How do I setup my submit script to only render on certain platforms or operating systems?
    Use the Criteria submit script command.

    This command allows you to build a list of platforms, operating systems, or other general criteria to limit which machines will run your renders.

    You can see the different criteria names in the output of 'rush -lac'. It is up to your sysadmin to maintain the criteria names.


    How can I render several frames in one process using rush?
    With clever scripting. See Batching Multiple Frames for how to render several frames at a time.

    Sometimes it pays to render several frames at a time rather than one at a time, to decrease the amount of time the renderer spends loading files.

    If you have existing script filters which monitor the progress of renders to determine which frames are rendering, you can probably easily modify these scripts to work with rush to reflect changes in the frame list, using either frame notes (rush -notes) or frame state change operations (rush -que/rush -done). 


    My job has its 'k' flag set; why isn't it bumping off other jobs' frames?
    For a job to bump another off a cpu, all of these must be true:
    • A job can only bump other jobs of lower priority (ie. not the same priority). 
    • A job can't be bumped if its almighty flag ('a') is set. 
    • A job can't be bumped unless its entry in the -tasklist is in either the Avail or Run state.
    When a frame is bumped, the bumped frame will show a message in its frame list indicating the job that bumped it, e.g.:
    % rush -lf erie-790
    STAT FRAME TRY HOSTNAME PID   ELAPSED  NOTES
    Run  0100  0   tahoe    10290 00:00:26 
    Run  0101  0   tahoe    10291 00:00:26 
    Que  0102  1   tahoe    10292 00:00:09 Bumped by ralph's superior-791,KILLER @300ka
    Que  0103  0   -        0     00:00:00 
    [..]
    

    Is there an easier way to set the RUSH_JOBID environment variable?
    You can use eval `submit` to automatically set it, or a simple alias to set it manually. However, cutting and pasting the setenv command is not so hard.

    Some people like to use this alias to make it easy to set new jobid variables:

        # Put this in your .cshrc
        alias jid 'setenv RUSH_JOBID "\!*"'
    Then you can use it on the command line to set one or more jobids:
        erco@tahoe % jid tahoe-932 tahoe-933
    If you want to have the RUSH_JOBID variable set automatically in your shell whenever you invoke your submit script, then use 'eval':
        erco@tahoe % eval `my_submit_script`
    ..the shell automatically parses the 'setenv RUSH_JOBID' command rush prints on stdout when a job is successfully submitted. Error messages are not affected by 'eval', so you don't have to worry about losing error messages when using this technique.

    What does 'rush' stand for?
    Rush is not an acronym, though surely there are some TDs that would like to think it stands for "Render, yoU f*cking piece of SH*t".

    Can my render script detect being 'bumped' by higher priority jobs?
      Not without clever scripting.

      Usually the desire to do this stems from wanting to clean up leftover temporary files generated by renders. In most cases, you can avoid leftover files by putting temporary files in $RUSH_TMPDIR, which rush cleans automatically, even after bumps.

      Bumps and dumps use SIGKILL to kill the render script and its children. This signal is NOT trappable. There's a reason:

        Under many circumstances SIGTERM, the 'trappable' kill, is not effective, especially during heavy rendering, causing bumped frames not to bump, screwing up unattended use, and leaving processors unproductive.

        Since bumps can happen just as readily as dumps, both use SIGKILL, which is untrappable and always effective (except in pathological cases where the process is hung).

        So do not expect to be able to trap interrupts to detect bumps/dumps.

      If you need a way to determine if you are re-rendering a frame that was previously killed mid-execution (ie. bumped by a higher priority job), you can put some logic into your render script:

          #!/bin/csh -f
          ..
          if ( -e /somewhere/$RUSH_FRAME.busy ) then
              echo We are picking up a frame that was killed.
              echo Do pickup stuff here..
          endif

          # Create a 'busy' file for this frame.
          #    If we are bumped, the busy file is left behind
          #    so that the above logic can detect it.
          #
          touch /somewhere/$RUSH_FRAME.busy
          echo Do rendering here..
          rm -f /somewhere/$RUSH_FRAME.busy

    Can I chain separate jobs together, so that one waits for the other to get done?
    Yes, see the submit script command WaitFor to have a job wait for others to dump before starting.

    Also, see DependOn to have a job wait for frames in another job to get done, ie. rather than wait for the entire job to complete.


    Is it possible to use negative frame numbers in rush?
    No. You are evil.

    If you are trying to include 'handles' and 'slates' by using negative numbers, don't.


    Is there a way to see just the cpus busy running my job?
    Yes. In unix:

       rush -lc | grep Busy
       rush -lf | grep Run
       

    ..and on WinNT, if you don't have grep(1):

       rush -lc | findstr Busy
       rush -lf | findstr Run
       


    Is there a way to see what jobs a machine is busy rendering?
    In unix:

       rush -tasklist host | grep Busy
       

    ..and on WinNT, if you don't have grep(1):

       rush -tasklist host | findstr Busy
       


    Is there a way to requeue a busy frame for a host that is down?
    If a machine goes down while rendering a frame, the frame stays in the Busy state until the machine is rebooted. Once rush realizes the remote machine rebooted, it requeues the frame.

    But if the machine never reboots, the frame will stay in the Busy state indefinitely, unless you take the following action.

    Assuming you're *sure* the machine is down, and not just 'slow', use the following command:

        % rush -down hosta hostb
        

    ..where 'hosta' is the name of the machine that is down, and 'hostb' is the name of the machine that's the server for the job(s) with the hung frame(s).

    Beware; if the remote machine is not really down, and is still running the frame, doing the above will start the frame running on another machine, and the two frames will overwrite each other.


    Systems Administrator Questions

    What's the best way to verify all the daemons are running?

      Use:

        rush -ping +any

      This 'pings' all the daemons in the $RUSH_DIR/etc/hosts file with a TCP message.

      If the daemon isn't running, tail(1) the daemon's log file in $RUSH_DIR/var/rushd.log.


    How do I stop/start the daemons? (Unix/NT)

        Irix              /etc/init.d/rushd stop
                          /etc/init.d/rushd start

        Linux/RedHat 6.x  /etc/rc.d/init.d/rushd stop
                          /etc/rc.d/init.d/rushd start

        Windows NT        NET STOP RUSHD
                          NET START RUSHD

      All the daemons can be stopped via:

        rush -dexit +any


    Is there an example boot script I can use to invoke rush?

      Yes; see $RUSH_DIR/etc/S99rush.

    Is there a way to run 'rush -online' automatically when someone logs out?
      Yes; when a user logs out of the window manager, the sysadmin can configure the following files to run 'rush -online':

        Irix              /usr/lib/X11/xdm/Xreset
        Linux/RedHat 6.x  /etc/X11/xdm/TakeConsole

      A literal example of what should be added to these files would be:

        /usr/local/rush/bin/rush -online
        logger -t RUSH "Rush online (user logout)"

      Use of logger(1) is optional; it leaves an audit trail in the syslog. Include the full path to logger(1) if security is an issue.


    Is there a way to run 'rush -online' automatically when someone's screensaver pops on?


    How do I update changes to the rush hosts file (or rush.conf file) to the network?
      You should use rdist(1); the changed files will be picked up automatically by the daemons within a minute. Here are some examples:
    
    # SEND A NEW rush.conf
    foreach i ( `awk '/^[a-z]/{print $1}' /usr/local/rush/etc/hosts` )
       rdist -c /usr/tmp/newconf ${i}:/usr/local/rush/etc/rush.conf
    end
    
    # SEND A NEW RUSH hosts
    foreach i ( `awk '/^[a-z]/{print $1}' /usr/tmp/newhosts` )
       rdist -c /usr/tmp/newhosts ${i}:/usr/local/rush/etc/hosts
    end
    

      NOTE: When sending out new files, you must use rdist(1), not cp(1) or rcp(1). rdist(1) uses a special 'tmp-file/rename' technique that prevents the daemon from parsing the file before it has finished being written.


    Is there a way to track whose jobs are bumping whose?
      Grep the $RUSH_DIR/var/rushd.log file for BUMP messages.

    Is there a way to track who's changing other people's jobs?
      Grep the $RUSH_DIR/var/rushd.log file for SECURITY messages.

    Can rush be told to use a different network interface, other than the machine's hostname?