From: Greg Ercolano <erco@(email surpressed)> Subject: Q+A: Submit scripts on network drives Date: Fri, 02 Jan 2003 15:16:58 -0800
Msg# 4 View Complete Thread (23 articles) | All Threads Last Next |
[submitted 12/06/02]

> Why do I have to run submit scripts from a network drive?

It's best if the script is run from a drive that all the machines can see, so that you can make changes to just the one script and all the machines see that script. Production companies always have a server that contains all their data for rendering, so that all the machines see the same data when the data is changed.

> The network drive could be the server drive mounted on the clients via AFP?

Yes. AFP, NFS, NTFS, any of the above.

The script submits /itself/, so the script not only runs on the local machine when you invoke it to bring up the GUI, it is also run on each rendering machine, because it is also the 'render script' that handles invoking the renderer.

-- Greg Ercolano, erco@(email surpressed) Rush Render Queue, http://seriss.com/rush/
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: aerender batch framing Date: Thu, 17 Mar 2011 12:08:34 -0400
Msg# 2057
Greg Ercolano wrote:
> In the latter case, from the developer's point of view, the /lack/ of information..
> In the above error case, the exit code of 0 is bad too..
> Also, it's important they log actual OS errors..

BTW, feel free to include my observations in your bug report, and feel free to have them contact me with questions. When I end up in conversation with the devs, clarity one way or the other is usually found; either the problem can be fixed, or there's a clear reason why it can't. Just knowing the latter is useful to us, and lets us know there's no fix coming, so we can try triage techniques, like having the submit script try to detect missing info by parsing the PROGRESS output. Normally I'd jump on that, but it's a sketchy thing to do in this case because AE is inconsistent with its output; sometimes I've seen it not print progress info at all, even with AE's "-v ERRORS_AND_PROGRESS" flag.

-- Greg Ercolano, erco@(email surpressed) Seriss Corporation Rush Render Queue, http://seriss.com/rush/ Tel: (Tel# suppressed) ext.23 Fax: (Tel# suppressed) Cel: (Tel# suppressed)
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: Newbie submit script not working Date: Mon, 24 Oct 2011 18:35:55 -0400
Msg# 2136
On 10/24/11 15:23, Kevin Sallee wrote:
> On 10/24/2011 05:20 PM, Greg Ercolano wrote:
>> On 10/24/11 15:08, Kevin Sallee wrote:
>> [..]
>>> Error: Cannot find procedure "AbcExport".
>>>
>>> So it seems it's not finding the plugins. Where should I specify the path for the plugins?
>>>
>> Probably a separate MAYA_PLUG_IN_PATH environment variable, eg:
>>
>>     os.environ["MAYA_PLUG_IN_PATH"] = "/some/path/to/your/plugins"
>
> Ok that's a great answer! thank you very much, i'm gonna try it right away :)

Great.. and feel free to keep progress updated on the thread, which I've cc'ed here.

BTW, note you can get one script to do both the submit and render; just add a special flag (eg. -render) which the 'submit' part of the script can specify as the command it sends to rush, so when the script runs on the render nodes, it runs a different part of the script to handle the rendering. It's a cool way of getting one script to do the work of two.

This is a special form of "recursion" that you have to handle carefully, otherwise you'll end up with a 'network worm' where you create a job that submits jobs..! You can end up with that situation even with two scripts, where you accidentally submit a job that runs the submitter script instead of the render script.

To prevent that, add some code to your submit code as follows to protect it from accidentally being run as if it were a render. The daemons always set the RUSH_ISDAEMON variable before running a render, so you can check this variable just before submitting the job to "short-circuit" such a problem before it goes out of control, eg:

    if "RUSH_ISDAEMON" in os.environ:
        print "Avoiding recursion: exiting"
        sys.exit(1)
    else:
        # Submit the job
        submit = os.popen("rush -submit", 'w')
        [..]

-- Greg Ercolano, erco@(email surpressed) Seriss Corporation Rush Render Queue, http://seriss.com/rush/
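[Editor's note: the one-script-does-both pattern above can be sketched as a small dispatcher. This is a hedged sketch in modern Python; the function name and the "abort" convention are illustrative, not part of Rush's API — only the `-render` flag and RUSH_ISDAEMON variable come from the post.]

```python
import os
import sys

def choose_mode(argv, environ):
    """Decide which part of a dual-purpose submit/render script runs.

    -render in argv means a render node invoked us via the job's
    command line; otherwise a user is running us to submit.
    RUSH_ISDAEMON (set by the daemons before running a render) acts
    as the recursion guard: a daemon must never reach the submit path.
    """
    if "-render" in argv[1:]:
        return "render"
    if "RUSH_ISDAEMON" in environ:
        return "abort"      # short-circuit the 'network worm'
    return "submit"

if __name__ == "__main__":
    print("mode: %s" % choose_mode(sys.argv, os.environ))
```

When submitting, the script would pass itself plus `-render` as the job's command, so the render nodes take the 'render' branch.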
From: Kevin Sallee <kevinsallee@(email surpressed)> Subject: Re: Newbie submit script not working Date: Mon, 24 Oct 2011 18:54:52 -0400
Msg# 2137
On 10/24/2011 05:35 PM, Greg Ercolano wrote:
> BTW, note you can get one script to do both the submit and render; just add a special flag (eg. -render) which the 'submit' part of the script can specify as the command it sends to rush, so when the script runs on the render nodes, it runs a different part of the script to handle the rendering. It's a cool way of getting one script to do the work of two.

It will be a good idea for rendering scripts, but we're kind of using rush just to send other kinds of jobs to the farm. This is a geometry baking job that will be introduced between animation and lighting/texturing.

> This is a special form of "recursion" that you have to handle carefully, otherwise you'll end up with a 'network worm' where you create a job that submits jobs..! [..]
Yeah, I saw that in the original script and kept that part of it for the moment, but as I said this job will probably not do renders.

So I tried to specify my environment vars but it doesn't seem to be working. If I do something like this:

    #!/usr/bin/env python
    import os, commands, sys
    os.environ["PATH"] = "/usr/autodesk/maya/bin"
    os.environ["MAYA_LOCATION"] = "/usr/autodesk/maya"
    os.environ["MAYA_PLUG_IN_PATH"] = "/mnt/springfield/.coatlicue/maya/plugins:/opt/pixar/RenderManStudio/plug-ins:/opt/bakery/licenses/relight-1.1.2.7_22217/plugins/maya_2011:/usr/local/alembic-1.0.2/maya/plug-ins"
    os.system("maya -batch -file /home/kevinsallee/alembicTests/testbatch.mb -command 'AbcExport -v -jobArg \"-ro -uvWrite -frameRange 1 20 -file /home/kevinsallee/alembicTests/testbatch.abc\";'")

..it doesn't find maya, and it doesn't find AbcExport either. For the moment I managed to get it working like this:

    #!/usr/bin/env python
    import os, commands, sys
    #os.environ["PATH"] = "/usr/autodesk/maya/bin"
    #os.environ["MAYA_LOCATION"] = "/usr/autodesk/maya"
    #os.environ["MAYA_PLUG_IN_PATH"] = "/mnt/springfield/.coatlicue/maya/plugins:/opt/pixar/RenderManStudio/plug-ins:/opt/bakery/licenses/relight-1.1.2.7_22217/plugins/maya_2011:/usr/local/alembic-1.0.2/maya/plug-ins"
    os.system("/usr/autodesk/maya/bin/maya -batch -file /home/kevinsallee/alembicTests/testbatch.mb -command 'loadPlugin \"/mnt/springfield/.coatlicue/maya/plugins/AbcExport.so\"; AbcExport -v -jobArg \"-ro -uvWrite -frameRange 1 20 -file /home/kevinsallee/alembicTests/testbatch.abc\";'")

..which isn't very clean, but hey, for the moment it works. I don't understand why it's not taking my environment vars into account.

Thanks for the help,
Kevin
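[Editor's note: one likely culprit in the first script is that `os.environ["PATH"] = ...` replaces PATH outright rather than appending to it, so everything else on the node's PATH disappears. A hedged sketch of building a child environment additively — the paths are hypothetical — and handing it to the child explicitly rather than mutating the whole process:]

```python
import os
import subprocess

def extended_env(base, extra_path, plugin_path):
    """Copy 'base', append extra_path to PATH (instead of replacing it),
    and set MAYA_PLUG_IN_PATH. Mutations of os.environ *are* inherited
    by os.system()/subprocess children, but passing env= to subprocess
    keeps the parent process environment untouched."""
    env = dict(base)
    old = env.get("PATH", "")
    env["PATH"] = old + os.pathsep + extra_path if old else extra_path
    env["MAYA_PLUG_IN_PATH"] = plugin_path
    return env

env = extended_env(os.environ,
                   "/usr/autodesk/maya/bin",                  # hypothetical
                   "/usr/local/alembic-1.0.2/maya/plug-ins")  # hypothetical
# subprocess.call(["maya", "-batch", ...], env=env)           # sketch, not run here
```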
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: Render Log Paths Date: Fri, 16 Dec 2011 16:49:55 -0500
Msg# 2160
From: "Mr. Daniel Browne" <dbrowne@(email surpressed)> Subject: Re: Render Log Paths Date: Fri, 16 Dec 2011 17:16:44 -0500
Msg# 2161
From: "Mr. Daniel Browne" <dbrowne@(email surpressed)> Subject: Re: Render Log Paths Date: Fri, 16 Dec 2011 21:31:44 -0500
Msg# 2162
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: Render Log Paths Date: Fri, 16 Dec 2011 22:46:21 -0500
Msg# 2163
On 12/16/11 18:31, Mr. Daniel Browne wrote:
> I've come upon a stumbling block; making a safety copy that the render is run from. Though you can add the %s wildcard into the log directory name for the JobID, that would mean having to make the copy into the job sub-directory in the Render portion of the script, which could lead to race conditions. Is there another mechanism I can use? Is it possible for me to have submit get passed back to itself so that the completion happens and then the copy can take place in a single action before any batches start?

If you want to copy data into the job's log directory for storage, that sounds like what the jobstartcommand is for. That's the cleanest way, I'd think.

1. JOBSTARTCOMMAND
------------------
The jobstartcommand runs just before the first frame starts. The jobstartcommand will be passed the logdir (via the RUSH_LOGFILE and/or RUSH_LOGDIR env variables), and the command's own output will in fact be sent to jobstartcommand.log in the logdir.

You can pass arguments to the script to tell it e.g. where the source material is, or you can point to a file that is a manifest of data.

The jobstartcommand can be the submit script itself, passed a special argument to tell it what to do. For instance, I often have the submit scripts understand these arguments:

    -submit          -- handle submitting the job
    -render          -- handle rendering a frame
    -jobstartcommand -- handle the jobstartcommand (if any)
    -jobdonecommand  -- handle the jobdonecommand (if any)
    [..etc..]

This way the submit script can set up the render queue to use itself to do all the different operations.

2. PAUSE APPROACH
-----------------
The other way would be to submit the job in either the paused state, or with 0 cpus assigned to it, so you can then do your work, then kick the job into gear once you're done.

Or an even different route: don't use the jobid at all..

3. CREATE A UNIQUE DIRNAME
--------------------------
Use some combo of time of day, username, hostname, and even the pid of the script itself if need be. That way you always end up with a unique pathname for the job, so you don't end up with any races by depending on attributes of the job. (ie. define your own attributes, then use those)

* * *

If you still have trouble, give me the whole picture of what you're doing, as choosing the best route to take often involves knowing the big picture first.

-- Greg Ercolano, erco@(email surpressed) Seriss Corporation Rush Render Queue, http://seriss.com/rush/
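[Editor's note: option 3 can be sketched in a few lines of Python; the prefix argument is hypothetical — substitute your site's log area.]

```python
import getpass
import os
import socket
import time

def unique_job_dirname(prefix):
    """Build a collision-resistant directory name from the time of day,
    username, hostname and pid, so the path doesn't depend on any
    attribute of the job (such as its jobid)."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    name = "%s.%s.%s.%d" % (stamp, getpass.getuser(),
                            socket.gethostname(), os.getpid())
    return os.path.join(prefix, name)

# e.g. unique_job_dirname("/jobs/logs") might yield something like
# /jobs/logs/20111216-224600.dbrowne.tahoe.12345 (illustrative only)
```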
From: "Mr. Daniel Browne" <dbrowne@(email surpressed)> Subject: Re: Render Log Paths Date: Fri, 16 Dec 2011 23:58:46 -0500
Msg# 2164
Ah of course, I forgot about jobstartcommand; I'm losing my mind. Thanks Greg.

On Dec 16, 2011, at 7:46 PM, Greg Ercolano wrote:
> If you want to copy data into the job's log directory for storage, sounds like what the jobstartcommand is for. That's the cleanest way, I'd think. [..]
From: "Mr. Daniel Browne" <dbrowne@(email surpressed)> Subject: Re: Render Log Paths Date: Mon, 19 Dec 2011 17:19:32 -0500
Msg# 2165
I can't seem to get either $ENV{RUSH_LOGFILE} or $ENV{RUSH_LOGDIR} (or variations) to resolve in the jobstartcommand script in Perl.

On Dec 16, 2011, at 7:46 PM, Greg Ercolano wrote:
> The jobstartcommand will be passed the logdir (via the RUSH_LOGFILE and/or RUSH_LOGDIR env variables) and the command's own output will in fact be sent to the jobstartcommand.log in the logdir.
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: Render Log Paths Date: Mon, 19 Dec 2011 18:57:06 -0500
Msg# 2166
On 12/19/11 14:19, Mr. Daniel Browne wrote:
> I can't seem to get either $ENV{RUSH_LOGFILE} or $ENV{RUSH_LOGDIR} (or variations) to resolve in the jobstartcommand script in Perl.

Hmm, try having your perl jobstartcommand script run the following:

    system("printenv|sort");    # Unix
    system("set");              # Windows

..and then paste the contents of your jobstartcommand.log file here, including the headers at the top.

-- Greg Ercolano, erco@(email surpressed) Seriss Corporation Rush Render Queue, http://seriss.com/rush/
From: "Mr. Daniel Browne" <dbrowne@(email surpressed)> Subject: Re: Render Log Paths Date: Mon, 19 Dec 2011 19:36:49 -0500
Msg# 2167
I'm not sure syntactically what I was doing wrong, but $ENV{RUSH_LOGFILE} is working now. Though I am a little puzzled why part of the output appears at the top of the log file.

On Dec 19, 2011, at 3:57 PM, Greg Ercolano wrote:
> system("printenv|sort");
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: Render Log Paths Date: Mon, 19 Dec 2011 20:31:37 -0500
Msg# 2168
On 12/19/11 16:36, Mr. Daniel Browne wrote:
> Though I am a little puzzled why part of the output appears at the top of the log file.

Probably stdout buffering in perl. Try adding:

    $|=1;

..at the top of your perl script to disable stdout buffering. This should help output be synchronized with stderr and system().

-- Greg Ercolano, erco@(email surpressed) Seriss Corporation Rush Render Queue, http://seriss.com/rush/
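[Editor's note: the same interleaving shows up in any buffered language; here's a self-contained Python analogue of the `$|=1` advice, using only the standard library. With an explicit flush, the parent's lines land in the pipe before and after the child's output, in order; without it, the parent's buffered text would be dumped at exit, after the child's.]

```python
import subprocess
import sys

# A child interpreter that writes, flushes, runs a grandchild, writes again.
# The flush pushes 'before' into the shared pipe *before* the grandchild
# runs; otherwise it would sit in the stdout buffer until process exit.
FLUSHED = (
    "import subprocess, sys\n"
    "sys.stdout.write('before\\n'); sys.stdout.flush()\n"
    "subprocess.call([sys.executable, '-c', \"print('child')\"])\n"
    "sys.stdout.write('after\\n'); sys.stdout.flush()\n"
)

def run_lines(script):
    """Run 'script' in a fresh interpreter and return its stdout lines."""
    out = subprocess.run([sys.executable, "-c", script],
                         capture_output=True, text=True)
    return out.stdout.splitlines()

print(run_lines(FLUSHED))   # ['before', 'child', 'after']
```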
From: "Mr. Daniel Browne" <dbrowne@(email surpressed)> Subject: Re: Render Log Paths Date: Mon, 19 Dec 2011 21:59:45 -0500
Msg# 2169
I noticed that when the logs flag is set to "keep last", the jobstartcommand log doesn't show up in iRush.

On Dec 19, 2011, at 3:57 PM, Greg Ercolano wrote:
> Hmm, try having your perl jobstartcommand script run the following: system("printenv|sort"); [..]
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: Render Log Paths Date: Tue, 20 Dec 2011 15:45:40 -0500
Msg# 2177
From: "Mr. Daniel Browne" <dbrowne@(email surpressed)> Subject: Re: Submit Confirmation Date: Thu, 05 Jan 2012 20:49:29 -0500
Msg# 2184
That sounds like it would switch the confirmation off across all apps; I simply want to reduce it to one, so it sounds like I have to make a modification myself to .common.pl.

I wasn't aware of the ability to address write nodes that way; however, we've gone a more user-friendly direction by breaking them up into fields in the submit window.

On Jan 5, 2012, at 2:48 PM, Greg Ercolano wrote:

> On 01/04/12 17:49, Mr. Daniel Browne wrote:
>> Hi Greg, Happy New Year.
>>
>> Before the holidays I had made a modification to our Nuke submit to split off each write node into a separate job. The downside is that if you have a lot of write nodes, you have to click through a whole lot of "Ok" messages after hitting the submit button. Is there an easy way to suppress this, or do I have to customize the RushSubmit() routine inside .common.pl?
>
> There are a few things I'd suggest.
>
> The newer .common.pl files have an option to turn off the OK messages. If you email me your .common.pl file, I can send back the file with the newer code for RushSubmit() that you can use to disable those dialogs.
>
> Also: if you have a *lot* of write nodes, I'd suggest trying to consolidate them into a single job by using rush's ability to use floating point frame numbers.
>
> For instance, if you have a range of 5 frames specified as:
>
>     frames 1-5
>
> ..which gets you:
>
>     0001
>     0002
>     0003
>     0004
>     0005
>
> ..but let's say you have 3 write nodes (A, B and C). Then you can use rush's floating point frames so that the number to the left of the decimal is used as the frame number, and the number to the right of the decimal is used as the write node.
>
> So for instance:
>
>     frames 1.1 1.2 1.3 2.1 2.2 2.3 3.1 3.2 3.3 4.1 4.2 4.3 5.1 5.2 5.3
>
> ..would get you:
>
>     0001.1 -- renders frame 1, node 'A'
>     0001.2 -- renders frame 1, node 'B'
>     0001.3 -- renders frame 1, node 'C'
>     0002.1 -- renders frame 2, node 'A'
>     0002.2 -- renders frame 2, node 'B'
>     :
>     etc
>     :
>     0004.3 -- renders frame 4, node 'C'
>     0005.1 -- renders frame 5, node 'A'
>     0005.2 -- renders frame 5, node 'B'
>     0005.3 -- renders frame 5, node 'C'
>
> You can have as many nodes as you want, since you control how many digits are in the frame specification. So if you had 100 nodes, you might have:
>
>     frames 1.001 1.002 .. 1.099 1.100 2.001 2.002 .. 2.099 2.100 3.001 ..etc..
>
> ..which would get you:
>
>     0001.001 -- renders frame 1, node #1
>     0001.002 -- renders frame 1, node #2
>     :
>     etc
>     :
>     0001.099 -- renders frame 1, node #99
>     0001.100 -- renders frame 1, node #100
>     0002.001 -- renders frame 2, node #1
>     :
>     etc
>     :
>
> It's pretty easy to have the script parse out the digits to the left and to the right of the decimal at render time from the floating point value in the RUSH_FRAME variable, eg:
>
>     if ( $ENV{RUSH_FRAME} =~ /(\d+)\.(\d+)/ ) {
>         my $frame = $1;
>         my $node  = $2;
>         ..etc..
>     }
>
> This way /one/ job can render all the nodes on separate processors, with clear logs and simple control, without having dozens of separate jobs that have to be managed separately.
>
> -- Greg Ercolano, erco@(email surpressed) Seriss Corporation Rush Render Queue, http://seriss.com/rush/
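[Editor's note: the same split Greg shows in Perl can be done in Python at render time. A sketch; the ordered write-node list is whatever the submit script baked into the job, and the names here are hypothetical.]

```python
import re

def parse_frame(rush_frame, nodes):
    """Split a floating point frame like '0003.002' (from RUSH_FRAME)
    into (frame number, write-node name). The digits to the right of
    the decimal are a 1-based index into the ordered node list."""
    m = re.match(r"(\d+)\.(\d+)$", rush_frame)
    if m is None:
        raise ValueError("not a floating point frame: %r" % rush_frame)
    frame = int(m.group(1))
    node = nodes[int(m.group(2)) - 1]   # .001 -> first node
    return frame, node
```

So a render instance handed RUSH_FRAME=0005.003 with nodes ["A", "B", "C"] would render frame 5 of write node 'C'.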
From: Dan Rosen <dr@(email surpressed)> Subject: Re: Submit Confirmation Date: Fri, 06 Jan 2012 00:18:14 -0500
Msg# 2186
Greg,

As Dan said, we didn't know about that floating point number thing. How do the numbers correlate to Write nodes?

As Dan also mentioned, we list the Write nodes by name in the Rush submit so that users can turn off any Writes on the fly. We initially get that list from the Write nodes they have selected, or all Write nodes if nothing is selected. We also ignore disabled Write nodes, etc, etc.

We normally render all Writes in one job, but in this special case we have lighting TDs who want each Write to be a separate job for a workflow reason. Sounds like you've given us an in-between option to think about.

Thanks,
Dan

On Jan 5, 2012, at 5:49 PM, "Mr. Daniel Browne" <dbrowne@(email surpressed)> wrote:
> That sounds like it would switch the confirmation off across all apps; I simply want to reduce it to one, so it sounds like I have to make a modification myself to .common.pl.
>
> I wasn't aware of the ability to address write nodes that way; however, we've gone a more user-friendly direction by breaking them up into fields in the submit window. [..]
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: Submit Confirmation Date: Fri, 06 Jan 2012 13:18:33 -0500
Msg# 2187
From: "Mr. Daniel Browne" <dbrowne@(email surpressed)> Subject: Re: Submit Confirmation Date: Mon, 09 Jan 2012 20:48:25 -0500
Msg# 2188
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: Submit Confirmation Date: Mon, 09 Jan 2012 21:08:06 -0500
Msg# 2189
From: Greg Ercolano <erco@(email surpressed)> Subject: Re: line of rush.submit too long! Date: Tue, 10 Jul 2012 11:48:22 -0400
Msg# 2254
On 07/10/12 08:07, Kevin Sallee wrote:
> Hey greg thanks for all the useful info!
> I think I will write an ASCII file also I prefer it that way too.

If you prefer the Key: Value format, here's a couple of routines I use for that. I use 16 as padding to keep my key names lined up. You end up with data files that look like:

--- snip
              Frames: 1-50
     LicenseBehavior: Fail
            LogFlags: Overwrite
          MaxLogSize: 0
             MaxTime: 00:00:00
        MaxTimeState: Que
          OutputPath: //meade/net/tmp/images/SpiroBlue.[####].png
    PrintEnvironment: off
        QTGeneration: yes
                Cpus: tahoe erie superior
                Cpus: spirit waccabuc crystal
                Cpus: sepulveda canyon huron
--- snip

Here's a snippet of python code I use for loading/saving a one-dimensional dict in the above format.

--- snip

    import os, sys, re

    def ParseKeyValue(s, fields, stripflag=1):
        '''HANDLE PARSING KEY/VALUE PAIRS FROM A STRING OF TEXT
           Ignores blank or commented out lines.
           Input:
               s         -- string from file or eg. 'rush -ljf' output
               fields    -- dict being modified/updated with parsed info
               stripflag -- [optional] strip()s the string before parsing
                            to remove leading/trailing white space
           Returns:
               1 -- if data was parsed, fields[] contains new data
               0 -- if line was blank or empty
              -1 -- data was present, but not in Key:Value format
        '''
        # Handle input line stripping
        if stripflag:
            s = s.strip()
        else:
            # Remove only leading white and trailing crlfs
            # (Carefully avoid removing trailing white)
            s = s.lstrip()                 # "  Key: val\n" -> "Key: val\n"
            s = re.sub("[\r\n]*$", "", s)  # "Key: val  \n" -> "Key: val  "
        # Empty or comment line?
        if s == "" or s[0] == '#':
            return 0
        # Try to parse the line
        try:
            # parse "Key: Value" pairs
            (key,val) = re.search(r"\s*([^:]*):\s*(.*)", s).groups()
        except:
            return -1
        # Parsed OK, update fields[] -- repeated keys accumulate as multi-line values
        if key in fields:
            fields[key] = fields[key] + "\n" + val
        else:
            fields[key] = val
        return 1

    def LoadFields(filename, fields, stripflag=1):
        '''LOAD KEY/VALUE PAIRS FROM FILE
           filename  -- the file containing the "Key: Value" pairs
           fields[]  -- returns a dictionary of key value pairs as:
                        fields[<Key>] = <Value>
           stripflag -- optional flag indicates if values parsed
                        should have leading/trailing whitespace removed
                        (default on)
           Returns:
               Success: returns with fields[] containing the loaded key/value pairs
               Failure: raises RuntimeError with error message.
        '''
        try:
            fp = open(filename, "r")
        except IOError, e:
            raise RuntimeError("could not open '%s': %s" % (filename, e.strerror))
        for s in fp:
            ParseKeyValue(s, fields, stripflag)
        fp.close()

    def WriteFields(fd, fields):
        '''WRITE KEY/VALUE PAIRS TO AN ALREADY OPEN FILE
           fd        -- the file descriptor of a file already open for write
           fields[]  -- the dictionary of key value pairs to write:
                        fields[<Key>] = <Value>
           Returns:
               Success: returns with fields[] written to fd
               Failure: raises RuntimeError with error message.
        '''
        keys = fields.keys()
        keys.sort()
        for key in keys:
            val = str(fields[key])
            # Multi-line values are written as repeated "Key: line" pairs
            for line in val.split("\n"):
                try:
                    print >> fd, "%16s: %s" % (key, line)
                except IOError, e:
                    raise RuntimeError("write error: %s" % e)

    def SaveFields(filename, fields):
        '''SAVE FIELDS TO FILE
           filename  -- file to write the fields[] to as "Key: Value" pairs
           fields[]  -- the array (dictionary) of fields[] to be saved.
                        (Fields will be saved sorted by their key name)
           Returns:
               Success: returns with fields[] saved to the file
               Failure: raises RuntimeError with error message.
        '''
        try:
            fp = open(filename, "w")
        except IOError, e:
            raise RuntimeError("%s: can't open for write: %s" % (filename, e.strerror))
        WriteFields(fp, fields)
        fp.close()

--- snip

Here's an example showing how to save a file using the above:

    fields = {}
    fields["Aaa"] = "a a a"              # single line of data
    fields["Bbb"] = "b b b\nbb bb bb"    # multi-line data
    SaveFields("/tmp/foo.dat", fields)

..and here's how to load a file using the above routines:

    fields = {}
    LoadFields("/tmp/foo.dat", fields)

..and to print the loaded data:

    for i in fields:
        print "FIELD[" + i + "]: '" + fields[i] + "'"

-- Greg Ercolano, erco@(email surpressed) Seriss Corporation Rush Render Queue, http://seriss.com/rush/
From: "Abraham Schneider" <aschneider@(email surpressed)> Subject: Re: line of rush.submit too long! Date: Tue, 10 Jul 2012 12:24:01 -0400
Msg# 2255
Wouldn't YAML (http://www.yaml.org/) be the perfect solution for this kind
of ASCII file, instead of having your own proprietary format and parser?

Abraham

On 10.07.2012 at 17:48, Greg Ercolano wrote:
> [posted to rush.general]
>
> On 07/10/12 08:07, Kevin Sallee wrote:
>> Hey Greg, thanks for all the useful info!
>> I think I will write an ASCII file also; I prefer it that way too.
>
>     If you prefer the Key: Value format, here's a couple of routines
>     I use for that. I use 16 as padding to keep my key names lined up.
>
>     You end up with data files that look like:
>
> --- snip
> Frames: 1-50
> LicenseBehavior: Fail
> LogFlags: Overwrite
> MaxLogSize: 0
> MaxTime: 00:00:00
> MaxTimeState: Que
> OutputPath: //meade/net/tmp/images/SpiroBlue.[####].png
> PrintEnvironment: off
> QTGeneration: yes
> Cpus: tahoe erie superior
> Cpus: spirit waccabuc crystal
> Cpus: sepulveda canyon huron
> --- snip
>
>     Here's a snippet of python code I use for loading/saving a
>     one-dimensional dict in the above format.
>
> --- snip
>
> import os,sys,re
>
> def ParseKeyValue(s, fields, stripflag=1):
>     '''HANDLE PARSING KEY/VALUE PAIRS FROM A STRING OF TEXT
>     Ignores blank or commented out lines.
>     Input:
>         s         -- string from file or eg. 'rush -ljf' output
>         fields    -- dict being modified/updated with parsed info
>         stripflag -- [optional] strip()s the string before parsing
>                      to remove trailing white space
>     Returns:
>          1 -- if data was parsed, fields[] contains new data
>          0 -- if line was blank or empty
>         -1 -- data was present, but not in Key:Value format
>     '''
>     # Handle input line stripping
>     if stripflag:
>         s = s.strip()
>     else:
>         # Remove only leading white and trailing crlfs
>         # (Carefully avoid removing trailing white)
>         #
>         s = s.lstrip()                  # "   Key: val\n" -> "Key: val\n"
>         s = re.sub("[\r\n]*$","",s)     # "Key: val   \n" -> "Key: val   "
>     # Empty or comment line?
>     if ( s == "" or s[0] == '#' ):
>         return 0
>     # Try to parse the line
>     try:
>         # parse "Key: Value" pairs
>         (key,val) = re.search("[\s]*([^:]*):[\s]*(.*)",s).groups()
>     except:
>         return -1
>
>     # Parsed OK, update fields[]
>     if ( fields.has_key(key) ):
>         fields[key] = fields[key] + "\n" + val
>     else:
>         fields[key] = val
>     return 1
>
> def LoadFields(filename, fields, stripflag=1):
>     '''LOAD KEY/VALUE PAIRS FROM FILE
>     filename  -- the file containing the "Key: Value" pairs
>     fields[]  -- returns a dictionary of key value pairs as:
>                  fields[<Key>] = <Value>
>     stripflag -- optional flag indicates if values parsed
>                  should have leading/trailing whitespace removed
>                  (default on)
>     Returns:
>         Success: returns with fields[] containing the loaded key/value pairs
>         Failure: raises RuntimeError with error message.
>     '''
>     try:
>         fp = open(filename,"r")
>     except IOError,e:
>         raise RuntimeError("could not open '%s': %s" % (filename,e.strerror))
>     for s in fp:
>         ParseKeyValue(s, fields, stripflag)
>     fp.close()
>     return None
>
> def WriteFields(fd, fields):
>     '''WRITE KEY/VALUE PAIRS TO AN ALREADY OPEN FILE
>     fd        -- the file object of a file already open for write
>     fields[]  -- the dictionary of key value pairs to write:
>                  fields[<Key>] = <Value>
>     Returns:
>         Success: returns with fields[] written to fd
>         Failure: raises RuntimeError with error message.
>     '''
>     keys = fields.keys()
>     keys.sort()
>     for key in keys:
>         val = str(fields[key])
>         if ( val.find("\n") != -1 ):    # multi-line value? repeat key, one line each
>             for line in val.split("\n"):
>                 try:
>                     print >> fd, "%16s: %s" % (key,line)
>                 except IOError,e:
>                     raise RuntimeError("write error: %s" % e)
>         else:
>             try:
>                 print >> fd, "%16s: %s" % (key,val)
>             except IOError,e:
>                 raise RuntimeError("write error: %s" % e)
>
> def SaveFields(filename, fields):
>     '''SAVE FIELDS TO FILE
>     filename  -- file to write the fields[] to as "Key: Value" pairs
>     fields[]  -- the array (dictionary) of fields[] to be saved.
>                  (Fields will be saved sorted by their key name)
>     Returns:
>         Success: returns with fields[] written to filename
>         Failure: raises RuntimeError with error message.
>     '''
>     try:
>         fp = open(filename,"w")
>     except IOError,e:
>         raise RuntimeError("%s: can't open for write: %s" % (filename,e.strerror))
>     WriteFields(fp, fields)
>     fp.close()
>
> --- snip
>
>     Here's an example showing how to save a file using the above:
>
>     fields = {}
>     fields["Aaa"] = "a a a"             # single line of data
>     fields["Bbb"] = "b b b\nbb bb bb"   # multi-line data
>     SaveFields("/tmp/foo.dat", fields)
>
>     ..and here's how to load a file using the above routines:
>
>     fields = {}
>     LoadFields("/tmp/foo.dat",fields)
>
>     ..and to print the loaded data:
>
>     for i in fields:
>         print "FIELD[" + i + "]: '" + fields[i] + "'"
>
> --
> Greg Ercolano, erco@(email surpressed)
> Seriss Corporation
> Rush Render Queue, http://seriss.com/rush/
> Tel: +1 626-576-0010 ext.23
> Fax: +1 626-576-0020
> Cel: +1 310-266-8906

Abraham Schneider
Senior VFX Compositor

ARRI Film & TV Services GmbH
Tuerkenstr. 89
D-80799 Muenchen / Germany
Phone (Tel# suppressed)
EMail aschneider@(email surpressed)
www.arri.de/filmtv
From: Greg Ercolano <erco@(email surpressed)>
Subject: Re: line of rush.submit too long!
Date: Tue, 10 Jul 2012 13:28:39 -0400
Msg# 2256 (23 articles in thread)
On 07/10/12 09:24, Abraham Schneider wrote:
> [posted to rush.general]
>
> Wouldn't YAML (http://www.yaml.org/) be the perfect solution for this
> kind of ASCII files,

    Yes, a good solution that does seem to use a similar data format
    (padded key/value pairs), and perhaps better for Kevin's case if he
    needs the features it offers.

    Why offer my own code? It's only 100 lines, so it can be easily
    pasted into a script, or added to an existing lib. I like small code ;)

    yaml does what it does in 5800 lines of python code and is a full
    library install. Not as easy, and a bit more code to load up in order
    to use it. Yaml serves a much larger, more generalized purpose, and
    includes features I generally don't need for just saving simple
    one-dimensional data. But if I needed a full data saving/loading
    solution, yaml does look like a great way to go; their data format is
    similar to mine.. might even be 100% compatible, not sure.

> ..instead of having your own proprietary format and parser?

    Is the code I posted any more "proprietary" than yaml? ;)
    Whether it comes from seriss.com or yaml.org, if it does the job,
    it shouldn't matter.

    The reason I have my own code for this is that it's actually part of
    a larger rush python library that will be included in the next
    release, the code being 100% compatible with the data the older rush
    perl scripts generate.

    In general I don't like Rush to depend on external libraries, as it
    makes installs harder for the customer, and sometimes global needs
    force such libs to track modern technology, which causes new code not
    to run on slightly older machines. I've always tried to make the rush
    scripts operate correctly on even the oldest script interpreters that
    come with the OSes, just to avoid the support issues that come with
    such problems. I hate forcing people to upgrade entire machines just
    because the latest release of a lib won't run on yesterday's
    equipment..
--
Greg Ercolano, erco@(email surpressed)
Seriss Corporation
Rush Render Queue, http://seriss.com/rush/
Tel: (Tel# suppressed) ext.23
Fax: (Tel# suppressed)
Cel: (Tel# suppressed)
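[Greg says the two data formats "might even be 100% compatible, not sure." As a rough sketch only, not validated against any particular YAML loader, the snip data from the earlier post would look close to this in YAML. Two wrinkles: plain YAML mappings don't allow the repeated `Cpus:` keys, so those would need to become a list, and bare `off`/`yes` load as booleans under YAML 1.1 rules (e.g. PyYAML) unless quoted.]

```
Frames: 1-50
LicenseBehavior: Fail
LogFlags: Overwrite
MaxLogSize: 0
MaxTime: "00:00:00"
MaxTimeState: Que
OutputPath: //meade/net/tmp/images/SpiroBlue.[####].png
PrintEnvironment: "off"
QTGeneration: "yes"
Cpus:
  - tahoe erie superior
  - spirit waccabuc crystal
  - sepulveda canyon huron
```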