All jobs in Qube! have a consistent structure. It is through this structure that the Supervisor communicates to the Worker the appropriate instructions for executing a job.
The job's type is the mechanism by which Qube! indicates to the Worker what it should do with the job. When the Worker receives the job object from the Supervisor, it looks through its library of types to find one that matches. If it finds a match, it will examine the type's configuration file for guidance on which module to load in order to execute the job.
Since the type is vital to the Worker correctly identifying an execution module, it is a mandatory job component. The qbsub command will automatically set the type for you. If you are developing your own job types, it becomes more important to explicitly set the job's type before submitting it.
For example, a job with type "command line" will have a discrete string for each agenda item that will be run at a command prompt (e.g. "cinema4D -frame 1 5 1 -render /path/to/file.c4d"). A job with type "maya" will instead store, for each agenda item, a discrete set of instructions for Maya that will be run at a MEL prompt. In the former case, the application is launched when the agenda item starts and exits when the agenda item is complete. In the latter case, the application is launched when the instance starts, but does not exit until all agenda items in the job have been completed (whether by this instance, other instances, or both).
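The type lookup described above can be sketched in a few lines. This is purely illustrative and is not Qube!'s actual implementation; the registry contents and module names below are hypothetical stand-ins for the Worker's library of type configuration files.

```python
# Illustrative sketch only: the Worker keeps a library of known job types
# and uses the job's type to decide which execution module to load.
# Type names and module names here are hypothetical examples.
TYPE_REGISTRY = {
    "cmdline": "cmdline_backend",
    "maya": "maya_backend",
}

def find_execution_module(job):
    """Return the execution module registered for this job's type, if any."""
    return TYPE_REGISTRY.get(job.get("type"))
```

A job whose type has no match in the library simply cannot be executed by that Worker, which is why the type is mandatory.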
Job ID

Once a job has been submitted and accepted by the Supervisor, it is assigned an ID. The ID is unique to the job, and serves two purposes: it helps the Supervisor keep track of each job individually, and, absent any other priority information, the Supervisor can dispatch jobs in chronological ("first-come, first-served") order based upon the ID.
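Because IDs are assigned in submission order, the default dispatch order can be modeled as a simple sort. This is a sketch of the idea, not the Supervisor's actual scheduler:

```python
# Sketch of first-come, first-served ordering: with no other priority
# information, pending jobs are dispatched in ascending ID order, since
# IDs are assigned chronologically at submission time.
def dispatch_order(pending_jobs):
    """Sort pending jobs chronologically by their Supervisor-assigned ID."""
    return sorted(pending_jobs, key=lambda job: job["id"])
```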
Job parameters act as instructions to the Qube! Supervisor to help it determine how best to dispatch the job to the Workers. Each job can carry any of several job parameters.
The command line tool qbsub will automatically create default parameters for your job submissions, in case you don't want or need to specify them. In the case of the CPUs parameter, qbsub will submit a job with a default of 1. Job parameters are simply the values used in the submission dialog (or supplied through the API or command line).
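The defaulting behavior can be sketched as a merge of caller-supplied parameters over defaults. The field names here ("type", "cpus") are illustrative, not Qube!'s exact schema; only the CPUs default of 1 comes from the text above.

```python
# Hedged sketch of parameter defaulting, mirroring how qbsub fills in
# parameters you omit. Only the CPUs default of 1 is taken from the
# documentation; the key names are hypothetical.
DEFAULT_PARAMETERS = {"cpus": 1}

def build_job(type_name, **parameters):
    """Merge caller-supplied parameters over the defaults."""
    job = {"type": type_name}
    job.update(DEFAULT_PARAMETERS)
    job.update(parameters)
    return job
```

A submission that specifies nothing gets the defaults; anything you pass explicitly wins.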
If the job object could be thought of as a letter, with the type as the recipient and the parameters as the address, the package may be thought of as the content. Since each job type module is executed in exactly the same way by the Worker, there must be some way to communicate to the Worker how to execute the module in a way that is specific to your needs.
The package is how that is done. In general, the package is a generic data structure, included in the job object that can be accessed by the Worker's backend job module and used by that module to set up the job.
The size and contents of the job package will vary from job type to job type. In the case of jobs submitted via qbsub, the package will likely contain a simple command in the package variable "cmdline." However, in the case of a sophisticated rendering application, the package may contain the name of the scene and dozens of global values specific to the job.
The job package contains parameters that are specific to the application being run. For example, with a cmdline job, the package will contain the command to be executed and possibly the frame range. For a maya job, the package will contain the frame range, along with things like the path to the Maya scene file, the output path (if specified), the thread count, etc. Every job will have a package, but every package may be different, both in the number and the type of its variables.
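Two sample packages make the contrast concrete. The "cmdline" variable is named in the text above; the maya keys (scenefile, range, threads) are hypothetical illustrations of the kind of values such a package might carry:

```python
# Two illustrative packages: the keys differ by job type.
# "cmdline" is the package variable named in the documentation;
# the maya keys below are hypothetical examples.
cmdline_package = {
    "cmdline": "cinema4D -frame 1 5 1 -render /path/to/file.c4d",
}
maya_package = {
    "scenefile": "/path/to/scene.ma",  # hypothetical key
    "range": "1-100",                  # hypothetical key
    "threads": 4,                      # hypothetical key
}
```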
Job Agenda

The job agenda contains a number of discrete items – often frames – that are transmitted by the Supervisor to the Worker on an "as needed" basis. Each agenda item usually contains a name, which is often sufficient for most purposes, and can also contain a package, much like the job itself.
Separating the list of work to be done from the job itself at execution time creates a great deal of efficiency: Workers can ask for work as quickly as they can complete it; if a Worker fails to complete a work item, that item can easily be reassigned to another Worker; and adding more CPUs to a job simply increases the number of work items that can execute simultaneously. An agenda item could be considered synonymous with a task.
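The "as needed" dispatch and failure-reassignment behavior can be sketched with a simple queue. This is a conceptual model, not the Supervisor's real protocol; the class and method names are invented for illustration:

```python
from collections import deque

# Sketch of "as needed" agenda dispatch: Workers pull items as they
# finish, and a failed item goes back in the queue so another Worker
# can pick it up. Class and method names are hypothetical.
class AgendaQueue:
    def __init__(self, items):
        self.pending = deque(items)

    def request_work(self):
        """A Worker asks for the next item; None means the agenda is done."""
        return self.pending.popleft() if self.pending else None

    def report_failure(self, item):
        """Return a failed item to the queue for reassignment."""
        self.pending.append(item)
```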
Callbacks are one of the most powerful features of Qube!, enabling it to perform well beyond other similar render farm management systems. Any event that takes place in the system can trigger the Supervisor to perform a series of actions; some are internal, like activating a dormant job, while others are external, like sending out an email message or executing a Python script.

Unfortunately, a full discussion of the nuts and bolts of callbacks is beyond the scope of this document. While callbacks are mostly the province of the Qube! developer, as we will see, you can still take advantage of Qube!'s callback features without actually creating a callback yourself.

Instances/subjobs
In reality, a job is a large data structure that contains all the information the Supervisor needs to monitor, control, and dispatch a task to a remote host. The actual execution of that task is performed by the Worker through what we call an instance/subjob.
The instance/subjob contains all the relevant information relating to the actual execution of the job. Virtually all information regarding job execution is in reality instance/subjob information.
Since the instance/subjob is a part of the job, the job acts as the clearinghouse for information about itself, including its instances/subjobs. While that aspect of the job/instance/subjob relationship can sometimes seem confusing, it becomes clearer with practice. The best way to understand how instances/subjobs relate to jobs is to work with them. The Worker executes the instance/subjob by loading the type's module and running it with the appropriate package information.
All jobs must execute with at least one instance/subjob. Adding additional instances/subjobs will cause each of them to load the same module with the same package information.
Now, this is often a desired outcome: you may want to run the same command on each host in your compute farm. However, if you don't want the same work to execute multiple times, you should consider using the agenda to give each instance/subjob something different to perform. Note that what is now called an instance was previously called a subjob. In other words, a Worker runs one or more job instances in parallel, and each instance runs one or more agenda items (typically frames) in series.
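The execution model in the last sentence – instances in parallel, agenda items in series – can be sketched as follows. Round-robin assignment here stands in for real parallelism, and the function is an invented illustration, not Qube! code:

```python
from collections import deque

# Sketch of the execution model: each instance works through agenda
# items in series, while multiple instances run in parallel. The
# "work" here is just recording which instance handled which frame.
def run_job(instance_count, agenda):
    queue = deque(agenda)
    log = {i: [] for i in range(instance_count)}
    # Round-robin stands in for parallelism: each instance takes the
    # next available item as soon as it finishes its previous one.
    i = 0
    while queue:
        log[i % instance_count].append(queue.popleft())
        i += 1
    return log
```

With one instance, the agenda executes entirely in series; adding instances spreads the same agenda across them.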