Scaling up Blender rendering

Rendering a single frame headlessly:

Prerequisites


This article provides the background and foundation for this technique


The example that follows produces a rendered frame from the default Blender scene using `Xvfb` inside a preconfigured Singularity container. In each section there will be a screenshot on the right and a code block for each command entered, to make copying and reproducing the steps easier.

The first step here is to start an interactive job and get our copy of the Singularity image from the GitHub Container Registry. The command used is:

interactive -a <your account name> -n 4

Specify your own account here rather than the visteam account. Then navigate to a folder where you would like to do your work.

You will then need a Singularity container configured for Xvfb:

singularity pull -F docker://

This retrieves the Docker image from the GitHub Container Registry and converts it into a Singularity image. The `-F` flag forces the command to overwrite an existing image if you have already pulled one in the past, which helps make sure you work with the newest image if changes were made.

Now we go inside the container and start the Xvfb program to create virtual display number 99 with one screen (screen 0) at 1024x720 resolution and 16-bit color depth.

singularity shell xvfb_test.sif
Xvfb :99 -screen 0 1024x720x16 &> xvfb.log &

This keeps the display running as a background task and sends its messages to the log file.
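
As a scriptable alternative to inspecting process lists, one can test for the X server's lock file directly. This is a sketch assuming the `:99` display chosen above; an X server on display `:N` creates `/tmp/.XN-lock` while it is running.

```shell
# Sketch: check for the lock file an X server holds while display :99 is up.
if [ -e /tmp/.X99-lock ]; then
    display_status="up"
else
    display_status="down"
fi
echo "display :99 is ${display_status} (see xvfb.log for details)"
```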

To check whether this is working, we can grep for the X process.

ps aux | grep X

Of course, this also reveals any other users who are trying to create displays, so I've blurred that out. Part of the reason for picking `:99` is to avoid using the same display number as other folks on the node.
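
If `:99` is already taken, a short loop can pick a free display number automatically rather than guessing. This is a sketch relying on the same lock-file convention mentioned above; the starting number 99 is arbitrary.

```shell
# Sketch: find the first free X display number at or above 99 by checking
# lock files (an X server on :N holds /tmp/.XN-lock while running).
display_num=99
while [ -e "/tmp/.X${display_num}-lock" ]; do
    display_num=$((display_num + 1))
done
echo "using display :${display_num}"
# Then start the virtual display there:
# Xvfb ":${display_num}" -screen 0 1024x720x16 &> xvfb.log &
```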

We are now ready to try to run a headless blender render.

The first screenshot shows the result of jumping the gun and running the command without specifying which display to use. A more complete command that achieves the result is:

DISPLAY=:99.0 blender-3.3.0-linux-x64/blender -b test-headless.blend -f 1

This will make the blender command use the correct display for our rendering of a single frame from the Blender scene. There's an exhausting, ahem, I mean exhaustive, treatment of the command-line rendering flags in the Blender manual if you are curious.
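
For quick reference, here is a hedged sketch of the flags you will most often combine (the file names are placeholders, and note that Blender processes its arguments in order, so output options like `-o` must come before the `-f`/`-a` flag that triggers the render):

```shell
# Commonly used Blender CLI rendering flags (file names are placeholders):
#   -b      run without a UI (background mode)
#   -o      output template ('#' runs become the zero-padded frame number)
#   -F      output file format
#   -x 1    append the file extension to the output name
#   -s / -e start and end frame of the animation range
#   -a      render the whole animation (use -f N for a single frame instead)
render_cmd="blender -b scene.blend -o ./frames/frame_##### -F PNG -x 1 -s 1 -e 240 -a"
echo "${render_cmd}"
```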

The output of the command will show the scene being loaded and samples being taken and rendered to the image. The final line shows where the resulting PNG frame ends up.
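
If you want to script follow-on steps, the saved path can be pulled out of that final line. This is a hypothetical sketch: it assumes the line has the form `Saved: '/path/to/frame.png'`, which can vary between Blender versions, and the sample line below is hard-coded for illustration.

```shell
# Sketch: extract the saved frame path from Blender's final output line.
# The sample line is hard-coded here; in practice you would pipe the
# render command's output through the same sed expression.
log_line="Saved: '/tmp/0001.png'"
frame_path=$(printf '%s\n' "${log_line}" | sed -n "s/^Saved: '\(.*\)'\$/\1/p")
echo "${frame_path}"
```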

If you would like to watch the steps in video form, feel free.


Scaling Up

The process of using many CPU cores with Blender is as follows.

The image for this step is pretty small, but you may click on it to view it at a larger size. Alternatively, you can trust that the code blocks below are all you need.

The contents of the file on the left simply capture what was done in the interactive session in the previous step, so create a file with these contents:

Xvfb :99 -screen 0 1024x720x16 &> xvfb.log &
DISPLAY=:99.0 blender-3.3.0-linux-x64/blender -b test-headless.blend -o ./test_$1_ -f 1

We don't need to use the singularity shell command here because entering the container will be handled by our Slurm batch submission script. Also, the `-o ./test_$1_` option lets us name the output images with a templated pattern using a number passed to the script.
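
As a quick sanity check of that naming pattern, the loop below prints the files the first few array tasks should produce. It assumes Blender's default behaviour of appending the frame number, zero-padded to four digits, when the output template contains no `#` placeholder.

```shell
# Sketch: expected outputs for array tasks 0-3 rendering frame 1 with
# -o ./test_$1_ (Blender appends the 4-digit frame number to the template).
expected=""
for task_id in 0 1 2 3; do
    expected="${expected}test_${task_id}_0001.png "
done
echo "${expected}"
```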

The file on the right is our Slurm batch submission script; read more about these in Jobs and Scheduling. The content of this one is:

#!/bin/bash
#SBATCH --output=Sample_SLURM_Job-%a.out
#SBATCH --ntasks=4
#SBATCH --nodes=1
#SBATCH --time=00:15:00
#SBATCH --partition=standard
#SBATCH --account=<your account name>
#SBATCH --array=0-63
# SLURM inherits your environment; cd $SLURM_SUBMIT_DIR is not needed
singularity exec xvfb_test.sif bash <your script file> ${SLURM_ARRAY_TASK_ID}

The primary change you will need to make is to write your own account in place of `--account=<your account name>`. Besides that, you will notice that the variable `${SLURM_ARRAY_TASK_ID}` is passed into the shell script so that each separate task will create its own frame PNG output. To run this batch file we use:

sbatch <your batch script file>

Once the command is provided, we see that all of our 64 tasks, with 4 cores each, get kicked off.
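
As a sanity check on that core count, the arithmetic follows directly from the `--ntasks` and array settings above:

```shell
# 64 array tasks (indices 0-63) at 4 cores each.
array_tasks=64
cores_per_task=4
total_cores=$((array_tasks * cores_per_task))
echo "${total_cores} cores in flight"   # prints "256 cores in flight"
```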

To view the results you may return to the Open OnDemand panel and select one of the many finished PNGs. I'm hopeful that you will find better ways to apply this to visualizing data than leveraging 256 cores to render a gray cube on a gray background.

If you would like to watch the steps unfold instead of looking at the still frames, feel free to use this video.