Network FAQ
- Mihai Iliuta
- kyrstie
- dario
General
1. Can I use my Floating seat license(s) to render in network?
Yes. Floating seat licenses have complete functionality, including the capability to run as a Rendernode. Please see the license types page for further details.
2. If a problem occurs during a network render, can I recover the rendered MXI files?
During a network job, each Node saves its MXI render in its temporary folder. The saving frequency is the same as in a regular Maxwell render (at each integer SL, every 12 minutes, or at the minimum save-to-disk interval defined by the user in the Maxwell Preferences panel). You can access a Node's temporary folder through the Node console panel (File>Open Temp Folder).
If the render finishes correctly, each Node sends its MXI render to the computer running the Manager, which stores them in the Manager temporary folder. If the transfer to the Manager is successful, the Node deletes the MXI from its temp folder (to avoid redundant files and save disk space) and begins the next job.
The Manager then starts the merging process (in the case of Cooperative renders), merging all the MXI files received from the Nodes. If the merging process fails, all the MXI renders from the Nodes remain in the Manager temporary folder. You can access the Manager temporary folder through the Manager console panel (File>Open Temp Folder).
If the Manager merges the MXI files successfully and saves the result to the desired output folder, it then removes the individual MXI files, again to save space.
This process saves disk space, and at each stage you can still access the individual files by going to the corresponding temporary folder.
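As a minimal recovery sketch, the surviving MXI files can simply be copied out of the temp folder before restarting the job. The folder path below is an assumption for illustration; the real location varies by OS and install, so open it from the console panel (File>Open Temp Folder) to find yours:

```shell
# Recovery sketch: NODE_TEMP is an assumed path for illustration only.
# Use File>Open Temp Folder in the Node console to locate the real folder.
NODE_TEMP="$HOME/maxwell_node_temp"

# Copy any surviving MXI renders somewhere safe before restarting the job.
mkdir -p "$HOME/recovered_mxi"
for f in "$NODE_TEMP"/*.mxi; do
    [ -e "$f" ] || continue   # the glob may not match anything
    cp "$f" "$HOME/recovered_mxi/"
done
```

The same approach works for the Manager temp folder when a merge fails.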
3. Can I have more than one Manager running in the same network?
Although the standard configuration is a single Manager per network (to prevent communication conflicts), it is possible to have more than one Manager in the same network. To avoid conflicts, assign each Monitor and Rendernode to the specific Manager it should connect to, by launching that Monitor and Rendernode from the command line and indicating the IP of the proper Manager. For instance:
mxnetwork.exe -monitor:192.168.0.12
mxnetwork.exe -node:192.168.0.12
A deeper explanation can be found here: Network Rendering
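Putting the above together, a two-Manager setup could be sketched as follows. The IP addresses are examples only, and 192.168.0.20 for the second Manager is an assumption:

```shell
# Machines assigned to the first Manager (example IP 192.168.0.12):
mxnetwork.exe -monitor:192.168.0.12
mxnetwork.exe -node:192.168.0.12

# Machines assigned to the second Manager (assumed example IP 192.168.0.20):
mxnetwork.exe -monitor:192.168.0.20
mxnetwork.exe -node:192.168.0.20
```

Since each Monitor and Rendernode is told explicitly which Manager to contact, the two groups do not interfere with each other.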
4. Can I have the Manager running in a different subnet than the Monitor and Rendernodes?
Yes, but you will have to launch the Monitor and Rendernodes via the command line, indicating the IP of the Manager. For instance:
mxnetwork.exe -monitor:192.168.0.12
mxnetwork.exe -node:192.168.0.12
More details can be found here: Network Rendering
Linux
1. Linux: How can I avoid installing an X Window on each Rendernode?
Because Maxwell Render depends on an X server, it seems like overkill to install a full X Window System on each render node. The solution is Xvfb (X virtual framebuffer), an X server that can run on machines with no display hardware and no physical input devices. It emulates a dumb framebuffer using virtual memory. The first step is to install it:
On Debian-like distros
# apt-get install xvfb
Or on Fedora-like distros
# yum install xvfb
The second step is to run the server for testing:
# Xvfb -shmem -screen 0 1280x1024x24
This command starts a virtual X server on virtual display :0 with a virtual resolution of 1280×1024 and 24-bit color. To test it, you can run the following command:
# DISPLAY=:0 xdpyinfo
If everything works fine, you'll get plenty of status information about your server. In the previous command, DISPLAY=:0 specifies which X display to query. You can export this variable once and then run any commands that require a running X server.
To make this virtual server run all the time and restart automatically in case of problems, use the Linux /etc/inittab file. Add the following line to it:
xvfb:2:respawn:/usr/bin/Xvfb :0 -ac -screen 0 2048x1536x24
and reload it with the init q command.
It is highly recommended to run the Manager and Monitor on the same machine; this avoids many transfer problems.