FAQs

A list of frequently asked questions to help you understand how EF EVE volumetric video capture works.

Volumetric Video

What is volumetric video?

Volumetric video is an emerging technology that extends regular digital video. Its main technical difference is the use of depth data and multiple cameras, which makes it possible to record in three dimensions, just as things exist in the physical reality we live in.
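As a rough illustration of the core idea, here is a minimal Python sketch that combines a depth image with a camera's lens parameters to back-project every pixel into a 3D point. The intrinsic values are made up for illustration; real ones come from each camera's calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics for one depth camera (fx, fy: focal
# lengths in pixels; cx, cy: principal point). Real values come from
# the camera's factory calibration, not from this sketch.
fx, fy, cx, cy = 504.0, 504.0, 320.0, 288.0

def depth_to_points(depth_mm: np.ndarray) -> np.ndarray:
    """Back-project a depth image (in millimeters) into an Nx3 point cloud."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0          # mm -> meters
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                   # drop invalid pixels

# Each calibrated camera yields such a cloud; merging the clouds with the
# calibration transforms produces the full three-dimensional frame.
```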

How long can my recordings be?

Technically they can be as long as you like, as long as there's enough space on your computer. However, because of the processing time required after recording, we recommend keeping recordings no longer than 5 minutes. If your project allows you to record in smaller chunks of data, your assets will be easier to manage.

What's the file size of the volumetric videos?

With our recommended workflow for .OBJ with texture, a single frame is 1-2MB. If you are wondering about the file size of the original RAW volumetric video recording, it usually ranges between 1 and 10GB, but can grow dramatically with longer recording times.
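As a back-of-envelope check, here is what a full export adds up to at 1-2MB per frame. The 30 fps frame rate is an illustrative assumption, not a figure from the app:

```python
# Back-of-envelope storage estimate for an exported .OBJ-with-texture
# sequence. The per-frame size (1-2 MB) comes from the recommended
# workflow above; the 30 fps frame rate is an assumption.
frame_mb = (1, 2)          # MB per exported frame
fps = 30                   # assumed capture frame rate
seconds = 5 * 60           # the recommended 5-minute ceiling

low, high = (mb * fps * seconds / 1024 for mb in frame_mb)
print(f"5-minute export: ~{low:.0f}-{high:.0f} GB")   # ~9-18 GB
```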

Calibration

Which calibration marker should I use?

We have 3 markers that can be found directly in the Stereo Calibration feature of the EF EVE Windows app, or downloaded from http://ef-eve.com/downloads/ . The markers are ready to print on Letter-size (US) or A4-size (EU) sheets of paper, or can be customised to suit any size.

How fast is the calibration process?

Calibration has two major steps: recording marker positions and calibrating. Recording usually takes ~10-20 seconds per camera pair; the calibration itself is automatic and usually takes ~20 seconds per camera. If you get a good calibration recording on the first try, you should be able to calibrate your volumetric capture setup within 2-5 minutes. It can take longer if you have more cameras (7-12 Azure Kinects).

How do I calibrate my scene?

There are a couple of different tools for calibrating your scene, all found in the Calibration tab. The one we suggest for everyone is Stereo Calibration: print out a marker, record a volumetric video while showing that marker to different pairs of cameras, and then open the recorded file with the Stereo Calibration function in the Calibration tab.
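For the curious, here is a generic Python/OpenCV sketch of the principle behind stereo calibration: shared views of a printed target let you solve for the rotation and translation between a camera pair. EF EVE's own markers and solver are different; the chessboard pattern, square size, and intrinsics here are placeholder assumptions.

```python
import cv2
import numpy as np

# Generic sketch of pairwise (stereo) calibration with OpenCV and a
# chessboard target. This only illustrates the principle: views of a
# shared marker let us solve for the rotation R and translation T
# between two cameras.
PATTERN = (9, 6)     # inner chessboard corners (assumed board layout)
SQUARE_M = 0.025     # printed square size in meters (assumed)

def calibrate_pair(frame_pairs, K_a, dist_a, K_b, dist_b, image_size):
    """frame_pairs: synchronized image pairs from cameras A and B.
    K_* / dist_*: each camera's known intrinsics and distortion."""
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_M

    obj_pts, pts_a, pts_b = [], [], []
    for img_a, img_b in frame_pairs:
        ok_a, c_a = cv2.findChessboardCorners(img_a, PATTERN)
        ok_b, c_b = cv2.findChessboardCorners(img_b, PATTERN)
        if ok_a and ok_b:              # marker must be seen by both cameras
            obj_pts.append(objp)
            pts_a.append(c_a)
            pts_b.append(c_b)

    # With intrinsics fixed, stereoCalibrate solves only for the pose of
    # camera B relative to camera A in a shared 3D space.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts_a, pts_b, K_a, dist_a, K_b, dist_b, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T
```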

What do I need for the calibration process?

To start calibration you'll need to print a calibration marker and mount it on a flat board. The calibration marker can be printed with a home printer. Also make sure you have good artificial lighting.

Why can't I get a good result from stereo calibration?

There are many possibilities:
* You might not be holding the calibration board still enough.
* You might not be showing enough different positions and angles.
* You might have entered incorrect calibration board parameters.
* You might have been missing frames during recording.
* Your recording might be desynchronized.
* Your cameras might be too far away from each other.
* Your cameras might be too far away from the center.
You can find a more detailed description of what to do in each case at ef-eve.com/help-center/calibration/ , or you can book an appointment with one of our team members at https://ef-eve.com/get-appointment/ .

How can I fix the calibration?

Record another calibration video and use it either to create a new calibration from scratch or to refine the existing calibration. Alternatively, you can use manual calibration to fix the calibration by hand.

Capture Stage

How big can my volumetric capture stage be?

We recommend a square stage of 1×1 meters, approximately 2m in height. This is the standard stage size for obtaining the best-quality depth and color data. However, you can have a stage of up to 4×4 meters if the resulting quality suits your particular project. If you are not sure, we recommend discussing this during a discovery call: get an appointment at https://ef-eve.com/get-appointment/

How long does it take to setup a volumetric capture stage?

Thanks to our fully automated calibration process, it takes just a few minutes to set up all your cameras in a single 3D space.

How to make the stage bigger?

The more cameras you use, the bigger your stage can technically be. However, at a 2-4 meter range the quality will suffer, and going further than that is not recommended.

Processing

What does the full process of recording and exporting look like?

1. Set up your cameras.
2. Do a calibration recording.
3. Ensure you have a good calibration.
4. Record your actual stage performance.
5. Decide on the output quality and data type you wish to obtain.
6. Set the computer to export and wait.
See the 6-camera workflow here: https://ef-eve.com/help-center/exporting-tutorial/

Can I remove a green screen or a blue screen?

Yes, there’s a feature called Chroma Key that does exactly that: you can record your stage with a green or blue screen and then remove it later with our built-in tools. Alternatively, if our Chroma Key feature is not good enough for your use case, you can export the recorded images and post-process them in other software to remove the green screen.
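If you do post-process externally, the core of a chroma key is simple. Below is a minimal NumPy sketch (not EF EVE's implementation) that masks pixels whose green channel clearly dominates; the margin threshold is an illustrative assumption.

```python
import numpy as np

# Minimal chroma-key sketch in pure NumPy: mark pixels whose green channel
# dominates red and blue as background. EF EVE's built-in Chroma Key is
# more sophisticated; the margin value here is an illustrative assumption.
def green_screen_mask(rgb: np.ndarray, margin: int = 40) -> np.ndarray:
    """Return a boolean mask that is True for foreground pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    background = (g - r > margin) & (g - b > margin)
    return ~background

# Keep only foreground pixels (and the depth samples behind them) when
# building the volumetric frame.
```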

Which features should I use?

Very often you’ll want to clean up your scene by removing the background, for which the cleaning box is very helpful. Mask filtering also helps remove inaccurate data. Beyond that, it depends on the final style you want for your volumetric video. You can find more detailed workflows by visiting our help center.

Exporting

What's the difference between all the possible export types?

The .ply file format supports point clouds and meshes without textures, and can store data in binary, reducing file size. The .obj file format supports point clouds, meshes, and textures, but can't store data in binary, which increases file size. The .gltf and .glb formats let you store the whole volumetric video in a single file or in two files.
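To see the binary-versus-text difference concretely, here is a small sketch using the third-party Python library trimesh (not part of EF EVE) that exports the same mesh both ways and compares sizes:

```python
import trimesh

# Export the same mesh as binary .ply and as plain-text .obj and compare
# sizes. trimesh is a third-party library, used purely for illustration.
mesh = trimesh.creation.icosphere(subdivisions=4)    # any mesh works
ply_bytes = mesh.export(file_type='ply')             # binary by default
obj_text = mesh.export(file_type='obj')              # plain text

print(len(ply_bytes), len(obj_text.encode()))        # .ply is much smaller
```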

How can I make the export faster?

Reducing the number of points you’re working with always helps. This can be done using the cleaning methods in the Scene tab, such as the cleaning box, or the cleaning brush in the Mask tab. Alternatively, you can use Pointcloud decimation to reduce the number of points and thus increase speed. Another common slowdown occurs in Watertight Mesh generation, where you can set the Sample Point Distance to a larger value to increase speed at the cost of fewer points being used for generation. The final stage that usually takes a while is UV Texture Generation; for this you can try to generate a smaller mesh in the mesh generation features, or split your export load between multiple computers (multiple licenses will be needed).
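As an illustration of why decimation speeds things up, here is a simple voxel-grid decimation sketch in NumPy. It mirrors the idea behind the Pointcloud decimation feature, though the app's actual algorithm may differ and the 1 cm voxel size is an assumption:

```python
import numpy as np

# Simple voxel-grid decimation: keep at most one point per voxel. Fewer
# points in means faster meshing, texturing, and export downstream.
def decimate(points: np.ndarray, voxel: float = 0.01) -> np.ndarray:
    """Reduce an Nx3 point cloud to at most one point per voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return points[unique_idx]
```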

Why do I get an empty export?

This most often happens when one of your features ends up destroying the mesh; the usual culprits are decimation features or mesh cleanup. You can check this by disabling features one at a time and rendering, to see at which stage your frame is destroyed. If this is not the case, then it's likely a bug and you should inform us about it.

What do all of these file formats mean? (.ply, .obj, .eve ...)

* .ply holds a single frame, as long as it isn't a textured mesh; it can store compressed (binary) data.
* .obj holds any single frame, without compression.
* .eve is our custom file format for holding the recorded volumetric video.
* .cr is the project file used to save and load your settings.
* .clb is a calibration file, which lets you store and load the calibration for the same cameras.
* .gltf+.bin or .glb are different versions of a volumetric video in its point cloud or mesh form; these are the most widely supported in other programs.

How can I export to .gltf or .glb format?

You can use a feature in the Convert tab after you have finished exporting a sequence of .obj files containing meshes with textures. You can find more information here: https://ef-eve.com/help-center/obj-with-texutres-to-gltf-or-glb/
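Outside the app, the same conversion can be sketched with the third-party Python library trimesh; the file paths below are placeholders:

```python
import glob
import trimesh

# Load each textured .obj frame and re-export it as .glb. The paths are
# placeholders; EF EVE's Convert tab does this conversion natively.
for obj_path in sorted(glob.glob('export/frame_*.obj')):
    mesh = trimesh.load(obj_path)                  # picks up the .mtl and texture
    mesh.export(obj_path.replace('.obj', '.glb'))  # one self-contained file
```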

Cameras

How many cameras should I use?

It depends heavily on your final goal. Usually we recommend 4-7 Azure Kinect cameras for a good recording from all directions. However, you can get even better volumetric capture results with more cameras, all the way up to 10 or 12. You can also go lower: with as few as 2 cameras it is technically possible to cover the scene by recording from opposite directions.

How do I connect more cameras?

There are two ways to record with more cameras. The first is to add USB PCIe cards to get more USB ports, but keep in mind that Azure Kinects are very specific about which USB host controllers they support; find out more here: https://ef-eve.com/help-center/pc-requirements/ . The other is to use more than one computer: the Network feature lets multiple computers on the same network record at the same time.

Why are the colors between my cameras different?

This might be because each of your cameras has automatic gain control, which can be turned off in Record tab > Color controls ( https://ef-eve.com/help-center/color-controls/ ). Alternatively, your scene might not be evenly lit.
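If small differences remain after disabling auto gain, they can be evened out in post. A crude per-channel gain match against a reference camera might look like this (an illustrative sketch, not an EF EVE feature):

```python
import numpy as np

# Scale each color channel of one camera's image so its mean matches a
# reference camera's image. Illustrative only; real color matching is
# usually more careful than a global per-channel gain.
def match_gains(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    gains = reference.reshape(-1, 3).mean(0) / image.reshape(-1, 3).mean(0)
    return np.clip(image * gains, 0, 255).astype(np.uint8)
```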

Why are some of my cameras not shown in the camera list?

There are multiple possible reasons. To get a camera to show up, try these steps: unplug all the cables from the camera, wait a couple of seconds, and plug them all back in. Ensure that no other program (like Zoom) is using the camera. After that, refresh the camera list. If your camera is still not shown, try launching the Azure Kinect Viewer (also known as k4aviewer) and check whether that program can find your camera. If even the viewer can't find your camera, it might be faulty.

What camera setup should I use?

It depends on how many cameras you have and what you're trying to record. The basic idea is to space your cameras evenly, usually at ~1.5 meters from the center of your stage. If you have a lot of cameras, consider placing one camera above the scene looking down. Also alternate camera heights if you have enough cameras to do so.
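A small sketch of that "evenly spaced ring" idea, using the 1.5 m radius from the guideline above; the camera height is an assumption:

```python
import numpy as np

# Evenly space N cameras on a ring around the stage center. The 1.5 m
# radius follows the guideline above; the height value is an assumption.
def ring_positions(n_cameras: int, radius: float = 1.5, height: float = 1.2):
    """Return (x, y, z) positions, y up, for cameras spaced evenly in a circle."""
    angles = np.linspace(0, 2 * np.pi, n_cameras, endpoint=False)
    return [(radius * np.cos(a), height, radius * np.sin(a)) for a in angles]

# e.g. ring_positions(6) -> six viewpoints 60 degrees apart, aimed at the center.
```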

Why are my cameras missing frames?

Your cameras are most likely missing frames because of unsupported USB host controllers (as stated by Microsoft: Windows, Intel, Texas Instruments (TI), and Renesas are the only supported host controllers). Another possibility is insufficient hardware (usually the CPU), or your disk being used by another program at the same time. We've also noticed that on unsupported USB host controllers you can get more stable results by using fewer of the USB ports (for example, only one port used on a hub that has 4 USB ports).

Why are my cameras glitching?

This is usually caused by USB host controller incompatibility. Azure Kinects only officially work with a few supported USB host controllers; with everything else, they either work, work partially, or don't work at all. We have noticed that with some USB host controllers the cameras behave well as long as you don't connect too many cameras/devices to the same USB card, even if it has extra ports.

Mixed

Why did the program crash/freeze?

We are actively developing this program, so bugs and crashes can still happen for various reasons. To help us fix any issue you encounter, please contact us ( https://ef-eve.com/contact-us ) describing what you were doing when the program crashed, and send us your program log files. They can be found by clicking [Help] -> [Open Logs path] in the EF EVE top menu, or by going to C:\Users\[YOUR USER]\AppData\LocalLow\EF EVE\EF EVE and finding the files called Player.log and Player-prev.log.
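If you prefer to locate the logs programmatically, this small Python snippet builds the same path from the current user's home directory:

```python
from pathlib import Path

# Build the log path the answer above describes, for the current user.
log_dir = Path.home() / "AppData" / "LocalLow" / "EF EVE" / "EF EVE"
for name in ("Player.log", "Player-prev.log"):
    print(log_dir / name, "exists:", (log_dir / name).exists())
```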

I found a bug

If you find a bug, it would be great if you let us know, so that we can fix it ASAP. You can find ways to contact us here: https://ef-eve.com/contact-us.
It's best if you can provide this information about the bug: program version, description of the expected behavior, description of the encountered behavior, steps to reproduce the bug, and program logs (found by clicking [Help] -> [Open Logs path] in the program).

What should my PC specs be for a 4 Azure Kinect setup?

CPU: Intel® Core™ i9-10920X @ 3.5 GHz (12 cores / 24 threads)
RAM: 64 GB quad-channel (4× 16 GB)
GPU: minimum NVIDIA RTX A2000; recommended NVIDIA RTX A4000
USB: 4-port USB 3.0 PCIe card with 4 dedicated 5 Gbps channels (4 USB host controllers)
For setups with a different number of cameras, see the requirements here: https://ef-eve.com/help-center/pc-requirements/

3rd Party Support

Which 3rd party software can you use with the volumetric videos?

Any program that supports .gltf or .glb files should work. We also have our own custom plugins for Unity and Unreal, which can be found on our Downloads page: https://ef-eve.com/downloads/

Can I use volumetric video in other creative software?

EF EVE has many partnerships with other creative software companies. You can easily integrate your volumetric videos into Unity, Unreal Engine, TouchDesigner, Notch and other major programs.

How do I use the recording in Unity?

You must export your video to either an .obj sequence or a .ply sequence, and then import those frames into your project using our Unity plugin ( https://ef-eve.com/downloads/ ).