Ground truth generation

Introduction

To test my algorithms, I have developed in Matlab / Blender a set of functions that lets me generate fully calibrated 3D multi-camera sequences (where all camera parameters are exactly known).

The main idea is to check the accuracy of the different parts of an FVV algorithm. For example, I can check the precision of the pose-recovery block because I know the extrinsic matrix of each camera exactly. Or I can check the likeness of an FVV-interpolated image because I can render the corresponding real image with Blender.

The procedure to generate a ground truth sequence is:

1- Calculate the camera positions for the chosen camera setup (sphere or cylinder) using one of these Matlab functions (see figures 1 and 2; a sketch of the underlying geometry follows the parameter lists):

[Twoc] = GetCameraPosFromSphereSetup(vPi,dr, nN, nM, dThetaL, dThetaR, dPhiT, dPhiD)

or

[Twoc] = GetCameraPosFromCylinderSetup(vPi,dr, nN, nM, dThetaL, dThetaR, dPhiT, dPhiD)

Camera setup input parameters are:

      1. vPi = Point of interest: the point at which all cameras must look [X Y Z]
      2. dr = Distance from vPi to the optical center of the cameras (radius of the sphere or cylinder)
      3. nN = Number of cameras per row
      4. nM = Number of cameras per column
      5. dThetaL = Left delimiting angle in the horizontal plane
      6. dThetaR = Right delimiting angle in the horizontal plane
      7. dPhiT = Top delimiting angle in the vertical plane
      8. dPhiD = Bottom delimiting angle in the vertical plane

The output parameters are:

      1. Twoc = Camera position vectors with respect to the world axes [X Y Z] (one row per camera)
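
Internally this is just a spherical-to-Cartesian conversion over a regular angular grid. Here is a minimal Matlab sketch of the idea; the angle sign conventions and the loop order are my reading of figure 1, not necessarily the exact implementation:

% Sketch: camera positions on a sphere of radius dr around vPi.
% theta sweeps the horizontal plane from -dThetaL to +dThetaR (nN columns);
% phi sweeps the vertical plane from -dPhiD to +dPhiT (nM rows).
theta = linspace(-dThetaL, dThetaR, nN) * pi/180;
phi   = linspace(-dPhiD, dPhiT, nM) * pi/180;
Twoc  = zeros(nN*nM, 3);
k = 1;
for m = 1:nM
    for n = 1:nN
        Twoc(k,:) = vPi + dr * [cos(phi(m))*cos(theta(n)), ...
                                cos(phi(m))*sin(theta(n)), ...
                                sin(phi(m))];
        k = k + 1;
    end
end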

2- Calculate the camera rotation matrices using this Matlab function (the look-at construction behind it is sketched after the parameter lists):

[Rw2oc] = GetCameraRFromLookAt(Twoc, Pi, Up, nCamOriginZFw)

Input parameters:

      1. Twoc = Camera position vectors with respect to the world axes [X Y Z] (one row per camera)
      2. Pi = Point of interest, the central point of the camera view [X Y Z]
      3. Up = Camera up vector [X Y Z] ([0 0 1] -> up is the +Z axis)
      4. Optional nCamOriginZFw = Indicates where the unrotated camera points. Allowed values: 1 = +Z, -1 = -Z (Default: -1)

Output parameter:

      1. Rw2oc = Camera rotation matrices [mat 3x3xnCams]
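
Each rotation follows the classic look-at construction: a forward vector towards Pi, a right vector from the cross product with Up, and a recomputed orthogonal up vector. A minimal sketch for a single camera k, assuming the default -Z forward convention (nCamOriginZFw = -1):

% Sketch: world-to-camera look-at rotation for camera k.
fw = Pi - Twoc(k,:);   fw = fw / norm(fw);   % forward: camera -> point of interest
rt = cross(fw, Up);    rt = rt / norm(rt);   % right vector
up = cross(rt, fw);                          % orthogonal up (already unit length)
% With the camera looking down its -Z axis, the rotation rows are the
% camera axes expressed in world coordinates:
Rw2oc(:,:,k) = [rt; up; -fw];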

3- Generate a Collada file that defines the camera setup with this Matlab function (a usage example covering steps 1 to 3 follows the parameter list):

Collada_SaveCams(strFileName,Rw2oc,Twoc,I,strUnits,strAxisUP)

Input parameters are:

      1. strFileName = File name (including path) of the Collada file (.dae)
      2. Rw2oc = Rotation matrices [mat 3x3xnCams]
      3. Twoc = Translation vectors (camera positions) [X Y Z] [mat nCamsx3] (row based)
      4. I = Camera information matrix. One camera per row.
      5. Optional strUnits = Length unit used in the Collada file. Allowed values: 'Centimeter', 'Meter', 'Millimeter'. (Default: 'Centimeter')
      6. Optional strAxisUP = The up axis of the Collada file. Allowed values: 'Z', 'Y', 'X'. (Default: 'Z')
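
Putting steps 1 to 3 together, a call sequence might look like this (the angle values mirror figure 1, and the contents of the camera information matrix I are just a placeholder here, since its exact layout is not spelled out above):

% Sketch: build the 4x5 sphere rig of figure 1 and export it to Collada.
vPi   = [0 0 5];                                           % point of interest
Twoc  = GetCameraPosFromSphereSetup(vPi, 20, 5, 4, 45, 45, 45, 22.5);
Rw2oc = GetCameraRFromLookAt(Twoc, vPi, [0 0 1]);          % up is +Z
I     = zeros(size(Twoc, 1), 1);                           % placeholder camera info
Collada_SaveCams('cams.dae', Rw2oc, Twoc, I, 'Centimeter', 'Z');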

4- From the Twoc and Rw2oc matrices and the camera parameters, we can calculate the projection matrix (P) of each camera using the following Matlab function (the pinhole relations behind it are sketched after the parameter lists):

[P Kin Kext] = GetCameraP(Rw2oc, Twoc, dCamXFOV, dDAR, nCamPixelsX, nCamPixelsY, dSkewAngle, nCamOriginZFw, nImInv)

Input parameters are:

Camera position data:

      1. Rw2oc = Rotation matrices [mat 3x3xnCams]
      2. Twoc = Camera translation vectors with respect to the world axes (camera positions) [X Y Z] [mat nCamsx3] (row based)

Camera parameters:

      1. dCamXFOV = Horizontal (X) field of view of the camera [degrees]
      2. dDAR = Display aspect ratio (most common: 4/3, 16/9)
      3. nCamPixelsX = CCD pixels per row
      4. nCamPixelsY = CCD pixels per column
      5. dSkewAngle = Angle between CCD rows and columns [degrees]
      6. nCamOriginZFw = Indicates where the unrotated camera points. Allowed values: 1 = +Z, -1 = -Z
      7. nImInv = Indicates whether the camera delivers an inverted image. Allowed values: 0 = not inverted, 1 = inverted
Output parameters:

      1. P = Camera projection matrix [mat 3x4xnCams]
      2. Kin = Intrinsic calibration matrix [mat 3x3xnCams]
      3. Kext = Extrinsic calibration matrix [mat 4x4xnCams]
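
For reference, here is a sketch of the pinhole relations I assume GetCameraP implements, with skew and image inversion omitted for brevity:

% Sketch: pinhole intrinsics from the horizontal FOV, and P for camera k.
fx  = (nCamPixelsX / 2) / tand(dCamXFOV / 2);    % focal length in x pixels
fy  = fx * dDAR * nCamPixelsY / nCamPixelsX;     % corrected for pixel aspect ratio
Kin = [fx 0 nCamPixelsX/2; 0 fy nCamPixelsY/2; 0 0 1];
R    = Rw2oc(:,:,k);   C = Twoc(k,:)';           % rotation and camera centre
Kext = [R, -R*C; 0 0 0 1];                       % world-to-camera (extrinsics)
P    = Kin * Kext(1:3,:);                        % 3x4 projection matrix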

5- Load the generated Collada file in Blender (append it to the existing 3D composition) and render the M*N (number of cameras) video sequences (see figures 3 and 4).

Figure 1 – Definition of the input parameters of GetCameraPosFromSphereSetup. θl=45º, θr=45º, øt=45º, ød=22.5º, N=5, M=4, Up=[0X 0Y 1Z]

Figure 2 – Definition of the input parameters of GetCameraPosFromCylinderSetup. θl=45º, θr=45º, øt=45º, ød=22.5º, N=5, M=4, Up=[0X 0Y 1Z]

Figure 3 – Blender cakeman sequence with 20 cameras automatically added. Parameters: θl=45º, θr=45º, øt=45º, ød=22.5º, N=5, M=4, Up=[0X 0Y 1Z]

Figure 4 – Composition of Blender-generated images of the cakeman sequence. Parameters used: [Cpos P Kin Kext T I] = GetCameraDataFromSphereSetup([0 0 5],20,4,3,90+45,-45,60,-15,[0 0 1],49.134,16/9,1024,576); Blender settings: Resolution: W:1024, H:576 (AR=16/9), PR=(Px/Py)/AR=1; AR: X:1 Y:1

6- And now we have a fully calibrated 3D scene. We know the camera positions exactly, and using the projection matrices (P) we can calculate the corresponding pixel of each point of the 3D scene in each camera.
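
For example, projecting a world point into camera k is just a matrix product followed by the perspective division (a minimal sketch, using the P matrices from step 4):

% Sketch: project a 3D world point into camera k.
Xw = [0 0 5 1]';        % homogeneous world point (here, the point of interest)
x  = P(:,:,k) * Xw;     % homogeneous image point
uv = x(1:2) / x(3);     % pixel coordinates after perspective division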

We can use this to check how accurate our Free Viewpoint Video (FVV) algorithms are, for example:

    1. Generate a fully calibrated 3D scene with 3 cameras.
    2. Feed our FVV algorithm the images from camera 1 and camera 3 and generate the virtual image at the camera 2 position.
    3. Use any similarity metric to check how close the real camera 2 image is to the virtual (FVV) camera 2 image, as in the sketch below.
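
With Matlab's Image Processing Toolbox, step 3 can be as simple as the following (the file names are hypothetical; both images must have the same size):

% Sketch: compare the real camera 2 frame with the FVV-interpolated one.
imReal = imread('cam2_real.png');    % rendered by Blender
imFVV  = imread('cam2_fvv.png');     % produced by the FVV algorithm
fprintf('PSNR: %.2f dB\n', psnr(imFVV, imReal));
fprintf('SSIM: %.4f\n', ssim(rgb2gray(imFVV), rgb2gray(imReal)));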
