Four pairs of matched points with known 3D and 2D coordinates; solving for the camera position gives a large x-axis deviation. Why?


I currently have 4 pairs of matched points: the 2D positions of the image features and their corresponding 3D positions. I have also calibrated the camera, so the intrinsic matrix and distortion coefficients are known. I call cv2.solvePnP() in Python with the AP3P method, since it needs the fewest feature points. After obtaining R and t, I compute `-np.linalg.inv(R).dot(tvecs)` (i.e. -R⁻¹·t) to get the camera position in the world coordinate system. The problem is that in the result the x-axis deviation is large, tens of centimeters of error, while the y-axis is only off by a few centimeters. Why?
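For context on that last step: solvePnP returns the world-to-camera transform X_cam = R·X_world + t, so setting X_cam = 0 gives the camera center C = -R⁻¹·t. A minimal sketch (the rvec/tvec values below are made up purely for illustration) showing that, since R is a rotation matrix, the transpose can stand in for the inverse:

import numpy as np
import cv2

# hypothetical pose, for illustration only
rvec = np.array([[0.01], [0.02], [0.0]])
tvec = np.array([[1.0], [2.0], [50.0]])

R, _ = cv2.Rodrigues(rvec)
# camera center in world coordinates: C = -R^{-1} t; R is orthonormal,
# so inv(R) == R.T and the transpose avoids an explicit matrix inversion
C_inv = -np.linalg.inv(R).dot(tvec)
C_T = -R.T.dot(tvec)
print(np.allclose(C_inv, C_T))  # True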

Below is my test code. Any help is appreciated.

import cv2
import numpy as np

# intrinsic matrix (focal lengths and principal point, unit: pixel)
IntrinsicMatrix = np.array([[5816.91523, 0., 2634.55839],
                            [0., 5787.34531, 3687.93429],
                            [0., 0., 1.]])
# distortion coefficients (k1, k2, p1, p2, k3), dimensionless
distCoeffs = np.array([0.0597, -0.0416, 0.005, -0.0058, 0.])

# 3D object points in the world frame (unit: cm)
Points3D = np.zeros((4, 1, 3), dtype=np.float64)
Points3D[0, 0, :] = -7.0, 7.0, 0.0
Points3D[1, 0, :] = 7.0, 7.0, 0.0
Points3D[2, 0, :] = 7.0, -7.0, 0.0
Points3D[3, 0, :] = -7.0, -7.0, 0.0

# corresponding 2D image points (unit: pixel)
Points2D = np.zeros((4, 1, 2), dtype=np.float64)
Points2D[0, 0, :] = 2365., 3668.
Points2D[1, 0, :] = 2766., 3671.
Points2D[2, 0, :] = 2771., 3261.
Points2D[3, 0, :] = 2364., 3258.

found, rvecs, tvecs = cv2.solvePnP(objectPoints=Points3D,
                                         imagePoints=Points2D,
                                         cameraMatrix=IntrinsicMatrix,
                                         distCoeffs=distCoeffs,
                                         flags=cv2.SOLVEPNP_AP3P
                                         )
if found:
    R, _ = cv2.Rodrigues(rvecs)
    if np.linalg.matrix_rank(R) == 3:
        position_result = -np.linalg.inv(R).dot(tvecs)
        print(position_result)


My result is as follows:

# unit cm
position : 
 [[ -39.0898995 ]
 [   2.77867074]
 [-192.27239882]]
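To quantify how well that pose actually fits the four correspondences, the 3D points can be reprojected and compared with the measured pixels. A minimal sketch, assuming the rvecs, tvecs, and input arrays from the test code above are still in scope (cv2.projectPoints is the standard OpenCV call for this):

# reproject the 3D points with the recovered pose and compare against the
# measured 2D points; large residuals flag a bad pose or bad inputs
projected, _ = cv2.projectPoints(Points3D, rvecs, tvecs,
                                 IntrinsicMatrix, distCoeffs)
residuals = projected.reshape(-1, 2) - Points2D.reshape(-1, 2)
print("per-point reprojection error (px):", np.linalg.norm(residuals, axis=1))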

I shot the image with the camera facing it directly and held level (the image appears at the center of the frame). I take the center of the image as the origin of the world coordinate system (with axes parallel to the camera's XOY plane), and the order of the four 3D points follows the convention in the cv2.solvePnP documentation:

- point 0: [-squareLength / 2,  squareLength / 2, 0]
- point 1: [ squareLength / 2,  squareLength / 2, 0]
- point 2: [ squareLength / 2, -squareLength / 2, 0]
- point 3: [-squareLength / 2, -squareLength / 2, 0]

In theory the result should be approximately [0, 0, 0], so why is the error on the x axis this large?
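As an aside, these four points are exactly the planar-square layout that OpenCV's SOLVEPNP_IPPE_SQUARE flag was designed for, so it may be worth cross-checking AP3P against it. A sketch, assuming a recent OpenCV version (the flag was introduced around 3.4.4/4.0) and the same arrays as in the test code:

# cross-check with the planar-square solver; the point order matches the
# squareLength convention quoted above
found, rvecs2, tvecs2 = cv2.solvePnP(objectPoints=Points3D,
                                     imagePoints=Points2D,
                                     cameraMatrix=IntrinsicMatrix,
                                     distCoeffs=distCoeffs,
                                     flags=cv2.SOLVEPNP_IPPE_SQUARE)
if found:
    R2, _ = cv2.Rodrigues(rvecs2)
    print(-R2.T.dot(tvecs2))  # camera position from the alternative solver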


I obtained the matches by matching a template image against an image captured with the camera. The world coordinate system is something I assumed; I am not sure whether that is correct.

As far as I know, the origin in OpenCV is usually at img[0][0], and the 3D origin is also [0, 0, 0]; in other words, OpenCV image coordinates are by default all on the positive axes. Could it be that you set your coordinate-system origin wrong?

import numpy as np
import cv2

# intrinsic matrix (focal lengths and principal point, unit: pixel)
IntrinsicMatrix = np.array([[5816.91523, 0., 2634.55839],
                            [0., 5787.34531, 3687.93429],
                            [0., 0., 1.]])
# distortion coefficients (k1, k2, p1, p2, k3), dimensionless
distCoeffs = np.array([0.0597, -0.0416, 0.005, -0.0058, 0.])
# 3D object points (unit: cm)
Points3D = np.array([[-7.0, 7.0, 0.0], [7.0, 7.0, 0.0],
                     [7.0, -7.0, 0.0], [-7.0, -7.0, 0.0]])

# image center used to shift the 2D points toward the origin
midx = 2566.5
midy = 3464.5

# 2D image points shifted by the image center (unit: pixel)
Points2D = np.array([[2365. - midx, 3668. - midy], [2766. - midx, 3671. - midy],
                     [2771. - midx, 3261. - midy], [2364. - midx, 3258. - midy]])
# original pixel coordinates, for reference (unit: pixel):
# Points2D[0, 0, :] = 2365., 3668.
# Points2D[1, 0, :] = 2766., 3671.
# Points2D[2, 0, :] = 2771., 3261.
# Points2D[3, 0, :] = 2364., 3258.

found, rvecs, tvecs = cv2.solvePnP(objectPoints=Points3D,
                                   imagePoints=Points2D,
                                   cameraMatrix=IntrinsicMatrix,
                                   distCoeffs=distCoeffs,
                                   flags=cv2.SOLVEPNP_AP3P
                                   )
if found:
    R, _ = cv2.Rodrigues(rvecs)
    if np.linalg.matrix_rank(R) == 3:
        position_result = -np.linalg.inv(R).dot(tvecs)
        print(position_result)