Tomasi-Kanade Factorization

Date:

29 Jan 2020

Paper:

Shape and Motion from Image Streams under Orthography: a Factorization Method

Algorithm:

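In short (my paraphrase, in the notation of the code below): track \(P\) feature points over \(F\) frames, stack their image coordinates into a \(2F \times P\) measurement matrix \(W\), and subtract each row's mean to get the registered matrix \(\tilde{W}\). Under orthography,

\[ \tilde{W} = R\,S, \qquad R \in \mathbb{R}^{2F \times 3} \ \text{(camera motion)}, \qquad S \in \mathbb{R}^{3 \times P} \ \text{(shape)}, \]

so \(\operatorname{rank}(\tilde{W}) \le 3\). The factorization is obtained from the SVD of \(\tilde{W}\) and then fixed up to a rotation by the metric constraints (equation (16) below).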

Singular value decomposition:

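A minimal NumPy sketch (random matrix, made-up sizes) of the SVD call used below and of how the rank-3 truncation in step 3 is read off from it:

import numpy as np

A = np.random.randn(6, 5)                   # stand-in for a 2F x P registered matrix (F=3, P=5)
u, s, vh = np.linalg.svd(A, full_matrices=True)
print(u.shape, s.shape, vh.shape)           # (6, 6) (5,) (5, 5); s holds singular values, descending
A3 = u[:, :3] @ np.diag(s[:3]) @ vh[:3, :]  # best rank-3 approximation of A
print(np.linalg.norm(A - A3))               # small only if A is (nearly) rank 3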

Key equations from paper:

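Reconstructed here from the step comments in my code (my transcription, not copied from the paper; \(O_1', \Sigma', O_2', \hat{R}, \hat{S}\) match the variable names below):

\(\tilde{W} = W - \frac{1}{P}\,W\,\mathbf{1}\mathbf{1}^T\) (registration: subtract each row's mean; "Equation 1" in the code comments)

\(\tilde{W} = O_1 \Sigma O_2\) (12: singular value decomposition)

\(\hat{W} = O_1' \Sigma' O_2'\) (13: keep the three largest singular values)

\(\hat{R} = O_1' (\Sigma')^{1/2}, \quad \hat{S} = (\Sigma')^{1/2} O_2'\) (14: a first factorization, determined only up to an invertible \(Q\))

\(R = \hat{R} Q, \quad S = Q^{-1} \hat{S}\) (15)

\(\hat{i}_f^T Q Q^T \hat{i}_f = 1, \quad \hat{j}_f^T Q Q^T \hat{j}_f = 1, \quad \hat{i}_f^T Q Q^T \hat{j}_f = 0\) (16: metric constraints, with \(\hat{i}_f^T, \hat{j}_f^T\) the rows of \(\hat{R}\) belonging to frame \(f\))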

Note:

\((16)\) says that every row of \(\hat{R}Q\) has unit length and that the two rows belonging to the same frame are orthogonal, i.e. the corresponding entries of \((\hat{R}Q)(\hat{R}Q)^T\) agree with the identity. The implementation below relaxes this and simply minimises \(\|(\hat{R}Q)(\hat{R}Q)^T - I\|^2\), pushing the whole product towards \(I\).
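
For reference, a sketch of a loss that penalises only the constraints in (16), assuming \(W\) (and hence \(\hat{R}\)) stacks all \(x\)-rows first and all \(y\)-rows second, so that rows \(f\) and \(F{+}f\) belong to the same frame; the function name metric_loss is mine:

import torch

def metric_loss(R_hat, Q, F):
  # rows 0..F-1 of R_hat are the i_f, rows F..2F-1 are the j_f (assumed ordering)
  RQ = R_hat @ Q
  i, j = RQ[:F], RQ[F:]
  loss  = ((i * i).sum(dim=1) - 1).pow(2).sum()   # i_f^T Q Q^T i_f = 1
  loss += ((j * j).sum(dim=1) - 1).pow(2).sum()   # j_f^T Q Q^T j_f = 1
  loss += ((i * j).sum(dim=1)).pow(2).sum()       # i_f^T Q Q^T j_f = 0
  return loss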

My implementation:

import numpy as np
import torch

def TomasiKanade(W, F, P, lr=1e-2, steps=1000):
  """Factor the 2F x P measurement matrix W into motion R (2F x 3) and shape S (3 x P)."""
  assert W.shape == (2*F, P), 'W should have shape (2*F, P)'
  # 1. Equation 1, find W_tilde
  mean = np.mean(W,axis=1, keepdims=True) #; print(mean)
  W_tilde = W - mean #; print(W_tilde)

  # 2. Equation 12, SVD
  u, s, vh = np.linalg.svd(W_tilde, full_matrices=True) #; print(u.shape, s.shape, vh.shape)

  # 3. Equation 13, Approximate Rank
  O1_prime, Sigma_prime, O2_prime  = u[:,0:3], s[0:3], vh[0:3,:]
  #print(O1_prime.shape, Sigma_prime.shape, O2_prime.shape)

  # 4. Equation 14, find R_hat, S_hat
  sqrt_Sigma = np.diag(np.sqrt(Sigma_prime)) #; print(Sigma_prime); print(sqrt_Sigma)
  R_hat = np.matmul(O1_prime,sqrt_Sigma)
  S_hat = np.matmul(sqrt_Sigma,O2_prime)

  # 5. Equation 16, find Q
  Q = torch.eye(3, requires_grad=True,dtype=torch.double) #; print(Q)
  R_hat_tensor = torch.from_numpy(R_hat)
  I = torch.eye(2*F,dtype=torch.double)
  optimizer = torch.optim.Adam([Q], lr=lr)
  loss = torch.nn.MSELoss()
  for i in range(int(steps)):
    optimizer.zero_grad()
    RQ = torch.matmul(R_hat_tensor,Q)
    RQQR = torch.matmul(RQ,torch.t(RQ)) #; print(RQ.shape,RQQR.shape,I.shape)
    loss(RQQR,I).backward()
    optimizer.step()
  #print(Q)

  # 6. Equation 15, find R,S
  Q_np = Q.detach().numpy()
  R = np.matmul(R_hat,Q_np)
  S = np.matmul(np.linalg.inv(Q_np),S_hat)
  return R,S

if __name__ == '__main__':
  F = 2; P = 3
  W = np.array([
    [1,2,3],
    [4,5,6],
    [7,8,9],
    [10,11,12],
  ])
  R,S = TomasiKanade(W,F,P) ; print(R) ; print(S)
[[-0.5  0.   0. ]
 [-0.5  0.   0. ]
 [-0.5  0.   0. ]
 [-0.5  0.   0. ]]
[[ 2.  0. -2.]
 [ 0.  0.  0.]
 [ 0.  0.  0.]]
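
The toy \(W\) above is degenerate: after subtracting the row means every row is \([-1, 0, 1]\), so \(\tilde{W}\) has rank 1 (which is why the recovered \(R\) and \(S\) are mostly zero). A sketch of a less degenerate check, reusing the TomasiKanade function above on synthetic orthographic projections; all values here are made up:

import numpy as np

np.random.seed(0)
F, P = 5, 20
S_true = np.random.randn(3, P)                  # random 3-D points (true shape)

I_rows, J_rows = [], []
for _ in range(F):
  Rf = np.linalg.qr(np.random.randn(3, 3))[0]   # random orthonormal camera axes
  I_rows.append(Rf[0] @ S_true)                 # x image coordinates for this frame
  J_rows.append(Rf[1] @ S_true)                 # y image coordinates for this frame
W = np.vstack([I_rows, J_rows])                 # shape (2F, P): x-rows first, then y-rows

R, S = TomasiKanade(W, F, P)
W_tilde = W - W.mean(axis=1, keepdims=True)
print(np.abs(R @ S - W_tilde).max())            # should be ~0: noiseless W_tilde is exactly rank 3

Note that \(R S = \hat{R}\hat{S}\) whatever \(Q\) the optimisation in step 5 finds (as long as it stays invertible), so this only checks the rank theorem; how close the rows of \(R\) come to orthonormal camera axes depends on how well the metric constraints are satisfied.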

Solve a quadratic equation with PyTorch (the same gradient-descent trick used to find \(Q\) in step 5 above):

import torch

# minimise y = (x - 2)^2 by gradient descent; the minimum is at x = 2
x = torch.tensor(0.0, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.01)
for i in range(1000):
    optimizer.zero_grad()
    y = (x - 2)**2
    y.backward()   # retain_graph is unnecessary: the graph is rebuilt each iteration
    optimizer.step()
print(x)
tensor(2.0000, requires_grad=True)