TY - GEN
T1 - Perspective Transformation Layer
AU - Khatri, Nishan
AU - Dasgupta, Agnibh
AU - Shen, Yucong
AU - Zhong, Xin
AU - Shih, Frank Y.
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Incorporating geometric transformations that reflect the relative position changes between an observer and an object into computer vision and deep learning models has attracted much attention in recent years. However, existing proposals mainly focus on affine transformations, which are insufficient to reflect such geometric position changes. Furthermore, current solutions often apply a neural network module to learn a single transformation matrix, which not only ignores the importance of multi-view analysis but also introduces extra training parameters from the module beyond the transformation matrix itself, increasing model complexity. In this paper, a perspective transformation layer is proposed in the context of deep learning. The proposed layer can learn homography and thereby reflect the geometric positions between observers and objects. In addition, by directly training its transformation matrices, a single proposed layer can learn an adjustable number of viewpoints without additional module parameters. The experiments and evaluations confirm the superiority of the proposed layer.
AB - Incorporating geometric transformations that reflect the relative position changes between an observer and an object into computer vision and deep learning models has attracted much attention in recent years. However, existing proposals mainly focus on affine transformations, which are insufficient to reflect such geometric position changes. Furthermore, current solutions often apply a neural network module to learn a single transformation matrix, which not only ignores the importance of multi-view analysis but also introduces extra training parameters from the module beyond the transformation matrix itself, increasing model complexity. In this paper, a perspective transformation layer is proposed in the context of deep learning. The proposed layer can learn homography and thereby reflect the geometric positions between observers and objects. In addition, by directly training its transformation matrices, a single proposed layer can learn an adjustable number of viewpoints without additional module parameters. The experiments and evaluations confirm the superiority of the proposed layer.
KW - deep learning layer
KW - homography
KW - perspective transformation
UR - http://www.scopus.com/inward/record.url?scp=85171996523&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85171996523&partnerID=8YFLogxK
U2 - 10.1109/CSCI58124.2022.00250
DO - 10.1109/CSCI58124.2022.00250
M3 - Conference contribution
AN - SCOPUS:85171996523
T3 - Proceedings - 2022 International Conference on Computational Science and Computational Intelligence, CSCI 2022
SP - 1395
EP - 1401
BT - Proceedings - 2022 International Conference on Computational Science and Computational Intelligence, CSCI 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 International Conference on Computational Science and Computational Intelligence, CSCI 2022
Y2 - 14 December 2022 through 16 December 2022
ER -