Research Article

3D Gaussian Geometric Moment Invariants

Article: 2318983 | Received 29 Nov 2022, Accepted 05 Feb 2024, Published online: 26 Feb 2024

ABSTRACT

3D moment invariants are important tools for 3D image feature representation. In this paper, we introduce a novel approach for constructing 3D moment invariants using Gaussian geometric moments. The proposed method is invariant under translation, rotation, and scale transformations. Numerical experiments validate the invariance and robustness of the proposed method, comparing it with traditional 3D geometric moments and revealing superior performance in the presence of noise and transformations. Additionally, the method is applied to content-based 3D image retrieval, exhibiting promising results through Minkowski distance-based retrieval on the Princeton Shape Benchmark (PSB) database.

Introduction

Moment invariants, as an important tool for image feature representation, are widely used in image registration (Mo, Hu, and Li Citation2021; Yang et al. Citation2017), object recognition (Aggarwal and Singh Citation2016; Hjouji Citation2022; Singh and Aggarwal Citation2020; Yang et al. Citation2021), image retrieval (Tadepalli et al. Citation2021; Wu et al. Citation2020), digital watermarking (Bin et al. Citation2020; Wang et al. Citation2022), biomedicine (Amakdouf et al. Citation2021), and other fields. The application of moment invariants to image recognition can be traced back to 1962, when Hu (Citation1962) derived seven moment invariants based on results from algebraic invariant theory. As 3D technology has rapidly advanced, there is a growing need for 3D moment invariants in scenarios involving stereo image identification and classification (Flusser, Suk, and Zitova Citation2016).

Compared with 2D moment invariants, it is much more complicated to construct 3D invariants, especially 3D rotation invariants. This may be the reason why only a few studies involve 3D invariants. Sadjadi and Hall (Citation1980) first extended moment invariants to 3D space and constructed three second-order 3D moment invariants, but their invariants could not be further generalized. Later, Lo and Don (Citation1989) constructed 12 third-order 3D moment invariants. Suk and Flusser (Citation2011) further constructed 1185 3D moment invariants up to the 16th order. In subsequent research, Flusser, Suk, and Zitova (Citation2016) derived 13 moment invariants and proved that these invariants constitute a complete set of third-order 3D moment invariants.

In the field of 3D moment invariant construction theory, various approaches have been explored. Cyganski and Orr (Citation1988) applied tensor theory to derive 3D rotation invariants. Galvez and Canton (Citation1993) used normalization methods for 3D recognition. Dong and Hua (Citation2006) constructed 3D moment invariants based on geometric primitives (such as distance, area, and volume).

Li et al. (Citation2006, Citation2008) used 3D polar-radius moment invariants for the comparison and recognition of 3D images, and subsequently applied them to 3D model retrieval. Suk, Flusser, and J (Citation2015) constructed 3D moment invariants based on the definition of 3D complex moments. Bedratyuk (Citation2020) incorporated the currently known 3D moment invariants into classical invariant theory.

Due to the better numerical properties of orthogonal moments, researchers have also made efforts to construct 3D invariants using orthogonal moments. Canterakis (Citation1999) applied Zernike moments to construct 3D rotation invariants. Novotni and Klein (Citation2003) used 3D Zernike moments for 3D image retrieval and showed that the Zernike moment descriptor outperforms the spherical harmonic descriptor in retrieval efficiency. Yang et al. (Yang and Flusser Citation2014; Yang, Flusser, and Suk Citation2015) constructed 3D rotation invariants of Gaussian-Hermite moments based on the theoretical results on Gaussian-Hermite moments. Mallahi et al. (Citation2018) constructed 3D moment invariants based on Legendre moments, and Amakdouf et al. (Citation2018) did so based on discrete Krawtchouk moments.

Compared with traditional non-orthogonal moments, orthogonal moments can better overcome information redundancy and noise sensitivity, but the calculation of orthogonal invariant moments is limited by the complexity of extracting invariants and high computational cost. Researchers have sought solutions through improved construction methods and fast numerical algorithms and have achieved some results (Jahid et al. Citation2019; Karmouni et al. Citation2021; Yamni et al. Citation2021).

When moment theory is extended from 2D to 3D, it is natural to replace the double integral with a triple integral. Most 3D moments are therefore defined by triple integrals, which suits data stored as 3D arrays. However, a 3D model has many representations; the triangular mesh is also a common way to describe 3D models because it is simple and fast to process. For this type of model, the data must first be converted to a 3D volumetric representation before 3D moments can be calculated (Mallahi et al. Citation2018; Yang et al. Citation2017; Yang, Flusser, and Suk Citation2015), which makes the calculation more complex. Therefore, how to construct 3D moments that are easier to calculate directly from surface models is a valuable research direction.

In this paper, we propose a method for directly calculating moments of triangular mesh models by defining surface moments. Considering that triangular mesh models are more prone to deformation under the influence of noise, we choose Gaussian geometric moments instead of geometric moments. Gaussian geometric moments, derived by adding a Gaussian kernel to geometric moments, demonstrate robust feature expression and stability in 2D space (Zhang and Xi Citation2014). We define the 3D Gaussian geometric moment through surface integrals and construct the 3D Gaussian geometric moment rotation invariants. Following the scale-invariant construction method proposed by Yang et al. (Citation2017), we also construct the 3D Gaussian geometric moment scale invariants.

The remainder of this paper is organized as follows. In the next section, we revisit the fundamentals of 2D Gaussian geometric moments and define the 3D Gaussian geometric moments. In Section 3, we show the method of designing rotation invariants. In Section 4, we explain how to construct 3D Gaussian geometric moment invariants that satisfy translation, scale, and rotation invariance. Finally, in Section 5, we illustrate the performance of the proposed invariants by experiments on images.

Gaussian Geometric Moments

2D Gaussian Geometric Moments

Assuming that $p, q$ are nonnegative integers and $d = p + q$ is the order of the moment, the geometric moment of the image $f(x, y)$ is:

(1) $m_{pq} = \iint x^p y^q f(x, y)\,dx\,dy.$

The low-order geometric moments have an intuitive meaning: $m_{00}$ denotes the mass of the image (the area of the target in a binary image), and $m_{10}/m_{00}$ and $m_{01}/m_{00}$ give the image's center of gravity, or centroid. The Gaussian geometric moment is obtained by adding a Gaussian kernel to the basis function of the geometric moment and introducing a scale factor $\sigma$:

(2) $M_{pq} = \iint \left(\frac{x}{\sigma}\right)^{p}\left(\frac{y}{\sigma}\right)^{q}\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right) f(x, y)\,dx\,dy.$

Define the one-dimensional basis function of the Gaussian geometric moment:

(3) $B_p(x, \sigma) = \left(\frac{x}{\sigma}\right)^{p}\exp\left(-\frac{x^2}{2\sigma^2}\right),$

thus,

(4) $M_{pq} = \iint B_p(x, \sigma)\, B_q(y, \sigma)\, f(x, y)\,dx\,dy.$
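As a concrete illustration (not part of the original derivation), the following Python sketch evaluates the basis function of Eq. (3) and approximates the 2D Gaussian geometric moment of Eq. (4) on a sampled image by a Riemann sum; the function names and the NumPy-based discretization are our own assumptions.

```python
import numpy as np

def B(p, x, sigma):
    """1D Gaussian geometric moment basis B_p(x, sigma) of Eq. (3)."""
    return (x / sigma) ** p * np.exp(-x**2 / (2.0 * sigma**2))

def gaussian_geometric_moment_2d(f, x, y, p, q, sigma):
    """Riemann-sum approximation of the 2D moment M_pq of Eq. (4).

    f is the image sampled on the grid (x, y), with shape (len(x), len(y)).
    """
    X, Y = np.meshgrid(x, y, indexing="ij")
    dx, dy = x[1] - x[0], y[1] - y[0]
    return np.sum(B(p, X, sigma) * B(q, Y, sigma) * f) * dx * dy
```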

3D Gaussian Geometric Moments

In most of the literature, the 3D geometric moment is defined by a triple integral (Flusser, Suk, and Zitova Citation2016). Assuming that $p, q, r$ are nonnegative integers, the geometric moment of the 3D image $f(x, y, z)$ is expressed as:

(5) $m_{pqr} = \iiint_{\Omega} x^p y^q z^r f(x, y, z)\,dV,$

where $\Omega$ is the spatial region where the object is located, $d = p + q + r$ is the order of the moment, and $dV = dx\,dy\,dz$ is the volume element.

Similar to the 2D space, 3D Gaussian geometric moments can be defined by introducing a Gaussian kernel and a scale factor:

(6) $M_{pqr} = \iiint_{\Omega}\left(\frac{x}{\sigma}\right)^{p}\left(\frac{y}{\sigma}\right)^{q}\left(\frac{z}{\sigma}\right)^{r}\exp\left(-\frac{x^2 + y^2 + z^2}{2\sigma^2}\right) f(x, y, z)\,dV$

or

(7) $M_{pqr} = \iiint_{\Omega} B_p(x, \sigma)\, B_q(y, \sigma)\, B_r(z, \sigma)\, f(x, y, z)\,dV.$

In practice, most 3D data models are constructed from curved surface patches. To avoid voxelizing the 3D model data represented by the patches and to make the calculation more convenient, we use 3D geometric moments defined by surface integrals (Guo, Liu, and Yang Citation2009), namely,

(8) $m_{pqr} = \iint_{\Sigma} x^p y^q z^r f(x, y, z)\,dS,$

where $dS$ is the area element.

Similarly, we define the 3D Gaussian geometric moments:

(9) $M_{pqr} = \iint_{\Sigma}\left(\frac{x}{\sigma}\right)^{p}\left(\frac{y}{\sigma}\right)^{q}\left(\frac{z}{\sigma}\right)^{r}\exp\left(-\frac{x^2 + y^2 + z^2}{2\sigma^2}\right) f(x, y, z)\,dS$

or

(10) $M_{pqr} = \iint_{\Sigma} B_p(x, \sigma)\, B_q(y, \sigma)\, B_r(z, \sigma)\, f(x, y, z)\,dS.$

In the next section, we will mainly use the 3D Gaussian geometric moments defined by the surface integral to construct the invariants.

Discrete Implementation

We provide a discrete formula for 3D Gaussian geometric moments applicable to the commonly used triangular surface patches. Since only the face list and vertex coordinates are available, it is necessary to simplify the calculation and use each small triangular patch as the basic calculation unit. Assume that the coordinates of the three vertices of any triangular face are $A(x_i, y_i, z_i)$, $B(x_j, y_j, z_j)$, $C(x_k, y_k, z_k)$; then the discrete calculation formula of the 3D Gaussian geometric moment is:

(11) $M_{pqr} = \sum_{i,j,k}\left(\frac{x_{ijk}}{\sigma}\right)^{p}\left(\frac{y_{ijk}}{\sigma}\right)^{q}\left(\frac{z_{ijk}}{\sigma}\right)^{r}\exp\left(-\frac{x_{ijk}^2 + y_{ijk}^2 + z_{ijk}^2}{2\sigma^2}\right) S_{ijk},$

where $x_{ijk} = \frac{x_i + x_j + x_k}{3}$, $y_{ijk} = \frac{y_i + y_j + y_k}{3}$, $z_{ijk} = \frac{z_i + z_j + z_k}{3}$, and $S_{ijk}$ is the area of the triangle, calculated using the cross product of vectors:

(12) $S_{ijk} = \frac{1}{2}\left|\overrightarrow{AB} \times \overrightarrow{AC}\right|,$

where $\overrightarrow{AB} = (x_j - x_i,\, y_j - y_i,\, z_j - z_i)$, $\overrightarrow{AC} = (x_k - x_i,\, y_k - y_i,\, z_k - z_i)$, and $\times$ denotes the cross product of two vectors.
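A minimal NumPy sketch of Eqs. (11)-(12) is given below; the array layout for vertices and faces and the assumption that $f \equiv 1$ on the surface are ours, not prescribed by the paper.

```python
import numpy as np

def gaussian_geometric_moment_3d(vertices, faces, p, q, r, sigma):
    """Discrete 3D Gaussian geometric moment of Eq. (11) for a triangular mesh.

    vertices : (V, 3) array of vertex coordinates.
    faces    : (F, 3) integer array of vertex indices per triangle.
    Assumes f(x, y, z) = 1 on the surface.
    """
    tri = vertices[faces]                    # (F, 3, 3): vertices A, B, C of each face
    c = tri.mean(axis=1)                     # face centroids (x_ijk, y_ijk, z_ijk)
    ab = tri[:, 1] - tri[:, 0]               # edge vectors AB and AC
    ac = tri[:, 2] - tri[:, 0]
    area = 0.5 * np.linalg.norm(np.cross(ab, ac), axis=1)   # S_ijk, Eq. (12)
    u = c / sigma
    w = np.exp(-np.sum(c**2, axis=1) / (2.0 * sigma**2))    # Gaussian weight
    return np.sum(u[:, 0]**p * u[:, 1]**q * u[:, 2]**r * w * area)
```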

Rotation Invariants of 3D Gaussian Geometric Moments

Matrix Description of 3D Rotation Transformation

For the rotation transformation in 3D space, we use extrinsic Tait-Bryan angles. Let $\alpha$ denote the angle of rotation around the $z$-axis, with rotation matrix:

(13) $R_z(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix},$

let $\beta$ denote the angle of rotation around the $y$-axis, with rotation matrix:

(14) $R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix},$

and let $\gamma$ denote the angle of rotation around the $x$-axis, with rotation matrix:

(15) $R_x(\gamma) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}.$

A rotation of 3D space can then be described by the rotation matrix:

(16) $R = R_x(\gamma)\, R_y(\beta)\, R_z(\alpha).$
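For reference, a short Python sketch assembling the rotation matrix of Eqs. (13)-(16); the function name is our own.

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Extrinsic Tait-Bryan rotation R = Rx(gamma) Ry(beta) Rz(alpha), Eqs. (13)-(16)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])
    return Rx @ Ry @ Rz
```

Rotated copies of a mesh, such as those used in the experiments later, can then be produced as vertices @ rotation_matrix(alpha, beta, gamma).T.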

Constructing the Rotation Invariants

In this section, we construct the 3D Gaussian geometric moment rotation invariants by establishing the correlation between the 3D geometric moments and the 3D Gaussian geometric moments. The relationship between basis functions of 3D geometric moments and 3D Gaussian geometric moments is first presented.

Theorem 1

Suppose the coordinate transformation of three-dimensional space rotation is as follows:

(17) $\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = R\begin{pmatrix} x \\ y \\ z \end{pmatrix} = R_x(\gamma)\, R_y(\beta)\, R_z(\alpha)\begin{pmatrix} x \\ y \\ z \end{pmatrix}.$

For nonnegative integers $p, q, r$, if the transformed geometric moment basis function $x'^p y'^q z'^r$ can be expanded into the following form:

(18) $x'^p y'^q z'^r = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, x^{p_i} y^{q_i} z^{r_i},$

then the basis functions of the Gaussian geometric moment satisfy:

(19) $B_p(x', \sigma)\, B_q(y', \sigma)\, B_r(z', \sigma) = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, B_{p_i}(x, \sigma)\, B_{q_i}(y, \sigma)\, B_{r_i}(z, \sigma),$

where $\kappa_i(p, q, r, \alpha, \beta, \gamma)$ are the expansion coefficients, and $N$, $p_i$, $q_i$, $r_i$ depend on $p, q, r$.

The proof process of Theorem 1 is shown in Appendix A.

For geometric moments, when formula (18) holds, we have:

(20) $m'_{pqr} = \iiint_{\Omega'} x'^p y'^q z'^r f(x, y, z)\,dV' = \iiint_{\Omega}\left(\sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, x^{p_i} y^{q_i} z^{r_i}\right) f(x, y, z)\,dV,$

where $dV' = dx'\,dy'\,dz' = |R|\,dx\,dy\,dz = |R|\,dV$, and $|R| = 1$ is the determinant of the rotation matrix $R$; therefore $dV' = dV$.

Then

(21) $m'_{pqr} = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\iiint_{\Omega} x^{p_i} y^{q_i} z^{r_i} f(x, y, z)\,dV = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, m_{p_i q_i r_i}.$

From Theorem 1, we can further derive the relationship between 3D Gaussian geometric moment and 3D geometric moment.

Theorem 2

Under the 3D space rotation transformation, for nonnegative integers $p, q, r$, if the expression $x'^p y'^q z'^r$ can be expanded into the form (18), then for the 3D Gaussian geometric moments we have

(22) $M'_{pqr} = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, M_{p_i q_i r_i}.$

The proof process of Theorem 2 is shown in Appendix B.

According to Theorem 2, rotation invariant expressions that hold for 3D geometric moments also hold when the 3D geometric moments are replaced with 3D Gaussian geometric moments. Therefore, based on the 3D geometric moment rotation invariants given by Suk and Flusser (Citation2011), we construct the 3D Gaussian geometric moment rotation invariants. The first five invariants are as follows (a computational sketch follows Eq. (27)):

(23) $I_1 = M_{200} + M_{020} + M_{002}$

(24) $I_2 = M_{200}^2 + M_{020}^2 + M_{002}^2 + 2\left(M_{011}^2 + M_{101}^2 + M_{110}^2\right)$

(25) $I_3 = M_{200}^3 + M_{020}^3 + M_{002}^3 + 3M_{200}M_{110}^2 + 3M_{200}M_{101}^2 + 3M_{020}M_{110}^2 + 3M_{020}M_{011}^2 + 3M_{002}M_{101}^2 + 3M_{002}M_{011}^2 + 6M_{110}M_{101}M_{011}$

(26) $I_4 = M_{300}^2 + M_{030}^2 + M_{003}^2 + 3M_{210}^2 + 3M_{201}^2 + 3M_{120}^2 + 3M_{102}^2 + 3M_{021}^2 + 3M_{012}^2 + 6M_{111}^2$

(27) $I_5 = M_{300}^2 + M_{030}^2 + M_{003}^2 + M_{210}^2 + M_{201}^2 + M_{120}^2 + M_{102}^2 + M_{021}^2 + M_{012}^2 + 2M_{300}M_{102} + 2M_{300}M_{120} + 2M_{030}M_{012} + 2M_{030}M_{210} + 2M_{003}M_{021} + 2M_{003}M_{201} + 2M_{210}M_{012} + 2M_{201}M_{021} + 2M_{120}M_{102}.$
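The five invariants of Eqs. (23)-(27) translate directly into code. Below is a hedged sketch assuming the moments are stored in a mapping M keyed by the order triple (p, q, r); the data structure is our choice, not the paper's.

```python
def rotation_invariants(M):
    """First five rotation invariants of Eqs. (23)-(27).

    M maps order triples (p, q, r) to 3D Gaussian geometric moments, e.g. M[2, 0, 0].
    """
    I1 = M[2, 0, 0] + M[0, 2, 0] + M[0, 0, 2]
    I2 = (M[2, 0, 0]**2 + M[0, 2, 0]**2 + M[0, 0, 2]**2
          + 2 * (M[0, 1, 1]**2 + M[1, 0, 1]**2 + M[1, 1, 0]**2))
    I3 = (M[2, 0, 0]**3 + M[0, 2, 0]**3 + M[0, 0, 2]**3
          + 3 * M[2, 0, 0] * (M[1, 1, 0]**2 + M[1, 0, 1]**2)
          + 3 * M[0, 2, 0] * (M[1, 1, 0]**2 + M[0, 1, 1]**2)
          + 3 * M[0, 0, 2] * (M[1, 0, 1]**2 + M[0, 1, 1]**2)
          + 6 * M[1, 1, 0] * M[1, 0, 1] * M[0, 1, 1])
    I4 = (M[3, 0, 0]**2 + M[0, 3, 0]**2 + M[0, 0, 3]**2
          + 3 * (M[2, 1, 0]**2 + M[2, 0, 1]**2 + M[1, 2, 0]**2
                 + M[1, 0, 2]**2 + M[0, 2, 1]**2 + M[0, 1, 2]**2)
          + 6 * M[1, 1, 1]**2)
    I5 = (M[3, 0, 0]**2 + M[0, 3, 0]**2 + M[0, 0, 3]**2
          + M[2, 1, 0]**2 + M[2, 0, 1]**2 + M[1, 2, 0]**2
          + M[1, 0, 2]**2 + M[0, 2, 1]**2 + M[0, 1, 2]**2
          + 2 * (M[3, 0, 0] * (M[1, 0, 2] + M[1, 2, 0])
                 + M[0, 3, 0] * (M[0, 1, 2] + M[2, 1, 0])
                 + M[0, 0, 3] * (M[0, 2, 1] + M[2, 0, 1])
                 + M[2, 1, 0] * M[0, 1, 2]
                 + M[2, 0, 1] * M[0, 2, 1]
                 + M[1, 2, 0] * M[1, 0, 2]))
    return I1, I2, I3, I4, I5
```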

TRS Invariants of 3D Gaussian Geometric Moments

Translation, rotation, and scale transformations (TRS) constitute the simplest spatial coordinate transformations, and TRS invariance is crucial for many practical applications. Although translation invariance of Gaussian geometric moments can be achieved by defining central moments, ensuring scale invariance is more challenging due to the presence of the Gaussian function. This section constructs 3D Gaussian geometric moment scale invariants through a transformation of the scale factor.

Constructing Translation Invariants

The translation invariants of the 3D Gaussian geometric moment can be obtained by the central moment defined as follows:

(28) $U_{pqr} = \iint_{\Sigma} B_p(x - x_c, \sigma)\, B_q(y - y_c, \sigma)\, B_r(z - z_c, \sigma)\, f(x, y, z)\,dS,$

where the center coordinates are defined using 3D geometric moments:

$x_c = \frac{m_{100}}{m_{000}}, \quad y_c = \frac{m_{010}}{m_{000}}, \quad z_c = \frac{m_{001}}{m_{000}}.$

Constructing Translation and Scale Invariants

Under the scale transformation $x' = sx$, $y' = sy$, $z' = sz$, let

(29) $\sigma = \sigma_0\sqrt{m_{000}},$

where $\sigma_0$ is a constant and $m_{000}$ is the zero-order 3D geometric moment defined by the surface integral (8).

For the scale transformation, we have

(30) $\sigma' = \sigma_0\sqrt{m'_{000}} = \sigma_0\sqrt{\iint_{\Sigma'} f(x', y', z')\,dS'} = \sigma_0\sqrt{s^2\iint_{\Sigma} f(x, y, z)\,dS} = \sigma_0\, s\sqrt{m_{000}} = s\sigma.$

Then for the basis functions of 3D Gaussian geometric moments, we have:

(31) $B_p(x', \sigma') = \left(\frac{x'}{\sigma'}\right)^{p}\exp\left(-\frac{x'^2}{2\sigma'^2}\right) = \left(\frac{sx}{s\sigma}\right)^{p}\exp\left(-\frac{(sx)^2}{2(s\sigma)^2}\right) = B_p(x, \sigma),$

thus,

(32) $M'_{pqr} = \iint_{\Sigma'} B_p(x', \sigma')\, B_q(y', \sigma')\, B_r(z', \sigma')\, f(x', y', z')\,dS' = s^2\iint_{\Sigma} B_p(x, \sigma)\, B_q(y, \sigma)\, B_r(z, \sigma)\, f(x, y, z)\,dS = s^2 M_{pqr}.$

The scale invariants of the 3D Gaussian geometric moments can then be obtained by normalization:

(33) $M'_{pqr}/m'_{000} = s^2 M_{pqr}/\left(s^2 m_{000}\right) = M_{pqr}/m_{000}.$

By replacing the above moments with 3D Gaussian geometric central moments, we obtain normalized 3D Gaussian geometric central moments, serving as translation and scale invariants:

(34) $V_{pqr} = U_{pqr}/m_{000}.$

Constructing TRS Invariants

Since $m_{000}$ is invariant under rotation, $\sigma' = \sigma$ under the rotation transformation; that is, the method of constructing the Gaussian geometric moment scale invariants does not affect rotation invariance. By replacing the moments in the rotation invariants constructed in Section 3.2 with the normalized 3D Gaussian geometric central moments, we obtain the TRS invariants (a computational sketch follows the list below):

(35) $I_1 = V_{200} + V_{020} + V_{002}$

(36) $I_2 = V_{200}^2 + V_{020}^2 + V_{002}^2 + 2\left(V_{011}^2 + V_{101}^2 + V_{110}^2\right)$

(37) $I_3 = V_{200}^3 + V_{020}^3 + V_{002}^3 + 3V_{200}V_{110}^2 + 3V_{200}V_{101}^2 + 3V_{020}V_{110}^2 + 3V_{020}V_{011}^2 + 3V_{002}V_{101}^2 + 3V_{002}V_{011}^2 + 6V_{110}V_{101}V_{011}$

(38) $I_4 = V_{300}^2 + V_{030}^2 + V_{003}^2 + 3V_{210}^2 + 3V_{201}^2 + 3V_{120}^2 + 3V_{102}^2 + 3V_{021}^2 + 3V_{012}^2 + 6V_{111}^2$

(39) $I_5 = V_{300}^2 + V_{030}^2 + V_{003}^2 + V_{210}^2 + V_{201}^2 + V_{120}^2 + V_{102}^2 + V_{021}^2 + V_{012}^2 + 2V_{300}V_{102} + 2V_{300}V_{120} + 2V_{030}V_{012} + 2V_{030}V_{210} + 2V_{003}V_{021} + 2V_{003}V_{201} + 2V_{210}V_{012} + 2V_{201}V_{021} + 2V_{120}V_{102}.$
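Putting the pieces together, the following sketch outlines one possible TRS pipeline for a triangular mesh: estimate the centroid from the zero- and first-order surface moments, choose $\sigma$ via Eq. (29), compute the centered Gaussian geometric moments of Eq. (28) with the discrete formula (11), normalize them as in Eq. (34), and evaluate Eqs. (35)-(39). It reuses gaussian_geometric_moment_3d() and rotation_invariants() from the sketches above and assumes $f \equiv 1$ on the surface; all function names are ours.

```python
import numpy as np

def trs_invariants(vertices, faces, sigma0=1.0):
    """Sketch of a full TRS pipeline for a triangular mesh (f assumed 1 on the surface).

    sigma0 is the free constant of Eq. (29).
    """
    tri = vertices[faces]
    c = tri.mean(axis=1)                                  # face centroids
    area = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    m000 = area.sum()                                     # zero-order surface moment
    center = (c * area[:, None]).sum(axis=0) / m000       # (m100, m010, m001) / m000
    sigma = sigma0 * np.sqrt(m000)                        # Eq. (29)
    shifted = vertices - center                           # centering, Eq. (28)
    V = {}
    for p in range(4):
        for q in range(4):
            for r in range(4):
                if p + q + r <= 3:
                    U = gaussian_geometric_moment_3d(shifted, faces, p, q, r, sigma)
                    V[p, q, r] = U / m000                 # Eq. (34)
    return rotation_invariants(V)
```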

Numerical Experiment

In this section, we carried out two experiments to show the behavior of our invariants. The first experiment aims to showcase invariance and robustness, while the second focuses on image retrieval using 3D Gaussian geometric moments. The 3D images utilized in both experiments are sourced from Princeton Shape Benchmark (PSB) (Shilane et al. Citation2004), which contains 1814 3D images given by triangular patches of their surfaces.

Verify Invariance and Robustness

In this experiment, we work with four animal templates, numbered m0, m40, m80, and m100, two face templates numbered m291 and m297, and two house templates numbered m386 and m389; see Figure 1. As the translation transformation is relatively straightforward, we focus solely on invariance under scale and rotation transformations. To generate transformed versions, we set each of the rotation angles α, β, γ to 0, π/4, or π/2, and the uniform scale coefficient s to 0.5 or 1.2; for each experimental template we thus obtain 54 transformed images, see Figure 2 for two examples.

Figure 1. The original images.

Figure 2. Example of the rotated pig.

We calculate the Gaussian geometric moment invariants for all the templates, with the scale factor σ = 0.3. The calculation results are shown in Figure 3. It can be seen from Figure 3 that the values computed from the original image and from the transformed images are almost unchanged, so each of the five invariants appears as a nearly flat line.

Figure 3. Values of 3D Gaussian geometric moment invariants.

To quantitatively evaluate the invariance, we calculate the mean relative error (MRE) of each invariant:

(40) $\mathrm{MRE}_i = \frac{1}{n}\sum_{j=1}^{n}\frac{\left|I_{ij} - I_i\right|}{\left|I_i\right|}\times 100\%,$

where $n$ is the total number of rotated images, $I_i$ is the value of the $i$-th invariant calculated from the original image, and $I_{ij}$ is the value of the $i$-th invariant calculated from the $j$-th rotated image.
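Eq. (40) can be evaluated per invariant with a few lines of NumPy; the interface below is our own sketch.

```python
import numpy as np

def mean_relative_error(I_orig, I_transformed):
    """Mean relative error of Eq. (40), in percent, one value per invariant.

    I_orig:        length-5 vector of invariants of the original image.
    I_transformed: (n, 5) array of invariants of the n transformed images.
    """
    I_orig = np.asarray(I_orig, dtype=float)
    I_transformed = np.asarray(I_transformed, dtype=float)
    return 100.0 * np.mean(np.abs(I_transformed - I_orig) / np.abs(I_orig), axis=0)
```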

The MRE for our invariants is nearly zero for each template, as shown in Table 1.

Table 1. MRE of 3D Gaussian geometric moment invariants.

To verify the robustness of the invariants, we select m100 and m386 from Figure 1 as the original images. For each image, we perform equally spaced rotation transforms, taking π/18 as the angle interval over [0, π]. We then add zero-mean Gaussian noise to the coordinates of the triangle vertices of the template surface. This results in rotated templates with noisy surfaces; see Figure 4 for examples. For each of the two templates, we obtain 6859 noisy rotated images.

Figure 4. Example of the rotated animal with noisy surfaces.

We carry out the experiments with both the 3D Gaussian geometric moment invariants (GGMI) and the 3D geometric moment invariants (GMI). The MRE, for comparison, is presented in Figure 5 for m100 and in Figure 6 for m386. Notably, the 3D Gaussian geometric moments exhibit better robustness than the 3D geometric moments.

Figure 5. MRE for the m100 with SNR (a) 45 dB and (b) 50 dB.

Figure 6. MRE for the m386 with SNR (a) 45 dB and (b) 50 dB.

Image Retrieval Using 3D Gaussian Geometric Moments

Content-based 3D image retrieval is a hot topic in 3D image retrieval, and extracting effective features is the key to retrieval. The moment method is an important approach in content-based 3D image retrieval. In this experiment, we apply the 3D Gaussian geometric moment invariants as features for image retrieval. The similarity between images is measured using the Minkowski distance defined as follows:

(41) $d_{ij} = \sum_{k=1}^{n}\frac{\left|I_{ik} - I_{jk}\right|}{\min\left(I_{ik}, I_{jk}\right)}.$
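A small sketch of the distance in Eq. (41), assuming the invariant vectors are strictly positive so the minimum in the denominator never vanishes (the function name is ours):

```python
import numpy as np

def retrieval_distance(Ii, Ij):
    """Distance of Eq. (41) between two invariant feature vectors."""
    Ii = np.asarray(Ii, dtype=float)
    Ij = np.asarray(Ij, dtype=float)
    return float(np.sum(np.abs(Ii - Ij) / np.minimum(Ii, Ij)))
```

Ranking a gallery for retrieval then reduces to sorting its images by this distance to the query's invariant vector.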

We select two types of 3D images from the PSB database for the retrieval experiments. The experimental images are shown in Figures 7 and 8.

Figure 7. The first set of experimental images ((a) is the image to be retrieved).

Figure 8. The second set of experimental images ((a) is the image to be retrieved).

We calculate the 3D Gaussian geometric moment invariants for the two groups of experimental images, and use the Minkowski distance to compute the distance from the image to be retrieved to each image, as shown in Table 2. Thus, the retrieval and ranking results for the first group of experimental images are: a, h, f, g, i, d, b, e, c. The retrieval and ranking results for the second group are: a, b, d, f, e, c, i, h, g.

Table 2. Mean relative error of TRS invariants.

The calculated distance values are relatively smaller for the first type of experimental images due to their simple modeling. Although images e and g feature distinct patterns on the surface, this is not clearly reflected in the results. For the second set of experimental images, the distinction is more obvious, and the distance between the bicycle to be retrieved and other bicycles is much smaller than the distance between it and the motorcycle.

Conclusions

In this paper, for 3D images constructed from surface patches, we defined 3D Gaussian geometric moments via surface integrals and constructed 3D Gaussian geometric moment TRS invariants. The numerical experiments show that the 3D Gaussian geometric moment invariants maintain good invariance under rotation and scale transformations. For noisy rotated images, the MRE of the 3D Gaussian geometric moment invariants is smaller than that of the geometric moment invariants. Moreover, the method is applied to content-based 3D image retrieval, exhibiting promising results through Minkowski distance-based retrieval on the Princeton Shape Benchmark (PSB) database. This study contributes to the advancement of 3D moment invariants, providing a valuable tool for image feature representation with enhanced robustness and invariance. The proposed method holds potential for applications in computer vision, robotics, and medical imaging.


Acknowledgements

This work is partly supported by the National Key R&D Program of China (Grant No. 2019YFF0301800), the National Natural Science Foundation of China (Grant No. 61379106), and the Dongying Science Development Fund (Grant No. DJ2021024).

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are openly available in Princeton Shape Benchmark at https://shape.cs.princeton.edu/benchmark/, reference number psb_db0–3.

Supplementary Material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/08839514.2024.2318983

Additional information

Funding

The work was supported by the National Key R&D Program of China (Grant no. 2019YFF0301800), National Natural Science Foundation of China (Grant no. 61379106), Dongying Science Development Fund (Grant no. DJ2021024). The Dongying Science Development Fund is a funding project provided by Shandong Institute of Petroleum and Chemical Technology for teacher research.

References

  • Aggarwal, A., and C. Singh. 2016. Zernike moments-based Gurumukhi character recognition. Applied Artificial Intelligence 30 (5):429–21. doi:10.1080/08839514.2016.1185859.
  • Amakdouf, H., A. Zouhri, M. El Mallahi, A. Tahiri, D. Chenouni, and H. Qjidaa. 2021. Artificial intelligent classification of biomedical color image using quaternion discrete radial Tchebichef moments. Multimedia Tools and Applications 80 (2):3173–92. doi:10.1007/s11042-020-09781-x.
  • Amakdouf, H., A. Zouhri, M. E. Mallahi, A. Tahiri, and H. Qjidaa. 2018. Translation scaling and rotation invariants of 3D Krawtchouk moments. In 2018 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 1–6. doi:10.1109/ISACV.2018.8354059.
  • Bedratyuk, L. 2020. 3D geometric moment invariants from the point of view of the classical invariant theory. Journal of Mathematical Imaging and Vision 62 (8):1062–75. doi:10.1007/s10851-020-00954-9.
  • Bin, X., J. Luo, X. Bi, W. Li, and C. Beijing. 2020. Fractional discrete Tchebyshev moments and their applications in image encryption and watermarking. Information Sciences 516:545–59. doi:10.1016/j.ins.2019.12.044.
  • Canterakis, N. 1999. 3D Zernike moments and Zernike affine invariants for 3D image analysis and recognition. In Scandinavian Conference on Image Analysis, Kangerlussuaq, Greenland, 85–93. DSAGM.
  • Cyganski, D., and J. A. Orr. 1988. Object recognition and orientation determination by tensor methods. In Advances in computer vision and image processing, 101–44. Greenwich, Connecticut, USA: JAI Press.
  • Dong, X., and L. Hua. 2006. 3-D affine moment invariants generated by geometric primitives. In International Conference on Pattern Recognition, 544–47. IEEE Computer Society, Los Alamitos, California, USA.
  • Flusser, J., T. Suk, and B. Zitova. 2016. 2D and 3D image analysis by moments. Chichester, West Sussex, UK: John Wiley and Sons.
  • Galvez, J. M., and M. Canton. 1993. Normalization and shape recognition of three-dimensional objects by 3D moments. Pattern Recognition 26 (5):667–81. doi:10.1016/0031-3203(93)90120-L.
  • Guo, K. H., C. C. Liu, and J. Y. Yang. 2009. Application of curved surface moment invariants. Journal of System Simulation 21 (6):1599–601.
  • Hjouji, A. 2022. Orthogonal invariant Lagrange-Fourier moments for image recognition. Expert Systems with Applications 199:117126. doi:10.1016/j.eswa.2022.117126.
  • Hu, M. K. 1962. Visual pattern recognition by moment invariants. IRE Trans Information Theory 8 (3):179–87. doi:10.1109/TIT.1962.1057692.
  • Jahid, T., H. Karmouni, M. Sayyouri, A. Hmimid, and H. Qjidaa. 2019. Fast algorithm of 3D discrete image orthogonal moments computation based on 3D cuboid. Journal of Mathematical Imaging and Vision 61 (4):534–54. doi:10.1007/s10851-018-0860-7.
  • Karmouni, H., M. Yamni, O. El Ogri, A. Daoui, M. Sayyouri, H. Qjidaa, A. Tahiri, M. Maarouf, and B. Alami. 2021. Fast computation of 3D discrete invariant moments based on 3D cuboid for 3D image classifcation. Circuits, Systems, and Signal Processing 40 (8):3782–812. doi:10.1007/s00034-020-01646-w.
  • Li, Z. M., Y. J. Liu, and H. Li. 2008. 3D model retrieval based on polar-radius surface moment invariants. Journal of Software 18:71–76.
  • Li, Z. M., G. B. Yu, Y. J. Liu, and H. Li. 2006. 3D polar-radius-invariant-moments and their application to 3D model retrieval. Pattern Recognition & Artificial Intelligence 19 (3):362–67.
  • Lo, C. H., and H. S. Don. 1989. 3-D moment forms: Their construction and application to object identification and positioning. Pattern Analysis and Machine Intelligence, IEEE Transactions On 11 (10):1053–64. doi:10.1109/34.42836.
  • Mallahi, M. E., J. E. Mekkaoui, A. Zouhri, H. Amakdouf, and H. Qjidaa. 2018. Rotation scaling and translation invariants of 3D radial shifted Legendre moments. International Journal of Automation & Computing 15 (2):47–58. doi:10.1007/s11633-017-1105-8.
  • Mo, H., H. Hu, and H. Li. 2021. Geometric moment invariants to spatial transform and n-fold symmetric blur. Pattern Recognition 115:107887–107887. doi:10.1016/j.patcog.2021.107887.
  • Novotni, M., and R. Klein. 2003. 3D Zernike descriptors for content based shape retrieval. In ACM symposium on solid modeling and applications, 216–25. Seattle, Washington, USA: ACM.
  • Sadjadi, F. A., and E. L. Hall. 1980. Three-dimensional moment invariants. IEEE Transactions on Pattern Analysis and Machine Intelligence 2 (2):127–36. doi:10.1109/TPAMI.1980.4766990.
  • Shilane, P., P. Min, M. Kazhdan, and T. Funkhouser. 2004. The Princeton shape benchmark. In Shape Modeling Applications, Genova, Italy, 167–178. doi:10.1109/SMI.2004.1314504.
  • Singh, C., and A. Aggarwal. 2020. An effective approach for noise robust and rotation invariant handwritten character recognition using Zernike moments features and optimal similarity measure. Applied Artificial Intelligence 34 (13):1011–37. doi:10.1080/08839514.2020.1796370.
  • Suk, T., and J. Flusser. 2011. Tensor method for constructing 3D moment invariants. In 14th International Conference on Computer Analysis of Images and Patterns, 212–19. Springer, Berlin, Heidelberg.
  • Suk, T., J. Flusser, and B. J. 2015. 3D rotation invariants by complex moments. Pattern Recognition 48 (11):3516–26. doi:10.1016/j.patcog.2015.05.007.
  • Tadepalli, Y., M. Kollati, S. Kuraparthi, P. Kora, A. K. Budati, and L. K. Pampana. 2021. Content-based image retrieval using Gaussian-Hermite moments and firefly and grey wolf optimization. CAAI Transactions on Intelligence Technology 6 (2):135–46. doi:10.1049/cit2.12040.
  • Wang, C., Q. Zhang, B. Ma, Z. Xia, J. Li, T. Luo, and Q. Li. 2022. Light-field image watermarking based on geranion polar harmonic Fourier moments. Engineering Applications of Artificial Intelligence: The International Journal of Intelligent Real-Time Automation 113:104970. doi:10.1016/j.engappai.2022.104970.
  • Wu, Z., S. Jiang, X. Zhou, Y. Wang, Y. Zuo, Z. Wu, L. Liang, and Q. Liu. 2020. Application of image retrieval based on convolutional neural networks and Hu invariant moment algorithm in computer telecommunications. Computer Communications 150:729–38. doi:10.1016/j.comcom.2019.11.053.
  • Yamni, M., A. Daoui, O. El Ogri, H. Karmouni, M. Sayyouri, and H. Qjidaa. 2021. Accurate 2D and 3D images classification using translation and scale invariants of Meixner moments. Multimedia Tools and Applications 80 (17):26683–712. doi:10.1007/s11042-020-10311-y.
  • Yang, B., and J. Flusser. 2014. 2D and 3D image analysis by gaussian-hermite moments. In Moments and moment invariants – Theory and applications, ed. G.A. Papakostas, 143–173. Thrace, Greece: Science Gate Publishing.
  • Yang, B., J. Flusser, and T. Suk. 2015. 3D rotation invariants of Gaussian-Hermite moments. Pattern Recognition Letters 54:18–26. doi:10.1016/j.patrec.2014.11.014.
  • Yang, B., J. Kostkova, J. Flusser, and S. T. 2017. Scale invariants from Gaussian-Hermite moments. Signal Processing 132:77–84. doi:10.1016/j.sigpro.2016.09.013.
  • Yang, H., S. Qi, J. Tian, P. Niu, and X. Wang. 2021. Robust and discriminative image representation: Fractional-order Jacobi-Fourier moments. Pattern Recognition 115:107898. doi:10.1016/j.patcog.2021.107898.
  • Yang, B., T. Suk, J. Flusser, Z. Shi, and X. Chen. 2017. Rotation invariants from Gaussian-Hermite moments of color images. Signal Processing 143:282–91. doi:10.1016/j.sigpro.2017.08.027.
  • Zhang, C., and P. Xi. 2014. Gaussian-geometric moments and its application in feature matching & image registration. Journal of Computer-Aided Design & Computer Graphics 26 (7):1116–25.

Appendix A

Proof of Theorem 1

The following proves that, under the coordinate transformation of a 3D rotation, the basis functions of the Gaussian geometric moment satisfy the same expansion as the geometric moment basis functions.

Suppose the coordinate transformation of the 3D rotation is as follows:

(A1) $(x', y', z')^T = R\,(x, y, z)^T,$

that means

(A2) $\left(\frac{x'}{\sigma}, \frac{y'}{\sigma}, \frac{z'}{\sigma}\right)^T = R\left(\frac{x}{\sigma}, \frac{y}{\sigma}, \frac{z}{\sigma}\right)^T.$

For nonnegative integers $p, q, r$, if the geometric moment basis function $x'^p y'^q z'^r$ can be expanded into the following form:

(A3) $x'^p y'^q z'^r = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, x^{p_i} y^{q_i} z^{r_i},$

then we have

(A4) $\left(\frac{x'}{\sigma}\right)^{p}\left(\frac{y'}{\sigma}\right)^{q}\left(\frac{z'}{\sigma}\right)^{r} = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\left(\frac{x}{\sigma}\right)^{p_i}\left(\frac{y}{\sigma}\right)^{q_i}\left(\frac{z}{\sigma}\right)^{r_i}.$

For the sum of squares of the transformed coordinates, we have

(A5) $x'^2 + y'^2 + z'^2 = (x', y', z')(x', y', z')^T = (x, y, z)R^T R(x, y, z)^T;$

because the matrix $R$ is orthogonal, that is, $R^T R = E$, where $E$ is the identity matrix, we get

(A6) $x'^2 + y'^2 + z'^2 = (x, y, z)R^T R(x, y, z)^T = (x, y, z)(x, y, z)^T = x^2 + y^2 + z^2.$

Then for the basis function of the Gaussian geometric moment, we have

$B_p(x', \sigma)\, B_q(y', \sigma)\, B_r(z', \sigma) = \left(\frac{x'}{\sigma}\right)^{p}\left(\frac{y'}{\sigma}\right)^{q}\left(\frac{z'}{\sigma}\right)^{r}\exp\left(-\frac{x'^2 + y'^2 + z'^2}{2\sigma^2}\right) = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\left(\frac{x}{\sigma}\right)^{p_i}\left(\frac{y}{\sigma}\right)^{q_i}\left(\frac{z}{\sigma}\right)^{r_i}\exp\left(-\frac{x^2 + y^2 + z^2}{2\sigma^2}\right) = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, B_{p_i}(x, \sigma)\, B_{q_i}(y, \sigma)\, B_{r_i}(z, \sigma).$

The proof of Theorem 1 has been completed.

Appendix B

Proof of Theorem 2

Below we prove that, under the rotation transformation of three-dimensional space, the 3D Gaussian geometric moments satisfy the same expansion as the 3D geometric moments.

Consider the area element:

(B1) $dS = \sqrt{(dx\,dy)^2 + (dx\,dz)^2 + (dy\,dz)^2},$

where $dx\,dy$, $dx\,dz$, and $dy\,dz$ are the projections of the area element onto the planes $xOy$, $xOz$, and $yOz$, respectively.

For the transformation $(x', y', z')^T = R(x, y, z)^T$, we have

$dx'\,dy' = |A|\,dx\,dy, \quad dx'\,dz' = |B|\,dx\,dz, \quad dy'\,dz' = |C|\,dy\,dz,$

where $A = \begin{pmatrix}\cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha\end{pmatrix}$, $B = \begin{pmatrix}\cos\beta & \sin\beta \\ -\sin\beta & \cos\beta\end{pmatrix}$, $C = \begin{pmatrix}\cos\gamma & -\sin\gamma \\ \sin\gamma & \cos\gamma\end{pmatrix}$.

Since $|A| = |B| = |C| = 1$, we have

$dx'\,dy' = dx\,dy, \quad dx'\,dz' = dx\,dz, \quad dy'\,dz' = dy\,dz,$

that is, $dS' = dS$; when the object is rotated, its surface area remains unchanged.

Therefore, under the condition of formula (18), it follows from Theorem 1 that for the 3D Gaussian geometric moments:

$M'_{pqr} = \iint_{\Sigma'} B_p(x', \sigma)\, B_q(y', \sigma)\, B_r(z', \sigma)\, f(x, y, z)\,dS' = \iint_{\Sigma}\left(\sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, B_{p_i}(x, \sigma)\, B_{q_i}(y, \sigma)\, B_{r_i}(z, \sigma)\right) f(x, y, z)\,dS = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\iint_{\Sigma} B_{p_i}(x, \sigma)\, B_{q_i}(y, \sigma)\, B_{r_i}(z, \sigma)\, f(x, y, z)\,dS = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, M_{p_i q_i r_i}.$

That is, under the rotation transformation of three-dimensional space, if the expression $x'^p y'^q z'^r$ can be expanded into the form of Equation (18), then for the 3D geometric moments we have:

(B2) $m'_{pqr} = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, m_{p_i q_i r_i},$

And for the 3D Gaussian geometric moments we also have:

(B3) $M'_{pqr} = \sum_{i=1}^{N}\kappa_i(p, q, r, \alpha, \beta, \gamma)\, M_{p_i q_i r_i}.$

The proof of Theorem 2 has been completed.