Open Access

ARTICLE


An Improved Calibration Method of Grating Projection Measurement System

Qiucheng Sun*, Weiyu Dai, Mingyu Sun, Zeming Ren, Mingze Wang

College of Computer Science and Technology, Changchun Normal University, 130000, Changchun, China

* Corresponding Author: Qiucheng Sun

Computers, Materials & Continua 2023, 75(2), 3957-3970. https://doi.org/10.32604/cmc.2023.037254

Abstract

In the traditional fringe projection profilometry system, the optical centers of the projector and the camera are both virtual points in space, so the spatial position relationships required by the model are not easy to obtain, leading to inaccurate system parameters and reduced measurement accuracy. This paper proposes a method for solving the system parameters of the fringe projection profilometry system, in which the spatial positions of the camera and projector are adjusted according to the obtained calibration parameters. The steps are as follows. First, based on the coordinate-system transformations used in the calibration process, formulas are given for the vertical distance from the camera light center to the reference plane and for the distance between the projector and camera light centers. Secondly, according to the projector calibration principle, the pose in which the projector light axis is perpendicular to the reference plane is found by checking whether the Z-axis of the projector coordinate system is parallel to that of the reference plane coordinate system. Then, to satisfy the constraint that the line joining the projector and camera light centers must be parallel to the reference plane, the camera's spatial position is adjusted so that its vertical distance to the reference plane approaches that of the projector light center. Finally, once the above position constraints are satisfied, the three-dimensional (3D) reconstruction of the target object can be completed using the system parameters of the phase-height model. Experimental results demonstrate that the method improves measurement accuracy and verify that it is effective and practical for 3D shape measurement.

1  Introduction

Phase measuring profilometry (PMP) [1] is a well-known active optical measurement method, which plays an important role in measuring 3D surface shape and object deformation due to its high precision and its non-destructive, full-field and non-contact nature. With the emergence of low-cost experimental components, PMP has been widely applied in the biomedical industry, industrial inspection, machine vision, profile modeling, reverse engineering, etc. [2–6].

A projector and a camera make up the traditional phase measuring profilometry system. The projector projects a group of grating fringes with known phase shifts and period onto the tested object. The camera captures the images of the modulated, deformed grating fringes, and the phase information of the fringes is obtained through image processing. The pixel coordinates of each point on the object surface are then obtained by decoding. The surface contour or deformation of the object can be reconstructed from the images using phase-height mapping. Although the PMP measurement system has been widely used, there are some uncertainties in practical application and measurement that may lead to errors in shape reconstruction. The main problem is the difficulty of determining the precise positional relationship between the projector and the camera when building a traditional PMP system.

In order to overcome this problem, numerous researchers have made efforts in recent years. Some early literature [7,8] retained the strict position constraints of the traditional PMP system and focused on solving for accurate system parameters. Zhou et al. [9] proposed a direct phase-height mapping algorithm that obtains a linear mapping between plane height and phase difference through multiple measurements of the phase of various target planes, and the light axis of the projector does not have to be orthogonal to the reference plane. Tavares et al. [10] proposed a single-plane linear calibration method to describe the correlation between height and phase, although in practice the relationship is not restricted to the proposed linear form. Other researchers believe that the position constraints of the traditional PMP system are not easily met and have improved the positional structure of the camera and projector in the traditional PMP system, for example by adding deflection angles to the traditional phase-height model, which relaxes the perpendicularity and parallelism conditions. Mao et al. [11] presented an improved optical structure that only requires the optical element to be moved so that the two light axes intersect in the reference plane, which improves the operability of the experiment. Xiao et al. [12] proposed an improved method in which the positional constraints of the projector and camera are relaxed and the two light axes are only required to intersect at one point on the plane, which improves the operability and simplicity of the experiment; however, coordinate transformation must be taken into account. Bian et al. [13] also recognized the limitations of the traditional model and suggested a new one. The maneuverability of the experiment is enhanced by the addition of rotation angles when the two pupil heights of the projection and imaging systems are not equal and the corresponding axes are not collinear. With continued investigation, many scholars have worked diligently to change the measurement system's structure and have subsequently created corresponding phase-height mapping methods [14]. Du et al. [15] proposed a phase-to-height model using 12 coefficients; however, the complexity of this model is relatively high, so it is not widely used. Zhang et al. [16] discussed the internal relationship between two height models in the PMP method, i.e., geometric transformation and matrix transformation, and derived a phase-height model with nine coefficients by using structural constants, which further simplifies the model parameters. Ma et al. [17] suggested a new phase-height model that relaxes the rigid position restrictions on the projector and the camera and simplifies the experiment. Kang et al. [18] proposed a phase-height model using nine coefficients, but each object requires at least two measurements at different heights to determine the coefficients uniquely, which increases the complexity of the calculation.

However, the introduction of these parameters increases the complexity of the model. Owing to the simplicity of its calculation, the traditional model still has practical significance. This paper proposes a method for solving the system parameters of the fringe projection profilometry (FPP) system. The method manually adjusts the spatial positions of the projector and camera in accordance with the calibration parameters, making the calculated system parameters conform to the positional constraints of the model. This addresses the low measurement accuracy previously caused by inaccurate system parameters and ensures the accuracy of the model.

The structure of the paper is as follows. Section 2 describes the basic idea of the four-step phase shift method. Section 3 introduces the derivation of the traditional phase-height model. Section 4 presents the method for determining the distance between the camera light center and the projector light center, as well as the distance between the camera light center and the reference plane. Section 5 provides an experimental comparison of the method with conventional measurement methods. Finally, the conclusions of this paper are presented in Section 6.

2  Phase Shift Method

The technique projects a set of fringe images onto the object and uses a CCD camera to capture the modulated grating fringe images. The light intensity distribution of the grating captured by the CCD camera is as follows, where the projected fringe pattern satisfies a standard sinusoidal distribution.

$$I(x,y)=G(x,y)\left\{A(x,y)+B(x,y)\cos\left[\phi(x,y)+\frac{n\pi}{2}\right]\right\} \tag{1}$$

In which G(x,y) stands for the light intensity ratio coefficient of the measured object surface, B(x,y) stands for the modulation amplitude, A(x,y) stands for the background light intensity of the object, [ϕ(x,y)+(nπ/2)] is the phase value, (nπ/2) stands for the phase shift of the image, and ϕ(x,y) stands for the phase principal value to be calculated.
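As an illustration of Eq. (1), the sketch below generates the four projected fringe images (one per π/2 phase step) as ideal sinusoids along the x direction. This is a minimal sketch: the pattern size and pitch are arbitrary example values, and in practice projector gamma and intensity range also have to be accounted for.

```python
import numpy as np

def make_fringe_patterns(width=912, height=1140, pitch_px=18, steps=4):
    """Ideal sinusoidal fringe patterns for projection, one per n*pi/2 phase
    step in Eq. (1). Values are normalized to [0, 1]."""
    x = np.arange(width)
    patterns = []
    for n in range(steps):
        row = 0.5 + 0.5 * np.cos(2 * np.pi * x / pitch_px + n * np.pi / 2)
        patterns.append(np.tile(row, (height, 1)))  # repeat the row over all lines
    return patterns
```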

Since accurate relative phase values can be obtained from only a few experimental images, the four-step phase-shift approach is applied in this paper [19].

$$\begin{aligned}
I_1&=A(x,y)+B(x,y)\cos[\phi(x,y)]\\
I_2&=A(x,y)+B(x,y)\cos\left[\phi(x,y)+\tfrac{\pi}{2}\right]\\
I_3&=A(x,y)+B(x,y)\cos[\phi(x,y)+\pi]\\
I_4&=A(x,y)+B(x,y)\cos\left[\phi(x,y)+\tfrac{3\pi}{2}\right]
\end{aligned} \tag{2}$$

The phase principal value can be obtained from Eq. (2).

$$\phi(x,y)=\arctan\left[\frac{I_4(x,y)-I_2(x,y)}{I_1(x,y)-I_3(x,y)}\right] \tag{3}$$

Since the arctangent function is used in Eq. (3), the phase principal value obtained from the four-step phase-shift method is wrapped in the interval (−π, π), and the relative phase of the folded grating needs to be expanded from (−π, π) to a continuous absolute phase in (−∞, +∞) [20–22]. A phase-to-height algorithm then needs to be constructed in order to determine the object's height.
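A minimal sketch of Eq. (3) in Python/NumPy follows; using np.arctan2 instead of a plain arctangent places the wrapped phase directly in (−π, π] and resolves the quadrant ambiguity.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Wrapped phase of Eq. (3) from four pi/2-shifted intensity images
    (NumPy arrays of identical shape)."""
    return np.arctan2(I4 - I2, I1 - I3)
```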

Fig. 1 shows the phase-unwrapping process. First, phase-shifted grating images are acquired at three different fringe frequencies (140/134/129). Secondly, the phase-wrapping operation is carried out using the phase shift approach, and the phase-unwrapping operation is carried out using the heterodyne principle. Finally, the 3D measurement is completed in 3D software according to the 3D data of the measured object.


Figure 1: Phase-unwrapping diagram
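The heterodyne unwrapping step can be sketched as follows, assuming the three fringe sets contain 140, 134 and 129 fringes across the field (as suggested by Fig. 1), so that the pairwise beats have 6 and 5 fringes and their beat spans a single fringe. The function and variable names are illustrative only, and all inputs are wrapped phase maps.

```python
import numpy as np

def _wrap(p):
    return np.mod(p, 2 * np.pi)

def _unwrap_with_reference(phi_wrapped, phi_coarse, ratio):
    # Scale the coarse absolute phase up to the fine fringe density and pick
    # the integer fringe order that makes the fine phase consistent with it.
    k = np.round((phi_coarse * ratio - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k

def heterodyne_unwrap(phi1, phi2, phi3, f1=140, f2=134, f3=129):
    phi12 = _wrap(phi1 - phi2)      # beat phase, f1 - f2 = 6 fringes
    phi23 = _wrap(phi2 - phi3)      # beat phase, f2 - f3 = 5 fringes
    phi123 = _wrap(phi12 - phi23)   # single fringe -> already absolute
    Phi12 = _unwrap_with_reference(phi12, phi123, (f1 - f2) / 1.0)
    Phi1 = _unwrap_with_reference(phi1, Phi12, f1 / (f1 - f2))
    return Phi1                     # absolute phase of the densest fringe set
```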

3  Phase-to-Height Mapping Algorithm

A camera plus a projector make up an FPP system. The principle of the FPP system is the triangulation measurement model.

Fig. 2 shows the geometric mathematical model of the grating projection system. OXYZ is the world coordinate system, the OXY plane is the reference plane, and a right-handed coordinate system is established; the Z-axis is the optical axis of the projector, and the origin O is the projection of the projector light center O_P onto the reference plane. The point O_C is the light center of the CCD camera, and O_P is the light center of the projector. M is an arbitrary point on the tested object, D denotes the distance O_CO_P, h denotes the length of the segment MN, and L denotes the distance OO_P.


Figure 2: Traditional PMP system

In ΔBNM and ΔBOO_P,

$$\frac{BN}{MN}=\frac{BO}{L} \tag{4}$$

In ΔANM and ΔAO_C′O_C, where O_C′ is the projection of the camera light center O_C onto the reference plane,

$$\frac{AN}{MN}=\frac{AO_C'}{L} \tag{5}$$

Hence, MN can be calculated by using Eqs. (4) and (5).

$$MN=\frac{L\times BA}{D+BA} \tag{6}$$

$$OB=\frac{\theta_B}{2\pi}\lambda_0 \tag{7}$$

$$OA=\frac{\theta_A}{2\pi}\lambda_0 \tag{8}$$

$$BA=OA-OB=\frac{(\theta_A-\theta_B)\lambda_0}{2\pi} \tag{9}$$

Hence, h can be calculated by using Eqs. (6) and (9).

$$h=\frac{L(\theta_A-\theta_B)}{(\theta_A-\theta_B)+\dfrac{2\pi D}{\lambda_0}} \tag{10}$$

$$h=\frac{L\,\Delta\phi}{\Delta\phi+\dfrac{2\pi D}{\lambda_0}},\qquad \Delta\phi=\theta_A-\theta_B \tag{11}$$

In Eq. (10), θA is the phase value of the phase shift fringe on the reference plane, and the phase θA can be solved by obtaining the grating image of the reference plane through the CCD camera. θB is the phase value after modulation by object height. In the measurement, the grating image is deformed by the change of object height, and the phase θB at this time is solved by collecting the changed grating image.

Eq. (11) is the height-phase mapping relation of the traditional fringe projection measurement system. In Eq. (11), L, D, and λ0 are the system parameters, where λ0 is the fringe pitch, i.e., the number of pixels corresponding to one cycle (2π) of phase change along the X-axis on the reference plane after the grating has been projected onto it. Since both O_C and O_P are virtual points in space, it is difficult to locate their positions accurately in actual measurements. In order to improve measurement accuracy, this paper proposes a method to solve for L and D by means of the coordinate-system transformation relations obtained during projector and camera calibration. During the experiment, the method adjusts the relative spatial positions of the projector and camera in accordance with the calibration parameters, making the calculated system parameters conform to the positional constraints of the model and improving the accuracy of the calculated system parameters L and D.
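Once L, D and λ0 are known, Eq. (11) maps the unwrapped phase difference to height pixel-wise. A minimal sketch:

```python
import numpy as np

def phase_to_height(delta_phi, L, D, lambda0):
    """Eq. (11): height map from the unwrapped phase difference dphi = theta_A - theta_B.
    L and D are in the same length unit as the returned height; lambda0 is the
    fringe pitch on the reference plane (in pixels)."""
    return L * delta_phi / (delta_phi + 2 * np.pi * D / lambda0)
```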

4  Solving the Parameters L and D by Calibration

This section gives the solution process of the vertical distance L from the light center of the camera to the reference plane and the distance D from the light center of the camera to the center of the projector.

4.1 Solving the Parameter L

The principle diagram of the coordinate system transformations in the camera calibration process is shown in Fig. 3. The imaging process is a series of transformations of spatial object points among the four coordinate systems shown in Fig. 3.


Figure 3: Coordinate transformation diagram of the model of the camera

The spatial attitude of the target plane can be changed arbitrarily during the calibration process. When the target plane coincides with the reference plane, the target plane is the reference plane. Then L can be regarded as the distance between the light center of the projector or camera and the world coordinate plane, as shown in Fig. 4. The external parameters represent the spatial position relationship between the camera coordinate system and the world coordinate system. In this paper, Zhang's calibration method [23,24] is adopted to obtain the internal parameters of the camera and the corresponding external parameters of the target.
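With OpenCV, Zhang's calibration and the extrinsics of the pose in which the target coincides with the reference plane can be obtained roughly as follows. This is a sketch: obj_points, img_points and image_size are assumed inputs holding the detected checkerboard correspondences and image dimensions, and index 0 is used only as an example of the reference-plane pose.

```python
import cv2

# obj_points: list of (N, 3) float32 arrays of checkerboard corners in target coordinates
# img_points: list of (N, 1, 2) float32 arrays of the corresponding detected image corners
# image_size: (width, height) of the calibration images
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 image_size, None, None)

# Extrinsics (R, t of Eq. (12)) for the pose whose target coincides with the reference plane
R, _ = cv2.Rodrigues(rvecs[0])
t = tvecs[0].reshape(3)
```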


Figure 4: Conversion relationships between the world and the camera coordinate systems

In the calibration process, the vertical distance L between the camera light center and the reference plane can be converted into the vertical distance between the camera light center and the target plane, i.e., L is the absolute value of the Z_w coordinate of the CCD camera light center in the world coordinate system. The rotation matrix R and translation vector t convert the world coordinate system into the camera coordinate system. The transformation equations are:

$$\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}=R\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}+t \tag{12}$$

$$\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}=R^{-1}\left(\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}-t\right) \tag{13}$$

When X_c, Y_c and Z_c are all 0 (i.e., at the camera light center), the absolute value of Z_w calculated from Eq. (13) is the vertical distance L between the camera optical center and the reference plane.
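Equivalently, substituting X_c = Y_c = Z_c = 0 into Eq. (13) gives the camera light center in world coordinates as −R⁻¹t (equal to −Rᵀt for a rotation matrix), and L is the magnitude of its Z_w component. A minimal sketch:

```python
import numpy as np

def camera_height_L(R, t):
    """Vertical distance L of the camera light center above the reference
    (target) plane, from the extrinsics R, t of Eq. (12)."""
    C_w = -R.T @ np.asarray(t, dtype=float).reshape(3)  # camera center in the world frame
    return abs(C_w[2])                                  # |Z_w| = L
```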

4.2 Solving for the Parameter D

In the binocular camera model [25], the spatial positions of the left and right cameras are related as follows, where P_l and P_r are the coordinates of a spatial point in the left and right camera coordinate systems, and R_P and T_P are the rotation matrix and translation vector of the right camera with respect to the left camera.

$$P_r=R_P P_l+T_P \tag{14}$$

In the measurement model, the projector is often regarded as another camera. Thus, the spatial position relationship between the projector and the CCD camera can also be represented by R_P and T_P.

In this paper, Falcao's projector-camera calibration method [26] is adopted to obtain the external parameters of the projector; the principle is as follows:

•   Obtain the camera parameters by Zhang's calibration method.

•   Compute the calibration plane in the camera coordinate system.

•   Detect the corners of the projected checkerboard.

•   Recover the 3D spatial coordinates of the projected checkerboard corners by ray-plane intersection (a sketch of this step follows the list).

•   Calibrate the projector from the resulting 2D-3D point correspondences.
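The ray-plane intersection step referred to above can be sketched as follows, assuming the calibration plane is expressed in the camera frame as n·X + d = 0 and K is the camera intrinsic matrix (lens distortion is ignored here for brevity; the function name is illustrative only):

```python
import numpy as np

def backproject_to_plane(pixel, K, n, d):
    """3D point (camera coordinates) where the viewing ray through 'pixel'
    meets the plane n.X + d = 0."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # ray direction
    s = -d / (n @ ray)                                            # n.(s*ray) + d = 0
    return s * ray
```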

The model diagram of the camera and projector is shown in Fig. 5. The main steps of the projector calibration experiment are as follows:


Figure 5: Camera and projector model diagram

Step 1: Capture the experimental photographs; the calibration plate and the projected checkerboard must lie in the same plane, as shown in Fig. 6.

Step 2: Calculate the internal and external parameters of the CCD camera.

Step 3: Calculate the internal and external parameters of the projector.

Step 4: Calculate the relative position of the projector to the camera.


Figure 6: Camera-projector Calibration

The external parameters R_P and T_P describe the pose of the projector relative to the camera, where the rotation matrix has the form $R_P=\begin{bmatrix}r_{p1}&r_{p2}&r_{p3}\\ r_{p4}&r_{p5}&r_{p6}\\ r_{p7}&r_{p8}&r_{p9}\end{bmatrix}$ and the translation vector has the form $T_P=\begin{bmatrix}T_{p1}\\ T_{p2}\\ T_{p3}\end{bmatrix}$. When the line connecting the light centers of the CCD camera and the projector is parallel to the reference plane, the distance D can be calculated from the translation vector T_P. The equation is as follows,

$$D=\sqrt{T_{p1}^2+T_{p2}^2+T_{p3}^2} \tag{15}$$
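In code, Eq. (15) is a single vector norm. A sketch, assuming T_P comes from the camera-projector calibration above:

```python
import numpy as np

def baseline_D(T_P):
    """Eq. (15): distance between the camera and projector light centers,
    valid once their connecting line has been made parallel to the plane."""
    return float(np.linalg.norm(np.asarray(T_P, dtype=float)))
```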

5  Experiments and Analysis

An experimental system is created, as illustrated in Fig. 5, in order to validate the method suggested in this study. The experiment uses a DLP4500 projector with a resolution of 912×1140 pixels and an MV-EM500C/M monochrome CCD camera with a resolution of 2592×1944 pixels. The main experimental steps are as follows:

Preparation: Prepare to build the experimental system, including the test object and reference plane, projection and imaging system.

Calibration: Conduct the monocular camera calibration and camera-projector calibration experiments, and adjust the spatial positions of the projector and camera in accordance with the calibration data, making the calculated system parameters L and D conform to the positional constraints of the model.

Phase shift method: Project the fringes onto the object; the camera captures the images of the deformed grating fringes; the phase-wrapping operation is carried out using the phase shift approach, and the phase-unwrapping operation is carried out using the heterodyne principle.

Data processing: The 3D measurement is completed in the 3D software according to the 3D data of the measured object.

The position constraints of the projector and the camera are satisfied by experimental adjustment; the process is as follows:

In the preparation stage of the experiment, the line connecting the light centers of the projector and camera is made as parallel to the target plane as possible, and the light axis of the projector is made as orthogonal to the reference plane as possible. Since it is difficult to guarantee the accuracy of this initial spatial pose, the relative positions of the reference plane, projector and camera are further calibrated by the following operations.

Firstly, the parameters of the projector and the camera are obtained from the monocular camera calibration and camera-projector calibration, and the expressions for the z-axis of the projector coordinate system and the z-axis of the world coordinate system are then computed from the calibration data. The projector light axis is considered perpendicular to the reference plane when the z-axis of the projector coordinate system is parallel to the z-axis of the world coordinate system (the projector attitude is adjusted to find the optimal solution), as shown in Fig. 7a. Next, the vertical distance L1 from the projector light center to the reference plane is calculated by the method of this paper.
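The parallelism test can be quantified from the projector extrinsics. The sketch below assumes R_wp maps world coordinates into projector coordinates (as in Eq. (12)); under that convention the angle between the projector z-axis and the world z-axis reduces to arccos|R_wp[2, 2]|, and the projector attitude is adjusted until this angle is acceptably small.

```python
import numpy as np

def zaxis_misalignment_deg(R_wp):
    """Angle (degrees) between the projector optical (z) axis and the world
    z-axis, given the world-to-projector rotation matrix R_wp."""
    cos_angle = abs(R_wp[2, 2])
    return float(np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0))))
```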


Figure 7: Calibration process of experimental system

After completing the above operation, the projector and reference plane positions are fixed. The vertical distance L from the camera light center to the reference plane can be measured by the monocular camera calibration experiment. The position of the camera is then adjusted manually so that the value of L tends to L1. When the absolute value of the difference between L and L1 is less than the given permissible error, the line connecting the CCD camera light center and the projector light center can be considered parallel to the reference plane, as shown in Fig. 7b.

The experiments are divided into face-model morphology reconstruction and accuracy measurement of rectangular hexahedra of known height. In order to verify that the method has a good reconstruction effect on complex surfaces, a 3D reconstruction of the face model is performed first.

Fig. 8 shows the images of the deformed fringe intensity on the target object captured by the CCD camera.


Figure 8: Experimental photos of the four-step phase shift method

Fig. 9 shows the real image, the point cloud image and the 3D model map of the face model.


Figure 9: (a) Image of the face model, (b) the point cloud image of the face model, and (c) the 3D model view of the face model

To further verify the accuracy of this method, a series of rectangular hexahedra of known height were measured by the proposed method. The distance between the two fitted planes is calculated to obtain the average height of the rectangular hexahedron. Using this method, the 5, 12, and 20 mm rectangular hexahedra are measured repeatedly. For comparison, the results of measurements using the traditional measurement method and X. Bian's measurement method (adding the rotation angle and relaxing the position restrictions of the model) are also given.

Fig. 10 shows the real image and the point cloud image of the rectangular hexahedron. From the point cloud data, the upper and lower planes of the rectangular hexahedron are fitted, as shown in Fig. 11.
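The plane fitting and height evaluation can be sketched as follows: each face is fitted by a least-squares plane (the smallest singular vector of the centered point cloud gives the normal), and the average distance of the top-face points to the bottom-face plane gives the measured height. This is a sketch under the assumption that the point cloud has already been segmented into upper and lower faces.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane n.x + d = 0 through an (N, 3) point cloud."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                       # unit normal = direction of least variance
    return n, -float(n @ centroid)

def mean_height(top_points, bottom_points):
    """Average distance of the top-face points to the bottom-face plane."""
    n, d = fit_plane(bottom_points)
    return float(np.mean(np.abs(top_points @ n + d)))
```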


Figure 10: (a) Image of the rectangular hexahedron with prominent stripes, (b) the point cloud image of the rectangular hexahedron


Figure 11: The upper and lower planes of the rectangular hexahedron

In Tables 1 and 2, h_0 stands for the known height, h_avg stands for the average measured height, MAE stands for the mean absolute error obtained from multiple repeated measurements of the same rectangular hexahedron, X. Bian represents X. Bian's measurement method [13], and the word "traditional" is abbreviated to "Tra". On the whole, compared with the traditional technique and X. Bian's measurement method, the average height computed by the proposed approach is closer to the actual height, and the MAE obtained by repeated measurement of the objects using this method is lower than that of X. Bian's measurement method. Tables 1 and 2 show that the system-parameter measurement method proposed in this paper is more accurate than the traditional method and X. Bian's measurement method.



The above experiments show that the method has good 3D reconstruction results and improves the measurement accuracy.

6  Conclusion

In this paper, a calibration method is proposed to calculate the distance from the projector light center to the camera light center and the distance from the projector light center to the reference plane according to the external parameters obtained from the camera and camera-projector calibration processes. During the experiment, the technique adjusts the relative positions of the projector and camera and improves the calculation accuracy of the system parameters L and D. The experimental results show that the method is effective and practical. In addition, the method offers higher accuracy and operability than the traditional measurement method.

Acknowledgement: The authors acknowledge the support of teacher Weiming Li from Guilin University of Electronic Science and Technology.

Funding Statement: The work described in this paper is supported by the Foundation of the Jilin Province Department of Science and Technology under Grant YDZJ202201ZYTS531.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Optics and Lasers in Engineering, vol. 48, no. 2, pp. 133–140, 2010. [Google Scholar]

2. D. Liu, X. Qu, J. Dong, P. Zhou, Y. Cheng et al., “Context-aware biaffine localizing network for temporal sentence grounding,” in IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, pp. 11230–11239, 2021. [Google Scholar]

3. J. A. M. Rodríguez, “Micro-scale spherical and cylindrical surface modeling via metaheuristic algorithms and micro laser line projection,” Algorithms, vol. 15, no. 5, pp. 145, 2022. [Google Scholar]

4. J. Niu, Y. Fu, Q. Hu, S. Yang, T. Zhang et al., “A 3d measurement method based on coded image,” Computers, Materials & Continua, vol. 69, no. 2, pp. 1839–1849, 2021. [Google Scholar]

5. M. Thakur, C. J. Tay and C. Quan, “Surface profiling of a transparent object by use of phase-shifting talbot interferometry,” Applied Optics, vol. 44, no. 13, pp. 2541–5, 2005. [Google Scholar] [PubMed]

6. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Optics and Lasers in Engineering, vol. 48, no. 2, pp. 149–158, 2010. [Google Scholar]

7. Z. Zhang, P. Li, S. Zhao, Z. Lv, F. Du et al., “An adaptive vision navigation algorithm in agricultural IoT system for smart agricultural robots,” Computers, Materials & Continua, vol. 66, no. 1, pp. 1043–1056, 2021. [Google Scholar]

8. J. Zhang, K. Yu, Z. Wen, X. Qi and A. K. Paul, “3D reconstruction for motion blurred images using deep learning-based intelligent systems,” Computers, Materials & Continua, vol. 66, no. 2, pp. 2087–2104, 2021. [Google Scholar]

9. W. S. Zhou and X. Y. Su, “A direct mapping algorithm for phase-measuring profilometry,” Optica Acta International Journal of Optics, vol. 41, no. 1, pp. 89–94, 1994. [Google Scholar]

10. P. J. Tavares and M. A. Vaz, “Linear calibration procedure for the phase-to-height relationship in phase measurement profilometry,” Optics Communications, vol. 274, no. 2, pp. 307–314, 2007. [Google Scholar]

11. X. Mao, W. Chen and X. Su, “Improved Fourier-transform profilometry,” Applied Optics, vol. 46, no. 5, pp. 664–668, 2007. [Google Scholar] [PubMed]

12. Y. Xiao, Y. Cao and Y. Wu, “Improved algorithm for phase-to-height mapping in phase measuring profilometry,” Applied Optics, vol. 51, no. 8, pp. 1149, 2012. [Google Scholar] [PubMed]

13. X. Bian, F. Zuo and J. Cheng, “Analysis of a new phase and height algorithm in phase measurement profilometry,” Optical Review, vol. 25, no. 2, pp. 190–196, 2018. [Google Scholar]

14. L. Huang, P. S. Chua and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Applied Optics, vol. 49, no. 9, pp. 1539, 2010. [Google Scholar] [PubMed]

15. H. Du and Z. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Optics Letters, vol. 32, no. 16, pp. 2438–40, 2007. [Google Scholar] [PubMed]

16. X. Zhang, C. Li, Q. Zhang and D. Tu, “Rethinking phase height model of fringe projection profilometry: A geometry transformation viewpoint,” Optics & Laser Technology, vol. 108, pp. 69–80, 2018. [Google Scholar]

17. Q. Ma, Y. Cao, C. Chen, Y. Wan, G. Fu et al., “Intrinsic feature revelation of phase-to-height mapping in phase measuring profilometry,” Optics & Laser Technology, vol. 108, pp. 46–52, 2018. [Google Scholar]

18. J. -C. Kang, C. -S. Kim, I. -J. Pak, J. -R. Son and C. -S. Kim, “A new phase to height model in fringe projection profilometry by considering radial distortion of camera lens,” Optik, vol. 247, pp. 167895, 2021. [Google Scholar]

19. D. Liu, X. Qu, J. Dong and P. Zhou, “Reasoning step-by-step: Temporal sentence localization in videos via deep rectification-modulation network,” in Proc. of the 28th Int. Conf. on Computational Linguistics, Barcelona, Spain, pp. 1841–1851, 2020. [Google Scholar]

20. C. Yu and Q. Peng, “A Correlation-based phase unwrapping method for Fourier-transform profilometry,” Optics & Lasers in Engineering, vol. 45, no. 6, pp. 730–736, 2007. [Google Scholar]

21. D. Liu, X. Qu, X. -Y. Liu, J. Dong, P. Zhou et al., “Jointly cross-and self-modal graph attention network for query-based moment localization,” in Proc. of the 28th ACM Int. Conf. on Multimedia, New York, NY, USA, pp. 4070–4078, 2020. [Google Scholar]

22. W. Zhu, X. Li and Y. Shon, “Research on clothing simulation design based on three-dimensional image analysis,” Computers, Materials & Continua, vol. 65, no. 1, pp. 945–962, 2020. [Google Scholar]

23. Z. Zhang, “A flexible New technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. [Google Scholar]

24. J.-Y. Bouguet, “Camera calibration toolbox for Matlab,” [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib_doc/ [Google Scholar]

25. J. Hu, F. Zhang, Z. Li and H. Huang, “Research on indoor three dimensional measurement algorithm based on binocular technology,” Computer Measurement and Control, vol. 27, no. 9, pp. 66–70, 2019. [Google Scholar]

26. G. Falcao, N. Hurtos and J. Massich, “Plane-based calibration of a projector-camera system,” vibot master, 2008. [Online]. Available: https://procamcalib.googlecode.com/files/ProCam_Calib_v2.pdf [Google Scholar]




This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.