Multiplex

Multiplexing can be used to reduce the number of frames \(T\). This shortens the acquisition time at the cost of increased measurement uncertainty.

Spatial Division Multiplexing

In spatial division multiplexing (SDM) [1], the fringes for each direction are additively superimposed, which results in crossed fringe patterns, cf. Fig. 5. The amplitude \(B\) is halved, i.e. for each direction only half the signal strength is available. The number of frames \(T\) is halved.

../_images/SDM.png

Fig. 5 Spatial division multiplexing (SDM).
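A minimal NumPy sketch of this encoding step follows; the frame size, number of shifts and fringe frequencies are illustrative choices, not values prescribed above:

```python
import numpy as np

Y, X = 480, 640          # frame size (illustrative)
N = 4                    # number of phase shifts (illustrative)
nu_x, nu_y = 8, 8        # spatial periods across the screen (illustrative)

y, x = np.mgrid[0:Y, 0:X]

frames = []
for n in range(N):
    shift = 2 * np.pi * n / N
    # each direction keeps only half the offset and amplitude,
    # so the superimposed pattern stays within [0, 1]
    fringe_x = 0.25 + 0.25 * np.cos(2 * np.pi * nu_x * x / X - shift)
    fringe_y = 0.25 + 0.25 * np.cos(2 * np.pi * nu_y * y / Y - shift)
    frames.append(fringe_x + fringe_y)   # crossed fringe pattern

I = np.stack(frames)     # T = N frames instead of 2 * N
print(I.shape, I.min(), I.max())
```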

In the decoding stage, the recorded fringe pattern sequence \(I^*\) is Fourier-transformed and the directions are separated in frequency space. Because this is done within the camera frame of reference, the demultiplexed directions correspond to the encoded ones only when the camera and screen are well aligned, i.e. they must face each other directly. Otherwise, the decoded coordinate directions cannot be assigned to the screen axes correctly.
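A sketch of this separation for a single crossed frame, assuming the camera and screen face each other directly so that the recorded fringe axes coincide with the image axes:

```python
import numpy as np

def separate_directions(frame):
    """Split a crossed fringe pattern into its x- and y-fringe components
    by masking the 2D Fourier spectrum. Assumes camera and screen face
    each other directly, so the fringe axes coincide with the image axes."""
    F = np.fft.fftshift(np.fft.fft2(frame))
    ky, kx = np.indices(F.shape)
    ky = ky - F.shape[0] // 2
    kx = kx - F.shape[1] // 2

    # carrier peaks of the x-fringes lie near the horizontal frequency axis,
    # those of the y-fringes near the vertical one
    mask_x = np.abs(kx) > np.abs(ky)
    mask_y = np.abs(ky) > np.abs(kx)

    fringe_x = np.fft.ifft2(np.fft.ifftshift(F * mask_x)).real
    fringe_y = np.fft.ifft2(np.fft.ifftshift(F * mask_y)).real
    return fringe_x, fringe_y   # note: the DC offset A is removed by both masks
```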

Wavelength Division Multiplexing

In wavelength division multiplexing (WDM) [2], the shifts are multiplexed into the color channels, resulting in an RGB fringe pattern, cf. Fig. 6. Therefore, the number of shifts must be \(N = 3\). The number of frames \(T\) is cut into thirds.

../_images/WDM.png

Fig. 6 Wavelength division multiplexing (WDM).
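A minimal NumPy sketch of this packing; frame size and fringe frequency are illustrative:

```python
import numpy as np

Y, X = 480, 640                      # frame size (illustrative)
nu = 8                               # spatial periods across the screen (illustrative)
x = np.broadcast_to(np.arange(X), (Y, X))

# the N = 3 phase shifts go into the R, G and B channels of a single frame
rgb = np.stack(
    [0.5 + 0.5 * np.cos(2 * np.pi * nu * x / X - 2 * np.pi * n / 3) for n in range(3)],
    axis=-1,
)
print(rgb.shape)   # (Y, X, 3): one RGB frame replaces three grayscale frames
```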

This works best when an RGB-prism-based camera is used, because its spectral bands do not overlap and hence the RGB channels can be separated sharply. Additionally, a white balance has to be performed to ensure equal irradiance readings in all color channels.

Also, the effect of color absorption by the surface material cannot be neglected. This means that the test object itself must not have any color.

Overall, less light is available per pixel because it is divided into the three color channels. Therefore, it requires about 3 times the exposure time compared to grayscale patterns.

Spatial and wavelength division multiplexing can be used together [3]. If only one set per direction is used, i.e. \(K = 1\), only one frame is necessary, i.e. \(T = 1\), cf. Fig. 7. This allows single-shot applications to be implemented.

../_images/SDM%2BWDM.png

Fig. 7 Spatial and wavelength division multiplexing combined.
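A minimal sketch of the combined encoding, again with illustrative frame size and frequency:

```python
import numpy as np

Y, X = 480, 640                      # frame size (illustrative)
nu = 8                               # K = 1 set per direction (illustrative frequency)
y, x = np.mgrid[0:Y, 0:X]

def crossed(shift):
    """SDM: both directions superimposed, each with half the offset and amplitude."""
    return (0.25 + 0.25 * np.cos(2 * np.pi * nu * x / X - shift)
            + 0.25 + 0.25 * np.cos(2 * np.pi * nu * y / Y - shift))

# WDM: the three shifts go into the R, G and B channels -> a single frame, T = 1
frame = np.stack([crossed(2 * np.pi * n / 3) for n in range(3)], axis=-1)
print(frame.shape)   # (Y, X, 3)
```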

Frequency Division Multiplexing

In frequency division multiplexing (FDM) [4], [5], the directions \(D\) and the sets \(K\) are additively superimposed. Hence, the amplitude \(B\) is reduced by a factor of \(D * K\). This results in crossed fringe patterns if we have \(D = 2\) directions, cf. Fig. 8 and Fig. 9.

../_images/FDM_D.png

Fig. 8 Frequency division multiplexing (FDM). Two directions are superimposed.

../_images/FDM_DK.png

Fig. 9 Frequency division multiplexing (FDM). Two directions and two sets are superimposed.

Each set \(k\) per direction \(d\) receives an individual temporal frequency \(f_{d,k}\), which is used in temporal demodulation to distinguish the individual sets. A minimal number of shifts \(N_{min} \ge \lceil 2 * f_{max} \rceil + 1\) is required to satisfy the sampling theorem.
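A sketch of such an encoding with two directions and two sets per direction; the spatial and temporal frequencies below are illustrative choices, and the number of shifts follows from the stated sampling condition:

```python
import math
import numpy as np

Y, X = 480, 640
# spatial and temporal frequencies per (direction, set), illustrative values
nu = {("x", 0): 5, ("x", 1): 13, ("y", 0): 5, ("y", 1): 13}
f = {("x", 0): 1, ("x", 1): 2, ("y", 0): 3, ("y", 1): 4}

f_max = max(f.values())
N = math.ceil(2 * f_max) + 1           # N_min from the sampling theorem
D_times_K = len(nu)                    # amplitude of each component shrinks by this factor

y, x = np.mgrid[0:Y, 0:X]
coord = {"x": x / X, "y": y / Y}

frames = []
for n in range(N):
    frame = np.zeros((Y, X))
    for (d, k) in nu:
        phase = 2 * np.pi * nu[(d, k)] * coord[d] - 2 * np.pi * f[(d, k)] * n / N
        frame += (0.5 + 0.5 * np.cos(phase)) / D_times_K
    frames.append(frame)

I = np.stack(frames)                   # shape (N, Y, X), values within [0, 1]
print(N, I.shape)
```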

If one wants a static pattern, i.e. one that remains congruent when shifted, the spatial frequencies must be integers: \(\nu_i \in \mathbb{N}\), must not share any common divisor except one: \(gcd(\nu_i) = 1\), and the temporal frequencies must equal the spatial ones: \(\nu_i = f_i\). With static/congruent patterns, one can realize phase shifting by moving printed patterns [6].
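These conditions can be checked directly; a small sketch with illustrative frequencies:

```python
from functools import reduce
from math import gcd

nu = [3, 5, 7]          # spatial frequencies (illustrative)
f = [3, 5, 7]           # temporal frequencies

is_static = (
    all(isinstance(v, int) and v > 0 for v in nu)    # integer spatial frequencies
    and reduce(gcd, nu) == 1                         # no common divisor except one
    and list(f) == list(nu)                          # temporal frequencies equal spatial ones
)
print(is_static)   # True
```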

Fourier Transform Method

If only a single frame is recorded using a crossed fringe pattern, the phase signal introduced by the object’s distortion of the fringe pattern can be extracted with a purely spatial analysis by virtue of the Fourier-transform method (FTM) [7]:

The recorded phase consists of a carrier with the spatial frequency \(\nu_r\) (note that \(\nu_r\) denotes the spatial frequency in the recorded camera frame; hence \(\nu\) and \(\nu_r\) are related by the imaging of the optical system but are not identical): \(\varPhi_r = \varPhi_c + \varPhi_s = 2 \pi \nu_r x_c + \varPhi_s\), with \(x_c\) being the coordinate in the camera frame. If the offset \(A\), the amplitude \(B\) and the signal phase \(\varPhi_s\) vary slowly compared with the variation introduced by the spatial carrier frequency \(\nu_r\), i.e. the surface is rather smooth and has no sharp edges, and the spatial carrier frequency is high enough, i.e. \(\nu_r \gg 1\), their spectra can be separated and therefore filtered in frequency space.

For this purpose, the recorded fringe pattern is Fourier-transformed using the two-dimensional fast-Fourier-transform (2DFFT) algorithm - hence the name - and processed in its spatial frequency domain. There, the Fourier spectra are separated by the carrier frequency \(\nu_r\), as can be seen in Fig. 10. We filter out the background variation \(A\), select either of the two spectra on the carrier, and translate it by \(\nu_r\) on the frequency axis towards the origin.

../_images/FTM.png

Fig. 10 In this image, the spatial frequency \(\nu_r\) is denoted as \(f_0\). (A) Separated Fourier spectra; (B) single spectrum selected and translated to the origin. From [8].

Again using the 2DFFT algorithm, we compute the inverse Fourier-transform and take the complex logarithm of the result. Now we have the signal phase \(\varPhi_s\) in the imaginary part, completely separated from the unwanted amplitude variation \(B\) in the real part. Subsequently, a spatial phase-unwrapping algorithm may be applied to remove any remaining phase jumps.
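A minimal sketch of this pipeline, assuming a single grayscale frame with a horizontal carrier of roughly `nu_r` periods per image width; the band-pass half-width is an illustrative tuning choice, and taking the angle of the complex result is equivalent to taking the imaginary part of its complex logarithm:

```python
import numpy as np

def ftm_phase(I_rec, nu_r, half_width=0.25):
    """Extract the wrapped signal phase from a single fringe image with a
    horizontal carrier of nu_r periods per image width: isolate one of the
    two carrier spectra, translate it to the origin and take the angle of
    the inverse transform."""
    Y, X = I_rec.shape
    F = np.fft.fftshift(np.fft.fft2(I_rec))

    ky, kx = np.indices((Y, X))
    ky = ky - Y // 2
    kx = kx - X // 2

    # band-pass around +nu_r: suppresses the background A (at DC)
    # and the mirrored spectrum at -nu_r
    mask = (np.abs(kx - nu_r) < half_width * nu_r) & (np.abs(ky) < half_width * nu_r)

    # translate the selected spectrum by nu_r towards the origin
    F_sel = np.roll(F * mask, -int(nu_r), axis=1)

    c = np.fft.ifft2(np.fft.ifftshift(F_sel))
    return np.angle(c)   # wrapped signal phase; apply spatial unwrapping afterwards
```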

Phase unwrapping is uncritical here if the signal-to-noise ratio is higher than 10 and the gradients of the signal phase \(\varPhi_s\) are less than \(\pi\) per pixel. However, this only yields a relative phase map; absolute positions remain unknown.