
Depth formulation assessment of 1D light field display using self-interference incoherent digital holography

Pages 187-196 | Received 29 Mar 2023, Accepted 07 Aug 2023, Published online: 22 Aug 2023

Abstract

Light field displays based on lenticular lenses are currently the most commercially successful three-dimensional (3D) display systems, although they provide only one-dimensional (1D) parallax. In this paper, we outline criteria for evaluating the voxel formulation of a 1D light field display system and experimentally measure its expressible depth range using an incoherent holographic camera. We use self-interference incoherent digital holography with a geometric phase lens to obtain the phase profile of the light field. To analyze the characteristics of the light field display, we reconstruct the incoherent hologram and apply an autofocus algorithm to evaluate the sharpness of the formulated light field.

1. Introduction

Light field displays reproduce a three-dimensional (3D) image by modulating the direction of light rays with an optical element such as a lens or pinhole [Citation1–3]. As the pixel density of the display panel increases, the image quality of light field displays improves further. Novel techniques such as temporal multiplexing, foveated rendering, and eye tracking can improve performance even more [Citation4–7]. Furthermore, the accommodation control of light field displays enables compact head-mounted displays for virtual reality and augmented reality [Citation8–10].

Multiview displays and 1D integral imaging are light field displays that provide only horizontal parallax using a lenticular lens. They make it easy to implement high-resolution 3D displays on current rectangular flat-panel display (FPD) systems [Citation11–14]. Owing to these advantages, lenticular lens-based light field displays have become the most widely commercialized glasses-free 3D displays, and consumer products such as Looking Glass and Dimenco have been launched in the market. The main difference between a multiview display and integral imaging is the width of the elemental image: in general, the elemental image of a multiview display is wider than the lens pitch, whereas in integral imaging the two widths are equal. Furthermore, two different approaches improve the 1D light field: super multiview, which presents multiple views to the pupil to attain smooth motion parallax, and eye-tracking-based approaches, which combine dynamic eye tracking with a narrow viewing region to enhance the resolution of the light field display [Citation12, Citation13].

For decades, the characteristics of light field displays, including resolution, field of view, and eye box, have been studied. Geometric and wave optics-based analyses of lens arrays and sampled display pixels, concerning the trade-off between angular resolution and expressible depth range, have also been discussed [Citation15–18]. For 1D light field displays, side effects arising from the periodicity of lenticular lenses, such as crosstalk, sim noise, and moiré effects, have been reported [Citation19–22]. Moreover, in terms of human visual perception, the 3D images generated by a light field display enable an accommodation response under proper ray overlap; in particular, the super-multiview condition, which constructs 3D images from two or more light rays entering the pupil of an observer, induces the accommodation response [Citation23–26]. However, experimental demonstrations of a light field display's theoretical limitations using a conventional camera are highly limited, and it is hard to define criteria for quantification. Several studies have measured light field display characteristics, such as resolution and depth of field, by indirect approaches such as the knife-edge method [Citation27–29]. Nevertheless, the volumetric pixels, or voxels, formulated in the intermediate or virtual area are the hardest characteristic to measure with image sensor-based conventional display quantification systems.

One possible approach to measuring the imaging of light is to use interference. However, most light field displays are based on incoherent illumination, such as light-emitting diodes, so an interferometer that can acquire interference patterns under an incoherent light source is required. Self-interference incoherent digital holography (SIDH) is a promising technique for recording complex light waves: it splits the incident light into two waves that are modulated differently and made to interfere with each other, and it can measure the response of the light field and the characteristics of integral imaging [Citation30–35]. In this paper, we propose an assessment method for voxels in 1D light field displays using geometric phase lens-based self-interference incoherent digital holography (GP-SIDH). Overlaps between light rays enable the accommodation response, and super multiview can offer a proper overlap condition, as in integral imaging. However, unlike general integral imaging, 1D light field displays present parallax in only a single direction, so it is difficult to evaluate the voxels with conventional camera-based approaches. To address this obstacle, SIDH captures the represented 3D images and evaluates the sharpness of their edges. Our SIDH utilizes a geometric phase (GP) lens, a liquid crystal-based polarization-selective passive optical element, as both wavefront divider and phase modulator [Citation30, Citation36–39]. This enables wave optics-based propagation analysis, and an autofocus algorithm based on edge differentiation is used to evaluate the acquired 1D light field holograms. Sobel filtering is a classical and reliable edge-detection algorithm that quantifies sharpness through convolution with the Sobel operator [Citation40, Citation41]. We design test patterns for edge sharpness assessment and demonstrate our prototype 1D light field display built from a mobile display.

2. Proposed method

2.1. Voxel formulation of 1D light field display

1D light field displays aim to represent autostereoscopic images. However, autostereoscopy has a limited viewing zone and viewing angle. Therefore, super multiview, which produces more than two views within the observer's pupil, has been developed. In terms of ray optics, a dense ray distribution can compose ray-overlap cross-sections that act similarly to the voxels formulated in integral imaging. Thus, super multiview can be considered a focused-mode integral imaging system, in which the gap between the micro-lens array (MLA) and the display panel equals the focal length of the MLA. In terms of integral imaging, the image degradation, which grows with the expressed depth of a 1D light field display, can be explained by a simple geometric relation of light rays, and the marginal depth range can be defined through this relation.

The sampled pixels of the display panel are the major cause of image quality degradation as the expressed depth increases. Figure 1 illustrates the schematic diagram of ray overlapping in a 1D light field display. The depicted display panel has seven pixels behind each elemental lens, and each pixel can represent a specific direction. The marginal ray from the peripheral elemental lens contributes to the formulation of the maximum-depth voxel, as shown in Figure 1(a). The marginal depth range D can be derived from the number of pixels in the elemental lens N_p and the gap g between the lens and the display panel:

\[ D = g\,(N_p - 1) \tag{1} \]

On the other hand, if the disparity of each elemental image is beyond the marginal depth range, the location of the marginal elemental lens shifts, and it forms a cross-section with the optical axis, as presented in Figure 1(b). This overlap still serves as a voxel; however, the ray from the view of the nearby elemental lens is parallel to the marginal rays, resulting in a blurred image. The blur can be suppressed by blocking the misaligned pixels, but the blocked pixels then produce missing viewpoints in the reconstructed images.
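As a quick numeric illustration of Equation (1), a minimal sketch in Python follows; the gap value is an assumption for illustration, not the prototype's specification, and the prototype's 45 views (see Section 3) are treated here as N_p:

```python
# Marginal depth range of a 1D light field display, Eq. (1): D = g * (N_p - 1).
# The gap value below is an illustrative assumption, not a measured specification.

def marginal_depth_range(gap_mm: float, pixels_per_lens: int) -> float:
    """Maximum depth over which rays from all elemental lenses still overlap exactly."""
    return gap_mm * (pixels_per_lens - 1)

g = 0.5    # lens-to-panel gap in mm (assumed)
n_p = 45   # pixels (views) per elemental lens; the prototype provides 45 views
print(f"D = {marginal_depth_range(g, n_p):.1f} mm")  # 0.5 * 44 = 22.0 mm
```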

Figure 1. Schematic diagram of ray overlaps and voxel formulation of 1D light field display. (a) Ray overlapping in marginal depth range; (b) out of marginal depth range and blurred image.


Note that the marginal depth range considers only the exact overlapping of light rays in terms of geometric optics. There are further degradation factors, including lens aberrations and border effects. Aberrations in particular disperse the light rays and diminish the expressible depth range of light field displays. In this paper, we consider only the theoretical maximum that can be calculated by geometric optics.

2.2. Voxel estimation based on self-interference incoherent digital holography

To estimate the overlapping area and voxel formulation of the 1D light field display in an incoherent holographic manner, the direction of the light rays and the image formation must be considered simultaneously. Figure 2 illustrates the proposed voxel estimation technique. The proposed method estimates the edge sharpness of the formulated voxels with GP-SIDH. The measurement is conducted by directly capturing the formulated voxels with GP-SIDH, as shown in Figure 2(a). A single-color rectangular pattern is designed to acquire edge sharpness effectively, as shown in Figure 2(b). Because parallax is provided perpendicular to the lenticular lens direction, edge sharpness is evaluated along the direction in which parallax is provided. The voxels produced by the 1D light field display have an edge sharpness that varies with their formulated depth, and it is evaluated through holographic reconstruction of the captured light field holograms. Figure 2(c–e) shows camera-captured results of the 1D light field display. As the disparity of each view image increases, the formulated depth also increases. However, the resolution of the formulated voxels decreases linearly, and at some point they cannot be distinguished and become blurred due to the aforementioned theoretical depth limitation.

Figure 2. Schematic diagram of proposed voxel estimation system and pattern design. (a) Schematic diagram of measuring process of 1D light field display; (b) voxel estimation pattern design and camera capture result of 1D light field display; (c) disparity of 5 pixels; (d) 10 pixels; and (e) 15 pixels.


The principle of GP-SIDH is the interference of two spherical wavefronts divided by the positive and negative focal lengths of the GP lens [Citation36–38]. Let the positive focal length of the GP lens be +f_GP and the negative focal length be −f_GP, and let ψ_p and ψ_n denote the spherical waves modulated by each. The formulated hologram is given by the following equations:

\[ H(x, y; z) = \iiint I_0(x_0, y_0, z_0)\, H_i(x, y; x_0, y_0, z_0)\, \mathrm{d}x_0\, \mathrm{d}y_0\, \mathrm{d}z_0 \tag{2} \]

\[ H_i = \left( I_{\delta(0)} - I_{\delta(\pi)} \right) - i \left( I_{\delta(\pi/2)} - I_{\delta(3\pi/2)} \right) \propto \psi_p \psi_n^{*} \tag{3} \]

where I_{δ(θ)} denotes the intensity pattern recorded with a phase shift of θ between the two waves.
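As a minimal sketch of the four-step phase-shifting combination in Equation (3), assuming the four phase-shifted intensity frames have already been demultiplexed from the polarized image sensor into NumPy arrays:

```python
import numpy as np

def complex_hologram(i_0, i_90, i_180, i_270):
    """Four-step phase shifting, Eq. (3): the DC bias and twin-image terms cancel,
    leaving the complex-valued self-interference hologram."""
    return (i_0.astype(np.float64) - i_180) - 1j * (i_90 - i_270)
```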

The reconstructed hologram of a voxel at depth z_r is calculated by the angular spectrum method [Citation42]:

\[ I(x, y; z_r) = \left| \mathcal{F}^{-1}\!\left\{ \mathcal{F}\{ H(x, y; 0) \} \exp\!\left( j 2\pi z_r \sqrt{(1/\lambda)^2 - f_x^2 - f_y^2} \right) \right\} \right| \tag{4} \]

\[ z_r = \frac{(z_s + z_h)\left( f_{GP}(z_s + z_h) - z_s z_h \right)}{z_s^2} \tag{5} \]
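The following sketch implements Equations (4) and (5) with NumPy FFTs; the pixel pitch, wavelength, and distances are placeholders for the actual capture parameters:

```python
import numpy as np

def angular_spectrum(hologram, z_r, wavelength, pitch):
    """Propagate a complex hologram by z_r with the angular spectrum method, Eq. (4).
    All lengths (z_r, wavelength, pitch) must share the same unit."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - fxx ** 2 - fyy ** 2
    kernel = np.exp(2j * np.pi * z_r * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0.0  # drop evanescent components
    return np.abs(np.fft.ifft2(np.fft.fft2(hologram) * kernel))

def reconstruction_distance(z_s, z_h, f_gp):
    """Eq. (5): reconstruction distance from voxel-to-lens distance z_s,
    lens-to-sensor distance z_h, and GP lens focal length f_gp (same unit)."""
    return (z_s + z_h) * (f_gp * (z_s + z_h) - z_s * z_h) / z_s ** 2
```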

The reconstruction depth is determined by the distance z_s from the voxel to the GP lens, the distance z_h from the GP lens to the image sensor, and the focal length of the GP lens [Citation34]. The reconstruction distance varies quadratically with the position of the voxel, and within certain distance conditions the voxel can appear at a magnified depth relative to its actual depth. After calculating the intensity profile at the reconstruction distance through the angular spectrum method, the sharpness of the voxel at the corresponding depth can be analyzed. A Sobel filtering-based differentiation method is utilized to evaluate the direction-selective edge sharpness. The edge sharpness metric, or focus metric, is calculated as follows:

\[ M_z = \sum_{n}^{N} \sum_{m}^{M} \left( I(m, n; z_r) * S \right)^2 \tag{6} \]

\[ S = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix} \tag{7} \]

where M_z is the focus metric at the reconstruction depth. The Sobel operator is a 3 × 3 operator that can be applied separately for the horizontal and vertical directions. The conventional autofocus method calculates the focus metric by performing Sobel filtering in both the horizontal and vertical directions using the operator and its transpose [Citation41]. In contrast, we use only the vertical-direction operator to measure the focus metric of 1D integral imaging.
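A minimal sketch of the direction-selective focus metric of Equations (6) and (7) follows; whether this 3 × 3 kernel or its transpose counts as the 'vertical' operator depends on the image orientation convention, so the choice below is an assumption:

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel operator of Eq. (7); it responds to intensity gradients along the rows.
SOBEL = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=np.float64)

def focus_metric(intensity):
    """Eq. (6): sum of squared responses to the direction-selective Sobel filter."""
    response = convolve2d(intensity, SOBEL, mode="valid")
    return float(np.sum(response ** 2))
```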

Note that the focus metric involves two main considerations. First, the actual distance of a formulated voxel can be found from the maximum of the focus metric, as in conventional autofocus algorithms. Second, the blur degradation of the voxel makes the focused depth ambiguous beyond a certain point. In other words, the maximum expressible depth range can be defined as the depth beyond which focus becomes difficult to distinguish due to degradation. The ambiguity of focus is also affected by the axial resolution and depth of focus of the holograms. The focal length and numerical aperture of the GP lens determine the axial resolution of GP-SIDH. These parameters are scalable and can be adjusted based on the human visual system.
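To make both considerations concrete, a focus sweep over candidate reconstruction distances might look like the sketch below, reusing the angular_spectrum and focus_metric functions from the earlier sketches; the peak-versus-noise score is our own crude illustration, not the paper's metric:

```python
import numpy as np

def focus_sweep(hologram, z_candidates, wavelength, pitch):
    """Scan reconstruction distances and locate the focus metric peak.
    A peak that does not rise clearly above the defocus-ripple floor marks
    the ambiguous region beyond the expressible depth range.
    Distances must share the same unit as wavelength and pitch."""
    metrics = np.array([focus_metric(angular_spectrum(hologram, z, wavelength, pitch))
                        for z in z_candidates])
    best_z = z_candidates[int(np.argmax(metrics))]
    peak_score = (metrics.max() - metrics.mean()) / metrics.std()  # crude peak-vs-noise score
    return best_z, metrics, peak_score
```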

3. Experimental results

In this section, the prototype 1D light field display is measured by the incoherent holographic camera. Figure 3 depicts the experimental setup. The prototype 1D light field display was implemented with a mobile display and a customized lenticular lens from Samsung Display. The mobile display is a Galaxy S8 with a resolution of 2960 × 1440, and the lenticular lens is 0.2849 mm wide. The light field display is designed as a slanted-type super multiview display with 45 views. The test patterns are slanted to match the angle of the lenticular lens and use only green pixels to avoid wavelength-dependent effects. The experimental pattern image is a 600 × 400 rectangle, and the disparity between elemental images varied from 0 to 20 pixels. The holographic camera system consists of a relayed polarizer, the GP lens, and a polarized image sensor [Citation30, Citation39]. The polarizer is determined by whether the GP lens serves as a half-wave plate or a quarter-wave plate; in this experiment, we utilized a circular polarizer. The customized GP lens has a focal length of 275 mm at a wavelength of 525 nm. The polarized image sensor has a resolution of 2448 × 2048 with a pixel pitch of 3.45 µm. The resultant resolution after the parallel phase-shifting method is 1024 × 1024. The distance between the display and the holographic camera module is 200 mm.

Figure 3. Experimental setup to measure the prototype light field display with GP-SIDH system.


Figure 4 details the numerical reconstruction and Sobel-filtered results of the proposed 1D light field display assessment pattern according to the view disparity. The depths of the produced pattern are 11.2, 22.4, and 33.6 mm for elemental image disparities of 5, 10, and 15 pixels, respectively. The reconstruction distances are 180, 200, 220, 240, and 260 mm. The top row displays the numerical reconstruction results, the middle row shows the horizontal Sobel-filtered results, and the bottom row presents the vertical Sobel-filtered results. As the disparity increases, the blurred area at the periphery of the test pattern expands. This variation is amplified by the horizontal Sobel filtering, as evident in the middle row. In contrast, the vertical Sobel filtering results exhibit a consistent pattern regardless of disparity. Additionally, the maximum value of the focus metric decreases as the depth increases, indicating that the image quality degradation from depth representation can be assessed by the decrease in the focus metric value.

Figure 4. Numerical reconstruction results and Sobel filtering results of the proposed 1D light field display assessment rectangle pattern. Each top row presents intensity reconstruction, the middle row shows horizontal Sobel operator convolution results, and the bottom row displays vertical Sobel operator convolution results. (a) The disparity of 5 pixels, (b) 10 pixels, and (c) 15 pixels.


Figure 5 shows the focus metric variation with reconstruction distance. The focus metric corresponds to the sharpness of an image, indicating its level of clarity. When a particular region exhibits high sharpness at some reconstruction distance, that region is in focus. Conversely, when the focus metric cannot be discriminated from the defocus noise, which resembles the ripple artifacts in holograms, the region lacks clarity. Similar to defocus observed by the human eye, if the point with the highest focus metric cannot be distinguished from the noise level of the hologram, the corresponding disparity or depth is not resolved.

Figure 5. Focus metric of 5, 10, and 15 disparity patterns according to the reconstruction distance.


For a 3D display, the expressed voxels must also extend to virtual images. Figure 6 shows the holographic camera capture and reconstruction results for real and virtual images of the 1D light field display. The disparity and depth are ±5 pixels and ±11.2 mm, respectively. The test pattern is a thin line pattern chosen to accentuate the accommodation measurement. Figure 6(d–f) shows that SIDH can capture the accommodation response of the 1D light field display for both real and virtual images.

Figure 6. Real and virtual voxel capturing results of the 1D light field display. (a) Intensity camera result; (b) phase-angle part; (c) amplitude part of the captured hologram; (d) numerical reconstruction result of backward focus; (e) middle focus; and (f) frontward focus.


Figure 7 demonstrates the focus metric value according to the voxel depth and reconstruction distance. Both real-mode and virtual-mode voxels are measured. The reconstruction distance ranges from 0 to 500 mm, and the intended voxel depth ranges from 0 to 100 mm, with disparities of 0–20 pixels. As the voxel depth increases, the variance of the focus metric decreases. In numerical reconstruction, the periodic ripple-like defocus noise affects the Sobel filtering result. Therefore, the decreasing variance of the focus metric indicates that the autofocus algorithm can no longer distinguish voxels from noise, and this ambiguous location becomes the limit of the 1D light field display.

Figure 7. Focus metric according to the voxel location and reconstruction distance. (a) Real mode and (b) virtual mode.


We compare the box plot and the maximum focus metric value, as shown in Figure 8. The grey dotted lines represent the theoretical depth range calculated by the aforementioned marginal depth range equation. The maximum of each focus metric value and the box plot saturate as the voxel location increases, and the saturation starts near the theoretical depth range. The maximum focus metric values, illustrated as red and blue lines over the voxel location, indicate that the Sobel filter can recognize the formulated voxel despite the noise errors of the hologram. However, since the theoretical depth limit is calculated from the lenticular lens specification and the gap to the display panel of the 1D light field display, some physical error should be taken into account. Nevertheless, the saturation behavior observed in the Sobel filtering results provides an experimental marginal depth range for an unknown 1D light field display.

Figure 8. Box plot and maximum focus metric value of the evaluated prototype 1D light field display. The red and blue lines illustrate the maximum value of the focus metric at each represented voxel.


Compared with conventional plenoptic camera systems such as TOMBO [Citation42, Citation43], which extract depth-map data from multiple captures of a 3D scene, the major distinction of our holographic camera system in light field display assessment is its use of diffraction-based wave propagation. The proposed voxel assessment is based on the depth of focus and axial resolution of the incoherent holographic camera, which vary with the configuration of the measuring system. It should be noted that the axial resolution of our holographic camera system is affected by the focal length of the GP lens and the field of view of the holographic camera. These parameters can be optimized for the display system under test, and if the optimization objective is tailored to the human visual system, a user experience-based 3D image assessment system can be composed.

4. Conclusion

In summary, we presented a 1D light field display evaluation method using an incoherent holographic camera. We assume that a multiview display with sufficiently many viewpoints serves as integral imaging and that the dense light ray distribution creates an accommodation response even in a single direction. This horizontal parallax can be evaluated by measuring the edge sharpness of rectangular patterns. Incoherent holograms of the light field were acquired, and the edge sharpness was assessed by reconstructing the holograms and applying Sobel filtering. The experimental results demonstrate the light ray overlaps in the 1D light field display, and the autofocus algorithm can evaluate the maximum expressible depth range by finding the depth at which voxels become indistinguishable. Note that both virtual and real images can be evaluated, which is a powerful advantage for a 3D display quantification system. In future work, we plan to optimize the parameters of GP-SIDH for the human visual system, which would make it feasible to evaluate 3D displays from the perspective of human vision.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was supported by Samsung Display Co., Ltd.

Notes on contributors

Youngrok Kim

Youngrok Kim received his B.S. and M.S. degrees in information display from Kyung Hee University, Seoul, Korea, in 2020 and 2022, respectively. He is currently working on a Ph.D. degree in the Department of Information Display at Kyung Hee University. His research interests include 3D display and digital holography.

Hyunsik Sung

Hyunsik Sung received his B.S. and M.S. degrees in information display from Kyung Hee University, Seoul, Korea, in 2012 and 2015, respectively. He is currently working on a Ph.D. degree in the Department of Information Display at Kyung Hee University. His current research interests include 3D displays and devices for floating display systems.

Wonseok Son

Wonseok Son received his B.S. degree in information display from Kyung Hee University, Seoul, Korea, in 2023. He is currently pursuing an M.S. degree in the Department of Information Display at Kyung Hee University, Seoul, Korea. His research interests include 3D display and digital holography.

Dong-Woo Seo

Dong-Woo Seo received his B.S. degree in electronic engineering from Kumoh National Institute of Technology, Gumi, Korea, in 2023. He is currently pursuing an M.S. degree in the Department of Information Display at Kyung Hee University, Seoul, Korea. His research interests are in 3D display and digital holography.

Chihyun In

Chihyun In received his B.S. degree in information display from Kyung Hee University, Seoul, Korea, in 2023. He is an undergraduate researcher at the Display Optical Application Laboratory at Kyung Hee University, Seoul, Korea. His research interests include 3D display and VR/AR display.

Sung-Wook Min

Sung-Wook Min received his B.S. and M.S. degrees in electrical engineering from Seoul National University, Republic of Korea, in 1995 and 1997, respectively, and his Ph.D. from the same university in August 2004. He has been a faculty member in the Department of Information Display at Kyung Hee University since 2007. His recent research interests include incoherent holographic cameras, 3D imaging methodology, and advanced display systems.

References