## Abstract

The measurement of aspheric and free-form surfaces in a non-null test configuration has the advantage that no compensation optics are required. However, if a surface is measured in a non-null test configuration, retrace errors are introduced into the measurement. We describe a method to calibrate the test space of an interferometer, enabling the compensation of retrace errors. The method is effective even for strong deviations from the null test configuration of up to several hundred waves, enabling the fast and flexible measurement of aspheres and free-form surfaces. In this paper we present the application of the method to the calibration of the Tilted Wave Interferometer. Furthermore, the method can be generalized to the calibration of other setups.

© 2014 Optical Society of America

## 1. Introduction

Aspheric surfaces are an enabling technology for many applications [1]. The advantage of aspheric optics, compared to classical spherical optics, is the greatly increased design freedom. This allows the construction of more compact optical systems while simultaneously increasing the optical performance. As a logical consequence, aspheric optics are widely used in modern optical systems, ranging from low cost imaging optics in smartphones to high end lens systems in the fields of lithography and space applications. With free-form optics, which need not exhibit any symmetry at all, even more sophisticated optical systems can be realized. The fabrication technology for such aspheric and free-form surfaces has made major progress in recent years. However, for the control of the process, metrology is needed. The state of the art in high accuracy asphere metrology is the use of computer generated holograms (CGH) [2]. The drawback of such holograms is that for every design shape to be tested, a matching CGH has to be fabricated, which is costly and time consuming. Other technologies, like stitching or scanning interferometry, are more flexible [3, 4]. However, since the surface under test has to be moved relative to the interferometer during the measurement, a single measurement takes several minutes. Furthermore, the final measurement result has to be stitched together from the single measurements, requiring sophisticated algorithms. The Tilted Wave Interferometer (TWI) [5], which was developed at the University of Stuttgart, is a flexible measurement technique for aspheres as well as free-form surfaces. It combines the high accuracy of interferometric measurements with a high dynamic range of up to 10° gradient deviation from the spherical form, without the need for compensation optics like CGHs. Furthermore, like all full-field interferometric methods, it has a high lateral resolution.
An outstanding feature is its short measurement time of well below one minute, which is achieved by highly parallelized data acquisition. The unique combination of these features makes the TWI a perfect candidate for integration into the manufacturing chain of asphere and free-form production.

## 2. Setup of the Tilted Wave Interferometer

In this paper we focus on the calibration of this non-null interferometer. For convenience we briefly review the basic setup of the Tilted Wave Interferometer (see Fig. 1). The beam of a coherent laser source L is divided into a test and a reference wave by a polarizing beam splitter BS2. The wave illuminates a micro lens array that is followed by a pinhole array. These two parts serve as a point source array for the test wavefronts. The spherical wavefront from each point source is, after passing the beam splitter BS1, collimated by C2, resulting in a set of plane wavefronts with different, well-defined amounts of tilt. The tilted wavefronts are transformed into spherical wavefronts by the objective lens O to compensate the basic spherical form of the surface under test. In the case of surfaces with a best fit sphere of infinite radius, no transmission sphere is needed. After reflection at the surface under test, the wavefronts propagate back to the beam splitter, where they are reflected into the camera arm of the interferometer. In the Fourier plane [6] an aperture stop A is located to block all light that would generate fringes with a density violating the Nyquist criterion [7]. After the aperture, the light passes the imaging optics IO and interferes with the reference wavefront on the camera C. The distance between the sources in the point source array is chosen to cover the whole surface under test without gaps and with a slight overlap. To avoid interference between neighboring sources, the measurement is divided into four steps, with only every fourth source enabled in each step. The switching between the sources is realized by a simple aperture array AA that is moved in front of the micro lenses and blocks every second source in each row and column. The main difference between this approach and the stitching type interferometers is that the acquisition of the data is highly parallelized, since all test wavefronts are applied to the surface in only four steps.
Further, the surface under test does not have to be moved during the measurement process. Both these advantages lead to a very short measurement time of well below a minute.
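The four-step source multiplexing described above can be sketched in a few lines. This is an illustration only, not the authors' implementation; the 4 × 4 grid size and the (row, column) indexing are assumptions:

```python
# Sketch of the four-step source multiplexing: the aperture array enables
# only every second source in each row and column, so the full point-source
# array is covered in four steps without neighboring sources being active
# at the same time. Grid size (4 x 4) is an assumption for illustration.
def multiplex_steps(rows, cols):
    """Return, for each of the 4 steps, the enabled (row, col) sources."""
    steps = {s: [] for s in range(4)}
    for r in range(rows):
        for c in range(cols):
            steps[2 * (r % 2) + (c % 2)].append((r, c))
    return steps

steps = multiplex_steps(4, 4)
# Each step enables a quarter of the sources ...
assert all(len(v) == 4 for v in steps.values())
# ... and no two enabled sources in a step are direct neighbors.
for srcs in steps.values():
    for (r1, c1) in srcs:
        for (r2, c2) in srcs:
            assert (r1, c1) == (r2, c2) or abs(r1 - r2) + abs(c1 - c2) > 1
```

Within each step the enabled sources are separated by one blocked source in every row and column, which is exactly the condition that prevents interference between neighboring patches.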

## 3. Calibration

The intrinsic non-null setup requires a calibration [8]. In a null-test configuration, a ray impinges perpendicularly on the surface and takes (after being reflected by the surface) the same path back through the interferometer. Therefore it is sufficient to calibrate the OPD that is introduced by the interferometer for each pixel coordinate on the camera. The calibration can be expressed as a 2-dimensional phase map *ϕ* = *f* (*x*, *y*), with *x* and *y* being the coordinates of the pixel and *f* being some polynomial function or a look-up table containing the OPD correction value for each pixel. In a non-null test, spatial and field dependencies are no longer decoupled, and this simple relationship has to be replaced by a more involved one. The ray impinges on the surface at an angle that may differ from the perpendicular case, and therefore the ray may take an arbitrary path through the interferometer, introducing retrace errors into the measurement. As a result, a 2-dimensional calibration is not sufficient, since the introduced OPD also depends on the field angle, i.e. on how the light passes through the setup.

This can be described as a 4-dimensional dependency *ϕ* = *f* (*x*, *y*, *m*, *n*), where two dimensions (*x* and *y*) cover the spatial dependency of the phase, as in the null-test example, and the other two dimensions (*m* and *n*) cover the field dependency. This is equivalent to the Hamiltonian point characteristic of the system [9]. With such a 4-dimensional description any possible ray through the system can be described, as long as we choose the position of the reference plane in a region without any caustic.

#### 3.1. Mathematical model

In our case the calibration model of the interferometer is split into two parts for practical reasons. The first part describes the illumination from the point sources to the region behind the last surface of lens O, called the test area, since this is where the surface under test is positioned. The second part describes the imaging from the test area to the camera. In the following, we describe a mathematical formulation that allows the calculation of all OPDs that can be observed when an arbitrary surface under test is placed in the test area. The illumination part is denoted with a subscript *Q* and describes the OPD from the point sources to a reference plane *E_{Q}* that is defined in the test area of the interferometer (see Fig. 2). In the case of a classical interferometer with a single test wavefront, a 2-dimensional calibration depending on the coordinates on *E_{Q}* is sufficient for this part. In the case of the Tilted Wave Interferometer, the OPD at *E_{Q}* depends not only on the position on *E_{Q}* but also on the position of the microlens. The OPD of a ray from a source to the reference plane *E_{Q}* can be written as

$$W_Q(X,Y) = \sum_i q_i\, Z_i(X,Y), \tag{1}$$

where *X* and *Y* are the normalized coordinates on *E_{Q}*, *Z_{i}* are a set of Zernike polynomials, with *i* being Noll's sequential index [10], and *q_{i}* the polynomial coefficients. For a classical interferometer, *q_{i}* is a vector containing the coefficients of the Zernike polynomials describing the OPD in *E_{Q}*. In the case of the Tilted Wave Interferometer, *q_{i}* can be written as

$$q_i = \sum_j Q_{ij}\, Z_j(M,N), \tag{2}$$

where *M*, *N* is the normalized source position on the micro lens array, *j* is Noll's sequential index, and *Q_{ij}* is a two-dimensional matrix containing the polynomial coefficients describing the spatial and field dependency of the OPD. From Eqs. (1) and (2) we get for the OPD at *E_{Q}* [11]

$$W_Q(X,Y,M,N) = \sum_i \sum_j Q_{ij}\, Z_j(M,N)\, Z_i(X,Y). \tag{3}$$

For the OPD of the imaging part we apply the same method, resulting in

$$W_P(x,y,m,n) = \sum_k \sum_l P_{kl}\, Z_l(m,n)\, Z_k(x,y), \tag{4}$$

where *x*, *y* are the normalized coordinates on the reference plane *E_{P}* (see Fig. 3), *m*, *n* are the normalized pixel coordinates on the camera, and *k*, *l* are the Noll's sequential indices of the Zernike polynomials *Z*.

With *W_{Q}*(*X*, *Y*, *M*, *N*) and *W_{P}*(*x*, *y*, *m*, *n*), all optical paths through the interferometer are described. The rays in the test area can be traced using classical ray tracing methods and the condition that a ray has to be perpendicular to the wavefronts in *E_{Q}* and *E_{P}* described by *W_{Q}*(*X*, *Y*, *M*, *N*) and *W_{P}*(*x*, *y*, *m*, *n*). If all polynomial coefficients of *Q_{ij}* and *P_{kl}* are known, it is possible to distinguish between the aberrations of the surface under test and the phase introduced by the interferometer itself, such as lens aberrations and retrace errors [12, 13]. This is possible not only in regions with low fringe density, where the rays are close to the null-test case, but for any possible ray, even if the fringe density is only slightly below the Nyquist frequency of the camera. The task of the calibration is to determine *Q_{ij}* and *P_{kl}*.
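As a minimal numerical sketch (not the authors' implementation), the double Zernike expansion *W_{Q}*(*X*, *Y*, *M*, *N*) described above can be evaluated as follows; the truncation to the first four Noll terms and the example coefficient value are assumptions:

```python
import numpy as np

# Evaluating W_Q(X, Y, M, N) = sum_i sum_j Q_ij * Z_j(M, N) * Z_i(X, Y)
# for the first four Noll terms. Normalization follows Noll (1976).
def zernike_noll(i, x, y):
    """First Zernike polynomials in Cartesian form, Noll index i."""
    r2 = x * x + y * y
    table = {
        1: 1.0,                              # piston
        2: 2.0 * x,                          # tilt x
        3: 2.0 * y,                          # tilt y
        4: np.sqrt(3.0) * (2.0 * r2 - 1.0),  # defocus
    }
    return table[i]

def W_Q(X, Y, M, N, Q):
    """OPD at the reference plane E_Q for pupil coordinates (X, Y) and
    source coordinates (M, N); Q[i-1, j-1] holds the coefficients Q_ij."""
    n = Q.shape[0]
    return sum(Q[i - 1, j - 1] * zernike_noll(j, M, N) * zernike_noll(i, X, Y)
               for i in range(1, n + 1) for j in range(1, n + 1))

# Example: Q_22 couples the source tilt (via M) to a pupil tilt (via X).
Q = np.zeros((4, 4))
Q[1, 1] = 0.5                    # Q_22 in 1-based (i, j) notation
print(W_Q(0.3, 0.0, 0.1, 0.0, Q))  # 0.5 * Z_2(0.1, 0) * Z_2(0.3, 0) ≈ 0.06
```

The single coefficient *Q*(2, 2) used here is exactly the field-dependent tilt term that plays a central role in the conditioning discussion of Sec. 3.3.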

#### 3.2. Calibration as an inverse problem

This can be achieved by solving an inverse problem. From the nominal ray tracing model of the interferometer we calculate a first guess for the polynomial coefficients. This is done by tracing a set of rays through the device and fitting the calculated optical paths with the polynomials defined in *W_{Q}*(*X*, *Y*, *M*, *N*) and *W_{P}*(*x*, *y*, *m*, *n*). With the nominal system and a known reference sphere at a known position in the test area we can directly calculate the OPD of a ray from any source of the point source array to any pixel on the camera:

$$OPD_{nom} = W_Q(X,Y,M,N) + OPD_{geom}(c_x, c_y, c_z, r) + W_P(x,y,m,n),$$

where *c_{x}*, *c_{y}* and *c_{z}* are the coordinates of the sphere in the test area, *r* is the radius of the sphere and *OPD_{geom}* is the OPD that is added by the geometrical length of the ray from *E_{Q}* over the sphere to *E_{P}*. Since there is only one possible ray from a source to a pixel (as long as there is no caustic in the region of *E_{Q}* and *E_{P}*), *M*, *N*, *m* and *n* can be calculated as well and do not have to be specified.

By placing the reference sphere at different positions in the test area, different source–pixel combinations can be realized. We now define a vector of rays *v_{b}* that contains rays from all of the sources to a predefined grid of pixels. The OPD of the rays in *v_{b}* is *b*. The vector *b_{nom}* that is calculated from *Q_{nom,ij}* and *P_{nom,kl}* contains all the OPDs of the test rays for the nominal system. We now define a matrix *A* containing the changes in *b* for a small perturbation *ε* of each of the coefficients in *Q_{nom,ij}* and *P_{nom,kl}*, as well as for changes in the position of the sphere. The latter is necessary since the stage that moves the sphere has a finite accuracy that has to be taken into account, too. The matrix *A* is set up as follows:

$$A = \begin{pmatrix} A_Q & A_P & A_c \end{pmatrix},$$

where *A_{Q}* covers the perturbations in *Q_{nom}*, *A_{P}* covers the perturbations in *P_{nom}*, and *A_{c}* covers the perturbations of the sphere positions *c*, with *b*(*Q_{ε(i,j)}*) being the changes in *b* when the coefficient *Q_{nom,ij}* is changed by *ε* while all other coefficients keep their nominal values, *b*(*P_{ε(k,l)}*) being the changes in *b* for changes in *P_{nom,kl}*, and *b*(*c_{xε}*(*t*)), *b*(*c_{yε}*(*t*)) and *b*(*c_{zε}*(*t*)) being the changes in *b* for a misalignment of the reference sphere at position *t* in the directions *c_{x}*(*t*), *c_{y}*(*t*) and *c_{z}*(*t*) by *ε*.

The actual calibration measurement consists of a series of phase shifting measurements of the reference sphere, which is moved to the positions required to record the data in *b*, yielding *b_{real}*. We can now explain the difference *δb* = *b_{real}* − *b_{nom}* as a linear combination of the perturbation vectors in *A*:

$$\delta b = A\, x.$$

Solving this system of linear equations yields the scaled perturbations *εx* for *Q_{ij}* and *P_{kl}* as well as for *c_{x}*, *c_{y}* and *c_{z}*. As a result we obtain a set of parameters *Q_{ij}* and *P_{kl}* describing the aberrations of the real system.
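The structure of this inverse problem can be illustrated with a toy model: build the perturbation matrix *A* column by column from a forward model, then recover the parameter deviations from the measured difference by linear least squares. The quadratic forward model and the numbers below are assumptions for illustration, not the TWI ray-trace model:

```python
import numpy as np

# Toy illustration of the calibration as an inverse problem: columns of A
# are finite-difference responses of the OPD vector b to each parameter.
def forward(p, s):
    """Toy stand-in for the ray-trace model: OPDs b for parameters p,
    evaluated at sample positions s (the 'rays' in v_b)."""
    return p[0] + p[1] * s + p[2] * s**2

s = np.linspace(-1.0, 1.0, 50)
p_nom = np.array([0.0, 1.0, 0.5])                  # nominal coefficients
p_real = p_nom + np.array([0.02, -0.01, 0.03])     # unknown real system

b_nom = forward(p_nom, s)
b_real = forward(p_real, s)                        # "measured" data

eps = 1e-6
A = np.column_stack([
    (forward(p_nom + eps * e, s) - b_nom) / eps    # one column per parameter
    for e in np.eye(len(p_nom))
])

# Explain delta_b = b_real - b_nom as a linear combination of the columns.
x, *_ = np.linalg.lstsq(A, b_real - b_nom, rcond=None)
print(x)   # recovers approximately [0.02, -0.01, 0.03]
```

In the real system the parameter vector additionally contains the sphere position errors, and the least-squares solve is subject to the side conditions discussed in Sec. 3.3.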

#### 3.3. Physically conditioning the system

To be able to calculate the correct parameters for *Q_{ij}* and *P_{kl}* (and the real sphere positions), the system of linear equations defined in *A* has to have a unique solution. This is the case when there are no linear dependencies between the parameters of *Q*, *P* and the sphere positions. To achieve this, a sophisticated choice of the rays in *b*, as well as some side conditions on the matrix *A*, are required.

The easiest way to ensure that all sources of the point source array are included in *b* (which is necessary to calibrate them) is to move the sphere to all positions where the displacement of the reference sphere compensates the tilt of the respective source, so that an interferogram with low fringe density is obtained. Let us call this the null-test position of a source. In Fig. 4 some measured modulo 2*π* maps with the reference sphere positioned in this way are shown, covering the central nine sources.

There is an ambiguity that has to be eliminated, namely the linear dependency between the field dependency of the aberrations in *W_{Q}*(*X*, *Y*, *M*, *N*) and the misalignment of the reference sphere. To understand this, imagine a system with all sphere positions at their nominal values, *c_{x,real}* = *c_{x,nom}*, and the linear field dependency of the tilt of the illumination wavefronts in the x-direction at its nominal value, *Q_{real}*(2, 2) = *Q_{nom}*(2, 2). This system cannot be distinguished from a system where the x-values of the sphere positions are scaled by a factor, *c_{x,real}* = *α*₁ *c_{x,nom}*, and the tilt of the sources is scaled as well, *Q_{real}*(2, 2) = *α*₂ *Q_{nom}*(2, 2), with *α*₂ being the factor that compensates the tilt introduced by *α*₁. Equivalent considerations can be made for higher order field dependencies of the tilt and defocus terms (as well as for higher order spatial aberrations). This linear dependency can be eliminated by adding rays from additional measurements to the system of linear equations. For these measurements the reference sphere is positioned in a defocused position between the null-test positions of several neighboring sources. As a result, a single measurement containing information about several sources is obtained. In Fig. 5, a modulo 2*π* phase map of a measurement containing rays from four sources is shown. When rays from such defocused measurements are added to the vector *b*, the tilt of the sources that contribute to the measurement is still linearly dependent on the misalignment *c* of the reference sphere in this position. However, the relative tilt between the neighboring sources is independent of the misalignment. If we add one measurement that contains information about source *M* = 0, *N* = 0 and source *M* = 0.1, *N* = 0, and another measurement that contains information about source *M* = 0.1, *N* = 0 and source *M* = 0.2, *N* = 0, the relative tilt between source *M* = 0, *N* = 0 and *M* = 0.2, *N* = 0 is defined via these two measurements. By adding further measurements that link every source of the point source array to every other source via one or more measurements, the field dependency of tilt and defocus (as well as of the higher order aberrations) is no longer linearly dependent on the misalignment of the reference sphere. We can describe this as a graph: the sources are vertices, and an edge between two sources exists if both sources are contained in a common measurement. In this picture we can formulate the necessary boundary condition: the graph has to be connected.
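The connectivity condition formulated above is easy to check mechanically. The following sketch (an illustration under assumed data structures, with sources as labels and each measurement as the list of sources it covers) uses a union-find structure:

```python
# Sources are vertices; two sources are joined whenever they appear in the
# same defocused measurement. The calibration data suffices only if the
# resulting graph is connected.
def is_connected(sources, measurements):
    parent = {s: s for s in sources}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for meas in measurements:              # each measurement links its sources
        first = meas[0]
        for other in meas[1:]:
            parent[find(other)] = find(first)
    return len({find(s) for s in sources}) == 1

sources = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
# Two measurements, each covering two neighboring sources, chain all three:
assert is_connected(sources, [[(0.0, 0.0), (0.1, 0.0)],
                              [(0.1, 0.0), (0.2, 0.0)]])
# A single measurement leaves source (0.2, 0.0) unlinked:
assert not is_connected(sources, [[(0.0, 0.0), (0.1, 0.0)]])
```

This mirrors the example in the text: sources (0, 0) and (0.2, 0) never share a measurement directly, but are linked through the common neighbor (0.1, 0).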

Another degree of freedom that has to be removed is the definition of the coordinate system in the test space. Without any constraint, the coordinate system can be translated and rotated, leading to an infinite number of different valid solutions. To understand this, imagine one solution with all sphere positions *c_{x,real}* = *c_{x,nom}* and the global tilt of the illumination wavefronts in the x-direction *Q_{real}*(2, 1) = *Q_{nom}*(2, 1). This solution is equivalent to a solution where *c_{x,real}* = *c_{x,nom}* + *δ*₁ and *Q_{real}*(2, 1) = *Q_{nom}*(2, 1) + *δ*₂ (with small changes to the higher order aberrations in *Q*), with *δ*₁ being the amount of displacement that compensates the tilt *δ*₂. Equivalent considerations can be made for all six degrees of freedom of the coordinate system. To obtain a unique solution with a clearly defined coordinate system, it is therefore necessary to add six side conditions to *A*. The first three side conditions set the misalignment of the reference sphere in the null-test position *c*(1) of the central source *M* = 0, *N* = 0, which is one test position included in *b*, to zero in *x*, *y* and *z*:

$$c_x(1) = c_y(1) = c_z(1) = 0.$$

The next two side conditions set the misalignment of the reference sphere at a position *c*(2) on the optical axis *z*, defocused by a small amount, to zero in the *x* and *y* directions:

$$c_x(2) = c_y(2) = 0.$$

This defines the rotation of the coordinate system about the *x* and *y* axes. The last side condition sets the misalignment of the reference sphere at the position *c*(3) with *x* = *d*, *y* = 0, *z* = 0, which contains rays from the source *M* = 1, *N* = 0, to zero in the *y* direction:

$$c_y(3) = 0.$$

Here *d* is the amount of displacement that compensates the tilt of the outer source *M* = 1, *N* = 0. In Fig. 6 the definition of the coordinate system is visualized. In an actual measurement of *b_{real}* there is of course some misalignment with respect to these sphere positions. However, with the conditions added in the way described above, the system of linear equations is not over-determined, and even with a misalignment in the positions an orthogonal coordinate system is defined.

With the side conditions and the additional defocused measurement positions described above, the system of linear equations has a unique solution. However, since there is some noise in the measurement data and the problem is ill posed, the optimization algorithm may still find a wrong solution. This is because there is a degree of freedom (DOF) left that is theoretically well defined, but whose signal is small and easily distorted by noise. In the singular value decomposition (SVD) this mode has a very small singular value. To understand this DOF, imagine a system where the field dependent tilt of the sources in the *x* and *y* directions has its nominal value, *Q_{real}*(2, 2) = *Q_{nom}*(2, 2), *Q_{real}*(3, 3) = *Q_{nom}*(3, 3), the positions of the reference sphere in the null-test positions of the sources in *x* and *y* are at their nominal values, *c_{x,real}*(*Null*) = *c_{x,nom}*(*Null*), and the *z* position of the defocused measurement positions is at its nominal value, *c_{z,real}*(*Def*) = *c_{z,nom}*(*Def*). The measurement information *b* of this system is almost equal to that of a system where the tilt of the sources in *x* and *y* is scaled by a factor, *Q_{real}*(2, 2) = *α*₂ *Q_{nom}*(2, 2), *Q_{real}*(3, 3) = *α*₂ *Q_{nom}*(3, 3), the reference sphere positions in the null-test positions are scaled in *x* and *y* to compensate this tilt, *c_{x,real}*(*Null*) = *α*₁ *c_{x,nom}*(*Null*), *c_{y,real}*(*Null*) = *α*₁ *c_{y,nom}*(*Null*), and the defocused measurement positions are misaligned in the *z* direction by a small value, *c_{z,real}*(*Def*) = *c_{z,nom}*(*Def*) + *δ_{z}*. This is because the misalignment of the defocused positions in *z* changes the magnification of the system when the sources are imaged onto the camera. This DOF can be eliminated by adding a second reference sphere with a different radius to the measurement. If a reference sphere is misaligned by a small amount in the *x* direction, *c_{x,real}* = *c_{x,nom}* + *δ*₁, we obtain mainly tilt in the measurement, but also some higher order aberrations, mainly coma. The amount of coma that is present for a certain amount of tilt depends on the radius of the reference sphere. Therefore the field dependent tilt *Q_{real}*(2, 2) = *α*₂ *Q_{nom}*(2, 2), *Q_{real}*(3, 3) = *α*₂ *Q_{nom}*(3, 3) can no longer be compensated by a misalignment of the reference sphere *c_{x,real}*(*Null*) = *α*₁ *c_{x,nom}*(*Null*), *c_{y,real}*(*Null*) = *α*₁ *c_{y,nom}*(*Null*), since this would lead to contradictory OPDs in *b*. This is due to the different amounts of higher order aberrations in the measurements of the reference spheres with the two different radii, and it increases the robustness of the algorithm.
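The effect of the second sphere radius on the conditioning can be illustrated with a toy perturbation matrix (all numbers are assumptions for illustration): two nearly linearly dependent columns, the field-dependent tilt and the compensating sphere misalignment, produce one very small singular value, which is lifted once rows with a different tilt-to-coma coupling are appended:

```python
import numpy as np

# Rows: measurement responses; columns: the two nearly degenerate
# parameters (field-dependent tilt vs. compensating sphere misalignment).
A_one_radius = np.array([
    [1.0, 1.0],       # tilt response, sphere radius r1
    [0.1, 0.100001],  # weak higher-order response: almost degenerate
])
# A second sphere radius r2 couples tilt and coma differently:
A_two_radii = np.vstack([A_one_radius, [1.0, 0.7]])

s1 = np.linalg.svd(A_one_radius, compute_uv=False)
s2 = np.linalg.svd(A_two_radii, compute_uv=False)
print(s1.min(), s2.min())   # the smallest singular value grows by orders
assert s2.min() > 1e3 * s1.min()  # of magnitude: far better conditioned
```

The near-null mode of the first matrix corresponds to the scaling DOF described above; the extra rows make the two parameters distinguishable, just as the measurements with the second reference sphere do.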

## 4. Measurement results

In Fig. 7(a) a TWI measurement result of a weak asphere is shown. The surface has a deviation from the spherical form of 1.5 μm over the clear aperture of 6 mm. Because of the weak asphericity, only the central source of the TWI was used for this measurement. This is equivalent to applying the calibration method described above to the calibration of a conventional full-field interferometer. In Fig. 8(a) the raw modulo 2*π* measurement data is shown. The measurement has only been evaluated within the clear aperture of the asphere. The measured deviation of the surface from its nominal form is 330 nm PV. For comparison, a measurement result of the asphere obtained with a tactile IBS Precision Engineering Isara 400 [14, 15] coordinate measurement machine is shown (see Fig. 7(b)). The defocus term has been subtracted from both measurements. In Fig. 8(a) the difference between the two measurements is shown. The measurement data of the TWI was interpolated to the grid of the tactile Isara 400 measurement for the difference plot. The bumps in the tactile measurement arise from dust particles on the stylus that were present during the measurement.

In Fig. 8(b) a measurement result of a steep asphere is shown, which was performed on a prototype of the TWI at the company Mahr GmbH. The measured asphere has a deviation of 550 μm from the spherical form and a slope deviation of 5°. The measured deviation of the surface from the nominal shape is 1.5 μm PV. The patch distribution for the different sources in the four positions of the aperture array is shown in Fig. 9(a).

## 5. Further extensions

The current setup of the TWI can measure surfaces with a clear aperture of up to 60 mm in diameter and a radius of the best-fit sphere of up to 48 mm. For larger surfaces the measurement has to be stitched from several subapertures. We have designed a set of objective lenses with longer focal lengths that will be used for the stitching of larger surfaces. These lenses have, compared to conventional interferometric lenses, higher aberrations on the optical axis, which can be calibrated. In return, they have a better aberration correction in the field. The advantage of the TWI when used as a stitching interferometer, compared to a classical interferometer, is the strongly increased size of the individual subapertures, due to the high dynamic range of the interferometer. This reduces the error that is introduced by the stitching algorithm. Another advantage is the possibility to correct for alignment-induced errors as described in [16]. The combination of these features makes the TWI a promising metrology technique for the stitching measurement of large aspheric or free-form optics.

## 6. Conclusion

The Tilted Wave Interferometer is a non-null test interferometer for the measurement of aspheres and free-form surfaces with a deviation of up to 10° from the spherical form. Its main benefits are the short measurement time of well below one minute, combined with a high lateral resolution. It is highly flexible and does not need costly compensation optics. All these advantages make it a promising metrology technique for integration into the process chain of asphere and free-form fabrication. The presented calibration method can furthermore be applied to conventional full-field interferometers, allowing retrace errors to be eliminated from the measurement. In addition, the requirements for the alignment accuracy are strongly reduced.

## Acknowledgments

The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union. We also thank the BMBF (German Ministry of Education and Research) for the financial support FKZ 13N10854 MesoFrei and IBS Precision Engineering for the comparison measurement of the asphere.

## References and links

**1. **B. Braunecker, R. Hentschel, and H. J. Tiziani, *Advanced Optics Using Aspherical Elements* (SPIE Press Monograph PM173, 2008). [CrossRef]

**2. **D. Malacara, K. Creath, J. Schmit, and C. Wyant, *Optical Shop Testing* (Wiley, 2007), chap. 12.12 Interferometers using synthetic holograms, 3rd ed. [CrossRef]

**3. **M. F. Kuechel, “Interferometric measurement of rotationally symmetric aspheric surfaces,” Proc. SPIE **7389**, 738916 (2009). [CrossRef]

**4. **P. Murphy, G. Forbes, J. Fleig, P. Dumas, and M. Tricard, “Stitching interferometry: A flexible solution for surface metrology,” Opt. Photon. News **14**, 38–43 (2003). [CrossRef]

**5. **E. Garbusi, C. Pruss, and W. Osten, “Interferometer for precise and flexible asphere testing,” Opt. Lett. **33**, 2973–2975 (2008). [CrossRef] [PubMed]

**6. **J. W. Goodman, *Introduction to Fourier Optics* (McGraw-Hill, 2005), 3rd ed.

**7. **H. Nyquist, “Certain topics in telegraph transmission theory,” Trans. AIEE **47**, 617–644 (1928).

**8. **C. J. Evans and J. B. Bryan, “Compensation for errors introduced by nonzero fringe densities in phase-measuring interferometers,” CIRP Annals Manufacturing Technology **42**(1), 577–580 (1993). [CrossRef]

**9. **H. A. Buchdahl, *An Introduction to Hamiltonian Optics* (Cambridge University, 1970).

**10. **R. J. Noll, “Zernike polynomials and atmospheric turbulence,” JOSA **66**, 207–211 (1976). [CrossRef]

**11. **J. Liesener, “Zum Einsatz räumlicher Lichtmodulatoren in der interferometrischen Wellenfrontmesstechnik,” Ph.D. thesis, University of Stuttgart (2006).

**12. **E. Garbusi and W. Osten, “Perturbation methods in optics: Application to the interferometric measurement of surfaces,” J. Opt. Soc. Am. A **26**, 2538–2549 (2009). [CrossRef]

**13. **G. Baer, J. Schindler, J. Siepmann, C. Pruss, W. Osten, and M. Schulz, “Measurement of aspheres and free-form surfaces in a non-null test interferometer: reconstruction of high-frequency errors,” Proc. SPIE **8788**, 878818 (2013). [CrossRef]

**14. **I. Widdershoven, M. Baas, and H. Spaan, “Ultra-precision 3D coordinate metrology results showing 10nm accuracy,” in Proceedings of the 11th international symposium of measurement technology and intelligent instruments, (2013), pp. 1–5.

**15. **I. Widdershoven, M. Baas, and H. Spaan, “Tactile coordinate metrology for ultra-precision measurement of optics: Results and intercomparison,” in Proceedings of 2014 ASPE Summer Topical Vol. 48, (2014), pp. 92–97.

**16. **G. Baer, J. Schindler, C. Pruss, and W. Osten, “Correction of misalignment introduced aberration in non-null test measurements of free-form surfaces,” JEOS:RP **8** (2013), 1–5.