Monday, August 29, 2016

optics - What causes blurriness in an optical system?


The way I understand the purpose of a typical optical system is that it creates a one-to-one mapping between each possible incident ray and a point on a sensor plane. This is like a mathematical function. If there were no mapping, and each ray were free to strike any point on the sensor, there would be no image formed on it, just a blurry average of the light. This would be like having a camera sensor without a lens.


Now there is a very simple concept that creates this one-to-one mapping: the pinhole camera. In a pinhole camera no blurriness is possible; as long as the hole is small enough, each point opposite the hole is mapped onto one specific ray. This means this type of camera can never produce a blurry image, no matter where its focal point is. This can be proven geometrically.
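To make that mapping explicit (nothing beyond similar-triangle geometry, with coordinates introduced only for this illustration): put the hole at the origin and the sensor plane a distance $d$ behind it. A scene point at $(X, Y, Z)$ is joined to the hole by exactly one ray, which continues on to strike the sensor at

$$(x, y) = \left(-\frac{d}{Z}X,\ -\frac{d}{Z}Y\right),$$

so each scene point lands on exactly one sensor point, and moving the sensor (changing $d$) merely rescales the image rather than blurring it.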


In an optical system that uses lenses to create this mapping, however, things are not always ideal, because blurriness does happen. This indicates that the mapping is not one-to-one, and that some sensor points share rays with each other, creating local averages, i.e. blurriness. It is often claimed that this happens because the focal point is not at the "right place". If you consider the pinhole model as the ideal, you will understand that this is not true: changing the focal point alone will only make the image seem smaller or larger. From geometrical optics alone I don't see what could possibly cause the sharing of rays. It seems to me there is more to it, and that I'm not the only one confused.


So what REALLY creates the blurriness? Is there some sort of imperfection in lenses that causes them to send multiple rays to the same point on the sensor, and that somehow becomes more visible at certain focal distances? This is the only explanation I have.



Answer



To add some details to Eoin's answer.



Your description of imaging as a mapping is a good one to begin with, and it will get you a long way. However, even in ray optics (see my answer here for more info), i.e. when the propagation of light can be approximated by the eikonal equation (see here for more info), the one-to-one mapping of points between the object and image planes that you describe can only happen under very special conditions. In general, a bundle of rays diverging from one point will not converge to a point after passing through an imaging system made of refracting lenses and mirrors. One has to design the system so that the convergence is well approximated by convergence to a point.


As Eoin said, this non-convergence is the ray-theory description of aberration: spherical, comatic, astigmatic, trefoil, tetrafoil, pentafoil and so forth are words used to describe aberrations with particular symmetries (spherical aberration is rotationally symmetric about the chief ray, coma flips sign on a $180^\circ$ rotation about the chief ray, trefoil flips sign on a $120^\circ$ rotation, and so forth). There is also chromatic aberration, where the image point position depends on wavelength, so that point sources with a spectral spread have blurred images.


Lastly, the imaging surface, comprising the points of "least confusion" (i.e. those best approximating where the rays converge to a point), is always curved to some degree - it is often well approximated by an ellipsoid - and so even if convergence to points were perfect, the focal surface would not line up with a flat CCD array. This is known as lack of flatness of field: microscope objectives with particularly flat imaging surfaces bear the word "Plan" (so you have "Plan Achromat", "Plan Apochromat" and so forth).
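To make this concrete, here is a small numerical sketch (my own illustration; every number in it is assumed) that traces meridional rays exactly through a single spherical refracting surface using Snell's law. Rays entering far from the axis cross the axis short of the paraxial focus at $n_2 R/(n_2-n_1)$; this focus shift with ray height is spherical aberration, precisely the kind of non-convergence described above.

    import math

    # Exact meridional ray trace through one spherical refracting surface:
    # vertex at x = 0, centre of curvature at x = R, air (n1) on the left,
    # glass (n2) on the right.  Rays arrive parallel to the axis at height h.
    n1, n2 = 1.0, 1.5                        # refractive indices (assumed)
    R = 50.0                                 # radius of curvature in mm (assumed)
    paraxial_focus = n2 * R / (n2 - n1)      # 150 mm behind the vertex

    for h in (1.0, 5.0, 10.0, 15.0, 20.0):   # ray heights in mm
        x_hit = R - math.sqrt(R**2 - h**2)               # where the ray meets the surface
        theta1 = math.asin(h / R)                        # incidence angle (the normal passes through the centre)
        theta2 = math.asin(n1 * math.sin(theta1) / n2)   # Snell's law
        x_cross = x_hit + h / math.tan(theta1 - theta2)  # refracted ray crosses the axis here
        print(f"h = {h:4.1f} mm -> crosses axis at x = {x_cross:6.2f} mm "
              f"(paraxial focus {paraxial_focus:.0f} mm)")

The marginal rays come to a focus several millimetres short of the paraxial focus, so wherever you put the sensor, each object point is received as a small disc rather than a point.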


Only very special systems allow for convergence of all ray bundles diverging from points in the object surface to precise points in the image surface. Two famous examples are the Maxwell Fisheye Lens and the Aplanatic Sphere: both of these are described in the section called "Perfect Imaging Systems" in Born and Wolf, "Principles of Optics". They are also only perfect at one wavelength.


An equivalent condition for convergence to a point is that the total optical path - the optical Lagrangian - is the same for all rays passing between the points.
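In symbols, with $n(\mathbf{r})$ the refractive index and the integral taken along each ray, the condition reads

$$\int_{P}^{P'} n(\mathbf{r})\,\mathrm{d}s = \text{const.}\qquad\text{for every ray joining the object point } P \text{ to its image } P',$$

i.e. the optical path length that Fermat's principle makes stationary must take the same value on all of them.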


Generally, lens systems are designed so that perfect imaging as you describe happens on the optical axis. The ray convergence at all other points is only approximate, although it can be made good enough that diffraction effects outweigh the non-convergence.


And of course, finally, even if everything else is perfect, there is the diffraction limitation described by Eoin. The diffraction limit arises because converging plane waves with wavenumber $k$ cannot encode any object variation at a spatial frequency greater than $k$ radians per unit length. This, if you like, is the greatest-spatial-frequency Fourier component one has to build an image out of; images more wiggly than this component of maximum wiggliness cannot form. A uniform-amplitude, aberration-free spherical wave converges to an Airy disk, which is often taken as defining the "minimum resolvable diffraction-limited distance". However, this minimum distance is a bit more subtle than that: it is ultimately set by the signal-to-noise ratio as well, so an extremely clean optical signal can resolve features a little smaller than the so-called diffraction limit, while most systems, even if their optics are diffraction limited, are further limited by noise to a resolution somewhat worse than the "diffraction limit".
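As a rough numerical illustration of that limit (my own example; the wavelength and f-numbers are assumed), the first dark ring of the Airy pattern produced by an aberration-free circular aperture sits at a radius of about $1.22\,\lambda N$ in the focal plane, where $N$ is the working f-number:

    # Airy-disk radius (first dark ring), r = 1.22 * wavelength * N, for an
    # aberration-free circular aperture; numbers chosen purely for illustration.
    wavelength = 550e-9                      # green light, in metres
    for N in (1.4, 2.8, 8.0, 22.0):          # working f-numbers
        r = 1.22 * wavelength * N
        print(f"f/{N:<4} Airy radius ~ {1e6 * r:5.2f} micrometres")

Note that stopping the aperture down (larger $N$) makes the diffraction spot bigger, which is why a lens cannot be sharpened indefinitely by shrinking its aperture towards the pinhole limit.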

