# Thread: Optics question: Diffraction limit

1. After reading through a number of articles lately, I am beginning to think that Hubble and similar telescopes could zoom in further on their targets if they wanted to, but do not because of a diffraction limit. Is this true? Say you had a camera where each pixel was the size of the Planck length: could it resolve more detail of distant objects than Hubble if you digitally zoomed into your picture? What I essentially want to know is: is there an information limit or a hardware limit to looking at distant objects? Will even a million years of technological advances make any difference to how far into the universe we can look?

3. Yes, there is a limit that depends on the size of the telescope mirror and the observed wavelength. Objects cannot be imaged infinitely sharply. Every so-called point source (e.g. a distant star) is transformed into an image with a finite-width brightness profile: a bright central disk surrounded by faint rings (see: Airy disk). The typical equation for the smallest resolution element is:

θ ≈ 1.22 λ / D

where θ is the smallest resolvable angle in radians, λ the wavelength in metres, and D the diameter of the main mirror in metres.

Diffraction-limited imaging can only be fully achieved from space, because the atmosphere blurs any image to some extent, an effect called "seeing". Its amount depends on the turbulence of the air column above the telescope and on the wavelength of observation. Seeing is usually much worse than the diffraction limit of terrestrial telescopes, so they cannot exploit their resolving power to the full extent. To overcome this problem, one uses adaptive optics, which can correct for this turbulence to a certain degree and thereby improves the resolution of the optical system. But so far it does not work well at optical wavelengths and is currently used mainly in the infrared.
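A quick numerical sketch of the resolution formula above, with illustrative numbers of my own (Hubble's 2.4 m mirror and 550 nm green light; not taken from the post):

```python
# Rayleigh diffraction limit: theta ~ 1.22 * lambda / D.
# Numbers below (Hubble-like 2.4 m mirror, 550 nm light) are illustrative.
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206265 arcsec per radian

def diffraction_limit_arcsec(wavelength_m, mirror_diameter_m):
    """Smallest resolvable angle for a circular aperture, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / mirror_diameter_m
    return theta_rad * RAD_TO_ARCSEC

# Hubble: 2.4 m mirror observing green light at 550 nm
print(round(diffraction_limit_arcsec(550e-9, 2.4), 3))   # ~0.058 arcsec

# A 10 m ground telescope is sharper in principle, but typical seeing
# of ~1 arcsec dominates without adaptive optics.
print(round(diffraction_limit_arcsec(550e-9, 10.0), 3))  # ~0.014 arcsec
```

Note that ~1 arcsec of seeing is over ten times worse than either diffraction limit, which is why the mirror size alone does not tell the whole story on the ground.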

4. I understand why wavelength limits the resolving power, but why does the size of the mirror have the same restricting effect? Is it because a larger mirror lets you take in parts of the image over a larger angle, effectively making the object appear larger and decreasing the diffraction limit? What if you only had a basic lens that gathers light and focuses it onto a CCD of infinite pixel density at the other end: would you still have the same diffraction limit? If you had a "perfect eye", devoid of any and all distortions and problems, would you still have limited resolving power? Basically, by the time the light from some distant object almost reaches your eye, is it already blurred by diffraction, or does the viewing device itself cause the diffraction to occur?

5. Ok, I found this website: http://www.cambridgeincolour.com/tut...hotography.htm

What if you never send the light through a hole (ignoring whether or not this is possible)? Will you still get diffraction due to the photons' proximity to each other by the time they reach the sensor?

I was looking at the James Webb Telescope, and it does not seem to have any hole for the light to go through. Could this mean that it does not suffer from the diffraction limit?

6. The website you found is pretty good. But all modern telescopes consist of mirrors, not lenses. It is the diameter of the main mirror that limits the resolution and therefore determines the diffraction limit. Just imagine that a mirror basically acts like a lens: instead of letting the light pass through, it reflects it. Nevertheless, it is an optical element that focuses light.

I would like to try to explain it in a different way, although it is a bit more difficult to understand. I would like to describe light in terms of a wave instead of rays. You might know that it is the wave nature of light that produces the interference effects mentioned on the website you quote. For the light it does not matter whether it is diffracted at a small hole or at an obstacle of the same shape.

This explanation uses the concept of a wave front. You may describe the wave front of an unresolved point source as ideally planar: imagine that the spherical wave front emitted by a star becomes planar at infinity. Any resolved light source has a curved wave front. In order to detect and measure the curvature of that wave front, you need an optical element that is large enough. If a lens or a mirror is too small, the curvature of the wave front is too shallow to be detected, and the light source appears unresolved.
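To put rough numbers on the wave-front argument: two point sources separated by a small angle arrive as two wave fronts tilted against each other, and the path difference across a mirror of diameter D is about angle × D. The sources blur together once that difference drops below roughly one wavelength, which recovers the λ/D scaling. A minimal sketch (the wavelength, angle, and mirror sizes are my own illustrative choices):

```python
# Two wave fronts tilted by a small angle alpha differ in path length by
# roughly alpha * D across a mirror of diameter D. A mirror can tell the
# two sources apart only when that difference is of order one wavelength.
wavelength = 550e-9   # m, green light (illustrative)
alpha = 2.3e-7        # rad, tilt between the two wave fronts (illustrative)

for D in (2.4, 0.1):  # Hubble-sized mirror vs a small amateur mirror
    path_diff = alpha * D
    # fraction of a wavelength accumulated across the aperture
    print(D, path_diff / wavelength)
```

The large mirror sees about a full wavelength of tilt (resolved), while the small one sees only a few percent of a wavelength (unresolved), matching the statement that a too-small optic cannot see the wave front as curved.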

When hitting the aperture of a telescope (mirror or lens), the planar wave front is modified according to the Huygens principle. It states that every point of a wave front is the origin of a new elementary wave. Now, if the initial wave front is cropped by an aperture, the superposition of elementary waves produces a diffraction pattern whose size depends on the size of the aperture. The smaller the aperture, the stronger the diffraction effect.
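The Huygens picture can be checked numerically: in the far field, the diffraction pattern of a slit is the Fourier transform of the aperture, so halving the aperture doubles the width of the central diffraction lobe. A one-dimensional sketch with NumPy (grid size and slit widths are arbitrary illustrative values):

```python
# Far-field (Fraunhofer) diffraction from a 1-D slit via FFT:
# the diffraction pattern is the Fourier transform of the aperture.
import numpy as np

def central_lobe_width(aperture_width, n=4096, grid=1.0):
    """Width (in frequency bins) of the central diffraction lobe for a
    slit of the given width on a grid of total extent `grid` metres."""
    x = np.linspace(-grid / 2, grid / 2, n)
    aperture = (np.abs(x) < aperture_width / 2).astype(float)
    intensity = np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2
    # walk outward from the central peak to the first minimum
    i = n // 2
    while intensity[i + 1] < intensity[i]:
        i += 1
    return 2 * (i - n // 2)

wide = central_lobe_width(0.2)
narrow = central_lobe_width(0.1)
print(narrow / wide)  # ~2: half the aperture, twice the blur
```

The same scaling holds in two dimensions for a circular mirror, where the first minimum sits at 1.22 λ/D instead of λ/a.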

Interferometers like the VLTI probe the wave front not with a single telescope but with several. These telescopes sample the incoming wave front at different positions, which increases the resolving power, although image reconstruction is much more difficult because the wave front is only sampled partially.

Another way to look at it is to apply Heisenberg's uncertainty principle (the Wiki page is quite useless here). The smaller the aperture, the better defined is the position of the wave or photon passing through (or being reflected). As a result, its momentum must be highly uncertain. According to de Broglie, momentum is directly connected to the wavelength of the scattered photon. This consequently leads to a blurring of the image produced by the aperture.
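As a rough sanity check of the uncertainty argument (my own back-of-the-envelope numbers, not from the post): confining a photon to a transverse width D forces a transverse momentum spread of at least ħ/(2D); dividing by the photon momentum h/λ gives an angular blur of about λ/(4πD), which has the same λ/D scaling as the Rayleigh formula up to a numerical factor:

```python
# Order-of-magnitude blur from Heisenberg: confining a photon to a
# transverse width D forces dp >= hbar / (2 D); the angular spread is
# dtheta ~ dp / p with p = h / lambda, giving dtheta ~ lambda / (4 pi D).
import math

h = 6.626e-34          # Planck constant, J s
hbar = h / (2 * math.pi)

wavelength = 550e-9    # m, green light (illustrative)
D = 2.4                # m, aperture diameter (illustrative)

p = h / wavelength     # photon momentum
dp = hbar / (2 * D)    # minimum transverse momentum spread
dtheta = dp / p        # angular blur, radians

rayleigh = 1.22 * wavelength / D
print(dtheta)            # ~1.8e-8 rad
print(rayleigh / dtheta) # same lambda/D scaling, differing by a constant
```

The uncertainty estimate is deliberately crude; it only shows that any aperture of size D must blur angles at the λ/D scale, whatever the detailed optics.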

7. Thanks for the info.

If you look at a Ritchey-Chrétien Cassegrain reflector, there is a large hole in the middle of the primary mirror. My understanding of why this does not leave a large hole in the middle of the image from the secondary mirror is that the mirror is essentially reflecting a number of "replicas" of the same image onto a single point; cutting a hole in the middle therefore only removes some of the replica images and dims the output.

If you have a pencil in front of you and move a few inches to the right, then a few inches to the left, you will see the light from a different angle, since the pencil is not a perfect mirror and is not being hit by laser light in a vacuum. So when you point a camera lens at the pencil, you are not only getting the light that travels straight at the lens, but also the light that leaves at a slight angle and would otherwise have gone somewhere else; all of these 'replica' images are collected onto a single point, forming a brighter central image. This is why my camera's aperture can become much smaller without blocking the edges of the image: it only prevents replica images from those angles from being gathered into the central image.

Is this correct? If so, then why does receiving replica images (correct me if there is a better term) from a larger angle off the normal enhance resolving power, apart from diffraction effects? Outside of diffraction, what will the difference be between an image taken with a large mirror and one taken with a small mirror?
