mgoodm3, it might help at some point to go slog your way through the earlier thread. The thread starts
HERE with a discussion of scale bars (!), but soon goes off-topic into an extended discussion of what causes the blurring effect. It's a long read, and what you'll see is that most everybody (myself included) has trouble getting a good grip on how to think about the phenomena.
Anyway, someplace near the end of that thread, Graham links to this illustration:
http://www.cmmp.ucl.ac.uk/~ahh/teaching ... node4.html. The illustration shows the Huygens-Fresnel principle of wave propagation, as it applies to a plane wave (think laser beam) passing through an aperture.
The Huygens-Fresnel principle itself is described in more detail in a
Wikipedia article.
In very brief and paraphrased form, the principle states that you can think of an arbitrary wavefront as being replaced by a set of infinitesimally small sources, each of which emits a spherical wave. The evolution of the original wavefront is then equal to the sum of the evolutions of the spherical waves.
If you start with a plane wave in free space (no aperture), the contributions of all those tiny spherical waves turn out to add up to just another plane wave, happily propagating "in a straight line".
When you impose an aperture, you remove from the sum all of those contributions that would otherwise have come from outside the aperture.
With those contributions removed, the result is no longer a simple plane wave. Instead, it is something that looks very much like a plane wave close to the aperture's center, but is obviously an interference pattern near the aperture's edges.
The farther back from the aperture you make the observation, the more obvious it becomes that in fact
the whole thing is an interference pattern. As you move far behind the aperture, that pattern resolves itself into something that we recognize as the Airy disk.
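To put some flesh on that description, here is a small brute-force illustration that I'm adding here. It is not part of the earlier thread, and the wavelength, aperture size, and screen distance are arbitrary values picked just to make the numbers come out nicely. Every point inside a 100-micron circular aperture is treated as a tiny spherical-wave source, and their contributions are simply added up at points on a screen one meter behind the aperture:
Code:
#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI     = 3.14159265358979;
    const double lambda = 0.5e-6;              /* wavelength: 500 nm          */
    const double kw     = 2.0 * PI / lambda;   /* wavenumber                  */
    const double a      = 50.0e-6;             /* aperture radius: 50 microns */
    const double L      = 1.0;                 /* aperture-to-screen: 1 meter */
    const int    N      = 200;                 /* samples across the aperture */

    /* scan across the screen, through the center of the pattern */
    for (int is = -30; is <= 30; is++) {
        double xs = is * 1.0e-3;               /* screen position, 1 mm steps */
        double re = 0.0, im = 0.0;

        /* add up spherical wavelets exp(i*kw*r)/r from every open point */
        for (int ix = 0; ix < N; ix++) {
            for (int iy = 0; iy < N; iy++) {
                double xa = (ix + 0.5) * 2.0 * a / N - a;
                double ya = (iy + 0.5) * 2.0 * a / N - a;
                if (xa*xa + ya*ya > a*a) continue;   /* blocked by the screen */
                double r = sqrt((xs - xa)*(xs - xa) + ya*ya + L*L);
                re += cos(kw * r) / r;
                im += sin(kw * r) / r;
            }
        }
        printf("%4d mm   relative intensity %g\n", is, re*re + im*im);
    }
    return 0;
}
The printed intensities show a bright central peak flanked by progressively fainter rings, which is exactly the Airy pattern, with the first dark ring landing near the expected angle of about 1.22 times lambda divided by the aperture diameter.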
Here are the results of a simulation that illustrates this effect. It shows what happens when a plane wave, moving downward, strikes an aperture.
It is important to note that
this simulation does NOT involve sines, cosines, angles, spherical wavefronts, or any other obviously wave-related mathematics.
Instead, it simply iterates over and over again the following set of additions and subtractions:
Code:
// First pass: compute the discrete Laplacian of the field u,
// scaled by the local coefficient k (this is the "acceleration" of u).
for (int ix = 1; ix < gridNX-1; ix++) {
    for (int iy = 1; iy < gridNY-1; iy++) {
        d2u[ix][iy] = k[ix][iy] * ( + u[ix-1][iy]
                                    + u[ix+1][iy]
                                    + u[ix][iy-1]
                                    + u[ix][iy+1]
                                    - 4*u[ix][iy] );
    }
}
// Second pass: accumulate that acceleration into the "velocity" du,
// then accumulate the velocity into the field u itself.
for (int ix = 1; ix < gridNX-1; ix++) {
    for (int iy = 1; iy < gridNY-1; iy++) {
        du[ix][iy] = du[ix][iy] + d2u[ix][iy];
        u[ix][iy]  = u[ix][iy]  + du[ix][iy];
    }
}
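For anyone who wants to play with this, here is one way those loops might be hooked up to a source and an aperture. This is only a sketch of a plausible setup, with grid sizes, frequency, and geometry chosen arbitrarily, and it is not necessarily how the simulation pictured above was arranged: a sinusoidal source along one edge stands in for the incoming plane wave, and the opaque screen is imposed by clamping the field to zero outside the opening at every step.
Code:
#include <stdio.h>
#include <math.h>

#define gridNX 400
#define gridNY 400

static double u[gridNX][gridNY], du[gridNX][gridNY], d2u[gridNX][gridNY];
static double k[gridNX][gridNY];

/* one time step: exactly the additions and subtractions shown above */
static void waveStep(void) {
    for (int ix = 1; ix < gridNX-1; ix++)
        for (int iy = 1; iy < gridNY-1; iy++)
            d2u[ix][iy] = k[ix][iy] * ( u[ix-1][iy] + u[ix+1][iy]
                                      + u[ix][iy-1] + u[ix][iy+1]
                                      - 4*u[ix][iy] );
    for (int ix = 1; ix < gridNX-1; ix++)
        for (int iy = 1; iy < gridNY-1; iy++) {
            du[ix][iy] += d2u[ix][iy];
            u[ix][iy]  += du[ix][iy];
        }
}

int main(void) {
    const int    screenRow = 100;   /* grid row holding the opaque screen   */
    const int    halfWidth = 20;    /* half-width of the opening, in cells  */
    const double omega     = 0.3;   /* source frequency, radians per step   */

    /* uniform coefficient, kept below 0.5 so the scheme stays stable */
    for (int ix = 0; ix < gridNX; ix++)
        for (int iy = 0; iy < gridNY; iy++)
            k[ix][iy] = 0.25;

    for (int step = 0; step < 750; step++) {
        /* drive the top row sinusoidally: the incoming plane wave */
        for (int ix = 0; ix < gridNX; ix++)
            u[ix][0] = sin(omega * step);

        /* opaque screen: clamp the field to zero outside the opening */
        for (int ix = 0; ix < gridNX; ix++)
            if (ix < gridNX/2 - halfWidth || ix > gridNX/2 + halfWidth) {
                u[ix][screenRow]  = 0.0;
                du[ix][screenRow] = 0.0;
            }

        waveStep();
    }

    /* u[][] now holds a snapshot of the field spreading below the aperture */
    printf("field on axis, well below the aperture: %g\n", u[gridNX/2][300]);
    return 0;
}
The update inside waveStep() is unchanged from the block above; everything else is just scaffolding to feed it a wave and an aperture.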
These additions and subtractions are nothing more --
and nothing less -- than a direct numerical solution to the wave equation. The fact that the result ends up looking like interfering waves simply reflects the fact that waves and interference are a natural way of thinking about the solutions to this equation.
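To spell out the correspondence (the identification is mine; the code itself never mentions c, dt, or dx): d2u plays the role of c^2 dt^2 times the Laplacian of u, and du plays the role of dt times the time derivative of u, so each pass through those two loops advances the field by one step of
\[
u^{\,n+1}_{i,j} \;=\; 2u^{\,n}_{i,j} - u^{\,n-1}_{i,j}
  + \frac{c^2\,\Delta t^2}{\Delta x^2}
    \left( u^{\,n}_{i-1,j} + u^{\,n}_{i+1,j} + u^{\,n}_{i,j-1} + u^{\,n}_{i,j+1} - 4\,u^{\,n}_{i,j} \right)
\]
which is the standard centered-difference discretization of the scalar wave equation \( \partial^2 u/\partial t^2 = c^2 \nabla^2 u \), with the combined factor \( c^2 \Delta t^2 / \Delta x^2 \) stored in the array k.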
Getting back to mgoodm3's words:
"the aperture edges must have at least a partial effect upon the resulting Airy disc. The reason for me is that a laser shot through an aperture causes an Airy disc"
I hope it's clear at this point that the edges of the aperture do not cause the Airy disk.
What causes the Airy disk is the removal of all contributions
outside the edges of the aperture.
If you observe close behind the aperture, then the effects are definitely more obvious near the edges. But that is simply because you are paying attention to an area where the weight of various contributions is changing quickly. Farther back, the weights become more equal, and the classic Airy disk appears.
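If you want to attach a number to "farther back": a standard rule of thumb, not something discussed in the thread but the usual far-field (Fraunhofer) criterion, is that the pattern settles into the Airy form once
\[
L \;\gg\; \frac{a^2}{\lambda}
\]
where a is the aperture radius, \(\lambda\) the wavelength, and L the distance from the aperture to where you observe. Closer in than that, you are in the Fresnel regime, where the edge-dominated structure is what you see.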
--Rik
Edit: to tweak phrasing.
Edited 11/2/2021 to fix formatting of the code block after upgrade of forum software.