Let me approach this from a different angle -- one that may add insight for other readers as well.
Troels has written that:
If we should draw a precise picture of all these impressions we could draw the details on the inside of a transparent hemisphere exactly as we see them when we look in different directions. But if we want to make a flat picture we must choose some kind of projection.
We could easily end up with something like this:

The picture shown by Troels is not correct.
In fact, the flat picture needed for Troels' use is exactly the one produced by Ray's standard camera. When the viewer's eye is positioned relative to the print in exactly the same way that the lens entrance pupil was positioned relative to the sensor, then as the viewer rotates his eye to focus on different parts of the print, he will see exactly the same shapes from the print that he would see from the world when looking at the world directly.
If you're having trouble with this concept, consider the following sketch and imagine that the thin lines are rays of light. When the picture is taken, rays of light coming from the world go through the lens to create an image on the sensor. After the print is made, and placed in the proper position with respect to the eye, rays of light come from the print and go through the eye's pupil to form an image on the retina. The rays of light take the same paths in both cases, so the viewer sees the same view of the world in both cases -- either directly from the world, or indirectly through the intermediaries of the sensor and print.
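The ray-path argument can be checked with a little arithmetic. Here is a small sketch of my own (the function names and numbers are mine, purely illustrative): a pinhole at the origin projects a world point onto a sensor plane behind it, and the direction from the pinhole to the resulting print point is exactly the reverse of the original ray, so the eye at the pinhole position sees the same angular layout as the world.

```python
def project_to_sensor(point, f):
    """Project a world point (x, y, z), z > 0, through a pinhole at the
    origin onto the sensor plane z = -f."""
    x, y, z = point
    return (-f * x / z, -f * y / z, -f)

def direction(p):
    """Unit vector from the origin toward p."""
    x, y, z = p
    n = (x * x + y * y + z * z) ** 0.5
    return (x / n, y / n, z / n)

f = 0.05                      # 50 mm "focal length", in metres
world_point = (1.0, 2.0, 10.0)
sensor_point = project_to_sensor(world_point, f)

# The eye at the pinhole position sees the print point along the exact
# reverse of the ray that came from the world point:
d_world = direction(world_point)
d_print = direction(sensor_point)
print(all(abs(a + b) < 1e-12 for a, b in zip(d_world, d_print)))  # True
```

Nothing in this depends on where on the sensor the point lands, which is what makes the rest of the argument below go through.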
If you want to make the equivalent of a full sphere surrounding the viewer, then all that's required is to surround him with multiple planar faces, each corresponding to pointing the camera in a different direction. A common example is the "cubic panorama", which can be easily created with a wide-angle lens by shooting up/down/north/south/east/west and assembling the prints into a cube sized to make features line up properly at the edges. But there is nothing special about a cube. Any number of faces can be used, surrounding the viewer in whatever arrangement you like. The requirement is only that each print be positioned so as to match the corresponding sensor position.
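For the cubic case, deciding which of the six prints a given view direction lands on comes down to finding the axis with the largest component. A minimal sketch, with face names of my own choosing to match the up/down/north/south/east/west shooting directions:

```python
def cube_face(d):
    """Pick which face of an axis-aligned cube panorama the view direction
    d = (x, y, z) lands on: the face whose axis has the largest magnitude
    component.  Axis naming here is illustrative: +x = east, +y = north,
    +z = up."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "east" if x > 0 else "west"
    if ay >= ax and ay >= az:
        return "north" if y > 0 else "south"
    return "up" if z > 0 else "down"

print(cube_face((1.0, 0.2, 0.1)))   # looking mostly along +x
print(cube_face((0.0, 0.0, -1.0)))  # looking straight down
```

The same largest-component test is how cube-map lookups are done in computer graphics, which is reassuring but not essential to the argument here.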
At this point, I can practically hear Ray saying
"Great!! Rik agrees with me!!" Alas, my friend, that is not the end of this story.
The thing is, this wonderful property of accurately matching the world with a planar image does not depend on the sensor being perpendicular to the optical axis of the lens.
To illustrate, let me redraw the sketch with a tilted sensor:
If you apply similar triangles and think about this for a while, you should reach the conclusion that these new sensor and print positions work just as well as the first ones did.
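For readers who prefer numbers to similar triangles, here is my own small check of that conclusion (the specific planes and direction are arbitrary choices): every ray through the pinhole hits a tilted sensor plane at exactly one point, and viewing that point from the pinhole position runs back along the very same ray, so the tilted sensor and print positions reproduce the same view directions as the straight ones.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_behind(d, p0, n):
    """Point where the ray from the origin, opposite to world direction d,
    meets the sensor plane through p0 with normal n (behind the pinhole)."""
    neg_d = (-d[0], -d[1], -d[2])
    t = dot(p0, n) / dot(neg_d, n)
    return (t * neg_d[0], t * neg_d[1], t * neg_d[2])

def direction(p):
    n = dot(p, p) ** 0.5
    return tuple(x / n for x in p)

world_dir = (0.1, 0.2, 1.0)  # direction of one ray arriving from the world

# Same ray, two different sensor planes: one perpendicular to the axis,
# one tilted.
straight = intersect_behind(world_dir, (0, 0, -0.05), (0, 0, 1))
tilted   = intersect_behind(world_dir, (0, 0, -0.05), (0, 0.3, 1))

# Both image points lie on the same ray through the pinhole, so the eye
# placed at the pinhole sees the same world direction through either print:
print(direction(straight))
print(direction(tilted))
```

The two printed unit vectors are identical, because the tilt only changes *where along the ray* the image point falls, never *which ray* it falls on.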
At that point, I ask, rhetorically,
What is it that's really special about Ray's "natural configuration" for the camera?
The answer is,
It's the only one that corresponds to viewing the print from an eye position located perpendicular to the center of the print. That's because the lens position was located perpendicular to the center of the sensor, and the eye position has to correspond with that in order for the geometry to work out right.
So, from my standpoint, a good case can be made for calling the usual camera configuration "natural", if we also agree that it's natural to view prints face on and centered. I have no great problem with that.
What I do have a problem with is Ray's habit of asserting that one or another thing is "distorted", without ever saying clearly what he means by that word.
I say that carefully -- "what he means" -- because it's very common for different people to use the same word to mean different things.
As far as I can tell, that's exactly what is happening here.
Ray apparently asserts that all images are distorted unless they were shot with his standard camera configuration.
The rest of us strongly disagree with that, because we have a different definition of what "distorted" means.
To us, an image can also be considered not distorted if shapes in the plane of focus are preserved in the focused image.
That happens whenever the focus plane, the sensor plane, and the lens plane are all parallel -- notably, in the shifted configuration.
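That shape-preserving property is easy to verify numerically. In this sketch of mine (distances chosen arbitrarily), a unit square on a subject plane parallel to the sensor projects through the pinhole to a square scaled uniformly by f/D; shifting the lens merely re-centres the crop and does not change that scale.

```python
def project(point, f):
    """Project a point on the subject plane z = D through a pinhole at the
    origin onto the parallel sensor plane z = -f; return sensor (x, y)."""
    x, y, z = point
    return (-f * x / z, -f * y / z)

D, f = 10.0, 0.05
square = [(0, 0, D), (1, 0, D), (1, 1, D), (0, 1, D)]  # unit square in focus
image = [project(p, f) for p in square]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Every side of the square maps to a side of the same length f/D = 0.005,
# so the shape is preserved (just scaled and inverted):
print([round(dist(image[i], image[(i + 1) % 4]), 6) for i in range(4)])
# [0.005, 0.005, 0.005, 0.005]
```

With a tilted sensor this uniform-scale property fails, which is exactly why our definition of "not distorted" singles out the parallel-planes (including shifted) configurations.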
So, from my standpoint what's really being debated is what it means to be "distorted". Ray's definition seems too narrow to the rest of us, and I guess that ours seems too broad to Ray.
There may also be misunderstandings about how the geometry works, but I don't think that's a key issue.
In his last post, Ray wrote that:
As long as you agree that the straight-on view can be stretched in a certain way so as to make it look like the shifted view, then we're good.
No -- I don't think that agreeing on that point addresses the conflict at all.
I certainly agree that it's possible to stretch the straight-on view so as to make it look like the shifted view. And if you actually do that, say using Photoshop or a tilt/shift enlarger, then I also agree that you have distorted the image that came out of the camera. But that's very different from saying that the resulting image is distorted when considered by itself.
In my view, if somebody sets up the camera to shoot the shifted view, skipping the other view altogether, then I'd just call that "getting it right the first time". Shooting with shift, in this interpretation, is a way of setting up the camera to avoid distortion in the first place, as opposed to shooting in the straight-on configuration and then having to undo the perspective effects that it introduced.
Yes, this is a matter of semantics. But if you look up the definition of "semantics", you'll find "the branch of linguistics and logic concerned with meaning." Semantics is meaning, and meaning is what I care about. Bring on the semantics!
Bottom line, I will not agree that shifting the lens introduces distortion, as Ray asserted in the beginning.
--Rik