MarkSturtevant wrote: ↑Sun Jan 15, 2023 2:28 pm
Gilles Benoit wrote: ↑Sun Jan 15, 2023 11:41 am
CGI illustrations are fun to play with but are not a photographic process. They would be better suited in an illustrator’s or graphic arts forum.
It might depend. These certainly go way beyond what has been considered here before (although I still very much like them). But haven't there been occasions here where someone takes a micrograph and runs it thru one of the many special effects filters in Photoshop?
I find it useful to think about these methods in terms of how they relate to the subject being shown.
The special effects filters in Photoshop do not care about the subject. They affect the rendering style, such as oil painting versus watercolor versus tile mosaic. But they are not designed or trained to recognize anything in particular and act on that recognition.
Some other tools, such as Topaz Sharpen AI, are subject aware and deliberately seek to enhance real aspects of the subject. They work by recognizing what is being shown in the original image, then modifying the image to show that same thing in more idealized form. If the original image looks like blurred hair, then the modified image looks like sharp hair; if the original image looks like a face with blurred eyes, then the modified image looks like the same person with sharp eyes. Critical in this view is the idea that the recognition is neutral -- the software figures out for itself what it is probably looking at, without guidance from a human.
Then we have these new tools, which work by "recognizing" parts of an image that somehow resemble whatever the tool is told to look for, then modifying those parts so they look even more like the target description. In other words, they start with what the camera captured, interpret that in some way that is different from reality, and then modify the image to be as consistent as possible with the different interpretation. (See that bright spot in the textured cuticle? Make it look like a pearl. In fact, make 'em ALL look like pearls!)
This is essentially the same process that happens when people experience hallucinations: the visual system starts with what it actually sees, interprets that in some way that is inconsistent with reality, and then elaborates the impression to be as consistent as possible with the misinterpretation. In fact a 2016 paper on the automated technique uses the phrase "computational hallucinations". This term seems wonderfully concise and accurate; I am saddened that it has not become popular.
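The amplification loop behind such "computational hallucinations" can be shown in miniature without any neural network at all. Below is a toy sketch (my own construction, not any particular product's code): a simple matched-filter "detector" responds wherever a 1-D signal faintly resembles its template, and gradient ascent on that response then reshapes the signal to look even more like what the detector "wants" to see. The quadratic objective and the particular template are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# The pattern the detector "wants" to see, and a signal that is mostly
# noise with one faint, real occurrence of that pattern buried in it.
template = np.array([1.0, -2.0, 1.0])
signal = 0.1 * rng.standard_normal(200)
signal[50:53] += 0.5 * template

def detector_response(x, k):
    """Slide the template over the signal (cross-correlation, valid mode)."""
    return np.correlate(x, k, mode="valid")

def hallucinate(x, k, steps=100, lr=0.05):
    """Gradient ascent on 0.5 * sum(response**2).

    For this linear detector the gradient w.r.t. x is exactly the
    responses convolved back through the template (full mode), so each
    step pushes the signal toward whatever the detector already reacts
    to -- faint resemblances get amplified into vivid ones.
    """
    x = x.copy()
    for _ in range(steps):
        r = detector_response(x, k)
        grad = np.convolve(r, k, mode="full")   # same length as x
        x += lr * grad
        x /= max(1.0, np.abs(x).max())          # keep amplitudes bounded
    return x

dreamed = hallucinate(signal, template)
r0 = np.abs(detector_response(signal, template)).max()
r1 = np.abs(detector_response(dreamed, template)).max()
print(f"peak detector response before: {r0:.3f}, after: {r1:.3f}")
```

The loop never asks whether the pearls are really there; it only asks how to make the image agree harder with the interpretation, which is the essence of the process described above. Full-scale systems replace the matched filter with a deep network, but the amplify-what-you-already-see dynamic is the same.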
With all this as background, perhaps I can say that I personally don't view PMN as being the right place to host hallucination parties.
That said, I expect there are other uses of the same technology that are more relevant to PMN's interests, either as tools to use or traps to recognize, or both.
For example, no great imagination is required to foresee a retouching tool in which the human says "This is a thicket of sparse hairs, seen from the side. Remove the haze so the hairs can be seen clearly." Or even better, a tool that automatically recognizes such hair and does the right thing by itself.
Of course this leaves open the question of what "the right thing" is, and how far we're willing to go with replacing objective reality by idealized forms.
The answer to that might be "pretty far", at least if the portrait photographers' collective fondness for zapping zits is any indication!
--Rik