If we were to take an RGB image, separate the channels, export and stack each channel individually (examining and assessing along the way), and then recombine them into an RGB image (maybe even replacing the noisiest channel with a reassigned or simulated one), would this provide any visible advantage? It's the long way 'round, but I'm curious about the whole process and the possible outcome.
I'm not sure they match what you have in mind, but experiments along these lines were done somewhat inadvertently during recent development of the pyramid methods.
The original paper on "Exposure Fusion" described what was essentially a single-channel algorithm. The authors commented in passing that "For dealing with color images, we have found that carrying out the blending [of] each color channel separately produces good results." Presumably this was correct in their experiments, which focused on HDR fusion.
But when code based on separated channels was tested for focus stacking, the output images frequently developed strange localized color shifts, rather like those in the flowing-water image on the Prokudin-Gorskii site that you linked.
The cause of those color shifts was eventually tracked down to the fact that pixel values were being accumulated differently in the various channels, so that RGB proportions at a pixel in the reassembled image did not necessarily correspond to the RGB proportions of any pixel in the source images.
In other words, separating the channels meant optimizing each channel independently, and because the per-channel optimizations did not have to agree, the colors shifted.
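
To make the mechanism concrete, here is a toy numpy sketch, entirely my own illustration and not the actual code of any of these programs. A single-level blend of two aligned frames stands in for the full pyramid; the proportion problem is the same. When each channel gets its own independently computed weight, the blended pixel's R:G:B ratio can match neither source pixel:

Code:

import numpy as np

def blend_per_channel(img_a, img_b, w_a, w_b):
    # w_a, w_b are H x W x 3: an independent weight for each channel.
    # Normalizing per channel lets the R, G, B mixing ratios differ.
    total = w_a + w_b + 1e-12
    return (img_a * w_a + img_b * w_b) / total

# One pixel, two frames: a reddish pixel and a greenish pixel.
a = np.array([[[0.8, 0.2, 0.1]]])
b = np.array([[[0.2, 0.7, 0.1]]])

# Suppose the sharpness measure favors frame A in red but frame B in green.
w_a = np.array([[[0.9, 0.1, 0.5]]])
w_b = np.array([[[0.1, 0.9, 0.5]]])

print(blend_per_channel(a, b, w_a, w_b))
# -> roughly [0.74, 0.65, 0.10]: bright in both red and green,
#    a color whose R:G:B ratio appears in neither source frame.
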
The problem was greatly reduced (at least in TuFuse and CombineZP) by tweaking the algorithm to do the same operations on all three channels together, instead of potentially different operations on each channel separately. Even with the tweaks, there can still be color shifts with deep stacks due to various nonlinearities, but the problem is about 10X less than with separated channels.
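
And a sketch of that fix under the same toy assumptions (again my own illustration, not the programs' actual code): compute one weight map per frame, in a real program perhaps from local contrast of the luminance, and apply that same weight to all three channels, so every output pixel is a single convex combination of whole source pixels:

Code:

import numpy as np

def blend_shared_weight(img_a, img_b, w_a, w_b):
    # w_a, w_b are H x W scalar weight maps.  One weight multiplies all
    # three channels, so each output pixel is a convex combination of
    # whole source pixels and the source RGB proportions are preserved.
    alpha = (w_a / (w_a + w_b + 1e-12))[..., None]  # broadcast over R, G, B
    return img_a * alpha + img_b * (1.0 - alpha)

# Same two pixels as before; one shared weight of 0.9 for frame A.
a = np.array([[[0.8, 0.2, 0.1]]])
b = np.array([[[0.2, 0.7, 0.1]]])
print(blend_shared_weight(a, b, np.array([[0.9]]), np.array([[0.1]])))
# -> roughly [0.74, 0.25, 0.10]: exactly 0.9*A + 0.1*B, a color on the
#    line between the two source colors, so no new hue is invented.
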
--Rik