I mis-spoke about DF. I know it usually starts to "drop out" above NA 1.2 or so, but the resultant COI you get instead can look like poor DF too - that's what I was (clumsily) referring to. Here's a test shot from last night - all dry, no oil. I decided to switch in the blue filter cube for this one, which passes only blue light to the sensor (and eyepieces). You can definitely see the extra resolution from doing that (compared to white light).
I have my A7riv connected directly to the C-mount trinocular port via a C-mount adapter. This gives exactly 1x onto the sensor, so a 100x objective gives a 100x reproduction ratio, which is handy. A single shot of these Stauroneis sp. comes out like this (using the Heine condenser). I used full frame to get the whole FoV in here (61 megapixels), but I'll generally use APS-C crop mode with this camera - that's still 26 megapixels. That gives you the full width of the FoV, but crops the top and bottom a bit.
Same image converted to B&W using only the blue channel, cropped square and inverted to negative.
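For anyone who wants to do the same conversion programmatically, here's a minimal sketch of the idea in Python with numpy (this isn't my actual workflow - just the same operation expressed in code, shown on a tiny synthetic array instead of a real file):

```python
import numpy as np

def blue_channel_negative(rgb):
    """Take an 8-bit RGB image array (H, W, 3), keep only the blue
    channel as greyscale, and invert it to a negative."""
    blue = rgb[..., 2]          # channel index 2 = blue
    return 255 - blue           # invert (assumes 8-bit data)

# tiny synthetic 2x2 "image" just to show the effect
img = np.array([[[10, 20, 30], [0, 0, 255]],
                [[255, 255, 0], [100, 100, 100]]], dtype=np.uint8)
print(blue_channel_negative(img))
# [[225   0]
#  [255 155]]
```

The square crop is then just ordinary array slicing on the result.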
And finally, cropped in tighter to show the detail. Those rows of dots are approximately 320nm pitch. All images are straight exports with a tiny bit of sharpening for screen on export; otherwise this is just a straight, single shot (no stacking).
All this with a simple Plan 40x/0.75 objective too. Way better than any results I ever got on my Zeiss with equivalent lenses. Me and the BX, and the Heine, will definitely be enjoying a happy and productive future relationship...
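As a rough sanity check on those 320nm dots: the simplified Abbe limit d = λ / (2·NA) with the objective's NA of 0.75 and blue light (assuming ~450nm for the filter's passband, and ignoring the condenser's exact contribution) comes out just under that pitch, so resolving them is plausible:

```python
# Simplified Abbe diffraction limit: d = wavelength / (2 * NA)
wavelength_nm = 450   # assumed rough centre of the blue filter's passband
na = 0.75             # numerical aperture of the Plan 40x objective
d = wavelength_nm / (2 * na)
print(f"{d:.0f} nm")  # 300 nm - just finer than the ~320nm dot pitch
```

With white light (say ~550nm effective) the same formula gives ~367nm, which is why the blue filter makes a visible difference on this particular structure.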
Edit: I forgot, this isn't QUITE a straight shot. It was a 4-shot pixel-shift capture. You only trigger the shutter once, but the camera takes 4 shots, shifting the sensor by one pixel between each so that R, G, G and B are all sampled at every pixel position. It doesn't change the spatial resolution per se, but it does improve colour resolution by ensuring each pixel is fully sampled for R, G and B. The colours are not "guesstimated" from the Bayer array - important as we're only using the blue channel here. With the normal Bayer approach only 1 pixel in 4 would be a direct blue measurement, while the other three would be estimates of the blue contribution interpolated from neighbouring pixels.
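If it helps to see why, here's a toy numpy illustration of the sampling geometry (my own sketch, not anything from the camera's processing):

```python
import numpy as np

# RGGB Bayer pattern: each sensor position records ONE colour channel.
# Encode channels as 0=R, 1=G, 2=B; tile the 2x2 cell into a 4x4 patch.
bayer = np.tile(np.array([[0, 1],
                          [1, 2]]), (2, 2))

# Single exposure: what fraction of positions directly measure blue?
blue_fraction = np.mean(bayer == 2)
print(blue_fraction)  # 0.25 - the other 75% must be demosaiced

# 4-shot pixel shift: the sensor moves one pixel between exposures,
# so over the four shots every position is visited by an R, a B and
# two G photosites in turn.
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
channels_seen = {bayer[dy % 2, dx % 2] for dy, dx in shifts}
print(channels_seen)  # {0, 1, 2} - every channel measured directly
```

So in the single-shot case the blue image is three-quarters interpolation, while the pixel-shift capture has a real blue measurement at every pixel - which is exactly what matters when you then throw away red and green.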