Flashpoint Exposure Variation

Have questions about the equipment used for macro- or micro-photography? Post those questions in this forum.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Flashpoint Exposure Variation

Post by mawyatt »

I have been struggling with exposure variations using the Flashpoint 320M and DG600 strobes. These are cheap strobes from Adorama, and mine are well used/abused from my sessions.

The new setup I am working on has two 320M and two DG600 strobes (I'll probably add more later). They are all RF triggered from the hot shoe on the Nikon D500, with 5 seconds between flashes at about 1/4 to 1/2 power, for up to 500 exposures, then a few minutes' rest, and repeat again and again.

I was not able to identify a specific strobe that was mis-firing or producing weak output. But during stacking sessions in Zerene I could see some banding, even with Brightness Compensation active in Zerene. I suspected this was caused by exposure variations. With some experimentation I've found that activating the optical sensor in each strobe along with the RF trigger input seems to have solved the banding issue.

I can only surmise that these cheap strobes are prone to varied output when fired from the RF trigger/sync input alone, which could be caused by many things, but they seem much more consistent with both the optical and RF/sync triggers active.

I really can't complain much, as these strobes have been worked really hard. BTW, this is for a stack & stitch session imaging a 6mm by 5mm chip with a Mitutoyo 10X on a Nikon 200mm F4 tube lens. The chip is divided into 16 sections, each stacked with a step size of 5 microns and about 300~400 images per section.

Anyway, hope this helps someone who might be using multiple cheap strobes.

Best,

Mike

nathanm
Posts: 222
Joined: Thu Jun 02, 2016 8:13 pm
Location: Bellevue, WA
Contact:

Post by nathanm »

This has occurred for me with Canon speedlight flashes as well.

My conclusion is that they are just not that good at low power settings. Granted, I am typically using 1/64 to 1/32 power.

The variation appears random; firing multiple flashes will tend to average it out.

Studio flashes (I use Profoto D2) are much better, but those are for large subjects; the flashes are physically too large for high-mag work.
nathanm

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

I used up to 12 strobes before for uniformity and averaging, but this dual-trigger coupling seems to work well with these four cheap Adorama strobes. I just shot a 16-section stitch (actually 19) at 400 shots per section without a mis-fire or weak output that I could detect as banding in Zerene.

Best,

Mike

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

I went back to my old evaluation of flashes and strobes; at the end of that thread I had done some measurements of RF trigger delays. See:

http://www.photomacrography.net/forum/v ... hs&start=0

It seems the 433MHz triggers have a ~1.4ms delay, whereas the 2.4GHz triggers have a ~0.5ms delay. Since I believe all these triggers send the data pattern multiple times, and the data rate of the 433MHz triggers is probably lower than that of the 2.4GHz ones, one could easily get a few ms of delay difference between multiple strobes.

With a 6.25ms exposure window (1/160) and the actual flash exposure waveform, it's quite possible to get a time skew where a strobe is "cut off" before it has had time to complete the entire exposure. Seems the 2.4GHz triggers might be the better choice.
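
To put rough numbers on this, here's a quick sketch (the per-strobe delays and the pulse length are illustrative assumptions, not measurements) checking whether each strobe's pulse finishes inside the 6.25ms window:

[code]
// Back-of-the-envelope check of RF trigger skew vs. the shutter window.
// All delay and pulse numbers are assumed, not measured.
#include <cstdio>

int main() {
    const double window_ms = 1000.0 / 160.0;  // 1/160 s shutter ~ 6.25 ms
    const double pulse_ms  = 2.0;             // assumed flash pulse length
    // Assumed per-strobe trigger delays (ms); retransmissions can add a few ms.
    const double delay_ms[] = {1.4, 2.8, 4.5};
    for (double d : delay_ms) {
        double finish = d + pulse_ms;         // when this strobe's pulse ends
        std::printf("delay %.1f ms -> pulse ends at %.2f ms: %s\n",
                    d, finish,
                    finish <= window_ms ? "inside window" : "CLIPPED by shutter");
    }
    return 0;
}
[/code]

With a few ms of skew on top of a ~2ms pulse, the last strobe falls off the end of the window, which would show up as a weak contribution from that strobe.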

With the optical trigger activated, all the strobes are synchronized to within a few microseconds.

I don't want to slow down the shutter speed, because I use the fast shutter to keep any ambient light from affecting the image, and it helps with vibration issues as well.

Anyway, it has been bothering me why the optical trigger plus RF trigger combination worked; now I think I know and can put this to bed!!

Best,

Mike

TheLostVertex
Posts: 317
Joined: Thu Sep 22, 2011 9:55 am
Location: Florida

Post by TheLostVertex »

nathanm wrote: My conclusion is that they are just not that good at low power settings. Granted, I am typically using 1/64 to 1/32 power.

The variation appears random.
Low power exaggerates the problem. The E640 is specced at +/-1/10 stop at 1/128 power and above, and +/-2/10 stop at 1/256 power, for this very reason. What I think may be happening in speedlights and some studio strobes is that the IGBT cutting off the current flow has a fixed cut-off variability.

*Let's pretend the IGBT has a variability of +/-50 microseconds in cutting off the current.

If our flash pulse is 0.5 milliseconds, that amounts to a +/-10% pulse-length difference.

If the flash has a longer pulse, let's say 2 milliseconds, then that same variability would be only +/-2.5% of the flash pulse.

But things are actually worse than that, because the IGBT is mostly clipping off the tail end of the flash pulse, which carries little power (see this image for full power and this image for 1/2 power). So the part of the pulse affected by the timing variability is at low intensity for high power settings and at high intensity for low power settings.
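
To make that concrete, here's a toy model (the decay constant and jitter are both made up) that treats the flash intensity as an exponential decay which the IGBT clips at time T, with T jittering by +/-50us:

[code]
// Toy model: flash current decays as exp(-t/tau); the IGBT clips it at T.
// Fraction of total light delivered by time T is 1 - exp(-T/tau).
// tau and the cut-off jitter are assumed values, not from any real flash.
#include <cmath>
#include <cstdio>

int main() {
    const double tau_us    = 1000.0;  // assumed intensity decay constant (1 ms)
    const double jitter_us = 50.0;    // assumed IGBT cut-off uncertainty
    // Cut-off times from low power (early cut) to high power (late cut).
    const double cutoff_us[] = {100.0, 500.0, 2000.0};
    for (double T : cutoff_us) {
        double lo  = 1.0 - std::exp(-(T - jitter_us) / tau_us);
        double hi  = 1.0 - std::exp(-(T + jitter_us) / tau_us);
        double mid = 1.0 - std::exp(-T / tau_us);
        std::printf("cutoff %6.0f us -> energy spread about +/-%.1f%%\n",
                    T, 100.0 * (hi - lo) / (2.0 * mid));
    }
    return 0;
}
[/code]

With these made-up numbers the spread comes out to roughly +/-48% at a 100us cutoff, +/-8% at 500us, and under +/-1% at 2ms, which points the same direction as the specs above.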

There may be other components contributing as well, but I suspect it is mostly a timing issue.

*All numbers are of course totally made up and simplified. Maybe an actual electronics engineer can step in and tell me how wrong they think I am ;)

Edit: Not using real numbers bugged me. Flash duration on a 580EX at 1/128 power is about 1/30000 s, or 33 microseconds. HERE is the timing info from a Hitachi spec sheet for a high-voltage IGBT, listing an off-time variability of 3 microseconds. So that's a potential timing difference of +/-5%. I of course have no idea what parts or specs are used in these speedlights, but the real numbers still seem to support my theory, I think.

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

Steven,

50us seems like a lot of variability from one shot to the next, especially with IGBT control. However, the variability will be a larger fraction of the pulse at lower power settings, as you mention.

Another issue I mentioned is the synchronization of multiple speedlights/strobes, which seems to be more of a problem in my case. With the optical sync mentioned above, this is no longer an issue.

Best,

Mike

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

I think, if I read the specs correctly, that the variability figures for IGBT turn-on and turn-off refer to variability across product samples, i.e., one particular component might have the min value, another component with the same model number might have the max value, and the rest fall somewhere in between, bounded by the min and max.

Variability from one shot to the next might not change at all. Then again, I am no semiconductor expert; this is just speculation plus past experience from building a 400Ws IGBT-based flash 5 years ago. In that project, the flash tube was the major cause of the variation.

Normally there is a small capacitor wired to a transformer (the trigger transformer). When this cap is shorted it generates a pulse on the low side, and the high side, normally about 4kV, is connected to a metal wire wrapped around the flash tube (or across the terminals in the case of a speedlight). This high voltage ionizes the gas inside the tube, causing a near short circuit through the tube (the current can be more than 200A) and generating a lot of light. The IGBT then cuts the current off, turning off the flash.

I am sure all flash manufacturers calibrate the duration used to reach a specific power level, but this calibrated duration is fixed in firmware and cannot deal with the fact that sometimes, for whatever reason, the tube takes longer to ignite from one shot to the next. This was a mystery to me when I built that flash.

TheLostVertex
Posts: 317
Joined: Thu Sep 22, 2011 9:55 am
Location: Florida

Post by TheLostVertex »

I suppose that shifting the start of the pulse with a relatively fixed cut-off time would have the exact same clipping effect on the power levels. It makes intuitive sense that the tube would have shot-to-shot variability as well, due to heating and other factors.

Still would be nice to have a way to confirm where and how things are changing.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

TheLostVertex wrote: Still would be nice to have a way to confirm where and how things are changing.
If you build an IGBT flash, you can shorten the duration until you cannot get any flash at all. Different tubes have a different critical duration below which you will not get a flash. With the famous Sunpak 120J tube, if I remember correctly, it is about 11us.

You can modify a YN560 by bypassing its gating signal with a 2N3904 to ground (I used yet another MOSFET) to take over the gating. Then you can shorten the duration using an Arduino or some other MCU. [Edit]You can identify the IGBT because it is the only TO-220 component, and I believe the middle pin is the gate pin. I think it is made by Toshiba, at least in the version of the YN560 I modded 5 years ago.[/Edit]

What I found was that around some point, sometimes the flash would go off and sometimes it would not. I think that is where you can roughly gauge the variation: find the duration at which the flash always goes off and the duration at which it never does, no matter what. The spread between the two could at least be used to set our expectations; it is not exactly the shot-to-shot variation, but it should give us some sense of it.
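
Something like this hypothetical Arduino sketch could run that sweep (the pin number, pulse range, and recycle delay are all guesses, and it assumes the gate-bypass mod described above is in place):

[code]
// Hypothetical sweep for the modded YN560: drive the bypassed IGBT gate
// directly and step the pulse width down until the tube stops firing.
// GATE_PIN and the timing values are assumptions, not a tested recipe.
const int GATE_PIN = 9;  // drives the external transistor added in the mod

void setup() {
  pinMode(GATE_PIN, OUTPUT);
  digitalWrite(GATE_PIN, LOW);
  Serial.begin(9600);
}

void loop() {
  // Sweep the gate pulse width from 50 us down to 5 us, one shot per step.
  for (int width_us = 50; width_us >= 5; width_us--) {
    Serial.print("gate pulse ");
    Serial.print(width_us);
    Serial.println(" us (watch whether the tube fires)");
    digitalWrite(GATE_PIN, HIGH);
    delayMicroseconds(width_us);
    digitalWrite(GATE_PIN, LOW);
    delay(5000);  // let the flash recycle fully between shots
  }
  while (true) {}  // one sweep, then halt
}
[/code]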

Disclaimer: I never did this exact experiment, but I did try to find the cutoff duration at which the Sunpak 120J tube just would not go off; it was about 11us. For another tube it was about 15us.

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

IGBTs are incredibly slow by semiconductor standards, where we deal with femtoseconds routinely, but in photographic terms they are pretty fast. A few microseconds of variation in the turn-off time is a small percentage of a 1~5ms burst, but a much larger percentage of a 50~100us burst, so it follows that variability is higher at lower outputs (shorter bursts).

The flash tubes have a gas ionization rate and a minimum ionization potential that determine the minimum optical burst; I suspect these are quite different for different gas mixtures, gas pressures, and temperatures.

The initial energy storage capacitor voltage has a square-law effect on optical output, and thus needs to be highly repeatable between flashes. This is the reason we all know to let the speedlight/strobe recharge well past the "ready" light for more uniform flashes.

Some of the cheap strobes use a rectified line voltage doubler without voltage regulation, so they are prone to variability due to line voltage variations.
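
To put numbers on that square law (the capacitance and voltage below are assumed values, not from any particular strobe): since E = 0.5*C*V^2, a small relative voltage error roughly doubles when it shows up as an energy error.

[code]
// Square-law sensitivity of stored energy to charge voltage.
// C and V are assumed example values.
#include <cstdio>

int main() {
    const double C  = 1000e-6;  // assumed 1000 uF storage capacitor
    const double V  = 330.0;    // assumed nominal charge voltage
    const double E0 = 0.5 * C * V * V;        // nominal stored energy, ~54 J
    const double sag[] = {0.01, 0.02, 0.05};  // 1%, 2%, 5% voltage error
    for (double s : sag) {
        double Vs = V * (1.0 - s);
        double E  = 0.5 * C * Vs * Vs;
        std::printf("%2.0f%% voltage sag -> %4.1f%% less stored energy\n",
                    s * 100.0, 100.0 * (E0 - E) / E0);
    }
    return 0;
}
[/code]

So a 5% sag in capacitor voltage costs nearly 10% of the light, which is why waiting past the "ready" light matters.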

However, all added up, the variation from a single "well designed" speedlight/strobe that is stabilized in temperature (after many prior flashes) and voltage is pretty small in my experience. Multi-trigger timing skew was more of a problem in my case, masquerading as fundamental individual-strobe variation.

Best,

Mike

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

mawyatt wrote: A few microseconds of variation in the turn-off time is a small percentage of a 1~5ms burst, but a much larger percentage of a 50~100us burst, so it follows that variability is higher at lower outputs (shorter bursts).
Are you saying that for a specific IGBT device, variation in its turn-off time can cause output variation? One day I will build a circuit with an IGBT (or someone who has time can do so), then measure the time difference between when it is un-gated and when it actually turns off, take maybe 100 samples, and see if there is a large variance in the sample.

One note, however: a lot of IGBT devices need a driver to turn them on and off, so the driver itself might contribute some variation.

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

Peter,

Yes, basically, because the turn-off time variation directly affects the total integrated optical output.

Also, yes again. The driver could have lots to do with the IGBT turn-off time variation. A good "stiff" driver will pull the IGBT gate low quickly with little variation; a lesser driver will pull the gate low more slowly, allowing more variation.

However, the IGBT must turn off rather quickly, otherwise it will burn out, since the instantaneous power dissipated is simply V(t)*I(t).
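
A rough feel for the numbers (all assumed): a common rough estimate of the energy dissipated in the device during turn-off is E ~ 0.5*V*I*t_fall, which grows directly with how slowly the current is cut off.

[code]
// Rough turn-off switching-energy estimate: E ~ 0.5 * V * I * t_fall.
// V, I, and the fall times are assumed example values.
#include <cstdio>

int main() {
    const double V = 300.0;  // assumed voltage across the IGBT at cut-off
    const double I = 200.0;  // assumed tube current being interrupted
    const double t_fall_us[] = {1.0, 10.0, 100.0};
    for (double t : t_fall_us) {
        double joules = 0.5 * V * I * (t * 1e-6);
        std::printf("fall time %5.1f us -> ~%.2f J dumped into the die per shot\n",
                    t, joules);
    }
    return 0;
}
[/code]

A fast cut-off dumps only tens of mJ into the die; let the turn-off stretch toward 100us and it becomes a few joules per shot, which a small IGBT won't survive for long.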

Best,

Mike

TheLostVertex
Posts: 317
Joined: Thu Sep 22, 2011 9:55 am
Location: Florida

Post by TheLostVertex »

mjkzz wrote: Are you saying that for a specific IGBT device, variation in its turn-off time can cause output variation? One day I will build a circuit with an IGBT (or someone who has time can do so), then measure the time difference between when it is un-gated and when it actually turns off, take maybe 100 samples, and see if there is a large variance in the sample.
This would be a more definitive test, I think. Unfortunately, it's one I cannot carry out at this time.
mawyatt wrote: Yes, basically, because the turn-off time variation directly affects the total integrated optical output.
My theory exactly. However, I had not considered the possibility of the driver coming into play as well.

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

Steven,

The driver may have more influence on the turn-off timing variation than the IGBT itself, especially a weak driver that isn't capable of pulling the gate low quickly.

I should probably mention that as the gate voltage transitions from on to off or off to on, the effective gate capacitance grows during the transition due to the "Miller effect". Since the stored gate charge is proportional to the capacitance, the charge increases as well. This means the driver must either supply (off to on) or remove (on to off) this gate charge to turn the IGBT on or off, respectively.

Thus the driver must deal with not just the nominal gate capacitance/charge but also the "Miller effect" capacitance/charge, which complicates the driver design.
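
As a rough sizing sketch (the gate charge figure is assumed, not from any datasheet), the average current the driver must sink is simply the total gate charge, Miller charge included, divided by the desired turn-off time:

[code]
// Rough gate-driver sizing: average sink current I = Qg / t_off.
// Qg (total gate charge including the Miller charge) is an assumed value.
#include <cstdio>

int main() {
    const double Qg = 150e-9;  // assumed 150 nC total gate charge
    const double t_off_us[] = {0.5, 1.0, 5.0};
    for (double t : t_off_us) {
        double amps = Qg / (t * 1e-6);
        std::printf("turn off in %.1f us -> driver must sink ~%.2f A average\n",
                    t, amps);
    }
    return 0;
}
[/code]

A "stiff" driver that can sink a few hundred mA gets the gate down in about half a microsecond; a weak one stretches the turn-off and opens the door to the variation discussed above.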

Best,

Mike

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Mike,

mawyatt wrote: Yes, basically, because the turn-off time variation directly affects the total integrated optical output.

Also, yes again. The driver could have lots to do with the IGBT turn-off time variation. A good "stiff" driver will pull the IGBT gate low quickly with little variation; a lesser driver will pull the gate low more slowly, allowing more variation.

However, the IGBT must turn off rather quickly, otherwise it will burn out, since the instantaneous power dissipated is simply V(t)*I(t).
I think I was not clear. What I was asking is: for a specific, particular device, does its on-off time change ENOUGH from shot to shot to cause output variation?

I really doubt it, because once a device is made, I do not think its on-off time will change enough to cause a noticeable output change in a flash application. I think this could be shown by experiment, by collecting enough samples of the on-off time for a specific IGBT device, unless someone can provide a theory for why a specific device could have a large enough variance to cause output variance (in a flash application).

The (large) variances shown in the databook are between samples, i.e., devices within the same batch or with the same model number, not for a specific device (from shot to shot in a flash application).

So in essence, I doubt the output variance is caused by variance in the IGBT's on-off time; at least, not enough of it. Rather, it is caused by something else.

Yeah, the driver could be an issue.

Regards
Peter
