So, I bought Baader narrowband filters that are significantly wider (Ha 7nm, SII 8nm, OIII 8.5nm).
To compare them, I shot images of the Jellyfish Nebula: it has a very strong signal but also dark areas nearby. To minimize the influence of changing seeing conditions, I shot one hour with one filter, then one hour with the other, then again with the first, and so forth.
I'm still struggling with my unreliable Lodestar guider, so I had to throw away a number of images, but I ended up with enough data for a comparison. I decided to compare the filters by contrast and signal-to-noise ratio.
I calculated contrast by first measuring the average signal in a bright area and in a dark area, and then computing
Contrast = (Bright - Dark) / (Bright + Dark)
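For anyone who wants to reproduce this outside of PixInsight, here is a minimal sketch of the calculation in Python (the file name and region coordinates are made up; it assumes a calibrated sub saved as a plain FITS file and uses numpy and astropy):

```python
import numpy as np
from astropy.io import fits

def region_mean(data, x, y, size=50):
    """Mean pixel value in a size x size box centred on (x, y)."""
    half = size // 2
    return float(np.mean(data[y - half:y + half, x - half:x + half]))

# Hypothetical file name and region coordinates - adjust for your own frames.
data = fits.getdata("jellyfish_ha_7nm.fits").astype(np.float64)

bright = region_mean(data, x=1200, y=900)   # patch on the bright nebula rim
dark   = region_mean(data, x=300,  y=1500)  # patch in a dark area nearby

contrast = (bright - dark) / (bright + dark)
print(f"Contrast = {contrast:.3f}")
```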
For the signal-to-noise ratio, I measured the images with PixInsight's SubframeSelector script. It doesn't report absolute values, only relative ones.
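I don't know the exact formula the script uses internally, so as a rough stand-in, a relative SNR estimate of the same flavour can be sketched like this (hypothetical file names and box coordinates; the numbers are only meaningful for comparing frames measured with the same regions):

```python
import numpy as np
from astropy.io import fits

def relative_snr(path, bright_box, dark_box):
    """Crude relative SNR: background-subtracted signal over background scatter."""
    data = fits.getdata(path).astype(np.float64)
    by0, by1, bx0, bx1 = bright_box
    dy0, dy1, dx0, dx1 = dark_box
    signal = np.mean(data[by0:by1, bx0:bx1]) - np.mean(data[dy0:dy1, dx0:dx1])
    noise = np.std(data[dy0:dy1, dx0:dx1])
    return signal / noise

# Boxes are (y0, y1, x0, x1): one on the nebula rim, one in a dark area.
for path in ("jellyfish_ha_7nm.fits", "jellyfish_ha_narrow.fits"):
    print(path, relative_snr(path, (875, 925, 1175, 1225), (1475, 1525, 275, 325)))
```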
Here are the results:
The contrast is very similar for both filter sets, but the signal-to-noise ratio is significantly better with the wider filters.
The other question is how bad the background gradients created by each filter are. I used the OIII images for this comparison, as the Jellyfish Nebula is very weak in OIII:
Using AutomaticBackgroundExtraction, I get the following background images:
Clearly, the wider filter has larger gradients. But after the first background removal the residual background is:
Looks like we can remove the stronger gradient fairly easily.
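For readers without PixInsight, the general idea behind this kind of background extraction - fitting a smooth, low-order surface to samples taken away from the nebula and subtracting it - can be sketched as follows (this is not ABE's actual algorithm, just an illustration of the technique; the file name and sample positions are made up):

```python
import numpy as np
from astropy.io import fits

def fit_background(data, sample_points, order=2):
    """Least-squares fit of a low-order 2D polynomial to background samples.
    sample_points: list of (x, y) pixels that contain only background."""
    xs = np.array([p[0] for p in sample_points], dtype=float)
    ys = np.array([p[1] for p in sample_points], dtype=float)
    zs = np.array([data[int(p[1]), int(p[0])] for p in sample_points])

    # Design matrix with all monomials x^i * y^j up to the given order.
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1) if i + j <= order]
    A = np.column_stack([xs**i * ys**j for (i, j) in terms])
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)

    # Evaluate the fitted surface over the whole frame.
    yy, xx = np.mgrid[0:data.shape[0], 0:data.shape[1]]
    return sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))

# Hypothetical OIII sub and hand-picked background sample positions.
data = fits.getdata("jellyfish_oiii_8_5nm.fits").astype(np.float64)
samples = [(100, 100), (1900, 100), (100, 1900), (1900, 1900), (1000, 150), (150, 1000)]
background = fit_background(data, samples)
flattened = data - background + np.median(background)  # keep the overall pedestal
```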
So, this does seem to support my suspicion that the narrower filters extend the required integration time, although I am not sure why - the Ha/SII/OIII signal should be the same with the narrower and the wider filters.
Anyway: I'll probably stick with the wider narrowband filters.