Tuesday, April 29, 2014

Image pre-processing with PixInsight - stacking

Important: I'm still new to PixInsight. The instructions below are my current, very incomplete understanding of how this works.

Because I took my subframes on separate nights, I could not use the Image Integration step of the batch preprocessing script. Besides, this step usually requires some parameter tweaking and visual checking, so I'd rather do it by hand anyway.

Before I start integrating the images, I use the SubframeSelector script to remove outliers from my images. First, I add all registered images of one filter to the script:

Then press Measure ... wait ... and then open the plot section:

Here I can easily mark the outliers. Once I have done that (I use FWHM, SNR, and Eccentricity for this), I open the Output section:

Here I define what I want to do. I usually move the rejected files to a subdirectory and leave the approved images in place. Click "Output Subframes" to do the actual move.
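For the curious, the rejection itself is just simple statistics. Here is a rough Python sketch of the idea - the CSV file and its column names are my own invention, not SubframeSelector's actual output format - flagging frames whose measurements stray too far from the median and moving them aside:

import csv
import shutil
from pathlib import Path
from statistics import median

def outliers(values, k=3.0):
    """Indices of values more than k median-absolute-deviations from the median."""
    m = median(values)
    mad = median(abs(v - m) for v in values) or 1e-9
    return {i for i, v in enumerate(values) if abs(v - m) / mad > k}

rows = list(csv.DictReader(open("subframes.csv")))
rejected = set()
for column in ("FWHM", "Eccentricity", "SNR"):
    rejected |= outliers([float(r[column]) for r in rows])

Path("rejected").mkdir(exist_ok=True)
for i in rejected:
    shutil.move(rows[i]["filename"], "rejected/")   # move outliers to a subdirectory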

Because these images are taken, as usual, from our home, they have pretty strong gradients in them, which can interfere with averaging, outlier rejection, and stacking. At this point, I don't want to do a fine-grained background extraction, just something basic. The AutomaticBackgroundExtractor is perfect for this. In order to apply it to all my images, I use an ImageContainer:

I add all the subframes, set the output directory, and change the output template (by default it adds a timestamp to the filename; I just add "_ABE" to it). Next, I open the AutomaticBackgroundExtractor process, set Correction to "Subtraction", check "Discard Background Model", and apply it to the ImageContainer. It then runs over all the files and stores the results in the output directory. As a side effect, it also opens all the files - which sucks up a lot of memory. I haven't figured out yet how to avoid that ...
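Conceptually, what ABE does here is fit a smooth model of the sky and subtract it. A toy Python version of that idea (a plain least-squares polynomial fit; the real process samples the background and rejects structures like stars, which this sketch ignores):

import numpy as np

def subtract_background(img, degree=2):
    """Fit a low-order 2-D polynomial to the image and subtract it."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w                   # normalize coordinates to [0, 1]
    y = yy.ravel() / h
    # design matrix with all terms x^i * y^j for i + j <= degree
    A = np.stack([x**i * y**j
                  for i in range(degree + 1)
                  for j in range(degree + 1 - i)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    model = (A @ coeffs).reshape(h, w)   # the smooth background model
    return img - model                   # Correction: "Subtraction"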

I found a good presentation by Jordi Gallego, "Image integration techniques: Increasing SNR and outlier rejection with PixInsight". It gives a good introduction to image integration: average vs. median combination and the different pixel rejection algorithms. There are two basic steps:

  1. Which pixel rejection algorithm to use (Sigma Clipping, Winsorized Sigma Clipping, Linear Fit Clipping). His method is to integrate the images all 3 ways plus once without rejection, and choose the algorithm that yields the highest SNR, i.e. comes closest to the no-rejection result (see the sketch after this list). The ImageIntegration process outputs this value at the end of its run.
  2. Tweak the rejection limits:
    "Sigma Low" such that we just don't get any dark pixels, and
    "Sigma High" such that we just remove any bright unwanted pixels (e.g. the trail of a plane).
Step #1 is fairly straightforward. Open the ImageIntegration process. Make sure to press the "Reset" button so that we start from scratch. Add all files from one of the filter directories and press "Apply Global" (the default is "No Pixel Rejection").

After 215.6 seconds, I get these values:

Gaussian noise estimates : 2.8248e-004
Scale estimates : 4.4882e-004
Location estimates : 9.2372e-002
Reference noise reduction : 1.7204
Median noise reduction : 1.6913

The main value to maximize is the last one (Median noise reduction).
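As a rough sanity check (my own reasoning, with an assumed frame count, and reading this figure as the ratio of subframe noise to integrated-image noise): averaging N frames of equal, uncorrelated Gaussian noise improves the noise by at most a factor of sqrt(N); real subframes with gradients and correlated noise land well below that ceiling.

from math import sqrt

n_frames = 30                  # hypothetical subframe count
ideal = sqrt(n_frames)         # theoretical best case: ~5.48
measured = 1.6913              # the value reported above
print(f"{measured / ideal:.0%} of the ideal")   # real data falls well short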

I repeat this with the 3 pixel rejection algorithms and get these values:

Sigma Clipping: 1.6146
Winsorized Sigma Clipping: 1.6422
Linear Fit Clipping: 1.6533

First, you will notice that the subsequent runs are MUCH faster (less than 30 seconds). This is because PixInsight caches the analysis of the individual subframes and does not need to redo it.

Linear Fit Clipping is the highest (1.6533) and not too far from the result without any rejection (1.6913).

For step #2, we need to find a region with both bright and dark outlier pixels in our image. I am starting with the red filter subframes; here is the image I get without any pixel rejection:

I have a plane trail in it and choose the lower right area (which is dark but has the trail):

In ImageIntegration, I can restrict the integration analysis to a region defined by a preview. I select my preview by clicking "From Preview" (this makes the analysis even faster and lets me look at the same subsection of the image every time).

Preview with "Linear Fit Low"=5 (Default):

No black pixels, increase to 8:

Still, no black pixels, increase to 10 (maximum):

Still no black pixels, so we can leave it this high. Next the high value. With the default of 2.5 we can't see the plane trail, let's increase to 5:

No trail, let's try 8:

No trail, set it to 10 (maximum):

Now, you can see the trail in the middle. Backing down to 9:
Still, you can see it. So, we keep it at 8.

Now, we remove the "Region of Interest" and run the integration with these parameters. We get a Median noise reduction of 1.7038 - which is even higher(??) than the no-rejection value (1.6913)?

We store this image, close all files, reset the ImageIntegration process, and repeat the procedure for the green, blue, and Ha filters.

Sunday, April 27, 2014

Image pre-processing with PixInsight - calibration and registration

Important: I'm still new to PixInsight. The instructions below are my current, very incomplete understanding of how this works.

Now that I've started using PixInsight for post-processing, I also want to figure out how to use it for calibration, registration, and stacking.

Before calibrating images, I need to create masters (dark, bias, flat). PixInsight's ImageIntegration process can do that, but I decided to use the BatchPreprocessing script instead, which reads all the calibration frames, creates the masters on the fly - and re-uses them later.
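For reference, the calibration arithmetic that the script automates boils down to something like this (a simplified sketch - real pipelines also handle dark scaling, flat darks, pedestals, etc.):

import numpy as np

def make_master(frames):
    """Median-combine a list of 2-D arrays into a master frame."""
    return np.median(np.stack(frames), axis=0)

def calibrate(light, master_bias, master_dark, master_flat):
    dark = master_dark - master_bias    # bias-subtracted dark current
    flat = master_flat - master_bias    # assumes flats are short enough to ignore dark current
    flat = flat / flat.mean()           # normalize the flat to unit gain
    return (light - master_bias - dark) / flat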


I'm adding Bias, Darks, Lights, and Flats with the "+" buttons at the bottom. Bias and darks are straightforward, but flats are trickier: I took my lights on two different nights and took flats both nights too, which means I will have to calibrate each night individually to use the right flats.

So, first the lights and flats for the first night. I select my first image as the reference image (I have to remember to use it for the other filters too, so that all subs are aligned properly). I created a directory ("processing") to store all the intermediate frames. I have to make sure that "Image Integration" is not checked, as I only want to calibrate and register. Finally, I clicked the "Diagnostics" button, which checks whether everything is set up correctly:

Click "Run" ... and wait ...

Many minutes later, it's done (weird - no success message or anything; you just know it's done when the process console disappears again).

In my processing directory, PixInsight created three directories: master, calibrated, registered. The calibration masters are stored in the master directory:

All subframes are stored in the calibrated and registered directories. PixInsight appended a '_c' to calibrated subframes and a '_c_r' to calibrated and registered subframes.

Next, I want to calibrate and register the subframes from the second night. After the first run, the checkboxes for "Use master bias", "Use master dark" and "Use master flat" are checked for subsequent runs. This is fine for bias and darks, but not for flats, as we want to use the flats from the other night. So, I remove the light and flat frames and load the ones from the second night. I needed to uncheck the "Use master flat" checkbox before loading the flat frames - otherwise PixInsight would treat every flat subframe as a new master. Again, I have to make sure to keep the frame from the first set as the reference frame, so that all images are aligned.
This time it didn't take as long, as the dark and bias masters had already been created.

Now I have to do the same for the green and blue filters. Conveniently, the script can automatically sort frames by filter. I have to remember to uncheck the "Use master flat" checkbox again! So, I'm loading all my green and blue flats and lights (you can see in the screenshot how PixInsight separated the blue and green flats):

Quick check with "Diagnostics" to make sure that I didn't forget anything - and "Run".

After it's done, I have the following files in the master directory:

The bias and dark masters, plus one flat master for each filter (the script overwrote the red flat master from the first run). And in the registered directory, I have three subdirectories: blue, green, red.

Now that I'm done with registering, I can move on to stacking.


Wednesday, April 23, 2014

Focuser Slip

Before I sent the PL16070 camera back to FLI, I wanted to image Markarian's Chain. But somehow I could not get focus. When I ran SGP's autofocus routine, it went all the way to focus position 2 but would not build a V-shape anymore. As I needed to send the camera back anyway, I didn't want to spend too much time analyzing what happened. But when I put my SX H694 camera back on, I had the same effect. When I then checked the scope, I could see that despite position 2, the focuser was not completely in! Apparently, the weight of the ProLine camera (5.6 lbs compared to 2.8 lbs for the MicroLine cameras) pulled too much on the Robofocus, making it slip.
Seems like I'm pushing my luck a little bit with my current setup. Maybe I should think about a better focuser after all. Or replace my OAG with a guide scope.

Monday, April 21, 2014

The Leo Triplet


These are three galaxies in the constellation Leo, at a distance of about 35 million light-years.

This is the first time I'm doing RGB imaging and using PixInsight for post-processing. Considering that, I'm pretty happy with the results.

Sunday, April 20, 2014

Galaxy imaging from our backyard - Using PixInsight

Now that galaxy season is here, I want to figure out how to do RGB imaging from our backyard. I have heard a lot about PixInsight's capabilities for dealing with gradients and noise. On the PixInsight forum, I found a tutorial inside this thread.

I chose the Leo Triplet. I took 13 hours of images (5 min subs). On top of the light pollution, the moon was pretty full too. Here is what the individual subs looked like:

First, when I registered and stacked them in CCDStack, I had to remove A LOT of subs. I ended up with 2h 45min per channel (i.e., I had to remove almost 2 hours of data per channel!).

When I combined the 3 channels, this is what I got:


Crazy gradients, because the 3 color stacks had severe gradients:

I smoothed the gradients with the AutomaticBackgroundExtractor (ABE):

Now, when I combined them, I got:

Now, I applied ABE to this image:

Looks much better. Next, I want to apply DynamicBackgroundExtraction (DBE). First, I let DBE create a mesh:

Then I make sure that no data points are too close to the galaxies and add more data points around the galaxies:

And finally, I apply the DBE:
There aren't any gradients left, but still a lot of background noise.

Next, I try to neutralize this with BackgroundNeutralization:
I am not sure if I'm doing something wrong here. There is virtually no difference before and after BackgroundNeutralization...

Next, I'm using ColorCalibration to correct the colors:
Now, the image looks pretty dark.

Next is Deconvolution. I struggled with this quite a lot, but I found a thread on the PixInsight forum that explains the individual steps better here. First, I have to create a PSF model:

And next a star mask to protect the stars from deconvolution:

And finally, I apply Deconvolution.
You can see the difference in the magnified preview of NGC 3628 (top: after deconvolution, bottom: before).
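PixInsight's Deconvolution offers a regularized Richardson-Lucy algorithm (among others); a bare-bones, unregularized equivalent in Python looks like this (the synthetic Gaussian PSF and all parameters are purely illustrative - in practice you would use the PSF measured above and protect the stars with the mask):

import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()               # a PSF must sum to 1

psf = gaussian_psf()
truth = np.zeros((64, 64))
truth[32, 32] = 1.0                      # a single synthetic "star"
blurred = fftconvolve(truth, psf, mode="same")
restored = restoration.richardson_lucy(blurred, psf, 30)   # 30 iterations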

And now, I apply a HistogramTransformation:

Next, I increase the saturation to bring out the colors:
You can see how much I increased the saturation, but there isn't much color coming out :-(

ACDNR for Chrominance:

HistogramTransformation to adjust the black point in all three channels:

The image itself now looks like this:

I posted to the PixInsight forum, asking for help at this step...


Tuesday, April 15, 2014

Lunar Eclipse

The clouds made it impossible to do any "serious" imaging. So, I reverted to just a tripod, my Nikon D7000 and a lot of manual focus and exposure adjustment:

At the end of the core shadow.

First glimpse out of Earth's core shadow.

With Mars (in the upper right corner).

Sunday, April 13, 2014

Laptop broken!

ARRRGGGHHHH! Last night, I wanted to continue trying out the 16070 chip. But then I noticed that my laptop was not charging consistently - it went on and off every few seconds, with the result that the laptop would discharge (slowly, but still). I tried various things: a different power supply, a 12V power supply... but nothing. When the laptop is turned off, it recharges completely. So it should still be possible to back up all the data (mostly images).

... now looking for a new laptop ...

I will use this opportunity to get a laptop with more memory and a newer, faster CPU.

Wednesday, April 2, 2014

NGC 2327 - The Parrot Nebula

This was a tricky object, as it is very low and passes behind our neighbor's large tree. I could only image it for 2-3 hours per night.
(click on image for more detail)

This nebula is part of a larger region known as the Seagull Nebula. It's an HII region and emission nebula with an embedded star - HD 53367 - which ionizes it. HD 53367 is a young 20-solar-mass star with a 5-solar-mass companion in a highly elliptical orbit. The nebula is at a distance of 3750 light-years from Earth and has a diameter of ~100 light-years.

This image consists of 2h 10min of Ha data and 6h 20min of OIII and SII data (overall almost 16 hours!). I would like to image this object again with a camera with a larger field of view. The Parrot Nebula itself contains very little OIII (blue) signal, but you can see in the upper left corner that the larger Seagull Nebula contains more.

This is the first image that I processed with PixInsight.

Tuesday, April 1, 2014

Giving PixInsight another try

My current workflow for (narrowband) images is getting more and more convoluted: first CCDStack, then Photoshop - and then sometimes back to CCDStack. And in Photoshop, I use various plugins. PixInsight, on the other hand, (apparently) has everything that is needed to process images.

I found this tutorial on how to process narrowband images with PixInsight - it gives step-by-step instructions. I wanted to try it out on my recent images of the Parrot Nebula. I did the calibration, alignment, and stacking in CCDStack.

First, here is what I get with Photoshop:

And here is my first try with PixInsight (using the color combination from the tutorial):
There isn't a lot of color here (the Ha is so dominant), and I was clearly too ambitious and made the image WAY too bright and saturated.

So, trying again with the standard Hubble Palette resulted in this image:


I like the colors of the Photoshop version better. But with PixInsight, I could get WAY more detail, and the sky looks much better too.

I asked on both the narrowbandimaging mailing list and the PixInsight forum how to better color-combine narrowband images. The two suggestions I received were to use CurvesTransformation or HistogramTransformation.


I played with CurvesTransformation but found it very hard to manage. I could not find a way to boost the signal in one channel (red) while reducing another (green) - every change I tried had a broad, heavy impact on the entire image and color scheme.

I had more luck with the HistogramTransformation process. I could boost the red a little and reduce the green a little. Unfortunately, this had the side effect that the background became too noisy. I tried to mitigate that by creating a mask: extracting the L component of the image and clipping the black point. With this mask, I could protect the background and apply the changes mostly to the nebula. This was the result:

Overall, I like the detail and the color in this image the most.
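As a footnote, the mask trick above boils down to something like this (a conceptual Python sketch of the idea, not PixInsight's implementation):

import numpy as np

def luminance_mask(rgb, black_point=0.1):
    """rgb: float array of shape (h, w, 3) in [0, 1]; returns a 0..1 weight mask."""
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance
    return np.clip((lum - black_point) / (1 - black_point), 0.0, 1.0)

def masked_apply(original, adjusted, mask):
    """Background (mask ~ 0) keeps the original; the nebula gets the adjustment."""
    return original * (1 - mask[..., None]) + adjusted * mask[..., None]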