Tuesday, May 27, 2014

Using Google Drive for all astro images

When I asked on the SGPro mailing list about copying/syncing configs between two computers, somebody mentioned that in the next main release you can specify where to store/read the config settings. And with that, you can store them on Dropbox or Google Drive for automatic syncing. I will never image from both computers at the same time, so the ability to image from one and then pick up changes from the other is perfect.

I was thinking about this some more and actually really like the idea of using Google Drive for this:
  • I will still use local storage speed for writing images to disk
  • They will get automatically synced in the background, and in the morning they are available locally on my laptop
  • They are automatically backed up
  • I can share individual images (just via a URL). Either on Astrobin, where you can link to the original FITS/TIFF file of an image (check the Crescent Nebula page under "Link to TIFF/FITS"), or when I have questions about something and want to send a file link around. This will make it way easier (e.g. for the stacked images of my Rosette Nebula image)
And from a recent (internal) promo, I have 1TB(!!) storage space on Google Drive!

I first set my imaging location in SGPro to the Google Drive folder. Then I checked CCDAP - it also has a setting for where to write images, and I set that to Google Drive as well. Next, I checked TheSkyX, and it seems to have one entire tree (for images, guide images, config settings...) that gets stored under a fixed location (Documents\Software Bisque\TheSky Professional Edition). I asked on the mailing list if this can be adjusted.

Finally, I moved all my images from my local drive to Google Drive. Well, I started this a few hours back - it will take a while* to sync all those images first from my laptop to Google Drive - and then back to the NUC...

But now that everything is set up, I can see that the NUC stores a newly captured image and less than a minute later it is on my other laptop. That's a great setup!

* It actually took more than a week to move the 500 GB around!

Monday, May 26, 2014

Adding a NUC computer for imaging

I read in a couple of places about NUC computers that people mount directly on the mount and use for imaging, and then VNC into them via wi-fi to control them. For imaging at home, this sounded like a great solution - mostly because I won't have to run several cables from the scope into the house (and risk data loss every now and then). Also, a couple of times I cut an imaging session short by crashing my imaging laptop while using it for other things.

At the same time, I want to have my setup flexible enough such that I can image directly from the laptop when I'm in the field (and don't want to power two computers from my batteries).

I changed my cabling slightly to run 3 USB cables from the mount: camera, scope, "the rest". The cables are long enough that I can extend them outwards to plug them into the laptop, or roll them up and plug them into the NUC on the scope. I ordered a NUC, memory, SSD, wifi card, wireless keyboard, display adapter, and Windows 7:

Assembly was easy: First open the back and insert the memory modules:

Then insert the wifi card and connect it:
And finally the SSD card:

Put the back cover back on - that's it!

Next, I had to hook it up and install Windows. Because the NUC only has a mini-HDMI port, I had to buy an HDMI -> mini-HDMI adapter. Also, I wanted to connect it to my desktop monitor, which only has a DVI input, so I needed that adapter too. I still have a (super slow) external DVD drive that I could connect and use for the Windows installation. But when the first screen came up, I realized that I needed a wired keyboard (the wireless keyboard needs drivers which aren't included in the Windows installer).
With all that, the Windows installation went very smoothly. Before long, I had my NUC ready to work. The only tricky bit was the wi-fi network card: it needed a driver from the Intel web site. There was no obvious link anywhere, and it took me a while to find it.

Next, I noticed that several devices weren't recognized properly (USB hub, network, "unknown"). I went to the Intel site, found a bundle of drivers for this NUC, downloaded it and ran the setup programs for the network card and the USB 3.0 driver, which fixed everything but the "unknown" device. It showed the hardware IDs "ACPI\NTN0530" and "*NTN0530". I read somewhere that this is the chipset, so I ran the chipset setup program, but that didn't change anything. Next, I tried the "Consumer Infrared" setup... and that fixed it!!!

When I searched on the Intel site for these drivers, I also found a driver update utility page (http://www.intel.com/p/en_US/support/detect). When I ran that, it updated some other components (Bluetooth, Audio...)

Next, I installed TightVNC for remote control. That was fairly easy, and I could start controlling my NUC from my regular laptop. The only issue I have with TightVNC is that I can't easily reboot the NUC: TightVNC disconnects very early in the shutdown process, and I think Windows then waits for a user prompt to close programs that aren't shutting down fast enough. I had to power the NUC off and on every time I wanted to reboot it. I will try to use Windows' built-in tools for this.
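One option I want to try is Windows' built-in shutdown command, which can force-close hung programs during a restart (the hostname below is a placeholder, not my actual machine name):

```shell
# restart immediately and force-close programs that would otherwise block the shutdown
shutdown /r /f /t 0

# or trigger the restart remotely from the laptop (needs admin rights on the target)
shutdown /r /f /t 0 /m \\nuc-hostname
```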

Finally, I could install all the software for imaging. Initially, I thought that I would only need PHD2 and SequenceGeneratorPro. But of course I also need all the drivers:
  • Robofocus
  • FLI camera + filter wheel
  • Starlight Xpress camera
  • Starlight Xpress filter wheel
  • Astro-Physics mount
  • TEMPerHum (this one took me a little longer - but after a couple of reboots, the computer recognized the TEMPerHum device)
  • Alnitak Flip-Flat
And for almost all of that, I needed the ASCOM Platform first.

Underneath all of that, I needed drivers for the startech.com serial->USB converter. Here, I made sure to set the converters to use the same serial ports as on my laptop, so that I can reuse the config settings from my laptop.

After several installations, reboots and lots of trial and error, I finally had PHD2 working. And everything in SequenceGeneratorPro. There was some weirdness with my USB hub going on and off, but I think it finally resolved itself.

Next, I installed the local astrometry.net server. And then I copied all settings from SGPro over. I couldn't find the PHD2 settings to copy them over, so I entered them manually.

I finally attached the NUC to the mount and redid all the cabling (I used this opportunity to replace one USB cable and to add an extra USB cable for the camera that goes directly to the NUC):

Finally, I could try it out.

  • Guiding with PHD2 went well
  • A few SGPro settings weren't copied over:
    • Name scheme and base directory
    • Default Profile
When I took my first images, I noticed some crazy oscillations in the RA axis and remembered that I had also used the opportunity of having the scope off the mount to remesh the gears. Which means I have to record a new periodic error curve.

... which meant I had to install PEMPro again (easy) and then AstroArt. The AstroArt download link that I had was long expired, so I requested a new one. At least they sent me a new one within a day (even on a Sunday!)

Also, I like the comfort of TheSkyX for random slews. Unfortunately, you have to have an active subscription to download the software again (even software that you previously purchased)... So, I forked over the $75 for a new 1-year subscription.

Finally, I had to reconfigure our home network. In order to get good VNC performance, I moved all our other devices to the (slower) 2.4 GHz band and left only the NUC and my laptop on the 5 GHz band. With that, I can control the NUC pretty well from my laptop.


When I tried to use a Wireless-N connection, I ran into a stupid incompatibility:
I could not connect the NUC (with its Centrino Advanced-N 6235 card) to the Wireless-N router. When searching, I found this page that recommends setting "802.11 mode" to "Use 802.11n only" and "Channel width" to "40 MHz" on the wireless router. But when I did that, security was always disabled. I then found in this forum that my router (a Linksys WRT610N) has a bug: when you select a fixed "Channel width", it can't keep the security setting. Bah!
So, I connected the NUC to the Wireless-G network only :-(

Thursday, May 22, 2014

Light Pollution Filter for my Nikon D7000

For imaging with my Nikon D7000 camera, I finally bought a Light Pollution clip-in filter. I ordered a Hutech LPS-D1-N4 filter. It arrived within 2 days! Installation was very easy:

This is how the filter looks:
and from the other side:

It's a little tricky to see, but the filter has some screws, visible in the second image. This side has to face outward/up when you insert it.

Inserting it:
First, put the filter into the camera
like this:
Then, with a small screwdriver, push it down (of course only on the plastic parts):
Until it's all snug in:

Now put the lens back on. If you inserted the filter the wrong way round, focusing won't work.

Here is the effect on natural day light:
Without filter:
With filter:

You can see the bluish hue on the right-hand side.

Monday, May 19, 2014


M106 is a galaxy 20-25 million light-years away. What's astounding about M106 is the supermassive black hole at the center of the galaxy. The mass of this black hole is about 30 million times the mass of our sun. As a comparison, the black hole in our Milky Way has a mass of 4 million suns. And the black hole in our Milky Way is not active (i.e. it's not actively eating up material), whereas the black hole in M106 is very active. And because it is so big and active, it eats up a lot of material, and in the process ejects some material back out with very high energy. These jets of high-speed material slam into the gas in the galaxy, heating it up to over 1 million degrees (for comparison, the surface of our sun is only about 5,500 degrees!) and making it emit X-rays.
... that doesn't sound like the best galaxy to live in!

This image consists of 150 min of Ha data, 110 min of luminance data and 60 min each of RGB data. I used a light pollution filter (IDAS LPS-P2 from Hutech) as the luminance filter for this image. I also used the HaRGBCombination script from Silvercup (instead of the SHO-AIP script that comes with PixInsight); I found that it creates a better color mix. And I used a different approach to process the luminance image. Not bad for only 7.3 hours of imaging time from my light-polluted backyard - but when you zoom in, you can see that this image could use more data.
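As a quick sanity check, the exposure times above add up like this:

```python
# Exposure times from the post, in minutes per filter
exposures = {"Ha": 150, "L": 110, "R": 60, "G": 60, "B": 60}

total_minutes = sum(exposures.values())
total_hours = total_minutes / 60

print(total_minutes, "min =", round(total_hours, 1), "hours")  # 440 min = 7.3 hours
```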

Saturday, May 17, 2014

A different approach to process luminance images in Pixinsight

Recently, the PixInsight team released a new tutorial on how to do Deconvolution and Noise Reduction with some of the new tools in PixInsight. I tried it out on data that I took of M106.

After using CanonBandingRemoval and neutralizing the background, this is how it looked:

The first step in the tutorial is to create a PSF. It encourages you to focus a) on small stars and b) on stars around the galaxies in the image. Here is my selection:

Then I created a star mask:

Luckily, my star mask didn't pick up any of the non-star elements, so I didn't have to process it further.

And with the PSF and the star mask we can finally use Deconvolution:
Clearly, lots of detail recovered in the galaxy. But if you look closely, there is a dark halo around the stars. It's easier to see in darker regions:

I played with all the deringing parameters, and the one that was most effective here was the "Global dark" parameter. Setting it to 0.009 (it was 0.005), I get this result:

Now, the halo is too bright. Setting it to 0.007:

Almost gone. 0.006:

Perfect - now, there is no halo around the stars at all!

Next is noise reduction. In the past, I used a copy of the image or RangeSelection to protect the larger structures. But because the image isn't stretched yet, we can use a "Linear Mask" in the MultiscaleLinearTransform tool. With the preview functionality, it is pretty easy to create one using the "Amplification" slider (one thing to remember is to create a duplicate of the image, as this won't create a new image but modify the existing one!). I want just enough to protect the 3 smaller, fainter galaxies below M106. This one looks good:

It's worth checking the created mask with an autostretch - it protects much more than you would think from this image:
I found that I had to play with the strength of the mask a lot, applying the noise reduction (next step) and trying to make the background as little blotchy as possible.
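Conceptually, a linear mask just amplifies the still-linear data and clips it, so high-signal areas (galaxy, stars) saturate to white and the faint background stays dark. A minimal numpy sketch (the function name and amplification value are my own, not PixInsight's):

```python
import numpy as np

def linear_mask(img, amplification=100.0):
    # scale the linear (unstretched) data and clip to [0, 1]:
    # bright structures saturate to 1, faint background stays near 0
    return np.clip(img * amplification, 0.0, 1.0)

# toy linear image: faint background plus one brighter "galaxy" pixel
img = np.array([[0.001, 0.002],
                [0.003, 0.050]])
mask = linear_mask(img)
print(mask)  # background pixels stay low, the bright pixel clips to 1.0
```

The "Amplification" slider plays the role of the scale factor here; how much the mask protects depends entirely on that value, which is why checking it with an autostretch (below) matters.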

Next, we apply the noise reduction with MultiscaleLinearTransform:

Nice noise reduction without harming the galaxy details. Next, we stretch the image. I now always do a strong first stretch until the background just starts to get a little bright. Then I adjust the black point so that I don't clip any pixels (or at most a very few). Then I do a second, much smaller stretch and adjust the black point again.
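The two-pass stretch above can be sketched with a midtones transfer function (the curve PixInsight's HistogramTransformation uses); the parameter values below are invented purely for illustration:

```python
import numpy as np

def mtf(x, m):
    # midtones transfer function: maps the midtones balance m to 0.5,
    # keeps 0 -> 0 and 1 -> 1
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def stretch(img, midtones, black_point=0.0):
    # shift the black point first (clipping as little as possible), then stretch
    x = np.clip((img - black_point) / (1.0 - black_point), 0.0, 1.0)
    return mtf(x, midtones)

img = np.array([0.01, 0.05, 0.20])                         # linear data
first = stretch(img, midtones=0.02)                        # strong first stretch
second = stretch(first, midtones=0.40, black_point=0.10)   # smaller second pass
```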

Looks good. Finally, we are doing a dynamic range compression using HDRMultiscaleTransform:

This is how the luminance image now looks:

I'm very happy with it - lots of detail in the galaxy, and the faint outer regions are preserved. The other galaxies below also show some structure.

Friday, May 16, 2014

Ha-only version of Crescent Nebula

I used the star dimming technique from Gerald Wechselberger to create an Ha image of the Crescent Nebula with very few stars:

And then applied some noise reduction, sharpening, histogram transformation and curves transformation:
(click on image for larger version)
Pretty dramatic effect!

Thursday, May 15, 2014

Removing stars from images for tone mapping

For my recent image of the Crescent Nebula, I researched ways to combine the narrowband images. I found a presentation by J-P Metsavainio (astroanarchy) from this year's NEAIC about tone mapping. The main idea is to remove the stars from the images, then stretch and combine them (without bloating the stars), and then copy the stars back in (http://astroanarchy.blogspot.co.uk/2009/11/power-of-tone-mapping.html).
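The workflow can be sketched in a few lines of numpy. The hard part is the star removal itself (Straton and the approaches below); here the stars are simply known up front, so this only illustrates the stretch-the-starless-image-then-add-stars-back idea:

```python
import numpy as np

def simple_stretch(x, m=0.1):
    # stand-in for a real histogram stretch (midtones transfer function)
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

# toy 1-D "image": faint nebula everywhere plus two bright stars
nebula = np.full(5, 0.05)
stars = np.array([0.0, 0.8, 0.0, 0.6, 0.0])
img = np.clip(nebula + stars, 0.0, 1.0)

starless = img - stars                        # 1. remove the stars
boosted = simple_stretch(starless)            # 2. stretch the starless image hard
result = np.clip(boosted + stars, 0.0, 1.0)   # 3. add the unstretched stars back

# the nebula comes out much brighter, but the stars were never stretched,
# so they don't bloat
```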

I wanted to use this technique especially for the SII image of the Crescent Nebula, which has myriad stars and only a very dim signal from the nebula:

I have used Straton in the past for this, but never for an image of this complexity (many stars, dim signal). Trying it out, I get this:

Lots of distortion in this image. Zooming in:

Clearly, a lot of signal got lost when removing all these stars - not only because there are so many, but also because some of them are quite bloated. Let's try again with the Ha image, which has fewer and smaller stars:

This is the outcome of the Straton star removal:

Looks much better. Let's zoom in:

There is some signal loss, but it's not as pronounced as with the SII image. So, the issue is the tiny stars that plaster the SII image, and how to reconstruct what's underneath them (without disturbing the adjacent regions). I tried playing with the (few) parameters in Straton, but could not get better results.

Searching around (and especially looking for a Pixinsight-based solution), I found a couple of other attempts:
  1. A Photoshop approach and Photoshop action by J-P Metsavainio himself
  2. An approach in this PixInsight thread (search for "Remove stars" on the page) by Alejandro Tombolini
  3. A video tutorial by Gerald Wechselberger for Pixinsight

#1 Photoshop actions by J-P Metsavainio

I installed the StarRemoving.atn actions and tried it out. There were actually two actions: RemoveStars and StarRemove1. Both delivered similar results:



The first one (which just applies "Dust and Scratches" several times with various radii) did an OK job with the small stars and kept a lot of the nebula intact. But when zooming in, you can see that the image is very "blotchy". The second one left a lot of the stars...

#2 Approach by Alejandro Tombolini

The first step is to extract the stars and maybe some of the nebula. Using the approach by Alejandro, I get this:
Not bad for such a simple approach. But it missed a lot of the tiny stars:
Original:
Extracted Stars:

When I use this star mask in the next step (subtracting it from the image), I get this image:
That is not what we were looking for: we wanted the isolated nebula, with maybe some star fragments still in it.

I tried using the star mask from above (created using StarMask with smoothness=4). This results in an (almost) black image, as the stars and the background have a different level than in the original image (if I subtract the original image from the star mask, I basically get the star mask again). I.e. this star mask is good for protecting the image for subsequent operations, but not for direct manipulation.
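The level problem can be seen in a toy numpy example (all values invented): a star extraction at the same brightness level as the image subtracts cleanly, while a StarMask-style mask (right shape, wrong levels) just punches dark holes into the image:

```python
import numpy as np

img = np.array([0.10, 0.90, 0.10, 0.70, 0.10])    # nebula at 0.10 plus two stars
star_img = np.array([0.0, 0.80, 0.0, 0.60, 0.0])  # stars extracted at image level
mask = np.array([0.0, 1.0, 0.0, 1.0, 0.0])        # StarMask: shape only

clean = np.clip(img - star_img, 0.0, 1.0)  # flat nebula, stars removed
holes = np.clip(img - mask, 0.0, 1.0)      # stars replaced by black holes
```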

#3 Approach by Gerald Wechselberger

The very first step here is to create a star mask that contains all stars. But I was not successful, as my star masks never contained all the smaller stars. I asked on the PixInsight mailing list how to improve the star mask to include more stars, and got a very quick response on how to use the parameters in StarMask. This is what I got:
On first look, this looks much better - but it also cuts out the nebula itself. Zooming in:
Original:
Extracted Stars:

This star mask would cover a lot of the underlying nebula and background. Trying different settings for smoothness:
With smoothness=2, the individual stars become complete squares - which isn't good.

I also wrote an email to Gerald asking him for advice. He was very kind and sent me back a series of steps to minimize the smaller stars.

Here is his sequence:
Original Image:

Star Mask:

Applied Star Mask:

Step1: MMT

Step2: MMT

Step3: HT

Step4: MMT

Step5: HT


Step7: PM

Step8: PM

The individual steps were quite subtle, so here is a before and after comparison:

The smaller stars are clearly dimmer and the nebula more pronounced (and not distorted).

I will process my image again using this approach.