Tony Brown

Elephant Trunk Nebula (IC1396)

This was the third night in a row with reasonable skies. Some cloud cut the shoot short, but I wasn't confident enough in the weather to leave the rig out all night anyway, so having to stop imaging and run a series of darks after about three hours wasn't such a big deal.

The automated run in NINA went really well, with guiding, plate solving and autofocusing doing a good job of recovering picture quality even after minutes of total cloud obscuring the target.  IC1396 proved a really good subject for my wide-field scope, and I was genuinely surprised to be able to pull out the trunk in my final images.

Processing

Rather than detailing the capture process, I thought I'd write about a little experiment I ran while registering and stacking my frames. I've watched YouTube videos debating what matters more when stacking subs: quality or quantity.

My stacker is Deep Sky Stacker (DSS). After my evening's imaging I'd managed to acquire around 132 two-minute subs. Rather than manually eyeballing them one by one, I loaded them all into DSS.

I knew a good number of them would be lost through cloud.  Registering the frames allowed me to easily remove those images where:

  1. The star count was zero or significantly lower than in the best frame

  2. The FWHM (full width at half maximum) rating of the image was significantly worse than the best; my best here was around 2.8, with the worst going up to 4.6. I'd read a forum post where a member stated he'd drop frames with FWHM over 4…I didn't do exactly that, but I kept the number in mind. (There's a rough sketch of this kind of filtering just after this list.)
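For anyone who prefers scripting to clicking, here's a rough Python sketch of that filtering idea. It assumes the registration data has been exported to a CSV with columns named File, Stars and FWHM; those column names are my assumption for illustration, not DSS's actual export format.

```python
import csv

FWHM_LIMIT = 4.0   # the forum rule-of-thumb mentioned above
MIN_STARS = 1      # drop frames where registration found no stars

keep, drop = [], []
with open("registration_table.csv", newline="") as f:
    for row in csv.DictReader(f):
        stars = int(row["Stars"])
        fwhm = float(row["FWHM"])
        # Keep a sub only if stars were detected and the FWHM is acceptable
        if stars >= MIN_STARS and fwhm <= FWHM_LIMIT:
            keep.append(row["File"])
        else:
            drop.append(row["File"])

print(f"Keeping {len(keep)} subs, dropping {len(drop)}")
```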

Loading my flats and darks, I then ran through five stacking processes to come out with five TIFF files. The only difference between the runs was which registered subs were selected each time, as follows:

  1. All subs == 60 two-minute subs, 02:00:00 total integration

  2. FWHM scores <= 4 == 43 two-minute subs, 01:26:00 total integration

  3. 50th percentile by score == 30 two-minute subs, 01:00:00 total integration

  4. 80th percentile by score == 12 two-minute subs, 00:24:00 total integration

  5. 90th percentile by score == 6 two-minute subs, 00:12:00 total integration

In case you were wondering, I copied the registration table from DSS, pasted it into Excel, then used the PERCENTILE function on the Score column to get a threshold value, which I then used back in DSS to select subs above that score.
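If you'd rather skip the Excel round-trip, the same thresholds can be computed with a few lines of Python. The scores below are made-up placeholders; as far as I know, numpy's default percentile interpolation behaves like Excel's PERCENTILE function.

```python
import numpy as np

# Score column copied from the DSS registration table
# (these numbers are made up for illustration)
scores = np.array([312.5, 298.1, 410.7, 355.0, 289.9, 372.3])

for pct in (50, 80, 90):
    # Default linear interpolation, equivalent in spirit to Excel's PERCENTILE
    threshold = np.percentile(scores, pct)
    print(f"{pct}th percentile score threshold: {threshold:.1f}")
```

Subs scoring above the printed threshold are the ones to leave ticked back in DSS.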

I dropped the output of each run into Photoshop, converted it to a 16-bit image, then iteratively applied levels adjustments with a final single curves stretch.  I tried to take the same approach each time, i.e. the same number of stretches for each of the five images.  I will admit that my post-processing skills are very rudimentary and this step isn't automated, so human inaccuracies abound here.  I exported each result as a PNG file.
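For the curious, here's a very rough numpy sketch of what those manual steps amount to numerically. It isn't what Photoshop actually computes: the curves stretch is stood in for by an arcsinh stretch, and the black/white/gamma values are arbitrary placeholders.

```python
import numpy as np

def levels(img, black, white, gamma=1.0):
    """Levels-style adjustment: clip to [black, white], rescale to 0..1,
    then apply a midtone (gamma) correction."""
    out = np.clip((img - black) / (white - black), 0.0, 1.0)
    return out ** (1.0 / gamma)

def curves_stretch(img, strength=5.0):
    """A single smooth stretch (arcsinh) that lifts faint nebulosity
    while keeping bright stars from clipping."""
    return np.arcsinh(strength * img) / np.arcsinh(strength)

# img would be the 16-bit stacked TIFF loaded as floats in 0..1
# (loading code omitted); random data stands in here.
img = np.random.rand(512, 512)

out = levels(img, black=0.05, white=0.95, gamma=1.4)
out = levels(out, black=0.02, white=0.98, gamma=1.2)  # iterate levels
out = curves_stretch(out)                             # final single stretch
```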

Here are the images I ended up with. They are rather large; given the main focus of this post is image quality, it seemed silly to compress them:

90 Percentile

90 Percentile via score meant only 0h 12m of integration

80 Percentile

80 Percentile via score meant 0h 24m of integration

50 Percentile

50 Percentile via score meant 1h 0m of integration

FWHM <= 4

FWHM <= 4 meant 1h 26m of integration

All Subs (after removal of cloud-obscured frames)

All subs gave me 2h 0m of integration

Conclusion

In line with the common understanding, to my eyes it is obvious that selecting only the highest-quality images left me with so little integration time that there is insufficient signal-to-noise ratio in the 90-percentile image, and the same goes for the 80-percentile one.  There is less difference between the 50-percentile and the FWHM <= 4 images; perhaps the FWHM <= 4 stack just wins out.
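As a rough rule of thumb, for equal-length subs the stacked signal-to-noise ratio grows with the square root of the number of frames, so the full 2 h stack (60 subs) should have somewhere around √(60/6) ≈ 3.2 times the SNR of the 6-sub 90-percentile stack, which is consistent with how much noisier the short stacks look.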

Winner…integration time is king, with the proviso that at least some weeding out of really bad subs has to be done.  No surprises, I guess, but it is good to test these ‘knowns’ out yourself sometimes.
