Lynkeos concepts

Lynkeos uses some signal processing concepts which are explained hereunder.

Signal and noise

The image recorded by the detector is not perfect (otherwise you wouldn't use this software). It can be described as a perfect image deteriorated by some processes (turbulence for example).
In signal processing, the perfect image is called the "signal" and the deteriorations, the "noise".
The term noise usually refers to a statistically random deterioration; for a systematic one, we use the term "bias".
When many images are added, the signal and the biases add directly (they grow linearly with the number of images), while the random noises add as a quadratic sum. This means that the resulting noise is the square root of the sum of the squares of the noise in each image (remember Pythagoras' theorem?). In short, when stacking, the signal and the bias grow much faster than the random noise: stack 4 images and the signal and bias are multiplied by 4 but the random noise only by 2, so the signal ends up two times stronger, relative to the random noise, in the resulting image. As the biases accumulate along with the signal, they must be corrected before stacking.
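If you want to see this with numbers, here is a small sketch (in Python with NumPy, not Lynkeos code) that stacks simulated frames made of a constant signal plus random noise; the stacked signal grows like the number of frames while the random noise grows like its square root.

    import numpy as np

    rng = np.random.default_rng(0)
    signal = 100.0        # "perfect" pixel value in each frame
    noise_sigma = 10.0    # random noise per frame

    for n in (1, 4, 16):
        # n simulated frames of many pixels: signal + random noise
        frames = signal + rng.normal(0.0, noise_sigma, size=(n, 100000))
        stack = frames.sum(axis=0)
        # the stacked signal grows like n, the random noise like sqrt(n)
        print(n, stack.mean(), stack.std())   # ~100*n and ~10*sqrt(n)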

The identified noises/biases our images suffer from are :

Each of the many functions of this software is intended to reduce the amount of one kind of noise.

Image calibration

Prior to any processing, the source images must be calibrated, i.e. the biases must be removed.
To create the calibration frames described hereafter, all the webcam settings shall be the same (except where otherwise noted) as for the recording of the images to be processed.

Dark frame

The dark frame is a stack of many frames recorded with the telescope shut, at the same temperature as for the recording of the images to be processed. It is a measure of the thermal bias.
It is subtracted from each image before processing.

Flat field

The flat field is a stack of many images recorded with the telescope looking at an evenly illuminated target, with the same optical setup as the images to be processed, at a shutter speed giving a bright enough, but non saturated image. It is a measure of the uneven transmission/sensitivity.
Each image is divided by the flat field before processing.
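Put together, the calibration step can be sketched like this (a minimal illustration in Python/NumPy, not the actual Lynkeos code; it assumes raw, dark and flat are arrays of the same size, and it normalizes the flat to a mean of 1 so that the division does not change the overall brightness):

    import numpy as np

    def calibrate(raw, dark, flat):
        """Subtract the dark frame, then divide by the normalized flat field."""
        corrected = raw.astype(np.float64) - dark        # remove the thermal bias
        flat_norm = flat / flat.mean()                   # keep the overall brightness unchanged
        return corrected / np.maximum(flat_norm, 1e-6)   # correct the uneven transmission/sensitivity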

Aligning images

To align two images, Lynkeos uses cross-correlation.
To speed up the computation, it uses a Fast Fourier Transform (FFT); this is why the alignment box is a square whose side is a power of two.
It then searches for a peak in the correlation result, which gives the alignment offset between the two images.
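Here is a rough sketch of the idea (again Python/NumPy, not the actual implementation; a real version would at least interpolate around the peak to get sub-pixel precision):

    import numpy as np

    def align_offset(reference, image):
        """Estimate the (dy, dx) shift between two square, power-of-two sized images."""
        # cross-correlation computed in the Fourier domain
        corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # peaks past the middle correspond to negative shifts;
        # the sign convention depends on which image you take as the reference
        n = reference.shape[0]
        if dy > n // 2:
            dy -= n
        if dx > n // 2:
            dx -= n
        return dy, dx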

Some user preferences affect this processing:

Analyzing images

To compute an image quality level, Lynkeos uses one of two methods, based on:

  1. Image entropy.
    The entropy is a measure of the image's "lack of information": a "flat" image, with all pixels at the same value, has the highest entropy and the least information in it (it can be described with just one pixel value). The quality value is derived from the entropy, and the best images are supposed to be those with the highest quality value.
    This method is rather efficient on planetary images.
  2. Power spectrum.
    Once again, the FFT is used for speed's sake; that is why the image part is a square whose side is a power of two.
    The quality level is the mean value of the power spectrum between two frequencies, which can be changed in the user preferences. This method has not proved very effective but has been kept for historical reasons: as it was the only method in early releases of Lynkeos, it may still be of some use to some users. (A sketch of both measures is given below.)
No analysis method is perfect, so far. Therefore, if you know a killer analysis algorithm, let me know.
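If you want to experiment, here is a rough sketch of both kinds of quality measure (illustrative formulas only, not the exact expressions used by Lynkeos; the entropy version treats the pixel values as a probability distribution, and the cut-off frequencies are placeholders for the values set in the preferences):

    import numpy as np

    def entropy_quality(image):
        """Entropy-based quality: a flat image has maximum entropy, hence minimum quality."""
        p = image.ravel().astype(np.float64)
        p = p / p.sum()                  # treat the (non-negative) pixel values as a distribution
        p = p[p > 0]
        entropy = -(p * np.log(p)).sum()
        return -entropy                  # hypothetical mapping: lower entropy -> higher quality

    def power_spectrum_quality(image, f_low, f_high):
        """Power-spectrum quality: mean power between two radial frequencies (square image)."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
        n = image.shape[0]
        y, x = np.indices(spectrum.shape)
        radius = np.hypot(y - n // 2, x - n // 2) / n    # radial frequency, in cycles per pixel
        band = (radius >= f_low) & (radius <= f_high)
        return spectrum[band].mean()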

Stacking the images

To stack the images, Lynkeos translates each image by its alignment offset, down to fractions of a pixel. Each pixel is split into 4 parts according to the fractional offset and accumulated into the pixels of the resulting image.

The stacking is done with floating point numbers, so there is no risk of "brightness overflow" even if you stack many bright images.
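The accumulation of one image into the stack can be sketched like this (an illustrative version assuming both arrays have the same size; parts shifted out of the frame are simply dropped):

    import numpy as np

    def accumulate(stack, image, dy, dx):
        """Add 'image' into the floating point 'stack', shifted by (dy, dx) with sub-pixel precision."""
        iy, fy = int(np.floor(dy)), dy - np.floor(dy)
        ix, fx = int(np.floor(dx)), dx - np.floor(dx)
        # each pixel is split into 4 parts according to the fractional offset
        parts = {(0, 0): (1 - fy) * (1 - fx), (0, 1): (1 - fy) * fx,
                 (1, 0): fy * (1 - fx),       (1, 1): fy * fx}
        h, w = image.shape
        for (oy, ox), weight in parts.items():
            if weight == 0.0:
                continue
            sy, sx = iy + oy, ix + ox        # whole-pixel part of the shift
            # clip to the part of the image that lands inside the stack
            dst = stack[max(sy, 0):h + min(sy, 0), max(sx, 0):w + min(sx, 0)]
            src = image[max(-sy, 0):h - max(sy, 0), max(-sx, 0):w - max(sx, 0)]
            dst += weight * src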

Final processing

The final processing uses two filters: the deconvolution filter and the unsharp masking filter.

The deconvolution filter is exactly what it says (a Wiener deconvolution filter, to be precise). It works by dividing the spectrum of the image by the spectrum of the presumed convolver, a Gaussian in this case.
The threshold value is the convolver spectrum module value below which the division is not performed. This avoids amplifying the highest frequencies too much (experiment with a low threshold to see what this means).
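A simplified sketch of this kind of filtering (not the exact Lynkeos implementation; the image is assumed square for brevity, and 'radius' stands for the width of the Gaussian convolver):

    import numpy as np

    def gaussian_deconvolve(image, radius, threshold):
        """Divide the image spectrum by a Gaussian spectrum, except below the threshold."""
        n = image.shape[0]
        f = np.fft.fftfreq(n)                      # frequencies in cycles per pixel
        fy, fx = np.meshgrid(f, f, indexing="ij")
        # the spectrum of a Gaussian convolver is itself a Gaussian
        convolver = np.exp(-2.0 * (np.pi * radius) ** 2 * (fy ** 2 + fx ** 2))
        spectrum = np.fft.fft2(image)
        # divide only where the convolver spectrum module is above the threshold
        filtered = np.where(convolver > threshold,
                            spectrum / np.maximum(convolver, threshold),
                            spectrum)
        return np.fft.ifft2(filtered).real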

The unsharp masking filter is the digital translation of a darkroom technique. It subtracts a blurred copy from the original image and amplifies the difference. This amplifies small scale details and attenuates large scale brightness fluctuations.
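And a minimal sketch of unsharp masking (using SciPy's Gaussian blur for the blurred copy; 'blur_radius' and 'gain' are illustrative names for the filter parameters):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, blur_radius, gain):
        """Subtract a blurred copy from the image and amplify the difference."""
        blurred = gaussian_filter(image.astype(np.float64), sigma=blur_radius)
        # small scale details are amplified, large scale fluctuations are attenuated
        return image + gain * (image - blurred)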

The filtered image, which is a floating point image, is then brought back to an 8 bit integer RGB image for display on the screen. This conversion uses the black and white levels you provide.
Once again, there is no risk of brightness under- or overflow whatever processing you apply (but don't forget to adjust the levels).

The filtered image is converted in the same way to a 16 bit integer RGB image for export to a TIFF file.
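The conversion to integers can be sketched as follows (an illustration of the principle only; 'black' and 'white' stand for the levels you set):

    import numpy as np

    def to_integer(image, black, white, bits=8):
        """Map the floating point image to integers using the black and white levels."""
        scaled = (image - black) / (white - black)   # 0.0 at the black level, 1.0 at the white level
        scaled = np.clip(scaled, 0.0, 1.0)           # no under- or overflow
        max_value = 2 ** bits - 1                    # 255 for display, 65535 for TIFF export
        dtype = np.uint8 if bits == 8 else np.uint16
        return (scaled * max_value + 0.5).astype(dtype)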

Reference ...