Inside Starnet++

What’s Starnet?

Starnet is a tool developed to remove the stars (or, more precisely, star-like objects) from an astronomical image. It is a one-step process with only one parameter to dial in, which makes star removal simple and easy. It was initially developed as a standalone program, but it became so popular, and proved so powerful, that it has recently been integrated into PixInsight as a standard process.

As mentioned, Starnet is simple to use. Only one parameter can be tweaked, the stride, and once it is chosen starnet does the rest. It analyzes the image, decides which features are stars, and removes them.

It may seem there is no reason for an article about it, but the fact is that the results are not always perfect, and sometimes additional processes must be applied in order to achieve a clean starless image.

The original source of information about starnet can be found at the following link:

basic procedure and analysis

Let’s see how starnet works and then we will analyze the result, looking for opportunities for improvement.

I chose a fairly difficult image, LDN1235, with very bright stars, some stars embedded in bright reflection nebulas, and a very dim nebulosity in the background.

Prior to applying starnet, the image has to be stretched (made non-linear). The author’s recommendation is to use a standard STF stretch.

Application of a standard STF stretch

Once stretched, open the process and choose the stride parameter. Check the starmask option if you want to produce the corresponding starmask too.

Apply the process to the image. The process takes relatively long, depending on your computer and the stride parameter.

This is the result.

It is not perfect, but it is much better than what my skills can produce manually, with far less time and effort. In any case, if we look closer we will see several aspects that can be improved. These issues are the ones present in this particular image with my particular optical system; other cases may differ.


Sometimes starnet fails to remove big stars.

pattern on stars

The pixels substituted by starnet tend to present a geometric pattern that can be harmful to further processes, such as sharpening.


Some halos are not properly removed, I suppose because starnet has considered them background. This can degrade the accuracy of the representation of the background structures.

unwanted removal of non-stellar structures

In this image there is a small, faint galaxy in the upper right area. Its core has been considered a star and has been removed.

Now that we know how it works and which limitations it has, let’s try to get the most out of every step, to see if we can improve the result by better preparing our image and/or doing some postprocessing.



As noted, starnet works only with non-linear images. That means all the processes usually applied in the linear stage should already have been applied, but take into account that any change made to the shapes of the stars (ringing, clipping, halos produced by aggressive noise reduction) can confuse starnet’s algorithm, leading to suboptimal results.

My suggestion at this point would be to perform an application of starnet as soon as possible, and keep a copy of the result in order to check for artifacts introduced by the following processes.

As an example, if I use this workflow in the linear stage:

  1. Crop
  2. DBE
  3. Mure Denoise
  4. MLT/MMT Sharpening (or Deconvolution)
  5. MLT/MMT Noise Reduction
  6. Saturation
  7. Non Linear Transformation

I would make a copy of the Mure-denoised image at step 3, perform an NLT, and apply starnet to keep the result as a reference.


There are three main questions to consider when performing a non-linear transformation:

  • How to apply the clipping (black and white) points?
  • Which process to use? Histogram Transform? Masked Stretch?…
  • How much to stretch?
clipping points

As mentioned, the author suggests applying a standard STF stretch to the image. This procedure works very well in most situations, but for our purposes it does something we want to avoid: it performs the clipping and the stretch in a single step, and therefore we cannot apply (or I don’t know how to apply) a relinearization. Not being able to relinearize a stretched image means it is not feasible to apply starnet before processes designed to be applied in the linear stage.

In conclusion, any stretch performed on the image should be done in two steps: first adjusting the black and white points, and then applying an MTF in a separate application. This procedure will be covered later in more detail.

stretching method

Regarding the stretching method, I compared the most popular ones, but I will say in advance that I recommend Histogram Transformation, even though it does not achieve the best results, for another reason: this kind of stretch can be reversed, returning the image to the linear state. This is such a big advantage that it overcomes the benefits of the other methods, but in any case it is worth seeing the results.

No mathematical approach has been taken here; the images were only stretched to look reasonably similar to the eye.

Top Left: Arcsinh Stretch
Top Right: Masked Stretch
Bottom Left: Histogram Transformation (MTF)
Bottom Right: Histogram T. (Clip)

As seen in the images, ArcSinh Stretch produces a good result overall, especially on the brighter stars, but there are remaining color halos. It is also noticeably the method that achieves the least contrast on the nebula.

Masked Stretch produces slightly better contrast on the nebula and does a good job on the stars in general, but the brighter stars are not well managed.

The same happens with the Histogram Transformations, both MTF and High Clipping, especially the latter. This was expected, because the stars have been severely clipped during the process, losing their Gaussian shape, and this seems to confuse the algorithm.

It seems that big stars are not well managed no matter which stretching method is used, so I assume they will need a cosmetic correction in any case, but the overall result seems better when applying an MTF through Histogram Transformation.


stretch amount

The question of how much to stretch an image is not a silly one, and it should be studied carefully. The following comparison shows six images stretched with Histogram Transformation. Each image has been stretched by a different amount, taking 50% (the last one) to mean that the peak of the histogram sits exactly in the middle; so we have stretches of 6%, 12%, 18%, 25% (standard STF), 37% and 50%. More than this is clearly unusable. An additional STF has been applied (note the green lines on the image tags) to match their brightness for visual comparison purposes.

A general look might lead us to conclude that the less stretch, the better the result, and this is true for the bigger stars, but on closer inspection things are different…

For the 6% and 12% stretches, some tiny stars have not been removed, and for the 37% and 50% stretches, medium stars have been eroded very aggressively, leaving holes in the image. So the solution seems to lie between 18% and 25% (the recommended value). I usually use 18%, but this is a decision the user has to make depending on the image characteristics.
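Since the stretch amounts above are defined by where the histogram peak ends up, the midtones balance that produces a given amount can be computed directly: the midtones transfer function used by HistogramTransformation has a closed form that can be solved for the balance. A minimal Python sketch (function and variable names are mine, not PixInsight's):

```python
def mtf(m, x):
    """Midtones transfer function: maps x in [0, 1] with midtones balance m."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def midtones_for_peak(peak, target):
    """Midtones balance m such that mtf(m, peak) == target,
    e.g. target=0.18 for the 18% stretch discussed above."""
    return peak * (1.0 - target) / (peak * (1.0 - target) + target * (1.0 - peak))

# Example: a linear histogram peak at 0.01, stretched so it lands at 18%:
m = midtones_for_peak(0.01, 0.18)
```

Computing the balance this way makes the "18% vs 25%" decision reproducible instead of being eyeballed on the histogram.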


At this point we have probably reached the best starless result that starnet can provide, so if we are looking for further improvements, we must look into additional manipulations. The following sections propose ways to deal with each of these aspects, using an image as an example. I continued with my shot of LDN1235, but this time using the luminance, a monochrome image, for simplicity.

We will first go back to the application of a non-linear transform, to see the whole proposed workflow from the beginning.

Image Stretch

The selected stretching method is the MTF Histogram Transformation, and the procedure has been split into three steps: two clips and one MTF.

-Determine the white point with HT
This operation seems obvious, but it is not. Incorrect clipping of the star cores can lead to artifacts when applying processes such as, for example, saturation. The cores of the brighter stars must be clipped (their values should be 1.000). This happens naturally when the sensor reaches its FWC (Full Well Capacity), but some processes like DBE can lower their values. To check this, take a measurement on the core of one of the brighter stars in the FOV. If the value is not 1.000, clip it with HT. I suggest performing this step manually and carefully.

Star core with incorrect value
Same star after clipping
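The white-point clip is just a rescale that saturates everything at or above the chosen white point. A sketch of the idea (helper name and pixel values are mine, illustrative only):

```python
def clip_white(pixels, white):
    """HT-style white-point clip: `white` maps to 1.0, values above saturate."""
    return [min(x / white, 1.0) for x in pixels]

# A star core that DBE has pulled down to, say, 0.982 reads as fully
# saturated again after clipping with white = 0.982:
patch = [0.982, 0.750, 0.310]
clipped = clip_white(patch, 0.982)   # first pixel is back to 1.0
```

Note that every other pixel is brightened proportionally, which is why this clip should be done once, carefully, rather than repeated.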

-Determine the black point with HT
We will use STF to transfer the auto stretch to HT. Then we will take note of the midtones value and reset it, so that only the clipping is applied here; the midtones will be applied in a separate process.

-Apply MTF
Now we have an image correctly clipped in both highlights and shadows, and we only need to purely stretch it. To do that we are going to use PixelMath. I used a midtones value slightly higher than the suggested one because, as seen before, it results in a slightly less aggressive stretch than the one proposed by STF, and it gave better results for this image.

We can drag out an instance of the process and keep it. This allows us to relinearize the image after this cleaning workflow, in order to process it in the linear stage.
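The reason a pure MTF stretch is relinearizable is a property of the function itself: the MTF with balance 1 - m is the exact inverse of the MTF with balance m. A sketch (names are mine; verify the corresponding function in your PixelMath expression editor before relying on it):

```python
def mtf(m, x):
    """Midtones transfer function with midtones balance m, on [0, 1]."""
    if x <= 0.0 or x >= 1.0:
        return max(0.0, min(1.0, x))
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def relinearize(m, y):
    """Undo a pure MTF stretch: the MTF with balance 1 - m is its inverse."""
    return mtf(1.0 - m, y)
```

This is why the stretch must be a pure MTF with no clipping folded in: the clip destroys information and cannot be inverted, but the MTF alone can.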

-Rename the result as L_NLT

L_NLT, our image right after non linear transform.
Note that the histogram is located slightly under 25%. This yielded better results than STF for this particular image


The application of the process is almost automatic…

-Apply with the desired Stride parameter. In this example I used the default, 128.

-Rename the resulting starless image as L_sn0

cleaning process

As seen previously, inspection of the result reveals some imperfections that we may want to fix. At a glance, what we want is to split the image into purely stars and purely background (including nebulas and other DSOs), and every issue we find really means that some pixels are on the wrong side, such as stars that remain in the starless image, or bright nebula details that appear in the starmask.

The workflow I propose in this article is based on a transfer technique, removing features from one side and then regenerating the corresponding pair, so it is non-destructive.

We will proceed in the following order:

  1. Details removed from the DSO
  2. Big stars not removed
  3. Unnatural replacement texture
  4. Remaining halos
step 1: Details Recovery

In some cases starnet can fail to discern which features are stars and which are fine structures of the DSOs. This is usually seen in galaxy cores, or very bright DSO features, especially if they sit right next to a significant star. To recover these details we are going to work on the starmask, removing the details we want to recover, and then we will transfer them to a new, improved starless image using the following workflow:

Note that in this case we have also recovered some stars. This is because we will later use Photoshop to remove them manually with the Spot Healing Brush.

Let’s see the procedure step by step:

-Open L_sn0

As we want to remove unwanted features from the starmask, we first need to produce that starmask. As explained before, the starless and starmask images added together should reproduce the original image. We can express this relation with the following equation:

L_NLT = L_sn0 + L_st0

So we can use PixelMath to generate our starmask from the starless image and the original one.

-Use PixelMath to do L_NLT - L_sn0.

It is important to note that this and the following PixelMath operations in this article are performed without rescaling, unless specified.
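Without rescaling, the subtraction simply truncates each pixel to [0, 1] instead of remapping the whole range. A sketch of the relation above on a handful of pixel values (helper names and values are mine, illustrative only):

```python
def subtract(a, b):
    """PixelMath-style unrescaled subtraction: truncate the result to [0, 1]."""
    return [max(0.0, min(1.0, x - y)) for x, y in zip(a, b)]

def add(a, b):
    """Unrescaled addition, truncated the same way."""
    return [max(0.0, min(1.0, x + y)) for x, y in zip(a, b)]

# L_st0 = L_NLT - L_sn0; adding the pair back recovers the original
# wherever nothing was truncated:
L_NLT = [0.80, 0.30, 1.00]
L_sn0 = [0.20, 0.30, 0.15]
L_st0 = subtract(L_NLT, L_sn0)
```

Rescaled operations would break the `starless + starmask = original` identity, which is why the whole workflow depends on leaving rescaling off.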

-Rename result to L_st0

-Use CloneStamp to remove unwanted details from L_st0 (galaxy cores, DSO bright details, etc…)

-Rename result to L_st1

-Use PixelMath to do L_NLT - L_st1

-Rename result to L_sn1

step 2: Stars not removed

Stars that were not removed are the next issue we must fix, because it is important to have an accurate and representative starmask for the following steps.

-Save L_sn1 as TIFF 16bit

-Open in Photoshop

-CloneStamp and/or reduce the brightness of the remaining stars on the starless image.

-Save the image as L_sn2


-Open L_sn2 in PixInsight

-Use PixelMath to do L_NLT - L_sn2

-Rename the result to L_st2

annex: Creation of an absolute starmask

At this point we want to produce a good starmask to be used in the next step, so we will describe a procedure that, even though it is not part of the cleaning itself, is highly related to it.

This new starmask (L_st2) includes all the stars in the image, but since the starnet starmask is a simple subtraction of the starless image, the cores of the stars placed on top of bright nebula features are not 1,1,1. Their values are the difference from the starless brightness level in that area. In other words, for a given area, the brighter the nebula, the dimmer the resulting stars placed over it. This fact can be seen in the following images. On the left, our original stretched image and a measurement just beside a star placed on a very bright part of the nebula: the surrounding value is 0.846 (in the normalized real range). On the right, the result of the application of starnet and a measurement on the core of the same star: its value is 0.269.

Demonstration of the relative values of the star cores.
Left: Original image L_NLT
Right: Star-only result from starnet

As a result, if we try to use L_st2 as a starmask, we will notice that it does not protect the stars enough. We could clip the highlights of this image to compensate for their low values, but then the stars placed on the background would be severely clipped.

So in the end, we need to add the values left in the starless image to the cores of the stars in the stars-only image, in order to convert this starmask, with its relative values, into an absolute mask. To do that we are going to add the starless image to the starmask, but masking the operation (otherwise we would recover the entire initial image) with a bypass mask.

-Duplicate L_st2

-High-clip with the dimmest star core as the white point reference.
This step should be adapted to each specific image. FOVs with relatively homogeneous star sizes will work, but with big differences between star sizes in the image, additional steps may be needed to fine-tune this clipping.

-Rename as L_st2_bypass

-Mask L_st2 with L_st2_bypass

-In PixelMath do L_st2 (masked with L_st2_bypass) + L_sn2

-Rename the result as L_st2_cr

Left: L_st2
Right: L_st2_cr
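The masked PixelMath addition amounts to blending the plain starmask and the "starmask + starless" sum through the bypass mask. A sketch (helper names are mine; the 0.269/0.846 pixel values are the measurements quoted above):

```python
def clamp(x):
    """Truncate to the normalized [0, 1] range."""
    return max(0.0, min(1.0, x))

def masked_add(st, sn, mask):
    """PixelMath 'st + sn' executed through a mask: where the bypass mask
    is white (1) the starless value is added back to the star core; where
    it is black (0) the starmask pixel is left untouched."""
    return [clamp(m * clamp(s + n) + (1.0 - m) * s)
            for s, n, m in zip(st, sn, mask)]

# The star core measured at 0.269 over the 0.846 nebula background
# saturates again once the starless value is added back through the mask:
core = masked_add([0.269], [0.846], [1.0])
```

Outside the bypass mask nothing changes, which is what prevents the operation from simply reconstructing the original image.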

The result of this processing is a starmask that has become independent of the starless image. This starmask, with its absolute values and well-filled cores, is useful not only for creating proper starmask variations, but also for adding the stars back onto a brightness-manipulated starless image by applying it as a Screen layer in Photoshop.
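Screen is the standard blend mode 1 - (1 - a)(1 - b); it brightens without hard clipping, which is what makes it suitable for recombining the absolute starmask with a manipulated starless image. A sketch (names mine):

```python
def screen(starless, stars):
    """Photoshop 'Screen' blend, per pixel: 1 - (1 - a)(1 - b).
    The result is always >= both inputs and never exceeds 1.0."""
    return [1.0 - (1.0 - a) * (1.0 - b) for a, b in zip(starless, stars)]
```

Where the starmask is black the starless image passes through unchanged, and a fully saturated star core stays at 1.0 regardless of the background under it.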

step 3: Halos

Starnet tends to fail at managing the outer halos of the stars. Halos can be harmful in cases where the background has very faint structures: the enhancement or noise reduction processes that we may want to apply to the “true” background can be confused by the presence of these halos. Here we can follow two different strategies: one is to remove the halos from our image entirely, if we consider them a defect to be discarded; the other is to remove them from the starless image so they are transferred to the starmask. In either case we will need a halo mask to capture these halos and reduce their brightness.

-Duplicate L_st2_cr

-Rename to L_halo_msk

-Remove small-scale structures with ATWT

-Fine-tune the mask with HT

-Mask L_sn2 with L_halo_msk

-Reduce the halos’ brightness with HT (use a preview and the general view to visually evaluate the result)

-Rename the result to L_sn3

Left: L_sn2
Right: L_sn3
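The "remove small-scale structures" step keeps only the smooth, large-scale halo signal in the mask. As a crude stand-in for the ATWT layer removal (a real à trous wavelet transform works differently; this only illustrates the idea on a 1-D strip of pixels, with my own naming):

```python
def remove_small_scales(row, radius=4):
    """Moving average over a 1-D strip of pixels: small-scale detail
    (stars, noise) is flattened while large, smooth halos survive,
    which is the role the ATWT step plays in building the halo mask."""
    out = []
    n = len(row)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out
```

After this, the HT fine-tuning step shapes the surviving large-scale signal into a usable mask.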

step 4: Inner Texture

The texture left by starnet in the brighter areas of the stars is one of the most concerning issues, and one users frequently ask about. Here we are going to improve it in two steps. The first is to remove, or reduce, this texture in PixInsight, and the second is performed in Photoshop, replacing the starnet texture with one taken from the surrounding texture of the image. The first step could be skipped, but I prefer to do it in order to have a good base for the second.

-Duplicate L_st2_cr

-Rename to L_core_msk

-Mask L_sn3 with L_core_msk

-Use ATWT or MLT or MMT, disable layers 4,8

-Rename the result to L_sn4

Left: L_sn3
Right: L_sn4

-Save L_sn4 as TIFF 16bit

step 5: Photoshop Cleaning

The last step can be performed in two ways: using PixInsight or using Photoshop. Here I describe the Photoshop procedure, as it produces a cleaner result. Photoshop’s “Fill” operation is very well suited to a workflow like this.

-Open L_core_msk in Photoshop

-Select the stars with the Magic Wand tool (non-contiguous)

-Save the selection as core_msk

-Open L_sn4 in Photoshop

-Load the core_msk selection

-Expand the selection by 3px

-Edit -> Fill -> Content-Aware

-Use CloneStamp to remove remaining defects

-Save the result as L_sn5

Left: L_sn4
Right: L_sn5


After the procedure described in this article our image should look much cleaner, allowing us to process it more aggressively. The image also suffers from severe vignetting, but that is another story…

A closer inspection shows how the procedure yields a cleaner background that is now easier to process.
