Exposure time is one of the key parameters an astroimager has to choose. Years ago the most accepted rule was «expose as long as you can», meaning as long as your autoguiding system or your light pollution allowed. Exposing as long as you can gives a good amount of signal in every subexposure, but it leads to other problems: guiding errors, bloomed stars, airplane trails, or passing clouds can ruin a subexposure… In addition, longer exposures mean you end up with fewer images to stack for the same total integration time, which is worse for the stacking rejection algorithms.

Reading the above, one could conclude that it is best to take very short exposures while keeping the same total integration time. But if you have ever done astrophotography, you know this is not a free lunch. Shorter exposures produce lower signal-to-noise ratios, which is bad for the image in general, but especially for the faintest areas. The culprit is the Read Noise (RN), which has a fixed value for a given combination of camera, gain settings, etc…

We can understand how Read Noise imposes a limit with the following example. This is an image of M43. Let’s analyze a slice of it, represented by the horizontal blue line.

If we plot the pixel intensities against the pixel coordinate, we obtain something similar to this:

We have marked the saturation level (upper limit) and the Read Noise. As you can see, the pixels in the brighter areas of the nebula are quite far from the read noise level. This produces a clean representation of the structures, as they have a better Signal-to-Noise ratio (SNR). But if we look at the background areas, the signal is very close to the read noise, so their SNR is lower and the representation of the nebula structures is poorer.

We’re not going to go into the math behind this, but here you can find some useful info:

https://www.cloudynights.com/articles/cat/articles/astrophotography/finding-the-optimal-sub-frame-exposure-r1571

So the question is: by how much does the signal have to overcome the read noise to allow a good representation of the background? Long story short, the most accepted value is 9.76×RN², which we can round to 10×RN². To calculate the exposure time needed to achieve this signal level, I developed the following spreadsheet.
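As a back-of-the-envelope check, the 10×RN² criterion translates directly into an exposure time once the sky flux is known. This is a minimal sketch, assuming hypothetical read noise and sky flux values (2.4 e⁻ and 1.5 e⁻/s are illustrative examples, not measurements from this article):

```python
# Sketch of the 10*RN^2 criterion. All numeric values are hypothetical.
def optimum_sub_exposure(read_noise_e, sky_flux_e_per_s, factor=10.0):
    """Exposure (seconds) for the sky background to reach factor * RN^2 electrons."""
    target_signal_e = factor * read_noise_e ** 2
    return target_signal_e / sky_flux_e_per_s

# Example: 2.4 e- read noise, 1.5 e-/s/pixel sky flux -> ~38.4 s
t = optimum_sub_exposure(2.4, 1.5)
```

The sky flux itself comes from a test exposure, as the spreadsheet sections below explain.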

This spreadsheet was developed as a quick tool mainly for deciding which subexposure length to choose and how many subexposures to shoot, but it includes many other calculations that can also help with other imaging aspects. Let’s take a look at the different sections.

*Note: All the values in orange must be filled in by the user, according to the equipment used. The rest of the values are calculated automatically.*

### SETUP SECTION

The upper values (Date/Location/Moon Phase) are for record-keeping purposes, so filling them in is optional.

The following sections represent the **Camera** and **Telescope** characteristics, which have to be filled in depending on the equipment used.

The Target SNR section is composed of two parameters. **RN Contribution** controls how much SNR the background should have; 95% is the usual value, as it corresponds to a 5% contribution from read noise. **SNR Boost** determines the number of subs needed to achieve the corresponding SNR improvement.

### FILTERS AND TEST EXPOSURE

The central area of the spreadsheet is where most of the calculations are done, but first it is necessary to enter some additional parameters.

First, in the **Filter** section, the user must know the bandwidth and central wavelength of their filter set. Note that in the example there’s an indication of «+LPS» on top. That means I added an IDAS LPS-V2 to the imaging train, so the bandwidths of the original LRGB filters are modified. To find the resulting bandwidths I made an approximation by superimposing both filter plots and calculating the resulting area. The result can be seen in the right column.
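The overlap approximation described above can be sketched numerically: sample both transmission curves on a common wavelength grid, multiply them, and integrate the combined curve. The curves below are made-up placeholders, not the actual LRGB or LPS-V2 transmission data:

```python
# Rough sketch of the filter-overlap approximation. Transmission curves
# here are idealized placeholders, not real filter data.
def effective_bandwidth(wl_nm, t_filter, t_lps):
    """Trapezoidal integral of the combined transmission curve, in nm."""
    combined = [a * b for a, b in zip(t_filter, t_lps)]
    area = 0.0
    for i in range(1, len(wl_nm)):
        area += 0.5 * (combined[i] + combined[i - 1]) * (wl_nm[i] - wl_nm[i - 1])
    return area

# Idealized example: two perfectly overlapping 100 nm box filters -> 100.0 nm
bw = effective_bandwidth([400.0, 450.0, 500.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
```

With real filter curves (more sample points, partial transmission), the same integral gives the reduced effective bandwidth of the combined stack.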

To calculate the proper subexposure length, we need to take a **Test Exposure**. Once done, we must fill in the exposure length field and **measure the ADU level of the darkest area of the image**. A DSO measurement can also be entered, but this value is optional.

### RESULTS

With the mentioned values, the spreadsheet calculates the results:

#### SIGNAL, NOISE AND SNR (of the test exposure)

This section calculates the **Signal** and **Noise** values (in electrons) of the test exposure (this is important: these values **are not** from the resulting optimum subexposure).

The Signal and Noise values are then used to calculate the **Total Noise** and the corresponding **SNR**.

#### SKY

The following section calculates the most important value of this spreadsheet: the **Background Flux**. The value is expressed in both e/s and e/min and is key to calculating the optimum exposure. In addition, it tells us how dark our sky is (or, more precisely, the FOV of our shot) by comparing these values with the ones proposed by Robin Glover.
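This calculation is essentially a rate: the background signal in electrons divided by the test exposure time. A minimal sketch, with assumed example values:

```python
# Sketch: sky background flux from a test exposure. Values are examples.
def background_flux(background_signal_e, exposure_s):
    """Return the sky flux in e-/s and e-/min."""
    flux_per_s = background_signal_e / exposure_s
    return flux_per_s, flux_per_s * 60.0

# e.g. 500 e- of background in a 30 s test exposure
per_s, per_min = background_flux(500.0, 30.0)
```

This e/s figure is the sky flux that feeds the optimum-exposure calculation described earlier.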

More info can be found in the original video:

https://www.youtube.com/watch?v=3RH93UvP358

Apart from the Background Flux, there’s a subsection that calculates **an estimate** of the **SQM** value by comparing the **Photon Flux** with a reference **Photon Flux at SQM 20.7**. These SQM calculations were kindly shared by Steve Bellavia.
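One plausible way to do this comparison is the standard Pogson magnitude relation against the 20.7 reference; the exact constants in the spreadsheet may differ, so treat this as an assumed sketch:

```python
import math

# Assumed approach: Pogson relation against a reference photon flux
# calibrated at SQM 20.7. The spreadsheet's exact constants may differ.
def estimate_sqm(photon_flux, reference_flux_at_20_7):
    return 20.7 - 2.5 * math.log10(photon_flux / reference_flux_at_20_7)

sqm = estimate_sqm(200.0, 100.0)  # twice the reference flux: a brighter sky
```

A sky twice as bright as the reference comes out roughly 0.75 magnitudes lower (about SQM 19.95), which matches the usual magnitude scale.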

#### OPTIMUM SUBEXPOSURE, TOTAL INTEGRATION TIME, SESSIONS PLANNING

Finally, the value we were looking for, the **Optimum Exposure Time**, is calculated. There are two proposed values: one uses the formulas proposed by the **Gibraltar Astronomical Society (GAS)**, and the other uses the criteria from **John Smith**. Both are similar and will lead to almost identical results. The **Optimum Signal Sky Background (ADU)** is also calculated, and can be used as a reference to check the subexposures. This can be useful to notice if, for example, a certain night has better transparency (and thus a darker background), which could call for a change in the Optimum Exposure Time.

The following subsection gives values related to the maximum exposure allowed before the DSO becomes saturated (this calculation is based on the DSO Maximum value entered in the Test Exposure section). It also gives the **Optimum Stacking** needed to achieve the SNR improvement defined by the **SNR Boost** value. This Optimum Stacking, multiplied by the Optimum Subexposure, gives the integration time per filter, and their sum gives the **Total Integration Time** needed to meet the user’s specifications.
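The stacking arithmetic follows from the fact that stacking N subs improves the SNR by √N, so a target boost over a single sub needs N = boost² subs. A sketch with illustrative numbers:

```python
import math

# Sketch: stacking N subs improves SNR by sqrt(N), so an SNR Boost of b
# over one sub needs b^2 subs (rounded up). Values are illustrative.
def optimum_stacking(snr_boost):
    return math.ceil(snr_boost ** 2)

def total_integration_s(snr_boost, sub_exposure_s):
    return optimum_stacking(snr_boost) * sub_exposure_s

n = optimum_stacking(5.0)            # 5x SNR boost -> 25 subs
t = total_integration_s(5.0, 60.0)   # 25 subs of 60 s -> 1500 s per filter
```

Summing this per-filter time over all filters gives the total integration time for the project.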

The final section is intended to give a hint when planning the entire project. It uses a value called **Imaging Efficiency** that represents how much time the telescope needs to be «under the stars» to achieve 1h of imaging integration. The value may seem difficult to calculate, but it is very straightforward. We only need to note the times of the first and last «good» subexposures taken in a normal astrophotography session and compute the elapsed time: this is the total time «under the stars». Then take the images obtained and multiply the number of subs by the subexposure time to get the real imaging time. Finally, simply divide the time under the stars by the real imaging time. In my case, I need 1.4h of session to get 1h of real integration, taking 60s subs (in principle, the longer the exposure time, the better the efficiency, but…). Depending on the season, the night will be longer or shorter. With the previous values and the night length, the spreadsheet will give an estimate of how many «good» sessions we will need.
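The steps above reduce to two divisions and a rounding. A sketch with the article’s own example numbers (1.4h of session per 1h of integration with 60s subs); the 7h session and 8h night are assumed figures for illustration:

```python
import math

# Sketch of the efficiency and session-count arithmetic. The 7 h session
# and 8 h night below are assumed example values.
def imaging_efficiency(session_hours, n_subs, sub_exposure_s):
    """Hours 'under the stars' needed per hour of real integration."""
    real_hours = n_subs * sub_exposure_s / 3600.0
    return session_hours / real_hours

def sessions_needed(total_integration_h, efficiency, night_length_h):
    return math.ceil(total_integration_h * efficiency / night_length_h)

eff = imaging_efficiency(7.0, 300, 60.0)  # 300 x 60 s subs in a 7 h session -> 1.4
n = sessions_needed(10.0, eff, 8.0)       # 10 h of integration on 8 h nights -> 2
```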

The spreadsheet can be downloaded from here:

Hi Alberto. I hope you understand English. Question: Do you know if it is possible to use the spreadsheet for DSLRs? I’m having a hard time finding the data such as dark current, full well capacity, etc.


Hi Bob. I think you should be able to use my spreadsheet with a DSLR. The hardest part would be choosing the filter wavelength and bandwidth. I use a monochrome camera, so I handle this for each of the different filters; you will need to figure out which values work for you. The good news is that these values are only used for the photon flux calculations, which are informative and not necessary to find the optimum subexposure time. As for the values you are looking for (dark current, FWC, etc…) that are required for the optimum subexposure calculations, I’m quite sure there are ways to measure them directly on your camera if you don’t find them on the web. I remember a review by Alessio Beltrame http://www.alessiobeltrame.com where he analyzed a QHY163M (my current camera) and calculated almost every parameter specifically for that camera, comparing the results with those claimed in the catalogs. An advanced, but very interesting read. Good luck Bob! 🙂


Thanks for your kind response. Unfortunately, Alessio’s links are not working. I found his portfolio, but the link you provided and his Twitter account are dead. Oh well, I’ll keep looking for ways to get the technical data about my Nikon D7500 and D780.
