A recurring topic in planetary imaging is how long a video can run before the planet's rotation starts to degrade contrast. Jupiter, which offers so much detail but spins so fast, is of course the first one concerned! So how long is “too long” on Jupiter?
For this test I’m using several videos of 3 to 4 min that I captured over the past weeks, to be derotated with WinJupos, a software that can correct the planet’s rotation and thereby noticeably increase the usable capture time.
How much time is necessary to detect the rotation?
If you process an entire 2-3 min video you may not detect the rotation itself: the details can still look good. I made a simple test here by splitting a 3 min movie, obtained with the Astronomik R filter on October 30th, 2013, into three 1 min parts (without any frame selection) and making an animation to see whether the details move…
The rotation of the planet is clearly visible in each 1 min part.
The size of the planet is 40.7 arcseconds, still far from the maximum of 49″ (47″ for the current 2013-2014 apparition), and the diameter of the telescope used is 250 mm, noticeably smaller than the largest apertures now frequently found among amateurs (350/400 mm). So even at moderate resolution, one minute is enough to detect the rotation of the planet.
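The numbers make this plausible. A quick sketch (the function names are mine; it uses the standard System II rotation period of about 9.925 h and the classical Dawes limit 116/D) compares the apparent drift of a feature at the central meridian with the telescope’s resolution:

```python
import math

def rotational_smear(diameter_arcsec, minutes, period_hours=9.925):
    """Apparent drift (arcsec) of a feature at Jupiter's central meridian."""
    omega = 2 * math.pi / (period_hours * 60)  # rotation rate, rad per minute
    return (diameter_arcsec / 2) * omega * minutes

def dawes_limit(aperture_mm):
    """Classical Dawes resolution limit in arcseconds."""
    return 116 / aperture_mm

# 40.7" disc, 250 mm telescope, as in the setup described above
for t in (1, 2, 3):
    print(f"{t} min: smear = {rotational_smear(40.7, t):.2f}\" "
          f"(Dawes limit for 250 mm = {dawes_limit(250):.2f}\")")
```

With a 40.7″ disc, a feature at the disc centre drifts about 0.21″ per minute, while a 250 mm telescope resolves about 0.46″: after roughly two minutes the smear reaches the resolution limit, so a one-minute drift is already detectable in a good image.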
But is there a real difference when stacking 2 minutes instead of one?
At left are two stacks of 1 min and 2 min. Apart from the rotation, which is also visible, it is clear that the 2 min image shows lower contrast despite its better signal-to-noise ratio.
Of course this is not a big difference, but why should we deliberately introduce a degradation of the image when we already have to contend with seeing conditions, optical problems…?
However, modern software is really powerful…
WinJupos derotation vs. AS!2 multipoints
Of course, today we have WinJupos to overcome this problem. Derotating the whole 3 min movie, or derotating the final images of the three 1 min sequences, would allow you to get an excellent final image.
However, while making a comparison with AutoStakkert!2 instead of Registax 5 (a previous-generation software that processes the whole raw frame as a single alignment zone), I had a surprise…
The third comparison is between a 3 min stack from a WinJupos-derotated movie (processed with AS!2, again without any frame selection) and the original, non-derotated SER file, also processed with AS!2.
The images are really close! A subtle loss of contrast is visible, but not as much as in the 2 min Registax 5 image above.
Finally, a fourth comparison is made with a RG610 movie captured on October 13th, 2013, which is 4 min long instead of 3.
This time a loss of contrast is easier to see at the edges (look at the GRS region, for example). The difference is still not huge, but the resolution here is even lower, and the difference would certainly be more important at higher resolution.
I would conclude two things…
1) Even if there is a slight loss of contrast over 3 min, it seems clear that AS!2 can process a 2 min file without any prior derotation.
2) WinJupos derotation still gives better results, but it is really worthwhile only with files of at least 3 to 4 min.
Results to be confirmed by further experimentation as usual, of course!
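As a rough geometric rule of thumb (my own back-of-envelope formula, not something from WinJupos or AS!2), one can estimate how long a video may run before the central-meridian drift reaches a chosen fraction of the telescope’s Dawes limit:

```python
import math

def max_minutes(aperture_mm, disc_arcsec, frac=0.5, period_hours=9.925):
    """Minutes until central-meridian drift reaches `frac` of the Dawes limit.

    A rough geometric rule of thumb only; in practice multipoint
    alignment stretches this limit, as the tests above show.
    """
    dawes = 116 / aperture_mm                  # resolution limit, arcsec
    omega = 2 * math.pi / (period_hours * 60)  # rotation rate, rad/min
    drift_per_min = (disc_arcsec / 2) * omega  # arcsec per minute at disc centre
    return frac * dawes / drift_per_min

print(f"250 mm: {max_minutes(250, 40.7):.1f} min")
print(f"400 mm: {max_minutes(400, 40.7):.1f} min")
```

Taken literally this gives only about one minute at 250 mm (and less at 400 mm), which is more conservative than the results above: AS!2’s multipoint alignment clearly buys extra time beyond what pure geometry suggests.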
Thanks Salvatore! It’s possible that under very poor seeing, changing the parameters may not make a very strong difference. But otherwise it does. At the Juno Workshop, Emil showed tests on his own data comparing single and multiple alignment points, edge and gradient noise, etc., and the differences were very easy to see.
Good luck for your next data!
Very interesting indeed Christophe, thanks for sharing. Your datasets are very suitable for these experiments.
Generally speaking, under my observing conditions the seeing does not reveal the finest details along the limb, particularly at the higher latitudes, so multi-point stacking does not seem really effective on my datasets, especially in the absence of prominent ovals or other high-contrast details. Actually, my latest images were stacked using a single alignment point (I have very slow hardware, I admit :-) ) and the quality of the northern and southern details turned out as blurred as with an arbitrary number of alignment points. In the end, since I had captured a sequence of many 35 sec files with no pause, I tried to stack (again in AS) two previously and independently stacked images, taken within 70 seconds of each other from two distinct captures. I didn’t notice any shocking difference at my resolution level. I gained a flatter noise, as expected.
So your article is very instructive and most appreciated.
Thank you Eugene, it was interesting to test this :)
Excellent research, Christophe! I’ve been asking this question and you have clearly found the answer. Thank you!