A target imaging simulation method for ground-based system based on signal-to-noise ratio
Chunxu Ren, Yun Li, Yanzhao Li, Weihua Gao, Wenlong Niu, Xiaodong Peng
Submitted 2025-12-03 | ChinaXiv: chinaxiv-202512.00040 | Original in English

Abstract

Space target imaging simulation technology is an important tool for space target detection and identification, with advantages that include high flexibility and low cost. However, existing space target imaging simulation technologies are mostly based on target magnitudes for simulations, making it difficult to meet image simulation requirements for different signal-to-noise ratio (SNR) needs. Therefore, design of a simulation method that generates target image sequences with various SNRs based on the optical detection system parameters will be important for faint space target detection research. Addressing the SNR calculation issue in optical observation systems, this paper proposes a ground-based detection image SNR calculation method using the optical system parameters. This method calculates the SNR of an observed image precisely using radiative transfer theory, the optical system parameters, and the observation environment parameters. An SNR-based target sequence image simulation method for ground-based detection scenarios is proposed. This method calculates the imaging SNR using the optical system parameters and establishes a model for conversion between the target's apparent magnitude and image grayscale values, thereby enabling generation of target sequence simulation images with corresponding SNRs for different system parameters. Experiments show that the SNR obtained using this calculation method has an average calculation error of <1 dB when compared with the theoretical SNR of the actual optical system. Additionally, the simulation images generated by the imaging simulation method show high consistency with real images, which meets the requirements of faint space target detection algorithm research and provides reliable data support for development of related technologies.

Full Text

Preamble

Astronomical Techniques and Instruments, Vol. 2, September 2025, 288–298 | Article | Open Access

A target imaging simulation method for ground-based system based on signal-to-noise ratio

Chunxu Ren, Yun Li, Yanzhao Li, Weihua Gao, Wenlong Niu, Xiaodong Peng

1 National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China

2 University of Chinese Academy of Sciences, Beijing 101408, China

3 National Key Laboratory of Deep Space Exploration, Hefei 230000, China

*Correspondence:

INTRODUCTION

In recent years, with the rapid developments in aerospace technology, the number of spacecraft in orbit has shown a significant growth trend. This trend has led to a continuous increase in the occupation of space orbit resources, gradual rises in the amount of space debris and the number of decommissioned satellites, and a significant increase in the risk of collisions between spacecraft, which poses a serious threat to the normal, safe operation of satellites in orbit. As a result, space situational awareness (SSA) has become a strategic focus for all countries to maintain safe operation of their spacecraft, ensure sustainable use of space resources, and enhance space security capabilities. Dim space target detection technology, as an essential technology for SSA, can identify and monitor low-brightness and long-distance space targets effectively in complex space environments. This technology provides essential support to ensure both the safe operation of spacecraft in orbit and the sustainable use of space resources. This detection technology thus has significant engineering application value and has become an important direction for research and development in the aerospace field.

Dim space target detection methods can be divided into model-driven methods and data-driven methods.

Model-driven methods are based on the use of mathematical theories to suppress the background and perform target enhancement. However, these methods require numerous assumptions and involve multiple parameters, making the calculations complex. In contrast, data-driven methods often use deep learning to learn the features of targets, thus enabling distinctions to be made between targets and the background. These methods have demonstrated strong robustness and accuracy, and have become the main development trend in this field.

© 2025 Editorial Office of Astronomical Techniques and Instruments, Yunnan Observatories, Chinese Academy of Sciences. This is an open access article under the CC BY 4.0 license. Citation: Ren, C. X., Li, Y., Li, Y. Z., et al. 2025. A target imaging simulation method for ground-based system based on signal-to-noise ratio.

Astronomical Techniques and Instruments (5): 288−298.


Keywords

Image SNR calculation; Imaging simulation; Ground-based optical detection system; Space target image sequence

However, data-driven methods require large numbers of training images, and the acquisition of imaging data from actual observation systems is challenging. These challenges have limited the development of space target detection algorithms.

Space target imaging simulation technology is one of the most effective approaches used to provide training images for target detection algorithms. This technique is used widely in both technical research and engineering applications for ground-based verification of equipment and algorithms. As early as 2009, Zhang et al. established a detailed simulation model by studying a combination of the transformation relationship from stellar magnitude to grayscale and the statistical patterns of star numbers.

Their model can generate starry sky background images with relatively large magnitudes (magnitude > 10). Then, Han et al. used Satellite Tool Kit (STK) software (Analytical Graphics) to obtain the positional relationship between the target and the observation platform, and used OpenGL to render the simulation images to enhance the realism of the images that were generated. However, because of the complexity of the OpenGL rendering process, the algorithm ran slowly. To address this issue, Zhang et al. used MATLAB rather than OpenGL to add various noise types and effects to the images to accelerate the algorithm's simulation speed.

Subsequently, to simulate the brightness characteristics of targets with different shapes in images, Yan et al. proposed a target illumination model based on the effective optical reflective area, which addresses the conversion relationship between magnitude and grayscale, along with that between magnitude and pixel intensity.

This conversion model uses a linear relationship that does not align fully with the nonlinear characteristics of actual complex optical imaging scenarios, which limits its accuracy and its wider applicability. To solve this problem, Xia et al. used an exponential conversion relationship to perform the transformation between magnitude and grayscale, which is more closely aligned with actual complex optical imaging scenarios, and also conducted a simulation analysis of the image characteristics under the different system monitoring mode conditions. This method alleviates the limitations of the linear conversion model effectively, but it does not account for the effects of the system parameters and environmental factors on the target's brightness during the actual imaging process.

In addition, to make the simulations more realistic, Ouyang et al. proposed a method to simulate optical images of space debris for use with a small field-of-view (FOV), high-detection-sensitivity space imaging system that considered the interference from stripes and the saturated star background. Wang et al. also considered the effects of stray light, including ground-based light and moonlight, on the optical detection system and established a stray light imaging simulation model. Based on this model, Liu et al. modeled the imaging process under the platform jitter condition by applying a jitter effect to the simulated images to enhance their realism.

Additionally, to evaluate the performance of wide-field sensors, Xu et al. implemented space imaging simulations based on the characteristics of a panoramic scanning sensor to generate a sequence of images that covered a 360° × 10° FOV. Although these methods analyzed the factors that affected optical target imaging under various observation conditions, they did not establish the corresponding relationship between the observation conditions and the SNR of the image, thus making it difficult to provide target data with differing SNRs for target detection algorithms.

At present, space target imaging simulations use a relatively well-established technical framework, and the simulation results obtained are quite close to real-world scenarios. However, these existing simulation methods do not take the impact of the observation system parameters and environmental factors on the target brightness into account when calculating the brightness, and they also do not define the relationship between the SNR and the target image grayscale values clearly. As a result, these methods are still unable to generate images with a specified SNR based on the system parameters of the observation platform, particularly for target imaging under low SNR conditions. This makes it difficult for the existing simulation methods to meet the demand for training data with different SNRs as required for weak space target detection technology. Therefore, the work in this paper uses a ground-based optical detection system as an example, with dynamic imaging of small space targets as the simulation target, and it also conducts space target imaging simulation research based on the SNR for the starry sky background. The main contributions are described as follows:

(1) A ground-based detection image SNR calculation method based on the optical system parameters is proposed. Based on the apparent target magnitude; system parameters such as the operational spectral range, the quantum efficiency, the effective aperture, the optical transmittance, the exposure time, and the noise parameters of the ground-based optical detection system; and the observation environment parameters, the method can calculate the image SNR under various observation conditions.

(2) A target sequence image simulation method based on the SNR for ground-based detection scenarios is also proposed.
The method calculates the image SNR under the current observation conditions based on the observation system parameters and establishes the conversion relationship between the target magnitude and grayscale based on the SNR; then, it obtains the target motion model from the target orbital data, thereby realizing simulation of the target sequence images.

METHODS

2.1. Apparent Magnitude Calculation

The magnitude is an important indicator for measurement of the brightness of celestial objects and typically includes two types: the absolute magnitude and the apparent magnitude. Modern astronomy uses a logarithmic scale to describe brightness differences. Specifically, a reduction of five magnitudes corresponds to a 100-fold increase in the brightness of the celestial object.

The absolute magnitude is a standardized measure of brightness that reflects the intrinsic luminosity of an astronomical object and is unaffected by its distance from the observer. For stellar objects, the absolute magnitude refers to the apparent magnitude that an object would have if placed at a distance of 10 parsecs. The absolute magnitude for non-stellar objects such as planets, comets, and asteroids is defined as the apparent magnitude that the object would exhibit when it is located at a distance of one astronomical unit (AU) from both the Sun and Earth with a phase angle of 0°. Additionally, the absolute magnitude of non-stellar objects can also be calculated using the following formula:

H = 15.618 − 5lg D − 2.5lg P_v, (1)

where D is the diameter of the object (in km) and P_v is its geometric albedo.

The apparent magnitude refers to the brightness of an astronomical object as observed from the Earth. The apparent magnitude is dependent on the object's intrinsic luminosity and the distance between the object and the Earth.

Additionally, the apparent magnitude is strongly affected by the absorption properties of dust, gas, and other media located between the object and the Earth. Variations in observational angle and atmospheric conditions can further modulate its value. For stellar objects, there is a specific conversion relationship between the apparent magnitude and the absolute magnitude, which can be expressed as

M = m + 5lg(d₀/d), (2)

where M and m are the absolute and apparent magnitudes, respectively; d is the distance from the Earth to the object; and d₀ = 10 pc is the reference distance.
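As a quick check, the two magnitude formulas above can be evaluated directly; this is a minimal sketch (function names and the numeric examples are illustrative):

```python
import math

def absolute_magnitude(diameter_km: float, albedo: float) -> float:
    # Eq. (1): H = 15.618 - 5*lg(D) - 2.5*lg(Pv)
    return 15.618 - 5.0 * math.log10(diameter_km) - 2.5 * math.log10(albedo)

def absolute_from_apparent(m: float, d_parsec: float) -> float:
    # Eq. (2): M = m + 5*lg(d0/d), with the reference distance d0 = 10 pc
    return m + 5.0 * math.log10(10.0 / d_parsec)

H = absolute_magnitude(1.0, 0.15)       # 1 km body, albedo 0.15 -> H ~ 17.68
M = absolute_from_apparent(5.0, 100.0)  # m = 5 star at 100 pc -> M = 0.0
```

The factor 2.512 ≈ 100^(1/5) that recurs below follows from the five-magnitudes-per-factor-100 convention stated above.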

For celestial bodies that do not emit their own light, e.g., planets and asteroids, their apparent magnitude is related to the solar phase angle. The equation used to calculate this magnitude is as follows:

V = H + 5lg(|r|·|Δ|) − 2.5lg[(1 − G)φ₁ + Gφ₂], (3)

where H is the absolute magnitude of the object; r and Δ are vectors from the Sun to the asteroid and from the observer to the asteroid, respectively; G is the surface albedo of the asteroid; and φ₁ and φ₂ are phase functions, which are given by

φ₁ = exp[−3.33 tan^0.63(k/2)], (4)

φ₂ = exp[−1.87 tan^1.22(k/2)], (5)

where k is the solar phase angle, i.e., the angle between r and Δ. For nonluminous targets, e.g., space debris, the calculation of their apparent magnitude is related to the phase function, and the method is given as follows:

m_t = m_⊙ − 2.5lg[S·ρ·F(φ)/R²], (6)

where m_⊙ is the apparent magnitude of the Sun; S is the cross-sectional area of the target; ρ is the diffuse reflection coefficient of the target; F(φ) is the phase function of the target; φ is the solar phase angle of the target; and R is the distance between the observer and the target.
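The asteroid brightness model of Eqs. (3)-(5) can be sketched as follows, assuming the IAU H-G convention (both phase functions evaluated at half the phase angle) and an illustrative default G = 0.15:

```python
import math

def phase_functions(k_rad: float):
    # Eqs. (4)-(5): phi1 = exp(-3.33*tan^0.63(k/2)), phi2 = exp(-1.87*tan^1.22(k/2))
    t = math.tan(k_rad / 2.0)
    return math.exp(-3.33 * t ** 0.63), math.exp(-1.87 * t ** 1.22)

def apparent_magnitude_hg(H: float, r_au: float, delta_au: float,
                          k_rad: float, G: float = 0.15) -> float:
    # Eq. (3): V = H + 5*lg(r*Delta) - 2.5*lg[(1-G)*phi1 + G*phi2]
    phi1, phi2 = phase_functions(k_rad)
    return (H + 5.0 * math.log10(r_au * delta_au)
            - 2.5 * math.log10((1.0 - G) * phi1 + G * phi2))

# At zero phase angle phi1 = phi2 = 1, so V reduces to H + 5*lg(r*Delta):
V = apparent_magnitude_hg(3.34, 1.0, 1.0, 0.0)  # -> 3.34
```

A nonzero phase angle shrinks both phase functions below 1, so the computed magnitude increases (the object appears fainter), as expected.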

2.2. SNR Calculation Method for Ground-based Systems

Based on the methods described above for the calculation of the apparent magnitude of a target, we can obtain the apparent magnitude under different observation scenarios. To establish the conversion relationship between the target magnitude and the SNR, this paper proposes an SNR calculation method that is suitable for ground-based systems. This method integrates the target's radiative transfer characteristics, the observation system parameters, and the environmental factors, while also comprehensively considering the imaging process of the optical system, the detector's noise characteristics, and the sky background interference to calculate and obtain the target's SNR within the image.

First, the celestial objects that are observable by ground-based systems mainly include asteroids, planets, and various small space objects. These objects emit light by reflecting sunlight and can therefore be reasonably approximated as blackbodies during modeling. The spectral radiance that corresponds to each wavelength can be calculated using the Planck formula as follows:

M(λ,T) = (2hc²/λ⁵) · 1/[exp(hc/(λk_B T)) − 1], (7)

where h is Planck's constant, h = 6.626 × 10⁻³⁴ J·s; c is the speed of light; T is the surface temperature of the Sun; and k_B is the Boltzmann constant, k_B = 1.38 × 10⁻²³ J·K⁻¹.

Then, according to the Stefan–Boltzmann law, the spectral photon flux density of a target with an apparent magnitude of zero can be obtained, as shown in the following equation:

I(λ,T) = f_⊙ · 2.512^−(BC − m_b⊙) · M(λ,T)/(σT⁴) · λ/(hc), (8)

where BC is the thermal (bolometric) magnitude correction parameter; f_⊙ is the solar constant, f_⊙ = 1367.51 W·m⁻²; σ is the Stefan–Boltzmann constant, σ = 5.67 × 10⁻⁸ W·m⁻²·K⁻⁴; and m_b⊙ is the apparent thermal magnitude of the Sun.
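Eq. (7) can be evaluated directly; the sketch below checks that for T = 5778 K (an assumed solar surface temperature) the spectrum peaks near 500 nm, consistent with Wien's displacement law:

```python
import math

H_PLANCK = 6.626e-34  # Planck constant, J*s
C_LIGHT = 2.998e8     # speed of light, m/s
K_B = 1.38e-23        # Boltzmann constant, J/K

def planck_spectral_radiance(lam_m: float, T: float) -> float:
    # Eq. (7): M(lam, T) = 2*h*c^2 / lam^5 / (exp(h*c/(lam*kB*T)) - 1)
    x = H_PLANCK * C_LIGHT / (lam_m * K_B * T)
    return 2.0 * H_PLANCK * C_LIGHT ** 2 / lam_m ** 5 / (math.exp(x) - 1.0)

# sample the solar-temperature spectrum at three visible wavelengths (nm)
vals = {nm: planck_spectral_radiance(nm * 1e-9, 5778.0) for nm in (450, 501, 560)}
```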

Then, the number of electrons that the imaging sensor can generate during the imaging time at the different wavelengths is given by

N_T = ∫_{λ₁}^{λ₂} I(λ,T) · 2.512^(−m) · η(λ) · (πD²/4) · τ · t dλ, (9)

where λ₁–λ₂ represents the operational spectral range of the detector, which is typically taken to be the visible light spectrum (400–800 nm); η(λ) is the quantum efficiency of the detector; D is the effective aperture of the telescope; τ is the optical transmittance; t is the exposure time; and m is the apparent magnitude of the target.
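A numerical sketch of Eqs. (8)-(9), with the quantum efficiency and optical transmittance taken as wavelength-independent constants, an assumed solar apparent thermal magnitude of −26.83, and BC = 0; all parameter values here are illustrative:

```python
import math

H_PLANCK, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.38e-23
SIGMA_SB, F_SUN = 5.67e-8, 1367.51        # Stefan-Boltzmann const., solar const.
T_SUN, M_B_SUN, BC = 5778.0, -26.83, 0.0  # assumed solar values

def zero_mag_photon_flux(lam_m: float) -> float:
    # Eq. (8) sketch: scale the normalized Planck spectrum by the solar constant
    # and the magnitude-correction factor, then convert W -> photons/s
    planck = (2.0 * H_PLANCK * C_LIGHT ** 2 / lam_m ** 5
              / (math.exp(H_PLANCK * C_LIGHT / (lam_m * K_B * T_SUN)) - 1.0))
    energy = F_SUN * 2.512 ** (-(BC - M_B_SUN)) * planck / (SIGMA_SB * T_SUN ** 4)
    return energy * lam_m / (H_PLANCK * C_LIGHT)

def electrons_from_target(m: float, aperture_m: float, eta: float = 0.5,
                          tau: float = 0.8, t_exp: float = 0.1,
                          lam1: float = 400e-9, lam2: float = 800e-9,
                          n: int = 400) -> float:
    # Eq. (9): trapezoidal integration over the 400-800 nm band
    area = math.pi * aperture_m ** 2 / 4.0
    dlam = (lam2 - lam1) / n
    flux = sum((0.5 if i in (0, n) else 1.0) * zero_mag_photon_flux(lam1 + i * dlam)
               for i in range(n + 1)) * dlam
    return flux * 2.512 ** (-m) * eta * area * tau * t_exp
```

As Eq. (9) requires, N_T scales linearly with exposure time and drops by a factor of 2.512 per magnitude.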

During the imaging process, the main noise sources include the noise generated by the sky background and the internal noise of the detector. The sky background brightness is typically represented by the parameter m_bg, which has units of mag·arcsec⁻². The equation for calculation of the sky background noise is given as follows:

N_bg = ∫_{λ₁}^{λ₂} I(λ,T) · 2.512^(−m_bg) · Ω_p · η(λ) · (πD²/4) · τ · t dλ, (10)

where Ω_p is the FOV of a single pixel in the detector, in arcsec². If Ω_p cannot be obtained directly, it can be calculated using the following equation:

Ω_p = [206265 · w/(N_x · f)] × [206265 · h/(N_y · f)], (11)

where w and h are the width and the height of the detector's photosensitive element surface, respectively; f is the focal length of the telescope; and N_x and N_y are the numbers of pixels (resolution) of the detector's image sensor in the two directions, respectively.

Finally, the SNR of the target in the image is obtained as

R_SN = N_T / [p · √(N_bg + N_d)], (12)

where p represents the pixel size occupied by the target in the image and N_d denotes the electrons contributed by the detector's internal noise in a single pixel.

First, the positions and magnitudes of the stars in the J2000.0 geocentric inertial coordinate system are obtained based on the data from the Smithsonian Astrophysical Observatory (SAO) Star Catalog. Additionally, the target's position in the J2000.0 geocentric inertial coordinate system is obtained using the simplified general perturbations (SGP4) orbital model based on the target's two-line element (TLE) orbital data.
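Eqs. (11)-(12) can be sketched as below; the per-pixel detector noise term and the 20·lg dB convention are assumptions, and the sensor geometry in the example is illustrative:

```python
import math

ARCSEC_PER_RAD = 206265.0

def pixel_fov_arcsec2(w_m, h_m, nx, ny, focal_m):
    # Eq. (11) sketch: angular extent of one pixel in each direction, in arcsec
    ax = ARCSEC_PER_RAD * (w_m / nx) / focal_m
    ay = ARCSEC_PER_RAD * (h_m / ny) / focal_m
    return ax * ay

def snr_linear(n_target, n_bg_pixel, n_det_pixel, p_pixels):
    # Eq. (12) sketch: target electrons over p pixels vs. per-pixel noise std
    return n_target / (p_pixels * math.sqrt(n_bg_pixel + n_det_pixel))

def snr_db(r):
    # dB conversion (20*lg amplitude convention assumed)
    return 20.0 * math.log10(r)

# e.g. a 5.86 um pixel behind a 1.5 m focal length subtends ~0.81 arcsec
fov = pixel_fov_arcsec2(1936 * 5.86e-6, 1216 * 5.86e-6, 1936, 1216, 1.5)
```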

2.3. Target Sequence Image Simulation Method in Ground-based Detection Scenes

Based on the SNR calculation method above, a target sequence image simulation method for ground-based detection scenes is proposed; the overall process is shown in the flowchart.


Next, based on the simulation parameters, the targets and stars that lie within the FOV are selected. The observation field is generally regarded as a circular FOV with its center at the pointing direction of the observation platform's optical axis, and a radius that is half of the diagonal of the observation platform's FOV. Targets and stars within this circular FOV can then be observed by the platform. The constraint condition for the object within the



observation FOV is expressed as follows:

arccos[(u · s)/(|u| · |s|)] ≤ r, (13)

where u denotes the direction vector of the target or star, r is the radius of the circular observation field, and s is the pointing direction of the observation platform's optical axis.
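The FOV membership test of Eq. (13) amounts to comparing the angular separation between the object's direction vector and the optical axis with the field radius; a minimal sketch:

```python
import math

def in_fov(obj_dir, axis_dir, radius_deg):
    # Eq. (13) sketch: object is inside the circular field when the angle
    # between its direction vector and the optical axis is <= the radius
    dot = sum(a * b for a, b in zip(obj_dir, axis_dir))
    norm = math.hypot(*obj_dir) * math.hypot(*axis_dir)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return ang <= radius_deg

axis = (1.0, 0.0, 0.0)
obj = (math.cos(math.radians(10.0)), math.sin(math.radians(10.0)), 0.0)
in_fov(obj, axis, 15.0)  # True: 10 deg separation inside a 15 deg field
in_fov(obj, axis, 5.0)   # False
```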

Then, the positions of the stars and the targets within the FOV are transformed into the image plane coordinate system. The coordinate transformation process includes a rotation transformation from the J2000.0 geocentric inertial coordinate system into the observation platform camera's coordinate system, and a perspective projection transformation from the observation platform camera coordinate system into the image plane coordinate system. The transformation matrices can be written as

M_r = R_z(γ) · R_x(π/2 − δ) · R_z(π/2 + α), (14)

M_p = [f/d_x, 0, N_x/2; 0, f/d_y, N_y/2; 0, 0, 1], (15)

where M_r and M_p are the rotation transformation matrix and the perspective projection transformation matrix, respectively; R_x(·) and R_z(·) denote elementary rotations about the x- and z-axes; α, δ, and γ are the right ascension, the declination, and the camera rotation angle of the observation platform in the J2000.0 geocentric inertial system, respectively; and d_x = w/N_x and d_y = h/N_y are the pixel pitches of the detector.

Then, the position (x, y) of a target or star with coordinates (X, Y, Z) in the J2000.0 geocentric inertial system is given in the image plane coordinate system as follows:

z_c · [x, y, 1]^T = M_p · M_r · [X, Y, Z]^T, (16)

where z_c is the projection position of the target or star on the z-axis of the observation platform camera's coordinate system.

Additionally, it is necessary to convert the star's apparent magnitude into the corresponding grayscale value in the image. For stars, the brightness differs between two stars with adjacent unit magnitudes by approximately 2.512 times. Therefore, the grayscale value g_i of a star with apparent magnitude m_i can be calculated based on the observation system's magnitude sensitivity m, as follows:

g_i = g_max / 2.512^(m_i − m), (17)

where g_max is the grayscale value that corresponds to the brightest magnitude (255 in an 8-bit grayscale image).

At the same time, the conversion relationship between the target's apparent magnitude and its grayscale value can be obtained based on the SNR, the background noise mean, and the background noise variance of the image. The target's apparent magnitude is calculated using the corresponding formula based on the target type mentioned above. The SNR of the image is the ratio of the grayscale mean in the target region to the standard deviation in the noise region. Because the system's SNR is equal to the SNR of the simulated image, after the SNR of the system's image is calculated using the simulation parameters, the grayscale mean of the target in the image can then be deduced as follows:

g_t = R_SN · σ_n + g_n, (18)

where R_SN is the SNR that corresponds to the target's apparent magnitude; g_n is the mean grayscale value of the uniform background in the simulation image; and σ_n represents the standard deviation of the Gaussian noise in the simulation image.
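Eqs. (17) and (18) translate directly into code; the clamp to g_max for stars brighter than the reference magnitude is an added assumption to keep 8-bit values in range:

```python
def star_gray(m_i, m_ref, g_max=255.0):
    # Eq. (17): g_i = g_max / 2.512**(m_i - m), clamped to the 8-bit ceiling
    return min(g_max, g_max / 2.512 ** (m_i - m_ref))

def target_gray(r_sn, sigma_n, g_n):
    # Eq. (18): g_t = R_SN * sigma_n + g_n
    return r_sn * sigma_n + g_n

g = star_gray(10.0, 5.0)          # 5 mag fainter than reference -> ~1/100 of 255
gt = target_gray(5.0, 4.0, 30.0)  # SNR 5 over noise std 4 on background 30 -> 50
```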

In ground-based long-distance detection scenarios, the target and the stars are located far away from the telescope, and their signals can be regarded as point light sources. After passing through the optical system, these signals present an approximately Gaussian distribution within a local region of the image. Therefore, the Gaussian PSF model can be used to approximate the grayscale distribution of the target and the stars.

The equation for the Gaussian PSF is as follows:

g(x, y) = A · exp{−[(x − x₀)² + (y − y₀)²]/(2σ²)}, (19)

where A is the grayscale value at the center of the target or star; (x₀, y₀) and (x, y) are the pixel coordinates of the star spot center and the pixel coordinates of the point of the target or star, respectively; and σ is the standard deviation of the PSF, which can be calculated using the following:

σ = w/3, (20)

where w is the radius of the dispersed star spot.

We obtain the grayscale value A at the target's center via reverse engineering based on the target's grayscale mean value and the Gaussian PSF. The equation required is as follows:

A = g_t · p² / Σ_i exp{−[(x_i − x₀)² + (y_i − y₀)²]/(2σ²)}, (21)

where the sum runs over the p × p pixels occupied by the target.

Finally, the noise image obtained by adding the zero-mean Gaussian noise and the uniform background is synthesized with the target image and the star background image to obtain the final target sequence simulation image.
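The final composition step can be sketched as follows: render a p × p Gaussian spot whose center amplitude A is normalized per Eq. (21) so that the patch mean equals g_t, then add it onto a noisy uniform background (pure-Python lists here; function and parameter names are illustrative):

```python
import math
import random

def render_spot(g_t, p, x0, y0, sigma):
    # Eqs. (19) and (21): Gaussian spot with center amplitude A chosen so
    # that the mean grayscale over the p x p patch equals g_t
    w = [[math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
          for x in range(p)] for y in range(p)]
    A = g_t * p * p / sum(map(sum, w))
    return [[A * v for v in row] for row in w]

def synthesize(width, height, g_bg, sigma_n, spot, ox, oy, seed=0):
    # uniform background + zero-mean Gaussian noise + target spot
    rng = random.Random(seed)
    img = [[g_bg + rng.gauss(0.0, sigma_n) for _ in range(width)]
           for _ in range(height)]
    for dy, row in enumerate(spot):
        for dx, v in enumerate(row):
            img[oy + dy][ox + dx] += v
    return img

spot = render_spot(50.0, 7, 3.0, 3.0, 1.2)
frame = synthesize(64, 64, 30.0, 4.0, spot, 20, 20)
```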

EXPERIMENT

In this work, experiments were conducted to verify the accuracy of the proposed SNR calculation method and the accuracy and the simulation effectiveness of the proposed target sequence image simulation method.

For the proposed SNR calculation method, images of specified celestial objects must first be captured using a designated instrument with different exposure times. The actual SNR for the captured image sequences will then be computed. Next, the theoretical SNR can be calculated using the proposed method with the observation parameters and the celestial object's apparent magnitude. The accuracy of the proposed method can then be validated by comparing the theoretical and actual SNR values to determine the error between them.
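The "actual SNR" of a captured image, defined earlier as the grayscale mean of the target region divided by the standard deviation of a background noise region, can be measured with a sketch like this (the box-tuple interface is an assumption):

```python
import math

def measured_snr(image, target_box, noise_box):
    # boxes are (x0, y0, x1, y1), end-exclusive pixel ranges
    x0, y0, x1, y1 = target_box
    tvals = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    u0, v0, u1, v1 = noise_box
    nvals = [image[y][x] for y in range(v0, v1) for x in range(u0, u1)]
    mu_n = sum(nvals) / len(nvals)
    std_n = math.sqrt(sum((v - mu_n) ** 2 for v in nvals) / len(nvals))
    return (sum(tvals) / len(tvals)) / std_n
```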

For the proposed target sequence image simulation method, observations must first be conducted on a designated sky region using the specified observation parameters, and the corresponding background star simulation images will then be generated. These simulated images will then be compared with the actual telescope images of the corresponding sky region to check whether the positions of the stars match, thus confirming the accuracy of the proposed simulation method. Subsequently, multiple sets of target sequence simulation data will be generated by adjusting the various exposure times, and the SNR variations between the different sequences will then be observed to evaluate the effectiveness of the simulation method.

Accuracy Verification Experiment for the Proposed SNR Calculation Method

At the Huairou Campus of the National Space Science Center of the Chinese Academy of Sciences, we conducted observational experiments using a Celestron C11HD Schmidt-Cassegrain telescope paired with a ZWO ASI174MM planetary camera. The weather on the day was clear with no moon, but the light pollution was relatively severe, with an SQM index of 75 mag arcsec⁻². We observed Saturn and its moons, along with the star SAO 93495. The telescope parameters and the planetary camera parameters are listed in the tables. Six data sets were collected from the experiments.

These sets included three sets of observational data for Saturn and its moons, which were captured with exposure times of 0.3 s, 0.1 s, and 0.02 s; and three sets of observational data of Uranus and SAO 93495 that were acquired using exposure times of 0.5 s, 0.2 s, and 0.1 s. The images of Saturn and its moons and the images of Uranus and SAO 93495 are shown in the figures.

(Table: telescope parameters — optical system: HD Schmidt-Cassegrain; finderscope: 9 × 50; optical coating: StarBright XLT; analog-to-digital converter: 12/10 bit.)

In the Saturn observation images, both Titan and Hyperion are located far away from Saturn and are less strongly affected by the diffused light from Saturn. Therefore, the SNRs of these two moons in the image are calculated primarily. In the Uranus observation data, Uranus has a higher brightness, and the charge received by the target pixels in the charge-coupled device (CCD) in the detector reaches the saturation charge level. As a result, reducing the exposure time does not affect the SNR significantly. Therefore, the SAO 93495 star within the FOV is selected to calculate the SNR in this case.

By inputting the target's apparent magnitude, the exposure time, the pixel size, the telescope parameters, and the detector parameters into the proposed SNR calculation method, both the S/N SNR and the dB SNR calculation results are obtained. The target's name, the target's apparent magnitude, the target's pixel size, the exposure time, the actual SNR, and the calculated theoretical SNR are listed in the table.

(Figure: Saturn observation images captured with exposure times of (A) 0.3 s, (B) 0.1 s, and (C) 0.02 s. In all images, Titan is located in the red box and Hyperion is located in the green box.)

(Figure: Uranus and SAO 93495 observation images captured with exposure times of (A) 0.5 s, (B) 0.2 s, and (C) 0.1 s. In all images, SAO 93495 is located in the red box, and Uranus is located in the green box.)

(Table: SNR verification results — target name, apparent magnitude/mV, target pixel count, exposure time, actual SNR, calculated SNR, and error between the two SNRs.)

The maximum errors of the proposed method are 0.822 (S/N) and 1.462 (dB) when compared with the actual observed SNR, with average errors of 0.331 (S/N) and 0.601 (dB). For the S/N SNR, the error between the results obtained from the proposed method and the actual observed SNR does not exceed 1. However, for the dB SNR, because the dB SNR and the S/N SNR have a logarithmic relationship, the dB SNR compresses the variation of the S/N SNR when the S/N SNR > 1, and it amplifies the variation of the S/N SNR when the S/N SNR < 1.

Therefore, when the S/N SNR < 1, even a small error can lead to a significant difference in the dB SNR results. As a result, the maximum error in the dB SNR is relatively large, but for the positive dB SNR values, the maximum error is 0.694. From the average error between the actual SNR and the calculated theoretical SNR, it can be concluded that the average calculation error of the proposed SNR calculation method is <1 dB. Although there are some errors, the results remain within the same order of magnitude as the actual SNR, thus indicating that the method has significant reference value. The proposed method can reflect the SNR of a target in an image realistically under specific observational conditions.

We compared the background star simulation images of a designated sky region that were generated using the simulation algorithm under specified observation conditions with images captured by the Guan Sheng Optical (GSO) 8-inch RC reflecting telescope. By comparing the relative positions of multiple stars in the corresponding images, we verified the accuracy of the proposed simulation algorithm.

We selected the typical Pleiades cluster as the observation target. The observation parameters, which are given in the table, include the observation time, the observation location, the FOV angle, the telescope axis orientation, the image resolution, and the camera focal length. The background star image generated using the simulation algorithm and the corresponding sky region image captured by the GSO 8-inch RC reflecting telescope are shown in the figures.

(Table: observation parameters for the simulation-generated data of the Pleiades cluster — observation time (UTC): 2025-01-25 06:30:00; observation location: longitude 116°, latitude 39°, altitude 49 m; observation axis direction: right ascension 56.62°, declination 24.21°; observation FOV: 1.12° × 1.12°; image resolution: 500 × 500; focal length: 150 mm.)

We compared the positions of 17 stars in the Pleiades cluster image generated by the simulation algorithm with the corresponding positions of these stars in the image of the Pleiades cluster captured by the GSO 8-inch RC reflecting telescope. The figure shows that the simulated results align well with the actual distribution of the

background stars in the captured image. However, some stars observed in the actual captured image were not found as corresponding star points in the simulation results. These differences may be caused by two factors: on the one hand, the SAO Star Catalog does not contain a complete list of star entries, and on the other hand, the observational parameters that we selected caused relatively faint stars to have smaller grayscale values in the image, thus making them undetectable.

Additionally, the simulation results show that the star spots formed for brighter stars (i.e., those with smaller apparent magnitudes) are larger. This occurs because these stars deliver more light to the detector, which leads to higher brightness in the image; as a result, when the stars are simulated using the PSF, the spots that are formed have a larger extent.

Simulation Experiment of Target Sequence Images Acquired under Different Observation Parameters

Based on the proposed target sequence image simulation algorithm for ground-based scenarios, we simulated multiple observation data sets for the DSCS 2-2 satellite (OPS 9432) with different exposure times.

The physical characteristics and the orbital information of the satellite are presented in the table below. The DSCS 2-2 satellite (OPS 9432) has an average diameter of 1.42 m and a surface albedo of 0.2. By assuming that the satellite's cross-section is circular, the calculated apparent magnitude under the given observational conditions is 13.08.

The simulation parameters of the algorithm were set as shown in the simulation parameter table. We simulated the sequence of motion images of the DSCS 2-2 satellite (OPS 9432) observed in Beijing for 100 s starting from 06:30:00 UTC on January 25, 2025, with exposure times of 0.1 s, 0.2 s, 0.3 s, and 0.5 s. The sky background brightness for that day was set at 19.75 mag/arcsec², and the telescope's optical axis was pointed at a right ascension of 132.47° and a declination of 3.75°. In addition, we set the target pixel size to 3 × 3.

The SNR calculation results for the target observation data recorded over the different exposure times, as generated using the simulation algorithm, are given in the corresponding table. The exposure times for Seq. 1, Seq. 2, Seq. 3, and Seq. 4 are 0.1 s, 0.2 s, 0.3 s, and 0.5 s, respectively. The average SNR for Seq. 1 is −1.12 dB and represents the minimum, while the average SNR for Seq. 4 is 4.83 dB and represents the maximum. It can thus be observed that the SNR of the target observation data increases with increasing exposure time, which aligns with the actual observation conditions.
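The growth of SNR with exposure time can be illustrated with the classic CCD noise model, in which the target signal and the shot-noise terms accumulate linearly with time while the read noise is fixed per readout. All rates in this sketch are illustrative assumptions rather than the system's actual parameters, and the decibel conversion uses one common convention (20·log10 of the amplitude ratio).

```python
# Minimal sketch of how imaging SNR grows with exposure time under the
# classic CCD noise model.  The target/sky/dark electron rates and read
# noise below are illustrative assumptions, not the paper's values.
import math

def snr_db(t_exp, s_rate=120.0, b_rate=300.0, d_rate=5.0,
           read_noise=8.0, n_pix=9):
    """SNR in dB for exposure time t_exp (seconds).

    signal   = s_rate * t_exp                               (target electrons)
    variance = signal + (sky + dark) * n_pix * t_exp + read_noise**2 * n_pix
    One common convention expresses the ratio as 20 * log10(SNR).
    """
    signal = s_rate * t_exp
    noise = math.sqrt(signal + (b_rate + d_rate) * n_pix * t_exp
                      + read_noise ** 2 * n_pix)
    return 20.0 * math.log10(signal / noise)

# SNR rises monotonically with exposure time, matching the trend in the text.
trend = [round(snr_db(t), 2) for t in (0.1, 0.2, 0.3, 0.5)]
```

Because the signal grows as t while the dominant noise terms grow as the square root of t, the ratio improves monotonically with longer exposures, consistent with Seq. 1 to Seq. 4 above.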

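The star-spot behaviour noted earlier, where brighter stars render as larger spots, can also be sketched. With a Gaussian PSF, the peak amplitude scales with the star's flux, so more pixels exceed a fixed visibility threshold for brighter stars. The zero-point flux, PSF width, and threshold below are illustrative assumptions.

```python
# Sketch of why brighter stars produce larger-looking spots when rendered
# with a Gaussian PSF: the peak amplitude scales with flux, so more pixels
# rise above the detection threshold.  F0, SIGMA, and THRESH are assumptions.
import math

F0 = 1.0e5        # assumed flux (arbitrary units) of a magnitude-0 star
SIGMA = 1.2       # assumed Gaussian PSF width in pixels
THRESH = 50.0     # assumed per-pixel visibility threshold

def spot_radius(mag):
    """Radius (pixels) at which the Gaussian star profile falls to THRESH."""
    flux = F0 * 10.0 ** (-0.4 * mag)                # Pogson's law
    peak = flux / (2.0 * math.pi * SIGMA ** 2)      # central pixel amplitude
    if peak <= THRESH:
        return 0.0                                  # star too faint to detect
    return SIGMA * math.sqrt(2.0 * math.log(peak / THRESH))
```

Under this model a bright star yields a visibly larger spot than a faint one, and a sufficiently faint star falls entirely below the threshold, matching the undetectable-star effect described above.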
Table: Physical characteristics and orbital information of the DSCS 2-2 satellite (OPS 9432).
Apparent magnitude/Mv: 13.08
Surface albedo: 0.2
Mean diameter/km: 0.00142
Mass/kg: 520.0
Orbital information: eccentricity, semi-major axis/km, periapsis distance/km, orbital inclination/(°), longitude of ascending node/(°), argument of periapsis/(°), mean argument of periapsis/(°), orbital period/min, mean speed/(°/d), aphelion distance/km.

[Figure: (A) Simulation-generated Pleiades cluster image. (B) Pleiades cluster image captured by the Guan Sheng Optical (GSO) 8-inch RC reflecting telescope. The stars enclosed in the red boxes are used for the comparison, and stars with the same number represent the comparison results for the same star.]

Table: Simulation parameters.
Observation time (UTC): 2025-01-25 06:30:00
Observation location: (116°, 39°, 49 m)
Observation axis direction: (132.47°, 3.75°)
Simulation duration/s: 100
CCD sensor size/mm: 2.93 × 2.93
CCD pixel size/μm: —
Focal length/mm: —
Sky background brightness/(mag/arcsec²): 19.75
Quantum efficiency: —
Optical system transmittance: —
Readout noise/(e⁻ pixel⁻¹): —
Dark current noise/(e⁻ pixel⁻¹): —
CCD gain/e⁻: —
Camera aperture diameter/m: —
Exposure time/s: 0.1, 0.2, 0.3, 0.5

The simulation results for the target sequence images generated using the proposed algorithm over the different exposure times are shown in the corresponding figure. Based on the exposure times, we generated four sets of target sequence simulation images (Seq. 1 to Seq. 4), with each set comprising four images captured at 0 s, 10 s, 20 s, and 30 s intervals from the start of the observation. The DSCS 2-2 satellite is highlighted within the red box in each image. As the exposure time increases, both the target brightness and the background brightness in the image also increase. Additionally, the position of the DSCS 2-2 satellite in the image shifts with the changes in the observation time.

Therefore, the proposed target sequence image simulation algorithm for ground-based detection scenarios is able to generate target observation data with varying SNRs under different observational conditions based on the input observation parameters. The algorithm takes several factors into account, including the exposure time, the telescope aperture, and other relevant parameters, to simulate image data that reflect the actual observations. In addition, the algorithm is able to simulate the target's motion over time. By incorporating the target's orbital parameters and trajectory into the simulation, it can accurately model the dynamic changes in the target's position and orientation within the image field. This enables the algorithm to generate time-series image sequences that reflect the actual motion of the target in space.

[Figure: target sequence simulation images captured at 0 s, 10 s, 20 s, and 30 s intervals from the start of observation. Seq. 1, Seq. 2, Seq. 3, and Seq. 4 are the target sequence simulation images acquired under exposure times of 0.1 s, 0.2 s, 0.3 s, and 0.5 s, respectively.]
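The conversion between a target's apparent magnitude and image grayscale values that underpins this kind of simulation can be sketched as a chain from magnitude to photon rate, to detected electrons, to a clipped ADU count. The zero-point photon rate and all system parameters below are illustrative assumptions, not the paper's calibrated model.

```python
# Hedged sketch of a magnitude-to-grayscale conversion of the kind the text
# describes: apparent magnitude -> photon rate -> detected electrons ->
# clipped ADU value.  All constants below are illustrative assumptions.
import math

PHOTON_RATE_M0 = 1.0e10   # assumed photons/s/m^2 from a magnitude-0 star
APERTURE_D = 0.2          # assumed telescope aperture diameter (m)
QE = 0.6                  # assumed quantum efficiency
TAU = 0.8                 # assumed optical system transmittance
GAIN = 2.0                # assumed CCD gain (electrons per ADU)
ADC_MAX = 65535           # 16-bit ADC full scale

def magnitude_to_gray(mag, t_exp):
    """Total grayscale counts produced by a target of given magnitude."""
    area = math.pi * (APERTURE_D / 2.0) ** 2
    photons = PHOTON_RATE_M0 * 10.0 ** (-0.4 * mag) * area * t_exp
    electrons = photons * QE * TAU
    return min(int(electrons / GAIN), ADC_MAX)    # clip at ADC full scale
```

Under this model the grayscale value grows with exposure time and falls with magnitude, which is the behaviour the simulated sequences above exhibit.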

DISCUSSION

The SNR results obtained from the proposed ground-based system's SNR calculation method still show a small error when compared with the actual observed data. On the one hand, this may be caused by the influence of atmospheric effects on the actual observed images, where the target is not of a regular shape, and this would lead to some errors in the SNR values calculated from the actual observation data. On the other hand, the proposed SNR calculation method assumes a constant quantum efficiency for the observation system, whereas in reality, this efficiency varies with the spectral wavelength. In future work, we will consider using a variable quantum efficiency and will attempt to account for the atmospheric effects on the observed target to improve the accuracy of the SNR calculation method.
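One way to move from a constant quantum efficiency to a wavelength-dependent one, as proposed above, is to reduce a QE curve to an effective value by flux-weighted averaging over the source spectrum. The bell-shaped QE curve and the flat source spectrum below are illustrative assumptions only.

```python
# Sketch of the proposed refinement: replacing a constant quantum
# efficiency with a wavelength-dependent QE(lambda), reduced to an
# effective value by flux-weighted averaging.  The QE curve and the flat
# source spectrum are illustrative assumptions.
import numpy as np

wavelength_nm = np.linspace(400.0, 900.0, 251)
# Assumed bell-shaped QE curve peaking near 550 nm.
qe_curve = 0.9 * np.exp(-0.5 * ((wavelength_nm - 550.0) / 120.0) ** 2)
source_flux = np.ones_like(wavelength_nm)   # flat spectrum for simplicity

# Effective QE = flux-weighted average of QE(lambda) over the band.
effective_qe = np.average(qe_curve, weights=source_flux)
```

For a non-flat source spectrum (e.g., solar-like reflected light from the target), the same weighted average would shift the effective QE toward the wavelengths where the source is brightest.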

The proposed target sequence image simulation method for ground-based detection scenarios is able to simulate target observation data with varying SNRs under different observational conditions. However, its observational scenarios have been limited to ground-based detection to date. In future work, we will explore the development of an SNR-based target sequence image simulation method for space-based detection scenarios and will design a simulation system to support practical application requirements.

CONCLUSIONS

We have proposed an SNR calculation method for ground-based systems and validated the method's accuracy using actual captured images. The calculated results showed an error of less than 1 dB when compared with the measured data. Based on this SNR calculation method, we have also introduced a target sequence image simulation method for ground-based scenarios. This method can simulate target observation data under various observational conditions, and the simulated star maps aligned well with the actual star maps. The method is able to simulate target trajectories and other dynamic aspects accurately. This simulation method provides a solid data foundation for research into faint space target detection algorithms and holds significant practical application value.

ACKNOWLEDGEMENTS

This work was supported by the Open Fund of the National Key Laboratory of Deep Space Exploration (NKDSEL2024014) and by the Civil Aerospace Pre-research Project of the State Administration of Science, Technology and Industry for National Defence, PRC (D040103).

AI DISCLOSURE STATEMENT

AI-assisted technology was not used in the preparation of this work.

AUTHOR CONTRIBUTIONS

Chunxu Ren conceived the ideas, designed and implemented the study, and wrote the paper. Yun Li, Wenlong Niu, and Xiaodong Peng provided supervision and guidance throughout the study, were responsible for project administration and funding acquisition, and contributed to manuscript review and editing. Yanzhao Li assisted with the experimental procedures and data collection, and participated in manuscript revision and proofreading. Weihua Gao conducted the literature review and provided relevant supporting materials. All authors read and approved the final manuscript.

DECLARATION OF INTERESTS

The authors declare no competing interests.

