Abstract:
Detecting dim, weak space targets against complex stellar backgrounds is a core challenge in space situational awareness. Existing methods suffer from high missed-detection rates and significant localization bias caused by background noise interference, target occlusion, and inter-frame misalignment. To address these challenges, a novel fusion detection framework, HASD-StarNet, was developed. First, the local contrast threshold was adjusted dynamically to suppress background noise. Second, the observation time window was extended and the target's spatiotemporal features were fused to maintain motion continuity and handle occlusion. Finally, stellar magnitude features were incorporated into the geometric matching algorithm to improve registration accuracy. Experimental results on both simulated and real star map datasets show that the signal-to-noise ratio (SNR) of the initial star images improves by 93.37% after processing with HASD-StarNet. Target detection rates under three different SNR conditions reach 95.58%, 97.59%, and 100.00%, respectively, and inter-frame registration accuracy reaches 0.52 pixels. Processing speed is 6 to 60 times faster than that of traditional methods. The framework proves effective for detecting dim and weak space targets against complex stellar backgrounds, providing support for spacecraft safety.
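The paper does not give implementation details in the abstract; as a rough illustration of the first step (dynamic local-contrast thresholding for background-noise suppression), a minimal sketch follows. The window size and scale factor k are hypothetical parameters chosen for illustration, not values from the paper.

```python
# Illustrative sketch only: adaptive local-contrast thresholding to suppress
# stellar background noise. Parameters (window, k) are assumptions, not the
# paper's actual settings.
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_local_contrast_mask(image: np.ndarray, window: int = 15, k: float = 3.0) -> np.ndarray:
    """Flag pixels whose local contrast exceeds a locally adapted threshold."""
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=window)          # local background estimate
    local_sq_mean = uniform_filter(img ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    contrast = img - local_mean                             # local contrast at each pixel
    threshold = k * local_std                               # threshold adapts to local noise level
    return contrast > threshold                             # candidate target/star pixels
```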