Abstract: With the increasing integration of uncertain distributed renewable energies (DREs) into distribution networks (DNs), communication bottlenecks and the limited deployment of measurement devices pose significant challenges for advanced data-driven voltage control strategies such as deep reinforcement learning (DRL). To address these issues, this paper proposes an offline-training, online-execution framework for volt-var control in DNs. In the offline-training phase, a graph convolutional network (GCN)-based denoising autoencoder (DAE), referred to as the deep learning (DL) agent, is designed and trained to capture spatial correlations among the limited measured physical quantities. This agent predicts the voltages of nodes with missing measurements from historical load data, DRE outputs, and global voltages obtained from simulations. Furthermore, the dual-timescale voltage control problem is formulated as a multi-agent Markov decision process, and a DRL agent employing the multi-agent soft actor-critic (MASAC) algorithm is trained to regulate the tap position of the on-load tap changer (OLTC) and the reactive power outputs of photovoltaic (PV) inverters. In the online-execution phase, the DL agent supplements the limited measurement data, providing enhanced global observations for the DRL agent and thereby enabling precise equipment control based on an improved estimate of the system state. The proposed framework is validated on two modified IEEE test systems. Numerical results demonstrate that it effectively reconstructs missing measurements and achieves rapid and accurate voltage control even under severe measurement deficiencies.
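To make the GCN-based DAE idea concrete, the following is a minimal sketch of the offline training step for voltage imputation, assuming PyTorch and PyTorch Geometric. The class name VoltageDAE, the layer sizes, the feature layout (voltage as the last node feature), and the random masking rate are illustrative assumptions, not the paper's reference implementation.

# Minimal sketch of a GCN-based denoising autoencoder (DAE) for voltage
# imputation. Assumes PyTorch and PyTorch Geometric; names, layer sizes,
# and the masking scheme are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class VoltageDAE(torch.nn.Module):
    """Encode node features (loads, DRE outputs, observed voltages) over
    the grid topology and decode a voltage magnitude for every node."""
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.enc1 = GCNConv(in_dim, hidden_dim)
        self.enc2 = GCNConv(hidden_dim, hidden_dim)
        self.dec = torch.nn.Linear(hidden_dim, 1)  # predicted voltage per node

    def forward(self, x, edge_index):
        h = F.relu(self.enc1(x, edge_index))
        h = F.relu(self.enc2(h, edge_index))
        return self.dec(h).squeeze(-1)

def train_step(model, optimizer, x, edge_index, v_true, mask_rate=0.5):
    """One denoising step: randomly hide a fraction of the observed
    voltages (emulating missing measurements), then reconstruct them."""
    mask = torch.rand(x.size(0)) < mask_rate      # nodes whose voltage is hidden
    x_noisy = x.clone()
    x_noisy[mask, -1] = 0.0                       # assumes voltage is the last feature
    optimizer.zero_grad()
    v_pred = model(x_noisy, edge_index)
    loss = F.mse_loss(v_pred[mask], v_true[mask]) # supervise only the masked nodes
    loss.backward()
    optimizer.step()
    return loss.item()

In the online-execution phase described above, the trained model would instead be run with the mask fixed to the actual set of unmetered nodes, and its imputed voltages would be appended to the raw measurements to form the DRL agent's global observation.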