Abstract
This paper investigates the intelligent load monitoring problem with applications to practical energy management scenarios in smart grids. As one of the critical components for paving the way to smart grids’ success, an intelligent and feasible non-intrusive load monitoring (NILM) algorithm is urgently needed. However, most recent research on NILM has not dealt with the practical problems that arise when it is applied to the power grid, i.e., ① limited communication for slow-change systems; ② the requirement of low-cost hardware at the users’ side; and ③ the inconvenience of adapting to new households. Therefore, a novel NILM algorithm based on a biology-inspired spiking neural network (SNN) has been developed to overcome these challenges. To provide intelligence in NILM, the developed SNN features an unsupervised learning rule, i.e., spike-timing-dependent plasticity (STDP), which only requires the user to label one instance for each appliance when adapting to a new household. To improve the feasibility of NILM, the designed spiking neurons mimic the mechanism of human brain neurons and can be constructed from a resistor-capacitor (RC) circuit. In addition, a distributed computing system has been designed that divides the SNN into two parts, i.e., smart outlets and local servers. Since the information flows as sparse binary vectors among spiking neurons in the developed SNN-based NILM, the high-frequency data can easily be compressed as spike times and sent to the local server over a link with limited communication capability, which traditional NILM algorithms cannot handle. Finally, a series of experiments is conducted using a benchmark public dataset. Meanwhile, the effectiveness of the developed SNN-based NILM is demonstrated through comparisons with other emerging NILM algorithms such as convolutional neural networks.
OVER the past decades, the emerging desire for a next-generation power grid, i.e., the smart grid [
The basic concept of NILM is to model an appliance by its numerous electric features such as active and reactive power difference [
Thanks to the emerging Internet of Things (IoT) technology, the current and voltage data of each appliance can be measured individually through a smart plug. Therefore, the burden of disaggregating the summed power data is relieved, and the algorithm can focus on inferring the types of appliances only. For example, the Grid Sense system utilizes the smart outlets to measure and classify appliances [
To overcome these difficulties, a novel NILM algorithm based on a biology-inspired spiking neural network (SNN) is proposed, featuring an unsupervised learning scheme and a distributed computing design. Recognized as the third generation of neural networks, SNNs are inspired by human brain neurons. Human brains can encode a huge amount of information using small populations of spikes and consume significantly less energy than analog neural networks (ANNs) [
In addition, the most popular training mechanism, i.e., spike-timing-dependent plasticity (STDP), operates in an unsupervised manner where the training dataset does not need to be labeled. Instead of labeling every data instance as required by traditional deep ANNs, the user only needs to label the model once after the SNN is trained. The concept of STDP is inspired by Hebb’s rule, which states that any two cells or systems of cells that are repeatedly active at the same time will tend to become “associated” so that the activity in one facilitates that in the other [
The contributions of this paper can be summarized as follows.
1) A novel SNN-based NILM algorithm is developed that follows an unsupervised learning scheme. The users do not need to label the dataset prior to training.
2) The proposed SNN-based NILM algorithm is especially suitable for low-cost hardware implementation of deep learning.
3) A novel distributed computation mechanism is developed based on the SNN to utilize high-frequency data that would otherwise be impossible to transmit due to the communication constraint.
The rest of this paper is organized as follows. Section II provides the problem formulation for NILM as well as the background of SNNs. In Section III, the novel SNN-based NILM system is developed along with the novel distributed computation mechanism. Finally, the developed algorithm is tested using the benchmark public dataset and compared with other state-of-the-art algorithms in Section IV. Section V concludes the paper.
In this section, the practical problem of NILM systems is described. Then the SNNs are introduced.
Given a set of appliances, assume that each appliance is connected with a smart plug which can measure the current and voltage. The smart plug reports to the local household server, and then the local server trains a deep neural network on the unlabeled data. Let $X \in \mathcal{X}$ denote a feature matrix generated from the voltage and current of an appliance, i.e., $X = [\boldsymbol{v}, \boldsymbol{i}]$, where $\boldsymbol{v}$ is an $l$-dimensional voltage vector; $\boldsymbol{i}$ is an $l$-dimensional current vector; $\mathcal{X}$ is the space that contains all feasible $X$; and $l$ is half of the dimension of the input feature matrix. Consider a feature matrix set $\{X_k\}$ along with its corresponding labels $\{y_k\}$, $y_k \in \mathcal{Y}$, where $\mathcal{Y}$ is the space containing all labels. The input matrices and the output vectors can be written as a set of pairs $\{(X_k, y_k)\} \subseteq \mathcal{Z}$, where $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ is the space containing all possible pairs.
Assumption 1: assume there exists a bijection between space $\mathcal{X}$ and space $\mathcal{Y}$.
Define a parameter matrix $W \in \mathcal{W}$, the cost function $J(W)$, and the mapping $f: \mathcal{X} \times \mathcal{W} \rightarrow \mathcal{Y}$. Also, an optimal parameter $W^*$ is defined as:

$$W^* = \arg\min_{W \in \mathcal{W}} J(W) \quad (1)$$
Next, the classifier operator is defined as follows.
Definition 1: given any member $X$ of the vector space $\mathcal{X}$, let an operator $f(\cdot\,; W): \mathcal{X} \rightarrow \mathcal{Y}$ satisfy Assumption 1 with the output $y = f(X; W)$. The operator $f$ is called the classification operator, and the corresponding parameter $W$ is called the weight matrix.
We specifically seek this operator and its optimal parameter such that the cost function satisfies (1). With the above definition, we particularly define the cost function as a cross-entropy function, i.e.,

$$J(W) = -\sum_{(X,\, y) \in \mathcal{Z}} \log P(y \mid X;\, W) \quad (2)$$

where $P(y \mid X;\, W)$ is the probability of $y$ among all possible labels in $\mathcal{Y}$. Intuitively, we want to design a neural network and find its optimal weights so that the input dataset can be correctly classified.
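As a quick illustration of (2), the sketch below evaluates the cross-entropy cost of a stand-in softmax classifier over 11 appliance classes; the softmax itself is only an assumed placeholder for the mapping $f(X; W)$, not the SNN developed later.

```python
import numpy as np

# Toy evaluation of the cross-entropy cost (2) for an 11-class problem.
def cross_entropy(logits, y):
    """logits: (n_samples, n_classes); y: integer class labels."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)        # stand-in for P(y | X; W)
    return -np.log(p[np.arange(len(y)), y]).sum()

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 11))            # 5 samples, 11 appliance types
print(cross_entropy(logits, np.array([0, 3, 7, 1, 10])))
```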
The overall structure of a distributed SNN-based NILM system is demonstrated in Fig. 1.

Fig. 1 Overall structure of a distributed SNN-based NILM system.
The information in human brain neurons flows as electric spikes. In a simplified model shown in Fig. 2, the membrane potential of a neuron follows the leaky integrate-and-fire (LIF) dynamics:

$$\tau \frac{\mathrm{d}V}{\mathrm{d}t} = (V_{rest} - V) + R I_{ext} \quad (3)$$

Fig. 2 Structure of a spiking neuron. (a) Biology structure of a spiking neuron. (b) Simplified model of a spiking neuron. (c) Simplified electric circuit for a spiking neuron with LIF model.

where $V$ is the voltage of the membrane potential; $V_{rest}$ is the resting voltage of the membrane potential; $\tau = RC$ is a time constant of the neuron; $R$ and $C$ are the resistance and capacitance of the RC circuit representing the neuron, respectively; and $I_{ext}$ is the external current generated by the input spikes. Moreover, if $V$ exceeds the firing threshold $V_{th}$, the neuron emits a spike and $V$ is reset to the resting voltage.
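As a concrete illustration, the LIF dynamics in (3) can be integrated numerically with a simple Euler scheme. The sketch below (with illustrative parameter values, not the ones used later in the experiments) fires a spike whenever the membrane potential crosses the threshold and then resets it:

```python
import numpy as np

# Minimal Euler-integration sketch of the LIF model in (3).
tau, R = 0.02, 1.0e6           # time constant 20 ms, resistance 1 MOhm (assumed)
v_rest, v_th = -65e-3, -52e-3  # resting and threshold potentials (V)
dt, T = 1e-4, 0.35             # time step and simulation length (s)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    i_ext = 20e-9 if step * dt > 0.05 else 0.0  # external input current (A)
    v += dt / tau * ((v_rest - v) + R * i_ext)  # Euler step of (3)
    if v > v_th:                                # fire and reset
        spike_times.append(step * dt)
        v = v_rest

print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.3f} s"
      if spike_times else "no spikes")
```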
References [

Fig. 3 Binary V-I trajectory image of different appliances using data from plug load appliance identification dataset (PLAID). (a) Air conditioner (AC). (b) Compact fluorescent lamp (CFL). (c) Fan. (d) Fridge. (e) Hairdryer. (f) Heater. (g) Incandescent light bulb (ILB). (h) Laptop. (i) Microwave. (j) Vacuum. (k) Washing machine (WM).
Next, the binary V-I trajectory image is encoded into spikes. In biology, the timing of action potentials (spikes) is highly irregular [

Fig. 4 Process of encoding a V-I trajectory image into spikes.
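Concretely, each pixel of the binary image can drive an independent Poisson spike generator whose rate reflects the pixel value. The sketch below illustrates this rate coding; the rates and duration are assumptions for demonstration (63.75 Hz is a common maximum rate in STDP image-classification work, not a value quoted by this paper):

```python
import numpy as np

# Rate coding sketch: active pixels fire at a high Poisson rate,
# inactive pixels stay silent.
rng = np.random.default_rng(0)

def encode_poisson(image, rate_on=63.75, rate_off=0.0, duration=0.35, dt=1e-3):
    """Return a (time_steps, n_pixels) boolean spike raster."""
    rates = np.where(image.ravel() > 0, rate_on, rate_off)  # Hz per pixel
    steps = int(round(duration / dt))
    p_spike = rates * dt                                    # spike prob. per step
    return rng.random((steps, rates.size)) < p_spike

vi_image = rng.integers(0, 2, size=(28, 28))  # stand-in binary V-I image
spikes = encode_poisson(vi_image)
print(spikes.shape, spikes.sum(), "spikes in total")
```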
Instead of the general LIF model shown in (3), this paper divides the external input into two types, i.e., the excitatory and inhibitory parts for the best performance on the classification tasks [
The membrane potential is calculated as:

$$\tau \frac{\mathrm{d}V}{\mathrm{d}t} = (V_{rest} - V) + g_e (E_{exc} - V) + g_i (E_{inh} - V) \quad (4)$$

where $E_{inh}$ and $E_{exc}$ are the equilibrium potentials of the inhibitory and excitatory synapses, and $g_i$ and $g_e$ are the corresponding synaptic conductances [
Compared with the original LIF model (3), the external current term $R I_{ext}$ is replaced by two separate synapses, which creates competition between neurons to improve the overall performance [
The synaptic conductance that yields the optimal performance is, however, not known a priori. The process of finding better synaptic conductances, which are related to the synaptic weights, is usually named learning in traditional ANNs. Different from ANNs, where the weights are learned in a supervised manner using gradient descent, SNNs perform well with the unsupervised learning method known as STDP. As the key component of the SNN, the conductance decays exponentially unless a presynaptic spike arrives:

$$\tau_{g_e} \frac{\mathrm{d}g_e}{\mathrm{d}t} = -g_e \quad (5)$$

where $\tau_{g_e}$ is a time constant. Let $w_{ij}$ be the synaptic weight between the neurons $i$ and $j$. The conductance increases by $w_{ij}$ for each presynaptic spike. The inhibitory conductance $g_i$ updates in the same fashion but with a different time constant $\tau_{g_i}$. Based on (5), if a neuron $j$ receives presynaptic spikes from neuron $i$ on a more frequent basis, the conductance would increase, which means that more frequent firing behaviors will appear in neurons $i$ and $j$ with similar input. Intuitively, the synaptic weights between these neurons are greater to react to a certain input pattern. To tune the evolution of the conductance, a learning rule for the weight change is proposed in [

$$\Delta w = \eta (x_{pre} - x_{offset})(w_{max} - w)^{\mu} \quad (6)$$

where $\eta$ is the learning rate; $w_{max}$ is the maximum weight; $x_{offset}$ is an offset value; $\mu$ is an eligibility trace which denotes the dependence on the previous weight; and $x_{pre}$ is the presynaptic trace parameter, which contains the information of all presynaptic spikes. The presynaptic trace is increased by 1 for every arrived presynaptic spike and decays exponentially otherwise. When the presynaptic trace is larger than the offset value $x_{offset}$, the weight of this synapse is enhanced. Therefore, the conductance is increased by a greater value. However, when $x_{pre}$ does not reach the offset value, the conductance of this synapse will be reduced. This mechanism, along with the conductance adjustment, ensures that the neurons responding strongly to one input become more connected while irrelevant neurons are disconnected by decreasing the conductance.
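To make the trace-based rule concrete, the following sketch (my own illustration of (5) and (6), with assumed parameter values and shapes) decays the traces and conductances exponentially each time step and applies the weight change to synapses onto neurons that have just fired:

```python
import numpy as np

# Illustrative trace-based STDP update following (5) and (6).
eta, w_max, x_offset, mu = 0.01, 1.0, 0.4, 0.9   # assumed values
tau_ge, tau_pre, dt = 1e-3, 20e-3, 1e-3

rng = np.random.default_rng(1)
w = rng.random((784, 400)) * 0.3    # input -> hidden synaptic weights
x_pre = np.zeros(784)               # presynaptic traces
g_e = np.zeros(400)                 # excitatory conductances

def on_time_step(pre_spikes, post_spikes):
    """pre_spikes/post_spikes: boolean spike vectors for this time step."""
    x_pre[:] *= np.exp(-dt / tau_pre)     # exponential trace decay
    g_e[:] *= np.exp(-dt / tau_ge)        # conductance decay, cf. (5)
    x_pre[pre_spikes] += 1.0              # +1 per arrived presynaptic spike
    g_e[:] += w[pre_spikes].sum(axis=0)   # conductance grows by the weights
    # Weight change (6), applied to synapses onto neurons that just fired:
    dw = eta * (x_pre[:, None] - x_offset) * (w_max - w) ** mu
    w[:, post_spikes] = np.clip(w[:, post_spikes] + dw[:, post_spikes],
                                0.0, w_max)
```

Note how (6) rewards synapses whose presynaptic trace exceeds the offset and depresses the rest, exactly as described above.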
Unlike the back-propagation method, which is widely used in recent deep ANN learning rules, the STDP rule can adjust the synapses of neurons without reference output guidance. In a trained SNN, a specific neuron set is excited by a corresponding type of appliance. The users are only required to label one data instance for each type of appliance to associate the neuron excitement patterns (output spike rates) with appliance types. This removes the need for a large set of labeled data, making the NILM system more practical to commercialize.
To this end, the SNN for appliance classification along with the unsupervised learning rules has been introduced. Next, a distributed NILM system is proposed with the SNN embedded.
As mentioned above, the NILM system hardware follows a three-layer structure, i.e., the smart outlet, the local household server, and the area server. In practical applications such as the Grid Sense system, the communication between the smart outlets and the local server is often limited due to the requirement of low-cost hardware. This stringent constraint frustrates many existing deep learning methods such as [
The structure of the proposed NILM system is shown in Fig. 5.

Fig. 5 Structure of NILM system.
The received sequence of spike times is then decoded into spike trains and sent to the connected neurons in the hidden layer on the local server. The connections between the hidden and output layers are circuits, and the information flows as electric pulses. Compared with a traditional ANN with various activation functions, which requires an intensive computation unit, the SNN neurons are physical RC circuits, and the update law does not require solving the differential equation. Therefore, the computation efforts of both learning and classification are greatly reduced. The speed of the forward process of the electric flow is also much higher than that of digital computation. Consequently, the costs of the smart outlet and the household server are reduced, because no high-end computation-intensive unit such as a GPU is required.
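The outlet-to-server link can therefore carry a short list of spike times instead of raw high-frequency samples. The sketch below illustrates this compression and the server-side decoding; the message format is a hypothetical example, not the system's actual protocol:

```python
import numpy as np

# Sparse binary spike train -> spike times -> reconstructed spike train.
def compress(spike_train, dt=1e-3):
    """Boolean spike train -> list of spike times in seconds."""
    return (np.flatnonzero(spike_train) * dt).tolist()

def decompress(spike_times, duration, dt=1e-3):
    """Spike times -> reconstructed binary spike train."""
    train = np.zeros(int(round(duration / dt)), dtype=bool)
    train[np.rint(np.asarray(spike_times) / dt).astype(int)] = True
    return train

train = np.zeros(350, dtype=bool)
train[[12, 90, 301]] = True
message = compress(train)          # three floats instead of 350 samples
assert np.array_equal(decompress(message, 0.35), train)
```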
The training process can be summarized as pseudo-code, sketched below.
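The following runnable outline strings together the stages described above: unsupervised STDP presentation of shuffled, repeated unlabeled images, followed by labeling with a single instance per appliance. The helper routines are trivial stand-ins (assumptions), not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_poisson(image):
    # Stand-in Poisson encoder: one spike train per pixel.
    return rng.random((350, image.size)) < 0.02 * image.ravel()

def simulate_snn(snn, spikes, plasticity):
    # Stand-in for the LIF + STDP dynamics of the real network.
    if plasticity:
        snn["w"] += 1e-4 * spikes.mean(axis=0)[:, None]
    return spikes.mean(axis=0) @ snn["w"]   # output "spike rates"

snn = {"w": rng.random((784, 11)) * 0.1, "label": {}}

# 1) Unsupervised phase: repeat and shuffle the unlabeled training data.
unlabeled = [rng.integers(0, 2, 784) for _ in range(20)]
for _ in range(3):
    for k in rng.permutation(len(unlabeled)):
        simulate_snn(snn, encode_poisson(unlabeled[k]), plasticity=True)

# 2) Labeling phase: one labeled instance per appliance type.
for appliance, image in {"AC": unlabeled[0], "Fan": unlabeled[1]}.items():
    rates = simulate_snn(snn, encode_poisson(image), plasticity=False)
    snn["label"][int(rates.argmax())] = appliance
```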
In summary, the SNN-based NILM algorithm can improve the traditional appliance inference in four aspects: ① the cost of hardware is reduced; ② the communication requirement is lowered; ③ the running speed is increased; and ④ the need for numerous labeled data is canceled.
In this section, the proposed SNN is constructed and then tested using a public dataset.
We utilize the benchmark public PLAID dataset [
To reduce the input complexity of the SNN, each instance is uniformly downsampled. Then, the binary V-I trajectory image is generated using the method described in Section II. We randomly select a fraction of the data as the training data and the rest as the testing data. To this end, the training and testing data are prepared. It is worth noting that the labels of these appliances are not required, which complies with the unsupervised training method.
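A minimal sketch of the binary V-I trajectory imaging step is given below: one steady-state cycle of voltage and current is normalized and rasterized onto a 28 × 28 grid (matching the 784 input neurons described next). The normalization details are assumptions:

```python
import numpy as np

def vi_trajectory_image(v, i, n=28):
    """Rasterize a voltage-current cycle into an n x n binary image."""
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)   # normalize to [0, 1]
    i = (i - i.min()) / (i.max() - i.min() + 1e-12)
    img = np.zeros((n, n), dtype=np.uint8)
    rows = np.minimum((i * n).astype(int), n - 1)     # current -> row
    cols = np.minimum((v * n).astype(int), n - 1)     # voltage -> column
    img[rows, cols] = 1                               # mark visited cells
    return img

t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
img = vi_trajectory_image(np.sin(t), np.sin(t + 0.5))  # toy load waveform
print(img.sum(), "active pixels")
```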
The SNN includes four layers, i.e., the input layer, two hidden layers, and the output layer. The input layer includes 784 (28 × 28) coding neurons, which are Poisson distribution generators. Each pixel of the binary V-I trajectory image is connected with one input neuron. There are 400 neurons in each of the two hidden layers and 11 output neurons in the output layer. The synapses between the input and hidden layers and between the hidden and output layers follow the “all-to-all” fashion, meaning that all neurons are connected. The proposed SNN structure is shown in Fig. 6.

Fig. 6 Proposed SNN structure.
The resting voltages are set to be −65 mV and −60 mV for the two hidden layers, respectively. The membrane potential thresholds are set to be −52 mV and −40 mV for the two hidden layers, respectively. The Brian [
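As an illustration of how such a model can be expressed in the Brian 2 simulator, the sketch below defines one conductance-based hidden layer per (4) and (5); apart from the −65 mV resting voltage and −52 mV threshold quoted above, all parameter values are assumptions:

```python
from brian2 import (NeuronGroup, PoissonGroup, Synapses, Network,
                    mV, ms, Hz, second)

# One conductance-based LIF layer, cf. (4) and (5).
eqs = '''
dv/dt  = ((v_rest - v) + ge*(E_exc - v) + gi*(E_inh - v)) / tau : volt
dge/dt = -ge / tau_ge : 1
dgi/dt = -gi / tau_gi : 1
'''
params = dict(v_rest=-65*mV, E_exc=0*mV, E_inh=-100*mV,
              tau=100*ms, tau_ge=1*ms, tau_gi=2*ms)

inputs = PoissonGroup(784, rates=15*Hz)        # stand-in pixel encoders
hidden = NeuronGroup(400, eqs, threshold='v > -52*mV',
                     reset='v = v_rest', method='euler', namespace=params)
hidden.v = -65*mV

syn = Synapses(inputs, hidden, model='w : 1', on_pre='ge += w')
syn.connect()                                  # "all-to-all" connectivity
syn.w = 'rand() * 0.3'

Network(inputs, hidden, syn).run(0.35*second)
```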
We set the time for a single simulation to be 0.35 s. Note that this means the STDP learning is performed for 0.35 s for each single V-I trajectory image instance, and the total simulation time depends on the number of instances in the training set. As previously mentioned, a fraction of the data is used as the training data. To increase the prediction accuracy of the SNN, we repeat the instances several times and then shuffle the repeated training data. When all instances in the training dataset have been iterated to update the synaptic weights and conductances, we input only one labeled data instance for each appliance to observe the neurons’ behavior. Each neuron in the output layer is labeled with the appliance type that excites it the most, i.e., with the highest spike rate. For example, when the 11 labeled data are input into the SNN, if the first neuron in the output layer finds that AC yields the highest output spike rate, this neuron is marked as AC type. In the testing, when an unknown instance is input into the SNN, the output spike rates of the neurons of the same type are averaged. Then the neuron type with the highest average spike rate is selected as the classified appliance type.
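The read-out just described can be sketched as follows. Here `spike_rates` is a stand-in for running the trained SNN on one image and returning the output-layer spike rates; the appliance list follows Fig. 3:

```python
import numpy as np

APPLIANCES = ["AC", "CFL", "Fan", "Fridge", "Hairdryer", "Heater",
              "ILB", "Laptop", "Microwave", "Vacuum", "WM"]

def label_neurons(labeled_instances, spike_rates):
    """One labeled instance per appliance; each output neuron takes the
    type that excites it the most."""
    types = list(labeled_instances)
    rates = np.array([spike_rates(labeled_instances[t]) for t in types])
    return {n: types[rates[:, n].argmax()] for n in range(rates.shape[1])}

def classify(image, neuron_labels, spike_rates):
    """Average the spike rates of same-type neurons, pick the highest."""
    rates = spike_rates(image)
    avg = {t: np.mean([rates[n] for n, lbl in neuron_labels.items()
                       if lbl == t])
           for t in set(neuron_labels.values())}
    return max(avg, key=avg.get)
```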
The testing data contains the remaining portion of the PLAID dataset. The overall accuracy $Acc$ is computed by:

$$Acc = \frac{N_{correct}}{N_{total}} \quad (7)$$

where $N_{correct}$ and $N_{total}$ are the numbers of correct classifications and total classifications, respectively.
As a result, the classification of the test data reports a high overall accuracy. The confusion matrix of the test data is given in Fig. 7.

Fig. 7 Confusion matrix of test results.

Fig. 8 Computation of accuracy for each type of appliance.

Fig. 9 Synaptic weight matrix of synapses between input layer and hidden layer, where each synaptic weight pattern matrix has learned the V-I trajectory of one type of appliance.

Fig. 10 Accuracy plot with respect to training dataset size.
Next, the results are analyzed using the benchmark evaluation indices and then compared with traditional ANN methods. According to [
$$P = \frac{TP}{TP + FP} \quad (8)$$

$$R = \frac{TP}{TP + FN} \quad (9)$$

$$F_1 = \frac{2PR}{P + R} \quad (10)$$

where $TP$, $TN$, $FP$, and $FN$ represent the numbers of true positive, true negative, false positive, and false negative classifications, respectively.
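For reference, the per-class scores in (8)-(10) can be read directly off a confusion matrix. The sketch below assumes the rows-are-true, columns-are-predicted convention:

```python
import numpy as np

def prf_scores(cm):
    """Per-class precision, recall, and F1 from a confusion matrix,
    following (8)-(10). Rows = true class, columns = predicted class."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

cm = np.array([[8, 1], [2, 9]])   # toy 2-class example
print(prf_scores(cm))
```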
The precision, recall, and F-score are computed and listed in

Fig. 11 Average F1-score of SNN and CNN.
Finally, we compare the performance of the proposed algorithm with that of the transfer learning (TL) algorithm [
Combined with the previous analysis, it can be observed that the SNN method not only outperforms the traditional ANN in classification accuracy for most appliances, but is also especially useful in practical NILM scenarios where very few labeled data are provided. Moreover, the SNN is faster and can easily be implemented with low-cost electronic components.
In this paper, a novel SNN-based NILM algorithm is proposed along with a distributed computation scheme to better embed the NILM functionality in a practical load demand analysis system, i.e., the Grid Sense system. The proposed NILM algorithm utilizes the biology-inspired SNN and features an unsupervised learning scheme named STDP. To learn a new household model, the user only needs to label one data instance for each appliance, which costs significantly less effort than training traditional ANNs. Moreover, the nature of the SNN makes it extremely easy to implement in low-cost hardware, where an RC circuit can represent each neuron. With this setup, an expensive computation-intensive unit, i.e., a GPU, is no longer needed for smart outlets, so that massive deployment in households becomes possible. Another advantage offered by the hardware-friendly representation of the SNN is the distributed computation scheme. Instead of the digitized analog signal in most ANNs, the signal of the SNN flows as a sparse binary vector, i.e., the spike train. As a result, the SNN layers can easily be separated into different devices to access high-sampling-rate measurements without powerful communication equipment. The experiments are conducted on the PLAID dataset to show that the proposed SNN method can accurately classify the appliances, with performance better than that of the traditional CNN and other algorithms. Specifically, the proposed unsupervised SNN algorithm has been compared with other popular deep-learning-based algorithms such as the AlexNet-TL [
REFERENCES
Y. Kabalci, “A survey on smart metering and smart grid communication,” Renewable and Sustainable Energy Reviews, vol. 57, pp. 302-318, May 2016.
J. Duan, C. Wang, H. Xu et al., “Distributed control of inverter-interfaced microgrids based on consensus algorithm with improved transient performance,” IEEE Transactions on Smart Grid, vol. 10, no. 2, pp. 1303-1312, Mar. 2017.
Z. Zhou, Y. Xiang, H. Xu et al., “Self-organizing probability neural network based intelligent non-intrusive load monitoring with applications to low-cost residential measuring devices,” Transactions of the Institute of Measurement and Control, vol. 43, no. 3, pp. 635-645, Sept. 2020.
A. S. Bouhouras, P. A. Gkaidatzis, K. C. Chatzisavvas et al., “Load signature formulation for non-intrusive load monitoring based on current measurements,” Energies, vol. 10, no. 4, p. 538, Apr. 2017.
J. Kelly and W. Knottenbelt, “Neural NILM: deep neural networks applied to energy disaggregation,” in Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-efficient Built Environments, Seoul, South Korea, Nov. 2015, pp. 55-64.
S. Makonin, F. Popowich, I. V. Bajić et al., “Exploiting HMM sparsity to perform online real-time nonintrusive load monitoring,” IEEE Transactions on Smart Grid, vol. 7, no. 6, pp. 2575-2585, Nov. 2015.
N. Batra, A. Singh, and K. Whitehouse. (2015, Oct.). Neighbourhood NILM: a big-data approach to household energy disaggregation. [Online]. Available: https://arxiv.org/abs/1511.02900v1
O. Krystalakos, C. Nalmpantis, and D. Vrakas, “Sliding window approach for online energy disaggregation using artificial neural networks,” in Proceedings of the 10th Hellenic Conference on Artificial Intelligence, Patras, Greece, Jul. 2018, pp. 1-6.
J. M. Gillis, S. M. Alshareef, and W. G. Morsi, “Nonintrusive load monitoring using wavelet design and machine learning,” IEEE Transactions on Smart Grid, vol. 7, no. 1, pp. 320-328, Jan. 2015.
Z. Zhou, Y. Xiang, H. Xu et al., “A novel transfer learning-based intelligent nonintrusive load-monitoring with limited measurements,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-10, Jul. 2020.
Y. Xiang, X. Lu, Z. Yu et al., “IoT and edge computing based direct load control for fast adaptive frequency regulation,” in Proceedings of IEEE PES General Meeting, Atlanta, USA, Aug. 2019, pp. 1-5.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proceedings of the Advances in Neural Information Processing Systems, Reno, USA, Dec. 2012, pp. 1097-1105.
K. Ehsani, H. Bagherinezhad, J. Redmon et al., “Who let the dogs out? Modeling dog behavior from visual data,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, Jun. 2018, pp. 4051-4060.
H. Jahangir, H. Tayarani, S. Baghali et al., “A novel electricity price forecasting approach based on dimension reduction strategy and rough artificial neural networks,” IEEE Transactions on Industrial Informatics, vol. 16, no. 4, pp. 2369-2381, Apr. 2019.
M. Bouvier, A. Valentian, T. Mesquida et al., “Spiking neural networks hardware implementations and challenges: a survey,” ACM Journal on Emerging Technologies in Computing Systems, vol. 15, no. 2, pp. 1-35, Apr. 2019.
S. Dutta, V. Kumar, A. Shukla et al., “Leaky integrate and fire neuron by charge-discharge dynamics in floating-body MOSFET,” Scientific Reports, vol. 7, no. 1, pp. 1-7, Aug. 2017.
B. Das, J. Schulze, and U. Ganguly, “Ultra-low energy LIF neuron using Si NIPIN diode for spiking neural networks,” IEEE Electron Device Letters, vol. 39, no. 12, pp. 1832-1835, Dec. 2018.
J. L. Lobo, J. Del Ser, A. Bifet et al., “Spiking neural networks and online learning: an overview and perspectives,” Neural Networks, vol. 121, pp. 88-100, Sept. 2019.
Z. Bing, C. Meschede, F. Röhrbein et al., “A survey of robotics control based on learning-inspired spiking neural networks,” Frontiers in Neurorobotics, vol. 12, p. 35, Jul. 2018.
Wikipedia. (2020, Apr.). Neuron. [Online]. Available: https://en.wikipedia.org/wiki/Neuron/
Y. Liu, X. Wang, and W. You, “Non-intrusive load monitoring by voltage-current trajectory enabled transfer learning,” IEEE Transactions on Smart Grid, vol. 10, no. 5, pp. 5609-5619, Dec. 2018.
D. F. Teshome, T. Huang, and K.-L. Lian, “Distinctive load feature extraction based on Fryze’s time-domain power theory,” IEEE Power and Energy Technology Systems Journal, vol. 3, no. 2, pp. 60-70, Apr. 2016.
L. Du, D. He, R. G. Harley et al., “Electric load classification by binary voltage-current trajectory mapping,” IEEE Transactions on Smart Grid, vol. 7, no. 1, pp. 358-365, Jun. 2015.
W. Gerstner, W. M. Kistler, R. Naud et al., Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge: Cambridge University Press, 2014.
J. Hahne, D. Dahmen, J. Schuecker et al., “Integration of continuous-time dynamics in a spiking neural network simulator,” Frontiers in Neuroinformatics, vol. 11, p. 34, May 2017.
P. U. Diehl and M. Cook, “Unsupervised learning of digit recognition using spike-timing-dependent plasticity,” Frontiers in Computational Neuroscience, vol. 9, p. 99, Aug. 2015.
W. Kong, Z. Y. Dong, B. Wang et al., “A practical solution for non-intrusive type II load monitoring based on deep learning and post-processing,” IEEE Transactions on Smart Grid, vol. 11, no. 1, pp. 148-160, May 2019.
S. Barker, M. Musthag, D. Irwin et al., “Non-intrusive load identification for smart outlets,” in Proceedings of 2014 IEEE International Conference on Smart Grid Communications (SmartGridComm), Venice, Italy, Nov. 2014, pp. 548-553.
J. Gao, S. Giri, E. C. Kara et al., “PLAID: a public dataset of high-resolution electrical appliance measurements for load identification research: demo abstract,” in Proceedings of the 1st ACM Conference on Embedded Systems for Energy-efficient Buildings, New York, USA, Nov. 2014, pp. 198-199.
M. Stimberg, R. Brette, and D. F. Goodman, “Brian 2, an intuitive and efficient neural simulator,” eLife, vol. 8, p. e47314, Aug. 2019.
T. Iakymchuk, A. Rosado-Muñoz, J. F. Guerrero-Martinez et al., “Simplified spiking neural network architecture and STDP learning algorithm applied to image classification,” EURASIP Journal on Image and Video Processing, vol. 2015, no. 1, p. 4, Feb. 2015.
M. Mozafari, S. R. Kheradpisheh, T. Masquelier et al., “First-spike-based visual categorization using reward-modulated STDP,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 12, pp. 6178-6190, May 2018.
L. de Baets, J. Ruyssinck, C. Develder et al., “Appliance classification using VI trajectories and convolutional neural networks,” Energy and Buildings, vol. 158, pp. 32-36, Jan. 2018.
J. Gao, E. C. Kara, S. Giri et al., “A feasibility study of automated plug-load identification from high-frequency measurements,” in Proceedings of 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, USA, Dec. 2015, pp. 220-224.