Compressive NILM & Bayesian Surprise

I am excited to announce two new NILM (Non-Intrusive Load Monitoring) papers that will be presented: one at BuildSys and the other at the International NILM Workshop. Here are the abstracts of, and links to, each paper:

Compressive NILM

In non-intrusive load monitoring (NILM), an increase in sampling frequency translates to capturing unique signal features during transient states, which, in turn, can improve disaggregation accuracy. Smart meters are capable of sampling at a high frequency (typically 20 kHz). However, transmitting these signals continuously would choke the network bandwidth. Given the deployment of millions of smart meters that communicate over a wireless wide-area network (WAN), utilities can only collect power signals at very low frequencies. We propose a compressive sampling (CS) approach: after measurement, the high-frequency power signal from a smart meter is encoded (by a random matrix) into very few samples, making the signal suitable for WAN transmission without choking the network bandwidth. CS guarantees the recovery of the high-frequency signal from the few transmitted samples under certain conditions. This work shows how to simultaneously recover the signal and disaggregate it; hence the name Compressive NILM.

Read the paper here.
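
For readers unfamiliar with compressive sampling, the sketch below illustrates the general idea behind the encoding step described in the abstract: a high-frequency window is multiplied by a random matrix to produce far fewer samples, and the original window is recovered from those samples by exploiting sparsity (here in the DCT domain, via scikit-learn's orthogonal matching pursuit). This is a minimal, hypothetical illustration of generic compressive sampling, not the paper's joint recovery-and-disaggregation algorithm; the signal model, dimensions, and solver are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 512, 64, 8   # window length, compressed samples to transmit, sparsity (toy values)

# Orthonormal DCT basis: columns are DCT-II basis vectors, so x = Psi @ s
# synthesizes a signal from its DCT coefficients s.
t = np.arange(n)
Psi = np.cos(np.pi * (t[:, None] + 0.5) * t[None, :] / n)
Psi[:, 0] *= np.sqrt(1.0 / n)
Psi[:, 1:] *= np.sqrt(2.0 / n)

# Toy "high-frequency" window that is k-sparse in the DCT domain.
s_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
s_true[support] = rng.normal(size=k)
x = Psi @ s_true

# Encoding step: a random matrix maps the n-sample window to m << n samples;
# only y would need to travel over the WAN.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x

# Recovery step: find the sparse DCT coefficients consistent with the m samples
# (orthogonal matching pursuit), then synthesize the high-frequency estimate.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ Psi, y)
x_hat = Psi @ omp.coef_

print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

With these toy dimensions the printed error should be close to zero: 64 random samples are enough to recover a 512-sample window that is 8-sparse in the DCT basis, illustrating the kind of conditions under which CS recovery is guaranteed.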

Bayesian Surprise Training NILM

In Non-Intrusive Load Monitoring (NILM), as in many other machine learning problems, significant computational resources and time are spent training models using as much data as possible. This is perhaps driven by the preconception that more data leads to more accurate models and, eventually, better-performing algorithms. When has enough prior training been done? When has a NILM algorithm encountered new, unseen data? This work applies the notion of Bayesian surprise to answer these important questions for both supervised and unsupervised algorithms. We compare the performance of several NILM algorithms to establish a suggested threshold on two combined measures of surprise: postdictive surprise and transitional surprise. We validate the use of transitional surprise by exploring the performance of a particular Hidden Markov Model as a function of the surprise threshold. Finally, we explore the use of a surprise threshold as a regularization technique to avoid overfitting in cross-house performance. We provide preliminary insights and clear evidence of a point of diminishing returns in model performance with respect to dataset size, which has implications for future model development and dataset acquisition, as well as for model flexibility during deployment.

Read the paper here.
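
To make the idea of Bayesian surprise concrete, the sketch below computes surprise as the KL divergence between the belief held before and after seeing a new batch of observations, using a toy Dirichlet-categorical model of appliance-state frequencies. This is a generic illustration of Bayesian surprise, not the paper's postdictive or transitional surprise measures; the state model, batch size, and threshold are all assumed for the example.

```python
import numpy as np
from scipy.special import digamma, gammaln

def dirichlet_kl(alpha_post, alpha_prior):
    """KL divergence D( Dir(alpha_post) || Dir(alpha_prior) ) in nats."""
    a0, b0 = alpha_post.sum(), alpha_prior.sum()
    return (gammaln(a0) - gammaln(alpha_post).sum()
            - gammaln(b0) + gammaln(alpha_prior).sum()
            + np.dot(alpha_post - alpha_prior, digamma(alpha_post) - digamma(a0)))

rng = np.random.default_rng(1)
n_states = 4                               # hypothetical appliance states, e.g. off/low/medium/high
true_p = np.array([0.6, 0.2, 0.15, 0.05])  # hypothetical state frequencies in the data stream

alpha = np.ones(n_states)  # uninformative Dirichlet belief over state frequencies
threshold = 0.1            # illustrative surprise threshold, not a value from the paper

for batch in range(1, 31):
    counts = rng.multinomial(100, true_p)      # next 100 observed appliance states
    alpha_new = alpha + counts                 # Bayesian update of the belief
    surprise = dirichlet_kl(alpha_new, alpha)  # how far the new batch moved the belief
    print(f"batch {batch:2d}: surprise = {surprise:.4f} nats")
    if surprise < threshold:
        print("surprise below threshold -- additional training data changes the model little")
        break
    alpha = alpha_new
```

As the Dirichlet counts accumulate, each new batch shifts the belief less and the surprise shrinks, which mirrors the point of diminishing returns in model performance that the abstract describes.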
