Automated Performance Tracking of Post-Deployment Sites

Session: Rapid-Fire Introductions by Poster Presenters
Date: Tuesday, October 15, 2019 / 4:05PM - 4:35PM PDT
Tags: Poster / Modeling and Analysis

At Enel X, one of our main products is DER.OS, a battery control and optimization software platform that we deploy to the sites in our portfolio. Prior to deployment, we predict a certain level of performance from our software and a corresponding level of savings for our customers. This is typically done by modeling how the site would behave with the addition of an Energy Storage System (ESS) and DER.OS; combined with the applicable electricity rate information, this model estimates the savings to the customer from actions such as demand-capping and energy arbitrage. However, these expected savings may not be fully realized in production for a variety of reasons, such as dirty data, interruptions in communication between microservices, human error, hardware faults, or software performance issues. Hence, there is a real need for greater visibility into the performance of our sites after our production software is deployed.
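As a rough illustration of the kind of pre-deployment estimate described above, the sketch below combines a demand-capping term (peak shaving against a demand charge) with a simple arbitrage term (charge off-peak, discharge on-peak). The function name, tariff structure, and parameters are illustrative assumptions, not the actual DER.OS model, which simulates full time-series dispatch.

```python
def estimate_monthly_savings(demand_kw, ess_power_kw, ess_energy_kwh,
                             demand_charge_per_kw, on_peak_price,
                             off_peak_price, round_trip_eff=0.85,
                             cycles_per_day=1, days=30):
    """Back-of-the-envelope savings from demand-capping plus arbitrage.

    A hypothetical sketch: real dispatch models would simulate the ESS
    state of charge against the full demand time series and tariff.
    """
    # Demand-capping: the battery shaves the monthly peak by up to its
    # power rating (assumes enough stored energy to sustain the shave).
    original_peak = max(demand_kw)
    capped_peak = max(original_peak - ess_power_kw, 0.0)
    demand_savings = (original_peak - capped_peak) * demand_charge_per_kw

    # Energy arbitrage: buy energy off-peak, sell (avoid buying) on-peak,
    # losing some throughput to round-trip inefficiency.
    per_cycle = ess_energy_kwh * (round_trip_eff * on_peak_price
                                  - off_peak_price)
    arbitrage_savings = max(per_cycle, 0.0) * cycles_per_day * days

    return demand_savings + arbitrage_savings
```

For example, a 100 kW / 200 kWh ESS at a site with a 510 kW peak, an $18/kW demand charge, and a $0.30 vs. $0.10 on/off-peak spread would show both terms contributing to the estimate.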

Post-deployment ESS behavior can be accurately simulated using a combination of actual time-series data of the onsite demand (and any applicable photovoltaic systems), the demand forecasts received by the ESS control system, electricity rate information, and the configuration of the onsite ESS. Additionally, by simulating the control system with demand forecasts that are 100% accurate, we can determine the theoretical maximum savings for a given site during a specified time period. The expected savings from these simulations can then be compared to the actual savings received by the customer to gain useful insights. For example, a significant difference between the theoretical maximum savings and actual savings indicates a potential issue with the load forecasting algorithm or a communication failure onsite.

Traditionally, this process was performed manually, and thus was time-consuming, prone to human error, and not scalable. To address this, we have developed an automated performance tracking tool that enables us to simulate and analyze savings with the addition of an ESS across our production sites, and to identify potential red flags or areas for improvement. This performance tracker is a Python-based tool that aggregates high-fidelity, real-time data from each site in our portfolio, performs the relevant simulations to assess each site's expected behavior, and presents the information in a concise manner. In our usage, this tool has significantly reduced the time to actionable insights by automating several cumbersome steps in post-deployment performance tracking.

In this presentation, we will first present some of the challenges in post-deployment site monitoring and explain the workings of the performance tracker. We will then discuss real-world examples where we have gleaned useful and actionable insights, using data from sites where DER.OS has been deployed. We will also touch upon other functionality in the performance tracker that has aided in the improvement of our control and optimization software.