Learning to Run a Power Network

Task introduction & significance

Power grids are the backbone of modern society; reliable access to electricity is both a sustainable development goal and an index for measuring standard of living. For the grid to function safely, operators need to maintain a balance between generation and demand while respecting numerous system and timing constraints. Grid operators place a large emphasis on maintaining the stability and reliability of the grid while trying to minimise the associated cost of generation.

With growing concern over climate change, there is a need to rethink how we generate electricity and operate our grids. The increased usage of renewable energy, rising electricity demand, and additional objectives make the task of controlling the grid increasingly difficult. This growing difficulty has led to interest in machine-learning approaches to operate, or assist in operating, the grid.

The “Learning to Run a Power Network” (L2RPN) challenge is a series of competitions proposed by Kelly et al. (2020) [1, 2] with the aim of testing the potential of reinforcement learning to control electrical power transmission. In 2020, one such competition was run at the IEEE World Congress on Computational Intelligence (WCCI). The winners published their novel approach, combining a semi-MDP with an afterstate representation, at ICLR 2021 and made their implementation publicly available [3]. The competition is recurring, with the 2022 edition currently ongoing.

While the full L2RPN challenge is likely much too ambitious for a two-day hackathon, the aim here is to work on a simpler power grid, familiarise participants with the Grid2Op package, develop interesting agent designs they might expand on later, and foster collaboration between members of the ATI community.

The following notebooks will bring you up to the baseline familiarity likely needed for this task:

  1. Introduction
  2. Toy Example
  3. Grid2Op Framework
  4. Observations
  5. Actions
  6. Training an Agent
  7. Studying an Agent

Once familiar with the baseline knowledge, groups can continue to develop agents in the “l2rpn_case14_sandbox” configuration of the environment. The hackathon will end with a presentation from the different groups on their agent design and performance.

Helpful tools & resources

Running the project locally

We have tested running the project locally with Python 3.8.10, using the following requirements file.
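A typical local setup might look like the sketch below. The virtual-environment name is arbitrary, and the exact pinned versions live in the requirements file; installing `grid2op` directly is shown only as a fallback assumption if you do not have that file to hand.

```shell
# Create and activate an isolated environment (name is arbitrary).
python3 -m venv l2rpn-env
source l2rpn-env/bin/activate

# Preferred: install the pinned dependencies from the provided file.
pip install -r requirements.txt

# Fallback sketch: grid2op is the core package the notebooks rely on.
pip install grid2op
```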

References

  1. Marot, Antoine, Isabelle Guyon, Benjamin Donnot, Gabriel Dulac-Arnold, Patrick Panciatici, Mariette Awad, Aidan O’Sullivan, Adrian Kelly, and Zigfried Hampel-Arias. "L2RPN: Learning to Run a Power Network in a sustainable world, NeurIPS 2020 challenge design." Réseau de Transport d’Électricité, Paris, France, White Paper (2020).
  2. Kelly, Adrian, Aidan O'Sullivan, Patrick de Mars, and Antoine Marot. "Reinforcement learning for electricity network operation." arXiv preprint arXiv:2003.07339 (2020).
  3. Yoon, Deunsol, Sunghoon Hong, Byung-Jun Lee, and Kee-Eung Kim. "Winning the L2RPN challenge: Power grid management via semi-Markov afterstate actor-critic." In International Conference on Learning Representations (2021).