Infoscience
research article

Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks

Qian, Junkai • Jiang, Yuning • Liu, Xin • Wang, Qiong • Wang, Ting • Shi, Yuanming • Chen, Wei
February 1, 2024
IEEE Internet of Things Journal

With the growing popularity of electric vehicles (EVs), maintaining power grid stability has become a significant challenge. To address this issue, EV charging control strategies have been developed to manage the switch between vehicle-to-grid (V2G) and grid-to-vehicle (G2V) modes for EVs. In this context, multiagent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control. However, existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network and ignore driver privacy. To deal with these problems, this article proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow (OPF) to distribute power flow in real time. A mathematical model is developed to describe the RDN load. The EV charging control problem is formulated as a Markov decision process (MDP) to find an optimal charging control strategy that balances V2G profits, RDN load, and driver anxiety. To effectively learn the optimal EV charging control strategy, a federated deep reinforcement learning algorithm named FedSAC is further proposed. Comprehensive simulation results demonstrate the effectiveness and superiority of our proposed algorithm in terms of the diversity of the charging control strategy, the power fluctuations on RDN, the convergence efficiency, and the generalization ability.
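The record contains no code, but the abstract outlines two concrete ingredients: a per-EV reward that trades off V2G profit, RDN load, and driver anxiety, and federated aggregation of locally trained policies (FedSAC). The Python sketch below is purely illustrative and is not the authors' implementation; the weighting coefficients, function names, and the FedAvg-style parameter averaging are assumptions made for illustration only.

```python
# Hedged sketch (not the paper's code): an illustrative per-EV reward and one
# round of FedAvg-style aggregation over locally trained policy parameters,
# showing how a FedSAC-like scheme could combine per-agent updates without
# sharing driver data. All names and weights below are assumptions.

import numpy as np

def local_reward(v2g_profit, rdn_load_penalty, anxiety_penalty,
                 w_profit=1.0, w_load=0.5, w_anxiety=0.5):
    """Illustrative scalar reward balancing V2G profit, RDN load, and driver anxiety."""
    return w_profit * v2g_profit - w_load * rdn_load_penalty - w_anxiety * anxiety_penalty

def federated_average(local_params, weights=None):
    """Average per-agent parameter vectors (FedAvg-style) into a global model."""
    local_params = [np.asarray(p, dtype=float) for p in local_params]
    if weights is None:
        weights = np.ones(len(local_params)) / len(local_params)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * p for w, p in zip(weights, local_params))

if __name__ == "__main__":
    # Three EV agents report locally updated actor parameters (stand-in vectors);
    # the server averages them and would broadcast the result for the next round.
    agent_params = [np.random.randn(8) for _ in range(3)]
    global_params = federated_average(agent_params)
    print("aggregated parameters:", global_params)
    print("example reward:",
          local_reward(v2g_profit=2.0, rdn_load_penalty=1.2, anxiety_penalty=0.4))
```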

Type
research article
DOI
10.1109/JIOT.2023.3306826
Web of Science ID
WOS:001166992300085
Author(s)
Qian, Junkai
Jiang, Yuning  
Liu, Xin
Wang, Qiong
Wang, Ting
Shi, Yuanming
Chen, Wei
Date Issued
2024-02-01
Publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
Published in
IEEE Internet of Things Journal
Volume
11
Issue
3
Start page
5511
End page
5525
Subjects
  • Technology
  • Electric Vehicle (EV)
  • Federated Learning (FL)
  • Optimal Power Flow (OPF)
  • Reinforcement Learning
  • Vehicle-to-Grid (V2G)

Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
LA3
Funder / Grant Number
Natural Science Foundation of Shanghai
Available on Infoscience
April 17, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/207143