Cevher, Volkan
Leblebici, Yusuf
Krawczuk, Igor
2024-07-02
2024
10.5075/epfl-thesis-9020
https://infoscience.epfl.ch/handle/20.500.14299/208954

Modern integrated circuits are tiny yet incredibly complex technological artifacts, composed of millions to billions of individual structures working in unison. Managing their complexity and facilitating their design drove part of the co-evolution of modern computational machinery, with advances in compute enabling advances in computational techniques and vice versa. Graphical representations and their sparsity-based efficiency have a long history in this co-evolution. This thesis presents results from over four years of work towards the development of generative ML methods that can serve as a building block in the next generation of end-to-end learning-based electronic circuit design tools. A key challenge herein has been scalability with respect to graph size. At the outset, the best methods could learn models of graphs with at most dozens of nodes, while even a toy electronic circuit easily comprises hundreds of nodes. The sheer discrepancy between the capabilities of that state of the art and the requirements of even toy examples forced us to keep the theoretical scalability of our methods in mind, motivating a strict adherence to permutation-equivariant methods so as to avoid both the combinatorial explosion of node relabelings and reliance on heuristic node orderings. Our first contribution was the development of GG-GAN, an equivariant generative adversarial network for graphs and, at the time, the first purely equivariant GAN. As part of this work we empirically isolated a key issue previously hindering purely equivariant graph generators - the collision problem - and showed how to alleviate it without losing permutation equivariance.
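The permutation-equivariance property that motivates these design choices can be illustrated with a minimal sketch. The toy `gnn_layer` below is purely illustrative and not the thesis's architecture: a message-passing layer built from the adjacency matrix and a shared weight matrix commutes with any relabeling of the nodes, so no node ordering is ever baked into the model.

```python
import numpy as np

def gnn_layer(A, X, W):
    # One simple message-passing step: aggregate neighbor features via the
    # adjacency matrix, then apply a shared linear map and a nonlinearity.
    # Illustrative only; not the model proposed in the thesis.
    return np.tanh((A @ X) @ W)

rng = np.random.default_rng(0)
n, d = 6, 4
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                              # symmetric adjacency, no self-loops
X = rng.standard_normal((n, d))          # node features
W = rng.standard_normal((d, d))          # shared (node-order-independent) weights

P = np.eye(n)[rng.permutation(n)]        # random permutation matrix

# Equivariance: relabeling the nodes before the layer gives the same result
# as relabeling its output, i.e. f(P A P^T, P X) == P f(A, X).
lhs = gnn_layer(P @ A @ P.T, P @ X, W)
rhs = P @ gnn_layer(A, X, W)
assert np.allclose(lhs, rhs)
```

Because the learned parameters `W` are shared across nodes, the layer never has to see all `n!` relabelings of a graph during training, which is exactly the combinatorial explosion that non-equivariant generators face.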
This method improved on the state of the art in both modeling quality and scalability, being one of the earliest works to scale to hundreds of nodes and, in particular, being fast at inference time. However, being based on a GAN, the model does not admit an easily computable likelihood function, a property that may be desirable, for example, to parametrize, evaluate, and guide reinforcement-learning-based agents. The following chapter presents DiGress, another permutation-equivariant generative model, based on discrete denoising diffusion models, which does afford this capability and also massively improves on the modeling fidelity of GG-GAN. This comes, however, at the cost of massively increased generation latency and of an asymptotic complexity that grows quadratically rather than linearly with graph size. The mitigation of these issues is the final methodological contribution of this thesis. By leveraging the hierarchical community structure of many common types of graphs, and especially of electronic circuits with their grouped functionalities, we can drastically improve both the theoretical complexity and the horizontal scalability. The new method closely retains the expressive power of DiGress while being orders of magnitude faster at both training and inference. Together, the work presented in this thesis paves part of the way towards fully learning-based electronic circuit design tools by providing them with efficient parametrizations for both sampling-based and likelihood-based algorithms that scale linearly or close to linearly in graph size, removing a massive barrier to the application of these methods in electronic circuit design tools.

en

Permutation Equivariance; Modularity; Graph Generation; Denoising Diffusion Models; Generative Adversarial Networks; EDA; Scalability; Graph Neural Networks; Topology Generation; Deep Learning; Circuit Generation

Graph generative deep learning models with an application to circuit topologies

thesis::doctoral thesis