Present-day nanoscale integrated circuit (IC) technologies allow billions of transistors to be integrated into a single chip, while almost every key subsystem of a modern chip design, such as the clock distribution network, the power delivery network (PDN), embedded memory arrays, and analog and mixed-signal systems, may reach an unprecedented complexity of hundreds of millions of components. As a result, it becomes extremely difficult, or even intractable, to model, simulate, optimize, and verify future nanoscale ICs at large scale with existing design methodologies. On the other hand, recent results (a.k.a. the Laplacian Paradigm) from the theoretical computer science and applied mathematics communities show that spectral graph sparsification can construct almost-linear-sized subgraphs (graph sparsifiers) that very well approximate the spectrum of the original graph, which can immediately lead to much faster numerical and graph-based algorithms. For instance, spectrally sparsified transportation networks enable much faster navigation algorithms for large transportation systems, spectrally sparsified data networks allow data to be stored, partitioned, and clustered more economically, and spectrally sparsified circuit networks allow large circuit systems to be simulated, optimized, and verified more efficiently. However, designing practically efficient spectral sparsification algorithms for large-scale, real-world graphs (networks) still poses extremely challenging problems.
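The core guarantee above, that an almost-linear-sized sparsifier preserves the spectrum of the original graph Laplacian, can be sketched with a toy effective-resistance sampling experiment in the style of the Spielman-Srivastava scheme. Everything here is illustrative: the graph size, the sample budget, and the dense pseudoinverse used to get exact effective resistances (the nearly-linear-time algorithms approximate these quantities without any dense linear algebra).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random graph (sizes are illustrative; real designs have millions of nodes).
n = 80
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.4]
w = np.ones(len(edges))

def laplacian(n, edges, weights):
    """Dense graph Laplacian L = D - A for a weighted edge list."""
    L = np.zeros((n, n))
    for (i, j), wt in zip(edges, weights):
        L[i, i] += wt; L[j, j] += wt
        L[i, j] -= wt; L[j, i] -= wt
    return L

L = laplacian(n, edges, w)

# Exact effective resistances via the pseudoinverse -- fine at toy scale only.
Lp = np.linalg.pinv(L)
R = np.array([Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j] for i, j in edges])

# Sample q edges with probability proportional to w_e * R_e, then reweight the
# kept edges so the sparsifier is an unbiased estimate of L.
p = (w * R) / np.sum(w * R)
q = int(3 * n * np.log(n))        # sample budget; the constant is illustrative
counts = rng.multinomial(q, p)
keep = counts > 0
H = laplacian(n, [e for e, k in zip(edges, keep) if k],
              counts[keep] / (q * p[keep]))

print(f"kept {int(keep.sum())} of {len(edges)} edges")
```

The reweighting makes each kept edge's weight an unbiased estimator of the original, so quadratic forms x'Hx track x'Lx for all vectors x, which is exactly the spectral-approximation property the sparsified networks rely on.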
In this talk, I will first introduce our recent work on a practically efficient, nearly-linear-complexity spectral graph sparsification approach for aggressively sparsifying large graph Laplacian matrices, integrated circuits, and data networks. Next, I will discuss how to leverage these latest results to develop nearly-linear-time numerical and graph-based algorithms for solving large partial differential equation (PDE) and sparse matrix problems, for design automation of future nanoscale ICs, and for graph partitioning and data clustering of large data networks. Lastly, I will discuss how graph sparsification techniques can be leveraged to optimally solve large PDE and sparse matrix problems on emerging heterogeneous parallel computing platforms.
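One concrete way sparse subgraphs accelerate PDE and sparse matrix solvers is as preconditioners for iterative methods. The sketch below, a toy under illustrative assumptions (a small 2D grid Laplacian standing in for a discretized PDE, a simple "comb" spanning tree as the crudest possible subgraph preconditioner, and a dense inverse that a real solver would replace with a fast tree solve), runs preconditioned conjugate gradient on the grounded system:

```python
import numpy as np

def grid_laplacian(k, edges):
    """Dense Laplacian of a k*k grid graph restricted to the given edge set."""
    L = np.zeros((k * k, k * k))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

k = 10
idx = lambda r, c: r * k + c
grid_edges = [(idx(r, c), idx(r, c + 1)) for r in range(k) for c in range(k - 1)] \
           + [(idx(r, c), idx(r + 1, c)) for r in range(k - 1) for c in range(k)]
# "Comb" spanning tree: the first column plus every horizontal edge.
tree_edges = [(idx(r, 0), idx(r + 1, 0)) for r in range(k - 1)] \
           + [(idx(r, c), idx(r, c + 1)) for r in range(k) for c in range(k - 1)]

# Ground node 0 (delete its row/column) so both Laplacians are positive definite.
A = grid_laplacian(k, grid_edges)[1:, 1:]
Minv = np.linalg.inv(grid_laplacian(k, tree_edges)[1:, 1:])  # toy-scale only

def pcg(A, b, apply_Minv, tol=1e-8, maxit=1000):
    """Preconditioned conjugate gradient; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_Minv(r)
    p, rz = z.copy(), r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = apply_Minv(r)
        rz, beta = r @ z, (r @ z) / rz   # beta uses the previous rz
        p = z + beta * p
    return x, maxit

rng = np.random.default_rng(1)
b = rng.standard_normal(A.shape[0])
x_plain, it_plain = pcg(A, b, lambda r: r)        # unpreconditioned CG
x_tree, it_tree = pcg(A, b, lambda r: Minv @ r)   # tree-preconditioned CG
print(f"CG iterations: {it_plain} plain vs {it_tree} tree-preconditioned")
```

A bare spanning tree gives only a weak preconditioner; the point of spectral sparsification is that adding a few carefully sampled off-tree edges yields subgraphs whose preconditioned systems have provably small condition numbers, which is what drives the nearly-linear-time solver results.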