News
2025
Congratulations to Lizi Zhang for starting a summer internship with Cadence.
Congratulations to Navid for winning a DAC’25 Richard Newton Young Student Fellowship. The fellowship covers full registration to attend the Design Automation Conference in San Francisco in June and includes a $500 award to cover travel costs.
New paper “FreDDI: Frequency-Driven DNN Partitioning in Distributed Inference” to appear in the proceedings of SMARTCOMP’25. This work formulates distributed inference for deep neural networks (DNNs), treating the frequency selection of an edge/hub/cloud device as an additional control knob that can impact global latency and/or energy. Congratulations to first (and only student) author Robert Viramontes! He has also received a $1600 NSF travel award to present the work in Cork, Ireland, June 16–19.
2024
Our paper titled “ReBERT: LLM for Gate-Level to Word-Level Reverse Engineering” has been nominated for best paper at DATE’25. Congratulations to first author Lizi Zhang, and thanks to our industry collaborator Dr. Rasit Topaloglu. The paper presents an effective way to encode a gate-level circuit given in a hardware description language so that Large Language Models can reverse engineer the higher-level words. We combine different embedding schemes to encode circuit information in a form best inferred by the BERT model, including a novel tree-based positional embedding scheme that encodes the position of each gate within the graph structure of a circuit as a token sequence.
Congratulations to Lizi for winning a $1000 travel award to present his paper at MLCAD’24.
Paper accepted at the IEEE/ACM Symposium on Machine Learning for CAD on the effective use of neural networks for automatic test pattern generation (ATPG) in VLSI. Neural networks are used to reduce the number of backtracks when searching the decision tree of test patterns, thereby reducing the runtime of ATPG. Our results show that neural networks are not effective when applied for backtracing at circuit nodes that are too close to, or too far from, the inputs. We also propose a look-up technique that reuses previously computed inferences to significantly accelerate the runtime. The neural network itself should not be too complex, given the associated inference overhead and the overall objective of reducing ATPG runtime. Congratulations to first (and only student) author Lizi Zhang. He will present the work in Snowbird, Utah, in September 2024. A version of this paper also appeared at the Int’l Workshop on Logic & Synthesis in June 2024.
Congratulations to Robert and Lizi for winning multiple travel awards (over $3500) to present their research at DAC, SmartComp, and MLSys (Young Professional Symposium) in San Francisco, Japan, and Santa Clara, respectively.