Received the B.S. degree in Microelectronics Science and Engineering in 2018 and the Ph.D. degree in Microelectronics and Solid-State Electronics in 2024, both from the School of Microelectronics, Fudan University. Research interests include: compute-in-memory chip design for AI large-model acceleration, scalable 2.5D/3D chiplet-based chip and system architectures, and low-power brain-inspired computing chip design. Has published more than ten papers as first or corresponding author at top conferences and in leading journals in the integrated-circuit field, including ISSCC, CICC, DAC, ICCAD, TCAS-I, and TVLSI. Students interested in integrated circuit design and deep learning algorithms are welcome to join the group.
Homepage: shiweiliu-ai.github.io
Publications:
[1] Guanchen Tao, Junyi Luo, Shiwei Liu*, Anhang Li, Gregory Kielian, Kauna Lei, Qirui Zhang, Dennis Sylvester, and Mehdi Saligane, “An 11.16μJ/token Edge SLM Decoder Accelerator with Scalable Ring-based Configuration for Token-level Pipelining in 16nm FinFET,” in IEEE Custom Integrated Circuits Conference (CICC), 2026. (*Corresponding author).
[2] Hao Jiang, Zexing Chen, Jiajun Lu, Siqi He, Liangjian Lyu, Jiamin Xu, Jianguo Yang, Shiwei Liu*, Yingping Chen, Chixiao Chen, Qi Liu, Ming Liu, “A 1024-Ch 583-nW/Ch Spike-Sorting SoC With Sparsity-Aware Spike Detection Scratchpad and Ultra-Low-Leakage Dual-Voltage 5T-SRAM for 16K-Template Clustering,” in IEEE Transactions on Circuits and Systems I: Regular Papers, 2026. (*Corresponding author).
[3] Shiwei Liu*, Jiangnan Yu, Peizhe Li, Feng Lin, Chixiao Chen, “Scalable Sparse Transformer Accelerator with In-Memory Butterfly Zero Skipper and Local Attention Reusable Engine for Semi-Structured-Pruned NN,” in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2026. (*First author).
[4] Shiwei Liu*, Zhirui Huang, Jiangnan Yu, Qi Liu, Chixiao Chen, “McPAL: Scaling Unstructured Sparse Inference with Multi-Chiplet HBM-PIM Architecture for LLMs,” in ACM/IEEE Design Automation Conference (DAC), 2025. (*First author).
[5] Shiwei Liu*, Guanchen Tao, Yifei Zou, Derek Chow, Zichen Fan, Kauna Lei, Bangfei Pan, Dennis Sylvester, Gregory Kielian, Mehdi Saligane, “ConSmax: Hardware-Friendly Alternative Softmax with Learnable Parameters,” in IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2024. (*First author).
[6] Shiwei Liu*, Chen Mu, Hao Jiang, Yunzhengmao Wang, Jinshan Zhang, Feng Lin, Keji Zhou, Qi Liu, Chixiao Chen, “HARDSEA: Hybrid Analog-ReRAM Clustering and Digital-SRAM In-Memory Computing Accelerator for Dynamic Sparse Self-Attention in Transformer,” in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 32, no. 2, pp. 269-282, 2024. (*First author).
[7] Shiwei Liu*, Peizhe Li, Jinshan Zhang, Yunzhengmao Wang, Haozhe Zhu, Wenning Jiang, Shan Tang, Chixiao Chen, Qi Liu, Ming Liu, “A 28nm 53.8TOPS/W 8b Sparse Transformer Accelerator with In-Memory Butterfly Zero Skipper for Unstructured-Pruned NN and CIM-Based Local-Attention-Reusable Engine,” in IEEE International Solid-State Circuits Conference (ISSCC), 2023. (*First author).