Full publications can be found here (* indicates equal contribution).
2024
-
Neural architecture search for adversarial robustness via learnable pruning
Yize Li, Pu Zhao, Ruyi Ding, Tong Zhou, and 3 more authors
Frontiers in High Performance Computing, 2024
-
AdaPI: Facilitating DNN model adaptivity for efficient private inference in edge computing
Tong Zhou, Jiahui Zhao, Yukui Luo, Xi Xie, and 3 more authors
In 2024 IEEE/ACM International Conference on Computer Aided Design (ICCAD), 2024
-
Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature
In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024
Text watermarks for large language models (LLMs) have been commonly used to identify the origins of machine-generated content, which is promising for assessing liability when combating deepfake or harmful content. While existing watermarking techniques typically prioritize robustness against removal attacks, unfortunately, they are vulnerable to spoofing attacks: malicious actors can subtly alter the meanings of LLM-generated responses or even forge harmful content, potentially misattributing blame to the LLM developer. To overcome this, we introduce a bi-level signature scheme, Bileve, which embeds fine-grained signature bits for integrity checks (mitigating spoofing attacks) as well as a coarse-grained signal to trace text sources when the signature is invalid (enhancing detectability) via a novel rank-based sampling strategy. Compared to conventional watermark detectors that only output binary results, Bileve can differentiate 5 scenarios during detection, reliably tracing text provenance and regulating LLMs. The experiments conducted on OPT-1.3B and LLaMA-7B demonstrate the effectiveness of Bileve in defeating spoofing attacks with enhanced detectability.
-
TBNet: A Neural Architectural Defense Framework Facilitating DNN Model Protection in Trusted Execution Environments
In Proceedings of the 61st ACM/IEEE Design Automation Conference, 2024
-
ArchLock: Locking DNN Transferability at the Architecture Level with a Zero-Cost Binary Predictor
In The Twelfth International Conference on Learning Representations, 2024
Deep neural network (DNN) models are vulnerable to misuse by attackers who try to adapt them to other tasks. Existing defenses focus on model parameters, neglecting architectural-level defenses. This paper introduces ArchLock, a method utilizing neural architecture search (NAS) and zero-cost proxies to generate models with low transferability, hindering attackers’ attempts. ArchLock maintains high performance on the original task while minimizing performance on potential target tasks.
2023
-
MirrorNet: A TEE-Friendly Framework for Secure On-Device DNN Inference
In 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), 2023
-
AutoReP: Automatic ReLU replacement for fast private network inference
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023
-
NNSplitter: An Active Defense Solution for DNN Model via Automated Weight Obfuscation
In Proceedings of the 40th International Conference on Machine Learning, 2023
NNSplitter is a novel IP protection scheme for DNN models, dividing them into an obfuscated portion stored in normal memory and model secrets stored in secure memory. The obfuscated model, with perturbed weights, hampers attackers’ efforts, while authorized users accessing secure memory achieve high performance. This approach ensures protection against adaptive attacks, maintaining security for DNN models.
2022
-
ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach (Best Paper Nomination)
In Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, 2022
ObfuNAS tackles the threat of malicious architecture extraction in DNN security. By converting architecture obfuscation into a neural architecture search problem, it ensures that obfuscated models perform worse than the original.
2021
-
Deep neural network security from a hardware perspective
In 2021 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH), 2021