Study and comparative evaluation of vector and accelerator implementations for the RISC-V architecture
This project will be performed in collaboration with Assistant Professor Georgios Keramidas and Dr. Panagiotis Mousouliotis from the Aristotle University of Thessaloniki.
Objective
The RISC-V open instruction set architecture (ISA) has emerged as a leading platform for customizable and energy-efficient computing, especially in edge AI and embedded systems. Its modularity and openness have led to the rapid development of specialized vector extensions and hardware accelerators targeting Machine Learning (ML) and Deep Learning (DL) workloads. However, these implementations differ significantly in design philosophy, performance trade-offs and hardware–software integration, making it difficult to identify the most suitable approaches for specific application domains.
How
You will perform a systematic study and comparative evaluation of the vector extension and hardware accelerator implementations developed for the RISC-V architecture. The study will cover both academic and research platforms (such as Ara, Spatz, Gemmini, Hwacha, and PULP) and commercial solutions (e.g., SiFive Intelligence X280, Andes NX27V, T-Head C910), with the aim of capturing their current state, design characteristics, and suitability for ML and DL acceleration.
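A comparative evaluation of this kind typically reduces each platform/workload pair to a small set of comparable metrics such as throughput (GOPS) and energy efficiency (GOPS/W). The sketch below shows one possible shape for such a measurement record in Python; the class name, fields, and helper function are illustrative assumptions, not part of an existing framework, and any numbers plugged in would come from your own benchmark runs.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """One benchmark run of an ML/DL kernel on a RISC-V platform.

    Hypothetical record layout for the comparative study; field names
    are assumptions, and values must come from real measurements.
    """
    platform: str      # e.g. a vector core or accelerator under test
    workload: str      # e.g. an int8 GEMM kernel from an ML benchmark
    ops: int           # total arithmetic operations executed
    runtime_s: float   # measured wall-clock time in seconds
    energy_j: float    # measured energy consumption in joules

    @property
    def gops(self) -> float:
        """Throughput in giga-operations per second."""
        return self.ops / self.runtime_s / 1e9

    @property
    def gops_per_watt(self) -> float:
        """Energy efficiency in GOPS/W (ops per joule, scaled)."""
        return self.ops / self.energy_j / 1e9


def rank_by_efficiency(results):
    """Order measurements from most to least energy-efficient."""
    return sorted(results, key=lambda m: m.gops_per_watt, reverse=True)
```

Normalizing every platform to the same derived metrics is what makes academic cores and commercial IP directly comparable, even when their raw reporting formats differ.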
Outputs
A comparative evaluation framework and reference report detailing the design characteristics, performance, and energy efficiency of current RISC-V vector and accelerator solutions for ML/DL. The report will include:
- a taxonomy and classification of RISC-V vector and accelerator architectures
- quantitative benchmarks of selected platforms
- an analysis of trade-offs across different performance metrics
- guidelines and recommendations for selecting or extending RISC-V accelerators for ML/DL tasks in energy-constrained and edge-AI scenarios
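The trade-off analysis could, for example, identify the Pareto-optimal platforms in a two-metric space (say, throughput vs. energy efficiency): any platform dominated in both metrics by another can be excluded from the selection guidelines. A minimal sketch of such a filter, with a hypothetical function name and illustrative placeholder tuples rather than measured results:

```python
def pareto_front(points):
    """Return names of points not dominated in (throughput, efficiency) space.

    Each point is a (name, perf, eff) tuple. A point dominates another
    if it is at least as good in both metrics and strictly better in
    one; dominated platforms are filtered out.
    """
    front = []
    for name, perf, eff in points:
        dominated = any(
            (p2 >= perf and e2 >= eff) and (p2 > perf or e2 > eff)
            for _, p2, e2 in points
        )
        if not dominated:
            front.append(name)
    return front
```

For edge-AI scenarios the same filter can be rerun with different metric pairs (e.g., efficiency vs. silicon area) to support scenario-specific recommendations.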
This research can lead to the publication and presentation of the results at an international conference or in a journal.
Prerequisites
- Solid understanding of Machine Learning and Deep Learning fundamentals
- High-level coding skills in Python
- Nice to have or willing to learn: Coding skills in C
- Nice to have or willing to learn: Understanding the architecture of modern MCUs
- Willingness to contribute to the state of the art in neural network models