Vimal William

Ph.D. Student, Electrical & Computer Engineering

University of Arizona

🏆 Herbold Fellow

About

I am a first-year Ph.D. student at the University of Arizona, co-advised by Prof. Jyotikrishna Dass and Prof. Ravi Tandon.

My research lies at the intersection of machine learning, computer architecture, and distributed systems, with a focus on designing domain-specific systems for AI and optimizing existing systems to improve AI efficiency.

Currently, I'm working on optimizing transformer-based AI architectures to efficiently handle longer context lengths, with an emphasis on reducing memory utilization and improving scalability for large-scale applications.

Research Interests

  • AI Systems Architecture: Domain-specific hardware and systems for machine learning workloads
  • Transformer Optimization: Efficient long-context processing and memory optimization
  • Compiler Engineering: Graph compilers and optimization frameworks (MLIR, TVM)
  • Distributed Systems: Scalable ML infrastructure and distributed training

Education

2024 - Present

Ph.D. in Electrical & Computer Engineering

University of Arizona, Tucson, AZ

Advisors: Prof. Jyotikrishna Dass, Prof. Ravi Tandon

Herbold Fellow

2018 - 2022

B.E. in Electronics and Communication Engineering

Anna University, Chennai, India

Experience

2022 - 2024

System Software & Compiler Engineer

SandLogic Technologies, Bangalore, India

Worked on compiler optimization and system software for AI accelerators

Publications

Brain-Computer Interface

Deep Learning Architecture for Motor Imaged Words

Akshansh Gupta, Vimal William

IC-RA-MSA-ET 2022

Applies DFT and ICA to condition noisy signals from the motor-cortex region, then uses state-of-the-art AI models to predict the associated motor actions.
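The DFT-based conditioning step can be sketched in plain NumPy (a minimal illustration, not the paper's actual pipeline: the sampling rate, band edges, and toy signal are assumptions, and an ICA stage such as scikit-learn's FastICA would follow to unmix channels):

```python
import numpy as np

def dft_bandpass(signal, fs, low, high):
    """Keep only DFT bins in [low, high] Hz and invert back to the time domain."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spectrum * band, n=len(signal))

# Toy EEG-like trace: a 10 Hz mu-band component plus 50 Hz line noise.
fs = 250                      # assumed sampling rate (Hz)
t = np.arange(fs) / fs        # one second of samples
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

clean = dft_bandpass(x, fs, low=8, high=13)   # retain the mu band only
```

After band-limiting, the line-noise component is gone while the mu-band rhythm survives, which is the kind of conditioned input a downstream classifier would receive.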

MFCC Analysis

Study on the Behaviour of Mel Frequency Cepstral Coefficient Algorithm for Different Windows

Vimal William

ICITIIT 2022

A comprehensive study of how different windowing functions affect the MFCC algorithm in speech signal processing and audio feature extraction.
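The effect the study examines shows up at the framing stage of MFCC; a toy sketch (frame length, sampling rate, and test tone are arbitrary choices, and the mel filterbank and DCT stages are omitted):

```python
import numpy as np

def frame_power_spectrum(frame, window):
    """Power spectrum of one analysis frame under a given window function."""
    return np.abs(np.fft.rfft(frame * window)) ** 2

n, fs = 512, 16000
t = np.arange(n) / fs
# A tone halfway between FFT bins (bin 32.5 here) maximizes spectral leakage.
frame = np.sin(2 * np.pi * 1015.625 * t)

windows = {
    "rectangular": np.ones(n),
    "hamming": np.hamming(n),
    "hann": np.hanning(n),
}
spectra = {name: frame_power_spectrum(frame, w) for name, w in windows.items()}
# Tapered windows suppress leakage far from the tone at the cost of a wider
# main lobe, the trade-off that shifts the resulting cepstral coefficients.
```

Comparing the spectra at bins far from the tone shows the rectangular window leaking substantially more energy than Hamming or Hann, which is why the window choice changes the MFCCs.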

Selected Projects

Tiny Graph Compiler

A minimal graph compiler built on the MLIR stack, following the approach proposed in the TVM white paper. The backend targets NVPTX for GPU code generation.

MLIR · CUDA · NVPTX · Compiler Design
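The core rewrite such a compiler performs can be sketched as a toy fusion pass over a list-based graph IR (illustrative only: the node format, op names, and greedy strategy are assumptions, not the project's actual IR):

```python
ELEMENTWISE = {"add", "mul", "relu"}

def fuse_elementwise(graph):
    """Fuse straight-line chains of elementwise consumers into single nodes.

    `graph` is a topologically ordered list of {"name", "op", "inputs"} dicts;
    inputs may also name graph-level tensors that are not nodes.
    """
    uses = {n["name"]: [] for n in graph}
    by_name = {n["name"]: n for n in graph}
    for n in graph:
        for i in n["inputs"]:
            if i in uses:
                uses[i].append(n["name"])

    fused, absorbed = [], set()
    for n in graph:
        if n["name"] in absorbed:
            continue
        chain, tail = [n], n
        # Extend while the tail has exactly one consumer and it is elementwise.
        while len(uses[tail["name"]]) == 1 and \
                by_name[uses[tail["name"]][0]]["op"] in ELEMENTWISE:
            tail = by_name[uses[tail["name"]][0]]
            absorbed.add(tail["name"])
            chain.append(tail)
        internal = {c["name"] for c in chain}
        inputs = [i for c in chain for i in c["inputs"] if i not in internal]
        fused.append({"name": tail["name"],
                      "op": "+".join(c["op"] for c in chain),
                      "inputs": inputs})
    return fused

# matmul -> bias add -> relu collapses into one fused node, i.e. the epilogue
# fusion that saves kernel launches and memory round-trips on a GPU backend.
graph = [
    {"name": "mm", "op": "matmul", "inputs": ["x", "w"]},
    {"name": "biased", "op": "add", "inputs": ["mm", "b"]},
    {"name": "act", "op": "relu", "inputs": ["biased"]},
]
fused = fuse_elementwise(graph)
```

In a real MLIR/TVM-style flow this rewrite happens on the graph dialect before lowering the fused regions to NVPTX kernels.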

VStream

GStreamer-based streaming pipeline for on-device object detection. Supports INT8 inference on both CPU and GPU, optimized for edge deployment.

GStreamer · Object Detection · Edge AI · INT8 Quantization
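INT8 inference rests on mapping float tensors to 8-bit integers plus a scale factor; a minimal sketch of symmetric per-tensor quantization (a common scheme, used here for illustration and not necessarily the one VStream implements):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: x ≈ scale * q, with q in [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

x = np.array([-1.0, -0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = q.astype(np.float32) * scale   # dequantize to check round-trip error
```

The round-trip error is bounded by half a quantization step (scale / 2), which is why per-tensor INT8 often preserves detection accuracy after calibration while cutting memory and bandwidth 4x versus FP32.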

News

[Aug 2024] Started Ph.D. at University of Arizona as a Herbold Fellow
[Aug 2022] Paper accepted at IC-RA-MSA-ET 2022
[Feb 2022] Paper accepted at ICITIIT 2022