Blog

11 Jun 2025

Quantization Part 1

Modern neural networks like CNNs and large language models (LLMs) are often heavily over-parameterized, which leads to significant computational and memory overhead during inference. This makes deployment on resource-constrained devices, such as smartphones or edge hardware, extremely challenging. Quantization has emerged as a powerful technique to address this issue: it reduces the precision of model parameters, for example from 32-bit floats to 8-bit integers, with little loss in accuracy.
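
To make the idea concrete, here is a minimal sketch of uniform (affine) quantization to 8-bit integers, one of the most common schemes; the `quantize_int8` helper and the random weight tensor are illustrative, not from any particular library.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization of a float tensor to int8 (illustrative sketch)."""
    qmin, qmax = -128, 127
    # Map the observed float range [x.min(), x.max()] onto [qmin, qmax].
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = qmin - int(round(x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximate float tensor from the int8 representation."""
    return scale * (q.astype(np.float32) - zero_point)

# float32 -> int8 cuts parameter memory 4x, at the cost of rounding error.
w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s, z)).max())
```

Each element within the representable range is off by at most half a quantization step (scale / 2), which is why reasonably well-spread weights tend to survive the precision drop.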

24 Sep 2023

GSOC Journey

Ever since I started college, I had my sights set on Google Summer of Code (GSoC). Towards the end of 2022, I began searching for organizations that focus on machine learning, a field I'm really passionate about. That's when I discovered Machine Learning for Science (ML4SCI), whose past machine learning projects fascinated me.