Shaizeen Aga

PhD Candidate, University of Michigan, Ann Arbor

Compute Caches

Computing today is dominated by data-centric applications, and there is a strong impetus for specialization for this important domain. Conventional processors’ narrow vector units fail to exploit the high degree of data parallelism in these applications. They also expend a disproportionately large fraction of time and energy moving data over the cache hierarchy and on instruction processing, compared to the actual computation.

In this talk, I will present the Compute Cache architecture, which tackles these challenges by enabling in-place computation in caches. I will describe how Compute Caches harness the emerging SRAM circuit technique of bit-line computing to re-purpose existing cache elements as very large, active vector computational units. This also significantly reduces the overhead of moving data between levels of the cache hierarchy.
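The core idea can be sketched in a toy model. The class names, constants, and layout below are illustrative assumptions, not the actual design: bit-line computing activates two SRAM word-lines at once, so every column (bit-line) of a subarray computes one bit of a bitwise result in parallel, across an entire cache line at a time.

```python
WORD_BITS = 64  # assumed cache-line width for this toy model


class SubarraySketch:
    """Toy model of an SRAM subarray: each row holds one cache line."""

    def __init__(self, num_rows):
        self.rows = [0] * num_rows

    def write(self, row, value):
        self.rows[row] = value & ((1 << WORD_BITS) - 1)

    def bitline_and(self, row_a, row_b):
        # Activating both word-lines pulls a bit-line low unless both
        # cells hold 1, so the sense amplifiers read a line-wide AND.
        return self.rows[row_a] & self.rows[row_b]

    def bitline_or(self, row_a, row_b):
        # The complementary bit-line yields the line-wide OR in the
        # same access.
        return self.rows[row_a] | self.rows[row_b]


sub = SubarraySketch(num_rows=4)
sub.write(0, 0b1100)
sub.write(1, 0b1010)
print(bin(sub.bitline_and(0, 1)))  # 0b1000
print(bin(sub.bitline_or(0, 1)))   # 0b1110
```

Because every column operates independently, one such access performs as many bit operations as there are bit-lines, which is what makes a cache behave like a very wide vector unit.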

I will also discuss solutions to new constraints imposed by Compute Caches, such as operand locality. Compute Caches increase performance by 1.9x and reduce energy by 2.4x for a suite of data-centric applications, including text and database query processing, cryptographic kernels, and in-memory checkpointing. Applications with a larger fraction of Compute Cache operations could benefit even more, as our micro-benchmarks indicate (54x throughput, 9x dynamic energy savings).
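The operand-locality constraint can be illustrated with another toy sketch. The geometry constants and mapping below are invented for illustration only: the point is that two cache lines can feed one in-place operation only if they reside in the same subarray, so that their bits share the same bit-lines.

```python
LINE_BYTES = 64          # assumed cache-line size
SETS_PER_SUBARRAY = 16   # assumed rows per subarray
NUM_SUBARRAYS = 8        # assumed number of subarrays


def subarray_of(addr):
    """Toy mapping from a physical address to the subarray holding its line."""
    line = addr // LINE_BYTES
    return (line // SETS_PER_SUBARRAY) % NUM_SUBARRAYS


def have_operand_locality(addr_a, addr_b):
    # An in-place operation combines two rows of one subarray, so both
    # operand lines must map to the same subarray (same bit-lines).
    return subarray_of(addr_a) == subarray_of(addr_b)


print(have_operand_locality(0x0000, 0x0040))  # True: neighbouring lines
print(have_operand_locality(0x0000, 0x0400))  # False: different subarrays
```

When operands lack this locality, the data must first be copied so they align, which motivates mechanisms for placing operands appropriately.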

Shaizeen is a PhD candidate at the University of Michigan, Ann Arbor. Her research spans a broad range of computer architecture topics, from efficient hardware support for security and novel near-data computing solutions to hardware support that eases programmability.

Our world today increasingly relies on processing massive amounts of data to solve important problems facing us. This data deluge presents interesting challenges, several of which Shaizeen’s research identifies and addresses. The data deluge has pushed us to rely more and more on cloud computing for on-demand compute resources, which in turn has increased the demand for secure architectures. By exploiting 3D-stacked memories, Shaizeen has designed architectures that are secure without compromising performance or energy efficiency. Further, as the amount of data we process grows, current architectures waste considerable energy and time moving that data toward compute units. Her research brought forth a novel near-data computing technique that empowers processor caches to perform in-place computation, avoiding data movement and delivering large gains in performance and energy efficiency. Finally, today we must program a variety of compute architectures to harness performance, yet making them programmable is challenging. Her work offers novel solutions that make multi-core systems more accessible to programmers; her past projects in this area include hardware support for more intuitive memory models and efficient runtimes for multi-core systems.

Throughout her career she has had fruitful stints in industry, through internships (Qualcomm Research Silicon Valley (QRSV), Pacific Northwest National Laboratory (PNNL), NVIDIA Graphics Pvt Ltd) and full-time positions (Morgan Stanley). There she contributed across a wide spectrum of computing: from heterogeneous computing (Qualcomm) to efficient multi-core runtime systems (PNNL), and from programming GPUs (NVIDIA) to the design and development of a data warehouse (Morgan Stanley).

Shaizeen has received several awards and honors, including first place at the University of Michigan CSE Graduate Students Honors Competition 2016, a yearly competition that recognizes graduate-student research of broad interest and exceptional quality. Her undergraduate work won first place in Parallel Computing at Imagine Cup 2009, a worldwide student technical competition organized by Microsoft.

Shaizeen Aga’s Research webpage