
ASTRE Seminar: Ali Akoglu

Seminar title and speaker

Modeling for FPGA Architectures, and Parallelizing Applications for FPGA- and GPU-Based Computing.

Ali Akoglu, University of Arizona.

Date and location

Thursday, November 17, 2016.

ENSEA, room 384 (seminar presented via videoconference).

Abstract

In this presentation we will first cover ongoing research activities in the reconfigurable computing lab on wirelength-prediction-based field programmable gate array (FPGA) CAD tools and post-routing analytical models for homogeneous FPGA architectures. We will then share the results of our application mapping studies in image processing, signal processing, and bioinformatics targeting FPGA, graphics processing unit (GPU), and FPGA-GPU coupled platforms.

The rapid growth of the FPGA architecture design space has led to an explosion in architectural choices, well over 1,000,000 configurations. This makes searching for Pareto-optimal solutions with a CAD-based incremental design process nearly impossible for hardware architects and application engineers. Our aim is to let hardware architects and application engineers evaluate the impact of their design choices through model-based trend analysis and rapidly converge to a desired solution without having to launch extensive CAD-based experiments. Despite the proliferation of FPGA models, today's state-of-the-art modeling tools suffer from two drawbacks. First, they rely on circuit characteristics extracted from various stages of the FPGA CAD flow, making them CAD dependent. Second, they lack the ability to take routing architecture parameters into account. These two factors are a barrier to converging rapidly to the desired implementation. In this research, we address these two challenges and propose the first static power and post-routing wirelength models in academia. Our models are unique in that they are CAD-independent and take both logic and routing architecture parameters into account.
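
As a purely illustrative sketch of what a CAD-independent analytical wirelength model can look like (this is not the model presented in the seminar), the snippet below estimates average net length from a Rent's-rule-style scaling law. The function name, parameters, and fitted coefficient are hypothetical; the point is that an architect could sweep logic and routing parameters without launching place-and-route runs.

    # Illustrative sketch only: a Rent's-rule-style analytical wirelength
    # estimate, not the specific model described in this research.
    import math

    def estimate_avg_wirelength(num_blocks: int, rent_exponent: float,
                                cluster_size: int, channel_width: int,
                                k_fit: float = 1.0) -> float:
        """Rough average net length (in logic-block pitches) for a square FPGA array.

        num_blocks     -- occupied logic blocks after packing (assumed input)
        rent_exponent  -- Rent exponent p of the mapped circuit (0.5 < p < 1)
        cluster_size   -- LUTs per logic cluster (logic architecture parameter)
        channel_width  -- routing tracks per channel (routing architecture parameter)
        k_fit          -- hypothetical coefficient calibrated once, offline
        """
        # Donath-style scaling: average length grows as N^(p - 0.5) for p > 0.5.
        geometric_term = num_blocks ** (rent_exponent - 0.5)
        # Toy correction terms standing in for logic/routing architecture effects.
        cluster_term = 1.0 + math.log2(cluster_size) / 10.0
        routing_term = 1.0 + 1.0 / channel_width
        return k_fit * geometric_term * cluster_term * routing_term

    # Sweep a routing parameter without running a CAD experiment.
    for w in (40, 60, 80, 100):
        print(w, round(estimate_avg_wirelength(10_000, 0.65, 8, w), 3))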

The ingenuity of parallelizing an algorithm comes into play when trying to balance the usage of computation and memory resources on the target hardware, and to manage memory all the way down to a byte-by-byte basis. During the second part of the presentation, we will investigate ways to use FPGAs as visual sensing nodes in greenhouse-based plant production systems for contact-free sensing and plant health monitoring. As a case study, we will discuss the key modifications we made to the histogram equalization process to make it feasible to implement on a lightweight FPGA-based sensor node. We will then present how we mapped a signal classification algorithm onto GPU and GPU-FPGA integrated platforms to achieve real-time classification in environments with varying power consumption constraints. We will finally present a GPU-based T-cell receptor (TCR) synthesis study we completed to explore the correlation between disease and immune system responses. We will show that this peta-scale process can be completed in 16 days on a GPU, whereas it would originally take approximately 260 weeks on a 2.8 GHz single-threaded processor.
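
To make the histogram equalization case study concrete, here is a minimal sketch of the standard 8-bit algorithm in Python; the FPGA-oriented modifications discussed in the talk are not reproduced here, and the grayscale assumption and array shapes are ours.

    # Minimal sketch of textbook 8-bit histogram equalization (not the
    # FPGA-adapted variant presented in the seminar).
    import numpy as np

    def equalize_histogram(image: np.ndarray) -> np.ndarray:
        """Equalize an 8-bit grayscale image via its cumulative distribution."""
        hist = np.bincount(image.ravel(), minlength=256)   # per-intensity counts
        cdf = np.cumsum(hist)                               # cumulative histogram
        cdf_min = cdf[np.nonzero(cdf)][0]                   # first non-empty bin
        # Map each intensity so the output distribution is approximately uniform.
        lut = np.round(255 * (cdf - cdf_min) / (image.size - cdf_min))
        lut = lut.clip(0, 255).astype(np.uint8)
        return lut[image]

    # Usage on a synthetic low-contrast image.
    img = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
    out = equalize_histogram(img)
    print(img.min(), img.max(), "->", out.min(), out.max())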

We will conclude the presentation with an overview of some of the ongoing collaborative projects in the reconfigurable computing lab, such as the Just in Time Architectures (JITA) project, which dynamically assembles and re-assembles a pool of compute and storage resources to meet dynamic workload changes in future data center applications, and the Heart Cyber Expert (HeartCyPert) project, which characterizes ventricular arrhythmias through 3D models and parallel algorithms and predicts the risk level of ventricular arrhythmia in patients with chronic heart failure.

Bio

Ali Akoglu is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Arizona. He is the co-director of the National Science Foundation (NSF) Center for Cloud and Autonomic Computing, which works on the design and development of architectures for achieving self-management capabilities across the layers of cloud computing systems; director of the NVIDIA CUDA Teaching Center, which promotes GPU-based computing across the UA campus; and director of the Reconfigurable Computing Laboratory, which works on the design and development of FPGA CAD tools, models for FPGA architectures, and adaptive hardware architectures. He received his Ph.D. degree in Computer Science from Arizona State University in 2005. He has been involved in many crosscutting collaborative projects aimed at bridging the gap between domain scientists, programming environments, and emerging highly parallel hardware architectures. His research projects have been funded by the National Science Foundation, the Office of Naval Research, the US Air Force, the NASA Jet Propulsion Laboratory, the Army Battle Command Battle Laboratory, and industry partners such as NVIDIA and Huawei.
