Tarek M. Taha

Professor

Full-Time Faculty

School of Engineering: Department of Electrical and Computer Engineering

Contact

Email: Tarek Taha
Phone: 937-229-3119
Kettering Laboratories Room 261E

Parallel Cognitive Systems Laboratory

At the University of Dayton, Dr. Taha leads the Parallel Cognitive Systems Laboratory. The lab specializes in artificial intelligence algorithms, applications, and hardware, as well as high performance parallel computing, with a strong emphasis on applying this research to autonomous systems.

In AI algorithms and applications, the lab examines the following:

  • Deep learning algorithms and applications (medical imaging, image understanding and enhancement, cyber security)
  • Parallel algorithms for cognitive agents (with AFRL)
  • Spiking neural network algorithms for cognitive agents (with AFRL; see the spiking-neuron sketch after this list)
  • Application development for spiking neural processors: Intel Loihi and IBM TrueNorth
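
As a brief illustration of the models behind the spiking work listed above, the following is a minimal sketch (not lab code) of a leaky integrate-and-fire (LIF) neuron, the basic building block of most spiking neural networks; all parameter values here are illustrative assumptions.

    import numpy as np

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                     v_reset=0.0, v_thresh=1.0, resistance=1.0):
        """Simulate one LIF neuron and return its spike train (0/1 per step)."""
        v = v_rest
        spikes = np.zeros(len(input_current))
        for t, i_in in enumerate(input_current):
            # Leaky integration of the membrane potential.
            v += (-(v - v_rest) + resistance * i_in) * dt / tau
            if v >= v_thresh:      # Crossing the threshold emits a spike.
                spikes[t] = 1
                v = v_reset        # Reset the membrane potential after spiking.
        return spikes

    # A constant suprathreshold input drives a regular spike train.
    spike_train = simulate_lif(np.full(200, 1.5))
    print(int(spike_train.sum()), "spikes in 200 time steps")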

In neuromorphic computing, the lab examines all aspects of designing these systems, including the following:

  • Memristor-based mixed signal circuits for learning in neural networks (see the crossbar sketch after this list)
  • Multicore neuromorphic computing architectures (both digital and mixed signal)
  • The application of these architectures
  • Fabrication and modeling of memristor devices
  • New algorithms for deep learning
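
The memristor crossbar work listed above rests on a simple analog principle: synaptic weights are stored as conductances, input voltages drive the crossbar rows, and Kirchhoff's current law sums the column currents, so the array performs a neural network layer's vector-matrix multiplication in a single step. The sketch below is illustrative only (not the lab's circuit; the conductance range is an assumed value) and encodes each signed weight as a differential pair of conductances.

    import numpy as np

    def weights_to_conductances(weights, g_min=1e-6, g_max=1e-4):
        """Map signed weights onto a differential pair of positive conductances."""
        scale = (g_max - g_min) / np.abs(weights).max()
        g_pos = g_min + scale * np.clip(weights, 0, None)
        g_neg = g_min + scale * np.clip(-weights, 0, None)
        return g_pos, g_neg, scale

    def crossbar_layer(voltages, g_pos, g_neg, scale):
        """Column current differences approximate the weighted sums of the inputs."""
        i_pos = voltages @ g_pos        # Kirchhoff summation on the '+' columns.
        i_neg = voltages @ g_neg        # Kirchhoff summation on the '-' columns.
        return (i_pos - i_neg) / scale  # Rescale currents back to weight units.

    weights = np.random.randn(4, 3)     # 4 inputs, 3 neurons (illustrative sizes).
    x = np.random.rand(4)               # Input voltages applied to the rows.
    out = crossbar_layer(x, *weights_to_conductances(weights))
    print(np.allclose(out, x @ weights))  # Matches the ideal digital result.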

In high performance parallel computing, the lab examines the following:

  • Mapping of applications and algorithms onto parallel computing architectures (see the sketch after this list)
  • FPGA implementation of algorithms
  • The use of neuromorphic computing in high performance systems
  • Development of high performance parallel algorithms for cognitive agents
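
As a toy illustration of mapping a data-parallel algorithm onto a multicore machine, the sketch below (illustrative only, not lab code) partitions a dot product across worker processes and reduces the partial results; the worker count and problem size are arbitrary choices.

    from multiprocessing import Pool
    import numpy as np

    def partial_dot(chunk):
        """Worker: compute the dot product of one partition of the data."""
        a, b = chunk
        return float(np.dot(a, b))

    def parallel_dot(a, b, n_workers=4):
        """Split the vectors into n_workers chunks and reduce the partial sums."""
        chunks = list(zip(np.array_split(a, n_workers),
                          np.array_split(b, n_workers)))
        with Pool(n_workers) as pool:
            return sum(pool.map(partial_dot, chunks))

    if __name__ == "__main__":
        a = np.random.rand(1_000_000)
        b = np.random.rand(1_000_000)
        # Same numerical result (up to floating point summation order).
        print(np.isclose(parallel_dot(a, b), np.dot(a, b)))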

Lab Website: http://taha-lab.org 

Publications

Selected Publications

  • Alom, M. Z., Hasan, M., Yakopcic, C., Taha, T. M., & Asari, V. K. (Accepted). Recurrent residual U-Net (R2U-Net) for medical image segmentation. Journal of Medical Imaging.
  • Yakopcic, C., Taha, T. M., Mountain, D. J., Salter, T., Marinella, M. J., & McLean, M. (Accepted). Memristor model optimization based on parameter extraction from device characterization data. IEEE Transactions on Computer-Aided Design.
  • Qi, Y., Hasan, R., & Taha, T. M. (2018). Socrates-D 2.0: A low power high throughput architecture for deep network training. International Joint Conference on Neural Networks (IJCNN). 
  • Hasan, R., Taha, T. M., & Yakopcic, C. (2017). On-chip training of memristor crossbar based multi-layer neural networks. Microelectronics Journal, 66, 31-40.

A complete list of my publications and citations is available on my Google Scholar page.

Honors and Awards

  • Sigma Xi, 2018
  • IEEE Dayton Section Dr. Fritz J. Russ Bio-Engineering Award, 2018
  • Best Paper Award: IEEE International Joint Conference on Neural Networks, 2013
  • AFOSR Summer Faculty Fellowship Program, 2012, 2013-declined
  • Clemson University Board of Trustees Award for Faculty Excellence, 2008
  • NSF CAREER Award, 2007
  • Eta Kappa Nu, 1999
  • Phi Beta Kappa, 1994

Courses Taught

  • Fundamentals of Computer Architecture (ECE 314)
  • Computer Design (ECE 533)
  • Advanced Computer Architecture (ECE 636)
  • Deep Learning (ECE 595)

Degrees

  • Ph.D., Electrical Engineering, Georgia Institute of Technology, 2002
  • M.S.E.E., Electrical Engineering, Georgia Institute of Technology, 1998
  • B.S.E.E., Electrical Engineering, Georgia Institute of Technology, 1996
  • B.A., Pre-Engineering, DePauw University, 1996

Research Interests

  • Computer architecture
  • High performance computing
  • High throughput neuromorphic computing
  • Memristor-based architectures (deep learning algorithms and applications, medical imaging)
  • Reconfigurable computing
  • Processor performance prediction