Connecting the Michigan Tech Computing Community

The ICC invites Michigan Tech faculty to share their research with the computing community in the ICC Seminar Series.

Upcoming Lectures

Dr. Rishad Shafik
April 3, 2018

Assistant Professor of Electronic Systems, Newcastle University, UK

Dr Rishad Shafik (MIET, MIEEE, FHEA) is an internationally recognised expert in adaptive transprecision computing systems design, currently an Assistant Professor of Electronic Systems at Newcastle University. He is also the Director of the Adaptive Electronic Systems Lab, which currently consists of seven PhD students. The lab has been leading research on energy-efficient hardware/software co-optimisation as part of four major EPSRC and EU FP7 projects, which has led to 85+ publications in major venues, with three best-paper nominations and one co-edited book (“Energy-Efficient Fault-Tolerant Systems”, published by Springer USA). Dr Shafik is also an academic fellow of the £5.6m PRiME project, the PI of a Royal Society Exchange project with Michigan Tech, the PI of an EPSRC Impact Acceleration Award, and the Academic Champion of the UK Royal Academy of Engineering Visiting Professorship at Newcastle. He has been involved in many top-tier international conferences; most recently he was the General Co-Chair of DFT’17.

The dramatic spread of computing, at the scale of trillions of ubiquitous devices, is making computation pervasive in the real world. Today, the widely used paradigms directly related to the behaviour of computing systems are those of Real-Time (working to deadlines imposed by the real world) and Low-Power (prolonging battery life or reducing heat dissipation and electricity bills). Neither addresses the strict requirements on power supply, allocation and utilisation that are imposed by the needs of new devices and applications in the computing swarm — many of which are expected to be confronted with challenges of autonomy and battery-free long life. Indeed, we need to design and build systems for survival, operating under a wide range of power constraints; we need Real-Power Computing. This talk will outline challenges and opportunities in Real-Power Computing and how transprecision or approximate computing can advance its foundational technologies.
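
As a toy illustration of the approximate-computing idea mentioned in the abstract (not an example from the talk itself), loop perforation trades a controlled amount of accuracy for proportionally less work. The function names and data below are invented for illustration:

```python
def exact_sum(xs):
    """Baseline: full-effort reduction over all elements."""
    return sum(xs)

def perforated_sum(xs, stride=2):
    """Approximate sum via loop perforation: visit every `stride`-th
    element and rescale. Work drops by roughly 1/stride at the cost of
    a bounded accuracy loss on smooth data."""
    return sum(xs[::stride]) * stride

data = list(range(1000))
exact = exact_sum(data)        # 499500
approx = perforated_sum(data)  # 499000
rel_err = abs(exact - approx) / exact
print(f"exact={exact} approx={approx} rel_err={rel_err:.4%}")
```

Here half the additions are skipped for a relative error of about 0.1%, the kind of accuracy-for-energy trade that transprecision systems make explicit.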

Dr. Robert R. Hoffman
April 10, 2018

Senior Research Scientist
Institute for Human and Machine Cognition

Robert R. Hoffman has been Principal Investigator, Co-Principal Investigator, Principal Scientist, Senior Research Scientist, Principal Author, or Principal Subcontractor on over 60 grants and contracts totaling over $15M. He has led efforts including large, multi-partner, multi-year grant collaborations, contracted alliances of university and private sector partners, and multi-university research initiatives. He has co-authored and co-edited 18 scholarly books and is co-author on over 100 publications in peer-reviewed journals.

Lecture Archive

Dr. Shuai Wang
March 23, 2018

Lecturer, Computer Science at Michigan Technological University
Member of the Center for Scalable Architectures and Systems

Shuai Wang received his B.S. degree in Computer Science from Nanjing University, China and his Ph.D. degree in Computer Engineering from New Jersey Institute of Technology. He is currently a Lecturer in the Computer Science Department at Michigan Technological University. His research interests include Computer Architecture, Reliable Microarchitectures, Power/Thermal-Aware Systems, Embedded Systems, High-Performance Computing, On-Chip Networks, and System/Architectural Support for Big Data.

The degradation of CMOS devices over their lifetime poses a severe threat to system performance and reliability at deep-submicron semiconductor technologies. Negative bias temperature instability (NBTI) is among the most important aging mechanisms, and applying the traditional guardbanding technique to address the decreased speed of devices is too costly. On-chip memory structures, such as register files and on-chip caches, suffer very high NBTI stress. This talk will discuss proposed aging-aware designs that combat NBTI-induced aging in the integer register files, data caches, and instruction caches of high-performance microprocessors. The proposed designs mitigate aging effects by balancing the duty-cycle ratio of the internal bits in on-chip memory structures. Besides aging, power consumption is one of the most prominent issues in microprocessor design; low-power schemes were therefore further applied to different memory structures under the aging-aware design. The low-power aging-aware design achieves a significant power reduction, which further reduces the temperature and NBTI degradation of the on-chip memory structures.
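
The duty-cycle balancing idea can be pictured with a small software model. This is a hypothetical sketch with invented names, not code from the work itself: a cell that sits at logic '0' almost all the time (the NBTI-stressful state) is periodically stored in complemented form, pushing its duty cycle toward 50/50.

```python
def duty_cycle_of_zero(history):
    """Fraction of cycles a cell holds logic '0'."""
    return history.count(0) / len(history)

def store_with_inversion(values, period=2):
    """Sketch of duty-cycle balancing: every `period` cycles the value is
    written in complemented form (a polarity flag bit, omitted here,
    would let reads undo the inversion)."""
    stored = []
    for t, v in enumerate(values):
        invert = (t // period) % 2 == 1
        stored.append(v ^ 1 if invert else v)
    return stored

# A bit that is architecturally almost always 0 -- the worst case for NBTI.
raw = [0] * 100
plain = duty_cycle_of_zero(raw)                           # 1.0
balanced = duty_cycle_of_zero(store_with_inversion(raw))  # 0.5
print(plain, balanced)
```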

Dr. Elizabeth Veinott
February 1, 2018

Associate Professor, Cognitive and Learning Sciences at Michigan Technological University
Member of the Center for Human-Centered Computing

Elizabeth Veinott is a cognitive psychologist who studies collaboration, trust, and decision making of individuals and teams in a variety of human-computer interaction environments. Her work often involves translational research, taking basic theories and applying them in new technologies, work environments, or educational settings. At Tech, she focuses on two areas: decision-making theory and video games for learning. She conducts empirical video-game research to design, develop, and evaluate games for improving specific critical thinking skills in STEM education with college and K-12 students. Dr. Veinott brings 15 years of experience in industry and government research labs doing human-factors research, and has worked as a Principal Scientist at an R&D engineering company and as a contractor at NASA Ames in the Human Performance Division.

Dr. Veinott holds an A.B. from Stanford University and a Ph.D. from the University of Michigan, both in Cognitive Psychology.

Problem solving is a key skill developed in video game play over time (Gee, 2003; McGonigal, 2011) and one that is needed in a variety of STEM activities.  Only in video games may finding the key in one room lead to unlocking the pot of gold in another.  This constant shift in objectives and strategies is one aspect of video game play that has the potential to transfer to problems outside of video games (Veinott et al. 2013, 2014, 2017).  However, very little research quantifies higher-level cognitive learning in video games or finds transfer. In this talk, I will describe several experiments, with promising learning effect sizes, that we have done to bridge this gap and the implications for future STEM education.

Hanan Hibshi
May 11, 2017

Ph.D. candidate and Research Assistant in the Societal Computing program at Carnegie Mellon University

Hanan Hibshi is a Ph.D. candidate and a research assistant in the Societal Computing program at Carnegie Mellon University. Hanan’s research areas include usable security, security requirements, and experts’ decision-making. Her research involves using grounded theory and mixed-methods user experiments to extract rules for use in intelligent systems. Hanan received her MS in Information Security Technology and Management from the Information Networking Institute at Carnegie Mellon University, and her BS in Computer Science from King Abdul-Aziz University in Jeddah, Saudi Arabia.

Organizations rely on security experts to evaluate the security of their systems. These professionals use background knowledge and experience to assess risk and decide on mitigations. The substantial depth of expertise in any one area (e.g., databases, networks, operating systems) precludes the possibility that an expert would have complete knowledge about all threats and vulnerabilities. To begin addressing this problem of fragmented knowledge, we investigate the challenge of developing a security requirements rule base that mimics multi-human expert reasoning to enable new decision-support systems. In this talk, I will explain how to collect relevant information from cyber security experts to enable the generation of: (1) interval type-2 fuzzy sets that capture intra- and inter-expert uncertainty around vulnerability levels; and (2) fuzzy logic rules driving the decision-making process within the requirements analysis. The proposed method relies on comparative ratings of security requirements in the context of concrete vignettes, providing a novel, interdisciplinary approach to knowledge generation for fuzzy logic systems. I will also highlight results of our study with experts and summarize further research directions.
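
One way to picture the interval type-2 construction is as an envelope over individual expert models. The sketch below uses invented triangular memberships and a made-up 0-10 vulnerability scale; it illustrates the general idea of a footprint of uncertainty, not the method from the talk:

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical expert models of the fuzzy concept "high vulnerability"
# on a 0-10 scale; the parameters are illustrative only.
experts = [(5.0, 8.0, 10.0), (6.0, 8.5, 10.0), (5.5, 9.0, 10.0)]

def fou_bounds(x):
    """Interval type-2 style footprint of uncertainty: the envelope of
    the individual (type-1) expert memberships at point x."""
    grades = [triangular(x, *p) for p in experts]
    return min(grades), max(grades)

lower, upper = fou_bounds(7.0)
print(lower, upper)  # the gap upper - lower reflects expert disagreement
```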

Josie McCulloch
May 11, 2017

Ph.D., Research Fellow in Computer Science at the University of Nottingham, UK

Josie McCulloch is a research fellow in Computer Science at the University of Nottingham, UK. Her main research focuses on using type-1 and type-2 fuzzy sets to model subjective information collected from multiple sources where data is imprecise and may contain contradictions. Her work involves aggregating and modelling the complexities of such information, and developing useful measures of analysis on the resulting models. She has applied this within the field of recommendation systems, enabling consumers to use human-like queries to find their desired product.

Knowledge-based recommendation systems enable consumers to describe their ideal product in relation to one they have seen. These descriptions can often be uncertain and fuzzy in nature. For example, a person may request an approximate preferred time to go out for a meal and will have imprecise preferences on what they want to eat. In this talk, recommendations are created by categorising the consumers’ preferences as either implicit or explicit. Implicit preferences, such as quality of food, can be presumed (e.g., higher is always better). However, explicit preferences, such as spiciness, must be provided by the consumer because different people have different desires. This talk discusses methods of using fuzzy sets to capture subjective information on consumer products and creating recommendations based on the implicit and explicit preferences of the consumer.
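
A minimal sketch of such a fuzzy recommendation, with an invented product list and a hand-chosen fuzzy set for "medium spicy" (illustrative only, not the talk's system): the implicit preference (quality) is taken as higher-is-better, while the explicit preference (spiciness) is matched against the consumer's fuzzy set.

```python
def triangular(x, a, b, c):
    """Degree to which x matches a fuzzy preference with peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical restaurants: (name, quality 0-1, spiciness 0-10).
restaurants = [("A", 0.9, 2.0), ("B", 0.7, 6.0), ("C", 0.8, 5.5)]

def score(quality, spiciness, spice_pref):
    """Implicit preference: quality (higher is always better).
    Explicit preference: spiciness, matched against the consumer's
    fuzzy set, then combined with a simple product t-norm."""
    return quality * triangular(spiciness, *spice_pref)

around_five = (3.0, 5.0, 7.0)  # the consumer's fuzzy "medium spicy"
ranked = sorted(restaurants,
                key=lambda r: score(r[1], r[2], around_five),
                reverse=True)
print([r[0] for r in ranked])
```

Note that the highest-quality restaurant ("A") ranks last here because it fails the explicit spiciness preference entirely.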

Zhaohui Wang
April 6, 2017

Assistant Professor, Electrical and Computer Engineering

Dr. Zhaohui Wang received her Ph.D. degree in Electrical Engineering from the University of Connecticut, Storrs, in 2013, and has been with the Department of Electrical and Computer Engineering at Michigan Technological University since then. Her research is in the areas of signal processing, wireless communications and networking, with a major focus on underwater wireless communication networks. She co-authored the first book on OFDM for underwater acoustic communications. Dr. Wang received the 2013 Women of Innovation Award in Collegian Innovation and Leadership from the Connecticut Technology Council, the 2014 Outstanding Academic Achievement Award from the University of Connecticut, and the Outstanding Service Award from the tenth ACM WUWNET conference in 2015. She was selected as an Outstanding Reviewer by the Editorial Board of the IEEE Journal of Oceanic Engineering in 2012, 2013-2014, 2015, and 2016. Dr. Wang is a recipient of a 2016 NSF CAREER award.

Underwater wireless communication networks are the enabling techniques for unmanned, in situ and real-time aquatic monitoring and data collection in a wide range of applications, such as scientific studies, pollution detection, offshore oil and gas drilling, and tactical surveillance. Due to the high attenuation of radio waves in water, acoustics has been a major information carrier for underwater transmissions. However, the underwater environment poses grand challenges for acoustic communications and networking, such as the high spatiotemporal dynamics of underwater acoustic channels, the abundance of interferences, the limited frequency band for long-range transmissions, and the large sound propagation latency. Considering that the lifespan of underwater systems varies from a few years to decades, we envision an online-learning-based framework for underwater acoustic communications and networking, where the underwater acoustic system 1) models and predicts the long-term dynamics of the acoustic environment, and 2) proactively adapts its communication and networking strategy to the dynamics of the environment, thereby maximizing the long-term system performance.

In this talk, we will first present our observations of underwater acoustic channel dynamics in a series of field experiments that were conducted in nearby lakes during both open-water and ice-covered seasons. Then, we will explore signal processing techniques to model and predict underwater acoustic channel dynamics at multiple scales. Through forecasting the underwater channel dynamics, efficient acoustic communication and networking strategies can be developed. In the online-learning-based framework, the tradeoff between the acoustic environment exploration and exploitation will be tackled via Bayesian reinforcement learning techniques, which provide a principled approach to weighing the immediate reward of a communication and networking strategy and its associated long-term benefit of revealing the environment’s dynamics during acoustic communications. At the end of the talk, we will give a brief description of an underwater testbed system developed in our research group, which has been serving as an excellent platform for both undergraduate and graduate education.
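
Thompson sampling is one simple instance of the Bayesian exploration-exploitation tradeoff described above. The sketch below, with invented strategy names and success probabilities, is illustrative only and far simpler than the channel-aware framework in the talk: each transmission updates a Beta posterior over a strategy's success rate, and strategies are chosen by sampling from those posteriors.

```python
import random

random.seed(0)

# Hypothetical communication strategies with per-packet success
# probabilities that are unknown to the learning agent.
true_success = {"low_rate_robust": 0.9, "high_rate_fragile": 0.2}

# Beta(1, 1) priors: [successes + 1, failures + 1] per strategy.
posterior = {arm: [1, 1] for arm in true_success}
counts = {arm: 0 for arm in true_success}

for _ in range(2000):
    # Thompson sampling: draw a plausible success rate from each
    # posterior and transmit with the strategy that sampled highest.
    arm = max(posterior, key=lambda a: random.betavariate(*posterior[a]))
    counts[arm] += 1
    success = random.random() < true_success[arm]
    posterior[arm][0 if success else 1] += 1

print(counts)  # the robust strategy dominates once the posteriors sharpen
```

The appeal for long-lived underwater systems is that exploration cost is weighed automatically against the long-term value of learning the environment.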

Hairong Wei
March 31, 2017

Associate Professor, School of Forest Resources and Environmental Science
Department of Computer Science (Adjunct)
Department of Mathematics (Adjunct)

Hairong Wei is exploring the possibility of identifying genes that regulate biological traits via systems biology approaches. He achieves this goal through combinatorial approaches: (1) designing very efficient biological experiments to yield information-enriched gene expression data sets; (2) performing gene reduction to identify major regulators controlling traits via complex data analysis; (3) constructing local regulatory networks controlling a trait. He has developed algorithms and software pipelines for identifying the transcription factors controlling a trait with 50-100% accuracy. He is generally interested in identifying genes regulating the following traits: (1) wood productivity and quality; (2) resistance to disease and pests; (3) biomass production. However, identifying the genes that regulate any agriculturally important trait interests him.

Identification of the genes that control various biological processes (pathways) and complex traits is an important and challenging problem. As the quantity of genomic data has exploded, however, it has become increasingly tractable. I will introduce two efficient algorithms for identifying genes that govern biological processes and complex traits. The first constructs a collaborative network of all regulatory genes, decomposes it, and then rebuilds collaborative subnetworks, each of which is evidenced to control a biological process or a trait. The second builds multi-layered gene regulatory networks from which we can identify regulators at high hierarchical levels; these high-hierarchy regulators have pleiotropic effects on biological processes and complex traits. The results of both approaches can be validated against existing knowledge bases and through biological experiments, indicating that both methods are not only cost-effective but also highly accurate in identifying the important regulatory genes that govern various biological processes and complex traits.
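
The notion of a regulator's position in a multi-layered network can be sketched on a toy directed graph. The gene names and edges below are invented for illustration and are not from the talk; a node's layer is taken as the length of the longest regulatory chain above it, so layer-0 nodes sit at the top of the hierarchy.

```python
# Toy directed regulatory network: an edge (u, v) means "u regulates v".
edges = [("TF1", "TF2"), ("TF1", "TF3"), ("TF2", "TF4"),
         ("TF2", "G1"), ("TF3", "G1"), ("TF3", "G2"), ("TF4", "G3")]

nodes = {n for e in edges for n in e}
regulators_of = {n: set() for n in nodes}
for u, v in edges:
    regulators_of[v].add(u)

memo = {}
def layer(node):
    """Hierarchy layer = length of the longest regulatory chain above
    the node (assumes the network is acyclic, as in this toy example)."""
    if node not in memo:
        ups = regulators_of[node]
        memo[node] = 0 if not ups else 1 + max(layer(u) for u in ups)
    return memo[node]

top_regulators = sorted(n for n in nodes if layer(n) == 0)
print(top_regulators, {n: layer(n) for n in sorted(nodes)})
```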

Zhuo Feng
March 17, 2017

Associate Professor, Electrical and Computer Engineering

Zhuo Feng received a PhD in Computer Engineering from Texas A&M University in 2009, an ME degree in Electrical Engineering from the National University of Singapore in 2005, and a BE in Information Engineering from Xi’an Jiaotong University, China, in 2003. He is currently an assistant professor in the Department of Electrical and Computer Engineering at Michigan Tech, where he is affiliated with the computer engineering group. His research interests include VLSI design, computer-aided design, parallel heterogeneous computing algorithms, nonlinear IC simulations in the time/frequency domain, on-chip interconnect modeling, parasitics extraction and reduction, full-chip thermal/electrical simulations and optimizations for 3D-ICs, die-package power delivery network (PDN) modeling and optimization for multi-core processor design, variation-aware VLSI circuit modeling/optimization, and statistical static timing analysis (SSTA).

Present-day nanoscale integrated circuit (IC) technologies make it possible to integrate billions of transistors into a single chip, while almost every key subsystem of modern chip designs, such as clock distribution networks, power delivery networks (PDNs), embedded memory arrays, and analog and mixed-signal systems, may reach an unprecedented complexity of hundreds of millions of components. As a result, it becomes extremely difficult, even intractable, to model, simulate, optimize and verify future nanoscale ICs at large scale with existing design methodologies. On the other hand, recent results (a.k.a. the Laplacian Paradigm) from the theoretical computer science and applied math communities show that it is possible to construct almost-linear-sized subgraphs (graph sparsifiers) that very well approximate the spectra of the original graph via spectral graph sparsification, which can immediately lead to much faster numerical and graph-based algorithms. For instance, spectrally sparsified transportation networks allow much faster navigation algorithms in large transportation systems; spectrally sparsified data networks allow data to be stored, partitioned and clustered more economically; and spectrally sparsified circuit networks allow large circuit systems to be simulated, optimized and verified more efficiently. However, extremely challenging issues remain in designing practically efficient spectral sparsification algorithms that can handle large-scale real-world graphs (networks).

In this talk, I will first introduce our recent work on a practically efficient, nearly-linear-complexity spectral graph sparsification approach for aggressively sparsifying large graph Laplacian matrices, integrated circuits, as well as data networks. Next, I will talk about how to leverage these results to develop nearly-linear-time numerical and graph-based algorithms for solving large partial differential equation (PDE) and sparse matrix problems, for design automation of future nanoscale ICs, and for graph partitioning and data clustering of large data networks. Lastly, I will discuss how to leverage graph sparsification techniques for optimally solving large PDEs and sparse matrices on emerging heterogeneous parallel computing platforms.
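
The quadratic-form guarantee behind spectral sparsification can be seen on a tiny hand-built example: a reweighted 3-edge star approximating the complete graph K4. The weights below are chosen by hand for this toy case (practical sparsifiers choose edges and weights algorithmically, e.g. via effective resistances):

```python
def quad_form(edges, x):
    """Laplacian quadratic form x^T L x = sum of w * (x_u - x_v)^2 over edges."""
    return sum(w * (x[u] - x[v]) ** 2 for u, v, w in edges)

# Dense graph: the complete graph K4 with unit weights (6 edges).
k4 = [(u, v, 1.0) for u in range(4) for v in range(u + 1, 4)]
# Candidate sparsifier: a 3-edge star, re-weighted to compensate
# for the removed edges.
star = [(0, v, 2.0) for v in range(1, 4)]

# For this pair the generalized eigenvalues lie in [0.5, 2], so every
# quadratic form is preserved within a factor of 2 in either direction.
ratios = []
for x in [(1.0, -1.0, 0.0, 0.0), (1.0, 1.0, -1.0, -1.0), (0.0, 1.0, 1.0, 1.0)]:
    ratios.append(quad_form(star, x) / quad_form(k4, x))
print([round(r, 3) for r in ratios])
```

Because algorithms such as PDE solvers and clustering interact with a graph mainly through these quadratic forms, the sparsifier can stand in for the dense graph at a fraction of the cost.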

Lecture Archive, Fall ’16

Tim Wilkin
November 17, 2016

Deakin University
Senior Lecturer and Computer Science Course Director, School of Information Technology

Tim Wilkin received the B.Sc. (Hons) degree in applied mathematics from Monash University in 1997, the Grad. Cert. in Higher Education from Deakin University in 2011 and the Ph.D. degree in mathematics and computer science from Deakin University in 2014. He is currently a Senior Lecturer and Director of the computer science program within the School of Information Technology, Deakin University and a member of Deakin’s Institute for Intelligent Systems Research and Innovation.

Prior to his time at Deakin University he worked in a range of academic and industry-based research roles, including on the development of the Aerosonde, Australia’s first commercial UAV and the first autonomous aircraft to cross the North Atlantic Ocean. His research interests include the mathematical formulation of problems in computer vision, image and signal processing, with applications in autonomous systems, robotics and cyber-physical systems.

Dr Wilkin received the Best Paper Award at PRICAI 2000, the Alfred Deakin Medal for Doctoral Thesis from Deakin University in 2015, the Jim & Alison Leslie Award for Outstanding Achievement in Teaching and Learning from Deakin University in 2013 and he was awarded an OLT Citation for Outstanding Contribution to Student Learning by the Commonwealth Government of Australia in 2014.

Aggregation functions play a vital role in areas such as decision support, data analytics and information processing. In particular, averaging functions – also known as means – are commonly applied in statistical analysis, image and signal processing, data fusion and decision problems. With the rise of “big data” problems, dealing with noise, missing data and outliers in real data sets requires averaging functions that are inherently robust, yet the mathematical definition of averaging aggregations makes them inherently susceptible to such corruption, leading to undesirable outcomes. In this presentation I will introduce recent developments in robust averaging, including key theoretical contributions, and demonstrate their application to problems in image and signal processing, and preference and ratings aggregation.
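
A minimal illustration of why robustness matters in averaging (the data here is invented): the arithmetic mean is dragged arbitrarily far by a single corrupted value, while the median and a trimmed mean are barely affected.

```python
from statistics import mean, median

def trimmed_mean(xs, k=1):
    """Discard the k smallest and k largest values before averaging --
    one simple way to make the mean robust to outliers."""
    s = sorted(xs)
    return mean(s[k:len(s) - k])

ratings = [2, 3, 3, 4, 100]   # one corrupted reading
print(mean(ratings))          # 22.4 -- dominated by the outlier
print(median(ratings))        # 3
print(trimmed_mean(ratings))  # 10/3, about 3.33
```

The robust averaging aggregations discussed in the talk generalize this idea while retaining the mathematical properties (e.g., monotonicity, idempotence) required of a mean.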

Zhi “Jenny” Zheng
November 18, 2016

ICC Visiting Research Assistant Professor

Zhi Zheng is a Research Assistant Professor in the Department of Electrical and Computer Engineering at Michigan Tech. She received her B.S. (2008) in Biomedical Engineering and her M.S. (2011) in Pattern Recognition and Intelligent Systems from Xidian University, as well as her M.S. (2013) and Ph.D. (2016) in Electrical Engineering from Vanderbilt University. Her research interests cover a wide variety of topics in human-robot interaction, human-computer interaction, computer vision, machine learning, and developmental psychology. She was also named a Blended and Online Learning Fellow of Vanderbilt University for conducting engineering education research. Dr. Zheng has designed and developed multiple advanced machine-assisted adaptive intervention systems for young children with ASD. As of 2016, she had published 21 research articles in leading journals and conference proceedings, and received a best paper award at HCI 2015 as well as a best student paper award nomination at ICORR 2013. Her research was selected for the 20th annual Coalition for National Science Funding Capitol Hill Exhibition & Reception, and has been covered by media outlets (e.g., NSF Science360 and WSMV-TV Nashville Channel 4). Dr. Zheng has also been actively involved in teaching and demonstrating technologies to help K-12 students learn STEM.

Human-robot interaction is being continuously explored as a potentially efficacious intervention tool for young children with ASD. ASD is a common developmental disorder, characterized by social communication impairments, with a high prevalence rate of 1 in 68 children in the U.S. Although applying robots for children with ASD has been studied for decades, several challenges remain, including: 1) how to target the core deficits of ASD using technologies; 2) how to make robotic systems adaptive based on children’s real-time responses; and 3) how to detect interaction cues non-invasively. This presentation will describe the design and development of intelligent robotic intervention systems and user studies targeting two core deficits of ASD: imitation and joint attention impairments. Specifically, I will discuss novel robotic systems in which a humanoid robot is embedded as the intervention administrator and coordinates with gesture recognition, large-range gaze tracking, and an adaptive hierarchical intervention protocol.
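
An adaptive hierarchical prompting loop of the general kind described can be sketched as a tiny state machine. The prompt levels and names here are invented for illustration and are not taken from the systems in the talk: the robot starts with the least intrusive prompt and escalates only when the child does not respond.

```python
# Illustrative prompt hierarchy, from least to most supportive.
LEVELS = ["verbal_cue", "verbal_plus_gesture", "physical_demonstration"]

def run_trial(responses, max_attempts=3):
    """Adaptive hierarchical protocol sketch. `responses[level]` is True
    when the child succeeds at that prompt level; the robot escalates
    through LEVELS until success or the attempt budget runs out."""
    log = []
    for level in LEVELS[:max_attempts]:
        log.append(level)
        if responses.get(level, False):
            return True, log
    return False, log

# A child who responds only once a gesture accompanies the verbal cue.
ok, log = run_trial({"verbal_plus_gesture": True})
print(ok, log)
```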

Hyungchul Yoon
October 28, 2016

Assistant Professor, Civil & Environmental Engineering

Dr. Hyungchul Yoon is currently an assistant professor of Civil and Environmental Engineering. His research interests include developing smart sensing technologies for managing and monitoring civil infrastructure systems. He has been utilizing cutting-edge technologies, including smartphones and unmanned aerial vehicles, to aid post-disaster response and to monitor civil infrastructure systems. His work is inherently multi-disciplinary, involving a broad array of structural engineering, structural dynamics, system identification, computer vision, machine learning, context-aware computing, and signal processing. His teaching interests include structural analysis, matrix structural analysis, structural dynamics, and numerical methods.

The concept of Smart Cities has been introduced to categorize a vast area of activities intended to enhance the quality of life of citizens. A central feature of these activities is the pervasive use of Information and Communication Technologies (ICT), helping cities make better use of their limited resources. Indeed, the ASCE Vision for Civil Engineering in 2025 portends a future where engineers will rely on and leverage real-time access to living databases, sensors, diagnostic tools, and other advanced technologies to ensure informed decisions are made. However, these advanced technologies are set against a backdrop of deteriorating infrastructure and natural and human-made disasters. Moreover, recent events constantly remind us of the tremendous devastation that natural and human-made disasters can wreak on society. As such, emergency response and resilience are among the crucial dimensions of any Smart City. The Department of Homeland Security (DHS) has recently launched plans to invest $50 million to develop cutting-edge emergency response technologies for Smart Cities. Furthermore, after significant disasters, it is imperative that emergency facilities and evacuation routes, including bridges and highways, be assessed for safety. The objective of this research is to provide a new framework using commercial off-the-shelf (COTS) devices such as smartphones, digital cameras, and unmanned aerial vehicles for enhancing the functionality of Smart Cities, especially with respect to emergency response and civil infrastructure monitoring/assessment. To achieve this objective, this research focuses on post-disaster victim localization and assessment, first-responder tracking and event localization, and vision-based structural monitoring/assessment, including the use of unmanned aerial vehicles (UAVs). This research constitutes a significant step toward the realization of Smart City resilience.

Christian Wagner
September 16, 2016

ICC Visiting Professor

Christian Wagner’s research focuses on the modeling and handling of uncertain data arising from both qualitative (people) and quantitative (e.g., sensors, processes) sources, decision support systems, and data-driven policy design. He has published more than 80 peer-reviewed articles, including prize-winning papers in international journals and conferences, most recently having co-authored runners-up for both the best regular and best student paper at the IEEE International Conference on Fuzzy Systems 2016 in Vancouver, Canada. He has attracted around £1 million as principal investigator and £6 million as co-investigator in the last six years. He is an Associate Editor of the IEEE Transactions on Fuzzy Systems journal and is actively involved in the academic community through, for example, the organization of special sessions and tutorials at premier IEEE conferences. He has developed and been involved in the creation of multiple open-source software frameworks, making cutting-edge research accessible to peer researchers as well as to research communities beyond computer science, including an R toolkit for type-2 fuzzy systems and a new Java-based framework for the object-oriented implementation of general type-2 fuzzy sets and systems. His current research projects focus on the development, adaptation, deployment, and evaluation of artificial intelligence techniques in interdisciplinary projects, bringing together heterogeneous data from stakeholders and quantitative measurements to support informed and transparent decision making.

Most, if not all, information sources, whether sensors through measurement, processes through inference, or humans through their opinions, are uncertain. While uncertainty is often either ignored or modeled retrospectively, this risks information loss, potentially resulting in poorly informed decision making, in particular when multiple data sources are combined. In this talk, I will discuss the capture, aggregation and modeling of interval-valued data, with a specific focus on human sources. The talk will include a number of techniques and recent applications, from crowd-sourcing to consumer-driven food manufacture and environmental conservation management decision support.
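
One elementary way to aggregate interval-valued responses without discarding their uncertainty is an endpoint-wise mean (a sketch with invented survey data, not a method claimed by the speaker): the aggregate is itself an interval, so how uncertain the underlying answers were remains visible.

```python
def interval_mean(intervals):
    """Aggregate interval-valued responses endpoint-wise: the mean of
    intervals [l_i, u_i] is [mean(l_i), mean(u_i)], so the width of the
    result records the uncertainty of the underlying answers."""
    n = len(intervals)
    lows = [lo for lo, hi in intervals]
    highs = [hi for lo, hi in intervals]
    return (sum(lows) / n, sum(highs) / n)

# Hypothetical survey: each respondent marks a range rather than a point.
answers = [(6.0, 8.0), (5.0, 9.0), (7.0, 7.5)]
lo, hi = interval_mean(answers)
print((lo, hi))  # the width hi - lo reflects respondent uncertainty
```

Richer schemes, such as those discussed in the talk, weight or model the intervals rather than simply averaging endpoints, but the principle of carrying uncertainty through the aggregation is the same.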