Sharing Knowledge. Fostering Collaboration.

The ICC brings the most eminent scholars and creative professionals in the field of computing to the campus
to exchange state-of-the-art research results and discuss future research directions.

Lecture Archive, Spring 2018

Tom Hou
April 27, 2018

Bradley Distinguished Professor of Electrical and Computer Engineering, Virginia Tech

Tom Hou is the Bradley Distinguished Professor of Electrical and Computer Engineering at Virginia Tech, USA.  He received his Ph.D. degree from NYU Tandon School of Engineering (formerly Polytechnic Univ.) in 1998. His current research focuses on developing innovative solutions to complex science and engineering problems arising from wireless and mobile networks. He is particularly interested in exploring new performance limits at the network layer by exploiting advances at the physical layer. In recent years, he has been actively working on cross-layer optimization problems for cognitive radio wireless networks, cooperative communications, MIMO-based networks and energy related problems. He is also interested in wireless security.  Prof. Hou was named an IEEE Fellow for contributions to modeling and optimization of wireless networks.  He has published two textbooks: Cognitive Radio Communications and Networks: Principles and Practices (Academic Press/Elsevier, 2009) and Applied Optimization Methods for Wireless Networks (Cambridge University Press, 2014).  The first book has been selected as one of the Best Readings on Cognitive Radio by the IEEE Communications Society.  Prof. Hou’s research was recognized by five best paper awards from the IEEE and two paper awards from the ACM. He holds five U.S. patents.

Prof. Hou is a prominent leader in the research community.  He was an Area Editor of IEEE Transactions on Wireless Communications (Wireless Networking area), and an Editor of IEEE Transactions on Mobile Computing, IEEE Journal on Selected Areas in Communications – Cognitive Radio Series, and IEEE Wireless Communications.  Currently, he is an Editor of IEEE/ACM Transactions on Networking and ACM Transactions on Sensor Networks.  He is the Steering Committee Chair of the IEEE INFOCOM conference – the largest and top-ranked conference in networking.  He is a member of the Board of Governors as well as a Distinguished Lecturer of the IEEE Communications Society.

MIMO is becoming pervasive and has been widely used in many communication systems (e.g., Wi-Fi and cellular communications).  As the number of nodes in the network and the number of antennas at each node increase, traditional matrix-based models become intractable.  In recent years, degree-of-freedom (DoF) based models have proved to be very effective in studying MIMO-based wireless networks. However, a serious limitation of these models is that they assume the channel matrix is of full rank.  Although this assumption is valid when the number of antennas is small, it quickly becomes problematic as the number of antennas increases and the propagation environment is not close to ideal. In this talk, I present our recent findings on this important problem. In particular, I present a new and general DoF-based model under rank-deficient conditions. To gain a fundamental understanding of this research, I first lay out some basic understanding of how MIMO’s DoFs are consumed for spatial multiplexing (SM) and interference cancellation (IC) in the presence of rank deficiency. Based on this understanding, I present a general DoF model that can be used for identifying the DoF region of a multi-link MIMO network and for studying DoF scheduling in MIMO networks. Most interestingly, I show that shared (cooperative) DoF consumption at both transmit and receive nodes is critical for optimal allocation of DoFs for IC.  This is in contrast to existing DoF-based models, which say that DoFs should only be consumed at either the transmit or the receive node, but not both.  This new finding offers a new understanding of how DoFs are consumed for IC under the general rank-deficient condition and serves as an important tool for future research on many-antenna MIMO networks.
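
For context, here is a schematic sketch of the DoF bookkeeping involved, written in assumed notation rather than the speaker’s exact formulation: in the usual full-rank models, interference between two links is cancelled entirely at one side, whereas a rank-deficient cross channel only requires its rank to be handled, and that cost can in principle be split between transmitter and receiver.

```latex
% Illustrative DoF accounting for two interfering MIMO links (assumed notation):
% link i carries z_i streams, node i has A_i antennas, and the cross channel
% from transmitter i to receiver j has rank r_{ij}.
\begin{align*}
\text{Full rank, one-sided IC:}\quad
  & z_i + \lambda_{ij}\, z_j \le A_i, \qquad
    z_j + (1-\lambda_{ij})\, z_i \le A_j, \qquad \lambda_{ij}\in\{0,1\},\\[4pt]
\text{Rank deficient, shared IC:}\quad
  & \theta^{T}_{ij} + \theta^{R}_{ij} \ge r_{ij}, \qquad
    z_i + \theta^{T}_{ij} \le A_i, \qquad
    z_j + \theta^{R}_{ij} \le A_j,
\end{align*}
% where \theta^T_{ij} and \theta^R_{ij} are the DoFs that the transmit and
% receive nodes, respectively, devote to cancelling that interference.
```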

Indrajit Ray
April 20, 2018

Professor of Computer Science at Colorado State University

Dr. Indrajit Ray is a Professor of Computer Science at Colorado State University. He joined CSU in 2001, moving from the University of Michigan-Dearborn, where he worked as an Assistant Professor from August 1997 to July 2001. Dr. Ray obtained his Ph.D. in Information Technology from George Mason University in August 1997. Indrajit’s primary research is in computer security and privacy. His major contributions have been in security risk modeling and security protocol design using applied cryptographic techniques. Other areas in which he has made valuable contributions are trust models for security and micro-data disclosure control. He has published more than 150 technical papers. His research has been well funded through various federal agencies. He has advised several Ph.D. students, many of whom hold tenured positions in academia. He has also played leadership roles in the academic community by serving as program chair of various conferences. In 2015 he served as General Chair of the ACM CCS conference, the flagship conference of ACM SIGSAC, and in 2017 as General Chair of the IEEE CNS conference. He was the founder of the IFIP TC-11 WG 11.9 on Digital Forensics and its first Chair. Recently, Indrajit has helped establish the CSU site of the NSF-funded I/UCRC Center for Configuration Analytics and Automation, where he is Co-Director. This multi-university research center, which includes fee-paying members from industry and FFRDCs, works with enterprises and government entities to improve the service assurability, security, and resiliency of enterprise IT systems, cloud/SDN data centers, and cyber-physical systems by applying innovative analytics and automation. More recently, he has been invited to serve as a Program Director at the National Science Foundation, where he will be responsible for the Secure and Trustworthy Cyberspace program.

The computing landscape has changed drastically over the last several years. Computing is now a mashup of social networks, mobile networks, the Internet-of-Things (IoT), cloud computing, cyber-physical systems, and the traditional IT network. So, too, has the face of cyber-attacks changed. Social engineering attacks that leverage the unsuspecting end user are among the most prevalent threats. Simple attacks on resource-constrained IoT devices get amplified into large-scale attacks on the Internet’s IT infrastructure. Anecdotal evidence suggests that we are waging a losing battle. How do we go about defining a better cyber defense framework under this changing landscape?

APTRON is an ongoing project that takes a mission-centric view of computing and develops a formal methodology for quantitative security risk assessment and mitigation. In the APTRON model, a mission is abstracted as a complex network of networks defined by dependencies between various system activities, user activities, and resources. The continuity of the mission is more important than protecting the computing infrastructure on which it executes from cyber-attacks. End users become some of the weakest links that now need to be addressed. Interestingly, such a change in paradigm from the traditional asymmetric attacker-defender warfare, where the defender is trying to plug all possible security holes and the attacker is trying to exploit just one, enables a defender to proactively define and deploy defensive strategies in a more efficient and cost-effective manner. In this talk, we present the quantitative model that forms the mainstay of APTRON. This model allows the defender to articulate and reason about the dependencies between a mission’s cyber assets, the mission’s activities and objectives, the effect of various types of end users on the mission, and the effects of a cyber-attack on the continuity of the mission. We discuss some of the risk mitigation methodologies that allow one to adapt the defense response to emerging threats.
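
As a rough illustration of the dependency-based reasoning sketched above, the toy example below propagates per-node compromise probabilities up a mission dependency graph. The graph, the numbers, the noisy-OR combination rule, and all names are assumptions made for illustration only; they are not the APTRON model.

```python
# Hypothetical sketch: mission-centric risk propagation over a dependency graph.
# Dependency weights are the probability that compromise of a node disrupts the
# activity that depends on it.

DEPENDENCIES = {
    "process_orders": [("web_server", 0.6), ("db_server", 0.9), ("clerk", 0.3)],
    "ship_goods":     [("db_server", 0.7), ("warehouse_app", 0.8)],
}

# Per-node probability of compromise (e.g., derived from vulnerability/threat data).
P_COMPROMISE = {"web_server": 0.2, "db_server": 0.05,
                "clerk": 0.4, "warehouse_app": 0.1}

def activity_risk(activity):
    """Probability the activity is disrupted, assuming independent causes (noisy-OR)."""
    p_ok = 1.0
    for node, impact in DEPENDENCIES[activity]:
        p_ok *= 1.0 - P_COMPROMISE[node] * impact
    return 1.0 - p_ok

def mission_risk(activities):
    """Mission continuity fails if any required activity is disrupted."""
    p_ok = 1.0
    for activity in activities:
        p_ok *= 1.0 - activity_risk(activity)
    return 1.0 - p_ok

print(round(mission_risk(["process_orders", "ship_goods"]), 3))  # ~0.343
```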

Kenneth M. Hopkinson
April 20, 2018

Professor and Interim Head, Department of Electrical and Computer Engineering, Air Force Institute of Technology

Kenneth M. Hopkinson received a B.S. degree from Rensselaer Polytechnic Institute, Troy, New York, in 1997 and M.S. and Ph.D. degrees from Cornell University, Ithaca, New York, all in Computer Science. He is a Professor of Computer Science at the Air Force Institute of Technology, Wright-Patterson AFB, Ohio, the graduate school for the Air Force. His research interests include networking, fault tolerant and reliable distributed systems, information security, the use of networks to enhance critical infrastructures, particularly in smart electric power grid protection and control systems, cognitive radios, space applications, and remote sensing.

As power systems increasingly make use of computer networks, these so-called smart grid systems require new approaches to effectively test and evaluate the evolving next-generation protection and control systems in order to ensure the reliability and information security that are expected in modern critical infrastructures. This talk will discuss three efforts at the Air Force Institute of Technology (AFIT) to address these challenges. First, the presentation will discuss the development and subsequent use of the electric power and communication synchronizing simulator (EPOCHS), a distributed simulation environment, to help evaluate a new generation of communication-based protection schemes. The presentation will then go on to describe recent work to develop a trust-management toolkit, a robust and configurable protection system augmentation, which can successfully function in the presence of untrusted (malfunctioning) smart grid protection system nodes by combining reputation-based trust with network-flow algorithms to identify and mitigate faulty smart-grid protection nodes. The third portion of the talk will focus on the use of software model checking applied to smart grid protection software designs to rigorously assess their fault tolerance. The approaches we take provide valuable feedback to protection engineers during the development of new systems, for assessing the quality of competing designs, and for risk management purposes. The essential elements of the approaches taken also have broader application to distributed system problems, particularly those with mission-critical requirements.

Matthew Valenti
April 13, 2018

Professor, Lane Department of Computer Science and Electrical Engineering, West Virginia University

Matthew Valenti is a Professor in the Lane Department of Computer Science and Electrical Engineering at West Virginia University and site director for the Center for Identification Technology Research (CITeR), an NSF Industry/University Cooperative Research Center (I/UCRC).  His research is in the area of wireless communications, including cellular networks, military communication systems, sensor networks, and coded modulation for satellite communications.  He has published over 100 peer-reviewed papers and his research is funded by NSF, DoD, and industry.  He is active in the organization of major IEEE Communication Society (ComSoc) conferences, including serving as the Technical Program Chair for MILCOM 2017 and as chair of the technical steering committee for IEEE GLOBECOM and ICC.  He has served as Editor for several IEEE publications and as the Chair of ComSoc’s Communication Theory Technical Committee.  At WVU, he serves as the Chair of the Faculty Senate and as a faculty representative to the WVU Board of Governors.  He teaches several upper-division and graduate courses on wireless networks, communication theory, and coding theory, is the recipient of several teaching, research, and advising awards from his College, and is a recipient of the 2013 WVU Foundation Outstanding Teaching Award, the highest teaching award at WVU.  He is registered as a Professional Engineer in the state of West Virginia and is a Fellow of the IEEE.

To deal with the impending mobile data onslaught, future (5G) wireless networks will rely on the dense deployment of small cells, the opening of previously unavailable bands at millimeter wave, and the development of improved intercell interference coordination. The use of traditional, self-contained base stations for such environments is an expensive proposition.  A viable alternative is to replace expensive stations with simple remote radio heads and perform all of the baseband processing in a centralized computing cloud.  The benefit is a more efficient and elastic use of computing assets, the exploitation of global channel state information, and opportunities for improved intercell coordination.  This presentation reviews the concept of a centralized radio access network (C-RAN), with an emphasis on the interplay between computational efficiency and data throughput.  The concept of “computational outage” is introduced and applied to the analysis of C-RAN networks. The framework is applied to single-cell and multi-cell scenarios using parameters drawn from the LTE standard.  It is found that in computationally limited networks, the effective throughput can be improved by using a computationally aware policy for selecting the modulation and coding scheme, which sacrifices spectral efficiency in order to reduce the computational outage probability.  When signals of multiple base stations are processed centrally, a computational diversity benefit emerges, and the benefit grows with increasing user density.
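
To make the computational-outage tradeoff concrete, here is a small hedged sketch of a computationally aware modulation-and-coding-scheme (MCS) selection policy. The rate table, the complexity model, and the function are invented for illustration; they are not taken from the talk or from the LTE standard.

```python
# Hypothetical sketch: pick the best MCS subject to both channel quality and the
# compute headroom of the centralized (C-RAN) baseband pool.

# (spectral efficiency in bits/s/Hz, relative decoding complexity per resource block)
MCS_TABLE = [
    (0.5, 1.0), (1.0, 1.6), (1.5, 2.3), (2.0, 3.1),
    (3.0, 4.5), (4.0, 6.2), (5.0, 8.4),
]

def pick_mcs(channel_capacity, compute_budget, load):
    """Highest-rate MCS the channel supports that the cloud can also decode in time."""
    best = None
    for efficiency, complexity in MCS_TABLE:
        if efficiency > channel_capacity:
            continue                      # would cause a channel outage
        if complexity * load > compute_budget:
            continue                      # would cause a computational outage
        if best is None or efficiency > best[0]:
            best = (efficiency, complexity)
    return best  # None means outage under either constraint

# Under heavy load the policy backs off from 4.0 to 3.0 bits/s/Hz, sacrificing
# spectral efficiency to keep the decoding deadline and avoid dropping the block.
print(pick_mcs(channel_capacity=4.2, compute_budget=20.0, load=4.0))  # (3.0, 4.5)
```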

Mohammed Atiquzzaman
April 6, 2018

Edith J. Kinney Gaylord Presidential Professor, University of Oklahoma

Mohammed Atiquzzaman holds the Edith J. Kinney Gaylord Presidential Professorship in the School of Computer Science at the University of Oklahoma. Dr. Atiquzzaman is the Editor-in-Chief of the Journal of Network and Computer Applications, the founding Editor-in-Chief of Vehicular Communications, and serves/served on the editorial boards of many journals including IEEE Communications Magazine, Real Time Imaging Journal, International Journal of Communication Networks and Distributed Systems and Journal of Sensor Networks and International Journal of Communication Systems. He co-chaired the IEEE High Performance Switching and Routing Symposium (2003, 2011), IEEE Globecom and ICC (2014, 2012, 2010, 2009, 2007, 2006), IEEE VTC (2013), and the SPIE Quality of Service over Next Generation Data Networks conferences (2001, 2002, 2003). He was the panels co-chair of INFOCOM’05, is/has been on the program committee of many conferences such as INFOCOM, Globecom, ICCCN, ICCIT, and Local Computer Networks, and serves on review panels at the National Science Foundation. Dr. Atiquzzaman received the IEEE Communication Society’s Fred W. Ellersick Prize, the IEEE Distinguished Technical Achievement Award, the IEEE Satellite Communications Technical Contribution Award, and the NASA Group Achievement Award for “outstanding work to further NASA Glenn Research Center’s effort in the area of Advanced Communications/Air Traffic Management’s Fiber Optic Signal Distribution for Aeronautical Communications” project.

Data communications between Earth and spacecraft, such as satellites, have traditionally been carried out through dedicated links. Shared links using Internet Protocol-based communication offer a number of advantages over dedicated links. The movement of spacecraft, however, gives rise to mobility management issues. This talk will discuss various mobility management solutions for extending the Internet connection to spacecraft.  The talk will provide an overview of the network-layer-based solution being developed by the Internet Engineering Task Force and compare it with the transport-layer-based solution that has been developed at the University of Oklahoma in conjunction with the National Aeronautics and Space Administration. Network in motion is an extension of the host mobility protocols for managing the mobility of networks which are in motion, such as those in airplanes and trains. The application of networks in motion will be illustrated for both terrestrial and space environments.

Andrew Ginter
March 20, 2018

M.Sc., VP Industrial Security, Waterfall Security Solutions

Andrew Ginter is the VP Industrial Security at Waterfall Security Solutions and an Adjunct Assistant Professor at Michigan Technological University. At Waterfall, Andrew leads a team responsible for industrial cyber-security research, contributions to standards and regulations, as well as security architecture recommendations for industrial sites. Before Waterfall, Andrew led the development of technology products for SCADA systems, IT/OT middleware, and ICS cyber security. He holds patents in the fields of IT/OT integration and ICS cyber security, is a co-author of the Industrial Internet Consortium Security Framework, the author of SCADA Security – What’s broken and how to fix it and the author of The Top 20 Cyberattacks on Industrial Control Systems. He is the co-chair of the ISA SP-99 working group updating the ICS Security Technologies report, and a frequent contributor to ICS cyber-security standards and post-secondary curricula. Andrew holds B.Sc. AMAT and M.Sc. CPSC degrees from the University of Calgary.

The most thoroughly-protected industrial sites understand cyber security very differently from the “average” view espoused in most standards and best-practice guidance. Preventing injuries, environmental disaster and costly physical damage is almost always the priority for industrial cyber security, not “protecting the data.” Preventing such consequences means tightly controlling all information flows that might encode attacks, rather than encrypting or authenticating those flows, or trying to detect intrusions after the fact. In this presentation Andrew explains how the most thoroughly-protected sites understand and apply cyber-security concepts, and argues that with the sophistication of cyber attacks constantly increasing, elements of this cautious approach are already migrating into the mainstream of ICS security practice. Andrew also applies this perspective to consumer “Internet of Things” cyber-physical systems, and argues that important safety issues in the consumer IoT space still urgently need to be addressed.

James M. Keller
March 9, 2018

Curators’ Professor, Electrical Engineering and Computer Science Department, University of Missouri

James M. Keller received the Ph.D. in Mathematics in 1978. He holds the University of Missouri Curators’ Distinguished Professorship in the Electrical and Computer Engineering and Computer Science Departments on the Columbia campus. He is also the R. L. Tatum Professor in the College of Engineering. His research interests center on computational intelligence: fuzzy set theory and fuzzy logic, neural networks, and evolutionary computation with a focus on problems in computer vision, pattern recognition, and information fusion including bioinformatics, spatial reasoning in robotics, geospatial intelligence, sensor and information analysis in technology for eldercare, and landmine detection. Professor Keller is a Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the International Fuzzy Systems Association (IFSA), and a past President of the North American Fuzzy Information Processing Society (NAFIPS). He received the 2007 Fuzzy Systems Pioneer Award and the 2010 Meritorious Service Award from the IEEE Computational Intelligence Society (CIS). He has been a distinguished lecturer for the IEEE CIS and the ACM. Professor Keller has coauthored around 500 technical publications.

In 1998, Lotfi Zadeh, the creator of fuzzy set theory and fuzzy logic, coined the term Recognition Technology, saying that it refers to current or future systems that have the potential to provide a “quantum jump in the capabilities of today’s recognition systems”. Recognition Technology will include systems that incorporate three advances: new sensors, novel signal processing and soft computing. That vision has come to pass. I will discuss these three aspects of recognition technology through two quite different case studies that I am involved in: landmine detection and eldercare technology. They are both recognition systems. The former has a goal of detecting objects, explosive hazards, to help save lives while the latter focuses on recognizing human activities to allow older adults to live independently with a higher quality of life. While the sensors applied to these problems are dissimilar, they share many of the signal processing and pattern recognition approaches. This talk is my tribute to Professor Zadeh who passed away recently at the age of 96.

Lecture Archive, Fall 2017

Richard Brown
November 21, 2017

Program Director, NSF, Computing and Communications Foundations (CCF) Division
Professor, ECE, Worcester Polytechnic Institute

Dr. Richard Brown III is a Program Director at the National Science Foundation in the Computing and Communications Foundations (CCF) division of the Directorate for Computer & Information Science & Engineering (CISE). He is currently on leave from his appointment as Professor in the Department of Electrical and Computer Engineering at Worcester Polytechnic Institute, where he has been a faculty member since 2000. He received a PhD in Electrical Engineering from Cornell University in 2000 and MS and BS degrees in Electrical Engineering from The University of Connecticut in 1996 and 1992, respectively. From 1992-1997, he was a design engineer at General Electric Electrical Distribution and Control in Plainville, Connecticut. From August 2007 to June 2008, he held an appointment as a Visiting Associate Professor at Princeton University. He is also currently serving as an Associate Editor for IEEE Transactions on Wireless Communications.

This talk will provide an overview of distributed coherent communication systems and then discuss highlights of some selected recent results in this area. Distributed coherent communication systems (also called “virtual antenna arrays” or “distributed MIMO”) extend the well-known advantages of antenna arrays to networks of single-antenna devices by closely coordinating the transmissions of the individual devices and pooling their antenna resources. There are additional challenges in realizing the gains of distributed coherent transmission, however, including accounting for independent oscillator dynamics and potentially independent kinematics at each node in the system. These impairments establish fundamental limits on the gains that can be achieved in distributed coherent communication systems. This talk will discuss these challenges and then provide highlights on two topics: (i) channel state tracking and performance characterization of large-scale distributed MIMO communication systems and (ii) asymptotic performance analysis of large-scale distributed reception in the low per-node SNR regime. I will also present some recent work on the use of distributed MIMO for improving the efficiency of far-field wireless power transfer.

Elizabeth Whitaker
November 2, 2017

Principal Research Engineer, Georgia Tech Research Institute

Elizabeth Whitaker is a Georgia Tech Research Institute (GTRI) Principal Research Engineer and GTRI Fellow. Her research focus areas are Artificial Intelligence, Cognitive Systems, Intelligent Agents, Case-based Reasoning, Learning and Planning, Socio-cultural and Behavioral Modeling, Agent-based and Hybrid Models, and Intelligent Tutoring Systems. She has spent sixteen years as a Principal Research Engineer in the Information and Communications Lab at GTRI, preceded by eight years of research and development in intelligent systems at NCR’s Human Interface Technology Center. Her work has focused on research and development for the Department of Defense, the Intelligence Community, and commercial clients. Her Ph.D. is from the University of South Carolina in ECE, with a focus in Artificial Intelligence and Intelligent Tutoring Systems. She is currently working on a new approach that uses natural language processing techniques to automatically extract component connections from a system’s technical documents in order to reason about those connections and security vulnerabilities.

Cognitive systems, such as Siri and Watson, are intelligent systems characterized by human-like intelligence and based on interdisciplinary research between computer science, cognitive science, behavioral psychology and engineering. How does one use cognitive systems to improve computational reasoning systems?

Dr. Whitaker’s research has focused on development of different computational reasoning approaches to improve human decision-making and situation understanding. In this talk, she will discuss three projects that explore three different cognitive paradigms:

  • Case-based reasoning applied to help analysts search for information
  • Agent-based modeling to improve techniques to represent human behavior through improved cultural variable approaches
  • Student model representation in a serious game to dynamically represent the knowledge state and recommend learning activities.

Cognitive Systems use a variety of knowledge representations and reasoning paradigms with the goal of helping humans make more intelligent interpretations and decisions. These systems are aware of their environments and intelligently adapt their performance to the users’ needs. They use knowledge of the domain and of their environments to provide intelligent solutions. They may learn so that future solutions are improved.

There has been a significant recent upsurge of interest in cognitive systems. Funding agencies are launching new programs in cognitive systems, as evidenced by the NSF’s interdisciplinary program on Computational Cognition; DARPA’s programs on Big Mechanisms, Causal Exploration, and World Modelers, which focus on approaches to help analysts build intelligent models; DARPA’s Explainable AI program, which aims to provide explanations that give users insight into the mechanisms of some forms of statistical approaches; and IARPA’s CREATE program, which focuses on crowdsourcing and argumentation to support a human analyst team.

Deming Chen
October 6, 2017

Professor, Donald Biggar Willett Faculty Scholar
Department of Electrical and Computer Engineering
University of Illinois, Urbana-Champaign

Dr. Deming Chen obtained his BS in computer science from the University of Pittsburgh, Pennsylvania in 1995, and his MS and PhD in computer science from the University of California, Los Angeles in 2001 and 2005, respectively. He worked as a software engineer between 1995-1999 and 2001-2002. He joined the ECE department of the University of Illinois at Urbana-Champaign (UIUC) in 2005 and has been a full professor in the same department since 2015. He is a research professor in the Coordinated Science Laboratory and an affiliate professor in the CS department. His current research interests include system-level and high-level synthesis, computational genomics, GPU and reconfigurable computing, and hardware security. He has given more than 90 invited talks sharing these research results worldwide. Dr. Chen is a technical committee member for a series of top conferences and symposia on EDA, FPGA, low-power design, and VLSI systems design. He has also served as Program or General Chair, TPC Track Chair, Session Chair, Panelist, Panel Organizer, or Moderator for many of these conferences. He is an associate editor for several IEEE and ACM journals. He received the NSF CAREER Award in 2008, the ACM SIGDA Outstanding New Faculty Award in 2010, and IBM Faculty Awards in 2014 and 2015. He has also received six Best Paper Awards. He was included in the List of Teachers Ranked as Excellent in 2008. He was involved in two startup companies previously, which were both acquired. In 2016, he co-founded a new startup, Inspirit IoT, Inc., for design and synthesis for machine learning targeting the IoT industry. He is the Donald Biggar Willett Faculty Scholar of the College of Engineering at UIUC.

The Internet has become the most pervasive technology and has infiltrated every aspect of our lives. It is predicted that there will be 50 billion devices connected in the Internet of Things (IoT) by 2020. This explosion of devices naturally demands low design cost and fast time-to-market for producing highly energy-efficient ICs. Meanwhile, the big data accumulated through these devices need to be processed in a timely manner, which demands computing with high processing power under energy constraints in datacenters. In this talk, Dr. Chen will share a series of his recent works that culminated in an open-source automated hardware/software co-design flow that can provide optimal or near-optimal co-design solutions for SoC chips embedded in IoT devices. He will also introduce a unique compiler flow called FCUDA, which uses the CUDA language to program FPGAs, offering the opportunity to map existing GPU CUDA kernels to FPGAs for low-energy and high-performance computing in the cloud.

Kang G. Shin
September 28, 2017

Kevin and Nancy O’Connor Professor of Computer Science
Professor, Electrical Engineering and Computer Science
University of Michigan

Kang G. Shin is the Kevin & Nancy O’Connor Professor of Computer Science in the Department of Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor. His current research focuses on QoS-sensitive computing and networking as well as on embedded real-time and cyber-physical systems.

He has supervised the completion of 80 PhDs, and authored/coauthored more than 900 technical articles, a textbook, and more than 30 patents or invention disclosures, and received numerous best paper awards, including the Best Paper Awards from the 2011 ACM International Conference on Mobile Computing and Networking (MobiCom’11), the 2011 IEEE International Conference on Autonomic Computing, the 2010 and 2000 USENIX Annual Technical Conferences, as well as the 2003 IEEE Communications Society William R. Bennett Prize Paper Award and the 1987 Outstanding IEEE Transactions on Automatic Control Paper Award. He has also received several institutional awards, including the Research Excellence Award in 1989, Outstanding Achievement Award in 1999, Distinguished Faculty Achievement Award in 2001, and Stephen Attwood Award in 2004 from The University of Michigan (the highest honor bestowed on Michigan Engineering faculty); a Distinguished Alumni Award of the College of Engineering, Seoul National University in 2002; the 2003 IEEE RTC Technical Achievement Award; and the 2006 Ho-Am Prize in Engineering (the highest honor bestowed on Korean-origin engineers).

He was a co-founder of a couple of startups and also licensed some of his technologies to industry.

There has been an exponential growth in Internet of Things (IoT) devices, which are being developed and deployed for diverse applications and environments. By the year 2020, more than 50 billion devices are predicted to be connected (via the Internet). The key challenges of such rapid growth are the heterogeneity of HW, SW, and users; low-power communications and computation; and privacy and security.

This talk will first cover generic aspects, applications, and communications of IoT and then elaborate on security and privacy challenges associated with IoT. Last, I will briefly discuss our solutions to some of these challenges: BLE-Guard for securing BLE communications, Vauth for securing users’ voice interactions with IoT devices, and PriQA for automating a privacy question-and-answer system with a chatbot.

Jie Wu
September 22, 2017

Director of Center for Networked Computing (CNC)
Laura H. Carnell Professor
Department of Computer and Information Sciences
College of Science and Technology
Temple University

Jie Wu is Director of Center for Networked Computing (CNC) and Laura H. Carnell Professor at Temple University. He served as the Associate Vice Provost for International Affairs and Chair in the Department of Computer and Information Sciences at Temple University. Prior to joining Temple University, he was a program director at the National Science Foundation and was a distinguished professor at Florida Atlantic University. His current research interests include mobile computing and wireless networks, routing protocols, cloud and green computing, network trust and security, and social network applications. Dr. Wu regularly publishes in scholarly journals, conference proceedings, and books. He serves on several editorial boards, including IEEE Transactions on Service Computing and the Journal of Parallel and Distributed Computing. Dr. Wu was general co-chair/chair for IEEE MASS 2006, IEEE IPDPS 2008, IEEE ICDCS 2013, and ACM MobiHoc 2014, as well as program co-chair for IEEE INFOCOM 2011 and CCF CNCC 2013. He was an IEEE Computer Society Distinguished Visitor, ACM Distinguished Speaker, and chair for the IEEE Technical Committee on Distributed Processing (TCDP). Dr. Wu is a CCF Distinguished Speaker and a Fellow of the IEEE. He is the recipient of the 2011 China Computer Federation (CCF) Overseas Outstanding Achievement Award.

This talk gives a survey of crowdsourcing applications, with a focus on algorithmic solutions. The recent search for Malaysia Airlines flight 370 is used first as a motivational example. Fundamental issues in crowdsourcing, in particular incentive mechanisms for paid crowdsourcing and algorithms and theory for crowdsourced problem-solving, are then reviewed. Several algorithmic crowdsourcing applications are discussed in detail, with a focus on big data. The talk also discusses several ongoing crowdsourcing projects at Temple University.

Lecture Archive, Spring 2017

Kalyan S. Perumalla
April 21, 2017

Distinguished Research and Development Staff Member and manager, Computer Science and Mathematics Division, Oak Ridge National Laboratory

Adjunct Professor, School of Computational Sciences and Engineering at Georgia Institute of Technology

Kalyan Perumalla is a Distinguished Research and Development Staff Member and manager in the Computer Science and Mathematics Division at the Oak Ridge National Laboratory, and an Adjunct Professor in the School of Computational Sciences and Engineering at the Georgia Institute of Technology. Dr. Perumalla founded and currently leads the Discrete Computing Systems Group at the Oak Ridge National Laboratory. In 2015, he was selected as a Fellow of the Institute of Advanced Study at Durham University, UK. He was appointed to serve on the National Academies of Sciences, Engineering, and Medicine Technical Advisory Boards on Information Science and on Computational Sciences at the U.S. Army Research Laboratory, 2015-2017. Dr. Perumalla is among the first recipients of the U.S. Department of Energy Early Career Award in Advanced Scientific Computing Research, 2010-2015. Over the past 15 years, he has served as a principal investigator or co-principal investigator on research projects sponsored by the Department of Energy, Department of Homeland Security, Air Force, DARPA, Army Research Laboratory, National Science Foundation, and industry. Dr. Perumalla earned his Ph.D. in computer science from the Georgia Institute of Technology in 1999. His areas of interest include reversible computing, high performance computing, parallel discrete event simulation, and parallel combinatorial optimization. His notable research contributions are in the application of reversible computation to high performance computing and in advancing the vision of a new class of supercomputing applications using real-time, parallel discrete event simulations. High performance simulations spanning over 200,000 processor cores have been achieved by his algorithms and research prototypes on large supercomputing systems. He has published his research and delivered invited lectures and tutorials on topics spanning high performance computing and simulation. His recent book Introduction to Reversible Computing is among the first few in its area. He co-authored another book, three book chapters, and over 100 articles in peer-reviewed conferences and journals. Five of his co-authored papers received best paper awards, in 1999, 2002, 2005, 2008, and 2014. Some of his research prototype tools in parallel and distributed computing have been disseminated to research institutions worldwide. Dr. Perumalla serves as a program committee member and reviewer for international conferences and journals. He is a member of the editorial boards of the ACM Transactions on Modeling and Computer Simulation (TOMACS) and the SCS Transactions of the Society for Modeling and Simulation International (SIMULATION).

“Exascale computing,” being pursued by the US and others across the world, is the next level of supercomputing, many times larger than the largest petascale parallel systems available today. Among the challenges in achieving exascale computing are the unprecedented levels of aggregate concurrency and the relatively low memory sizes per computational unit offered. Towards meeting these challenges, we present the design, development, and implementation of a novel technique called Cloning. Cloning efficiently simulates a tree of multiple what-if scenarios dynamically unraveled during the course of a base simulation. The lecture will describe the new conceptual cloning framework and a prototype software system named CloneX that provides a new programming interface and an implementation of the cloning runtime scaled to supercomputing systems. CloneX efficiently and dynamically creates whole logical copies of what-if simulation trees across a large parallel system without the inordinate cost of full physical duplication of computation and memory. The approach is illustrated with example applications, including epidemiological outbreaks and forest fire outbreaks, whose aggregate runtime and memory consumption using cloning are decreased by two to three orders of magnitude relative to replicated runs. Performance data on large simulation trees using 1024 GPUs reinforce the promise of cloning as an effective approach that can be adopted to meet the concurrency and memory challenges at exascale.
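
The following is a minimal copy-on-write sketch of the cloning idea, assuming a toy key-value simulation state. It is meant only to illustrate why logical copies can avoid full physical duplication; it is not the CloneX programming interface.

```python
# Toy copy-on-write "cloning" of simulation state: each clone records only the
# state it has modified and falls back to its parent (and ultimately the shared
# base state) for everything else.

class SimClone:
    def __init__(self, base_state, parent=None):
        self.base = base_state   # shared, read-only base state
        self.parent = parent
        self.local = {}          # only the entities this clone has changed

    def read(self, key):
        if key in self.local:
            return self.local[key]
        if self.parent is not None:
            return self.parent.read(key)
        return self.base[key]

    def write(self, key, value):
        self.local[key] = value  # copy-on-write: store only the divergence

    def clone(self):
        return SimClone(self.base, parent=self)

# A base run spawns two what-if branches at a decision point; memory grows with
# the differences between branches, not with the number of branches.
base = SimClone({"infected": 10, "quarantine": False})
branch_a, branch_b = base.clone(), base.clone()
branch_a.write("quarantine", True)
print(branch_a.read("quarantine"), branch_b.read("quarantine"))  # True False
```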

Matt W. Mutka
March 28, 2017

Professor and Chairperson, Computer Science and Engineering, Michigan State University

Matt W. Mutka received the B.S. degree in electrical engineering from the University of Missouri-Rolla, the M.S. degree in electrical engineering from Stanford University, and the Ph.D. degree in Computer Sciences from the University of Wisconsin-Madison. He is on the faculty of the Department of Computer Science and Engineering at Michigan State University, where he is currently professor and chairperson. He has been a visiting scholar at the University of Helsinki, Helsinki, Finland, and a member of technical staff at Bell Laboratories in Denver, Colorado. He is an IEEE Fellow and was honored with the MSU Distinguished Faculty Award. His current research interests include mobile computing, sensor networking and wireless networking.

Accurate indoor position and movement information of devices enables numerous opportunities for location-based services. Services such as guiding users through buildings, highlighting nearby services, or tracking the number of steps taken are some of the opportunities available when devices compute accurate positioning information. GPS provides accurate localization results in an outdoor environment, such as navigation information for vehicles. Unfortunately, GPS cannot be applied indoors pervasively due to various sources of interference. Although extensive research has been dedicated to this field, accurate indoor location information remains a challenge without the incorporation of expensive devices or sophisticated infrastructures within buildings. We explore some practical approaches for indoor map construction and indoor positioning.

Lecture Archive, Fall 2016

Zhiru Zhang
December 2, 2016

Assistant Professor, School of Electrical and Computer Engineering at Cornell University

Zhiru Zhang is an assistant professor in the School of ECE at Cornell University and a member of the Computer Systems Laboratory. His current research focuses on high-level design automation for heterogeneous computing. His work has been recognized with a best paper award from TODAES (2012), the Ross Freeman award for technical innovation from Xilinx (2012), an NSF CAREER award (2015), a DARPA Young Faculty Award (2015), and the IEEE CEDA Ernest S. Kuh Early Career Award (2015). He co-founded AutoESL Design Technologies, Inc. to commercialize his PhD dissertation research on high-level synthesis. AutoESL was acquired by Xilinx in 2011 and the AutoESL tool was rebranded as Vivado HLS after the acquisition.

Systems across the computing spectrum, from handheld devices to warehouse-sized datacenters, are now power limited and increasingly turning to specialized hardware accelerators for improved performance and energy efficiency. Heterogeneous architectures integrating reconfigurable devices like FPGAs show significant potential in this role. However, there is still a considerable productivity gap between register-transfer-level FPGA design and traditional software design. Enabling high-level programming of FPGAs is a critical step in bridging this gap and pushing FPGAs further into the computing space.

In this talk, Zhiru will briefly review the progress he has made in research and commercialization on high-level synthesis (HLS) for FPGAs. In particular, he will use a few real-life applications as case studies to motivate the need for HLS tools, and explore their benefits and limitations. He will further describe novel synthesis algorithms that significantly improve the quality of the synthesized RTLs. Afterwards, he will outline major research challenges and introduce some of his ongoing work along those directions.

David Pan
October 7, 2016

Professor, Department of Electrical & Computer Engineering at The University of Texas at Austin

Engineering Foundation Endowed Professorship #1

David Z. Pan received his PhD degree in Computer Science from UCLA in 2000. He was a Research Staff Member at IBM T. J. Watson Research Center from 2000 to 2003. He is currently Engineering Foundation Professor at the Department of Electrical and Computer Engineering, University of Texas at Austin. He has published over 250 refereed journal/conference papers and 8 US patents, and graduated 20 PhD students. He has served on many journal editorial boards (TCAD, TVLSI, TCAS-I, TCAS-II, TODAES, Science China Information Science, etc.) and conference committees (DAC, ICCAD, DATE, ASPDAC, ISLPED, ISPD, etc.). He has received a number of awards, including the SRC Technical Excellence Award (2013), DAC Top 10 Author Award in Fifth Decade (2013), DAC Prolific Author Award (2013), ASP-DAC Frequently Cited Author Award (2015), 13 Best Paper Awards at premier venues, Communications of the ACM Research Highlights (2014), ACM/SIGDA Outstanding New Faculty Award (2005), NSF CAREER Award (2007), NSFC Overseas and Hong Kong/Macau Scholars Collaborative Research Award, SRC Inventor Recognition Award three times, IBM Faculty Award four times, UCLA Engineering Distinguished Young Alumnus Award (2009), UT Austin RAISE Faculty Excellence Award, many international CAD contest awards, among others. He is an IEEE Fellow.

As the semiconductor industry enters the era of extreme scaling (14nm, 11nm, and beyond), IC design and manufacturing challenges are exacerbated, due to the adoption of multiple patterning and other emerging lithography technologies. Meanwhile, new ways of “equivalent” scaling such as 2.5D/3D have gained tremendous interest and initial industry adoption, and new devices such as nanophotonics are making headway to break the interconnect scaling bottleneck. Furthermore, hardware security has become a major concern due to fab outsourcing, extensive IP reuse, etc.; thus unique identification/authentication and various IP protection schemes are in high demand. This talk will discuss some key challenges and recent results on bridging the design and technology gaps for manufacturability, reliability, and security for future ICs and integrated systems.

Kamau Bobb
October 4, 2016

NSF Program Officer in the Directorate for Computer & Information Science & Engineering

Research Scientist for Policy Analysis at CEISMC at Georgia Tech

Dr. Kamau Bobb is on assignment to the National Science Foundation where he is a Program Officer in the Directorate for Computer & Information Science & Engineering.  His portfolio includes CSforAll, INCLUDES, computing education, cyberlearning, and broadening participation in STEM fields. At Georgia Tech he is a research scientist for Science and Technology Policy Analysis and one of the chief strategists for STEM education for the Georgia Tech Research Institute (GTRI). Prior to his current assignment he served as a liaison to the University System of Georgia (USG) and was the Director of the USG system-wide STEM Initiative.  Dr. Bobb has more than 10 years of experience in STEM policy analysis and program implementation.  Prior to joining the faculty at Georgia Tech he was a science and technology policy analyst at SRI International where he conducted research on university strategic planning and STEM workforce analysis for clients in the United States and in the Middle East.  Dr. Bobb holds a Ph.D. in Science and Technology Policy from Georgia Tech and M.S. and B.S. degrees in Mechanical Engineering from the University of California, Berkeley.

President Obama’s initiative, CS for All, is a call to the nation to improve computer science instruction for all students. Against the backdrop of tremendous educational disparities, the “for all” clause takes on particular importance. Dr. Bobb will outline the role that the National Science Foundation is taking in this initiative. He will discuss the nuanced challenges of equity in CS education and the hopeful prospects for the future of this critical national initiative.

Keith Marzullo
September 15, 2016

Dean of the College of Information Studies (iSchool) at The University of Maryland

Former White House Office of Science and Technology Policy Director of Networking and Information Technology Research and Development (NITRD) Program

Former NSF Division Director for the Computer and Network Systems (CNS) Division in the Computer and Information Science and Engineering (CISE) Directorate

Dr. Keith Marzullo started on August 1, 2016, as the Dean of the College of Information Studies (also known as the iSchool) at the University of Maryland, College Park. He joined the iSchool from the White House Office of Science and Technology Policy, where he directed the Networking and Information Technology Research and Development (NITRD) Program. NITRD enables interagency coordination and cooperation among more than 20 member agencies, which together spend over $4B a year on NIT R&D.

Dr. Marzullo joined NITRD from the National Science Foundation (NSF), where he served as the Division Director for the Computer and Network Systems (CNS) Division in the Computer & Information Science & Engineering (CISE) Directorate. He also served as Co-Chair of the NITRD Cybersecurity and Cyber Physical Systems R&D Senior Steering Groups.

Prior to joining NSF, Dr. Marzullo was a faculty member at the University of California, San Diego’s Computer Science and Engineering Department from 1993-2014, and served as the Department Chair from 2006-2010. He has also been on the faculty of the Computer Science Department of Cornell University (1986-1992), a Professor at Large of the Department of Informatics at the University of Tromsø (1999-2005), and a principal in a startup (ISIS Distributed Systems, 1998-2002). Dr. Marzullo received his Ph.D. in Electrical Engineering from Stanford University, where he developed the Xerox Research Internet Clock Synchronization protocol, one of the first practical fault-tolerant protocols for keeping widely-distributed clocks synchronized with each other. His research interests are in distributed computing, fault-tolerant computing, cybersecurity, and privacy. Dr. Marzullo is also an ACM Fellow.

Computer science and engineering is undergoing explosive increases in enrollment, for which there are many theories and predictions. Computer science is also undergoing an explosive increase in another dimension: what is included in the domain. Dr. Marzullo will discuss this second explosive increase in the context of the President’s Council of Advisors on Science and Technology, and will examine some of the new areas that the Federal government has targeted. He will also speculate on how computer science and engineering departments might consider how to adapt to this situation.

Lecture Archive, Fall ’15 – Spring ’16

Amy Apon

Program Director, National Science Foundation
Professor and Chair of the Computer Science Division in the School of Computing, Clemson University

Dr. Amy Apon currently serves as a rotator to the National Science Foundation from Clemson University. A portion of the talk will include information about the NSF and NSF programs of interest to computer systems researchers.

At NSF, Apon is a Program Director in the Computer Systems Research (CSR), eXploiting Parallelism and Scalability (XPS), BigData, and Smart and Connected Health (SCH) programs. At Clemson University, Apon has held the position of Professor and Chair of the Computer Science Division in the School of Computing since 2011. As Chair, Apon has led the creation of a new program to grow the graduate enrollment, “CI SEEDS – Seeding the Next Generation Cyberinfrastructure Ecosystem,” and has seen the number of publications and research expenditures more than double in the Division. Apon is co-Director of the Complex Systems, Analytics, and Visualization Institute (CSAVI), which includes the Big Data Systems and Analytics Lab. Her research focus includes performance modeling and analysis of parallel and distributed systems, data-intensive computing in the application area of intelligent transportation systems, technologies for cluster computing, and the impact of high performance computing on research competitiveness.

Apon was the elected Vice Chair and then Chair (2009-2012) of the Coalition for Academic Scientific Computation, an organization of more than 70 leading U.S. academic institutions. Apon has led multiple successful collaborative NSF-funded projects that support high performance computing, including several awards from the NSF MRI program. Prior to joining Clemson, Apon was Professor at the University of Arkansas where she led the effort to develop the high performance computing capability for the State of Arkansas. The Arkansas High Performance Computing (HPC) Center was funded by the Arkansas Science and Technology Authority in May 2008, and established under her direction. With the acquisition of Red Diamond, Arkansas had its first computer ranked on the Top 500 list, in June 2005. The Arkansas Cyber-infrastructure Task Force Act was passed through her efforts in 2009. Dr. Apon has published over 100 peer-reviewed publications in areas of research, education, and impact of parallel and distributed computing.

The challenges of distributed and parallel data processing systems include heterogeneous network communication, a mix of storage, memory, and computing devices, and common failures of communication and devices. These complex computer systems of today are difficult, if not impossible, to model analytically. Experimentation using production-quality hardware and software and realistic data is required to understand system tradeoffs. At the same time, experimental evaluation has challenges, including access to hardware resources at scale, robust workload and data characterization, configuration management of software and systems, and sometimes insidious optimization issues around the mix of software stacks or hardware/software resource allocation. This talk presents a framework for experimental research in computer science, and examples and research challenges of experimentation as a tool in computer systems research within this framework.

Todd Austin

Professor of Electrical Engineering and Computer Science at the University of Michigan in Ann Arbor

Todd Austin is a Professor of Electrical Engineering and Computer Science at the University of Michigan in Ann Arbor. His research interests include computer architecture, robust and secure system design, hardware and software verification, and performance analysis tools and techniques. Currently Todd is director of C-FAR, the Center for Future Architectures Research, a multi-university SRC/DARPA-funded center that is seeking technologies to scale the performance and efficiency of future computing systems. Prior to joining academia, Todd was a Senior Computer Architect in Intel’s Microcomputer Research Labs, a product-oriented research laboratory in Hillsboro, Oregon. Todd is the first to take credit (but the last to accept blame) for creating the SimpleScalar Tool Set, a popular collection of computer architecture performance analysis tools. Todd is co-author (with Andrew Tanenbaum) of the undergraduate computer architecture textbook, “Structured Computer Organization, 6th Ed.”

In addition to his work in academia, Todd is founder and President of SimpleScalar LLC and co-founder of InTempo Design LLC.

In 2002, Todd was a Sloan Research Fellow, and in 2007 he received the ACM Maurice Wilkes Award for “innovative contributions in Computer Architecture including the SimpleScalar Toolkit and the DIVA and Razor architectures.”

Todd received his PhD in Computer Science from the University of Wisconsin in 1996.

Energy and power constraints have emerged as one of the greatest lingering challenges to progress in the computing industry. In this talk, I will highlight some of the “rules” of low-power design and show how they bind the creativity and productivity of architects and designers. I believe the best way to deal with these rules is to disregard them, through innovative design solutions that abandon traditional design methodologies. Releasing oneself from these ties is not as hard as one might think. To support my case, I will highlight two rule-breaking design technologies from my work. The first technique (Razor) combines low-power designs with resiliency mechanisms to craft highly introspective and efficient systems. The second technique (Subliminal) embraces subthreshold voltage design, which holds great promise for highly energy efficient systems.

Weisong Shi

Charles H. Gershenson Distinguished Faculty Fellow and a Professor of Computer Science at Wayne State University

Weisong Shi is a Charles H. Gershenson Distinguished Faculty Fellow and a Professor of Computer Science at Wayne State University. There he directs the Mobile and Internet SysTems Laboratory (MIST), Intel IoT Innovator Lab, and the Wayne Wireless Health Initiative, investigating performance, reliability, power- and energy-efficiency, trust and privacy issues of networked computer systems and applications.

Dr. Shi was on leave with the National Science Foundation as a Program Director in the Division of Computer and Network Systems, Directorate of Computer and Information Science and Engineering, during 2013 – 2015, where he was responsible for the Computer and Network Systems (CNS) Core CSR Program and two key crosscutting programs: Cyber-Innovation for Sustainability Science and Engineering (CyberSEES) and Smart and Connected Health (SCH). More information can be found at http://www.cs.wayne.edu/~weisong.

Energy-efficiency is one of the most important design goals of today’s computing platforms, including both mobile devices at the edge of the Internet and data centers in the cloud. In this talk, Dr. Shi will share his vision of energy-efficient computing and his group’s recent work toward this vision, including an energy-efficient model for multicore systems, several tools for energy-efficient software analysis and optimization, and workload-aware elastic customization techniques on servers. In the second part of the talk, Dr. Shi will share his experience with NSF proposal writing.

Yale Patt

Professor of ECE and the Ernest Cockrell, Jr. Centennial Chair in Engineering at The University of Texas at Austin

Yale N. Patt is Professor of ECE and the Ernest Cockrell, Jr. Centennial Chair in Engineering at The University of Texas at Austin. He continues to thrive on teaching both the large (400+ students) freshman introductory course in computing and advanced graduate courses in microarchitecture, directing the research of eight PhD students, and consulting in the microprocessor industry. Some of his research ideas (e.g., HPS, the two-level branch predictor, ACMP) have ended up in the cutting-edge chips of Intel, AMD, etc., and some of his teaching ideas have resulted in his motivated bottom-up approach for introducing computing to serious students. The textbook for his unconventional approach, “Introduction to Computing Systems: from bits and gates to C and beyond,” co-authored with Prof. Sanjay Jeram Patel of the University of Illinois (McGraw-Hill, 2nd ed. 2004), has been adopted by more than 100 universities world-wide. He has received the highest honors in his field for both his research (the 1996 IEEE/ACM Eckert-Mauchly Award) and teaching (the 2000 ACM Karl V. Karlstrom Outstanding Educator Award). He was the inaugural recipient of the recently established IEEE Computer Society Bob Rau Award in 2011, and was named the 2013 recipient of the IEEE Harry Goode Award. He is a Fellow of both IEEE and ACM, and a member of the National Academy of Engineering. More detail can be found on his web page www.ece.utexas.edu/~patt.

After 50 years of teaching, I am convinced that the conventional method of introducing computing to all engineering students at most engineering schools is wrong, and that goes double for computer science and computer engineering majors. Teaching via a high level language (and worse yet an object-oriented course in JAVA) is a mistake and is almost guaranteed to ensure that the student will come away with little more than a superficial awareness of programming and practically no awareness of what the computer is doing. Two things I would like to do today: First, discuss why understanding computers is a core competency for engineering students of 2015 as much as physics and math are. Second, describe my approach, which I call “motivated bottom up,” and explain why I think it makes sense, how it affects the rest of the curriculum, and my experiences with it. I insist it is the correct introduction to computing to prepare students for doing any of the following three things in the future: (1) design a system that includes an embedded controller, whether that system be the gun director of a naval vessel, the cockpit controller of an airplane, or an automobile, (2) program a computer to solve some meaningful problem, or (3) design a computer system, either hardware or software or both for others to use.

The problem with the JAVA approach (or if you prefer, of the FORTRAN approach in the old days) is that students have no understanding of how a computer works, and so they are forever memorizing patterns and hoping that they can apply those patterns to the application at hand. Unfortunately, memorizing is not learning and the results have been as expected. I introduced the motivated bottom-up approach to the freshman class at Michigan in 1995, and have continually taught and refined it ever since. I start with the switch level behavior of a transistor and build from there. From transistors to gates to muxes, decoders, gated latches, finite state machines, memory, the LC-3 computer, machine language programming, assembly language programming, and finally C programming. Students continue to build on what they already know, continually raising the level of abstraction. I have taught the course 13 times over the last 20 years to freshmen at Michigan and Texas, and an accelerated version of the course at USTC in Hefei and Zhejiang University in Hangzhou at the invitation of the Chinese.

We have been hearing about the end of Moore’s Law for 30 years. Another ten years and it will probably happen. What will that mean? More recently, there has been the suggestion that the days of the Von Neumann machine are numbered. In fact, we debated that notion at Micro in Cambridge last December, only to realize that most people predicting the demise of Von Neumann don’t even know what a Von Neumann machine is. Finally, we have already seen the end of Dennard Scaling and its influence on microprocessor design. But there is no vacuum when it comes to microprocessor hype. Dark silicon, quantum computers, and approximate computing have all rushed in to fill the void. What I will try to do in this talk is examine each of these items from the standpoint of what they will mean for the microprocessor of the year 2025, and why the transformation hierarchy remains more critical than ever.