RESEARCH KEYNOTE SERIES
Henning Schulzrinne
(Professor, Columbia University)
Bio: Prof. Henning Schulzrinne is the Levi Professor of Computer Science at Columbia University. He received his Ph.D. from the University of Massachusetts Amherst, was a Member of Technical Staff at AT&T Bell Laboratories, and was an associate department head at GMD-Fokus (Berlin) before joining the Computer Science and Electrical Engineering departments at Columbia University. He served as chair of the Department of Computer Science from 2004 to 2009, and as Engineering Fellow, Technology Advisor, and Chief Technology Officer at the US Federal Communications Commission (FCC) from 2010 to 2017. In 2019-2020, he worked as a Technology Fellow in the US Senate.
He has published more than 250 journal and conference papers, and more than 70 Internet RFCs. Protocols co-developed by him, such as RTP, RTSP and SIP, are used by almost all Internet telephony and multimedia applications.
He is a Fellow of the ACM and IEEE, has received the New York City Mayor's Award for Excellence in Science and Technology, the VON Pioneer Award, TCCC service award, IEEE Internet Award, IEEE Region 1 William Terry Award for Lifetime Distinguished Service to IEEE, the UMass Computer Science Outstanding Alumni recognition, and is a member of the Internet Hall of Fame.
Title of Talk:- Why don't we all have broadband already?
Abstract:- For more than a decade, the United States and Europe, in particular, have been struggling to provide high-speed internet connectivity ("broadband") to rural areas and to low-income households, but there are still large swaths of both urban and rural areas without broadband fast enough to support telehealth, interactive distance education, and remote work - all applications that are now critical to keeping society functioning during the pandemic. I will discuss why providing broadband has proven to be hard even as technology has improved, and why the promise of some technology solutions, such as 5G, TV whitespaces, broadband-over-powerline, and satellite, has not been realized. There are, however, promising solutions that combine technology, policy, and economic incentives. The talk will also address the challenges of and attempts to provide low-cost broadband to low-income households, and some of the legislative and policy ideas that have been suggested.
Kris Pister
(Professor, UC Berkeley)
Bio: Professor Pister received a B.A. in Applied Physics from UC San Diego in 1986, and an M.S. and Ph.D. in EECS from UC Berkeley in 1989 and 1992, respectively. Prior to joining the EECS faculty in 1996, he taught in the Electrical Engineering Department at UCLA.
He developed Smart Dust, a project with the goal of putting a complete sensing and communication platform inside a cubic millimeter. For this work, he was awarded the second annual Alexander Schwarzkopf Prize for Technological Innovation in 2006 from the I/UCRC Association, for developing and successfully commercializing Smart Dust. He has also focused his energies on synthetic insects, which he has characterized as "basically Smart Dust with legs." Professor Pister was awarded the Alfred F. Sperry Founder Award in 2009 for his "contributions to the science and technology of instrumentation, systems, and automation."
Kris is a co-Director of the Berkeley Sensor and Actuator Center (BSAC) and the Ubiquitous Swarm Lab.
Title of Talk:- The Single-Chip micro-Mote: Crystal-free Standards-Compatible Mesh Networks
Abstract:- To enable ubiquitous wireless sensor networks and the Internet of Things, sensors need to be inexpensive and small, but they also need to be standards compliant. We have demonstrated a CMOS chip which, with only three wire bonds, is able to send BLE packets to cell phones and to join standards-based time-synchronized channel hopping mesh networks using 6LoWPAN over 2.4 GHz 802.15.4 radios. While many chips speak the underlying RF protocols, the Single-Chip micro-Mote is the first able to do so with no external crystals, bypass capacitors, balun, or in fact any external components at all: simply applying power, ground, and an antenna is all that is needed. In addition, the chip can receive lighthouse localization beacons, enabling 3D position calculation with centimeter accuracy.
Vijayakumar Bhagavatula
(Professor, Carnegie Mellon University)
Bio: Prof. Vijayakumar Bhagavatula is currently the Director of Carnegie Mellon University Africa in Kigali, Rwanda. Prior to coming to Rwanda, Prof. Kumar was the Associate Dean as well as the interim Dean for the College of Engineering at CMU, Pittsburgh, USA. Prof. Kumar has made pioneering research contributions to computer vision and pattern recognition methods, with applications in biometric recognition and autonomous driving. His publications include the book Correlation Pattern Recognition, 24 book chapters, about 410 conference papers and 210 journal papers. He is also a co-inventor of 15 patents. He is a Fellow of OSA, SPIE, IEEE, IAPR (International Association of Pattern Recognition), NAI (National Academy of Inventors) and AAAS (American Association for Advancement of Science).
Title of Talk:- Learning from weakly-supervised image data
Abstract:- Deep learning (DL) approaches are becoming the dominant solutions in many computer vision applications. The success of DL schemes depends on the availability of labeled training data. However, in some applications, such as semantic segmentation of images, the needed pixel-level labeling is laborious and may be impractical. It is therefore beneficial if the learning approach can make use of weakly-supervised data. In this talk, we will discuss some DL approaches aimed at using three types of weakly-supervised data: incompletely supervised (e.g., test data is from a different domain than the training data), inexactly supervised (e.g., labels are at the image level rather than the object level), and inaccurately supervised (e.g., labels are noisy).
Sanjay Shakkottai
(Professor, University of Texas)
Bio: Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is with The University of Texas at Austin, where he is currently the Temple Foundation Endowed Professor No. 3, and a Professor in the Department of Electrical and Computer Engineering. He received the NSF CAREER award in 2004, and was elected as an IEEE Fellow in 2014. His research interests lie at the intersection of algorithms for resource allocation, statistical learning and networks, with applications to wireless communication networks and online platforms.
Title of Talk:- Multi-agent multi-armed bandits
Abstract:- We consider a multi-agent multi-armed bandit, where N agents collaborate by exchanging only recommendations through pairwise gossip over a network to determine the best action. Such settings are motivated by distributed recommendation systems (e.g., in data centers for online advertisements or in social recommendation systems). We establish that, even with very limited communication, the regret per agent is a factor of order N smaller than in the case of no collaboration. Furthermore, we show that the communication constraints have only a second-order effect on regret. We then consider the setting where a few malicious agents could recommend poor actions (e.g., machine faults in distributed computing or spam in social recommendation systems). We propose a scheme in which honest agents learn who is malicious and dynamically reduce communication with them (adaptive blocklisting), thus ensuring that our algorithm is robust against any malicious recommendation strategy. Based on joint work with Ronshee Chawla, Abishek Sankararaman, Ayalvadi Ganesh, Daniel Vial and R. Srikant.
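The collaboration mechanism described in the abstract can be illustrated with a toy simulation. The sketch below is illustrative only, not the speaker's actual algorithm: it assumes simple UCB learners over Bernoulli arms, uniformly random gossip partners, and agents that exchange only their empirically best arm as a recommendation; all names and parameters are hypothetical.

```python
import math
import random

def ucb_pick(counts, sums, t, arms):
    """Pick the arm in `arms` with the highest UCB index."""
    best, best_idx = None, -float("inf")
    for a in arms:
        if counts[a] == 0:
            return a  # play each known arm at least once
        idx = sums[a] / counts[a] + math.sqrt(2 * math.log(t + 1) / counts[a])
        if idx > best_idx:
            best, best_idx = a, idx
    return best

def gossip_bandit(means, n_agents=4, horizon=3000, gossip_every=50, seed=0):
    """Each agent runs UCB over a small arm subset; every `gossip_every`
    rounds it asks one random other agent for that agent's empirically
    best arm (a recommendation) and adds it to its own subset."""
    rng = random.Random(seed)
    K = len(means)
    active = [set(rng.sample(range(K), 2)) for _ in range(n_agents)]
    counts = [[0] * K for _ in range(n_agents)]
    sums = [[0.0] * K for _ in range(n_agents)]
    for t in range(horizon):
        for i in range(n_agents):
            a = ucb_pick(counts[i], sums[i], t, sorted(active[i]))
            r = 1.0 if rng.random() < means[a] else 0.0  # Bernoulli reward
            counts[i][a] += 1
            sums[i][a] += r
        if (t + 1) % gossip_every == 0:
            for i in range(n_agents):
                j = rng.randrange(n_agents - 1)  # random gossip partner
                j = j if j < i else j + 1
                played = [a for a in range(K) if counts[j][a] > 0]
                rec = max(played, key=lambda a: sums[j][a] / counts[j][a])
                active[i].add(rec)  # only the recommendation is exchanged
    return active, counts
```

Each agent explores only a handful of arms at a time while recommendations spread good arms through the network, which is the intuition behind the order-N per-agent regret reduction mentioned above.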
(Professor, University of Texas)
Bio: Prof. Constantine is a Professor in the ECE department of The University of Texas at Austin. He received a Ph.D. in EECS from the Massachusetts Institute of Technology, in the Laboratory for Information and Decision Systems (LIDS), and an A.B. in Mathematics from Harvard University. He received the NSF CAREER award in 2011.
His current research interests focus on decision-making in large-scale complex systems, with an emphasis on learning and computation. Specifically, he is interested in robust and adaptable optimization, high-dimensional statistics and machine learning, robustness, and applications to large-scale networks.
Title of Talk:- Tracking and Curing Epidemics with Uncertainty
Abstract:- Epidemic processes can model anything that spreads. As such, they are a useful tool for studying not only human diseases, but also network attacks, spikes in the brain, the propagation of real or fake news, the spread of viral tweets, and other processes. In this talk, we're interested in epidemics spreading on an underlying graph.
Currently, most theoretical research in this field assumes some form of perfect observation of the epidemic process. This is an unrealistic assumption for many real-life applications, as the recent COVID-19 pandemic tragically demonstrated: data on human epidemics is scarce, delayed, and/or imprecise, and symptoms may appear in a non-deterministic fashion and order of infection - if they appear at all. We show not only that previously developed algorithms are not robust to noise in the observations, but also that some theoretical results cannot be adapted to this setting. In other words, uncertainty fundamentally changes how we must approach epidemics on graphs.
C. Lee Giles
(Professor, Pennsylvania State University)
Bio: Dr. C. Lee Giles is the David Reese Professor at the College of Information Sciences and Technology at the Pennsylvania State University, University Park, PA. He is also Graduate College Professor of Computer Science and Engineering, courtesy Professor of Supply Chain and Information Systems, and Director of the Intelligent Systems Research Laboratory. He directs the Next Generation CiteSeer project, CiteSeerX. He has been associated with Columbia University, the University of Maryland, the University of Pennsylvania, Princeton University, and the University of Trento. His current research interests are in intelligent information processing systems, such as intelligent cyberinfrastructure, with a special interest in computer and information science, chemistry, materials science, economics, medicine, biology, and archaeology, as well as novel web tools and search engines. He is also interested in scholarly big data for large-scale knowledge and information management. This involves information extraction and retrieval, entity disambiguation, metadata and knowledge extraction, text and data mining, machine and deep learning, digital libraries, web services, and social networks.
With collaborators and current and former students, he has published over 500 journal and conference papers, book chapters, edited books, and proceedings. His work has over 43,000 citations and an h-index of 100 according to Google Scholar, one of the top 200 h-indices in Computer Science and in Information Retrieval.
He is a Fellow of the ACM, a Fellow of the IEEE, and a Fellow of the International Neural Network Society (INNS). He has twice received the IBM Distinguished Faculty Award. In 2018 he received from the National Federation of Advanced Information Services (NFAIS) the Miles Conrad Award. His previous positions include a Senior Research Scientist at NEC Research Institute (now NEC Labs), Princeton, NJ; a Program Manager at the Air Force Office of Scientific Research; and a research scientist at the Naval Research Laboratory, Washington, D.C.
Title of Talk:- Recurrent Neural Networks: XAI with Automata and Grammars
Abstract:- Neural networks are often considered to be black-box models. However, discrete-time recurrent neural networks (RNNs), which are among the most commonly used, have properties that lend themselves to similarities with automata and formal grammars, and thus to the extraction and insertion of grammar rules. Assume that we have a discrete-time RNN that has been trained on sequential data. For each discrete step in time, or a collection thereof, an input can be associated with the RNN's current and previous activations. We can then cluster these activations into states to obtain previous-state-to-current-state transitions, each governed by an input. From a formal grammar perspective, these state-to-state transitions can be considered production rules. Once the rules are extracted, a minimal unique set of states can be readily obtained. It can be shown that, for learning known production rules of regular grammars, the extracted rules are stable and independent of initial conditions and, at times, outperform the trained source neural network in terms of classification accuracy. Theoretical work has also shown that regular-grammar production rules can be easily inserted into certain types of RNNs, and has proved that the resulting systems are stable. For many problem areas, such as finance, medicine, and security, black-box models are not acceptable; the methods discussed here are a step towards explainable AI (XAI). They have the potential to uncover what the trained RNN is doing from a regular grammar and finite state machine perspective. We will discuss the strengths, weaknesses, and issues associated with using these and related methods, and applications such as verification.
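The extraction procedure the abstract describes (cluster activations into discrete states, then read transitions as production rules) can be illustrated with a toy example. The sketch below is illustrative only, not the speaker's method: it substitutes a hand-built stand-in for a trained RNN cell that tracks the parity of 1s in a bit string, and uses nearest-integer rounding in place of a real clustering step.

```python
def rnn_step(h, x):
    # Stand-in for a trained RNN cell: the hidden "activation" settles near
    # 0.0 (even number of 1s seen) or 1.0 (odd), plus small deterministic noise.
    target = (round(h) + x) % 2
    return target + 0.03 * ((round(h) + x) % 3 - 1)

def extract_rules(strings):
    """Run the cell over training strings, quantize activations into
    discrete states (crude clustering), and record each transition as a
    production rule: (state, input) -> next_state."""
    rules = {}
    for s in strings:
        h = 0.0
        for x in s:
            q = round(h)              # cluster the current activation
            h = rnn_step(h, x)
            rules[(q, x)] = round(h)  # rule  q --x--> q'
    return rules

def run_automaton(rules, s):
    """Replay the extracted production rules as a finite state machine."""
    q = 0
    for x in s:
        q = rules[(q, x)]
    return q
```

On a handful of short bit strings, the four extracted rules form exactly the two-state DFA for parity, so the minimal unique state set falls out directly, mirroring the behavior the abstract describes for regular grammars.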
Tracy Anne Hammond
(Professor, Texas A&M University)
Bio: Prof. Tracy Hammond (Ph.D. in Computer Science, Massachusetts Institute of Technology) is the Director of the Sketch Recognition Lab and a Professor of Computer Science & Engineering at Texas A&M University. She is the Chair of the Engineering Education Faculty, and a member of the Institute of Data Science, the Center for Remote Health Technologies & Systems, and the Center for Population Health and Aging. Her research interests include sketch recognition, perception, cognitive behavior, human-computer interaction, artificial intelligence, and concept learning.
Title of Talk:- Leveraging Sketch, Eye, and Activity Recognition to Understand and Interpret Messy Human Data
Abstract:- Natural human interactions are messy and filled with ambiguity. However, using AI and machine learning, we can recognize and interpret these actions to provide valuable insights into the human’s intentions. Using sketch recognition, not only can we recognize what someone has drawn, but we can also identify age and health characteristics. With eye-tracking, we can identify what people are doing, who they are, what their expertise is, and what their opinion is of what they are looking at. With activity recognition, we can go beyond the standard fitness realm to recognize activities of daily living, such as brushing teeth, taking medicine, and washing hands, using just the accelerometers in a watch. All of these recognition capabilities are based on a common recognition framework and feature analysis. This talk will provide insights into why we are able to recognize such personal activities so well, even with small amounts of training data.
Full Paper Submission: 28th September 2020
Acceptance Notification: 9th October 2020
Final Paper Submission: 24th October 2020
Early Bird Registration: 19th October 2020
Presentation Submission: 26th October 2020
Conference: 28th-31st October 2020
• Conference Proceedings will be submitted for publication in the IEEE Xplore® digital library.
• Best Paper Award will be given for each track.
• Conference Record No 51285