KEYNOTE TALK SERIES
Tarek Abdelzaher
(Professor, University of Illinois at Urbana-Champaign)
Bio: Tarek Abdelzaher received his Ph.D. in Computer Science from the University of Michigan in 1999. He is currently a Sohaib and Sara Abbasi Professor and Willett Faculty Scholar at the Department of Computer Science, University of Illinois at Urbana-Champaign. He has authored/coauthored more than 300 refereed publications in real-time computing, distributed systems, sensor networks, and control. He served as Editor-in-Chief of the Journal of Real-Time Systems, and has served as Associate Editor of the IEEE Transactions on Mobile Computing, IEEE Transactions on Parallel and Distributed Systems, IEEE Embedded Systems Letters, the ACM Transactions on Sensor Networks, and the Ad Hoc Networks Journal, among others. Abdelzaher’s research interests lie broadly in understanding and influencing the performance and temporal properties of networked embedded, social, and software systems in the face of increasing complexity, distribution, and degree of interaction with an external physical and social environment. Tarek Abdelzaher is a recipient of the IEEE Outstanding Technical Achievement and Leadership Award in Real-Time Systems (2012), the Xerox Award for Faculty Research (2011), as well as several best paper awards. He is a Fellow of IEEE and ACM.
Title: Intelligent Edge Services and Foundation Models for the Ubiquitous Internet of Things
Abstract: Advances in neural networks revolutionized modern machine intelligence, but important challenges remain when applying these solutions in ubiquitous computing and IoT contexts; specifically, on lower-end embedded devices with multimodal sensors and distributed heterogeneous hardware. The talk discusses challenges in offering machine intelligence services to support applications in resource-constrained distributed environments. The intersection of ubiquitous IoT applications, real-time requirements, distribution challenges, and AI capabilities motivates several important research directions. For example, how to support efficient execution of machine learning components on embedded edge devices while retaining inference quality? How to reduce the need for expensive manual labeling of application data? How to improve the responsiveness of AI components to critical real-time stimuli in their physical environment? How to prioritize and schedule the execution of intelligent data processing workflows on edge-device GPUs? How to exploit data transformations that lead to sparser representations of external physical phenomena to attain more efficient learning and inference? How to develop foundation models for IoT that offer extended inference capabilities from time-series data, analogous to ChatGPT's inference capabilities from text? The talk discusses recent advances in edge AI and foundation models and presents evaluation results in the context of different real-time IoT applications.
Yiannis Aloimonos
(Professor, University of Maryland)
Bio: Professor Aloimonos holds a Ph.D. in Computer Science from the University of Rochester. His research is devoted to the principles governing the design and analysis of real-time systems that possess perceptual capabilities, for the purpose of both explaining animal vision and designing seeing machines. Such capabilities have to do with the ability of the system to control its motion and the motion of its parts using visual input (navigation and manipulation) and the ability of the system to break up its environment into a set of categories relevant to its tasks and recognize these categories (categorization and recognition).
Title for talk: The universal grammar of action: A key to AI
Abstract: Robots of the future will need to learn the actions that humans perform. They will need to recognize these actions when they see them and they will need to perform these actions themselves. In this presentation, it is proposed that this learning task can be achieved using the action grammar.
Context-free grammars have long been popular in linguistics because they provide a simple and precise mechanism for describing how phrases in a natural language are built from smaller blocks. They also capture exactly the basic recursive structure of natural languages: the way clauses nest inside other clauses, and the way lists of adjectives and adverbs are followed by nouns and verbs. Similarly, for manipulation actions, every complex activity is built from smaller blocks involving hands and their movements, as well as objects, tools, and the monitoring of their state. Thus, interpreting a “seen” action is like understanding language, and executing an action from knowledge in memory is like producing language. Several experiments will be shown interpreting human actions in the arts-and-crafts or assembly domain through a parsing of the visual input on the basis of the manipulation grammar. To be realized, this parsing requires a network of visual processes that attend to objects and tools, segment and recognize them, track the moving objects and hands, and monitor the state of objects to calculate goal completion. These processes will also be explained, and we will conclude with demonstrations of robots learning how to perform tasks by watching videos of relevant human activities. Finally, we show that the use of LLMs allows the solution to scale up to thousands of actions.
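The grammar analogy above can be made concrete with a toy example. The sketch below defines a hypothetical context-free "manipulation action" grammar and a small backtracking recognizer; the production rules and action vocabulary are invented for illustration and are not the grammar presented in the talk.

```python
# Illustrative only: a toy context-free grammar over hypothetical
# manipulation primitives, in the spirit of the action grammar described
# above. Rules and terminal names are invented for this example.

GRAMMAR = {
    # A complete action: grasp something, then move or use it, then release.
    "ACTION": [["GRASP", "MOVE", "RELEASE"], ["GRASP", "USE", "RELEASE"]],
    "GRASP": [["reach", "close_hand"]],
    "MOVE": [["translate"], ["translate", "MOVE"]],       # recursive nesting
    "USE": [["tool_contact"], ["tool_contact", "USE"]],   # repeated tool use
    "RELEASE": [["open_hand"]],
}

TERMINALS = {"reach", "close_hand", "translate", "tool_contact", "open_hand"}

def parse(symbol, tokens, pos=0):
    """Return the set of token positions reachable after deriving `symbol`
    starting at tokens[pos] (full backtracking; grammar has no left recursion)."""
    if symbol in TERMINALS:
        return {pos + 1} if pos < len(tokens) and tokens[pos] == symbol else set()
    ends = set()
    for production in GRAMMAR[symbol]:
        positions = {pos}
        for sym in production:
            positions = {q for p in positions for q in parse(sym, tokens, p)}
        ends |= positions
    return ends

def is_valid_action(tokens):
    """A token stream parses iff some derivation of ACTION consumes it fully."""
    return len(tokens) in parse("ACTION", tokens)
```

In the same way that a sentence either does or does not parse under a language's grammar, an observed primitive sequence such as `reach, close_hand, translate, translate, open_hand` either does or does not parse as a well-formed action.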
E. (Eric) Glen Weyl
(Research Lead, Microsoft Research)
Bio: E. (Eric) Glen Weyl is Founder and Research Lead of the Plural Technology Collaboratory, a Microsoft Research Special Project; Founder of the RadicalxChange Foundation, the leading think tank in the Web3 space; and Founder and Chair of the Plurality Institute, which coordinates an academic research network developing technology for cooperation across difference. He is also Senior Advisor to the GETTING-Plurality Research Network at Harvard University. He previously led Web3 technical strategy at Microsoft’s Office of the Chief Technology Officer, was Co-Chair and Technical Lead of the Harvard Edmond J. Safra Center for Ethics Rapid Response Task Force on Covid-19, whose recommendations were endorsed by a dozen leading civil society organizations and the Biden campaign, and taught economics at the University of Chicago, Yale, Princeton, and Harvard.
He is co-author, with Eric Posner, of Radical Markets: Uprooting Capitalism and Democracy for a Just Society (2018) and, with Puja Ohlhaver and Vitalik Buterin, of the 2022 paper “Decentralized Society: Finding Web3’s Soul” (one of the 30 most downloaded papers of all time on the Social Science Research Network), and is working on an open, Web3-based collaborative book project with Taiwan’s Digital Minister, Audrey Tang: Plurality: Technology for Cooperative Diversity and Democracy. He is also the author of dozens of scholarly and popular articles in venues including the Proceedings of the National Academy of Sciences, the American Economic Review, the Harvard Law Review, the Proceedings of the ACM Conference on Economics and Computation, and the New York Times.
He has been recognized as one of the 10 most influential people in blockchain by CoinDesk, as one of the 25 people shaping the next 25 years of technology by WIRED and as one of the 50 most influential people by Bloomberg Businessweek, all in 2018. He graduated as Valedictorian of his Princeton undergraduate class in 2007 and received his PhD in economics also from Princeton in 2008.
Title for talk: Plurality: Technology for Cooperative Diversity and Democracy
Abstract: Technology and democracy are on a collision course because we have been pursuing visions for the future of technology (AI and crypto maximalism) antithetical to pluralism. An alternative we label “Plurality” has deep roots in the history of technology, having underpinned the development of the internet, but has not been articulated with the boldness of these competing visions. In this book, we aim to fill this lacuna and describe a path for research, development, and multisectoral investment to build a future where technology empowers and bridges the proliferation of social diversity. I will illustrate the agenda with three open problems: a plural approach to “privacy,” harnessing generative foundation models to empower diversity, and plural collective decision procedures.
Lijuan Wang
(Principal Research Manager, Microsoft)
Bio: Lijuan Wang is a Principal Research Manager in the Microsoft Cloud & AI division. She joined Microsoft Research Asia as a researcher in 2006, after completing her PhD at Tsinghua University, China, and moved to Microsoft Research in Redmond in 2016. Her research spans computer vision, vision and language, and multimodal foundation models. Over the years, she has been a key contributor to the development of technologies such as vision-language pretraining, image captioning, and object detection, many of which have been integrated into Microsoft products ranging from Cognitive Services to Office 365. She has published over 100 papers in top-tier conferences and journals and holds more than 20 granted or pending US patents. She is a Senior Member of the IEEE.
Talk title: Recent Advances in Vision Foundation Models
Abstract: Visual understanding at different levels of granularity has been a longstanding problem in the computer vision community. The tasks span from image-level tasks (e.g., image classification, image-text retrieval, image captioning, and visual question answering), region-level localization tasks (e.g., object detection and phrase grounding), to pixel-level grouping tasks (e.g., image instance/semantic/panoptic segmentation). Until recently, most of these tasks have been separately tackled with specialized model designs, preventing the synergy of tasks across different granularities from being exploited.
In light of the versatility of transformers and inspired by large-scale vision-language pre-training, the computer vision community is now witnessing a growing interest in building general-purpose vision systems, also called vision foundation models, that can learn from and be applied to various downstream tasks, ranging from image-level and region-level to pixel-level vision tasks.
In this keynote, we will cover the most recent approaches and principles at the frontier of learning and applying vision foundation models, including (1) Visual and Vision-Language Pre-training; (2) Generic Vision Interface; (3) Alignments in Text-to-image Generation; (4) Large Multimodal Models; and (5) Multimodal Agents.
Ernest Davis
(Professor, New York University)
Bio: Ernest Davis is Professor of Computer Science at the Courant Institute of Mathematical Sciences, New York University. He has a B.Sc. in mathematics from M.I.T. and a Ph.D. in Computer Science from Yale.
He is a leading scientist in the area of automating commonsense reasoning for artificial intelligence programs, particularly physical and spatial reasoning. He has authored over one hundred scientific papers, sixty book reviews, and thirty articles addressed to a general readership, including pieces in the New Yorker and the New York Times. His book with Gary Marcus, “Rebooting AI: Building Artificial Intelligence We Can Trust” (2019), surveys the state of the art and calls for the development of AI systems with deeper understanding by incorporating insights from cognitive science. Other books include “Representations of Commonsense Knowledge”, a textbook, and “Verses for the Information Age”, a collection of light verse.
Talk title: AI and Elementary Science and Math Word Problems
Abstract: I will survey the state of the art in AI systems, particularly large language models, in solving word problems in science and math at the elementary, high school, and college level. In particular, I will discuss some recent experiments with Scott Aaronson on testing GPT-4 with the Code Interpreter and Wolfram Alpha plug-ins on three collections of original problems.
Yevgeniy Dodis
(Professor, New York University)
Bio: Yevgeniy Dodis is a Fellow of the IACR (International Association for Cryptologic Research) and a Professor of Computer Science at New York University. Dr. Dodis received his summa cum laude Bachelor’s degree in Mathematics and Computer Science from New York University in 1996, and his PhD degree in Computer Science from MIT in 2000. He was a post-doc at the IBM T.J. Watson Research Center in 2000 and joined New York University in 2001. Dr. Dodis’ research is primarily in cryptography and network security. He has worked in a variety of areas, including random number generation, secure messaging, leakage-resilient cryptography, cryptography under weak randomness, cryptography with biometrics and other noisy data, hash function and block cipher design, protocol composition, and information-theoretic cryptography. In addition to being an IACR Fellow, Dr. Dodis is the recipient of the 2021 and 2019 IACR Test-of-Time Awards for his work on Fuzzy Extractors and Verifiable Random Functions, the National Science Foundation CAREER Award, Faculty Awards from Facebook, Google, IBM, Algorand, and VMware, and the Best Paper Award at the 2005 Public Key Cryptography Conference. As an undergraduate student, he was also a winner of the US-Canada Putnam Mathematical Competition in 1995. Dr. Dodis has more than 150 scientific publications at top venues, was Program co-Chair of CRYPTO 2022 and the 2015 Theory of Cryptography Conference, served as an editor of the Journal of Cryptology (2012-2019), has been on the program committees of many international conferences (including FOCS, STOC, CRYPTO, and Eurocrypt), and has given numerous invited lectures and courses at various venues.
Title for Talk: Random Number Generation and Extraction
Abstract: Generating random numbers is an essential task in cryptography. Random numbers are necessary not only for generating cryptographic keys, but also within the steps of cryptographic algorithms and protocols (e.g., initialization vectors for symmetric encryption, password generation, nonce generation). Indeed, a lack of assurance about the quality of the generated random numbers can cause serious damage to cryptographic protocols and open vulnerabilities that attackers can exploit. In this talk we revisit the surprisingly rich landscape of random number generation, ranging from theoretical impossibility results to the building of real-world random-number generators (RNGs) for Windows, Apple, and Linux. Example topics include the impossibility of basing cryptography on entropy alone, improved key derivation functions, seedless randomness extraction, the design and analysis of the “super-fast” entropy accumulation found in most modern RNGs, and the post-compromise security of RNGs in light of “premature next” attacks.
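As a minimal illustration of the randomness-extraction idea mentioned in the abstract, the sketch below shows von Neumann's classic debiasing trick, a textbook construction rather than any of the results from the talk: from independent flips of a coin with unknown bias, it produces perfectly unbiased bits.

```python
# Von Neumann extractor (textbook example, not a construction from the talk).
# Read input bits in non-overlapping pairs: 01 -> output 0, 10 -> output 1,
# and discard 00/11 pairs. For independent flips with any fixed bias p,
# P(01) = P(10) = p(1-p), so every output bit is exactly fair.

def von_neumann_extract(bits):
    """Extract unbiased bits from a stream of independent biased coin flips."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:          # keep only the 01 and 10 pairs
            out.append(a)   # the first bit of the pair is the output
    return out
```

The independence assumption is what makes this simple scheme work; realistic entropy sources violate it, which is one motivation for the more robust seedless extraction and entropy accumulation techniques the talk covers.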
(Senior Research Scientist, Meta)
Bio: He is a Senior Research Scientist on the conversational AI team at Meta Reality Labs. His research focuses on machine learning and optimization; in particular, his current work addresses (1) federated learning and (2) online learning for conversational AI agents.
He has also worked on optimization and machine learning problems inspired by Intelligent Transportation Systems applications (e.g., online and temporal prediction, and learning for discrete choice modeling).
Title of talk: Practical Asynchronous Federated Learning With Buffered Aggregation