E.(Eric) Glen Weyl

Research Lead (Microsoft Research)

Bio: E. (Eric) Glen Weyl is Founder and Research Lead of the Plural Technology Collaboratory, a Microsoft Research Special Project; Founder of the RadicalxChange Foundation, the leading think tank in the Web3 space; and Founder and Chair of the Plurality Institute, which coordinates an academic research network developing technology for cooperation across difference. He is also Senior Advisor to the GETTING-Plurality Research Network at Harvard University. He previously led Web3 technical strategy in Microsoft's Office of the Chief Technology Officer, was Co-Chair and Technical Lead of the Harvard Edmond J. Safra Center for Ethics Rapid Response Task Force on Covid-19 (whose recommendations were endorsed by a dozen leading civil society organizations and the Biden campaign), and taught economics at the University of Chicago, Yale, Princeton and Harvard.

He is co-author, with Eric Posner, of the 2018 book Radical Markets: Uprooting Capitalism and Democracy for a Just Society and, with Puja Ohlhaver and Vitalik Buterin, of the 2022 paper "Decentralized Society: Finding Web3's Soul" (one of the 30 most downloaded papers of all time on the Social Science Research Network). He is currently working with Taiwan's Digital Minister, Audrey Tang, on an open, Web3-based collaborative book project, Plurality: Technology for Cooperative Diversity and Democracy. He is also the author of dozens of scholarly and popular articles in outlets including the Proceedings of the National Academy of Sciences, the American Economic Review, the Harvard Law Review, the Proceedings of the ACM Conference on Economics and Computation and the New York Times.

He has been recognized as one of the 10 most influential people in blockchain by CoinDesk, one of the 25 people shaping the next 25 years of technology by WIRED, and one of the 50 most influential people by Bloomberg Businessweek, all in 2018. He graduated as Valedictorian of his Princeton undergraduate class in 2007 and received his PhD in economics, also from Princeton, in 2008.

Talk title: Plurality: Technology for Cooperative Diversity and Democracy

Abstract: Technology and democracy are on a collision course because we have been pursuing visions for the future of technology (AI and crypto maximalism) antithetical to pluralism. An alternative we label "Plurality" has deep roots in the history of technology, having underpinned the development of the internet, but has not been articulated with the boldness of these competing visions. In this book, we aim to fill this lacuna and describe a path for research, development and multisectoral investment to build a future where technology empowers and bridges the proliferation of social diversity. I will illustrate the agenda with three open problems: a plural approach to "privacy", harnessing generative foundation models to empower diversity, and plural collective decision procedures.


Lijuan Wang

(Principal Research Manager, Microsoft)

Bio: Lijuan Wang is a Principal Research Manager in Microsoft's Cloud & AI division. She joined Microsoft Research Asia as a researcher in 2006, after completing her PhD at Tsinghua University, China, and moved to Microsoft Research in Redmond in 2016. Her research spans computer vision, vision and language, and multimodal foundation models. She has been a key contributor to the development of technologies such as vision-language pretraining, image captioning, and object detection, many of which have been integrated into Microsoft products ranging from Cognitive Services to Office 365. She has published over 100 papers in top-tier conferences and journals and holds more than 20 granted or pending US patents. She is a Senior Member of the IEEE.

Talk title: Recent Advances in Vision Foundation Models

Abstract: Visual understanding at different levels of granularity has been a longstanding problem in the computer vision community. The tasks span from image-level tasks (e.g., image classification, image-text retrieval, image captioning, and visual question answering), region-level localization tasks (e.g., object detection and phrase grounding), to pixel-level grouping tasks (e.g., image instance/semantic/panoptic segmentation). Until recently, most of these tasks have been separately tackled with specialized model designs, preventing the synergy of tasks across different granularities from being exploited.

In light of the versatility of transformers and inspired by large-scale vision-language pre-training, the computer vision community is now witnessing a growing interest in building general-purpose vision systems, also called vision foundation models, that can learn from and be applied to various downstream tasks, ranging from image-level and region-level to pixel-level vision tasks. In this keynote, we will cover the most recent approaches and principles at the frontier of learning and applying vision foundation models, including (1) Visual and Vision-Language Pre-training; (2) Generic Vision Interface; (3) Alignments in Text-to-Image Generation; (4) Large Multimodal Models; and (5) Multimodal Agents.

Hongyuan Zhan

(Senior Research Scientist, Meta)

Bio: Hongyuan Zhan is a Senior Research Scientist on the conversational AI team at Facebook Reality Labs. His research focuses on machine learning and optimization, in particular (1) federated learning and (2) online learning for conversational AI agents. He has also worked on optimization and machine learning problems inspired by intelligent transportation systems applications (e.g., online and temporal prediction, learning for discrete choice modeling).

Important Deadlines

Full Paper Submission: 23rd August 2023
Acceptance Notification: 10th September 2023
Final Paper Submission: 20th September 2023
Early Bird Registration: 21st September 2023
Presentation Submission: 28th September 2023
Conference: 12-14 October 2023

  • Best Paper Award will be given for each track.
  • Conference Record No.: 59035