ACM SIGMOBILE Community Engagement Program


Time: June 15, Wednesday, 9AM PT / 12PM ET / 6PM CET
Speaker(s): Prof. Junchen Jiang, University of Chicago

Abstract:

Today, videos captured and analyzed by computer vision models grow faster in volume than videos watched by human viewers. Despite that, traditional video delivery and processing systems are designed for human perception, whereas computer vision models (deep neural nets or DNNs) “perceive” video data differently—they value high inference accuracy and low inference delay. This discrepancy in performance objectives has far-reaching consequences. In this talk, I will introduce two new abstractions that enable video analytics systems to self-adapt to dynamic characteristics of input video streams and the idiosyncrasies of different DNN models. The key insight is that, while eliciting real-time feedback from human users is hard, DNN models can naturally provide rich and instantaneous feedback that could be harnessed to guide efficient system adaptation.

About the Speaker(s):

Junchen Jiang is an Assistant Professor of Computer Science at the University of Chicago. He received his PhD from CMU in 2017 and his bachelor's degree from Tsinghua in 2011. His research interests are networked systems and their intersections with machine learning. He is a recipient of a Google Faculty Research Award, an NSF CAREER Award, and the CMU Computer Science Doctoral Dissertation Award.

Please find the video recording of the talk: here


Time: May 18, Wednesday, 9AM PT / 12PM ET / 6PM CET
Speaker(s): Prof. Elahe Soltanaghaei, UIUC

Abstract:

Emerging applications such as smart cities, autonomous vehicles, and mixed reality rely on embedded systems that engage with the physical environment through sensors. Building upon this connection, my vision is to advance Omnipresent Sensing by harnessing the wireless infrastructure in and around buildings and cities to act as a non-intrusive sensing platform. This is possible by innovating at the crossroads of two recent trends in mobile computing: (1) wireless technologies such as millimeter-wave (mmWave) and Massive MIMO systems can now support higher bandwidth for communication and improved resolution for RF sensing applications, and (2) advances in CPUs and RF front-ends are making it easier to develop software-defined sensing and communication systems, giving edge devices the ability to perform more advanced signal processing and machine learning.

In this talk, I will focus on how to design RF equivalents of optical retro-reflectors and use them as fiducial markers in autonomous vehicles, robotics, and mixed reality applications. I will then discuss how nuances of the environment itself can be leveraged to improve sensing quality in the context of human sensing, object tracking, and indoor localization. I will conclude the talk with a roadmap for combining radar-style RF sensors and wireless communication links in the wireless embedded systems of the future.

About the Speaker(s):

Elahe Soltanaghaei is an assistant professor of Computer Science at the University of Illinois Urbana-Champaign. Her research builds a foundation for joint communication and sensing in the wireless embedded systems of the future, with applications in IoT and Cyber-Physical Systems. Previously, she was a postdoctoral researcher at Carnegie Mellon University in CyLab. She received her PhD in Computer Science from the University of Virginia. Her work has been published in premier conferences and journals in the areas of mobile and ubiquitous computing, wireless networks, and energy and infrastructure. She was named one of the 10 Rising Star Women in Networking and Communications in 2021, and she is a recipient of the 2020 ACM SIGMOBILE Dissertation Award and a 2019 EECS Rising Star.

Please find the video recording of the talk: here


Time: May 04, Wednesday, 12PM PT (3PM ET)
Speaker(s): Prof. Yuanjie Li, Tsinghua University

Abstract:

Mobile networks (4G, 5G, and beyond) facilitate "anywhere, anytime" Internet access for billions of users and trillions of smart "things." Despite this great success, mobile networks remain a complex "black box." This design is vulnerable to various security threats and reliability/performance deficiencies, and it hurts the fundamental mutual trust between mobile users and operators.
This talk will show how end clients can help build a smart and secure mobile network. We will start with a data-driven approach that lets smart clients understand complex, "black-box" mobile network operations. This new capability empowers various IoT/mobile applications and rewards the 4G/5G infrastructure with simplified yet verifiable designs and operations. We will demonstrate this with two examples: enhancing the mutual trust between users and operators at the intelligent edge, and resolving policy conflicts in distributed mobility management. We will conclude with research directions toward a smart and secure 6G and the future mobile Internet.

About the Speaker(s):

Yuanjie Li is an assistant professor at the Institute for Network Sciences and Cyberspace at Tsinghua University. He was a researcher at Hewlett Packard Labs from 2018 to 2020 and a co-founder of MobIQ Technologies from 2017 to 2018. He received his Ph.D. in Computer Science from UCLA in 2017 and his B.E. in Electronic Engineering from Tsinghua University in 2012. His research interests are network systems and security, with a recent focus on mobile networking, the intelligent wireless edge, and the Internet of Things (IoT). He is a recipient of the ACM MobiCom'17 and MobiCom'16 Best Community Paper Awards and the UCLA Dissertation Year Fellowship (2016). His work has resulted in an open-source community tool (MobileInsight), which has been used by 268 universities and companies since its release in 2016.

Please find the video recording of the talk: here


Time: April 20, Wednesday, 9AM PT / 12PM ET / 6PM CET
Speaker(s): Prof. Sandip Chakraborty, IIT Kharagpur, India

Abstract:

Smart infrastructures are usually driven by intelligent, context-aware services that often need strong ML (or DL) models working in the backend. However, building such sophisticated models typically requires a significant amount of labeled training data. The conventional and most widely accepted source of such data annotation keeps a human in the loop; nevertheless, this task is becoming expensive and often highly tedious now that multimodal data sources are in place. Considering the broad use case of sensor-based human activity recognition, in this talk we first discuss the possibility of choosing auxiliary modalities that can provide enough information to get the data annotated without a human in the loop. Subsequently, we look into zero-shot approaches that fine-tune the coarse-grained annotations received from external annotators to capture more information about the confounded actions hidden within complex activities of daily living.

About the Speaker(s):

Sandip Chakraborty is an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Kharagpur. He obtained his Ph.D. from IIT Guwahati, India, in 2014 and was a visiting DAAD Fellow at MPI Saarbrücken, Germany, in 2016. His current research explores the design of ubiquitous systems for human-computer interaction, particularly multi-modal sensing, pervasive systems development, and distributed computing. His work has been published in conferences such as ACM BuildSys, ACM RecSys, ACM MobileHCI, TheWebConf (WWW), IEEE PerCom, IEEE INFOCOM, and ACM SIGSPATIAL. He is one of the founding members of ACM IMOBILE, the ACM SIGMOBILE chapter in India, and serves as an Area Editor of the Elsevier Ad Hoc Networks and Pervasive and Mobile Computing journals. He has received various awards and accolades, including the INAE Young Engineers' Award and a Fellowship of the National Internet Exchange of India (NIXI). He is actively involved in organizing conferences such as IEEE PerCom, IEEE SmartComp, COMSNETS, and ICDCN.

Please find the video recording of the talk: here


Time: April 06, Wednesday, 9AM PT / 12PM ET / 6PM CET
Speaker(s): Prof. Colleen Josephson, UC Santa Cruz

Abstract:

We need affordable and innovative sensing to help tackle global-scale problems like sustainably feeding the next billion. RF backscatter systems such as RFID are well known in the sensing community for their low power consumption. This talk explores how we can enable robust indoor/outdoor RF-backscatter sensing by leveraging advances in low-power embedded systems. The talk will describe the challenges faced in low-power and sustainable sensor networks and provide an overview of an outdoor radar backscatter sensing system that uses RF to measure soil moisture with accuracy comparable to state-of-the-art commercial sensors, but at a fraction of the cost. It will also discuss recent work on renewably powering outdoor sensor systems with energy harvested from non-traditional sources such as microbes.

About the Speaker(s):

Colleen Josephson is an incoming Assistant Professor at UC Santa Cruz and a research scientist at VMware. Her research interests include wireless communication and sensing systems, with a focus on technologies to enable and improve sustainable practices. She is also co-chair of the GreenG Working Group within the ATIS NextG Alliance, which aims to position North America as the global leader in sustainable next-generation mobile networks. Colleen completed her PhD in Electrical Engineering in 2021 at Stanford University, where she was advised by Sachin Katti and Keith Winstein. Before beginning her PhD, she worked at Cisco Meraki as a wireless engineer, and even before that she received her SB and MEng degrees from MIT. She is also a former Microsoft Research intern, a 2020 Rising Star in EECS, a finalist in the 2019 MIT Bay Area Research Slam, and a recipient of the Stanford Graduate, Schlumberger Innovation and D.E. Shaw Exploration fellowships.

Please find the video recording of the talk: here


Time: March 23, Wednesday, 9AM PT / 12PM ET / 5PM CET
Speaker(s): Dr. Chulhong Min, Nokia Bell Labs, Cambridge, UK

Abstract:

For a long time, mobile sensing and AI research has focused on creating accurate, robust, and multi-modal sensory models that can intelligently understand us and the world around us. However, with the unprecedented rise of on- and near-body devices such as smartphones, wearables, and IoT devices, it is common today to find ourselves surrounded by multiple sensory devices in our daily lives. Such multi-device dynamics offer exciting opportunities to leverage multiple, diverse sensory signals to develop accurate and robust sensing models that quantify diverse and richer human context continuously and at scale. In this talk, I will explore the system challenges and research efforts in building sensory AI systems that support multi-device environments. Specifically, I will cover the systems issues from three key perspectives: (a) device form, (b) device intelligence, and (c) device system.

About the Speaker(s):

Chulhong Min has been a research scientist in the Software and Data Systems Research (SDSR) lab at Nokia Bell Labs, Cambridge, UK, since 2017. He received his Ph.D. in Computer Science from KAIST, South Korea, in 2016 and was a visiting fellow at the University of Cambridge between 2019 and 2020. His current research explores next-generation sensory systems that realise transformative multi-modal, multi-device sensing for disruptive mobile, wearable, and IoT services. Broadly, his research interests include mobile and embedded systems, AI/ML, the Internet of Things (IoT), and human-computer interaction. His work has been published at ACM MobiSys, ACM SenSys, ACM UbiComp, and ACM BuildSys, and in prestigious journals. He won the Best Paper Award at ACM CSCW 2014 and multiple demonstration awards. He has served on the technical program committees and organising committees of various premier conferences and is an Associate Editor of the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (ACM IMWUT).



Time: Feb 23, Wednesday, 9AM PT / 12PM ET / 6PM CET
Speaker(s): Prof. Tingjun Chen, Duke University

Abstract:

This talk will provide an overview of the COSMOS testbed (www.cosmos-lab.org), which is being deployed as part of the NSF Platforms for Advanced Wireless Research (PAWR) program, and of the experiments it supports in advanced wireless, millimeter-wave (mmWave), optical networking, and edge cloud. COSMOS (Cloud-Enhanced Open Software-Defined Mobile-Wireless Testbed for City-Scale Deployment) is being deployed in West Harlem (New York City) by Rutgers, Columbia, and NYU in partnership with NYC, CCNY, U. Arizona, IBM, and Silicon Harlem. It targets the technology “sweet spot” of ultra-high bandwidth and ultra-low latency, a capability that will enable a broad new class of applications including augmented/virtual reality and cloud-based autonomous vehicles. Realizing such high-bandwidth/low-latency wireless applications involves research not only on faster radio links but also on the system as a whole, including aspects such as spectrum use, networking, and edge computing.
We will present an overview of COSMOS’ key enabling technologies, which include millimeter-wave (mmWave) radios, software-defined radios, an optical/SDN x-haul network, and edge cloud, followed by the deployment and outreach efforts. We will then describe extensive millimeter-wave channel measurements and experiments on open-access full-duplex wireless, as well as the development and integration of programmable mmWave radios in the COSMOS testbed. We will conclude with the converged optical-wireless x-haul networking experiments with edge cloud that we conducted in the COSMOS testbed.
The design, development, and deployment of the COSMOS testbed are joint work with the COSMOS team (www.cosmos-lab.org). The mmWave measurement results are based on joint work with Manav Kohli, Tianyi Dai, Angel Daniel Estigarribia, Gil Zussman (Columbia), Rodolfo Feick (UTFSM), Dmitry Chizhik, Jinfeng Du, and Reinaldo A. Valenzuela (Nokia Bell Labs). The full-duplex results are based on joint work with Mahmood Baraani Dastjerdi, Manav Kohli, Harish Krishnaswamy, and Gil Zussman (Columbia). The programmable mmWave radios are joint work with Xiaoxiong Gu, Arun Paidimarri, Sadhu Bodhisatwa, Alberto Valdes-Garcia (IBM Research), Gil Zussman (Columbia), and Ivan Seskar (Rutgers/WINLAB).

About the Speaker(s):

Tingjun Chen received his Ph.D. in Electrical Engineering from Columbia University in 2020 and his B.Eng. in Electronic Engineering from Tsinghua University in 2014. Between 2020 and 2021 he was a Postdoctoral Associate at Yale University. Since Fall 2021 he has been with Duke University, where he is an Assistant Professor in the Department of Electrical and Computer Engineering. His research interests are in the area of networking and communications, with a specific focus on next-generation wireless, mobile, and optical networks, as well as Internet-of-Things (IoT) systems. In particular, he is interested in emerging communication technologies and their interactions with the higher layers, as well as in cross-layer optimization and performance evaluation of networked systems. Tingjun received the IBM Academic Award, the Google Research Scholar Award, the Columbia Engineering Morton B. Friedman Memorial Prize for Excellence, the Columbia University Eli Jury Award, the Facebook Fellowship, and the Wei Family Private Foundation (WFPF) Fellowship. He also received the ACM CoNEXT 2016 Best Paper Award, was a finalist for the ACM MobiHoc 2019 Best Paper Award, and his Ph.D. thesis was the runner-up for the ACM SIGMOBILE Dissertation Award. He is also the Co-founder and Networks Lead of WiLO Networks Inc., a start-up focusing on low-power sensor hardware and end-to-end systems.

Please find the video recording of the talk: here