News

From Self-supervised learning to LLMs for Timeseries: Adopting "GPT" paradigm for modelling behaviours at scale

 

Download the slides here!
 
Friday, September 22, 2 p.m.
Main meeting room (Sala Consiglio), 8th floor, Department of Computer Science
Università degli Studi di Milano, Via Celoria 18, Milan
 
Speaker: Prof. Flora Salim, University of New South Wales (UNSW) Sydney, Australia.
 
Abstract
The initial release of ChatGPT saw a worldwide uptake of over 100 million users in just two months after its launch in November 2022. Several key milestones led to the development of the foundation models that underpin the ChatGPT technology, including the introduction of the Transformer architecture and the self-supervised learning paradigm. How have these underpinning technologies been applied in the pervasive computing domain, such as for human behaviour modelling? Access to annotated human behaviour data has been expensive and often infeasible. How can the GPT paradigm be adopted for modelling behaviours at scale, utilising the proliferation of sensors and IoT data in our world? This demands new ways of modelling behaviours at scale, moving away from fully supervised learning approaches and from narrow tasks. The heterogeneity of both the data sources and the downstream tasks, as well as the lack of annotations, makes self-supervised learning a compelling choice, as it requires no labelled data and can yield compact, generalisable models. I will present our self-supervised learning (SSL) pretraining approaches for multimodal sensor data. Further, I will explain why the Transformer architecture, designed for sequence-to-sequence modelling with its multi-head attention mechanism, is a perfect fit for time-series data. I will also present a new, versatile paradigm that leverages Large Language Models (LLMs) for time-series modelling, such as traffic forecasting and energy demand forecasting, using natural language prompts.
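
As a rough illustration of the prompt-based idea mentioned in the abstract (not taken from the talk itself), the short Python sketch below serialises a synthetic energy-demand series into a natural-language forecasting prompt; the function name, wording, and data are invented for illustration only.

# Illustrative sketch: turning a numeric time series into a natural-language
# prompt, the basic step behind prompt-based LLM forecasting.
# All values and wording below are made-up examples, not material from the talk.

def build_forecast_prompt(history, horizon, unit="kWh"):
    """Render past readings as a plain-text forecasting prompt."""
    rendered = ", ".join(f"{v:.1f}" for v in history)
    return (
        f"The last {len(history)} hourly energy readings ({unit}) were: {rendered}. "
        f"Predict the next {horizon} readings, separated by commas."
    )

if __name__ == "__main__":
    hourly_demand = [3.2, 3.5, 3.9, 4.4, 5.0, 5.1, 4.8, 4.2]  # synthetic data
    print(build_forecast_prompt(hourly_demand, horizon=3))
    # The resulting prompt would be sent to an LLM; parsing the numeric reply
    # back into a series completes the forecasting loop.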
 
Bio
Professor Flora Salim is the inaugural Cisco Chair of Digital Transport and AI at the University of New South Wales (UNSW) Sydney, Australia, and the Deputy Director (Engagement) of the UNSW AI Institute. Her research is on ubiquitous computing, behaviour modelling, trustworthy and robust AI, and machine learning for multimodal sensor data. She is a Chief Investigator of the ARC Centre of Excellence in Automated Decision Making and Society (ADM+S), and Co-Lead of the ADM+S Machines Program and of the Transport and Mobilities focus area. She serves as a member of the Australian Research Council (ARC) College of Experts, an Editor of the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), the Associate Editor-in-Chief (AEIC) of IEEE Pervasive Computing, and an Associate Editor of ACM Transactions on Spatial Algorithms and Systems.

Two papers have been accepted at the 20th IEEE PerCom conference, which will take place in Pisa from the 21st to the 25th of March:

  • The paper "FedCLAR: Federated Clustering for Personalized Sensor-Based Human Activity Recognitionby Riccardo Presotto, Gabriele Civitarese, and Claudio Bettini has been accepted as a full paper at IEEE PerCom main conference. 
  • The paper "Preliminary Results on Sensitive Data Leakage in Federated Human Activity Recognition" by Riccardo Presotto, Gabriele Civitarese, and Claudio Bettini has been accepted at CoMoRea workshop, a satellite event co-located with the PerCom conference.

The paper "ProCAVIAR: Hybrid Data-Driven and Probabilistic Knowledge-Based Activity Recognition" by Claudio Bettini, Gabriele Civitarese, Davide Giancane, and Riccardo Presotto has been accepted for publication on IEEE Access.
