TPP-LLM: Modeling Temporal Point Processes by Efficiently Fine-Tuning Large Language Models

Zefang Liu, Yinzhu Quan

arXiv preprint arXiv:2410.02062, 2024

Abstract

Temporal point processes (TPPs) are widely used to model the timing and occurrence of events in domains such as social networks, transportation systems, and e-commerce. In this paper, we introduce TPP-LLM, a novel framework that integrates large language models (LLMs) with TPPs to capture both the semantic and temporal aspects of event sequences. Unlike traditional methods that rely on categorical event type representations, TPP-LLM directly utilizes the textual descriptions of event types, enabling the model to capture rich semantic information embedded in the text. While LLMs excel at understanding event semantics, they are less adept at capturing temporal patterns. To address this, TPP-LLM incorporates temporal embeddings and employs parameter-efficient fine-tuning (PEFT) methods to effectively learn temporal dynamics without extensive retraining. This approach improves both predictive accuracy and computational efficiency. Experimental results across diverse real-world datasets demonstrate that TPP-LLM outperforms state-of-the-art baselines in sequence modeling and event prediction, highlighting the benefits of combining LLMs with TPPs.
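The sketch below illustrates the core idea described in the abstract: per-event embeddings of the event-type text (assumed to come from a LoRA/PEFT-adapted LLM backbone) are combined with a learned temporal embedding of each timestamp, and lightweight heads predict the next event type and inter-event time. Module names, dimensions, the form of the temporal embedding, and the prediction heads are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal, hypothetical sketch of the TPP-LLM idea (not the authors' implementation).
import torch
import torch.nn as nn


class TemporalEmbedding(nn.Module):
    """Maps continuous event times to vectors (a simple sinusoidal-style choice, assumed)."""

    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(1, dim)

    def forward(self, times: torch.Tensor) -> torch.Tensor:
        # times: (batch, seq_len) -> (batch, seq_len, dim)
        return torch.sin(self.linear(times.unsqueeze(-1)))


class TPPLLMSketch(nn.Module):
    """Combines LLM embeddings of event-type descriptions with temporal embeddings."""

    def __init__(self, llm_hidden: int, num_event_types: int):
        super().__init__()
        self.time_emb = TemporalEmbedding(llm_hidden)
        self.type_head = nn.Linear(llm_hidden, num_event_types)  # next event-type logits
        self.time_head = nn.Linear(llm_hidden, 1)                # next inter-event time

    def forward(self, text_emb: torch.Tensor, times: torch.Tensor):
        # text_emb: (batch, seq_len, llm_hidden) — per-event embeddings of the event
        # descriptions, assumed to be produced by a PEFT/LoRA-adapted LLM backbone.
        h = text_emb + self.time_emb(times)
        return self.type_head(h), self.time_head(h).squeeze(-1)


if __name__ == "__main__":
    batch, seq_len, hidden, n_types = 2, 5, 32, 4
    model = TPPLLMSketch(hidden, n_types)
    fake_text_emb = torch.randn(batch, seq_len, hidden)     # stand-in for LLM outputs
    fake_times = torch.rand(batch, seq_len).cumsum(dim=-1)  # increasing event times
    type_logits, next_times = model(fake_text_emb, fake_times)
    print(type_logits.shape, next_times.shape)  # (2, 5, 4), (2, 5)
```

In this reading, only the small LoRA adapters and the new temporal/prediction modules would be trained, which is what makes the fine-tuning parameter-efficient; the details of the actual intensity modeling and training objective are in the paper.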

Recommended citation: Liu, Zefang and Quan, Yinzhu. "TPP-LLM: Modeling Temporal Point Processes by Efficiently Fine-Tuning Large Language Models." arXiv preprint arXiv:2410.02062 (2024).
[Download Paper] [Download Code] [Download Data]