Sitemap
A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.
Pages
Posts
Portfolio
Publications
Yelp Review Rating Prediction: Machine Learning and Deep Learning Models
Zefang Liu
arXiv preprint arXiv:2012.06690, 2020
This paper predicts Yelp restaurant ratings using both traditional machine learning models and transformer-based models.
Deep Reinforcement Learning based Group Recommender System
Zefang Liu, Shuran Wen, Yinzhu Quan
arXiv preprint arXiv:2106.06900, 2021
This paper introduces DRGR, a deep reinforcement learning-based group recommender system using actor-critic networks and the deep deterministic policy gradient algorithm.
A Multi-Region Multi-Timescale Burning Plasma Dynamics Model for Tokamaks
Zefang Liu, Weston M. Stacey
64th Annual Meeting of the APS Division of Plasma Physics, 2022
This paper develops a multi-region, multi-timescale transport model to simulate burning plasma dynamics in tokamaks, addressing energy transport and radiation effects to prevent thermal runaway instability in ITER scenarios.
Anomaly Detection of Command Shell Sessions based on DistilBERT: Unsupervised and Supervised Approaches
Zefang Liu, John F. Buford
2023 Conference on Applied Machine Learning in Information Security (CAMLIS), 2023
This paper leverages DistilBERT for anomaly detection in Unix shell sessions using both unsupervised and supervised methods, demonstrating effective detection of anomalous behavior with minimal data labeling.
SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security
Zefang Liu
arXiv preprint arXiv:2312.15838, 2023
SecQA introduces a specialized dataset for evaluating the performance of large language models in computer security through multiple-choice questions.
CyberBench: A Multi-Task Benchmark for Evaluating Large Language Models in Cybersecurity
Zefang Liu, Jialei Shi, John F. Buford
AAAI 2024 Workshop on Artificial Intelligence for Cyber Security (AICS), 2024
CyberBench introduces a benchmark for evaluating large language models in cybersecurity, alongside CyberInstruct, a fine-tuned LLM that performs competitively in this domain.
A Review of Advancements and Applications of Pre-trained Language Models in Cybersecurity
Zefang Liu
2024 International Symposium on Digital Forensics and Security (ISDFS), 2024
This paper examines how pre-trained language models enhance cybersecurity across various tasks and calls for continued innovation in this field.
Application of Neural Ordinary Differential Equations for Tokamak Plasma Dynamics Analysis
Zefang Liu, Weston M. Stacey
ICLR 2024 Workshop on AI4DifferentialEquations in Science (AI4DiffEqtnsInSci), 2024
A novel Neural ODE-based model is introduced for simulating tokamak plasma dynamics, offering precise energy transfer analysis crucial for advancing controlled thermonuclear fusion.
AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts
Zefang Liu, Jiahua Luo
2024 Conference on Language Modeling (COLM), 2024
AdaMoLE introduces a dynamic approach to fine-tuning LLMs by using an adaptive mixture of LoRA experts, outperforming traditional top-k routing methods in various tasks.
EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning
Yinzhu Quan, Zefang Liu
2024 Conference on Empirical Methods in Natural Language Processing (EMNLP) Findings, 2024
We introduce EconLogicQA, a benchmark designed to evaluate large language models’ ability to understand and sequence complex economic events, demonstrating its effectiveness through evaluations of various models.
InvAgent: A Large Language Model based Multi-Agent System for Inventory Management in Supply Chains
Yinzhu Quan, Zefang Liu
arXiv preprint arXiv:2407.11384, 2024
InvAgent leverages large language models for adaptive and explainable multi-agent inventory management in supply chains, significantly improving efficiency and resilience.
Application of Neural Ordinary Differential Equations for ITER Burning Plasma Dynamics
Zefang Liu, Weston M. Stacey
AAAI 2025 Workshop on AI to Accelerate Science and Engineering (AI2ASE), 2025
A neural ODE-based burning plasma dynamics model, NeuralPlasmaODE, simulates energy transfer in ITER plasmas for both inductive and non-inductive scenarios.
TPP-LLM: Modeling Temporal Point Processes by Efficiently Fine-Tuning Large Language Models
Zefang Liu, Yinzhu Quan
arXiv preprint arXiv:2410.02062, 2024
This paper introduces TPP-LLM, a framework that integrates large language models with temporal embeddings to improve event sequence modeling and prediction, demonstrating strong performance across multiple real-world datasets.
Efficient Retrieval of Temporal Event Sequences from Textual Descriptions
Zefang Liu, Yinzhu Quan
arXiv preprint arXiv:2410.14043, 2024
We propose TPP-LLM-Embedding, a model that combines temporal and event-type information for efficient retrieval of event sequences from textual descriptions, outperforming baseline models across diverse datasets.
Multi-Agent Collaboration in Incident Response with Large Language Models
Zefang Liu
arXiv preprint arXiv:2412.00652, 2024
This paper explores LLM-based multi-agent collaboration in incident response, analyzing team structures using the Backdoors & Breaches card game.