MIRA Lab
People
Publications
News
Courses
Courses Fall 2025
Courses Fall 2024
Courses Fall 2023
Courses Fall 2022
Courses Fall 2021
Courses Spring 2021
Courses Spring 2020
Admission
Admission 2025 (Recommended Postgraduate Admission)
Photos
Social Media
Contact Us
Bin Li
Latest
Tackling Heavy-Tailed Q-Value Bias in Offline-to-Online Reinforcement Learning with Laplace-Robust Modeling
Why Attention Patterns Exist: A Unifying Temporal Perspective Analysis
AttentionPredictor: Temporal Patterns Matter for KV Cache Compression
Benchmarking End-To-End Performance of AI-Based Chip Placement Algorithms
LogicTree: Improving Complex Reasoning of LLMs via Instantiated Multi-step Synthetic Logical Data
Accurate and Scalable Graph Neural Networks via Message Invariance
Differentiable Integer Linear Programming
Learning Robust Representations with Long-Term Information for Generalization in Visual Reinforcement Learning
MILP-StuDio: MILP Instance Generation via Block Structure Decomposition
Towards Next-Generation Logic Synthesis: A Scalable Neural Circuit Generation Framework
Uncertainty-based Offline Variational Bayesian Reinforcement Learning for Robustness under Diverse Data Corruptions
Coarse-to-Fine Highlighting: Reducing Knowledge Hallucination in Large Language Models
Deep symbolic optimization for combinatorial optimization: Accelerating node selection by discovering potential heuristics
Accelerating Data Generation for Neural Operators via Krylov Subspace Recycling (Spotlight)
Rethinking Branching on Exact Combinatorial Optimization Solver: The First Deep Symbolic Discovery Framework
Learning Task-relevant Representations for Generalization via Characteristic Functions of Reward Sequence Distributions
Learning Robust Policy against Disturbance in Transition Dynamics via State-Conservative Policy Optimization
Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic
Interpreting the Latent Space of GANs via Measuring Decoupling