Ruiyu Wang
I hold an HBSc degree with high distinction from the University of Toronto, where I completed the Computer Science Specialist program with an NLP focus in June 2024. My research is in Natural Language Processing. I am fortunate to be supervised by and to work with Prof. Gerald Penn, Prof. Jimeng Sun, and Prof. Qiang Sun. I am currently a research assistant at Microsoft Research Asia, where I work with Dr. Shizhao Sun in the Machine Learning Group.
I am interested in building sequential models. Given the current surge of Large Language Models, I believe that understanding their behaviour on sequential, formatted data deepens our knowledge of the internal mechanisms by which they perceive, acquire, and produce logically structured data. This benefits not only the application of LLMs (e.g., building foundation models) but also our investigation into the essence of language, which is itself a type of formatted data. I wish to contribute to research involving LLM multimodality, empirical interpretability, and applications.
I will return to the University of Toronto as a PhD student in Computer Science. See you soon in Canada!
(Last Updated: May 16, 2025)
Email /
Resume /
Scholar /
Github /
LinkedIn
Research
Several projects are ongoing; only completed projects are listed here.
Text-to-CAD Generation Through Infusing Visual Feedback in Large Language Models
Ruiyu Wang, Yu Yuan, Shizhao Sun, Jiang Bian
ICML 2025, Website
Visual-rewarded Text2CAD generation foundation model.
Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma
Ruiyu Wang, Sen Wang, Xinxin Zuo, Qiang Sun
ArXiv Preprint, 2025, Website
Improving incremental learning with task-specific adapters and regularization.
Revisiting GloVe, Word2Vec and BERT: On the Homogeneity of Word Vectors
Ruiyu Wang
Undergrad Capstone, 2024
A study on the homogeneity of word vectors: how they can be transformed into one another.
Can Language Model Understand Word Semantics as A Chatbot? An Empirical Study of Language Model Internal External Mismatch
Jinman Zhao, Xueyan Zhang, Xingyu Yue, Weizhe Chen, Zifan Qian, Ruiyu Wang
ArXiv, 2024
A study on the internal-external mismatch of word semantics understanding in language models.
Large Language Models on Lexical Semantic Change Detection: An Evaluation
Ruiyu Wang*, Matthew Choi*
ArXiv Preprint, 2023
An evaluation of low-resource lexical semantic change (LSC) detection involving traditional models, BERT, and LLMs.
UniPredict: Large Language Models are Universal Tabular Predictors
Ruiyu Wang*, Zifeng Wang*, Jimeng Sun
ArXiv Preprint, 2023
An LLM-based tabular prediction system that handles arbitrary inputs and targets.
The style of this page was shamelessly ripped off from here. If you want to use this template, go visit the original website.