Our long-term mission is to advance AI for humanity. We are dedicated to pioneering a new paradigm of Generative AI, focusing on both technologies and models. Previously, we worked on scaling recipes with paradigm-shifting technologies for new generations of foundation models. [Last Update: January 2025]
Building Frontier Efficiency Models [03/2025 - ]
A New Paradigm of Generative AI [01/2025 - ]
The Next Recipe [06/2024 - ]
The Second Curve of Scaling Law
Foundation Models
Foundation Architecture
Science of Intelligence
LLMOps: Research and technology for building AI products with foundation models.
Beyond these research achievements, our models form a significant part of Microsoft's own family of large AI (foundation) models, powering language and multimodal tasks and scenarios across Microsoft products. Our research also tops public benchmarks and leaderboards across language, vision, speech, and multimodal tasks, and contributes substantially to the open source community through GitHub and Hugging Face.
More information about our research:
microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
microsoft/BitNet: 1-bit AI Infra: Official inference framework for 1-bit LLMs
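As described in the publicly released BitNet b1.58 work, the core idea behind 1-bit LLM inference is quantizing weight matrices to ternary values {-1, 0, +1} using a per-tensor absmean scale. A minimal sketch of that quantization step, assuming the publicly described scheme (the function name and example values below are illustrative, not taken from the microsoft/BitNet codebase):

```python
import numpy as np

def absmean_quantize(W, eps=1e-6):
    """Quantize a weight matrix to ternary values {-1, 0, +1}.

    Illustrative sketch of the absmean scheme from the BitNet b1.58
    paper: scale by the mean absolute value of the tensor, then round
    and clip each entry to the range [-1, 1].
    """
    scale = np.abs(W).mean() + eps               # per-tensor absmean scale
    W_q = np.clip(np.round(W / scale), -1, 1)    # ternary weights
    return W_q.astype(np.int8), scale

# Example with made-up weights: at inference time, W_q * scale is used
# as a low-precision approximation of W.
W = np.array([[0.8, -0.05, -1.2],
              [0.3,  1.5,  -0.4]])
W_q, scale = absmean_quantize(W)
```

Because the quantized matrix contains only -1, 0, and +1, the matrix multiplications in the forward pass reduce to additions and subtractions, which is what enables the efficient inference kernels in the repository above.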
We are hiring at all levels (including FTE researchers and interns)! If you are interested in working with us on Foundation Models and General AI, NLP, machine translation, speech, Document AI, and Multimodal AI, please send your resume to fuwei@microsoft.com.