Our long-term mission is to advance AI for humanity. We are currently committed to inventing the next scaling recipe(s) with paradigm-shifting technologies for new generations of foundation models.


The Next Recipe [08/2024 – present]


The Second Curve of Scaling Law


Foundation Models


Foundation Architecture


Science of Intelligence


LLMOps: Research and technology for building AI products with foundation models.


Beyond their research impact, the models we develop form a significant part of Microsoft's own family of large AI (foundation) models, powering language and multimodal tasks and scenarios across Microsoft products. Our research also tops public benchmarks and leaderboards across language, vision, speech, and multimodal tasks, and contributes substantially to the open-source community through GitHub and Hugging Face.


More information is available on our Research and Highlights pages.


microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

microsoft/torchscale: Neural Architecture for General AI

microsoft/lmops: General technology for enabling AI capabilities w/ (M)LLMs


We are hiring at all levels (including FTE researchers and interns)! If you are interested in working with us on Foundation Models and General AI, NLP, Machine Translation, Speech, Document AI, and Multimodal AI, please send your resume to fuwei@microsoft.com.