ChatGLM
ChatGLM is a family of open-source large language models designed for dialogue tasks, developed jointly by Zhipu AI and Tsinghua University's Knowledge Engineering Group (KEG). The models, ranging from 6 billion to 130 billion parameters, are trained on large Chinese and English corpora and optimized for question answering and conversational interaction. The series includes ChatGLM-6B, ChatGLM2-6B, and the latest ChatGLM3-6B, each improving on its predecessor with stronger performance, longer context understanding, and more efficient inference. The models handle both Chinese and English and can be deployed locally on consumer-grade hardware, making them accessible for a wide range of applications.
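As an illustration of local deployment, the sketch below loads ChatGLM3-6B through the Hugging Face transformers library and runs a single dialogue turn. It assumes the THUDM/chatglm3-6b checkpoint, a CUDA-capable GPU, and the model's custom chat() helper, which is shipped as remote code with the checkpoint rather than as part of the core transformers API.

    # Minimal local-inference sketch, assuming the THUDM/chatglm3-6b
    # checkpoint on the Hugging Face Hub and a CUDA-capable GPU.
    from transformers import AutoTokenizer, AutoModel

    # trust_remote_code=True is required because ChatGLM ships its own
    # model and tokenizer code alongside the weights.
    tokenizer = AutoTokenizer.from_pretrained(
        "THUDM/chatglm3-6b", trust_remote_code=True
    )
    model = AutoModel.from_pretrained(
        "THUDM/chatglm3-6b", trust_remote_code=True
    ).half().cuda()  # half precision keeps the 6B model within consumer-GPU memory
    model = model.eval()

    # Single-turn query; `history` accumulates prior turns for multi-turn dialogue.
    response, history = model.chat(tokenizer, "What is ChatGLM?", history=[])
    print(response)

Passing the returned history back into subsequent chat() calls carries the conversation state forward, which is how the model supports multi-turn dialogue.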
ChatGLM models employ advanced training techniques such as supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback (RLHF) to improve performance. They are fully open for academic research and free for commercial use after registration, promoting community-driven development and innovation.