Singapore to develop Southeast Asia’s first large language model

Singapore has spearheaded a S$70m initiative to build research and engineering capabilities in LLMs, including the development of Southeast Asia’s first LLM

Singapore has launched a S$70m (US$52m) initiative to build research and engineering capabilities in multimodal large language models (LLMs), including the development of Southeast Asia’s first LLM.

Dubbed the National Multimodal LLM Programme (NMLP), the two-year initiative will also train artificial intelligence (AI) professionals, foster collaboration with industry partners on AI use cases, drive deeper understanding of how LLMs work and advance research on AI governance.

Because most LLMs originate from the West and so do not take into account Southeast Asia’s cultures, values and norms, a cornerstone of the NMLP is to build multimodal, localised LLMs for Singapore and the region.

These LLMs would understand the context and values of Southeast Asia’s diverse cultures and languages, such as handling code-switching between languages in multilingual Singapore.

Singapore is not starting from scratch in building the region’s first LLM. It will build on the work that went into AI Singapore’s Sea-Lion (Southeast Asian Languages in One Network) model, an open-source LLM that is more representative of Southeast Asia’s cultural contexts and linguistic nuances.

Sea-Lion is designed to be smaller, more flexible and faster than the LLMs commonly used in the market today. It is also a relatively inexpensive and more efficient option for cost-sensitive and throughput-constrained organisations that would like to incorporate AI into their workflows.

“Language is an essential enabler for collaboration,” said Ong Chen Hui, assistant chief executive of the Infocomm Media Development Authority’s business technology group. “By investing in talent and investing in large language AI models for regional languages, we want to foster industry collaboration across borders and drive the next wave of AI innovation in Southeast Asia.”

Technology suppliers such as Alibaba Cloud are also doubling down on multimodal LLMs. The company recently open-sourced two LLMs, Qwen-72B and Qwen-1.8B, the 72-billion-parameter and 1.8-billion-parameter versions of its proprietary foundation model, Tongyi Qianwen.

The company claimed that Qwen-72B, which was pre-trained with over three trillion tokens, outperforms other major open-source models in 10 benchmarks, including massive multi-task language understanding (MMLU), which measures a model’s multitask accuracy.

Companies and research institutions can access the Qwen-72B model’s code, model weights and documentation and use them for free for research purposes. For commercial uses, the models will be free to use for companies with fewer than 100 million monthly active users.

“Building up an open source ecosystem is critical to promoting the development of LLMs and AI applications,” said Jingren Zhou, chief technology officer of Alibaba Cloud. “We aspire to become the most open cloud, and make generative AI capabilities accessible to everyone.”
