Trustworthy AI Technology Forum Series, Part Two: group photo of distinguished guests at the "Hybrid GenAI Diverse Terminals and Applications" forum.
Taiwan AI Cloud held the second installment of its Trustworthy AI Technology Forum series, focusing on "Hybrid GenAI Diverse Terminals and Applications." By integrating the technological strengths of the ASUS Group, Intel, and others, Taiwan AI Cloud aims to create a market-leading Hybrid GenAI ecosystem. Partners such as the Institute for Information Industry (III) and Chuangxin Smart Technology were invited to share the latest trends and technologies, and Taiwan AI Cloud's booth showcased hardware-software integrations as well as generative AI application scenarios from its partners.
Hsieh Ming-Chieh, ASUS COO and Chairman of Taiwan AI Cloud, pointed out that generative AI has swept the globe since the end of 2022, and enterprises have shown strong interest in launching AI 2.0 projects. Adopting generative AI on their own, however, is extremely difficult, and the assistance of partners is indispensable. In addition to providing reliable and affordable large language model solutions, Taiwan AI Cloud is actively collaborating with partners such as Intel, ASUS Cloud, and Chuangxin Smart to accelerate enterprise adoption of generative AI and seize global business opportunities.
Gao Song, Intel Vice President and General Manager of the Client Platform Solutions Group, stated that at a time when the market is focused on the security issues behind AI, the Trusted AI Platform launched by Taiwan AI Cloud is invaluable; in particular, its open architecture is a crucial strategy for lowering the barrier to entry into AI 2.0. Intel and ASUS have a long-standing, close partnership in the AI field, and through Intel's newly released processor architecture, more users can enjoy the advantages of a hybrid AI architecture.
Lee Yu-chieh, head of the TAIDE (Trustworthy AI Dialogue Engine) project, pointed out that because most large language models on the market are based on English, the National Science and Technology Council's investment in developing the Traditional Chinese-based TAIDE in early 2023 is a necessary and important national investment. He also welcomed the mid-year launch of Taiwan's first fully optimized Formosa Foundation Model (FFM), which has brought new application opportunities to the industry, and expressed hope for more cooperation between the two sides in the future.

Visitors experiencing the demonstrations in the booth area at the event.
Among the many booths at the event, a variety of open-source-based FFM large language models were on display, along with a themed area on "Hybrid GenAI Diverse Terminal Applications." With its AFS (AI Foundry Service), which can run in the cloud, on-premises, and on terminal devices, Taiwan AI Cloud brings broader possibilities to GenAI applications while balancing model performance and privacy requirements.
The "Large-Scale Language Model AI Workstation Deployment Solution" utilizes ASUS's ground-level products in conjunction with Taizhiyun FFM to provide an integrated hardware and software private cloud deployment solution. Furthermore, the Zenbo Junior robot can be paired with a large language model to enable various innovative applications, allowing the robot to engage in richer, more natural dialogue and become more human-like.
The rise of AI PCs expands the scope of generative AI applications.
The generative AI wave has been building for nearly a year, and enterprises are gradually shifting AI model deployment from server platforms to edge devices, creating more diversified applications.
Chen Chung-cheng, CTO of Taiwan AI Cloud, pointed out that generative AI should be able to run in the cloud, on-premises, and at the edge (on AI PCs) to create more diverse application scenarios, allowing more people to enjoy the advantages of AI technology while ensuring privacy and security. Given the limited computing power of AI PCs, the 16-bit FFM-Llama 2 7B model has been compressed to 4 bits with GPTQ quantization, allowing it to run on typical AI PCs. Generative AI on these devices is also not limited to text: a deployed Stable Diffusion XL model, for example, can quickly generate images, accelerating the creation of high-quality visuals by users.
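As an illustrative sketch of the 16-bit-to-4-bit compression described above (not Taiwan AI Cloud's actual pipeline), 4-bit GPTQ quantization of a Llama 2 7B-class model can be done with the Hugging Face transformers stack; the model ID below is a placeholder assumption, since the article does not say where the FFM-Llama 2 weights are hosted.

```python
# Illustrative sketch only: 4-bit GPTQ quantization of a Llama-2-7B-class model
# using the Hugging Face transformers + optimum + auto-gptq stack.
# The model ID is a placeholder assumption; the article does not specify
# where the FFM-Llama 2 7B checkpoint is hosted.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder for an FFM-Llama 2 7B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Calibrate on a text dataset and quantize the 16-bit weights down to 4 bits.
quant_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quant_config,
)

# Save the compressed model so it can be loaded on an AI PC with limited memory.
model.save_pretrained("llama2-7b-gptq-4bit")
tokenizer.save_pretrained("llama2-7b-gptq-4bit")
```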
Taking smart manufacturing as an example, the model fine-tuning function in AFS allows a model to be fine-tuned on hundreds of thousands of RMA (return merchandise authorization) review records within three hours, without tedious data preprocessing. The fine-tuned model can analyze time-series data for insight and prediction, shows strong performance and interpretability on statistical indicators such as the correlation coefficient (0.907) and p-value (0.001), and can predict a product's service life and maintenance costs. The same approach can also open up new business intelligence (BI) applications, such as e-commerce retail and precision marketing.
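As a small worked illustration of the evaluation metrics quoted above, the correlation coefficient and p-value between predicted and observed service life can be computed as follows; the numbers in the arrays are hypothetical placeholders, not the RMA data or the 0.907 / 0.001 figures from the article.

```python
# Minimal sketch: computing the Pearson correlation coefficient and p-value
# used to judge a service-life prediction model. The data below are made-up
# placeholders, not the RMA dataset described in the article.
from scipy.stats import pearsonr

actual_service_life = [14.2, 9.8, 21.5, 17.0, 12.3, 25.1, 8.7, 19.4]      # months (hypothetical)
predicted_service_life = [13.5, 10.4, 20.8, 16.2, 13.0, 24.0, 9.5, 18.6]  # model output (hypothetical)

r, p_value = pearsonr(actual_service_life, predicted_service_life)
print(f"Pearson correlation: {r:.3f}, p-value: {p_value:.4f}")
```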
Cheng Chih-cheng, Intel's Commercial Business Director, also mentioned, "Our newly launched Intel Core Ultra mobile processor integrates an NPU that provides AI acceleration, helping the PC run generative AI while maintaining low power consumption. It can be used for tasks such as image, text, and code generation, becoming a digital assistant that improves work efficiency." ASUS laptops equipped with Intel Core Ultra mobile processors were showcased at the event, demonstrating the low latency and higher performance achievable in edge computing environments.
Kao Ni-Wei, Manager of the ASUS Cloud Business Planning Department, stated that to help enterprises build a reliable cloud-based AI "second brain," ASUS Cloud has integrated cloud storage with large language model (LLM) solutions. Enterprises can apply retrieval-augmented generation (RAG) to improve the accuracy and flexibility of knowledge exploration, easily manage, extract, organize, and find relevant information, and offload tedious document management tasks from employees.
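A minimal sketch of the RAG pattern described here, assuming a sentence-transformers embedding model and a toy in-memory document list; it is a generic illustration, not the actual ASUS Cloud or AFS integration.

```python
# Minimal RAG sketch: retrieve the most relevant passages for a question and
# assemble them into a prompt for a language model. Generic illustration only;
# not the ASUS Cloud / AFS integration described in the article.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "The maintenance policy allows RMA returns within 30 days of purchase.",
    "Employees must archive project documents to the shared cloud space weekly.",
    "The FFM model family supports Traditional Chinese enterprise applications.",
]  # hypothetical knowledge-base snippets

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

question = "How long do customers have to file an RMA return?"
context = "\n".join(retrieve(question))

# The assembled prompt would then be sent to an LLM (e.g. an FFM endpoint).
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```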
Li Zongjie, a manager at ASUS, pointed out that generative AI brings significant benefits to code-writing assistance: it not only improves productivity but also raises the speed and quality of software development through code suggestions and automated code generation, while reducing security issues and letting developers focus on execution. Taiwan AI Cloud has also integrated the open-source Code Llama model into AFS, giving programmers a better development environment and making software development more efficient and accurate.
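As a hedged sketch of this kind of code-completion assistance, the public open-source Code Llama checkpoint can be queried through the Hugging Face transformers pipeline; the article does not describe how AFS exposes the model, so no AFS-specific API is shown.

```python
# Sketch of code completion with the open-source Code Llama model via the
# Hugging Face transformers pipeline. Generic usage of the public checkpoint;
# not the AFS integration mentioned in the article.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-7b-hf",  # public Code Llama checkpoint
    device_map="auto",
)

prompt = (
    "# Python function that checks whether a list of part numbers is sorted\n"
    "def is_sorted(parts: list[str]) -> bool:\n"
)
completion = generator(prompt, max_new_tokens=128, do_sample=False)

print(completion[0]["generated_text"])
```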
Chen Jianliang, COO of Chuangxin Smart, added that generative AI projects require high-performance GPU chips, resulting in high overall costs and power consumption and posing a significant challenge to enterprises. An AI accelerator designed specifically for generative AI can improve computing speed and energy efficiency, offering accuracy, power savings, and scalability; this makes large language model inference simpler and more accessible and lowers the barrier to entry for AI projects.

Edge AI Ecosystem Expert Symposium (from left to right): moderator Huang Yiping, Vice President of DIGITIMES; Chen Chung-cheng, Chief Technology Officer of Taiwan AI Cloud; Cheng Chih-cheng, Commercial Business Director at Intel; and Ho Wen-chen, Director of the Software Institute at the Institute for Information Industry.
During the expert dialogue, Huang Yiping, Vice President of DIGITIMES, pointed out that Taiwan AI Cloud has taken the lead in launching complete FFM and generative AI solutions. In the future, more than half of AI applications are expected to run at the edge, Taiwan's digital services will flourish, and the AI Foundry will become an important force supporting overall economic growth.
Ho Wen-chen, director of the Software Institute at the Institute for Information Industry (III), believes that as the world becomes increasingly familiar with large language models, more and more people are starting to shrink these models to enable inference on personal computers in order to expand the scope of generative AI applications. Taiwan is very strong in the hardware field and is expected to gain a market-leading advantage as the AI PC era arrives.
With the rise of generative AI, promoting inclusive AI and enabling more people to enjoy the benefits of AI technology necessitates a shift towards hybrid cloud AI architectures. The Hybrid GenAI ecosystem built by Taiwan AI Cloud in collaboration with its partners will be a major driving force for the development of Taiwan's industries.
For more information, please see:
[Source: DIGITIMES]