TWS AIHPC Online Technical Forum: Best Practices for Performing Image Streaming Inference in a Containerized Environment (AIHPC x OneAI x K8S)

Classification: Events
The 2023 TWS AIHPC Technology Forum series will walk you through the advantages and importance of TWCC's AIHPC high-speed computing for AI deep learning!

In the future, cloud-native platforms will become the core resource for a new generation of digital services, hosting 95% of digital workloads.
Gartner has identified Cloud-Native Platforms (CNPs) as one of the top 12 strategic technology trends for enterprises in 2022, estimating that by 2025, over 95% of digital workloads worldwide will be running on these platforms. Cloud-native technologies are built on containerization and centered on the Kubernetes container management platform. On this foundation, enterprises can implement CI/CD (continuous integration and continuous deployment) and further transition to a microservices architecture, allowing their applications to scale, update, and migrate rapidly through the elasticity of the cloud and accelerating product development and deployment.

AIHPC x OneAI x K8S: The Best Tools for Enhancing AI Application Inference Development
The first AIHPC online technical forum of 2023 is delighted to invite two leading experts, who will demonstrate how to train models on TWCC OneAI, establish a Kubernetes environment in a virtual machine with GOC Nomos for inference, and leverage NVIDIA Triton Inference Server as an advanced serving infrastructure for additional gains. GOC Nomos adopts a Kubernetes cloud-native architecture and uses an "Infrastructure as Code" approach to achieve rapid deployment and continuous model updates, improving the recognition performance of AI inference services. Free from the complexity of traditional architectures, it supports multiple Kubernetes deployment environments, accelerating and enhancing the productivity of AI teams.
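To make the "Infrastructure as Code" idea concrete, the sketch below declares a Triton Inference Server deployment with the official Kubernetes Python client and applies it to a cluster. It is a minimal illustration of the general cloud-native pattern under assumed names (the image tag, namespace, and labels are examples), not the GOC Nomos API.

```python
# Minimal "Infrastructure as Code" sketch: the desired state of an inference
# service is declared in code and applied to a Kubernetes cluster.
# Uses the official `kubernetes` Python client; image tag, namespace, and
# labels are illustrative assumptions, not a GOC Nomos interface.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig (e.g. ~/.kube/config)

# Declare the desired state: one replica of a Triton Inference Server pod.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="triton-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "triton"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "triton"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="triton",
                        image="nvcr.io/nvidia/tritonserver:23.01-py3",  # example tag
                        args=["tritonserver", "--model-repository=/models"],
                        ports=[client.V1ContainerPort(container_port=8000)],
                    )
                ]
            ),
        ),
    ),
)

# Applying the declaration creates (and can later update) the running service,
# which is what enables rapid deployment and continuous model updates.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```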

| Date: February 23, 2023 (Thursday)
| Time: 15:00-16:30
| Target audience: Data scientists, AI engineers, development and operations engineers, cloud platform (next-generation IT) managers, and architects who want to embrace cloud-native technologies.
You will learn:

  • Train an object detection model with the YOLOv4 framework through the TWCC OneAI service.
  • Use GOC Nomos for unified Kubernetes management, rapid deployment and monitoring of AI inference cloud services, and object detection on real-time video streams.
  • Deploy Triton Inference Server in a Kubernetes environment to run multi-version model inference services (see the client sketch after this list).
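
As a taste of the multi-version serving topic in the last item, here is a minimal sketch of querying one specific model version on a running Triton Inference Server with the official tritonclient HTTP API. The model name, version number, and tensor names ("yolov4", "input_1", "detections") are placeholders; the real names come from the model repository's config.pbtxt.

```python
# Minimal sketch: call one specific version of a model that Triton serves
# side by side with other versions. Model/tensor names are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy input shaped like a single preprocessed video frame.
frame = np.zeros((1, 3, 608, 608), dtype=np.float32)
infer_input = httpclient.InferInput("input_1", list(frame.shape), "FP32")
infer_input.set_data_from_numpy(frame)

# model_version="2" pins the request to one of the versions Triton serves;
# omitting it falls back to the repository's version policy.
result = client.infer(
    model_name="yolov4",
    model_version="2",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("detections")],
)
print(result.as_numpy("detections").shape)
```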

Organizers: Taiwan Smart Cloud, Gemini Cloud Computing
Co-organizer: Taiwan Cloud & IoT Industry Association

Register now

 
