Self-driving cloud provider ZeroStack is adding an AI-as-a-service capability to its platform. With the new capability, the company will enable its customers to offer users one-click deployment of GPU resources and deep learning frameworks.
Enterprises and MSPs leverage the ZeroStack platform to automate cloud infrastructure, applications, and operations, allowing them to focus on services that accelerate their businesses, simplify operations, and reduce costs.
Artificial intelligence (AI) and machine learning solutions are trending today and reshaping computing experiences. With the availability of modern machine learning and deep learning frameworks like TensorFlow, PyTorch, and MXNet, AI applications have become more viable than ever.
However, enterprises and MSPs often find it difficult to deploy, configure, and run AI frameworks and tools. Managing their interdependencies, versioning, and compatibility with servers and GPUs adds further complexity.
With the new AI-as-a-service capability, ZeroStack aims to give its customers the power to automatically detect GPUs and make them available to users. The capability will also take care of all operating system (OS) and CUDA library dependencies, allowing users to focus on AI development.
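ZeroStack has not published how its GPU detection works, but the general idea can be sketched: a host-side agent might query `nvidia-smi` and parse the result into a GPU inventory that the platform then exposes to users. The sample output string and helper below are illustrative assumptions, not ZeroStack's implementation.

```python
import csv
import io

# Illustrative sample of what
#   nvidia-smi --query-gpu=index,name,memory.total --format=csv,noheader
# might print; in a real deployment this string would come from running
# the command on each host.
SAMPLE_NVIDIA_SMI = """\
0, Tesla V100-SXM2-16GB, 16160 MiB
1, Tesla V100-SXM2-16GB, 16160 MiB
"""

def parse_gpu_inventory(smi_output: str):
    """Parse nvidia-smi CSV query output into a list of GPU descriptors."""
    gpus = []
    for row in csv.reader(io.StringIO(smi_output)):
        if not row:
            continue
        index, name, memory = (field.strip() for field in row)
        gpus.append({"index": int(index), "name": name, "memory": memory})
    return gpus

inventory = parse_gpu_inventory(SAMPLE_NVIDIA_SMI)
print(inventory)  # two GPU descriptors, indices 0 and 1
```

A cloud platform would run this kind of discovery per host and schedule GPU-backed workloads against the resulting inventory.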
“ZeroStack is offering the next level of cloud by delivering a collection of point-and-click service templates,” said Michael Lin, director of product management at ZeroStack. “Our new AI-as-a-service template automates provisioning of key AI tool sets and GPU resources for DevOps organizations.”
Additionally, the company said that users can enable GPU acceleration with dedicated access to multiple GPU resources for order-of-magnitude improvements in inference latency and user responsiveness. GPUs within hosts can be shared across users in a multi-tenant manner.
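The multi-tenant sharing described above implies some allocation policy on each host. As a minimal sketch (the class, slot limit, and least-loaded policy are assumptions for illustration, not ZeroStack's scheduler), a host could cap the number of tenants per GPU and place each new request on the least-loaded device:

```python
class GpuAllocator:
    """Toy allocator that shares a host's GPUs among tenants.

    Each GPU serves at most `slots_per_gpu` tenants at once; a new
    request lands on the least-loaded GPU with a free slot.
    """

    def __init__(self, gpu_indices, slots_per_gpu=2):
        self.slots_per_gpu = slots_per_gpu
        self.assignments = {idx: [] for idx in gpu_indices}

    def allocate(self, tenant: str) -> int:
        # Pick the GPU currently serving the fewest tenants.
        gpu = min(self.assignments, key=lambda idx: len(self.assignments[idx]))
        if len(self.assignments[gpu]) >= self.slots_per_gpu:
            raise RuntimeError("no GPU capacity available on this host")
        self.assignments[gpu].append(tenant)
        return gpu

alloc = GpuAllocator([0, 1], slots_per_gpu=2)
print(alloc.allocate("tenant-a"))  # GPU 0
print(alloc.allocate("tenant-b"))  # GPU 1 (least loaded)
```

A production scheduler would also track GPU memory, handle release of slots, and enforce per-tenant quotas, which is where the fine-grained access control mentioned below comes in.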
To optimize utilization of the new AI-as-a-service capability, administrators of the ZeroStack self-driving cloud will be able to configure and scale GPU resources and apply fine-grained access control for end users.