At Mobile World Congress (MWC) 2019, Microsoft unveiled a new software development kit (SDK) aimed at helping developers and enterprises build computer vision and speech models.
Called Azure Kinect DK, the new developer kit and PC peripheral combines advanced AI sensors: a spatial microphone array, a video camera, a depth sensor, and an orientation sensor.
The computer vision and speech models built with Azure Kinect DK can be applied across industries such as health care, retail, and manufacturing to deliver better patient care, build seamless shopping experiences, and enhance training.
The tech giant launched the original Kinect sensor for the Xbox 360 around a decade ago, but it eventually failed to gain traction, and Microsoft ended production in 2017. Now the technology is back, powered by artificial intelligence (AI) and the cloud.
“Azure Kinect is an intelligent edge device that doesn’t just see and hear but understands the people, the environment, the objects and their actions. The level of accuracy you can achieve is unprecedented,” said Julia White, Corporate Vice President, Microsoft, at MWC 2019.
Azure Kinect DK is less than half the size of Kinect for Windows v2. It features a 1-MP depth sensor with wide and narrow field-of-view (FOV) options, plus an accelerometer and gyroscope for sensor orientation and spatial tracking.
A seven-microphone array enables customers to capture speech and sound from far field, while a 12-MP RGB video camera provides a color stream that matches the depth stream.
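To illustrate what a depth stream provides, the sketch below back-projects a depth image (per-pixel distances in millimeters) into a 3D point cloud using a simple pinhole camera model. The intrinsics (`FX`, `FY`, `CX`, `CY`) are hypothetical values for illustration only; a real Azure Kinect application would read the calibrated intrinsics from the device via the Sensor SDK.

```python
import numpy as np

# Hypothetical pinhole intrinsics, for illustration only -- real calibration
# values come from the Azure Kinect device itself.
FX, FY = 504.0, 504.0   # focal lengths in pixels (assumed)
CX, CY = 512.0, 512.0   # principal point (assumed 1024x1024 depth image)

def depth_to_points(depth_mm: np.ndarray) -> np.ndarray:
    """Back-project a depth image (millimeters) to an (N, 3) point cloud
    in meters. Pixels with depth 0 (no sensor return) are dropped."""
    v, u = np.nonzero(depth_mm)                      # rows/cols of valid pixels
    z = depth_mm[v, u].astype(np.float64) / 1000.0   # mm -> meters
    x = (u - CX) * z / FX                            # pinhole back-projection
    y = (v - CY) * z / FY
    return np.column_stack((x, y, z))

# Tiny synthetic example: a 2x2 depth image with one valid pixel at 1 m.
depth = np.zeros((2, 2), dtype=np.uint16)
depth[0, 0] = 1000
points = depth_to_points(depth)
print(points)
```

Because the RGB camera's color stream is registered to the depth stream, each 3D point recovered this way can also be assigned a color, which is the basis for the colorized point clouds used in body tracking and scene understanding.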
Furthermore, to help developers and enterprises take their Kinect projects to the next level, Microsoft is enabling Azure integration: the advanced sensors on Azure Kinect can be combined with Azure Cognitive Services, and machine learning can be used to train custom models.
Azure Kinect hardware is currently available for preorder in the US and China and will ship by June 27, 2019. Once the hardware ships, Microsoft will open source the SDKs for developers via GitHub.