
Alexia Emmanoulopoulou
on 29 July 2015

ODS Video: OpenStack Interoperability Lab


As OpenStack cloud innovation progresses, we’re focused on finding ways to fully automate OpenStack deployment. As regular readers know, part of this effort involves working with technology vendors across the gamut of hardware, networking and storage solutions to test, validate and guarantee their components. The aim? To drive quality of deployment and scale for the enterprise.

Listen to Corey Bryant from our OpenStack engineering team describe how we’re working with vendors within our OpenStack Interoperability Lab (OIL) to validate code quickly and easily. Hear how speed of deployment is only one of the benefits realised: quality, and quality at scale, are taking significant time and pain out of OpenStack deployments and dramatically hastening time-to-market for a host of companies.
