David Moreau Simard

I joined Red Hat last September to help the RDO community ship a quality OpenStack package distribution faster.

Previously, I worked for Internap, where I helped operate their cloud. There are a lot of challenges involved in deploying and supporting a large-scale OpenStack public cloud.

At Internap, we did extensive integration testing with a representation of our production cloud in development environments. Looking back, we were really just testing limited configurations: we only tested what we were deploying.

Upstream OpenStack, packagers and vendors have to test everything.

Testing OpenStack is hard

  • There are currently almost 50 official OpenStack projects
  • There are over 1,000 source and binary packages built and provided by the official RDO repositories
  • These packages are bundled, integrated or provided by many different installers and vendors
  • There are countless deployment use cases, ranging from simple private clouds to large-scale, complex and highly available public clouds

The OpenStack promise is that you can mix and match projects, components, drivers, software and hardware, and it’ll all work. Every vendor wants their software and hardware to integrate with OpenStack.

These numbers are not going to shrink, and OpenStack will continue to rise in popularity.

How do we make sure that it actually works?

When you submit a patch to an OpenStack project, your code will usually run against a series of automated tests:

Code syntax and guidelines

Does the patch respect the syntax and style guidelines?

OpenStack attracts thousands of contributors from all around the world. By enforcing code guidelines, we make sure that the code base remains as clean and consistent as possible, with the help of tools such as pep8, flake8, bashate, puppet-lint or ansible-lint.
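As a toy illustration, here’s the kind of thing these linters catch automatically. The snippet below is hypothetical, not from any OpenStack project:

```python
import os  # flake8 flags this as F401: 'os' imported but unused


def build_server_name(tenant, index):
    # pep8/flake8 would also flag lines longer than 79 characters (E501)
    # or missing blank lines between definitions (E302).
    return "{0}-instance-{1}".format(tenant, index)


print(build_server_name("demo", 1))
```

None of this requires a human: the gate rejects the patch before a reviewer ever has to comment on style.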

Unit tests

Does the patch introduce any regressions? Does it behave like you think it does?

With a code base as large as OpenStack, you need tests to protect against regressions. You can expect projects to ensure unit test coverage with ostestr, unittest or puppet-rspec to name a few.
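Here’s a minimal sketch of what such a test looks like with Python’s unittest module; the quota function and its values are made up for the example:

```python
import unittest


def quota_remaining(limit, used):
    """Return how much of a quota is left, never below zero."""
    return max(limit - used, 0)


class TestQuotaRemaining(unittest.TestCase):
    def test_remaining(self):
        self.assertEqual(quota_remaining(10, 4), 6)

    def test_never_negative(self):
        # A regression (say, someone drops the max()) fails this test.
        self.assertEqual(quota_remaining(10, 12), 0)


if __name__ == "__main__":
    unittest.main()
```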

Integration tests

When installed with your patch, does the project work by itself? What about with other components in different scenarios?

Perhaps you’re deploying Nova with the KVM hypervisor, or maybe Xen… or even Hyper-V?

Integration tests will install Nova with the submitted patch, spin up a virtual machine and make sure that the patch works against KVM, Xen and Hyper-V (amongst others). You’d hate to be an operator who updates OpenStack only to find out that your Xen servers are broken. Nova developers try to prevent that from happening.
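In spirit, such a test boils down to something like the sketch below. This is deliberately simplified and not the real Tempest API: `client` is a hypothetical wrapper around the Nova API, used here only to show the shape of the scenario:

```python
import time


def test_boot_server(client, image, flavor):
    """Boot a VM with the patched Nova and verify it becomes ACTIVE."""
    server = client.create_server(name="smoke", image=image, flavor=flavor)
    deadline = time.time() + 300  # give the hypervisor five minutes
    while client.get_server(server["id"])["status"] != "ACTIVE":
        if time.time() > deadline:
            raise AssertionError("server never reached ACTIVE")
        time.sleep(5)
    # The same scenario runs against KVM, Xen and Hyper-V gates.
    client.delete_server(server["id"])
```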

Maybe you’d like to improve how puppet-keystone installs and configures Keystone.

The integration tests are green for Ubuntu on your puppet-keystone patch, but they come back with failures on CentOS. Better make sure it works on both distributions before the reviewers can merge your patch!

You’re a Ceph fan and you’re submitting a patch for a cool new feature in Cinder’s Ceph backend.

You’re good to go on your Ceph patch: everything is green! You didn’t break the iSCSI, NFS or other drivers. The patch went through different suites of Tempest runs and different third-party integration tests provided by vendors.

Testing s/OpenStack/everything/ is hard

When the source from every OpenStack project gets pulled to be packaged in RDO, we run additional batches of tests to ensure that what we package works and that nothing is broken.

RDO has its own series of automated tests that will, for example, install OpenStack with Packstack or RDO Manager in various configurations and topologies. This is done with the help of Khaleesi, a framework for testing the large matrix of different possibilities.

Some tests will use Packstack on a single node (“all in one”), others will spread across multiple nodes. Some tests will enforce SELinux, others won’t. Some will run on virtual machines, others will use bare metal servers instead. Neutron will be tested with GRE, VXLAN or VLAN configurations. Some tests will also include popular backend or driver implementations like Ceph.
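To get a feel for how quickly that matrix grows, here’s a back-of-the-envelope enumeration using only the dimensions mentioned above (the real matrix Khaleesi covers is different, and larger):

```python
import itertools

# Illustrative dimensions pulled from the paragraph above, not
# Khaleesi's actual configuration.
installers = ["packstack", "rdo-manager"]
topologies = ["all-in-one", "multi-node"]
selinux = ["enforcing", "permissive"]
hosts = ["virtual-machine", "bare-metal"]
networking = ["gre", "vxlan", "vlan"]
storage = ["lvm", "ceph"]

matrix = list(itertools.product(installers, topologies, selinux,
                                hosts, networking, storage))
print(len(matrix))  # 2 * 2 * 2 * 2 * 3 * 2 = 96 combinations already
```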

There are a lot of external components that need to be tested as part of the distribution too. We tend to forget that software like Apache, MySQL or RabbitMQ can also break or introduce incompatible changes with what we ship. Extensive coverage makes sure that all of this works as well.
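Even a trivial sanity check can catch a broken external service early. A minimal sketch, assuming the services run locally on their default ports (real coverage obviously goes much deeper than a TCP connect):

```python
import socket

# External services mentioned above, with their default ports.
SERVICES = {"apache": 80, "mysql": 3306, "rabbitmq": 5672}


def is_listening(host, port, timeout=3):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for name, port in SERVICES.items():
    print(name, "up" if is_listening("localhost", port) else "DOWN")
```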

Testing trunk is even harder

You don’t want to wait until OpenStack drafts a stable release to start testing what is shipped.

If RDO did that, it would take several weeks to ship our packages. Our users want to try out the shiny new release as soon as possible, and we have to deliver quickly without compromise. We’ve already begun testing packages built from trunk that will be in the Mitaka release due next April.

There are a lot of changes that impact packagers and vendors every release - new, changed or removed package dependencies are just one of them. By testing trunk all the time, we find these changes as they happen and react accordingly.

Trunk tends to break a lot, but that’s the way it is: finding exactly which part of the puzzle changed is often challenging and takes time.

Thanks to everyone involved

Testing is not fun: it’s hard, time-consuming and expensive.

I want to personally extend my thanks to every developer, contributor, vendor and packager who helps make OpenStack as great, reliable and stable as it can be. We couldn’t do it without you.

Thanks guys, keep on rocking.