TripleO HA Federation Proof-of-Concept

Keystone has supported identity federation for several releases. I have been working on a proof-of-concept integration of identity federation in a TripleO deployment. I was able to successfully log in to Horizon via WebSSO, and want to share my notes.

A federation deployment requires changes to the network topology, Keystone, the HTTPD service, and Horizon. The various OpenStack deployment tools will have their own ways of applying these changes. While this proof-of-concept can’t be called production-ready, it does demonstrate that TripleO can support Federation using SAML. From this proof-of-concept, we should be able to deduce the steps needed for a production deployment.

Prerequisites

  • Single physical node – Large enough to run multiple virtual machines.  I only ended up using 3, but scaled up to 6 at one point and ran out of resources.  Tested with 8 CPUs and 32 GB RAM.
  • CentOS 7.2 – Running as the base operating system.
  • FreeIPA – Particularly, the CentOS repackage of Red Hat Identity Management. Running on the base OS.
  • Keycloak – Actually an alpha build of Red Hat SSO, running on the base OS. This was fronted by Apache HTTPD and proxied through ajp://localhost:8109 (a rough sketch of the proxy vhost follows this list). This gave me HTTPS support using the CA certificate from the IPA server, which will be important later when the controller nodes need to talk to the identity provider to set up metadata.
  • TripleO Quickstart – deployed in HA mode, using an undercloud.
    • ./quickstart.sh --config config/general_config/ha.yml ayoung-dell-t1700.test
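
For reference, here is a rough sketch of the Keycloak proxy vhost mentioned above. The file name, certificate paths, and exact directives are assumptions reconstructed from my setup, not a verbatim copy:

sudo tee /etc/httpd/conf.d/keycloak_proxy.conf > /dev/null <<'EOF'
<VirtualHost *:443>
    ServerName identity.ayoung-dell-t1700.test
    SSLEngine on
    # Certificate and key issued by the IPA server for this host
    SSLCertificateFile /etc/pki/tls/certs/identity.crt
    SSLCertificateKeyFile /etc/pki/tls/private/identity.key
    # Proxy everything through to Keycloak's AJP connector
    ProxyPass / ajp://localhost:8109/
    ProxyPassReverse / ajp://localhost:8109/
</VirtualHost>
EOF
sudo systemctl restart httpd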

In addition, I did some sanity checking of the cluster by deploying the overcloud using the quickstart helper script, and then tore it down using heat stack-delete overcloud.
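
The teardown-and-redeploy cycle, assuming the quickstart's stackrc is in its usual location, looked roughly like this:

source /home/stack/stackrc
heat stack-delete overcloud
# wait for the stack to disappear before redeploying
heat stack-list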

Reproducing Results

When doing development testing, you can expect to rebuild and tear down your cloud on a regular basis.  When you redeploy, you want to make sure that the changes are just the delta from what you tried last time.  As the number of artifacts grew, I found I needed to maintain a repository of files that included the environment file passed to openstack overcloud deploy.  To manage these, I created a git repository in /home/stack/deployment. Inside that directory, I copied the overcloud-deploy.sh and deploy_env.yaml files generated by the overcloud deployment, and modified them accordingly.
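
A minimal sketch of that setup follows; the source paths are placeholders for wherever your quickstart run left the generated files:

mkdir -p /home/stack/deployment
cd /home/stack/deployment
git init
# copy in the generated deployment artifacts (source locations are placeholders)
cp /home/stack/overcloud-deploy.sh .
cp /tmp/deploy_env.yaml .
git add overcloud-deploy.sh deploy_env.yaml
git commit -m "Baseline overcloud deployment artifacts"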

In my version of overcloud-deploy.sh, I wanted to remove the deploy_env.yaml generation to avoid confusion during later deployments.  I also wanted to preserve the environment file across deployments (and did not want it in /tmp). This file has three parts: the Keystone configuration values, the HTTPS/network setup, and the configuration for a single-node deployment. This last part was essential for development, as chasing down fixes across three HA nodes was time-consuming and error-prone. The DNS server value I used is particular to my deployment, and reflects the IPA server running on the base host.

For reference, I’ve included those files at the end of this post.

Identity Provider Registration and Metadata

While it would have been possible to run the registration of the identity provider on one of the nodes, the Heat-managed deployment process does not provide a clean way to gather the resulting files and package them for deployment to other nodes.  While I ended up deploying on a single node for development, it took me a while to realize that was possible, and by then I had already worked out an approach that calls the registration from the undercloud node and produces a tarball.

As a result, I created a script, again to allow for reproducing this in the future:

register_sp_rhsso.sh

#!/bin/sh 

basedir=$(dirname $0)
ipa_domain=`hostname -d`
rhsso_master_admin_password=FreeIPA4All

keycloak-httpd-client-install \
   --client-originate-method registration \
   --force \
   --mellon-https-port 5000 \
   --mellon-hostname openstack.$ipa_domain  \
   --mellon-root '/v3' \
   --keycloak-server-url https://identity.$ipa_domain  \
   --keycloak-auth-role root-admin \
   --keycloak-admin-password  $rhsso_master_admin_password \
   --app-name v3 \
   --keycloak-realm openstack \
   --log-file $basedir/rhsso.log \
   --httpd-dir $basedir/rhsso/etc/httpd \
   -l "/v3/auth/OS-FEDERATION/websso/saml2" \
   -l "/v3/auth/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/websso" \
   -l "/v3/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/auth"

This does not quite generate the right paths: the $basedir-relative paths are not what we want at deployment time, so I had to post-edit the generated file: rhsso/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf

Specifically, the path:
./rhsso/etc/httpd/saml2/v3_keycloak_openstack_idp_metadata.xml

has to be changed to:
/etc/httpd/saml2/v3_keycloak_openstack_idp_metadata.xml
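
A one-liner along these lines (run from the directory holding the generated rhsso tree) performs the same edit, though I made the change by hand:

sed -i 's|\./rhsso/etc/httpd/saml2/|/etc/httpd/saml2/|' \
    rhsso/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf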

While I created a tarball that I then manually deployed, the preferred approach would be to use tripleo-heat-templates/puppet/deploy-artifacts.yaml to deploy them. The problem I faced is that the generated files include Apache module directives from mod_auth_mellon.  If mod_auth_mellon has not been installed into the controller, the Apache server won’t start, and the deployment will fail.
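
The manual path I took looked roughly like the following; the controller address and tarball name are placeholders rather than anything canonical:

# on the undercloud, package the generated configuration
tar -czf rhsso_config.tar.gz -C rhsso etc

# on each controller (heat-admin is the default TripleO user)
scp rhsso_config.tar.gz heat-admin@192.0.2.10:
ssh heat-admin@192.0.2.10 'sudo tar -xzf rhsso_config.tar.gz -C /'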

Federation Operations

The Federation setup requires a few calls. I documented them in Rippowam, and attempted to reproduce them locally using Ansible and the Rippowam code. I was not a purist though, as A) I needed to get this done and B) the end solution is not going to use Ansible anyway. The general steps I performed:

  • yum install mod_auth_mellon
  • Copy over the metadata tarball, expand it, and tweak the configuration (could be done prior to building the tarball).
  • Run the following commands.
openstack identity provider create --remote-id https://identity.{{ ipa_domain }}/auth/realms/openstack rhsso
openstack mapping create --rules ./mapping_rhsso_saml2.json rhsso_mapping
openstack federation protocol create --identity-provider rhsso --mapping rhsso_mapping saml2

The mapping file is the one from Rippowam.
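
The actual rules live in the Rippowam repository; purely for orientation, a minimal mapping of the same general shape (the remote attribute name here is an assumption) might be written like this:

cat > mapping_rhsso_saml2.json <<'EOF'
[
  {
    "local": [
      {"user": {"name": "{0}"}},
      {"group": {"name": "demo", "domain": {"name": "Default"}}}
    ],
    "remote": [
      {"type": "MELLON_NAME_ID"}
    ]
  }
]
EOF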

The keystone service calls only need to be performed once, as they are stored in the database. The expansion of the tarball needs to be performed on every node.

Dashboard

As in previous Federation setups, I needed to modify the values used for WebSSO. The values I ended up setting in /etc/openstack-dashboard/local_settings resembled this:

OPENSTACK_KEYSTONE_URL = "https://openstack.ayoung-dell-t1700.test:5000/v3"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
WEBSSO_ENABLED = True
WEBSSO_INITIAL_CHOICE = "saml2"
WEBSSO_CHOICES = (
    ("saml2", _("Rhsso")),
    ("credentials", _("Keystone Credentials")),
)

Important: Make sure that the auth URL uses a fully qualified domain name that matches the value in the signed certificate.
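
A quick way to check is to pull the certificate off the public endpoint and compare its subject with the URL you configured:

echo | openssl s_client -connect openstack.ayoung-dell-t1700.test:5000 2>/dev/null \
    | openssl x509 -noout -subject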

Redirect Support for SAML

Differences between how HTTPD and HAProxy operate require a few configuration changes.  Keystone runs internally over HTTP, not HTTPS.  The SAML exchanges with the Identity Provider, however, are public and carry cryptographic data, so they need to be protected using HTTPS.  As a result, HAProxy needs to expose an HTTPS-based endpoint for the Keystone public service.  In addition, the redirects that come from mod_auth_mellon need to reflect the public protocol, hostname, and port.

The solution I ended up with involved changes on both sides:

In haproxy.cfg, I modified the keystone public stanza so it looks like this:

listen keystone_public
bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 10.0.0.4:5000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.16.2.4:5000 transparent
redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
rsprep ^Location:\ http://(.*) Location:\ https://\1

While this was necessary, it also proved to be insufficient. When the signed assertion from the Identity Provider is posted to the Keystone server, mod_auth_mellon checks that the destination value matches what it expects the hostname should be. Consequently, in order to get this to match in the file:

/etc/httpd/conf.d/10-keystone_wsgi_main.conf

I had to set the following:

<VirtualHost 172.16.2.6:5000>
ServerName https://openstack.ayoung-dell-t1700.test

Note that the protocol is set to https even though the Keystone server is handling HTTP. This might break elsewhere. If it does, then the Keystone configuration in Apache may have to be duplicated.
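
Since haproxy and httpd are pacemaker-managed in an HA TripleO deployment, I would expect edits like the above to require restarting the corresponding resources; the resource names below are from memory and may vary by release. On a non-pacemaker node, plain systemctl restarts would do:

sudo pcs resource restart haproxy-clone
sudo pcs resource restart httpd-clone

# or, outside of pacemaker:
sudo systemctl reload haproxy
sudo systemctl restart httpd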

Federation Mapping

For the WebSSO login to successfully complete, the user needs to have a role on at least one project. The Rippowam mapping file maps the user to the Member role in the demo group, so the most straightforward steps to complete are to add a demo group, add a demo project, and assign the Member role on the demo project to the demo group. All this should be done with a v3 token:

openstack group create demo
openstack role create Member
openstack project create demo
openstack role add --group demo --project demo Member
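
By "a v3 token" I mean running these commands with v3-scoped admin credentials; a minimal environment for that (credentials are placeholders) looks something like:

export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=https://openstack.ayoung-dell-t1700.test:13000/v3
export OS_USERNAME=admin
export OS_PASSWORD=<admin password from overcloudrc>
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default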

Complete helper files

Below are the complete files that were too long to put inline.

overcloud-deploy.sh

#!/bin/bash
# Simple overcloud deploy script

set -eux

# Source in undercloud credentials.
source /home/stack/stackrc

# Wait until there are hypervisors available.
while true; do
    count=$(openstack hypervisor stats show -c count -f value)
    if [ $count -gt 0 ]; then
        break
    fi
done

deploy_status=0

# Deploy the overcloud!
openstack overcloud deploy --debug --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/deployment/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org -e $HOME/deployment/deploy_env.yaml   --force-postconfig "$@"    || deploy_status=1

# We don't always get a useful error code from the openstack deploy command,
# so check `heat stack-list` for a CREATE_FAILED status.
if heat stack-list | grep -q 'CREATE_FAILED'; then
    deploy_status=1

    for failed in $(heat resource-list \
        --nested-depth 5 overcloud | grep FAILED |
        grep 'StructuredDeployment ' | cut -d '|' -f3)
    do heat deployment-show $failed > failed_deployment_$failed.log
    done
fi

exit $deploy_status

deploy_env.yaml

parameter_defaults:
  controllerExtraConfig:
    keystone::using_domain_config: true
    keystone::config::keystone_config:
      identity/domain_configurations_from_database:
        value: true
      auth/methods:
        value: external,password,token,oauth1,saml2
      federation/trusted_dashboard:
        value: http://openstack.ayoung-dell-t1700.test/dashboard/auth/websso/
      federation/sso_callback_template:
        value: /etc/keystone/sso_callback_template.html
      federation/remote_id_attribute:
        value: MELLON_IDP

    # In releases before Mitaka, HeatWorkers doesn't modify
    # num_engine_workers, so handle via heat::config 
    heat::config::heat_config:
      DEFAULT/num_engine_workers:
        value: 1
    heat::api_cloudwatch::enabled: false
    heat::api_cfn::enabled: false
  HeatWorkers: 1
  CeilometerWorkers: 1
  CinderWorkers: 1
  GlanceWorkers: 1
  KeystoneWorkers: 1
  NeutronWorkers: 1
  NovaWorkers: 1
  SwiftWorkers: 1
  CloudName: openstack.ayoung-dell-t1700.test
  CloudDomain: ayoung-dell-t1700.test
  DnsServers: 10.18.57.26


  #TLS Setup from enable-tls.yaml
  PublicVirtualFixedIPs: [{'ip_address':'10.0.0.4'}]
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    #certificate removed for space
    -----END CERTIFICATE-----

  SSLIntermediateCertificate: ''
  SSLKey: |
    -----BEGIN RSA PRIVATE KEY-----
    #key removed for space
    -----END RSA PRIVATE KEY-----

  EndpointMap:
    AodhAdmin: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhInternal: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhPublic: {protocol: 'https', port: '13042', host: 'CLOUDNAME'}
    CeilometerAdmin: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerInternal: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerPublic: {protocol: 'https', port: '13777', host: 'CLOUDNAME'}
    CinderAdmin: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderInternal: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderPublic: {protocol: 'https', port: '13776', host: 'CLOUDNAME'}
    GlanceAdmin: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlanceInternal: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlancePublic: {protocol: 'https', port: '13292', host: 'CLOUDNAME'}
    GnocchiAdmin: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiInternal: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiPublic: {protocol: 'https', port: '13041', host: 'CLOUDNAME'}
    HeatAdmin: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatInternal: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatPublic: {protocol: 'https', port: '13004', host: 'CLOUDNAME'}
    HorizonPublic: {protocol: 'https', port: '443', host: 'CLOUDNAME'}
    KeystoneAdmin: {protocol: 'http', port: '35357', host: 'IP_ADDRESS'}
    KeystoneInternal: {protocol: 'http', port: '5000', host: 'IP_ADDRESS'}
    KeystonePublic: {protocol: 'https', port: '13000', host: 'CLOUDNAME'}
    NeutronAdmin: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronInternal: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronPublic: {protocol: 'https', port: '13696', host: 'CLOUDNAME'}
    NovaAdmin: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaInternal: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaPublic: {protocol: 'https', port: '13774', host: 'CLOUDNAME'}
    NovaEC2Admin: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Internal: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Public: {protocol: 'https', port: '13773', host: 'CLOUDNAME'}
    NovaVNCProxyAdmin: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyInternal: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyPublic: {protocol: 'https', port: '13080', host: 'CLOUDNAME'}
    SaharaAdmin: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaInternal: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaPublic: {protocol: 'https', port: '13386', host: 'CLOUDNAME'}
    SwiftAdmin: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftInternal: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftPublic: {protocol: 'https', port: '13808', host: 'CLOUDNAME'}

resource_registry:
  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml

parameters:
   ControllerCount: 1 

3 thoughts on “TripleO HA Federation Proof-of-Concept”

  1. Hi Adam,

    one of the things that I have noticed with federation is that heat (+Sahara, Murano) does not work; heat can’t find the federated user’s role – I’m guessing because it’s using the python-keystoneclient? – for whatever reason, it doesn’t find the user’s role via groups. Wondering if we can change the auth plugin in the heat.conf [trustee] section to support federated users creating trusts – have you ever seen that?

  2. I did file a bug with the distribution I’m using. Heat uses keystoneclient and throws a 404 finding the federated user role assignment, and subsequently cannot create a trust – of course, the user has no role at that level – presumably I will need to wait for the shadow-users mapping. Certainly for the distribution I’m using (Mirantis), federated users cannot create keystone trusts, in Mitaka and now in Newton. I thought perhaps it was because heat was set to use “auth_plugin=password” in the [trustee] section. Anyway, I felt like I was getting into developer turf and rolled back to LDAP (boo!). Here’s what happened, if you are interested:
    https://bugs.launchpad.net/fuel/+bug/1626046
