FreeIPA for TripleO

My last post showed how to allocate an additional VM for TripleO. Now I’m going to go through the steps to deploy FreeIPA on it. However, since I went through all of the effort to write Ossipee and Rippowam, I am going to use those to do the heavy lifting.

This one is pretty grungy. I’m going to generate a punch-list from it, and will continue to clean up the steps as I go, but first I want to just get it working.

To start, turn the Ironic node into a server:

openstack server create  --flavor compute  --image overcloud-full  --key-name default idm
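
It can take a couple of minutes for Nova and Ironic to bring the instance to ACTIVE. You can poll its status while you wait (just a sanity check):

openstack server show idm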

Now, in order to run Ansible, we need a custom inventory. I’ve done a small hack to Ossipee to get it to generate the appropriate inventory from the Nova servers in TripleO’s undercloud.

Ossipee needs to use the V3 version of the Keystone API, so let’s convert the V2 stackrc into a V3 one and source that. Grab the script from http://adam.younglogic.com/2016/03/v3fromv2/ and run:

./v3fromv2.sh stackrc > stackrc.v3
. ./stackrc.v3 

A good way to check that you are using V3 is to do a V3-only operation, like listing domains:

openstack domain list
+----------------------------------+------------+---------+--------------------+
| ID                               | Name       | Enabled | Description        |
+----------------------------------+------------+---------+--------------------+
| 33c86e573f094787adb2e808c723dcca | heat_stack | True    |                    |
| default                          | Default    | True    | The default domain |
+----------------------------------+------------+---------+--------------------+

Grab Ossipee and run the inventory generator.

git clone https://github.com/admiyo/ossipee.git
python ./ossipee/ossipee-inventory.py > inventory.ini

This generates an inventory file that looks roughly like the ones Ossipee created before, but uses the same host group names as the rest of TripleO:

[idm]
10.149.2.15
[idm:vars]
ipa_realm=AYOUNG.DELLT1700.TEST
cloud_user=heat-admin
ipa_server_password=FreeIPA4All
ipa_domain=ayoung.dellt1700.test
ipa_forwarder=192.168.23.1
ipa_admin_user_password=FreeIPA4All
ipa_nova_join=False
nameserver=192.168.52.4

[overcloud-controller-0]
10.149.2.13
[overcloud-controller-0:vars]
ipa_realm=AYOUNG.DELLT1700.TEST
cloud_user=heat-admin
ipa_server_password=FreeIPA4All
ipa_domain=ayoung.dellt1700.test
ipa_forwarder=192.168.23.1
ipa_admin_user_password=FreeIPA4All
ipa_nova_join=False
nameserver=192.168.52.4

[overcloud-novacompute-0]
10.149.2.12
[overcloud-novacompute-0:vars]
ipa_realm=AYOUNG.DELLT1700.TEST
cloud_user=heat-admin
ipa_server_password=FreeIPA4All
ipa_domain=ayoung.dellt1700.test
ipa_forwarder=192.168.23.1
ipa_admin_user_password=FreeIPA4All
ipa_nova_join=False
nameserver=192.168.52.4

In addition, I think the ipa_forwarder values are deployment-specific, and I have them wrong. Look at resolv.conf on one of the overcloud nodes to see what they should be:

$ ssh heat-admin@10.149.2.12 cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
nameserver 10.149.2.5

The nameserver value should be the IP address of the newly created idm VM.

openstack server list
+--------------------------------------+-------------------------+--------+----------------------+
| ID                                   | Name                    | Status | Networks             |
+--------------------------------------+-------------------------+--------+----------------------+
| f81e7122-5d8e-4377-b855-80c28116197d | idm                     | ACTIVE | ctlplane=10.149.2.15 |
| c1bf48cb-659f-4f9f-aa9d-1c0d4bcae06d | overcloud-controller-0  | ACTIVE | ctlplane=10.149.2.13 |
| c1e2069f-4ef1-461b-86a9-2fd2bb321a8a | overcloud-novacompute-0 | ACTIVE | ctlplane=10.149.2.12 |
+--------------------------------------+-------------------------+--------+----------------------+

Obviously, Ossipee needs some work here, but this is likely going to be replaced by Heat work shortly. Anyway, adjust the IP addresses accordingly.
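
In the meantime, a quick way to patch the generated values in place (a rough sketch using my addresses from above; substitute your own):

sed -i 's/^ipa_forwarder=.*/ipa_forwarder=10.149.2.5/' inventory.ini
sed -i 's/^nameserver=.*/nameserver=10.149.2.15/' inventory.ini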

Now grab Rippowam:

git clone https://github.com/admiyo/rippowam.git

And install Ansible from EPEL. Note that this needs to be two calls, as the first installs the repo used by the second.

sudo yum -y install epel-release
sudo yum -y install ansible
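
Confirm the install:

ansible --version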

Rippowam still has the host group as ipa in the ipa playbook. You can change either the inventory or the Rippowam code to match. I changed Rippowam like this:

diff --git a/ipa.yml b/ipa.yml
index e0ea50c..c17c2b5 100644
--- a/ipa.yml
+++ b/ipa.yml
@@ -1,10 +1,10 @@
 
-- hosts: ipa
+- hosts: idm
   remote_user: "{{ cloud_user }}"
   tags: all
   tasks: []
 
-- hosts: ipa
+- hosts: idm
   sudo: yes
   remote_user: "{{ cloud_user }}"
   tags:
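
Alternatively, you could leave ipa.yml alone and alias the group in the inventory instead, using Ansible’s group-of-groups syntax (an untested sketch):

[ipa:children]
idm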

The inventory file is set up for later, when Ansible needs to talk to the overcloud controllers; Heat sets the cloud user on those nodes to heat-admin. Create the same user on the idm machine (the last two commands just verify that you can log in as heat-admin):

ssh centos@10.149.2.15 sudo useradd -m  heat-admin  -G wheel
ssh centos@10.149.2.15 sudo mkdir /home/heat-admin/.ssh
ssh centos@10.149.2.15 sudo chown heat-admin:heat-admin /home/heat-admin/.ssh
ssh centos@10.149.2.15 sudo cp /home/centos/.ssh/authorized_keys /home/heat-admin/.ssh/
ssh centos@10.149.2.15 sudo chown heat-admin:heat-admin /home/heat-admin/.ssh/authorized_keys
ssh heat-admin@10.149.2.15 ls
ssh heat-admin@10.149.2.15 pwd

I also manually went in and tweaked the /etc/sudoers values to let password-less sudo work for heat-admin. Not an approach I would suggest long term, but these are just my current development notes.
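
For reference, the tweak amounts to something like this standard NOPASSWD drop-in (a sketch; the centos user on the image already has password-less sudo):

ssh centos@10.149.2.15 'echo "heat-admin ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/heat-admin'
ssh centos@10.149.2.15 sudo chmod 0440 /etc/sudoers.d/heat-admin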

Make sure that Ansible works:

 ansible -i inventory.ini --user heat-admin --sudo idm -m setup

Output not pasted here for brevity.

The machine needs an FQDN to deploy. I am going to continue the pattern from before, where the cluster name carries some aspect of the user name. Since the baremetal host is ayoung-dell-t1700, this cluster will be ayoung-dell-t1700.test and the FQDN for this host will be idm.ayoung-dell-t1700.test. Put that FQDN into /etc/hostname on the idm machine and apply it:

sudo vi /etc/hostname
sudo hostname `cat /etc/hostname`
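
On a systemd-based image, the equivalent one-liner would be (assuming hostnamectl is present in overcloud-full):

sudo hostnamectl set-hostname idm.ayoung-dell-t1700.test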

Run the ipa playbook:

 ansible-playbook -i inventory.ini rippowam/ipa.yml 
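
Before moving on, a quick smoke test from the idm machine, using the admin password from the inventory (just a sanity check):

ssh heat-admin@10.149.2.15
echo FreeIPA4All | kinit admin
ipa user-show admin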

Assuming that runs successfully, do the same kind of thing with the keycloak.yml playbook: edit it to change the host group to idm, and run it.

Also, it seems I have some typos in roles/keycloak/tasks/main.yml:

diff --git a/roles/keycloak/tasks/main.yml b/roles/keycloak/tasks/main.yml
index 59f67c7..cce462d 100644
--- a/roles/keycloak/tasks/main.yml
+++ b/roles/keycloak/tasks/main.yml
@@ -114,7 +114,7 @@
     - keycloak
   copy: src=keycloak-proxy.conf 
         dest=/etc/httpd/conf.d/keycloak-proxy.conf 
-        owner=root group=rootmode="u=rw,g=r,o=r"
+        owner=root group=root mode="u=rw,g=r,o=r"
@@ -122,5 +122,5 @@
     - keycloak
   service: name=httpd
            enabled=yes
-           state=irestarted
+           state=restarted
 

Fix those and then:

 ansible-playbook -i inventory.ini rippowam/keycloak.yml

Ugh, that was messy. Need to clean it up. But it did work.

Now, how do we go look at our newly deployed servers? The best bet seems to be sshuttle.

From the desktop (not the undercloud), tunnel the ctlplane network through the undercloud host:

sshuttle -e "ssh -F $HOME/.quickstart/ssh.config.ansible" -r undercloud -v 10.149.2.0/24

In order to point a browser at it, we need an entry in /etc/hosts. For me:

10.149.2.15 idm.ayoung-dell-t1700.test
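
With the tunnel up and the hosts entry in place, the FreeIPA web UI should answer; a quick check from the desktop (assuming FreeIPA’s default /ipa/ui path):

curl -sk https://idm.ayoung-dell-t1700.test/ipa/ui/ | head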

Keycloak needs to be initialized. Start by SSHing to the idm machine, and then:

$ cd /var/lib/keycloak/keycloak-1.9.0.Final  
$ sudo bin/add-user.sh -u admin
Press ctrl-d (Unix) or ctrl-z (Windows) to exit
Password: 
Added 'admin' to '/var/lib/keycloak/keycloak-1.9.0.Final/standalone/configuration/keycloak-add-user.json', restart server to load user
$ sudo systemctl restart keycloak.service
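
To verify Keycloak came back up, check the service and hit the default /auth context through the httpd proxy (assuming keycloak-proxy.conf from the role forwards it):

sudo systemctl status keycloak.service
curl -sk https://idm.ayoung-dell-t1700.test/auth/ | head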
