Sunday, February 15, 2015

Testing the most recent Nova-Docker driver on Ubuntu 14.04 in a devstack environment recoverable between reboots

*******************************************************************************
UPDATE as of 02/25/2015: I compared three devstack installs:

        First  as of 02/15/2015
        Second as of 02/22/2015
        Third  as of 02/24/2015

The third install generates the file /home/ubuntu/stack/nova/openstack-common.conf
without the entry "module=log", and that openstack-common.conf works for
stack.sh installing the libvirt driver (I had no doubt of that, even before testing).
Check the n-cpu screen log once more:
2015-02-24 06:04:01.780 TRACE nova.virt.driver File "/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 41, in
2015-02-24 06:04:01.780 TRACE nova.virt.driver from nova.openstack.common import log
2015-02-24 06:04:01.780 TRACE nova.virt.driver ImportError: cannot import name log

Meanwhile, the files driver.py, vifs.py, and network.py in the cloned nova-docker
tree under /nova-docker/novadocker/virt/docker still contain the line
 from nova.openstack.common import log
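Whether that incubator sync happened can be checked directly against the nova tree. A minimal sketch (the `log_module_listed` helper name and the default path are my own assumptions):

```shell
# Returns success when openstack-common.conf in the given nova tree still
# lists the oslo-incubator log module that novadocker imports.
log_module_listed() {
  grep -q '^module=log$' "$1/openstack-common.conf" 2>/dev/null
}

NOVA_DIR=${NOVA_DIR:-$HOME/stack/nova}
if log_module_listed "$NOVA_DIR"; then
  echo "module=log present: 'from nova.openstack.common import log' should work"
else
  echo "module=log missing: novadocker's import of nova.openstack.common.log will fail"
fi
```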
*******************************************************************************
UPDATE as of 02/24/2015: following the same blog entry word for word on a fresh installation (new VM), stack.sh fails:
+ service=n-cpu.failure
2015-02-24 10:07:05.567 | + service=n-cpu
2015-02-24 10:07:05.568 | + echo 'Error: Service n-cpu is not running'
2015-02-24 10:07:05.568 | Error: Service n-cpu is not running
2015-02-24 10:07:05.569 | + '[' -n /home/ubuntu/stack/status/stack/n-cpu.failure ']'

Here is the n-cpu screen log:
*******************************************************************************
UPDATE as of 02/22/2015: the procedure described at http://blog.oddbit.com/2015/02/11/installing-novadocker-with-devstack/
can still be performed, but the resulting Nova-Docker driver is no longer functional.
Since 02/21 I have attempted three fresh installs as described by Lars Kellogg-Stedman;
every time ./stack.sh completed OK. However, every launched Nova-Docker container was unreachable via its floating IP and via the corresponding qdhcp-namespace by its private IP. `nova boot` just loaded the docker container and nothing else, demonstrating an obvious regression versus the status as of 02/15, when the text below was written.
*******************************************************************************
 
Recently the patch https://review.openstack.org/#/c/154750/ was merged into
https://github.com/stackforge/nova-docker.git, which made it possible
to test the Nova-Docker driver built from the current git tree against the most
recent OpenStack code obtained by devstack (cloned from https://git.openstack.org/openstack-dev/devstack). However, nova-docker containers were lost after every reboot because the bridge br-ex came up with no IP, and running ./rejoin-stack.sh didn't help much. This post describes a workaround for this issue.

   The first part of this article follows http://blog.oddbit.com/2015/02/11/installing-novadocker-with-devstack/
written by Lars Kellogg-Stedman, with non-critical changes in the local.conf file.

   The second part provides a workaround making the created nova-docker
instances and the whole devstack environment recoverable between reboots.

While reproducing the first part I also installed horizon, launching nova-docker
containers and assigning floating IPs by mouse click (via the admin login, working
with the preinstalled demo project).
Post-install, run as root the MASQUERADE rule for the public
network installed by devstack.
*************************************************************************
iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -j MASQUERADE
*************************************************************************
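Since the second local.conf later in this post uses a different floating range, the same rule can be generated for either subnet. A tiny sketch (the `masq_rule` helper name is my own):

```shell
# Build the POSTROUTING MASQUERADE rule for a given floating-IP subnet.
masq_rule() {
  echo "iptables -t nat -A POSTROUTING -s $1 -j MASQUERADE"
}

masq_rule 172.24.4.0/24     # default devstack public range
masq_rule 192.168.10.0/24   # custom FLOATING_RANGE from my local.conf
```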
$ sudo apt-get update
$ sudo apt-get -y install git git-review python-pip python-dev
$ sudo apt-get -y upgrade

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main  \
   > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker

*********************************************
Update /etc/default/docker, setting:
*********************************************
DOCKER_OPTS='-G ubuntu'

#service docker restart
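The same edit can be scripted idempotently; a sketch operating on a temp copy (the real file is /etc/default/docker, and the `set_docker_opts` helper name is my own):

```shell
# Append DOCKER_OPTS only if the file does not define it yet, so repeated
# runs do not duplicate the line.
set_docker_opts() {   # usage: set_docker_opts <defaults-file> <opts>
  grep -q '^DOCKER_OPTS=' "$1" 2>/dev/null \
    || printf "DOCKER_OPTS='%s'\n" "$2" >> "$1"
}

cfg=$(mktemp)                        # stand-in for /etc/default/docker
set_docker_opts "$cfg" "-G ubuntu"
set_docker_opts "$cfg" "-G ubuntu"   # second call is a no-op
cat "$cfg"
```

After editing the real file, `service docker restart` is still required.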

*******************************
Installing nova-docker
*******************************
See https://review.openstack.org/#/c/154750/ for details.
As of this writing the patch was already merged, so I cloned
the whole git tree :-

$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ sudo pip install .

*****************************
Configuring devstack
*****************************

Now we're ready to get devstack up and running. Start by cloning the repository:

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
1. Create local.conf under devstack ( original version )
***************
local.conf
***************
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# Introduce glance to docker images
[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver

*****************************************************************************
*****************************************************************************
My version of local.conf, which lets you define the floating pool as you need;
it is a bit more flexible than the original
*****************************************************************************
[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# Introduce glance to docker images
[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver

**************************************
Corresponding iptables entry
**************************************
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -j MASQUERADE
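Before running stack.sh it is worth sanity-checking that Q_FLOATING_ALLOCATION_POOL and PUBLIC_NETWORK_GATEWAY really fall inside FLOATING_RANGE. A crude sketch that only handles /24 networks (the `same_24` helper name is mine):

```shell
# For /24 networks, membership reduces to comparing the first three octets.
same_24() { [ "${1%.*}" = "${2%.*}" ]; }

NET=192.168.10.0
same_24 192.168.10.150 $NET && echo "pool start OK"
same_24 192.168.10.254 $NET && echo "pool end OK"
same_24 192.168.10.15  $NET && echo "gateway OK"
```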

At this point you are ready to run :-

$ ./stack.sh

*****************************************************************************
Attention: skipping this step causes the message "No hosts available"
when launching, or causes a failure to launch nova-docker instances
when stack.sh is rerun after ./unstack.sh
******************************************************************************

$ sudo cp nova-docker/etc/nova/rootwrap.d/docker.filters \
  /etc/nova/rootwrap.d/
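A quick way to tell in advance whether this failure mode applies is to check for the filter file. A sketch (the `has_docker_filters` helper name is mine; the path comes from the cp command above):

```shell
# nova-compute needs this rootwrap filter to wire up a container's network
# namespace; without it scheduling fails with "No hosts available".
has_docker_filters() {
  [ -r "$1" ] && grep -q '^ln: CommandFilter' "$1"
}

if has_docker_filters /etc/nova/rootwrap.d/docker.filters; then
  echo "docker.filters installed"
else
  echo "docker.filters missing: expect 'No hosts available'"
fi
```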

$ . openrc admin

for `docker pull && docker save`, and

$ . openrc demo

to launch instances.

*********************************************************************************
The next issue: after running `sudo ./unstack.sh` and rebooting the box hosting the devstack instance, the OVS bridge br-ex came up with no IP, no matter which local.conf had been used for the ./stack.sh deployment.
Before running ./rejoin-stack.sh, the following actions have to be taken.
*********************************************************************************
 This version is meant for the second version of local.conf, with
 PUBLIC_NETWORK_GATEWAY=192.168.10.15

    sudo ip addr flush dev br-ex
    sudo ip addr add 192.168.10.15/24 dev br-ex

    sudo ip link set br-ex up
    sudo route add -net 10.254.1.0/24 gw 192.168.10.15
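The four commands above can be wrapped in one parameterized function so the same fix-up works for either local.conf. A sketch (the function name and the DRYRUN switch are my own; with DRYRUN=1 it only prints what would run):

```shell
recover_br_ex() {   # usage: recover_br_ex <gateway-ip> <fixed-range>
  gw=$1; fixed=$2
  run() { if [ -n "$DRYRUN" ]; then echo "$@"; else sudo "$@"; fi; }
  run ip addr flush dev br-ex
  run ip addr add "$gw/24" dev br-ex
  run ip link set br-ex up
  run route add -net "$fixed" gw "$gw"
}

DRYRUN=1                                   # print only; unset to apply
recover_br_ex 192.168.10.15 10.254.1.0/24
```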



******************************************************
Verify that the environment came up correctly:-
******************************************************

ubuntu@ubuntu-System-Product-Name:~$ ifconfig
br-ex     Link encap:Ethernet  HWaddr de:64:4b:ba:a7:48 
          inet addr:192.168.10.15  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:2186 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2649 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1801780 (1.8 MB)  TX bytes:2194422 (2.1 MB)


br-int    Link encap:Ethernet  HWaddr b2:cf:54:c5:a0:49 
          inet6 addr: fe80::b007:79ff:fe87:4260/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:648 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:120474 (120.4 KB)  TX bytes:648 (648.0 B)

br-tun    Link encap:Ethernet  HWaddr 3a:fb:71:08:1a:45 
          inet6 addr: fe80::899:bcff:fed6:8d8d/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99 
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 90:e6:ba:2d:11:eb 
          inet addr:192.168.1.37  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::92e6:baff:fe2d:11eb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:64604 errors:0 dropped:0 overruns:0 frame:0
          TX packets:37999 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:88470764 (88.4 MB)  TX bytes:3455868 (3.4 MB)

eth1      Link encap:Ethernet  HWaddr 00:0c:76:e0:1e:c5 
          inet6 addr: fe80::20c:76ff:fee0:1ec5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:239 errors:0 dropped:0 overruns:0 frame:0
          TX packets:389 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:58024 (58.0 KB)  TX bytes:75526 (75.5 KB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:30804 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30804 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:10921200 (10.9 MB)  TX bytes:10921200 (10.9 MB)

ns44923080-eb Link encap:Ethernet  HWaddr 9a:db:d0:5a:ad:02 
          inet6 addr: fe80::98db:d0ff:fe5a:ad02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:621 errors:0 dropped:0 overruns:0 frame:0
          TX packets:289 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:119156 (119.1 KB)  TX bytes:55649 (55.6 KB)

ns9cb8e46e-35 Link encap:Ethernet  HWaddr 6e:f3:23:93:b4:11 
          inet6 addr: fe80::6cf3:23ff:fe93:b411/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:637 errors:0 dropped:0 overruns:0 frame:0
          TX packets:271 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:121878 (121.8 KB)  TX bytes:52144 (52.1 KB)

tap44923080-eb Link encap:Ethernet  HWaddr ee:b3:16:a3:f9:ed 
          inet6 addr: fe80::ecb3:16ff:fea3:f9ed/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:289 errors:0 dropped:0 overruns:0 frame:0
          TX packets:621 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:55649 (55.6 KB)  TX bytes:119156 (119.1 KB)

tap8897281a-3f Link encap:Ethernet  HWaddr 9a:2a:eb:a5:3d:60 
          inet6 addr: fe80::982a:ebff:fea5:3d60/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2236 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3452 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1812589 (1.8 MB)  TX bytes:2351741 (2.3 MB)

tap9cb8e46e-35 Link encap:Ethernet  HWaddr 06:3c:cc:e5:30:4a 
          inet6 addr: fe80::43c:ccff:fee5:304a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:271 errors:0 dropped:0 overruns:0 frame:0
          TX packets:637 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:52144 (52.1 KB)  TX bytes:121878 (121.8 KB)

virbr0    Link encap:Ethernet  HWaddr e2:93:d0:a0:2c:f6 
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


ubuntu@ubuntu-System-Product-Name:~$ route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
10.254.1.0      192.168.10.15   255.255.255.0   UG    0      0        0 br-ex
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.1.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 br-ex
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0


****************************************
At this point you may run
****************************************

    cd devstack ; ./rejoin-stack.sh

and it will bring your devstack environment back

********************************************************************
In practice, on the Ubuntu 14.04 box used for this kind of testing, rc.local does it at boot
********************************************************************
root@ubuntu-P5Q3 :~# cat /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
ip addr flush dev br-ex ;
ip addr add 192.168.10.15/24 dev br-ex ;
ip link set br-ex up ;
route add -net 10.254.1.0/24 gw 192.168.10.15 ;
exit 0
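One detail worth checking: /etc/rc.local is only executed at boot when it carries the execute bit, so the br-ex fix-up silently never runs otherwise. A sketch verifying that on a throwaway copy:

```shell
# rc.local must be executable for init to run it at the end of boot.
f=$(mktemp)                    # stand-in for /etc/rc.local
printf '#!/bin/sh -e\nexit 0\n' > "$f"
chmod +x "$f"
[ -x "$f" ] && echo "rc.local would run at boot"
```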



Vncviewer started from the Ubuntu VM with the devstack environment installed,
connecting to a vncserver screen running in the Ubuntu (rastasheep) nova-docker instance

  




Running  Glassfish 4.1 nova-docker container on real Ubuntu 14.04 box
  

   SQLDeveloper connection to Oracle XE database running inside nova-docker
   container


  Launching nova-docker container via CLI on real Ubuntu 14.04 box

ubuntu@ubuntu-P5Q3 :~/devstack$ nova boot --image rastasheep/ubuntu-sshd:latest  --flavor m1.small UbuntuRST
+--------------------------------------+----------------------------------------------------------------------+
| Property                             | Value                                                                |
+--------------------------------------+----------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                               |
| OS-EXT-AZ:availability_zone          | nova                                                                 |
| OS-EXT-STS:power_state               | 0                                                                    |
| OS-EXT-STS:task_state                | scheduling                                                           |
| OS-EXT-STS:vm_state                  | building                                                             |
| OS-SRV-USG:launched_at               | -                                                                    |
| OS-SRV-USG:terminated_at             | -                                                                    |
| accessIPv4                           |                                                                      |
| accessIPv6                           |                                                                      |
| adminPass                            | n56arrfUdTLY                                                         |
| config_drive                         |                                                                      |
| created                              | 2015-02-16T20:18:38Z                                                 |
| flavor                               | m1.small (2)                                                         |
| hostId                               |                                                                      |
| id                                   | 85acb8d4-2387-4a21-9b77-321480f03163                                 |
| image                                | rastasheep/ubuntu-sshd:latest (87956634-9708-4d63-8daf-cdd15d288d86) |
| key_name                             | -                                                                    |
| metadata                             | {}                                                                   |
| name                                 | UbuntuRST                                                            |
| os-extended-volumes:volumes_attached | []                                                                   |
| progress                             | 0                                                                    |
| security_groups                      | default                                                              |
| status                               | BUILD                                                                |
| tenant_id                            | 2f34beaaa0684e899f28c1b6fef521ac                                     |
| updated                              | 2015-02-16T20:18:38Z                                                 |
| user_id                              | a78cae8feb1f40b081db787629a407af                                     |
+--------------------------------------+----------------------------------------------------------------------+

ubuntu@ubuntu-P5Q3 :~/devstack$ nova list
+--------------------------------------+------------------+--------+------------+-------------+------------------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                           |
+--------------------------------------+------------------+--------+------------+-------------+------------------------------------+
| 85acb8d4-2387-4a21-9b77-321480f03163 | UbuntuRST        | ACTIVE | -          | Running     | private=10.254.1.6                 |
| fc0a6180-d177-4f04-bdf6-382820c5f8da | derbyGlassfish41 | ACTIVE | -          | Running     | private=10.254.1.5, 192.168.10.152 |
+--------------------------------------+------------------+--------+------------+-------------+------------------------------------+


ubuntu@ubuntu-P5Q3 :~/devstack$ nova floating-ip-create
+----------------+-----------+----------+--------+
| Ip             | Server Id | Fixed Ip | Pool   |
+----------------+-----------+----------+--------+
| 192.168.10.153 | -         | -        | public |
+----------------+-----------+----------+--------+

ubuntu@ubuntu-P5Q3 :~/devstack$ nova floating-ip-associate UbuntuRST 192.168.10.153

ubuntu@ubuntu-P5Q3 :~/devstack$ ping -c 3 192.168.10.153
PING 192.168.10.153 (192.168.10.153) 56(84) bytes of data.
64 bytes from 192.168.10.153: icmp_seq=1 ttl=63 time=0.667 ms
64 bytes from 192.168.10.153: icmp_seq=2 ttl=63 time=0.274 ms
64 bytes from 192.168.10.153: icmp_seq=3 ttl=63 time=0.084 ms

--- 192.168.10.153 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.084/0.341/0.667/0.243 ms

ubuntu@ubuntu-P5Q3 :~/devstack$ ssh root@192.168.10.153
The authenticity of host '192.168.10.153 (192.168.10.153)' can't be established.
ECDSA key fingerprint is cf:f3:e5:fd:ce:d9:99:b6:79:2d:34:73:e8:a3:2e:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.10.153' (ECDSA) to the list of known hosts.
root@192.168.10.153's password:
root@instance-00000004:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 20:18 ?        00:00:00 /usr/sbin/sshd -D
root         5     1  0 20:22 ?        00:00:00 sshd: root@pts/0   
root         7     5  0 20:22 pts/0    00:00:00 -bash
root        18     7  0 20:22 pts/0    00:00:00 ps -ef

root@instance-00000004:~# ifconfig
lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

nsa7183e2e-09 Link encap:Ethernet  HWaddr fa:16:3e:3d:0f:68 
          inet addr:10.254.1.6  Bcast:10.254.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe3d:f68/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2378 errors:0 dropped:12 overruns:0 frame:0
          TX packets:1425 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2586320 (2.5 MB)  TX bytes:132646 (132.6 KB)


*************************************************************
Log in to UbuntuRST via its qdhcp-namespace
*************************************************************

ubuntu@ubuntu-P5Q3 :~/devstack$ sudo ip netns exec qdhcp-c9e35028-bb1b-4141-b02b-9f35c7524dd2 ssh root@10.254.1.6
The authenticity of host '10.254.1.6 (10.254.1.6)' can't be established.
ECDSA key fingerprint is cf:f3:e5:fd:ce:d9:99:b6:79:2d:34:73:e8:a3:2e:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.254.1.6' (ECDSA) to the list of known hosts.
root@10.254.1.6's password:

Last login: Mon Feb 16 20:22:28 2015 from 192.168.10.15
root@instance-00000004:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=19.3 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=18.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=19.2 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=18.4 ms


Friday, February 06, 2015

Set up Two Node RDO Juno ML2&OVS&VXLAN Cluster running Docker Hypervisor on Compute Node (CentOS 7, kernel 3.10.0-123.20.1.el7.x86_64)

It's quite obvious that for real applications the Nova-Docker driver has to be set up successfully on Compute Nodes. It's nice when everything works on an AIO
Juno host or a Controller, but only as a demonstration. Maybe I did something wrong, or maybe for some other reason, but kernel version 3.10.0-123.20.1.el7.x86_64 seems to be the first that brings success on RDO Juno Compute nodes.

Follow http://lxer.com/module/newswire/view/209851/index.html  up to section
"Set up Nova-Docker on Controller&&Network Node"

***************************************************
Set up  Nova-Docker Driver on Compute Node
***************************************************

# yum install python-pbr

# yum install docker-io -y
# git clone https://github.com/stackforge/nova-docker
# cd nova-docker
# git checkout stable/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
#  mkdir /etc/nova/rootwrap.d


************************************************
Create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert the lines:

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Add the following line to /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq
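The same change can be made non-interactively; a sketch against a temp copy (the real file is /etc/glance/glance-api.conf; the `add_docker_format` helper name is mine, and the sed form assumes GNU sed):

```shell
# Insert container_formats under [DEFAULT] unless it is already defined.
add_docker_format() {
  grep -q '^container_formats=' "$1" || \
    sed -i '/^\[DEFAULT\]/a container_formats=ami,ari,aki,bare,ovf,ova,docker' "$1"
}

conf=$(mktemp)                 # stand-in for /etc/glance/glance-api.conf
printf '[DEFAULT]\n' > "$conf"
add_docker_format "$conf"
grep container_formats "$conf"
```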


******************************
Update nova.conf
******************************
vi /etc/nova/nova.conf
set "compute_driver = novadocker.virt.docker.DockerDriver"


************************
Restart Services
************************

usermod -G docker nova
systemctl restart openstack-nova-compute (on Compute)
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api (on Controller&&Network )

At this point `scp  /root/keystonerc_admin compute:/root`  from Controller to
Compute Node

*********************************************************************************
Test the Nova-Docker driver installation on the Compute Node (RDO Juno, CentOS 7,
kernel 3.10.0-123.20.1.el7.x86_64)
**********************************************************************************


*******************************************
Setup Ubuntu 14.04 with SSH access
*******************************************
First, on the Compute node:

# docker pull rastasheep/ubuntu-sshd:14.04
# . keystonerc_admin
# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04
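One detail worth stressing: the --name passed to glance must match the docker repo:tag exactly, because nova-docker looks the image up by that name at boot time. A sketch building the pipeline for any tag (the `glance_import_cmd` helper name is mine):

```shell
# Emit the save/import pipeline for a given docker image tag; the glance
# image name and the docker tag are deliberately identical.
glance_import_cmd() {
  echo "docker save $1 | glance image-create --is-public=True" \
       "--container-format=docker --disk-format=raw --name $1"
}

glance_import_cmd rastasheep/ubuntu-sshd:14.04
```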

Second, on the Controller node, launch the Nova-Docker container (running on Compute) via the dashboard and assign a floating IP address

   
  
*********************************************
Verify `docker ps ` on Compute Node
*********************************************
[root@juno1dev ~]# ssh 192.168.1.137
Last login: Fri Feb  6 15:38:49 2015 from juno1dev.localdomain

[root@juno2dev ~]# docker ps
CONTAINER ID        IMAGE                          COMMAND               CREATED             STATUS              PORTS               NAMES
ef23d030e35a        rastasheep/ubuntu-sshd:14.04   "/usr/sbin/sshd -D"   7 hours ago         Up 6 minutes                            nova-211bcb54-35ba-4f0a-a150-7e73546d8f46  

[root@juno2dev ~]# ip netns
ef23d030e35af63c17698d1f4c6f7d8023c29455e9dff0288ce224657828993a
ca9aa6cb527f2302985817d3410a99c6f406f4820ed6d3f62485781d50f16590
fea73a69337334b36625e78f9a124e19bf956c73b34453f1994575b667e7401b
58834d3bbea1bffa368724527199d73d0d6fde74fa5d24de9cca41c29f978e31

********************************
On Controller run :-
********************************

[root@juno1dev ~]# ssh root@192.168.1.173
root@192.168.1.173's password:

Last login: Fri Feb  6 12:11:19 2015 from 192.168.1.127
root@instance-0000002b:~# apt-get update
Ign http://archive.ubuntu.com trusty InRelease
Ign http://archive.ubuntu.com trusty-updates InRelease
Ign http://archive.ubuntu.com trusty-security InRelease
Hit http://archive.ubuntu.com trusty Release.gpg
Get:1 http://archive.ubuntu.com trusty-updates Release.gpg [933 B]
Get:2 http://archive.ubuntu.com trusty-security Release.gpg [933 B]
Hit http://archive.ubuntu.com trusty Release
Get:3 http://archive.ubuntu.com trusty-updates Release [62.0 kB]
Get:4 http://archive.ubuntu.com trusty-security Release [62.0 kB]
Hit http://archive.ubuntu.com trusty/main Sources
Hit http://archive.ubuntu.com trusty/restricted Sources
Hit http://archive.ubuntu.com trusty/universe Sources
Hit http://archive.ubuntu.com trusty/main amd64 Packages
Hit http://archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://archive.ubuntu.com trusty/universe amd64 Packages
Get:5 http://archive.ubuntu.com trusty-updates/main Sources [208 kB]
Get:6 http://archive.ubuntu.com trusty-updates/restricted Sources [1874 B]
Get:7 http://archive.ubuntu.com trusty-updates/universe Sources [124 kB]
Get:8 http://archive.ubuntu.com trusty-updates/main amd64 Packages [524 kB]
Get:9 http://archive.ubuntu.com trusty-updates/restricted amd64 Packages [14.8 kB]
Get:10 http://archive.ubuntu.com trusty-updates/universe amd64 Packages [318 kB]
Get:11 http://archive.ubuntu.com trusty-security/main Sources [79.8 kB]       
Get:12 http://archive.ubuntu.com trusty-security/restricted Sources [1874 B]  
Get:13 http://archive.ubuntu.com trusty-security/universe Sources [19.1 kB]   
Get:14 http://archive.ubuntu.com trusty-security/main amd64 Packages [251 kB] 
Get:15 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [14.8 kB]
Get:16 http://archive.ubuntu.com trusty-security/universe amd64 Packages [110 kB]
Fetched 1793 kB in 9s (199 kB/s)                                              
Reading package lists... Done

If network operations like `apt-get install ...` afterwards run with no problems,
the Nova-Docker driver is installed and working on the Compute Node.

**************************************************************************************
Finally, I've set up openstack-nova-compute on the Controller to run several instances with the Qemu/Libvirt driver :-
**************************************************************************************
  
     

Sunday, January 25, 2015

Set up Two Node RDO Juno ML2&OVS&VXLAN Cluster running Docker Hypervisor on Controller and KVM on Compute (CentOS 7, Fedora 21)

****************************************************************************************
UPDATE as of 01/31/2015 to get Docker && Nova-Docker working on Fedora 21
****************************************************************************************
Per https://github.com/docker/docker/issues/10280,
download systemd-218-3.fc22.src.rpm, build the 218-3 rpms, and upgrade systemd.
First, the packages needed for rpmbuild :-

 $ sudo yum install audit-libs-devel autoconf  automake cryptsetup-devel \
    dbus-devel docbook-style-xsl elfutils-devel  \
    glib2-devel  gnutls-devel  gobject-introspection-devel \
    gperf     gtk-doc intltool kmod-devel libacl-devel \
    libblkid-devel     libcap-devel libcurl-devel libgcrypt-devel \
    libidn-devel libmicrohttpd-devel libmount-devel libseccomp-devel \
    libselinux-devel libtool pam-devel python3-devel python3-lxml \
    qrencode-devel  python2-devel  xz-devel

Second:-

$ cd ~/rpmbuild/SPECS
$ rpmbuild -bb systemd.spec
$ cd ../RPMS/x86_64

Third:-

$ sudo yum install libgudev1-218-3.fc21.x86_64.rpm \
libgudev1-devel-218-3.fc21.x86_64.rpm \
systemd-218-3.fc21.x86_64.rpm \
systemd-compat-libs-218-3.fc21.x86_64.rpm \
systemd-debuginfo-218-3.fc21.x86_64.rpm \
systemd-devel-218-3.fc21.x86_64.rpm \
systemd-journal-gateway-218-3.fc21.x86_64.rpm \
systemd-libs-218-3.fc21.x86_64.rpm \
systemd-python-218-3.fc21.x86_64.rpm \
systemd-python3-218-3.fc21.x86_64.rpm

.  .  .  .  .  .  .  .  .  .

Dependencies Resolved

=================================================================================================
 Package                  Arch    Version      Repository                                   Size
=================================================================================================
Installing:
 libgudev1-devel          x86_64  218-3.fc21   /libgudev1-devel-218-3.fc21.x86_64          281 k
 systemd-debuginfo        x86_64  218-3.fc21   /systemd-debuginfo-218-3.fc21.x86_64         69 M
 systemd-journal-gateway  x86_64  218-3.fc21   /systemd-journal-gateway-218-3.fc21.x86_64  571 k
Updating:
 libgudev1                x86_64  218-3.fc21   /libgudev1-218-3.fc21.x86_64                 51 k
 systemd                  x86_64  218-3.fc21   /systemd-218-3.fc21.x86_64                   22 M
 systemd-compat-libs      x86_64  218-3.fc21   /systemd-compat-libs-218-3.fc21.x86_64      237 k
 systemd-devel            x86_64  218-3.fc21   /systemd-devel-218-3.fc21.x86_64            349 k
 systemd-libs             x86_64  218-3.fc21   /systemd-libs-218-3.fc21.x86_64             1.0 M
 systemd-python           x86_64  218-3.fc21   /systemd-python-218-3.fc21.x86_64           185 k
 systemd-python3          x86_64  218-3.fc21   /systemd-python3-218-3.fc21.x86_64          191 k

Transaction Summary
=================================================================================================
Install  3 Packages
Upgrade  7 Packages

Total size: 94 M
Is this ok [y/d/N]: y

  See also https://ask.openstack.org/en/question/59789/attempt-to-install-nova-docker-driver-on-fedora-21/
*************************************************************************************** 
As a final result of performing the configuration below, the Juno dashboard will automatically spawn, launch and run Nova-Docker containers on the Controller, while usual nova instances run on the KVM Hypervisor (Libvirt driver) on the Compute Node

Set up initial configuration via RDO Juno packstack run

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)


juno1dev.localdomain   -  Controller (192.168.1.127)
juno2dev.localdomain   -  Compute   (192.168.1.137)

Management&&Public  network is 192.168.1.0/24
VXLAN tunnel is (192.168.0.127,192.168.0.137)


Answer File :-

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.168.1.127
CONFIG_COMPUTE_HOSTS=192.168.1.137
CONFIG_NETWORK_HOSTS=192.168.1.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.168.1.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.168.1.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.168.1.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.168.1.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
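Rather than editing the generated answer file by hand, the handful of host and driver entries above can be patched with sed. A minimal sketch, demonstrated on a local stub; on a real install the file would come from `packstack --gen-answer-file=answer-juno.txt` (the filename is illustrative):

```shell
# Stub standing in for a freshly generated packstack answer file.
cat > answer-juno.txt <<'EOF'
CONFIG_CONTROLLER_HOST=CHANGEME
CONFIG_COMPUTE_HOSTS=CHANGEME
CONFIG_NETWORK_HOSTS=CHANGEME
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=local
EOF

# Patch the entries to match the two-node layout above.
sed -i \
  -e 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=192.168.1.127/' \
  -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.1.137/' \
  -e 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.168.1.127/' \
  -e 's/^CONFIG_NEUTRON_ML2_TYPE_DRIVERS=.*/CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan/' \
  answer-juno.txt

grep CONFIG_CONTROLLER_HOST answer-juno.txt
```

Then run `packstack --answer-file=answer-juno.txt` as usual.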

Only on Controller updates :-
[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
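Both ifcfg files can be written non-interactively with heredocs. A sketch, trimmed to the essential keys and writing to a scratch directory (the real target is /etc/sysconfig/network-scripts on the Controller):

```shell
# Scratch directory stands in for /etc/sysconfig/network-scripts.
NETSCRIPTS=./network-scripts-demo
mkdir -p "$NETSCRIPTS"

# OVS internal port carrying the Controller's IP.
cat > "$NETSCRIPTS/ifcfg-br-ex" <<'EOF'
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
NM_CONTROLLED="no"
EOF

# Physical NIC enslaved to br-ex, no IP of its own.
cat > "$NETSCRIPTS/ifcfg-enp2s0" <<'EOF'
DEVICE="enp2s0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
EOF

grep TYPE "$NETSCRIPTS"/ifcfg-*
```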

************************
On Controller :-
************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart
# reboot

[root@juno1dev ~(keystone_admin)]# ifconfig

br-ex: flags=4163  mtu 1500
        inet 192.168.1.127  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
        ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
        RX packets 516087  bytes 305856360 (291.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 474282  bytes 62485754 (59.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


enp2s0: flags=4163  mtu 1500
        inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
        ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
        RX packets 1121900  bytes 1194013198 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 768667  bytes 82497428 (78.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 17

enp5s1: flags=4163  mtu 1500
        inet 192.168.0.127  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::2e0:53ff:fe13:174c  prefixlen 64  scopeid 0x20
        ether 00:e0:53:13:17:4c  txqueuelen 1000  (Ethernet)
        RX packets 376087  bytes 49012215 (46.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1136402  bytes 944635587 (900.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1381792  bytes 250829475 (239.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1381792  bytes 250829475 (239.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



After packstack completion, switch both nodes to the IPv4 iptables firewall.
*********************************************************************************
As of 01/25/2015 dnsmasq fails to serve private subnets, unless the following lines
are commented out
*********************************************************************************

# -A INPUT -j REJECT --reject-with icmp-host-prohibited
# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
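Commenting the two REJECT rules can be scripted with sed. A sketch, assuming the stock iptables-services rule layout and demonstrated on a local copy (on a real node point it at /etc/sysconfig/iptables and restart the iptables service afterwards):

```shell
# Local stub standing in for /etc/sysconfig/iptables.
SRC=iptables.sample
cat > "$SRC" <<'EOF'
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
EOF

# Prefix both REJECT rules with "# ", leaving the other rules untouched.
sed -i \
  -e 's/^\(-A INPUT -j REJECT --reject-with icmp-host-prohibited\)/# \1/' \
  -e 's/^\(-A FORWARD -j REJECT --reject-with icmp-host-prohibited\)/# \1/' \
  "$SRC"

grep '^#' "$SRC"
```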
 
Set up Nova-Docker on Controller&&Network Node
***************************
Initial docker setup
***************************
# yum install python-pbr

# yum install docker-io -y
# yum install -y python-pip git
 
# git clone https://github.com/stackforge/nova-docker
# cd nova-docker
# git checkout stable/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
#  mkdir /etc/nova/rootwrap.d


************************************************************************************
On Fedora 21, even running systemd 218-3, you should expect
six.__version__ to be dropped to 1.2 right after `python setup.py install`

Then run:-

# pip install --upgrade six

Downloading/unpacking six from https://pypi.python.org/packages/3.3/s/six/six-1.9.0-py2.py3-none-any.whl#md5=9ac7e129a80f72d6fc1f0216f6e9627b
  Downloading six-1.9.0-py2.py3-none-any.whl
Installing collected packages: six
  Found existing installation: six 1.7.3
    Uninstalling six:
      Successfully uninstalled six
Successfully installed six
Cleaning up...
***************************************************************************************

Proceed as normal.

************************************************
Create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
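The same file can be created non-interactively with a heredoc. A sketch using a scratch path (the real location is /etc/nova/rootwrap.d/docker.filters):

```shell
# Scratch path stands in for /etc/nova/rootwrap.d/docker.filters.
FILTERS=./docker.filters

cat > "$FILTERS" <<'EOF'
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF

grep CommandFilter "$FILTERS"
```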

*****************************************
Add line to /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq
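The same edit can be scripted instead of done in vi. A sketch on a local stub (the real file is /etc/glance/glance-api.conf); on RDO, `openstack-config --set /etc/glance/glance-api.conf DEFAULT container_formats ...` should achieve the same result:

```shell
# Local stub standing in for /etc/glance/glance-api.conf.
CONF=glance-api.sample
printf '[DEFAULT]\nverbose = True\n' > "$CONF"

# Insert container_formats right after the [DEFAULT] section header (GNU sed).
sed -i '/^\[DEFAULT\]/a container_formats=ami,ari,aki,bare,ovf,ova,docker' "$CONF"

grep container_formats "$CONF"
```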

*************************************
Restart Service glance-api
*************************************
usermod -aG docker nova
systemctl restart openstack-glance-api

********************************************************************************
  Creating the openstack-nova-docker service per http://blog.oddbit.com/2015/01/17/running-novalibvirt-and-novadocker-on-the-same-host/
Due to the answer-file configuration, in our case /etc/nova/nova.conf on the Controller doesn't set any compute_driver at all, while the Compute node uses the libvirt driver.

Create new file /etc/nova/nova-docker.conf


[DEFAULT]
 host=juno1dev.localdomain
 compute_driver=novadocker.virt.docker.DockerDriver
 log_file=/var/log/nova/nova-docker.log
 state_path=/var/lib/nova-docker
 
Create an openstack-nova-compute.service unit on the system, and save it as
/etc/systemd/system/openstack-nova-docker.service
 
[Unit]
Description=OpenStack Nova Compute Server (Docker)
After=syslog.target network.target

[Service]
Environment=LIBGUESTFS_ATTACH_METHOD=appliance
Type=notify
Restart=always
User=nova
ExecStart=/usr/bin/nova-compute --config-file /etc/nova/nova.conf \
          --config-file /etc/nova/nova-docker.conf

[Install]
WantedBy=multi-user.target
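Both files above can be dropped in place with heredocs before enabling the unit. A sketch using scratch paths (the real targets are /etc/nova/nova-docker.conf and /etc/systemd/system/openstack-nova-docker.service; run `systemctl daemon-reload` after installing the unit for real):

```shell
# Scratch directories stand in for /etc/nova and /etc/systemd/system.
mkdir -p demo/nova demo/systemd

cat > demo/nova/nova-docker.conf <<'EOF'
[DEFAULT]
host=juno1dev.localdomain
compute_driver=novadocker.virt.docker.DockerDriver
log_file=/var/log/nova/nova-docker.log
state_path=/var/lib/nova-docker
EOF

cat > demo/systemd/openstack-nova-docker.service <<'EOF'
[Unit]
Description=OpenStack Nova Compute Server (Docker)
After=syslog.target network.target

[Service]
Environment=LIBGUESTFS_ATTACH_METHOD=appliance
Type=notify
Restart=always
User=nova
ExecStart=/usr/bin/nova-compute --config-file /etc/nova/nova.conf \
          --config-file /etc/nova/nova-docker.conf

[Install]
WantedBy=multi-user.target
EOF

grep compute_driver demo/nova/nova-docker.conf
```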

 
SCP /usr/bin/nova-compute from the Compute node to the Controller and run :-
 
# systemctl enable openstack-nova-docker
# systemctl start openstack-nova-docker
 
Update /etc/nova/nova.conf on the Compute Node

vif_plugging_is_fatal=False
vif_plugging_timeout=0
# systemctl restart openstack-nova-compute 
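The two vif settings can be appended with a one-liner. A sketch on a local stub (the real file is /etc/nova/nova.conf on the Compute node, followed by the restart above); plain appending assumes [DEFAULT] is the file's last section:

```shell
# Local stub standing in for /etc/nova/nova.conf on the Compute node.
CONF=nova.sample
printf '[DEFAULT]\ncompute_driver=libvirt.LibvirtDriver\n' > "$CONF"

# Append the two vif settings to the end of the [DEFAULT] section.
printf 'vif_plugging_is_fatal=False\nvif_plugging_timeout=0\n' >> "$CONF"

grep vif_plugging "$CONF"
```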
 

********************************************************************************
On Fedora 21 keep these entries as is (no changes); however, to launch a new
instance on Compute, you would have to stop the openstack-nova-docker service
on the Controller, just for the 2-3 min it takes the instance to go from
spawn => active, then restart openstack-nova-docker on the Controller
********************************************************************************

As a final result, the dashboard will automatically spawn, load and run Nova-Docker containers on the Controller, while usual nova instances run on the KVM Hypervisor (Libvirt driver) on the Compute Node
 
 
  
 
 
[root@juno1dev ~(keystone_admin)]# nova service-list
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 2  | nova-scheduler   | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 3  | nova-conductor   | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:24.000000 | -               |
| 4  | nova-cert        | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 5  | nova-compute     | juno2dev.localdomain | nova     | enabled | up    | 2015-01-26T06:42:23.000000 | -               |
| 6  | nova-compute     | juno1dev.localdomain | nova     | enabled | up    | 2015-01-26T06:42:24.000000 | -               |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+

[root@juno1dev ~(keystone_admin)]# systemctl | grep nova

openstack-nova-api.service          loaded active running   OpenStack Nova API Server
openstack-nova-cert.service         loaded active running   OpenStack Nova Cert Server
openstack-nova-conductor.service    loaded active running   OpenStack Nova Conductor Server
openstack-nova-consoleauth.service  loaded active running   OpenStack Nova VNC console auth Server
openstack-nova-docker.service       loaded active running   OpenStack Nova Compute Server (Docker)
openstack-nova-novncproxy.service   loaded active running   OpenStack Nova NoVNC Proxy Server
openstack-nova-scheduler.service    loaded active running   OpenStack Nova Scheduler Server
 
 
 
  
 
******************************************* 
Tuning VNC Console in dashboard :-
*******************************************
 
Controller - 192.168.1.127 


Running: nova-consoleauth, nova-novncproxy

In nova.conf :-

novncproxy_host=0.0.0.0
novncproxy_port=6080
novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html


Compute - 192.168.1.137 

Running: nova-compute

In nova.conf :-
vnc_enabled=True
novncproxy_base_url=http://192.168.1.137:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.137

References
 
https://ask.openstack.org/en/question/520/vnc-console-in-dashboard-fails-to-connect-ot-server-code-1006/

Saturday, January 17, 2015

Set up LVMiSCSI cinder backend for RDO Juno on Fedora 21 for Two Node Cluster (Controller&&Network and Compute)

During RDO Juno set up on Fedora 21 Workstation, the target service is deactivated
on boot up and tgtd is started (versus the CentOS 7 installation procedure), which
requires some additional effort to tune the LVMiSCSI cinder back end on the newest Fedora release. Actually, the RDO Juno packstack multi node setup follows the procedure posted here: http://lxer.com/module/newswire/view/207415/index.html

Service tgtd should be stopped and disabled on Controller
Service target should be enabled and started on Controller

[root@juno1f21 ~(keystone_admin)]# service target status
Redirecting to /bin/systemctl status  target.service
● target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Sat 2015-01-17 15:45:44 MSK; 12min ago
  Process: 1512 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1512 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

In general, here is a summary of the iSCSI fabric objects hierarchy (see also the underlying configFS layout); view http://linux-iscsi.org/wiki/Targetcli :-


+-targetcli
  |
  +-Targets
    | Identified by their WWNs or IQN (for iSCSI).
    | Targets identify a group of Endpoints.
    |
    +-TPGs (Target Portal Groups, iSCSI only)
      | The TPG is identified by its numerical Tag, starting at 1. It
      | groups several Network Portals, and caps LUNs and Node ACLs.
      | For fabrics other than iSCSI, targetcli masks the TPG level.
      |
      +-Network Portals (iSCSI only)
      |   A Network Portal adds an IP address and a port. Without at
      |   least one Network Portal, the Target remains disabled.
      |
      +-LUNs
      |   LUNs point at the Storage Objects, and are numbered 0-255.
      |
      +-ACLs
        | Identified by initiator WWNs/IQNs, ACLs group permissions
        | for that specific initiator. If ACLs are enabled, one
        | NodeACL is required per authorized initiator.
        |
        + Mapped LUNs
            Determine which LUNs an initiator will see. E.g., if
            Mapped LUN 1 points at LUN 0, the initiator referenced
            by the NodeACL will see LUN 0 as LUN 1.


In the targetcli environment, follow the procedure described here http://www.server-world.info/en/note?os=Fedora_21&p=iscsi
and create ACL iqn.1994-05.com.redhat:28205be4fa2c, matching the InitiatorName
in file /etc/iscsi/initiatorname.iscsi on the Compute Node


On the Compute Node follow http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2

*************************************************
Update /etc/iscsi/iscsid.conf  to match :-
*************************************************

  node.session.auth.username = username
  node.session.auth.password = password

  using the values assigned in the targetcli set up on the Controller, then run on the Compute node

  # systemctl  start iscsid
  # systemctl  enable iscsid
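Setting the CHAP credentials can also be scripted. A sketch on a local stub of /etc/iscsi/iscsid.conf; CHAP_USER and CHAP_PASS are placeholders for the actual values assigned in targetcli on the Controller:

```shell
# Placeholders for the credentials configured in targetcli.
CHAP_USER=username
CHAP_PASS=password

# Local stub standing in for /etc/iscsi/iscsid.conf (the real file ships
# with these two settings commented out).
CONF=iscsid.sample
printf '#node.session.auth.username = chap_user\n#node.session.auth.password = chap_pass\n' > "$CONF"

# Uncomment and fill in both settings.
sed -i \
  -e "s/^#\?node\.session\.auth\.username.*/node.session.auth.username = $CHAP_USER/" \
  -e "s/^#\?node\.session\.auth\.password.*/node.session.auth.password = $CHAP_PASS/" \
  "$CONF"

grep node.session.auth "$CONF"
```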

*****************************************************************************
 Update /etc/cinder/cinder.conf on the Controller as follows in the [DEFAULT] section
******************************************************************************

  enabled_backends = lvm001

  Then place at the bottom of cinder.conf:

   [lvm001]
   iscsi_helper=lioadm
   volume_group=cinder-volumes001
   iscsi_ip_address=192.168.1.127
   volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
   volume_backend_name=LVM_iSCSI001
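Both edits can be applied in one short script. A sketch on a local stub (the real file is /etc/cinder/cinder.conf on the Controller, followed by the service restarts below):

```shell
# Local stub standing in for /etc/cinder/cinder.conf.
CONF=cinder.sample
printf '[DEFAULT]\nenabled_backends = lvm001\n' > "$CONF"

# Append the backend section shown above to the bottom of the file.
cat >> "$CONF" <<'EOF'

[lvm001]
iscsi_helper=lioadm
volume_group=cinder-volumes001
iscsi_ip_address=192.168.1.127
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI001
EOF

grep volume_backend_name "$CONF"
```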

   ****************
   Now run :-
   ****************

   [root@juno1f21 ~(keystone_admin)]#  cinder type-create lvms

   [root@juno1f21 ~(keystone_admin)]#  cinder type-key lvms set    volume_backend_name=LVM_iSCSI001

   [root@juno1f21 ~(keystone_admin)]# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

*******************************************************************************
  Via the "volume type" drop-down menu, create VF21LVMS01 with the lvms type :-
*******************************************************************************



 
 and launch an instance of Fedora 21 via the LVMiSCSI volume created

 The Compute node will report :-

[root@juno2f21 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-d96a5ad7-bd0b-438a-8ffb-4cb631ed8752


[root@juno2f21 ~]# service iscsid status

Redirecting to /bin/systemctl status  iscsid.service
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled)
   Active: active (running) since Sat 2015-01-17 15:24:12 MSK; 1h 4min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 27674 ExecStop=/sbin/iscsiadm -k 0 2 (code=exited, status=0/SUCCESS)
  Process: 27680 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 27682 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─27681 /usr/sbin/iscsid
           └─27682 /usr/sbin/iscsid

Jan 17 15:44:57 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:03 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:39 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:43 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (Co...d)
Jan 17 15:45:46 juno2f21.localdomain iscsid[27681]: connection1:0 is operational after recov...s)
Hint: Some lines were ellipsized, use -l to show in full.

**********************************************************************
 Verify the volume-id shown in the targetcli> ls report :
**********************************************************************

[root@juno1f21 ~(keystone_admin)]# nova list --all-tenants
+--------------------------------------+------------------+-----------+------------+-------------+---------------------------------------+
| ID                                   | Name             | Status    | Task State | Power State | Networks                              |
+--------------------------------------+------------------+-----------+------------+-------------+---------------------------------------+
| 3f06cb34-797d-45d1-989e-cba14e902b6c | UbuntuUtopicRX01 | SUSPENDED | -          | Shutdown    | demo_network=40.0.0.17, 192.168.1.154 |
| 7fcbcf6f-67a7-4603-9c09-6e725d403a04 | VF21GLX01        | SUSPENDED | -          | Shutdown    | demo_network=40.0.0.16, 192.168.1.153 |
| a731443e-1355-44c0-811b-97cf9eab987e | VF21LVX001       | ACTIVE    | -          | Running     | demo_network=40.0.0.18, 192.168.1.155 |
+--------------------------------------+------------------+-----------+------------+-------------+---------------------------------------+
[root@juno1f21 ~(keystone_admin)]# nova show a731443e-1355-44c0-811b-97cf9eab987e
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | juno2f21.localdomain                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | juno2f21.localdomain                                     |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000008                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2015-01-17T12:33:50.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2015-01-17T12:33:38Z                                     |
| demo_network network                 | 40.0.0.18, 192.168.1.155                                 |
| flavor                               | m1.small (2)                                             |
| hostId                               | 40dba45d18a87067afdd4187c4467eed967a11c3b59df8b921f6b16e |
| id                                   | a731443e-1355-44c0-811b-97cf9eab987e                     |
| image                                | Attempt to boot from volume - no image supplied          |
| key_name                             | oskey57                                                  |
| metadata                             | {}                                                       |
| name                                 | VF21LVX001                                               |
| os-extended-volumes:volumes_attached | [{"id": "d96a5ad7-bd0b-438a-8ffb-4cb631ed8752"}]         |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 25f74c1d135c4727b1406cb35f9df70a                         |
| updated                              | 2015-01-17T12:52:06Z                                     |
| user_id                              | 0025c17969f64708a886d4bb1fa354cc                         |
+--------------------------------------+----------------------------------------------------------+

[root@juno1f21 ~(keystone_admin)]# targetcli
targetcli shell version 2.1.fb38
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ...................................................................................... [...]
  o- backstores ........................................................................... [...]
  | o- block ............................................................... [Storage Objects: 1]
  | | o- iqn.2010-10.org.openstack:volume-d96a5ad7-bd0b-438a-8ffb-4cb631ed8752  [/dev/cinder-volumes001/volume-d96a5ad7-bd0b-438a-8ffb-4cb631ed8752 (5.0GiB) write-thru activated]


Creating volume for Ubuntu Utopic


    
   

    Reporting from Compute side :-

    [root@juno1f21 ~(keystone_admin)]# ssh 192.168.1.137
Last login: Sat Jan 17 16:24:44 2015 from juno1f21.localdomain
[root@juno2f21 ~]# service iscsid status
Redirecting to /bin/systemctl status  iscsid.service
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled)
   Active: active (running) since Sat 2015-01-17 15:24:12 MSK; 1h 53min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 27674 ExecStop=/sbin/iscsiadm -k 0 2 (code=exited, status=0/SUCCESS)
  Process: 27680 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 27682 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─27681 /usr/sbin/iscsid
           └─27682 /usr/sbin/iscsid


Jan 17 15:45:03 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:09 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:43 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (Co...d)
Jan 17 15:45:46 juno2f21.localdomain iscsid[27681]: connection1:0 is operational after recov...s)
Jan 17 17:05:38 juno2f21.localdomain iscsid[27681]: Connection2:0 to [target: iqn.2010-10.or...ow

Hint: Some lines were ellipsized, use -l to show in full.

[root@juno2f21 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-d96a5ad7-bd0b-438a-8ffb-4cb631ed8752
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-e87a3ee8-fa04-4bab-aedc-31bd2f4d4c02