Sunday, April 19, 2015

Nested KVM set up on Fedora 22 && Running devstack on Ubuntu 14.04 guests

Below are brief instructions for achieving near-native performance of VMs created via devstack ( stack.sh ) inside another virtual machine created with the Fedora 22 KVM hypervisor and having the Nested KVM feature enabled. This requires a sufficiently advanced Intel CPU (Haswell or later, which have the newer hardware virtualization extensions) and 16 GB or more RAM.

****************************************
Create non-default libvirt subnet
****************************************

1. Create a new libvirt network definition file (for a subnet other than the default 192.168.122.0/24):

$ cat devstackvms.xml

<network>
   <name>devstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6e'/>
   <ip address='192.157.141.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.157.141.2' end='192.157.141.254' />
     </dhcp>
   </ip>
 </network>


2. Define the network:

 $ virsh net-define devstackvms.xml

3. Start the network and enable autostart:

 $ virsh net-start devstackvms
 $ virsh net-autostart devstackvms


4. List your libvirt networks to verify the new one is present:

$ virsh net-list

  Name              State      Autostart     Persistent
  ----------------------------------------------------------
  default           active     yes           yes
  devstackvms       active     yes           yes



Launch the Ubuntu1404 VM attached to the subnet just created. Set both Disk and Network to "virtio" mode before starting the installation.
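
As a sketch, the guest can also be created from the CLI with virt-install (the RAM/vCPU/disk sizes and the ISO path here are assumptions, not taken from the original setup):

$ sudo virt-install --name Ubuntu1404 --ram 8192 --vcpus 4 \
    --disk path=/var/lib/libvirt/images/Ubuntu1404.qcow2,size=40,bus=virtio \
    --network network=devstackvms,model=virtio \
    --cdrom /var/lib/libvirt/images/ubuntu-14.04.2-server-amd64.iso \
    --graphics vnc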

**********************************************************************************
 Procedure to enable nested virtualization (on Intel-based machines) [ 1 ]
**********************************************************************************

1. List modules and ensure KVM Kernel modules are enabled on L0:

    $ lsmod | grep -i kvm
    kvm_intel             133627  0
    kvm                   435079  1 kvm_intel


2. Show information for `kvm_intel` module:

    $ modinfo kvm_intel | grep -i nested
    parm:           nested:bool


3. Ensure nested virt is persistent across reboots by adding it as a
   config directive:

    $ cat /etc/modprobe.d/dist.conf
    options kvm-intel nested=y
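
If you prefer not to reboot (step 4 below), the module can in principle be reloaded with the option set directly; a sketch, assuming no VMs are currently running on L0:

    # modprobe -r kvm_intel
    # modprobe kvm_intel nested=1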


4. Reboot the host.


5. Check if the Nested KVM Kernel module option is enabled:

    $ cat /sys/module/kvm_intel/parameters/nested
    Y


6. Before you boot your L1 guest (i.e. the guest hypervisor that runs
   the nested guest), expose the virtualization extensions to it. The
   following exposes all CPU features of the host to your guest
   unconditionally:

    $ virt-xml Ubuntu1404 --edit  --cpu host-passthrough,clearxml=yes


7. Start your L1 guest (i.e. guest hypervisor):

    $ virsh start Ubuntu1404  --console


8. Ensure KVM extensions are enabled in the L1 guest by running:

    $ file /dev/kvm
    /dev/kvm: character special
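
As an additional check, the host-passthrough CPU should expose the vmx flag inside L1 (a generic command; a non-zero count confirms the extensions are visible):

    $ egrep -c '(vmx|svm)' /proc/cpuinfo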


You may also want to verify that Shadow VMCS, APIC virtualization, and EPT are enabled on the physical host (L0):
    $ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
    Y

    $ cat /sys/module/kvm_intel/parameters/enable_apicv
    N

    $ cat /sys/module/kvm_intel/parameters/ept
    Y


***************************************************************
Devstack installation procedure on Ubuntu 14.04.2 VM
***************************************************************


$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack

********************************************
Create local.conf
********************************************

[[local|localrc]]
HOST_IP=192.157.141.57
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50

FLOATING_RANGE=192.168.12.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.12.150,end=192.168.12.254
PUBLIC_NETWORK_GATEWAY=192.168.12.15

# Useful logging options for debugging:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service  n-cauth
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest


Then run ./stack.sh
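
Note that stack.sh refuses to run as root. If the VM only has a root account so far, devstack ships a helper that creates a suitable "stack" user (a sketch; run it from the devstack tree):

# ./tools/create-stack-user.sh
# su - stack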


****************************************************************************
To provide outbound connectivity, run the following from within the VM running the stack instance
****************************************************************************

 # iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE



****************************************************************************
To provide inbound connectivity (from the host running the KVM hypervisor)
to the VMs (L2) created, run from within the VM (L1)
****************************************************************************

# route add -net 192.168.1.0/24  gw 192.157.141.57 

where 192.157.141.57 is the VM's IP on the non-default libvirt subnet (devstackvms) and 192.168.1.0/24 is the subnet hosting machine 192.168.1.47, which runs the KVM hypervisor.


On machine 192.168.1.47 (L0), which is the Fedora 22 box running KVM/QEMU/libvirt,
run :-

# route add -net 192.168.12.0/24 gw 192.157.141.57


where 192.168.12.0/24 is the devstack public network (see local.conf above).


Wednesday, April 15, 2015

Nova libvirt-xen driver fails to schedule instance under Xen 4.4.1 Hypervisor with libxl toolstack

UPDATE 16/04/2015
For now http://www.slideshare.net/xen_com_mgr/openstack-xenfinal
is supposed to work only with nova networking, per Anthony PERARD.
Neutron appears to be an issue.
Please view the details of the troubleshooting and diagnostics obtained (thanks to Ian Campbell) :-
http://lists.xen.org/archives/html/xen-devel/2015-04/msg01856.html
END UPDATE

This post is written in regard to two publications from February 2015.
First:   http://wiki.xen.org/wiki/OpenStack_via_DevStack
Second : http://www.slideshare.net/xen_com_mgr/openstack-xenfinal
Both are devoted to the same problem: the nova libvirt-xen driver. The second states that everything is supposed to be fine as soon as a certain patch merges into mainline libvirt. Neither works for me; both generate errors in libxl-driver.log even with libvirt 1.2.14 (the most recent version as of this writing).
For a better understanding of the problem being raised, view also https://ask.openstack.org/en/question/64942/nova-libvirt-xen-driver-and-patch-feb-2015-in-upstream-libvirt/
I followed the more accurately written second one :-
On Ubuntu 14.04.2

# apt-get update
# apt-get -y upgrade
# apt-get install xen-hypervisor-4.4-amd64
# sudo reboot
$ git clone https://git.openstack.org/openstack-dev/devstack

Created local.conf under devstack folder as follows :-

[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15
# Useful logging options for debugging:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1
# Services
disable_service n-net
enable_service n-cauth
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest
# This is a Xen Project host:
LIBVIRT_TYPE=xen

 Ran ./stack.sh, which completed the installation successfully; libvirt versions 1.2.2, 1.2.9 and 1.2.14 have been tested. The first is the default on Trusty; 1.2.9 and 1.2.14 were built and installed after stack.sh completion. For every libvirt version tested, a fresh hardware instance of Ubuntu 14.04.2 was created.

Manual libvirt upgrade was done via :-

# apt-get build-dep libvirt
# tar xvzf libvirt-1.2.14.tar.gz -C /usr/src
# cd /usr/src/libvirt-1.2.14
# ./configure --prefix=/usr/
# make
# make install
# service libvirt-bin restart

root@ubuntu-system:~# virsh --connect xen:///
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # version
Compiled against library: libvirt 1.2.14
Using library: libvirt 1.2.14
Using API: Xen 1.2.14
Running hypervisor: Xen 4.4.0

*********************************
Per page 19 of second post
*********************************
The xen.gz command line was tuned, then:
ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec set vm_mode=HVM
ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec delete vm_mode
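
The same mechanism selects PV mode; a hedged example (per Nova's vm_mode image property, which accepts hvm and xen):

ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec set vm_mode=xen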

An attempt to launch an instance (nova-compute is up) fails with the error "No available host found" in n-sch.log on the Nova side.

The libxl-driver.log reports :-
root@ubuntu-system:/var/log/libvirt/libxl# ls -l
total 32
-rw-r--r-- 1 root root 30700 Apr 12 03:47 libxl-driver.log
*****************************************************************************************
libxl: debug: libxl_dm.c:1320:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: /usr/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-domid
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 2
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -chardev
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -mon
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -nodefaults
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-attach
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -name
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: instance-00000002
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -vnc
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 127.0.0.1:1
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -display
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: none
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -k
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: en-us
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -machine
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: xenpv
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -m
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 513
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: register slotnum=3
libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x7f36cc0012e0: inprogress: poller=0x7f36d8013130, flags=i
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "qmp_capabilities",
"id": 1
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "query-chardev",
"id": 2
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "query-vnc",
"id": 3
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: register slotnum=3
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:657:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:653:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8: deregister unregistered
libxl: debug: libxl_device.c:1023:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [-1] exited with error status 1
libxl: error: libxl_device.c:1085:device_hotplug_child_death_cb: script: ip link set vif2.0 name tap5600079c-9e failed
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_create.c:1226:domcreate_attach_vtpms: unable to add nic devices

libxl: debug: libxl_dm.c:1495:kill_device_model: Device Model signaled
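
The failing step can be replayed by hand to confirm that the vif rename itself is what breaks (a diagnostic sketch; the interface names come from the log above and only exist while the domain is being created):

# ip link show vif2.0
# ip link set vif2.0 name tap5600079c-9e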

Tuesday, April 14, 2015

Establishing access to public devstack (stack) network from LAN

To access VMs running within a stack (devstack) AIO instance on Ubuntu 14.04 host 192.168.1.57 from other boxes located on the same office LAN (192.168.1.0/24), proceed as follows :-

Run on Devstack Node
# Add route to LAN
$ sudo route add -net  192.168.1.0/24 gw 192.168.1.57

Run on LAN box
# Add route to devstack public network  via HOST_IP
$ sudo route add -net 192.168.10.0/24 gw 192.168.1.57

where 192.168.1.57 is the HOST_IP of the Devstack node running the stack instance,
192.168.10.0/24 is devstack's public network, and 192.168.1.0/24 is the LAN address

*************************************************************************************
If the stack instance is running on a KVM guest (Ubuntu 14.04) on a libvirt subnet, then to access stack VMs running inside that guest from the F21 box hosting the KVM hypervisor, run from within the guest (Ubuntu 14.04)
*************************************************************************************

# route add -net 192.168.1.0/24  gw 192.168.122.57 

where 192.168.122.57 is the guest's IP on the standard libvirt subnet 192.168.122.0/24, and 192.168.1.0/24 is the subnet hosting machine 192.168.1.47, which runs the KVM hypervisor


On machine 192.168.1.47, which is the Fedora 21 box running KVM/QEMU/libvirt,
run :-

# route add -net 192.168.12.0/24 gw 192.168.122.57

where 192.168.12.0/24 is the devstack public subnet inside the KVM guest (Ubuntu 14.04) hosting the stack (e.g. devstack) instance.

Saturday, April 11, 2015

RDO Juno multi node setup && Switching to eth(X) interfaces on Fedora 21

This post is closely related to RDO Juno multi-node deployment via packstack on a Fedora 21 landscape with boxes having different motherboards and different Ethernet NICs, either integrated on the boards or plugged into the systems.
Originally tested on a two-node (Controller&&Network plus Compute) Fedora 21 setup.

[root@junoVHS01 ~(keystone_admin)]# uname -a
Linux junoVHS01.localdomain 3.19.3-200.fc21.x86_64 #1 SMP Thu Mar 26 21:39:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux


Interfaces are (enp3s0, enp5s0) on the first board and (enp2s0, enp5s1) on the second. Both boards were converted to (eth0, eth1); creating udev rules to rename the Ethernet interfaces establishes a one-to-one correspondence between MAC addresses and eth(X) names. Just updating /boot/grub2/grub.cfg is not
enough on systems with several NICs. View also [ 1 ].


***************************************
Update  /etc/default/grub
***************************************
Append "net.ifnames=0 biosdevname=0" to the GRUB_CMDLINE_LINUX line
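
A hedged one-liner for that edit (back up /etc/default/grub first; assumes the GRUB_CMDLINE_LINUX value is double-quoted):

# sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 net.ifnames=0 biosdevname=0"/' /etc/default/grub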

Issue :-
# grub2-mkconfig -o /boot/grub2/grub.cfg

******************************************************
Run ifconfig to get MAC addresses of your NICS
******************************************************
[root@junoVHS01 network-scripts]# ifconfig

enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.127  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::7a24:afff:fe43:1b53  prefixlen 64  scopeid 0x20<link>
        ether 78:24:af:43:1b:53  txqueuelen 1000  (Ethernet)
        RX packets 44533  bytes 64844663 (61.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23881  bytes 1625287 (1.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.127  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::2e0:53ff:fe13:174c  prefixlen 64  scopeid 0x20<link>
        ether 00:e0:53:13:17:4c  txqueuelen 1000  (Ethernet)
        RX packets 65  bytes 22230 (21.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34  bytes 3466 (3.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



********************************************************
Create /etc/udev/rules.d/60-net.rules
********************************************************

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="78:24:af:43:1b:53", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:e0:53:13:17:4c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
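
The rules can be exercised without waiting for the reboot below (a sketch; renaming a NIC that is up may still require taking it down first):

# udevadm control --reload-rules
# udevadm trigger --subsystem-match=net --action=add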

***********************************************************
Go to /etc/sysconfig/network-scripts
***********************************************************
cp ifcfg-enp3s0 ifcfg-eth0
cp ifcfg-enp5s0 ifcfg-eth1

and set

DEVICE="eth0"
DEVICE="eth1"

in the corresponding files

# rm  -f ifcfg-enp*s*

************************
System reboot.
************************

This RDO Juno multi-node setup is easily updated from two to three nodes with a
separate box for the Network Node (CONFIG_NETWORK_HOSTS=192.168.1.147).
Several Compute Nodes may be added via CONFIG_COMPUTE_HOSTS.


*******************************************************************************
Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)


junoVHS01.localdomain   -  Controller&&Network Node (192.168.1.127)
junoVHS02.localdomain   -  Compute Node   (192.168.1.137)

VTEPS (192.168.0.127 - Controller, 192.168.0.137 - Compute )

********************************************************************************
 

The answer file used by packstack :-

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.168.1.127
CONFIG_COMPUTE_HOSTS=192.168.1.137
CONFIG_NETWORK_HOSTS=192.168.1.127

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.168.1.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.168.1.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.168.1.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2

CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.168.1.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
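
With the answer file in place, deployment is driven by packstack (a sketch; the file name is an assumption):

# packstack --gen-answer-file=answer.txt   # generate a fresh template to edit
# packstack --answer-file=answer.txt       # run the deployment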

********************************************************************
Upon successful completion you are supposed to get
********************************************************************

[root@junoVHS01 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
openvswitch:                            active
dbus:                                   active
target:                                 inactive  (disabled on boot)
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 82fb089130a64902a3c0cdfefc25aadb |   admin    |   True  |    root@localhost    |
| 8d20be7fd2e04054992bde8af6658b5f | ceilometer |   True  | ceilometer@localhost |
| 91def7a2ef424ef287041a88341c886a |   cinder   |   True  |   cinder@localhost   |
| 77a7997146ca4a9ea8cc4572f79a111a |    demo    |   True  |                      |
| 94079d20cd6a457db9a0ab319c0d1f0f |   glance   |   True  |   glance@localhost   |
| ebf0369d9a6b49f088a10e80eabe683d |  neutron   |   True  |  neutron@localhost   |
| cae11d29ca204dee97fb3bc426afc78f |    nova    |   True  |    nova@localhost    |
| 53188618a56f4dc0a59e06703349fa39 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
| ID                                   | Name               | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
| fbc1f97a-c176-4a64-a495-bf72580e3d9e | cirros             | qcow2       | bare             | 13200896  | active |
| 0abaa464-f41f-4871-b73d-7d264b773597 | Fedora 21 image    | qcow2       | bare             | 158443520 | active |
| 469f7921-2ffa-4f4b-b223-2cd6e9a101e2 | Ubuntu 15.04 image | qcow2       | bare             | 284492288 | active |
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | junoVHS01.localdomain | internal | enabled | up    | 2015-04-11T17:22:34.000000 | -               |
| 2  | nova-scheduler   | junoVHS01.localdomain | internal | enabled | up    | 2015-04-11T17:22:33.000000 | -               |
| 3  | nova-conductor   | junoVHS01.localdomain | internal | enabled | up    | 2015-04-11T17:22:33.000000 | -               |
| 4  | nova-cert        | junoVHS01.localdomain | internal | enabled | up    | 2015-04-11T17:22:34.000000 | -               |
| 5  | nova-compute     | junoVHS02.localdomain | nova     | enabled | up    | 2015-04-11T17:22:34.000000 | -               |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+----------+------+
| ID                                   | Label    | Cidr |
+--------------------------------------+----------+------+
| 39b4dd7b-dc1d-4752-84eb-caeadd0e5781 | public   | -    |
| 8b1f58fd-924b-4b85-9ab6-e2ea249ac0ea | demo_net | -    |
| a5f04387-2663-4f05-9eb4-95bd30f30e9c | private  | -    |
+--------------------------------------+----------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
************************************
In more detail
************************************  

[root@junoVHS01 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth junoVHS01.localdomain                internal         enabled    :-)   2015-04-11 17:40:44
nova-scheduler   junoVHS01.localdomain                internal         enabled    :-)   2015-04-11 17:40:44
nova-conductor   junoVHS01.localdomain                internal         enabled    :-)   2015-04-11 17:40:44
nova-cert        junoVHS01.localdomain                internal         enabled    :-)   2015-04-11 17:40:45
nova-compute     junoVHS02.localdomain                nova             enabled    :-)   2015-04-11 17:40:44

[root@junoVHS01 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                  | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+
| 50b9df88-58a1-4a16-84ed-38c423bdd76f | Metadata agent     | junoVHS01.localdomain | :-)   | True           | neutron-metadata-agent    |
| 65afc586-c15e-48eb-bb29-1fd664f88960 | Open vSwitch agent | junoVHS02.localdomain | :-)   | True           | neutron-openvswitch-agent |
| b6351d3f-ffbd-4839-a6b9-5f01cee6a9b7 | Open vSwitch agent | junoVHS01.localdomain | :-)   | True           | neutron-openvswitch-agent |
| c1a55d0a-b1b1-461f-bc56-dbac4ef7a538 | L3 agent           | junoVHS01.localdomain | :-)   | True           | neutron-l3-agent          |
| d3847d47-8b08-4f23-aa8c-887ca4534b9f | DHCP agent         | junoVHS01.localdomain | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+



********************************************************************************
Updates needed only on the Controller (or, in a 3-node deployment, on the Network Node) :-
********************************************************************************

[root@junoVHS01  network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

[root@junoVHS01 network-scripts(keystone_admin)]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Then the network service was restarted and NetworkManager disabled.
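
A sketch of the commands involved, assuming the legacy initscripts "network" service is present:

# systemctl stop NetworkManager
# systemctl disable NetworkManager
# chkconfig network on
# service network restart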

****************************************************************
OVS_VSCTL SHOW REPORT ON CONTROLLER
****************************************************************

[root@junoVHS01 ~(keystone_admin)]# ovs-vsctl show
14e6125c-c108-4369-b461-4fb2e68c4884
    Bridge br-int
        fail_mode: secure
        Port "qr-bdc3038d-50"
            tag: 2
            Interface "qr-bdc3038d-50"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tapd91e13c6-54"
            tag: 3
            Interface "tapd91e13c6-54"
                type: internal
        Port "tap117fa529-b1"
            tag: 2
            Interface "tap117fa529-b1"
                type: internal
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-0289d92f-ca"
            Interface "qg-0289d92f-ca"
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a80089"
            Interface "vxlan-c0a80089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.3.1-git4750c96"

**********************************************************
OVS_VSCTL SHOW REPORT ON COMPUTE
**********************************************************
[root@junoVHS02 ~]# ovs-vsctl show
2fd00c5e-ac58-460b-8c3e-0fdb36afa8d4
    Bridge br-int
        fail_mode: secure
        Port "qvo6447cf52-0e"
            tag: 1
            Interface "qvo6447cf52-0e"
        Port "qvob88ccbd4-0c"
            tag: 1
            Interface "qvob88ccbd4-0c"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo089db78b-b0"
            tag: 1
            Interface "qvo089db78b-b0"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a8007f"
            Interface "vxlan-c0a8007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.3.1-git4750c96"

References
1. http://unix.stackexchange.com/questions/81834/how-can-i-change-the-default-ens33-network-device-to-old-eth0-on-fedora-19

Monday, March 23, 2015

Setup the most recent Nova Docker Driver via Devstack on F21

********************************************************************************
UPDATE as 03/26/2015
********************************************************************************
To make the devstack configuration persistent between reboots on Fedora 21,
i.e. restartable via ./rejoin-stack.sh, the following services must be enabled :-

  systemctl enable rabbitmq-server
  systemctl enable openvswitch
  systemctl enable httpd
  systemctl enable mariadb
  systemctl enable mysqld

 File /etc/rc.d/rc.local should contain (in my case) :-

#!/bin/bash
ip addr flush dev br-ex ;
ip addr add 192.168.10.15/24 dev br-ex ;
ip link set br-ex up ;
route add -net 10.254.1.0/24 gw 192.168.10.15 ;
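
On Fedora, systemd only runs /etc/rc.d/rc.local if the file is executable, so (a reminder, not devstack-specific):

# chmod +x /etc/rc.d/rc.local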
The system is supposed to be shut down via :-

$ sudo ./unstack.sh
********************************************************************************
     Due to Nova switching to oslo logging in the OpenStack Kilo release, the nova-docker driver was also switched to oslo logging, which makes it impossible to test this driver with a nova-compute service built for the Juno release. Running devstack on systems other than Ubuntu 14.04 is usually affected by python module versions lower than devstack requires. The post below solves this issue on Fedora 21 by upgrading the required modules via Fedora Rawhide, and also provides a workaround for the python-six version drop caused by the driver build, which is a Fedora 21-specific bug. In short, it is a brief instruction on how to run devstack on Fedora 21 without crashing. It targets only development issues.
Actually, it follows up http://blog.oddbit.com/2015/02/06/installing-nova-docker-on-fedora-21/ ; however, RDO Juno is not pre-installed, and the Nova Docker driver is built first based on the top commit of https://git.openstack.org/cgit/stackforge/nova-docker/ . The next step is :-

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack

Create local.conf under devstack following either of the two links provided
and run ./stack.sh, performing an AIO OpenStack installation just as it does
on Ubuntu 14.04. All the steps preventing stack.sh from crashing on F21 are described
right below.

 # yum -y install git docker-io python-six  fedora-repos-rawhide
 # yum --enablerepo=rawhide install  python-pip python-pbr systemd
 # systemctl enable docker.service
 # systemctl start docker.service
 # groupadd nova

  Edit  /etc/sysconfig/docker

   OPTIONS='--selinux-enabled -G nova'
 
 # systemctl restart docker.service
 # reboot

 Next

 # chmod 666 /var/run/docker.sock
 # yum -y install gcc python-devel ( required for driver build )

 $ git clone http://github.com/stackforge/nova-docker.git
 $ cd nova-docker
 $ sudo pip install . 

  You might experience problems cloning nova-docker.git
  to the Fedora box (VM); in that case install an Ubuntu 14.04.2 VM
  (for instance on the KVM F21 hypervisor),
  log into that VM and run :-

  # git clone git://github.com/stackforge/nova-docker.git
  # scp -r nova-docker fedora21-box-ip:/root

 The driver build drops python-six to version 1.2; to bring it back to 1.9, reinstall it:

   # yum -y reinstall python-six
   # mkdir -p /opt/stack
   # chmod -R 755 /opt/stack

 Run devstack as user stack:-

  $ git clone https://git.openstack.org/openstack-dev/devstack
  $ cd devstack
  1. Create local.conf
  2. Verify docker service availability
   $ docker version
      Client version: 1.5.0
      Client API version: 1.17
      Go version (client): go1.3.3
      Git commit (client): a8a31ef/1.5.0
      OS/Arch (client): linux/amd64
      Server version: 1.5.0
      Server API version: 1.17
      Go version (server): go1.3.3
      Git commit (server): a8a31ef/1.5.0

 3. Then run :-
  $ ./stack.sh

per http://blog.oddbit.com/2015/02/11/installing-novadocker-with-devstack/
or view  http://bderzhavets.blogspot.com/2015/02/set-up-nova-docker-driver-on-ubuntu.html   for another version of local.conf

*****************************************************************************
My version of local.conf, which allows defining the floating pool as you need;
a bit more flexible than the original
*****************************************************************************
[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# Introduce glance to docker images
[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver

**************************************************************************************
After stack.sh completes, disable firewalld: devstack does not interact with Fedora's firewalld, and the OpenStack daemons it brings up require their ports to be open
***************************************************************************************
$ sudo cp nova-docker/etc/nova/rootwrap.d/docker.filters \
  /etc/nova/rootwrap.d/
 
#  systemctl stop firewalld
#  systemctl disable firewalld
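
Alternatively, instead of disabling firewalld entirely, the required ports could be opened explicitly; a sketch covering only Horizon and noVNC (a real deployment needs many more ports, so this list is an assumption):

#  firewall-cmd --permanent --add-port=80/tcp
#  firewall-cmd --permanent --add-port=6080/tcp
#  firewall-cmd --reload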

$ cd dev*
$ . openrc demo 

$ neutron security-group-rule-create --protocol icmp \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

$ neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

$ neutron security-group-rule-create --protocol tcp \
  --port-range-min 80 --port-range-max 80 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Uploading a docker image to glance :-
$ . openrc admin
$  docker pull rastasheep/ubuntu-sshd:14.04
$  docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

Launch a new instance via the uploaded image :-
$ . openrc demo
$  nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
    --nic net-id=private-net-id UbuntuDocker
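
To reach the container over the floating network, a floating IP can then be associated (a sketch using the legacy novaclient syntax; the pool name "public" and the address are assumptions):

$ nova floating-ip-create public
$ nova floating-ip-associate UbuntuDocker 192.168.10.151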

To provide internet access for the launched nova-docker instance, run :-


# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

************************
On real F21 box
************************
# iptables -t nat -A POSTROUTING -o enp2s0 -j MASQUERADE

    or whatever interface name ifconfig reports on the machine

# iptables -t nat -A POSTROUTING -o em1 -j MASQUERADE

   *************************
   To use Horizon
   *************************
   # yum -y install nodejs
   # systemctl restart httpd.service

The system has been set up on a real F21 box.

References
http://blog.oddbit.com/2015/02/06/installing-nova-docker-on-fedora-21/
https://www.berrange.com/posts/2012/11/19/walk-through-of-running-openstack-on-fedora-17-using-devstack/ 

Sunday, February 15, 2015

Testing the most recent Nova-Docker driver on Ubuntu 14.04 in devstack environment recoverable between reboots

*******************************************************************************
UPDATE : As of 03/11/2015 the patch below has merged upstream.
In the meantime, the instructions in "UPDATE : As of 03/09/2015" below are already history.

View : https://review.openstack.org/#/c/163022/
View : https://git.openstack.org/cgit/stackforge/nova-docker/

*******************************************************************************
UPDATE : As of 03/09/2015

 View "What is missing commit 9d06520645f28d96ef905a709f8ff0c27842b58b in nova-docker master branch ?"

for details and an explanation of what is wrong with the commit mentioned above.
To succeed with the Nova Docker driver build on Ubuntu 14.04.2, proceed as
follows; otherwise you will be able to load the driver via a stack.sh run, but networking
- floating and private IPs - won't work. Nova will just boot the container and nothing else. The patch below is easy to apply manually. It brings the container's interface up, leaving the network alive and ready to work for you.


$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker

Apply patch

diff --git a/novadocker/virt/docker/vifs.py b/novadocker/virt/docker/vifs.py
index a2e7b23..1d159f7 100644
--- a/novadocker/virt/docker/vifs.py
+++ b/novadocker/virt/docker/vifs.py
@@ -248,6 +248,8 @@ class DockerGenericVIFDriver(object):
                           run_as_root=True)
             utils.execute('ip', 'netns', 'exec', container_id, 'ip', 'addr',
                           'add', ip, 'dev', if_remote_name, run_as_root=True)
+            utils.execute('ip', 'netns', 'exec', container_id, 'ip', 'link',
+                          'set', if_remote_name,'up',run_as_root=True)
             if gateway is not None:
                 utils.execute('ip', 'netns', 'exec', container_id,
                               'ip', 'route', 'replace', 'default', 'via',
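
One way to apply it (a sketch, assuming the diff above has been saved as vifs-up.patch in the nova-docker tree):

$ git apply vifs-up.patch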
Then build the driver
$ sudo pip install .
********************************************************************************
Recently a new patch, https://review.openstack.org/#/c/154750/ , merged into
https://github.com/stackforge/nova-docker.git , which made it possible
to test the Nova-Docker driver built from the current git tree against the most
recent OpenStack code obtained by devstack by cloning https://git.openstack.org/openstack-dev/devstack . However, nova-docker containers were lost after every reboot because the bridge br-ex came up with no IP, and running ./rejoin-stack.sh didn't help much. This post describes a workaround for this issue.


   The first part of this article actually follows http://blog.oddbit.com/2015/02/11/installing-novadocker-with-devstack/
written by Lars Kellogg-Stedman, with non-critical changes in the local.conf file.

   The second part of the article provides a workaround making the created nova-docker
instances and the whole devstack environment recoverable between reboots.

Reproducing the first part, I also installed Horizon, launching nova-docker containers and assigning floating IPs by mouse click (via an admin login working with the preinstalled Demo project).
Run as root (post install) to open the way out for VMs
*************************************************************************
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
*************************************************************************
$ sudo apt-get update
$ sudo apt-get -y install git git-review python-pip python-dev
$ sudo apt-get -y upgrade

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main  \
   > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker

*********************************************
Update  /etc/default/docker and setting:
*********************************************
DOCKER_OPTS='-G ubuntu'

#service docker restart

*******************************
Installing nova-docker
*******************************
This block is subject to change as commits done
after e9dcf7e790e4df2f9025b19896173995a32692fc,
in particular 85071220cbc3c1edb4a4c67db3e7060284f35c6b,
are tested as not disabling floating IPs.

$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ git checkout e9dcf7e790e4df2f9025b19896173995a32692fc
$ sudo pip install .
***************************************************************************
UPDATE 03/12/2015  To get floating IPs working in meantime I have
***************************************************************************
$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ git revert -m 1 661998214962d3e86063196bda0b3a619b7f4e26
$ sudo pip install .

************************************
UPDATE 03/13/2015
************************************
$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ sudo pip install .

Seems to be working; however, I've noticed a strange issue with

# iptables -t nat -A POSTROUTING -o eth0 -j  MASQUERADE

This directive has the potential danger of locking your floating IPs if you MASQUERADE the concrete subnet that is providing the floating IPs.

*****************************
Configuring devstack
*****************************

Now we're ready to get devstack up and running. Start by cloning the repository:

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
1. Create local.conf under devstack ( original version )
***************
local.conf
***************
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# Introduce glance to docker images
[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver

*****************************************************************************
My version of local.conf, which allows defining the floating pool as you need;
a bit more flexible than the original
*****************************************************************************
[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# Introduce glance to docker images
[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver

**************************************
Corresponding iptables entry
**************************************
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

At this point you are ready to run :-

$ ./stack.sh

*****************************************************************************
Attention: skipping this step causes the message "No hosts available"
when launching, or causes failure to launch nova-docker instances
when stack.sh is rerun after ./unstack.sh
******************************************************************************

$ sudo cp nova-docker/etc/nova/rootwrap.d/docker.filters \
  /etc/nova/rootwrap.d/

$ . openrc admin

For docker pull && docker save

$ . openrc demo

To launch instances

*********************************************************************************
Next issue: you have run `sudo ./unstack.sh`, rebooted the box hosting the devstack instance, and the OVS bridge "br-ex" came up with no IP, no matter which local.conf was used for the ./stack.sh deployment.
Before running ./rejoin-stack.sh, the following actions have to be undertaken
*********************************************************************************
 This version is supposed to work with the second version of local.conf, where
 PUBLIC_NETWORK_GATEWAY=192.168.10.15

    sudo ip addr flush dev br-ex
    sudo ip addr add 192.168.10.15/24 dev br-ex

    sudo ip link set br-ex up
    sudo route add -net 10.254.1.0/24 gw 192.168.10.15
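
These four commands can live in a small helper script so that recovery after a reboot is one step (a sketch; the script path and name are arbitrary):

#!/bin/bash
# restore-br-ex.sh - re-create the br-ex addressing lost on reboot;
# values match PUBLIC_NETWORK_GATEWAY and FIXED_RANGE from local.conf above
ip addr flush dev br-ex
ip addr add 192.168.10.15/24 dev br-ex
ip link set br-ex up
route add -net 10.254.1.0/24 gw 192.168.10.15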



******************************************************
Verify the correct environment is installed :-
******************************************************

ubuntu@ubuntu-System-Product-Name:~$ ifconfig
br-ex     Link encap:Ethernet  HWaddr de:64:4b:ba:a7:48 
          inet addr:192.168.10.15  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:2186 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2649 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1801780 (1.8 MB)  TX bytes:2194422 (2.1 MB)


br-int    Link encap:Ethernet  HWaddr b2:cf:54:c5:a0:49 
          inet6 addr: fe80::b007:79ff:fe87:4260/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:648 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:120474 (120.4 KB)  TX bytes:648 (648.0 B)

br-tun    Link encap:Ethernet  HWaddr 3a:fb:71:08:1a:45 
          inet6 addr: fe80::899:bcff:fed6:8d8d/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99 
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 90:e6:ba:2d:11:eb 
          inet addr:192.168.1.37  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::92e6:baff:fe2d:11eb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:64604 errors:0 dropped:0 overruns:0 frame:0
          TX packets:37999 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:88470764 (88.4 MB)  TX bytes:3455868 (3.4 MB)

eth1      Link encap:Ethernet  HWaddr 00:0c:76:e0:1e:c5 
          inet6 addr: fe80::20c:76ff:fee0:1ec5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:239 errors:0 dropped:0 overruns:0 frame:0
          TX packets:389 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:58024 (58.0 KB)  TX bytes:75526 (75.5 KB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:30804 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30804 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:10921200 (10.9 MB)  TX bytes:10921200 (10.9 MB)

ns44923080-eb Link encap:Ethernet  HWaddr 9a:db:d0:5a:ad:02 
          inet6 addr: fe80::98db:d0ff:fe5a:ad02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:621 errors:0 dropped:0 overruns:0 frame:0
          TX packets:289 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:119156 (119.1 KB)  TX bytes:55649 (55.6 KB)

ns9cb8e46e-35 Link encap:Ethernet  HWaddr 6e:f3:23:93:b4:11 
          inet6 addr: fe80::6cf3:23ff:fe93:b411/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:637 errors:0 dropped:0 overruns:0 frame:0
          TX packets:271 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:121878 (121.8 KB)  TX bytes:52144 (52.1 KB)

tap44923080-eb Link encap:Ethernet  HWaddr ee:b3:16:a3:f9:ed 
          inet6 addr: fe80::ecb3:16ff:fea3:f9ed/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:289 errors:0 dropped:0 overruns:0 frame:0
          TX packets:621 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:55649 (55.6 KB)  TX bytes:119156 (119.1 KB)

tap8897281a-3f Link encap:Ethernet  HWaddr 9a:2a:eb:a5:3d:60 
          inet6 addr: fe80::982a:ebff:fea5:3d60/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2236 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3452 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1812589 (1.8 MB)  TX bytes:2351741 (2.3 MB)

tap9cb8e46e-35 Link encap:Ethernet  HWaddr 06:3c:cc:e5:30:4a 
          inet6 addr: fe80::43c:ccff:fee5:304a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:271 errors:0 dropped:0 overruns:0 frame:0
          TX packets:637 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:52144 (52.1 KB)  TX bytes:121878 (121.8 KB)

virbr0    Link encap:Ethernet  HWaddr e2:93:d0:a0:2c:f6 
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


ubuntu@ubuntu-System-Product-Name:~$ route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
10.254.1.0      192.168.10.15   255.255.255.0   UG    0      0        0 br-ex
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.1.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 br-ex
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0


****************************************
At this point you may run
****************************************

    cd devstack ; ./rejoin-stack.sh

and it will bring your devstack environment back.
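
To confirm the environment really came back, something like the following can
be used (devstack of this era runs its services inside a screen session named
"stack" by default):

$ screen -ls          # a "stack" screen session should be listed
$ . openrc admin
$ nova service-list   # nova-compute / nova-scheduler should report "up"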

********************************************************************
On the Ubuntu 14.04 box used for this testing, these recovery steps
are automated via /etc/rc.local
********************************************************************
root@ubuntu-P5Q3 :~# cat /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
ip addr flush dev br-ex ;
ip addr add 192.168.10.15/24 dev br-ex ;
ip link set br-ex up ;
route add -net 10.254.1.0/24 gw 192.168.10.15 ;
exit 0
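
Note that /etc/rc.local is only executed at boot while it stays executable;
if the file was recreated by hand, it is worth checking:

$ sudo chmod +x /etc/rc.local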

*****************************************************************
Establishing access to public devstack net from LAN
*****************************************************************

Run on Devstack Node
# Add route to LAN
$ sudo route add -net  192.168.1.0/24 gw 192.168.1.57

Run on LAN box
# Add route to devstack public network  via HOST_IP
$ sudo route add -net 192.168.10.0/24 gw 192.168.1.57

where 192.168.1.57 is the HOST_IP of the Devstack Node,
192.168.10.0/24 is devstack's public network, and
192.168.1.0/24 is the LAN address range.
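
These routes only help if the Devstack Node actually forwards packets between
eth0 and br-ex. Devstack normally enables forwarding itself, but it is worth
verifying (a value of 1 means enabled):

$ cat /proc/sys/net/ipv4/ip_forward
$ sudo sysctl -w net.ipv4.ip_forward=1    # enable it if the value was 0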


[Screenshot] Vncviewer started from the Ubuntu VM with the devstack
environment installed, connecting to a vncserver screen running on an
Ubuntu Rastasheep nova-docker instance

[Screenshot] Running a Glassfish 4.1 nova-docker container on a physical
Ubuntu 14.04 box

[Screenshot] SQLDeveloper connection to an Oracle XE database running inside
a nova-docker container

*********************************************************************
Launching a nova-docker container via CLI on a physical Ubuntu 14.04 box
*********************************************************************

ubuntu@ubuntu-P5Q3 :~/devstack$ nova boot --image rastasheep/ubuntu-sshd:latest  --flavor m1.small UbuntuRST
+--------------------------------------+----------------------------------------------------------------------+
| Property                             | Value                                                                |
+--------------------------------------+----------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                               |
| OS-EXT-AZ:availability_zone          | nova                                                                 |
| OS-EXT-STS:power_state               | 0                                                                    |
| OS-EXT-STS:task_state                | scheduling                                                           |
| OS-EXT-STS:vm_state                  | building                                                             |
| OS-SRV-USG:launched_at               | -                                                                    |
| OS-SRV-USG:terminated_at             | -                                                                    |
| accessIPv4                           |                                                                      |
| accessIPv6                           |                                                                      |
| adminPass                            | n56arrfUdTLY                                                         |
| config_drive                         |                                                                      |
| created                              | 2015-02-16T20:18:38Z                                                 |
| flavor                               | m1.small (2)                                                         |
| hostId                               |                                                                      |
| id                                   | 85acb8d4-2387-4a21-9b77-321480f03163                                 |
| image                                | rastasheep/ubuntu-sshd:latest (87956634-9708-4d63-8daf-cdd15d288d86) |
| key_name                             | -                                                                    |
| metadata                             | {}                                                                   |
| name                                 | UbuntuRST                                                            |
| os-extended-volumes:volumes_attached | []                                                                   |
| progress                             | 0                                                                    |
| security_groups                      | default                                                              |
| status                               | BUILD                                                                |
| tenant_id                            | 2f34beaaa0684e899f28c1b6fef521ac                                     |
| updated                              | 2015-02-16T20:18:38Z                                                 |
| user_id                              | a78cae8feb1f40b081db787629a407af                                     |
+--------------------------------------+----------------------------------------------------------------------+

ubuntu@ubuntu-P5Q3 :~/devstack$ nova list
+--------------------------------------+------------------+--------+------------+-------------+------------------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                           |
+--------------------------------------+------------------+--------+------------+-------------+------------------------------------+
| 85acb8d4-2387-4a21-9b77-321480f03163 | UbuntuRST        | ACTIVE | -          | Running     | private=10.254.1.6                 |
| fc0a6180-d177-4f04-bdf6-382820c5f8da | derbyGlassfish41 | ACTIVE | -          | Running     | private=10.254.1.5, 192.168.10.152 |
+--------------------------------------+------------------+--------+------------+-------------+------------------------------------+


ubuntu@ubuntu-P5Q3 :~/devstack$ nova floating-ip-create
+----------------+-----------+----------+--------+
| Ip             | Server Id | Fixed Ip | Pool   |
+----------------+-----------+----------+--------+
| 192.168.10.153 | -         | -        | public |
+----------------+-----------+----------+--------+

ubuntu@ubuntu-P5Q3 :~/devstack$ nova floating-ip-associate UbuntuRST 192.168.10.153
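
The ping and ssh below succeed only because the default security group
permits ICMP and TCP/22. If they fail on a fresh deployment, the rules can be
added with the nova CLI of this era (0.0.0.0/0 opens the ports to everyone,
so narrow the CIDR on anything non-disposable):

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0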

ubuntu@ubuntu-P5Q3 :~/devstack$ ping -c 3 192.168.10.153
PING 192.168.10.153 (192.168.10.153) 56(84) bytes of data.
64 bytes from 192.168.10.153: icmp_seq=1 ttl=63 time=0.667 ms
64 bytes from 192.168.10.153: icmp_seq=2 ttl=63 time=0.274 ms
64 bytes from 192.168.10.153: icmp_seq=3 ttl=63 time=0.084 ms

--- 192.168.10.153 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.084/0.341/0.667/0.243 ms

ubuntu@ubuntu-P5Q3 :~/devstack$ ssh root@192.168.10.153
The authenticity of host '192.168.10.153 (192.168.10.153)' can't be established.
ECDSA key fingerprint is cf:f3:e5:fd:ce:d9:99:b6:79:2d:34:73:e8:a3:2e:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.10.153' (ECDSA) to the list of known hosts.
root@192.168.10.153's password:
root@instance-00000004:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 20:18 ?        00:00:00 /usr/sbin/sshd -D
root         5     1  0 20:22 ?        00:00:00 sshd: root@pts/0   
root         7     5  0 20:22 pts/0    00:00:00 -bash
root        18     7  0 20:22 pts/0    00:00:00 ps -ef

root@instance-00000004:~# ifconfig
lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

nsa7183e2e-09 Link encap:Ethernet  HWaddr fa:16:3e:3d:0f:68 
          inet addr:10.254.1.6  Bcast:10.254.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe3d:f68/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2378 errors:0 dropped:12 overruns:0 frame:0
          TX packets:1425 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2586320 (2.5 MB)  TX bytes:132646 (132.6 KB)


*************************************************************
Login into UbuntuRST via the qdhcp namespace
*************************************************************
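
The qdhcp namespace name embeds the UUID of the tenant's private network, so
it can be looked up rather than guessed (UUIDs will differ per deployment):

$ neutron net-list | grep private    # note the UUID of the private network
$ sudo ip netns list                 # the matching qdhcp-<uuid> should appear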

ubuntu@ubuntu-P5Q3 :~/devstack$ sudo ip netns exec qdhcp-c9e35028-bb1b-4141-b02b-9f35c7524dd2 ssh root@10.254.1.6
The authenticity of host '10.254.1.6 (10.254.1.6)' can't be established.
ECDSA key fingerprint is cf:f3:e5:fd:ce:d9:99:b6:79:2d:34:73:e8:a3:2e:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.254.1.6' (ECDSA) to the list of known hosts.
root@10.254.1.6's password:

Last login: Mon Feb 16 20:22:28 2015 from 192.168.10.15
root@instance-00000004:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=19.3 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=18.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=19.2 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=18.4 ms

References
1.  https://gist.github.com/charlesflynn/5576114