Saturday, June 06, 2015

Switching to Dashboard Spice Console in RDO Kilo on Fedora 22

*************************
UPDATE 06/27/2015
*************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf  install -y openstack-packstack  
# dnf install fedora-repos-rawhide
# dnf  --enablerepo=rawhide update openstack-packstack

Fedora - Rawhide - Developmental packages for the next Fedora re 1.7 MB/s |  45 MB     00:27   
Last metadata expiration check performed 0:00:39 ago on Sat Jun 27 13:23:03 2015.
Dependencies resolved.
==============================================================
 Package                       Arch      Version                                Repository  Size
==============================================================
Upgrading:
 openstack-packstack           noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide    233 k
 openstack-packstack-puppet    noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide     23 k

Transaction Summary
==============================================================
Upgrade  2 Packages
 .  .  .  .  .

# dnf install python3-pyOpenSSL.noarch python-service-identity.noarch python-ndg_httpsclient.noarch

At this point run :-

# packstack  --gen-answer-file answer-file-aio.txt

and set

CONFIG_KEYSTONE_SERVICE_NAME=httpd

I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
Then run `packstack --answer-file=./answer-file-aio.txt`; however, you will still
need to pre-patch provision_demo.pp at the moment
( see the third patch at http://textuploader.com/yn0v ), the rest should work fine.
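For reference, the two edits mentioned above can also be scripted (a minimal sketch, assuming the answer file sits in the current directory and that it is the second line of mod_dnssd.conf that needs commenting out, as described above) :-

# sed -i 's/^CONFIG_KEYSTONE_SERVICE_NAME=.*/CONFIG_KEYSTONE_SERVICE_NAME=httpd/' answer-file-aio.txt
# sed -i '2 s/^/#/' /etc/httpd/conf.d/mod_dnssd.conf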

Upon completion you may try to follow :-
https://www.rdoproject.org/Neutron_with_existing_external_network
I didn't test it on Fedora 22; I just created external and private networks of VXLAN type and configured the OVS bridge interfaces as follows :-
 
[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.32"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
ONBOOT="yes"
HWADDR="90:E6:BA:2D:11:EB"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

When the configuration above is done :-

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot

*************************
UPDATE 06/26/2015
*************************
To install RDO Kilo on Fedora 22 :-
after `dnf -y install openstack-packstack` :-
# cd /usr/lib/python2.7/site-packages/packstack/puppet/templates
Then apply the following 3 patches.
# cd ; packstack  --gen-answer-file answer-file-aio.txt
Set "CONFIG_NAGIOS_INSTALL=n" in  answer-file-aio.txt
# packstack --answer-file=./answer-file-aio.txt
************************
UPDATE 05/19/2015
************************
MATE Desktop supports sound (via the patch mentioned below) on RDO Kilo cloud instances F22, F21, F20. The RDO Kilo AIO install was performed on bare metal.
Also a Windows Server 2012 (evaluation version) cloud VM provides pretty stable video/sound ( http://www.cloudbase.it/windows-cloud-images/ ).
************************
UPDATE 05/14/2015
************************ 
I've got sound working on a CentOS 7 VM (connection to the console via virt-manager) with a slightly updated patch from Y. Kawada, self.type set to "ich6". RDO Kilo was installed on a bare metal AIO testing host running Fedora 22. The same results have been obtained for RDO Kilo on CentOS 7.1. However, a connection to the Spice console with cut&paste and sound enabled may be obtained via spicy (remote connection).
Generated libvirt.xml
<domain type="kvm">
  <uuid>455877f2-7070-48a7-bb24-e0702be2fbc5</uuid>
  <name>instance-00000003</name>
  <memory>2097152</memory>
  <vcpu cpuset="0-7">1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.el7"/>
      <nova:name>CentOS7RSX05</nova:name>
      <nova:creationTime>2015-06-14 18:42:11</nova:creationTime>
      <nova:flavor name="m1.small">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="da79d2c66db747eab942bdbe20bb3f44">demo</nova:user>
        <nova:project uuid="8c9defac20a74633af4bb4773e45f11e">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="4a2d708c-7624-439f-9e7e-6e133062e23a"/>
    </nova:instance>
  </metadata>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Fedora Project</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">2015.1.0-3.el7</entry>
      <entry name="serial">b3fae7c3-10bd-455b-88b7-95e586342203</entry>
      <entry name="uuid">455877f2-7070-48a7-bb24-e0702be2fbc5</entry>
    </system>
  </sysinfo>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
    <smbios mode="sysinfo"/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="hpet" present="no"/>
  </clock>
  <cpu mode="host-model" match="exact">
    <topology sockets="1" cores="1" threads="1"/>
  </cpu>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/disk"/>
      <target bus="virtio" dev="vda"/>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:87:4b:29"/>
      <model type="virtio"/>
      <source bridge="qbr8ce9ae7b-f0"/>
      <target dev="tap8ce9ae7b-f0"/>
    </interface>
    <serial type="file">
      <source path="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/console.log"/>
    </serial>
    <serial type="pty"/>
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
    </channel>
    <graphics type="spice" autoport="yes" keymap="en-us" listen="0.0.0.0   "/>
    <video>
      <model type="qxl"/>
    </video>
    <sound model="ich6"/>
    <memballoon model="virtio">
      <stats period="10"/>
    </memballoon>
  </devices>
</domain>

  

*****************
END UPDATE
*****************

The post follows up http://lxer.com/module/newswire/view/214893/index.html
The most recent `yum update` on F22 significantly improved network performance on cloud VMs (L2). Watching movies on a cloud F22 VM (with MATE Desktop installed and functioning pretty smoothly) but without sound refreshes old Spice memories; view https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=913607


# dnf -y install spice-html5 ( installed on Controller && Compute)
# dnf -y install  openstack-nova-spicehtml5proxy (Compute Node)
# rpm -qa | grep openstack-nova-spicehtml5proxy
openstack-nova-spicehtml5proxy-2015.1.0-3.fc23.noarch

*********************************************************************** 
Update /etc/nova/nova.conf on Controller && Compute Node as follows :-
***********************************************************************

[DEFAULT]

. . . . .
web=/usr/share/spice-html5 
. . . . . .
spicehtml5proxy_host=0.0.0.0  (only Compute)
spicehtml5proxy_port=6082     (only Compute)
. . . . . . .
# Disable VNC
vnc_enabled=false
. . . . . . .
[spice]
# Compute Node Management IP 192.169.142.137

html5proxy_base_url=http://192.169.142.137:6082/spice_auto.html
server_proxyclient_address=127.0.0.1 ( only  Compute )
server_listen=0.0.0.0 ( only  Compute )
enabled=true
agent_enabled=true
keymap=en-us

:wq
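
If editing nova.conf by hand is inconvenient, the same keys can be set non-interactively (a sketch, assuming the crudini utility is installed; apply the Compute-only keys on the Compute Node only) :-

# crudini --set /etc/nova/nova.conf DEFAULT web /usr/share/spice-html5
# crudini --set /etc/nova/nova.conf DEFAULT vnc_enabled false
# crudini --set /etc/nova/nova.conf spice html5proxy_base_url http://192.169.142.137:6082/spice_auto.html
# crudini --set /etc/nova/nova.conf spice enabled true
# crudini --set /etc/nova/nova.conf spice agent_enabled true
# crudini --set /etc/nova/nova.conf spice keymap en-us
# Compute Node only :-
# crudini --set /etc/nova/nova.conf DEFAULT spicehtml5proxy_host 0.0.0.0
# crudini --set /etc/nova/nova.conf DEFAULT spicehtml5proxy_port 6082
# crudini --set /etc/nova/nova.conf spice server_proxyclient_address 127.0.0.1
# crudini --set /etc/nova/nova.conf spice server_listen 0.0.0.0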


# service httpd restart ( on Controller )

Next actions to be performed on Compute Node

# service openstack-nova-compute restart
# service openstack-nova-spicehtml5proxy start
# systemctl enable openstack-nova-spicehtml5proxy

  

On Controller

[root@ip-192-169-142-127 ~(keystone_admin)]# nova list --all-tenants

+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| ID                                   | Name      | Tenant ID                        | Status  | Task State | Power State | Networks                         |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| 6c8ef008-e8e0-4f1c-af17-b5f846f8b2d9 | CirrOSDev | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.11, 172.24.4.228 |
| cfd735ea-d9a8-4c4e-9a77-03035f01d443 | VF22DEVS  | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | ACTIVE  | -          | Running     | demo_net=50.0.0.14, 172.24.4.231 |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova get-spice-console cfd735ea-d9a8-4c4e-9a77-03035f01d443  spice-html5

+-------------+----------------------------------------------------------------------------------------+
| Type        | Url                                                                                    |
+-------------+----------------------------------------------------------------------------------------+
| spice-html5 | http://192.169.142.137:6082/spice_auto.html?token=24fb65c7-e7e9-4727-bad3-ba7c2c29f7f4 |
+-------------+----------------------------------------------------------------------------------------+

  

   

     Session run by virt-manager on the Virtualization Host (F22);
   connection to Compute Node 192.169.142.137 has been activated

  

 Active VM features :-


Actually, not many Spice benefits are enabled, just QXL video mode

[root@fedora22wks 2b75c461-fbe0-4527-a031-08d2e729db91]# pwd
/var/lib/nova/instances/2b75c461-fbe0-4527-a031-08d2e729db91

[root@fedora22wks 2b75c461-fbe0-4527-a031-08d2e729db91]# cat libvirt.xml

<domain type="kvm">
  <uuid>2b75c461-fbe0-4527-a031-08d2e729db91</uuid>
  <name>instance-00000003</name>
  <memory>2097152</memory>
  <vcpu cpuset="0-3">1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.fc23"/>
      <nova:name>VF22Devs</nova:name>
      <nova:creationTime>2015-06-06 16:50:07</nova:creationTime>
      <nova:flavor name="m1.small">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="6a89f1e00f554e37b3c288f20daa34ec">demo</nova:user>
        <nova:project uuid="22cd2b8ca101493ba621c1656141cea6">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="19c62e6f-527e-4e4a-b84a-c92f8caa7334"/>
    </nova:instance>
  </metadata>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Fedora Project</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">2015.1.0-3.fc23</entry>
      <entry name="serial">75cbcf76-d9ef-479e-8f2e-99b89adfc667</entry>
      <entry name="uuid">2b75c461-fbe0-4527-a031-08d2e729db91</entry>
    </system>
  </sysinfo>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
    <smbios mode="sysinfo"/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="hpet" present="no"/>
  </clock>
  <cpu mode="host-model" match="exact">
    <topology sockets="1" cores="1" threads="1"/>
  </cpu>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/2b75c461-fbe0-4527-a031-08d2e729db91/disk"/>
      <target bus="virtio" dev="vda"/>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:20:b9:4f"/>
      <model type="virtio"/>
      <source bridge="qbr8af1434b-25"/>
      <target dev="tap8af1434b-25"/>
    </interface>
    <serial type="file">
      <source path="/var/lib/nova/instances/2b75c461-fbe0-4527-a031-08d2e729db91/console.log"/>
    </serial>
    <serial type="pty"/>
    <channel type="pty">
      <target type="virtio" name="com.redhat.spice.0"/>
    </channel>
    <graphics type="spice" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
    <video>
      <model type="qxl"/>
    </video>
    <memballoon model="virtio">
      <stats period="10"/>
    </memballoon>
  </devices>
</domain>



   References
   1.  http://blog.felipe-alfaro.com/2014/05/13/html5-spice-console-in-openstack/
   2.  https://www.rdoproject.org/Neutron_with_existing_external_network

Friday, May 29, 2015

RDO Kilo Set up for three F22 VM Nodes Controller&Network&Compute (ML2&OVS&VXLAN)

************************
UPDATE 07/02/2015
************************
  During the last month the procedure of the RDO Kilo install has been significantly changed.
View details in "Switching to Dashboard Spice Console in RDO Kilo on Fedora 22" above. Patching per Javier Pena is no longer required. Now the mentioned install
requires `dnf --enablerepo=rawhide update openstack-packstack`.
View also https://www.redhat.com/archives/rdo-list/2015-July/msg00002.html ,
the section about "Switching to Dashboard Spice in RDO Kilo on Fedora 22".
  The most recent version (as of the time of writing) of openstack-packstack in rawhide is 2015.1 Release 0.8.dev1589.g1d6372f.fc23
http://arm.koji.fedoraproject.org/koji/buildinfo?buildID=294991
****************
END UPDATE
****************
     Following below is a brief instruction for a three node deployment test (Controller && Network && Compute) across Fedora 22 VMs for RDO Kilo, which was performed on a Fedora 22 host with QEMU/KVM/Libvirt Hypervisor (16 GB RAM, Intel Core i7-4771 Haswell CPU, ASUS Z97-P).
    Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, VTEP's and external subnets), and the Compute Node VM with two VNICs (management and VTEP's subnets).

SELinux is converted to permissive mode on all deployment nodes
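
A minimal sketch of switching a node to permissive mode (run on every deployment node; setenforce affects the running system, the sed makes it persistent across reboots) :-

# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config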

Actually, a straightforward install of RDO Kilo on F22 crashes due to a relatively simple puppet mistake. A workaround for this issue was recently suggested by Javier Pena.
1. Manually switch to testing repo after :-
   yum install -y https://rdoproject.org/repos/rdo-release.rpm
2.Then :-  yum install -y openstack-packstack

3. Start packstack for multinode deployment as normal to get the files that require updates.

After the first packstack crash, update /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb to include "22" (in a quite obvious place) on all deployment nodes. Restart the packstack multi-node deployment.
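
The edit itself is a one-liner; the sketch below assumes the Fedora release list in the defaultfor statement of systemd.rb currently ends with "21" (the exact contents of the file may differ, so locate the spot with grep and verify before editing) :-

# grep -n '"21"' /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb
# sed -i 's/"21"\]/"21", "22"]/' /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb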

Expect one more packstack crash, then respond :-
   [root@fedora22wks ~]# systemctl start target
   [root@fedora22wks ~]# systemctl enable target
and restart `packstack --answer-file=./answer3Node.txt`

I avoid using the default libvirt subnet 192.168.122.0/24 for any purposes related
to the VMs serving as RDO Kilo nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.
 

Three Libvirt networks created

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>

# cat public.xml
<network>
   <name>public</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>

# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>
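
The three networks are then defined and started with virsh, e.g. for "openstackvms" (repeat for "public" and "vteps") :-

# virsh net-define openstackvms.xml
# virsh net-start openstackvms
# virsh net-autostart openstackvms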

# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes


*********************************************************************************
1. The first Libvirt subnet "openstackvms" serves as the management network.
All 3 VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet "public" serves for simulating the external network. The Network Node is attached to "public"; later on, the "eth2" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via bridge virbr2 (172.24.4.225) this Libvirt subnet provides VMs running on the Compute Node with access to the Internet, since it matches the external network created by the packstack installation, 172.24.4.224/28.

  


*************************************************
On Hypervisor Host ( Fedora 22)
*************************************************
# iptables -S -t nat 
. . . . . .
-A POSTROUTING -s 172.24.4.224/28 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -j MASQUERADE
. . . . . .
***********************************************************************************
3. The third Libvirt subnet "vteps" serves for VTEP endpoint simulation. The Network and Compute Node VMs are attached to this subnet.
********************************************************************************


************************************
Answer-file - answer3Node.txt
************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer3Node.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
# Here 2 options available
# CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4


**********************************************************************************
Upon packstack completion, on the Network Node create the following files,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.232"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no


[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot

*************************************************
General Three node RDO Kilo system layout
*************************************************



***********************
 Controller Node
***********************
[root@ip-192-169-142-127 neutron(keystone_admin)]# cat /etc/neutron/plugins/ml2/ml2_conf.ini| grep -v ^# | grep -v ^$
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True

   


   Network Node


*********************
Network Node
*********************
[root@ip-192-169-142-147 openvswitch(keystone_admin)]# cat ovs_neutron_plugin.ini | grep -v ^$| grep -v ^#
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver



********************
Compute Node
*******************
[root@ip-192-169-142-137 openvswitch(keystone_admin)]# cat ovs_neutron_plugin.ini | grep -v ^$| grep -v ^#
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.137
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

   


   For some reason virt-manager doesn't allow setting up a remote connection to a Spice
   session running locally on the F22 Virtualization Host 192.168.1.95.

   So from a remote Fedora host run :-
    
  # ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.95
    # ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.95
  # ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.95

  Then spicy, installed on the remote host, would connect

   1)  to VM 192.169.142.127
        $ spicy -h localhost -p 5902  
   2)  to VM 192.169.142.147
        $ spicy -h localhost -p 5901
   3) to VM 192.169.142.137
        $ spicy -h localhost -p 5900
   


   Dashboard snapshots

  
  
  


Wednesday, May 27, 2015

How VMs access metadata via qrouter-namespace in Openstack Kilo

This is actually an update, for Neutron on Kilo, of the original blog entry
http://techbackground.blogspot.ie/2013/06/metadata-via-quantum-router.html
which covered the Quantum implementation on Grizzly.
From my standpoint, the way VMs launched via Nova access the nova-api metadata service (and get a proper response from it) through Neutron causes a lot of problems, mostly due to a lack of understanding of the core concepts.

Neutron proxies metadata requests to Nova adding HTTP headers which Nova uses to identify the source instance. Neutron actually uses two proxies to do this: a namespace proxy and a metadata agent. This post shows how a metadata request gets from an instance to the Nova metadata service via a namespace proxy running in a Neutron router.

   


    Here both services openstack-nova-api && neutron-server are running on the Controller 192.169.142.127.

[root@ip-192-169-142-127 ~(keystone_admin)]# systemctl | grep nova-api
openstack-nova-api.service  loaded active running   OpenStack Nova API Server

[root@ip-192-169-142-127 ~(keystone_admin)]# systemctl | grep neutron-server
neutron-server.service         loaded active running   OpenStack Neutron Server

Regarding architecture in general,please,view http://lxer.com/module/newswire/view/214009/index.html


*************************************
1.Instance makes request
*************************************
[root@vf22rls ~]# curl http://169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups

[root@vf22rls ~]# ip -4 address show dev eth0
2: eth0: mtu 1400 qdisc fq_codel state UP group default qlen 1000
    inet 50.0.0.15/24 brd 50.0.0.255 scope global dynamic eth0
       valid_lft 85770sec preferred_lft 85770sec

[root@vf22rls ~]#  ip route
default via 50.0.0.1 dev eth0  proto static  metric 100
50.0.0.0/24 dev eth0  proto kernel  scope link  src 50.0.0.15  metric 100


******************************************************************************
2. Namespace proxy receives the request. The default gateway 50.0.0.1 exists within a Neutron router namespace on the Network Node. The neutron-l3-agent started a namespace proxy in this namespace and added some iptables rules to redirect metadata requests to it. There are no special routes, so the request goes out the default gateway; of course, a Neutron router needs to have an interface on the subnet.
*******************************************************************************
Network Node 192.169.142.147
**********************************
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
qdhcp-1bd1f3b8-8e4e-4193-8af0-023f0be4a0fb
qrouter-79801567-a0b5-4780-bfae-ac00e185a148

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-1bd1f3b8-8e4e-4193-8af0-023f0be4a0fb route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         50.0.0.1        0.0.0.0         UG    0      0        0 tapd6da9bb8-0e
50.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 tapd6da9bb8-0e

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-list
+--------------------------------------+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name      | external_gateway_info                                                                                                                                                                    | distributed | ha    |
+--------------------------------------+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| 79801567-a0b5-4780-bfae-ac00e185a148 | RouerDemo | {"network_id": "1faee6ae-faea-4775-9c4e-abbf22c5815c", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "35262e52-e288-4244-b107-dd093a2254d5", "ip_address": "172.24.4.227"}]} | False       | False |
+--------------------------------------+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-79801567-a0b5-4780-bfae-ac00e185a148 ifconfig
lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-1feb35d8-b6: flags=4163  mtu 1500
        inet 172.24.4.227  netmask 255.255.255.240  broadcast 172.24.4.239
        inet6 fe80::f816:3eff:fe7b:7be0  prefixlen 64  scopeid 0x20
        ether fa:16:3e:7b:7b:e0  txqueuelen 0  (Ethernet)
        RX packets 868209  bytes 1181713676 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 413610  bytes 32594119 (31.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-6b8bf870-d4: flags=4163  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:feb3:30bf  prefixlen 64  scopeid 0x20
        ether fa:16:3e:b3:30:bf  txqueuelen 0  (Ethernet)
        RX packets 414032  bytes 32641578 (31.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 868416  bytes 1181753564 (1.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-79801567-a0b5-4780-bfae-ac00e185a148 iptables-save| grep 9697
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-79801567-a0b5-4780-bfae-ac00e185a148  netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3210/python2       

[root@ip-192-169-142-147 ~(keystone_admin)]# ps -f --pid 3210 | fold -s -w 82
UID        PID  PPID  C STIME TTY          TIME CMD
neutron   3210     1  0 08:14 ?        00:00:00 /usr/bin/python2
/bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/79801567-a0b5-4780-bfae-ac00e185a148.pid
 --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--router_id=79801567-a0b5-4780-bfae-ac00e185a148 --state_path=/var/lib/neutron
--metadata_port=9697 --metadata_proxy_user=990 --metadata_proxy_group=988
--verbose
--log-file=neutron-ns-metadata-proxy-79801567-a0b5-4780-bfae-ac00e185a148.log
--log-dir=/var/log/neutron



The namespace proxy adds two HTTP headers to the request:
    X-Forwarded-For: with the instance's IP address
    X-Neutron-Router-ID: with the uuid of the Neutron router
and proxies it to a Unix domain socket with name /var/lib/neutron/metadata_proxy.
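
The request the metadata agent sees on that socket can be reproduced by hand (a sketch, assuming curl 7.40+ with --unix-socket support; the IP and router id are taken from the outputs above) :-

# curl --unix-socket /var/lib/neutron/metadata_proxy \
   -H 'X-Forwarded-For: 50.0.0.15' \
   -H 'X-Neutron-Router-ID: 79801567-a0b5-4780-bfae-ac00e185a148' \
   http://169.254.169.254/latest/meta-data/instance-id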

***********************************************************************************
3. Metadata agent receives request and queries the Neutron service
The metadata agent listens on this Unix socket. It is a normal
Linux service that runs in the main operating system IP namespace,
and so it is able to reach the Neutron and Nova metadata services.
Its configuration file has all the information required to do so.
***********************************************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# netstat -lxp | grep metadata
unix  2      [ ACC ]     STREAM     LISTENING     36208    1291/python2         /var/lib/neutron/metadata_proxy

[root@ip-192-169-142-147 ~(keystone_admin)]# ps -f --pid 1291 | fold -w 80 -s
UID        PID  PPID  C STIME TTY          TIME CMD
neutron   1291     1  0 08:12 ?        00:00:06 /usr/bin/python2
/usr/bin/neutron-metadata-agent --config-file
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/metadata_agent.ini --config-dir
/etc/neutron/conf.d/neutron-metadata-agent --log-file
/var/log/neutron/metadata-agent.log

[root@ip-192-169-142-147 ~(keystone_admin)]# lsof /var/lib/neutron/metadata_proxy
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
COMMAND    PID    USER   FD   TYPE             DEVICE SIZE/OFF  NODE NAME
neutron-m 1291 neutron    5u  unix 0xffff8801375ecb40      0t0 36208 /var/lib/neutron/metadata_proxy
neutron-m 2764 neutron    5u  unix 0xffff8801375ecb40      0t0 36208 /var/lib/neutron/metadata_proxy
neutron-m 2765 neutron    5u  unix 0xffff8801375ecb40      0t0 36208 /var/lib/neutron/metadata_proxy


[root@ip-192-169-142-147 ~(keystone_admin)]# grep -v '^#\|^\s*$' /etc/neutron/metadata_agent.ini
[DEFAULT]
debug = False
auth_url = http://192.169.142.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
nova_metadata_protocol = http
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =2
metadata_backlog = 4096
cache_url = memory://?default_ttl=5

It reads the X-Forwarded-For and X-Neutron-Router-ID headers in the request and queries the Neutron service to find the ID of the instance that created the request.
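
Roughly the same lookup can be done from the CLI, e.g. finding the port (and hence the instance) that owns the source IP :-

# neutron port-list --fixed-ips ip_address=50.0.0.15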

***********************************************************************************
4. Metadata agent proxies the request to the Nova metadata service
It then adds these headers:
    X-Instance-ID: the instance ID returned from Neutron
    X-Instance-ID-Signature: instance ID signed with the shared-secret
    X-Forwarded-For: the instance's IP address
and proxies the request to the Nova metadata service.

5. Nova metadata service receives request
The metadata service was started by nova-api. The handler checks the X-Instance-ID-Signature with the shared key, looks up the data and returns the response which travels back via the two proxies to the instance.
************************************************************************************
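
The signature check is a plain HMAC-SHA256 of the instance id keyed with metadata_proxy_shared_secret, so it can be verified from the shell (a sketch; the secret value comes from metadata_agent.ini above and nova.conf below, and $INSTANCE_ID is a placeholder for the X-Instance-ID header value) :-

# INSTANCE_ID=<value of the X-Instance-ID header>
# echo -n "$INSTANCE_ID" | openssl dgst -sha256 -hmac a965cd23ed2f4502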


*****************************
Controller 192.169.142.127
*****************************

[root@ip-192-169-142-127 ~(keystone_admin)]# grep metadata /etc/nova/nova.conf | grep -v ^# | grep -v ^$
enabled_apis=ec2,osapi_compute,metadata
metadata_listen=0.0.0.0
metadata_workers=2
metadata_host=192.169.142.127
service_metadata_proxy=True
metadata_proxy_shared_secret=a965cd23ed2f4502




[root@ip-192-169-142-127 ~(keystone_admin)]#  grep metadata /var/log/nova/nova-api.log | tail -15
2015-05-27 10:23:25.232 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/local-ipv4 HTTP/1.1" status: 200 len: 125 time: 0.0006239
2015-05-27 10:23:25.271 3986 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/reservation-id HTTP/1.1" status: 200 len: 127 time: 0.0006211
2015-05-27 10:23:25.309 3986 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/local-hostname HTTP/1.1" status: 200 len: 134 time: 0.0006039
2015-05-27 10:23:25.348 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/security-groups HTTP/1.1" status: 200 len: 116 time: 0.0006092
2015-05-27 10:23:25.386 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/ami-launch-index HTTP/1.1" status: 200 len: 117 time: 0.0006170
2015-05-27 10:23:25.424 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/ramdisk-id HTTP/1.1" status: 200 len: 120 time: 0.0006149
2015-05-27 10:23:25.463 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/public-hostname HTTP/1.1" status: 200 len: 134 time: 0.0006301
2015-05-27 10:23:25.502 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/hostname HTTP/1.1" status: 200 len: 134 time: 0.0006180
2015-05-27 10:23:25.541 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/ami-id HTTP/1.1" status: 200 len: 129 time: 0.0006082
2015-05-27 10:23:25.581 3986 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/kernel-id HTTP/1.1" status: 200 len: 120 time: 0.0006080
2015-05-27 10:23:25.618 3986 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/instance-action HTTP/1.1" status: 200 len: 120 time: 0.0006869
2015-05-27 10:23:25.656 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200 len: 129 time: 0.0006471
2015-05-27 10:23:25.696 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/ami-manifest-path HTTP/1.1" status: 200 len: 121 time: 0.0007231
2015-05-27 10:23:25.735 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/instance-type HTTP/1.1" status: 200 len: 124 time: 0.0006821
2015-05-27 10:23:25.775 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/instance-id HTTP/1.1" status: 200 len: 127 time: 0.0007501

Monday, May 25, 2015

Setup Nova-Docker Driver with RDO Kilo on Fedora 21

    Set up RDO Kilo on Fedora 21 per https://www.rdoproject.org/Quickstart
The next step is to upgrade several python packages via Fedora Rawhide, build the Nova-Docker Driver from the stable/kilo branch of http://github.com/stackforge/nova-docker.git, and switch openstack-nova-compute to run the driver that was built.

 # yum -y install git docker-io python-six  fedora-repos-rawhide
 # yum --enablerepo=rawhide install  python-pip python-pbr systemd
 # reboot
 **********************
 Next
 **********************
 # chmod 666 /var/run/docker.sock
 # yum -y install gcc python-devel
 # git clone http://github.com/stackforge/nova-docker.git
 # cd nova-docker
 # git checkout -b kilo origin/stable/kilo
 # git branch -v -a
 * kilo                           d556444 Do not enable swift/ceilometer/sahara
  master                         d556444 Do not enable swift/ceilometer/sahara
  remotes/origin/HEAD            -> origin/master
  remotes/origin/master          d556444 Do not enable swift/ceilometer/sahara
  remotes/origin/stable/icehouse 9045ca4 Fix lockpath for tests
  remotes/origin/stable/juno     b724e65 Fix tests on stable/juno
  remotes/origin/stable/kilo     d556444 Do not enable swift/ceilometer/sahara

 # python setup.py install
 # systemctl start docker
 # systemctl enable docker
 # chmod 666  /var/run/docker.sock
 # mkdir /etc/nova/rootwrap.d

******************************
Update nova.conf
******************************
vi /etc/nova/nova.conf
set "compute_driver = novadocker.virt.docker.DockerDriver"

************************************************
Next, create the docker.filters file:
************************************************
$ vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Add the following line to /etc/glance/glance-api.conf :-
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker

Restart Services
************************
# systemctl restart openstack-nova-compute
# systemctl status openstack-nova-compute
# systemctl restart openstack-glance-api

***************************************************
 For docker pull && docker save
 Uploading docker image to glance
***************************************************
 # .  keystonerc_admin 
 #  docker pull rastasheep/ubuntu-sshd:14.04
 #  docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

  
****************************************************************
To enable security rules and launch NovaDocker Container :-
****************************************************************

#  . keystonerc_demo 

# neutron security-group-rule-create --protocol icmp \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

# neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

# neutron security-group-rule-create --protocol tcp \
  --port-range-min 80 --port-range-max 80 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default


# neutron security-group-rule-create --protocol tcp \
  --port-range-min 80 --port-range-max 4848 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default


# neutron security-group-rule-create --protocol tcp \
  --port-range-min 80 --port-range-max 8080 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default



# neutron security-group-rule-create --protocol tcp \
  --port-range-min 80 --port-range-max 8181  \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default


******************************************************************
Launch new instance via uploaded image :-
******************************************************************


#  . keystonerc_demo  

#   nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
    --nic net-id=private-net-id UbuntuDocker


or via the dashboard.
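
A sketch of looking up the net-id first (assuming the tenant network is named demo_net, as in the listings earlier in this post; adjust the name to your own network) :-

# NETID=$(neutron net-list | awk '/ demo_net / {print $2}')
# nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
    --nic net-id=$NETID UbuntuDocker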

*****************************************************
Before reboot, update /etc/rc.d/rc.local as follows :-
*****************************************************
[root@fedora21wks ~(keystone_admin)]# cat  /etc/rc.d/rc.local
#!/bin/bash
chmod 666 /var/run/docker.sock ;
systemctl restart  openstack-nova-compute



[root@fedora21wks ~(keystone_admin)]# chmod a+x   /etc/rc.d/rc.local
 

   Starting the Nova-Docker Tomcat container, floating IP 192.168.1.158

  
Starting the Nova-Docker GlassFish 4.1 container,
floating IP 192.168.1.159


  
*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/01_start-sshd.sh...
No SSH host key available. Generating one...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Creating SSH2 ED25519 key; this may take some time ...
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenerated by Boris just in case !
SSHD started !
*** Running /etc/my_init.d/database.sh...
Derby database started !
*** Running /etc/my_init.d/run.sh...
Bad Network Configuration.  DNS can not resolve the hostname: 
java.net.UnknownHostException: instance-00000009: instance-00000009: unknown error
Waiting for domain1 to start ..............
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password> 
Enter the new admin password> 
Enter the new admin password again> 
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name>  admin
Enter admin password for user "admin"> 
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:

     admin:0f2HOP1vCiDd

Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop 
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false
Bad Network Configuration.  DNS can not resolve the hostname: 
java.net.UnknownHostException: instance-00000009: instance-00000009: unknown error 
 
 
[root@fedora21wks ~(keystone_admin)]# ssh root@192.168.1.159
root@192.168.1.159's password: 
Last login: Tue May 26 12:38:48 2015 from 192.168.1.75
root@instance-00000009:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 12:18 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
root        96     1  0 12:18 ?        00:00:00 /bin/bash /etc/my_init.d/run.sh
root       100     1  0 12:18 ?        00:00:00 /usr/sbin/sshd
root       162     1  0 12:18 ?        00:00:03 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/op
root       426    96  0 12:18 ?        00:00:01 java -jar /opt/glassfish4/bin/../glassfish/lib/cl
root       443   426 12 12:18 ?        00:02:43 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/gla
root      1110   100  0 12:39 ?        00:00:00 sshd: root@pts/0 
root      1112  1110  0 12:39 pts/0    00:00:00 -bash
root      1123  1112  0 12:39 pts/0    00:00:00 ps -ef
root@instance-00000009:~# ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8479 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8479 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1544705 (1.5 MB)  TX bytes:1544705 (1.5 MB)

ns292e45a2-ad Link encap:Ethernet  HWaddr fa:16:3e:b9:a8:4e  
          inet addr:50.0.0.19  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:feb9:a84e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17453 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9984 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:28521655 (28.5 MB)  TX bytes:5336887 (5.3 MB)

root@instance-00000009:~# 

**************************************************
Running NovaDocker's containers (instances) :- 
**************************************************
 
[root@fedora21wks ~(keystone_admin)]# docker ps
CONTAINER ID        IMAGE                                      COMMAND                CREATED             STATUS              PORTS               NAMES
c5c4594da13d        boris/docker-glassfish41:latest            "/sbin/my_init"        26 minutes ago      Up 26 minutes                           nova-d751e04c-8f9b-4171-988a-cd57fb37574c   
a58781eba98b        tutum/tomcat:latest                        "/run.sh"              4 hours ago         Up 4 hours                              nova-3024f190-8dbb-4faf-b2b0-e627d6faba97   
cd1418845931        eugeneware/docker-wordpress-nginx:latest   "/bin/bash /start.sh   5 hours ago         Up 5 hours                              nova-c0211200-eee9-431e-aa64-db5cdcadad66   
700fe66add76        rastasheep/ubuntu-sshd:14.04               "/usr/sbin/sshd -D"    7 hours ago         Up 7 hours                              nova-9d0ebc1d-5bfa-44d7-990d-957d7fec5ea2   
 

Sunday, May 24, 2015

RDO Kilo Set up for Two VM Nodes (Controller&&Network+Compute) ML2&OVS&VXLAN on Fedora 21

Following below is a brief instruction for a two node deployment test (Controller&&Network + Compute Node) for RDO Kilo, which was performed on a Fedora 21 host with a KVM/Libvirt Hypervisor. Two VMs (4 GB RAM, 2 VCPUS) have been set up: the Controller&&Network VM with two VNICs (management subnet, VTEP's subnet) and the Compute Node VM with two VNICs (management, VTEP's subnets). The management network is finally converted to public. SELinux should be set to permissive mode (vs packstack deployments on CentOS 7.1).
*********************************
Two Libvirt networks created
*********************************
# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>
**********************************************************************
Libvirt's default network 192.168.122.0/24 was used as the VTEP's subnet
**********************************************************************
Follow https://www.rdoproject.org/Quickstart  until packstack startup.
You might have to switch to the rdo-testing.repo manually (/etc/yum.repos.d):
just update "enabled=1 or 0" in the corresponding *.repo files. In any case,
make sure that the release and testing repos are in the expected state,
to avoid unpredictable consequences.
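
A sketch of that repo switch (assuming the files are named rdo-release.repo and rdo-testing.repo under /etc/yum.repos.d; double-check the names and the enabled flags before running packstack) :-

# sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/rdo-release.repo
# sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/rdo-testing.repo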

**********************
AnswerTwoNode.txt
**********************
[root@ip-192-169-142-127 ~(keystone_admin)]# cat answerTwoNode.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
# Here 2 options available
# CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

******************
Then run :-
******************
# packstack --answer-file=./answerTwoNode.txt
**********************************************************************************
Upon packstack completion, create the following files on the Controller Node,
designed to convert the mgmt network into an external one
**********************************************************************************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no


[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Controller&&Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot ( Controller Node)

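Before running the status checks below, source the admin credentials file
that packstack writes on the Controller (standard packstack behavior) :-

# source /root/keystonerc_admin
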
*************************
Controller status
*************************
[root@ip-192-169-142-127 ~(keystone_admin)]# nova service-list
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-24T15:12:02.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-24T15:12:01.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-24T15:12:02.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-24T15:12:00.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-24T15:12:00.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 08dd042e-fa52-4b06-980f-16063ecd6a90 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 26a92f7c-d960-4c8c-8176-aec558b1fd43 | DHCP agent         | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| 4f5376af-a8f5-4359-8e53-1fabf885b3d2 | L3 agent           | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| a64d3787-8d9d-4b41-a4da-ea0b2b611491 | Open vSwitch agent | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f16a196a-a1ec-464a-875d-432a3dba182d | Metadata agent     | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
c5da4b6e-70a9-49c4-895c-7a4715b0bfce
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-147dd7b7-45"
            Interface "qg-147dd7b7-45"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port "tap672f6457-99"
            tag: 1
            Interface "tap672f6457-99"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qr-c53117c1-e2"
            tag: 1
            Interface "qr-c53117c1-e2"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-c0a87a89"
            Interface "vxlan-c0a87a89"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.3.1-git4750c96"
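
With the management network now acting as external (br-ex mapped to physnet1 via CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS), the external flat network may be created along the lines below. This is only a sketch: the names (public, public_subnet, RouterDemo) and the allocation pool are assumptions, to be adjusted per the RDO "Neutron with existing external network" guide :-

# source /root/keystonerc_admin
# neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external
# neutron subnet-create --name public_subnet --enable_dhcp=False \
  --allocation-pool start=192.169.142.150,end=192.169.142.200 \
  --gateway 192.169.142.1 public 192.169.142.0/24
# neutron router-gateway-set RouterDemo public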