[Original] Installing OpenStack Icehouse with RDO

    OpenStack releases a new version every six months. Icehouse is the latest release and, compared with Havana, adds more features and driver support. This document describes a deployment using the RDO scripts provided by Red Hat.
    RDO makes deployment fairly quick, but the yum repositories it relies on are hosted abroad, and installing directly from them often fails with RPM download errors. It is therefore recommended to mirror the relevant repositories locally before deploying and point DNS at the mirror. (Note that you cannot simply edit the URLs in the repo files: during a multi-node deployment RDO automatically installs the epel, foreman and other repo files, so there is no time to edit them by hand.)
    This document uses a two-node deployment. The first node, node01, acts as the control node (identity, networking, compute scheduling, Cinder, image service, etc.) plus a compute node; the second node, node02, is a pure compute node added for capacity. For the concepts involved, see the OpenStack website; they are not repeated here.

1. System environment
Two nodes:

node01.linuxfly.org
eth0: 192.168.48.213
eth1: 10.0.48.213

node02.linuxfly.org
eth0: 192.168.48.214
eth1: 10.0.48.214

eth0 serves as the management NIC and the external-network NIC; eth1 carries the GRE tunnels and is the NIC over which the two nodes exchange data.

2. Configure a local package mirror
A local yum mirror on 192.168.86.37 is used, adjusted as needed while the scripts run.
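The mirroring itself can be done with any recursive fetch tool. Below is a minimal sketch, not from the original setup: the destination path matches the DocumentRoot used later, the wget flags are one reasonable choice, and DRY_RUN=1 makes the helper print the command instead of downloading.

```shell
# Sketch: pull the RDO repositories into the web root that the
# rdo.fedorapeople.org vhost will serve. Path and wget flags are assumptions.
DRY_RUN=1
DEST=/var/www/html/root/repos.fedorapeople.org
mirror() {
    cmd="wget --mirror --no-parent --no-host-directories --directory-prefix=$DEST $1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"        # dry run: just show what would be executed
    else
        $cmd
    fi
}
mirror http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/
```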
Before installing, make sure rdo.fedorapeople.org resolves to 192.168.86.37:

[root@gd2-cloud-037 ~]# vi /var/named/fedorapeople.org.master.zone
$TTL 1D
@       IN SOA root.repos.fedorapeople.org. repos.fedorapeople.org. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      repos.fedorapeople.org.

repos           IN      A       192.168.86.37
rdo           IN      A       192.168.86.37

Test:

[root@node01 ~]# ping -c2 rdo.fedorapeople.org            
PING rdo.fedorapeople.org (192.168.86.37) 56(84) bytes of data.
64 bytes from gd2-cloud-037.vclound.com (192.168.86.37): icmp_seq=1 ttl=61 time=0.331 ms
64 bytes from gd2-cloud-037.vclound.com (192.168.86.37): icmp_seq=2 ttl=61 time=0.354 ms

--- rdo.fedorapeople.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.331/0.342/0.354/0.021 ms

The HTTP virtual-host configuration also needs to be updated:

[root@gd2-cloud-037 ~]# vi /etc/httpd/conf.d/yum_vhost.conf
<VirtualHost *:80>
    ServerAdmin webmaster@vclound.com
    DocumentRoot /var/www/html/root/repos.fedorapeople.org/repos
    ServerName rdo.fedorapeople.org
    ErrorLog logs/rdo.fedorapeople.org-error_log
    CustomLog logs/rdo.fedorapeople.org-access_log common
</VirtualHost>

Otherwise, the RDO install may fail with:

2014-05-23 18:18:07::INFO::shell::78::root:: [192.168.48.214] Executing script:
(rpm -q 'rdo-release-icehouse' || yum install -y --nogpg http://rdo.fedorapeople.org/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm) || true
2014-05-23 18:18:19::INFO::shell::78::root:: [192.168.48.214] Executing script:
yum-config-manager --enable openstack-icehouse
2014-05-23 18:18:19::ERROR::run_setup::892::root:: Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 887, in main
    _main(confFile)
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 574, in _main
    runSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 553, in runSequences
    controller.runAllSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 84, in runAllSequences
    sequence.run(self.CONF)
  File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 96, in run
    step.run(config=config)
  File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 43, in run
    raise SequenceError(str(ex))
SequenceError: Failed to set RDO repo on host 192.168.48.214:
RPM file seems to be installed, but appropriate repo file is probably missing in /etc/yum.repos.d/

2014-05-23 18:18:19::INFO::shell::78::root:: [192.168.48.213] Executing script:
rm -rf /var/tmp/packstack/0c97dceac80e41b081bc8316ae439d88
2014-05-23 18:18:19::INFO::shell::78::root:: [192.168.48.214] Executing script:
rm -rf /var/tmp/packstack/20a0e6a1d0ba4ab0a0c4490ba3dc9fce

Start the firewall and save its rules:

[root@node01 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination        

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        
[root@node01 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:  [  OK  ]

[root@node02 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination        

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        
[root@node02 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:  [  OK  ]

If this step is skipped, RDO may fail with:

ERROR : Error appeared during Puppet run: 192.168.48.214_prescript.pp
Error: Could not start Service[iptables]: Execution of '/sbin/service iptables start' returned 6:

3. Install the software

[root@node01 ~]# wget http://rdo.fedorapeople.org/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
[root@node01 ~]# rpm -ivh rdo-release-icehouse-3.noarch.rpm
[root@node01 ~]# cat /etc/yum.repos.d/rdo-release.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/
enabled=1
skip_if_unavailable=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98

Because the foreman repository has not yet been mirrored locally, add its hostname to /etc/hosts on both nodes:

[root@node01 ~]# echo '208.74.145.172 yum.theforeman.org' >> /etc/hosts

Otherwise, you will see:

http://yum.theforeman.org/releases/1.5/el6/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"

Install the RDO (packstack) script:

[root@node01 ~]# yum install -y openstack-packstack
Installed:
  openstack-packstack.noarch 0:2014.1.1-0.12.dev1068.el6  
  
Dependency Installed:
  openstack-packstack-puppet.noarch 0:2014.1.1-0.12.dev1068.el6     openstack-puppet-modules.noarch 0:2014.1-11.1.el6     ruby.x86_64 0:1.8.7.352-13.el6     ruby-irb.x86_64 0:1.8.7.352-13.el6     ruby-libs.x86_64 0:1.8.7.352-13.el6    
  ruby-rdoc.x86_64 0:1.8.7.352-13.el6                               rubygem-json.x86_64 0:1.5.5-1.el6                     rubygems.noarch 0:1.3.7-5.el6    

Complete!

Use /dev/sdb as the LVM backing store for Cinder:

[root@node01 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@node01 ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
[root@node01 ~]# vgdisplay
  --- Volume group ---
  VG Name               cinder-volumes
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       0 / 0  
  Free  PE / Size       25599 / 100.00 GiB
  VG UUID               YkbC1M-UJuf-WXKS-se8W-yoZx-Y8JU-cvj9fN

Generate the answer file:

[root@node01 ~]# packstack --gen-answer-file=openstack-icehouse-test-20140523.txt

Edit the answer file:
Confirm which services to install, along with the database passwords, login passwords and other settings for each service.

[root@node01 ~]# egrep -v '^$|^#' openstack-icehouse-test-20140523.txt
[general]
CONFIG_SSH_KEY=
CONFIG_MYSQL_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=y
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=192.168.86.37
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_VMWARE_BACKEND=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_MYSQL_HOST=192.168.48.213
CONFIG_MYSQL_USER=root
CONFIG_MYSQL_PW=e37cb47f36294ec1
CONFIG_AMQP_SERVER=rabbitmq
CONFIG_AMQP_HOST=192.168.48.213
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=18698078ad434bb2b6d630352b8cfcb1
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=f459146266d54cfb
CONFIG_KEYSTONE_HOST=192.168.48.213
CONFIG_KEYSTONE_DB_PW=e6406987528f4d86
CONFIG_KEYSTONE_ADMIN_TOKEN=6ee2056522fe45229463ad91cd0f9911
CONFIG_KEYSTONE_ADMIN_PW=linuxfly # the admin user's password
CONFIG_KEYSTONE_DEMO_PW=demo
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI
CONFIG_GLANCE_HOST=192.168.48.213
CONFIG_GLANCE_DB_PW=4962746651af47d1
CONFIG_GLANCE_KS_PW=ef3b37da000d430c
CONFIG_CINDER_HOST=192.168.48.213
CONFIG_CINDER_DB_PW=de717f5796ff4bbc
CONFIG_CINDER_KS_PW=c7560ce891bd4c98
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=no # the LVM volume group for cinder already exists, so it does not need to be created
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_NOVA_API_HOST=192.168.48.213
CONFIG_NOVA_CERT_HOST=192.168.48.213
CONFIG_NOVA_VNCPROXY_HOST=192.168.48.213
CONFIG_NOVA_COMPUTE_HOSTS=192.168.48.213,192.168.48.214 # addresses of the nodes providing compute resources
CONFIG_NOVA_CONDUCTOR_HOST=192.168.48.213
CONFIG_NOVA_DB_PW=ae2a88c5486e473f
CONFIG_NOVA_KS_PW=2cea90eab44e4d31
CONFIG_NOVA_SCHED_HOST=192.168.48.213
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_HOSTS=192.168.48.213
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_SERVER_HOST=192.168.48.213
CONFIG_NEUTRON_KS_PW=9163ba62da104edc
CONFIG_NEUTRON_DB_PW=c44e1326a3ae436a
CONFIG_NEUTRON_L3_HOSTS=192.168.48.213
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_DHCP_HOSTS=192.168.48.213
CONFIG_NEUTRON_LBAAS_HOSTS=
#CONFIG_NEUTRON_L2_PLUGIN=openvswitch # the default L2 plugin is openvswitch; changed to ml2
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_HOSTS=192.168.48.213
CONFIG_NEUTRON_METADATA_PW=f65edbe4677a4928
#CONFIG_NEUTRON_ML2_TYPE_DRIVERS=local # defaults to local; changed to gre, with related options below
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=gre
#CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=local
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=gre
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
#CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=100:1000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
CONFIG_NEUTRON_ML2_VNI_RANGES=
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
#CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
#CONFIG_NEUTRON_OVS_TUNNEL_RANGES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=100:1000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_OSCLIENT_HOST=192.168.48.213
CONFIG_HORIZON_HOST=192.168.48.213
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SWIFT_PROXY_HOSTS=192.168.48.213
CONFIG_SWIFT_KS_PW=3c978e1c9d8c4706
CONFIG_SWIFT_STORAGE_HOSTS=192.168.48.213
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=3da3ecfbf7784ad6
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=n # do not create the demo project
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_HOST=192.168.48.213
CONFIG_HEAT_DB_PW=8358dc2482164097
CONFIG_HEAT_AUTH_ENC_KEY=237cb003b0134395
CONFIG_HEAT_KS_PW=fded3b8f93a5463a
#CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CLOUDWATCH_INSTALL=y
#CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=y
CONFIG_HEAT_CLOUDWATCH_HOST=192.168.48.213
CONFIG_HEAT_CFN_HOST=192.168.48.213
CONFIG_CEILOMETER_HOST=192.168.48.213
CONFIG_CEILOMETER_SECRET=658f7e3e604b4bb9
CONFIG_CEILOMETER_KS_PW=8a413f3ab3e748ed
CONFIG_MONGODB_HOST=192.168.48.213
CONFIG_NAGIOS_HOST=192.168.48.213
CONFIG_NAGIOS_PW=ab42ae321e744f78
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_RH_PW=
CONFIG_RH_BETA_REPO=n
CONFIG_SATELLITE_URL=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
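The hand edits described in the comments above (ml2 instead of openvswitch, gre instead of local, and so on) can also be scripted. The set_opt helper below is illustrative and runs against a stand-in file; for real use, point it at openstack-icehouse-test-20140523.txt instead.

```shell
# Sketch: apply answer-file changes non-interactively. The heredoc is a
# stand-in for the real answer file.
f=$(mktemp)
cat > "$f" <<'EOF'
[general]
CONFIG_NEUTRON_L2_PLUGIN=openvswitch
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=local
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=local
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=local
EOF
# set_opt KEY VALUE: replace the whole "KEY=..." line in $f
set_opt() { sed -i "s|^$1=.*|$1=$2|" "$f"; }
set_opt CONFIG_NEUTRON_L2_PLUGIN ml2
set_opt CONFIG_NEUTRON_ML2_TYPE_DRIVERS gre
set_opt CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES gre
set_opt CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE gre
grep '^CONFIG_NEUTRON' "$f"
```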

The admin user's password is set here via CONFIG_KEYSTONE_ADMIN_PW=linuxfly. After the installation finishes, an environment file is placed in /root; source it to load these values and work from the command line (see the example at the end):

[root@node01 ~]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=linuxfly
export OS_AUTH_URL=http://192.168.48.213:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
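A quick way to confirm the file does what the clients need is to source it and inspect the environment. The copy written below is a stand-in; on the node you would simply source /root/keystonerc_admin.

```shell
# Sketch: source a keystonerc-style file and verify the OS_* variables land
# in the environment. The heredoc mirrors the generated keystonerc_admin.
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=linuxfly
export OS_AUTH_URL=http://192.168.48.213:5000/v2.0/
EOF
. "$rc"
echo "user=$OS_USERNAME auth=$OS_AUTH_URL"
rm -f "$rc"
```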

Run the RDO install:

[root@node01 ~]# packstack --answer-file=openstack-icehouse-test-20140523.txt
Welcome to Installer setup utility
Packstack changed given value  to required value /root/.ssh/id_rsa.pub

Installing:
Clean Up                                             [ DONE ]
Setting up ssh keys                                  [ DONE ]
Discovering hosts' details                           [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Installing time synchronization via NTP              [ DONE ]
Adding MySQL manifest entries                        [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Installing dependencies for Cinder                   [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Openstack Network-related Nova manifest entries[ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding Heat manifest entries                         [ DONE ]
Adding Heat Keystone manifest entries                [ DONE ]
Adding Heat CloudWatch API manifest entries          [ DONE ]
Adding Heat CloudFormation API manifest entries      [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding Nagios server manifest entries                [ DONE ]
Adding Nagios host manifest entries                  [ DONE ]
Adding post install manifest entries                 [ DONE ]
Preparing servers                                    [ DONE ]
Installing Dependencies                              [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.48.214_prescript.pp
Applying 192.168.48.213_prescript.pp
192.168.48.213_prescript.pp:                         [ DONE ]          
192.168.48.214_prescript.pp:                         [ DONE ]          
Applying 192.168.48.214_ntpd.pp
Applying 192.168.48.213_ntpd.pp
192.168.48.214_ntpd.pp:                              [ DONE ]    
192.168.48.213_ntpd.pp:                              [ DONE ]    
Applying 192.168.48.213_mysql.pp
Applying 192.168.48.213_amqp.pp
192.168.48.213_mysql.pp:                             [ DONE ]      
192.168.48.213_amqp.pp:                              [ DONE ]      
Applying 192.168.48.213_keystone.pp
Applying 192.168.48.213_glance.pp
Applying 192.168.48.213_cinder.pp
192.168.48.213_keystone.pp:                          [ DONE ]        
192.168.48.213_glance.pp:                            [ DONE ]        
192.168.48.213_cinder.pp:                            [ DONE ]        
Applying 192.168.48.213_api_nova.pp
192.168.48.213_api_nova.pp:                          [ DONE ]        
Applying 192.168.48.213_nova.pp
Applying 192.168.48.214_nova.pp
192.168.48.214_nova.pp:                              [ DONE ]    
192.168.48.213_nova.pp:                              [ DONE ]    
Applying 192.168.48.214_neutron.pp
Applying 192.168.48.213_neutron.pp
192.168.48.214_neutron.pp:                           [ DONE ]        
192.168.48.213_neutron.pp:                           [ DONE ]        
Applying 192.168.48.213_osclient.pp
Applying 192.168.48.213_horizon.pp
192.168.48.213_osclient.pp:                          [ DONE ]        
192.168.48.213_horizon.pp:                           [ DONE ]        
Applying 192.168.48.213_ring_swift.pp
192.168.48.213_ring_swift.pp:                        [ DONE ]          
Applying 192.168.48.213_swift.pp
Applying 192.168.48.213_heat.pp
192.168.48.213_swift.pp:                             [ DONE ]      
192.168.48.213_heat.pp:                              [ DONE ]      
Applying 192.168.48.213_heatcw.pp
Applying 192.168.48.213_heatcnf.pp
192.168.48.213_heatcw.pp:                            [ DONE ]        
192.168.48.213_heatcnf.pp:                           [ DONE ]        
Applying 192.168.48.213_mongodb.pp
192.168.48.213_mongodb.pp:                           [ DONE ]        
Applying 192.168.48.213_ceilometer.pp
Applying 192.168.48.213_nagios.pp
Applying 192.168.48.214_nagios_nrpe.pp
Applying 192.168.48.213_nagios_nrpe.pp
192.168.48.214_nagios_nrpe.pp:                       [ DONE ]            
192.168.48.213_ceilometer.pp:                        [ DONE ]            
192.168.48.213_nagios.pp:                            [ DONE ]            
192.168.48.213_nagios_nrpe.pp:                       [ DONE ]            
Applying 192.168.48.214_postscript.pp
Applying 192.168.48.213_postscript.pp
192.168.48.214_postscript.pp:                        [ DONE ]          
192.168.48.213_postscript.pp:                        [ DONE ]          
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

**** Installation completed successfully ******

Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.48.213. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.48.213/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://192.168.48.213/nagios username : nagiosadmin, password : ab42ae321e744f78
* The installation log file is available at: /var/tmp/packstack/20140524-002156-yBxGmU/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20140524-002156-yBxGmU/manifests

4. Check that OpenStack is running

[root@node01 ~]# ovs-vsctl show
c4b035f0-98e4-4868-9e14-094ca5a952f4
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a0030d6"
            Interface "gre-0a0030d6"
                type: gre
                options: {in_key=flow, local_ip="10.0.48.213", out_key=flow, remote_ip="10.0.48.214"}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "1.11.0"
[root@node02 ~]# ovs-vsctl show
6d99ed4b-2030-4e6f-bee0-9b782977b76e
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a0030d5"
            Interface "gre-0a0030d5"
                type: gre
                options: {in_key=flow, local_ip="10.0.48.214", out_key=flow, remote_ip="10.0.48.213"}
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.11.0"

5. Adjust related settings
1) Create the bridge used by the external network
If this bridge is not created, instances cannot reach the router gateway, the external network, or the metadata service, and l3-agent.log reports:

2014-05-24 01:43:42.096 3283 ERROR neutron.agent.l3_agent [req-5cbe551e-28e5-40d8-b3df-8def91cb5f81 None] The external network bridge 'br-ex' does not exist

Steps:

[root@node01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
IPV6INIT=no
MTU=1500
ONBOOT=yes
HWADDR=00:50:56:81:9a:e1
USERCTL=no
[root@node01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
BOOTPROTO=none
IPV6INIT=no
MTU=1500
NM_CONTROLLED=no
ONBOOT=yes
IPADDR=192.168.48.213
NETMASK=255.255.255.0
GATEWAY=192.168.48.1
DNS1=192.168.86.37
USERCTL=no

[root@node01 ~]# ovs-vsctl add-br br-ex; ovs-vsctl add-port br-ex eth0; service network restart
Then restart the neutron-l3-agent service:
[root@node01 ~]# /etc/init.d/neutron-l3-agent restart
Stopping neutron-l3-agent:                                 [  OK  ]
Starting neutron-l3-agent:                                 [  OK  ]
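The ifcfg split above follows a fixed pattern: the IP configuration moves to br-ex, and eth0 keeps only its device identity. A sketch of generating the pair from an existing ifcfg-eth0 follows; temp files stand in for /etc/sysconfig/network-scripts/*, so adapt the paths on a real node.

```shell
# Sketch: derive the ifcfg-eth0 / ifcfg-br-ex pair from the original eth0
# config. The heredoc is a stand-in for the pre-bridge ifcfg-eth0.
src=$(mktemp)
cat > "$src" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
HWADDR=00:50:56:81:9a:e1
IPADDR=192.168.48.213
NETMASK=255.255.255.0
GATEWAY=192.168.48.1
EOF
eth=$(mktemp); br=$(mktemp)
# eth0 keeps DEVICE/HWADDR/ONBOOT and loses its IP configuration
grep -E '^(DEVICE|HWADDR|ONBOOT)=' "$src" > "$eth"
echo 'BOOTPROTO=none' >> "$eth"
# br-ex inherits the IP configuration
{ echo 'DEVICE=br-ex'; echo 'ONBOOT=yes'; echo 'NM_CONTROLLED=no';
  grep -E '^(IPADDR|NETMASK|GATEWAY)=' "$src"; } > "$br"
cat "$br"
```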

2) Use memcached as the token backend

[root@node01 ~]# vi /etc/keystone/keystone.conf
# Controls the token construction, validation, and revocation
# operations. Core providers are
# "keystone.token.providers.[pki|uuid].Provider". (string
# value)
#provider=
#provider=keystone.token.providers.pki.Provider
provider=keystone.token.providers.uuid.Provider

# Keystone Token persistence backend driver. (string value)
#driver=keystone.token.backends.sql.Token
#driver=keystone.token.backends.sql.Token
driver=keystone.token.backends.memcache.Token
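The same keystone.conf change can be scripted. The sed commands below run against a stand-in copy of the [token] section; on the node, point them at /etc/keystone/keystone.conf (openstack-config or crudini would also work, if installed).

```shell
# Sketch: switch the token provider to uuid and the backend to memcache.
# The heredoc is a stand-in for the [token] section of keystone.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[token]
#provider=
driver=keystone.token.backends.sql.Token
EOF
sed -i 's|^#provider=$|provider=keystone.token.providers.uuid.Provider|' "$conf"
sed -i 's|^driver=keystone.token.backends.sql.Token$|driver=keystone.token.backends.memcache.Token|' "$conf"
cat "$conf"
```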

Restart the memcached service:

[root@node01 ~]# /etc/init.d/memcached restart
Stopping memcached:                                        [  OK  ]
Starting memcached:                                        [  OK  ]

Restart the related services:

[root@node01 ~]# for i in `chkconfig --list|grep '3:on'|egrep 'neutron|openstack'|grep -v neutron-ovs-cleanup|awk '{print $1}'`; do service $i restart; done
Stopping neutron-dhcp-agent:                               [  OK  ]
Starting neutron-dhcp-agent:                               [  OK  ]
Stopping neutron-l3-agent:                                 [  OK  ]
Starting neutron-l3-agent:                                 [  OK  ]
Stopping neutron-metadata-agent:                           [  OK  ]
Starting neutron-metadata-agent:                           [  OK  ]
Stopping neutron-openvswitch-agent:                        [  OK  ]
Starting neutron-openvswitch-agent:                        [  OK  ]
Stopping neutron:                                          [  OK  ]
Starting neutron:                                          [  OK  ]
Stopping openstack-ceilometer-alarm-evaluator:             [  OK  ]
Starting openstack-ceilometer-alarm-evaluator:             [  OK  ]
Stopping openstack-ceilometer-alarm-notifier:              [  OK  ]
Starting openstack-ceilometer-alarm-notifier:              [  OK  ]
Stopping openstack-ceilometer-api:                         [  OK  ]
Starting openstack-ceilometer-api:                         [  OK  ]
Stopping openstack-ceilometer-central:                     [  OK  ]
Starting openstack-ceilometer-central:                     [  OK  ]
Stopping openstack-ceilometer-collector:                   [  OK  ]
Starting openstack-ceilometer-collector:                   [  OK  ]
Stopping openstack-ceilometer-compute:                     [  OK  ]
Starting openstack-ceilometer-compute:                     [  OK  ]
Stopping openstack-cinder-api:                             [  OK  ]
Starting openstack-cinder-api:                             [  OK  ]
Stopping openstack-cinder-backup:                          [  OK  ]
Starting openstack-cinder-backup:                          [  OK  ]
Stopping openstack-cinder-scheduler:                       [  OK  ]
Starting openstack-cinder-scheduler:                       [  OK  ]
Stopping openstack-cinder-volume:                          [  OK  ]
Starting openstack-cinder-volume:                          [  OK  ]
Stopping openstack-glance-api:                             [  OK  ]
Starting openstack-glance-api:                             [  OK  ]
Stopping openstack-glance-registry:                        [  OK  ]
Starting openstack-glance-registry:                        [  OK  ]
Stopping openstack-heat-api:                               [  OK  ]
Starting openstack-heat-api:                               [  OK  ]
Stopping openstack-heat-api-cfn:                           [  OK  ]
Starting openstack-heat-api-cfn:                           [  OK  ]
Stopping openstack-heat-api-cloudwatch:                    [  OK  ]
Starting openstack-heat-api-cloudwatch:                    [  OK  ]
Stopping openstack-heat-engine:                            [  OK  ]
Starting openstack-heat-engine:                            [  OK  ]
Stopping keystone:                                         [  OK  ]
Starting keystone:                                         [  OK  ]
Stopping openstack-nova-api:                               [  OK  ]
Starting openstack-nova-api:                               [  OK  ]
Stopping openstack-nova-cert:                              [  OK  ]
Starting openstack-nova-cert:                              [  OK  ]
Stopping openstack-nova-compute:                           [  OK  ]
Starting openstack-nova-compute:                           [  OK  ]
Stopping openstack-nova-conductor:                         [  OK  ]
Starting openstack-nova-conductor:                         [  OK  ]
Stopping openstack-nova-consoleauth:                       [  OK  ]
Starting openstack-nova-consoleauth:                       [  OK  ]
Stopping openstack-nova-novncproxy:                        [  OK  ]
Starting openstack-nova-novncproxy:                        [  OK  ]
Stopping openstack-nova-scheduler:                         [  OK  ]
Starting openstack-nova-scheduler:                         [  OK  ]
Stopping openstack-swift-account:                          [  OK  ]
Starting openstack-swift-account:                          [  OK  ]
Stopping openstack-swift-account-auditor:                  [  OK  ]
Starting openstack-swift-account-auditor:                  [  OK  ]
Stopping openstack-swift-account-reaper:                   [  OK  ]
Starting openstack-swift-account-reaper:                   [  OK  ]
Stopping openstack-swift-account-replicator:               [  OK  ]
Starting openstack-swift-account-replicator:               [  OK  ]
Stopping openstack-swift-container:                        [  OK  ]
Starting openstack-swift-container:                        [  OK  ]
Stopping openstack-swift-container-auditor:                [  OK  ]
Starting openstack-swift-container-auditor:                [  OK  ]
Stopping openstack-swift-container-replicator:             [  OK  ]
Starting openstack-swift-container-replicator:             [  OK  ]
Stopping openstack-swift-container-updater:                [  OK  ]
Starting openstack-swift-container-updater:                [  OK  ]
Stopping openstack-swift-object:                           [  OK  ]
Starting openstack-swift-object:                           [  OK  ]
Stopping openstack-swift-object-auditor:                   [  OK  ]
Starting openstack-swift-object-auditor:                   [  OK  ]
Stopping openstack-swift-object-replicator:                [  OK  ]
Starting openstack-swift-object-replicator:                [  OK  ]
Stopping openstack-swift-object-updater:                   [  OK  ]
Starting openstack-swift-object-updater:                   [  OK  ]
Stopping openstack-swift-proxy:                            [  OK  ]
Starting openstack-swift-proxy:                            [  OK  ]

Disable the scheduled crontab job:

[root@node01 ~]# crontab -u keystone -e
# HEADER: This file was autogenerated at Sat May 24 00:28:13 +0800 2014 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: token-flush
#*/1 * * * * /usr/bin/keystone-manage token_flush >/dev/null 2>&1
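Commenting the job out can also be done non-interactively. The sed below edits a stand-in crontab file; on the node you would pipe the real one through crontab, roughly: crontab -u keystone -l | sed '...' | crontab -u keystone -

```shell
# Sketch: comment out the token-flush entry. The heredoc is a stand-in for
# the keystone user's crontab.
cron=$(mktemp)
cat > "$cron" <<'EOF'
# Puppet Name: token-flush
*/1 * * * * /usr/bin/keystone-manage token_flush >/dev/null 2>&1
EOF
# prefix the matching job line with '#'
sed -i '/keystone-manage token_flush/ s|^\*/1|#*/1|' "$cron"
cat "$cron"
```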

keystone-manage token_flush is implemented only for tokens stored in SQL; if the job is not disabled, each run fails with:

2014-05-26 11:07:01.847 7258 CRITICAL keystone [-] NotImplemented: The action you have requested has not been implemented.
2014-05-26 11:07:01.847 7258 TRACE keystone Traceback (most recent call last):
2014-05-26 11:07:01.847 7258 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
2014-05-26 11:07:01.847 7258 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
2014-05-26 11:07:01.847 7258 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 190, in main
2014-05-26 11:07:01.847 7258 TRACE keystone     CONF.command.cmd_class.main()
2014-05-26 11:07:01.847 7258 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 154, in main
2014-05-26 11:07:01.847 7258 TRACE keystone     token_manager.driver.flush_expired_tokens()
2014-05-26 11:07:01.847 7258 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/token/backends/kvs.py", line 355, in flush_expired_tokens
2014-05-26 11:07:01.847 7258 TRACE keystone     raise exception.NotImplemented()
2014-05-26 11:07:01.847 7258 TRACE keystone NotImplemented: The action you have requested has not been implemented.
2014-05-26 11:07:01.847 7258 TRACE keystone

A quick check:

[root@node01 ~]# source keystonerc_admin
[root@node01 ~(keystone_admin)]# nova service-list
+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | node01.linuxfly.org | internal | enabled | up    | 2014-05-26T03:08:08.000000 | -               |
| nova-conductor   | node01.linuxfly.org | internal | enabled | up    | 2014-05-26T03:08:06.000000 | -               |
| nova-scheduler   | node01.linuxfly.org | internal | enabled | up    | 2014-05-26T03:08:08.000000 | -               |
| nova-compute     | node01.linuxfly.org | nova     | enabled | up    | 2014-05-26T03:08:09.000000 | -               |
| nova-compute     | node02.linuxfly.org | nova     | enabled | up    | 2014-05-26T03:08:08.000000 | -               |
| nova-cert        | node01.linuxfly.org | internal | enabled | up    | 2014-05-26T03:08:06.000000 | -               |
+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
[root@node01 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+---------------------+-------+----------------+
| id                                   | agent_type         | host                | alive | admin_state_up |
+--------------------------------------+--------------------+---------------------+-------+----------------+
| 04990a48-4b83-4999-ae29-1fbc33e9de3a | Metadata agent     | node01.linuxfly.org | :-)   | True           |
| 1d5758ca-73fa-4729-bd95-6a4bf8066c5f | L3 agent           | node01.linuxfly.org | :-)   | True           |
| 3babec70-d4bd-4135-8bd6-097ab7e22a54 | Open vSwitch agent | node02.linuxfly.org | :-)   | True           |
| 431d2a91-2bc8-4f95-b574-9f6dc94cb49d | DHCP agent         | node01.linuxfly.org | :-)   | True           |
| c706de15-50fb-495b-868b-bb7a228d64d1 | Open vSwitch agent | node01.linuxfly.org | :-)   | True           |
+--------------------------------------+--------------------+---------------------+-------+----------------+

Installation complete.

Answer file: openstack-icehouse-test-20140523.tgz


Unpack it to openstack-icehouse-test-20140523.txt; it can be reused for a repeat installation with packstack --answer-file.


Original article by ItWorker. If reposting, please credit the source: https://blog.ytso.com/98254.html
