OpenStack Grizzly – Creating a cinder ONLY (block storage) node – standalone



Hi All

I’ve been doing a fair amount of work with OpenStack recently. One of the first hurdles I encountered was creating a BLOCK (Cinder) storage node that wasn’t just the cinder modules installed alongside all of the other OpenStack services – which is how every piece of documentation out there at present (April 2013) seems to describe it. What I needed was a BLOCK node that is totally separate and connected over iSCSI.

If you wish to have PERSISTENT storage, you’ll need a BLOCK storage (Cinder) node.


I’ve been following the instructions from here

As you can see from this set of instructions, the guide creates three different nodes: COMPUTE, CONTROLLER and NETWORK – see the diagram below



What it DOESN’T guide you through is installing your BLOCK/CINDER node on a separate box. If you look through the creation of the CONTROLLER node, you’ll find that the CINDER instructions are actually in there. They are NOT shown in the diagram, but they are in the guide – slightly confusing.


The goal: take a separate server with decent Ethernet connectivity and bags of storage, configure it as a BLOCK/CINDER storage node, and connect it to your setup with iSCSI – just how most full-blown OpenStack setups, in my opinion, would be created. Everyone needs persistent storage, right? And it needs to be centralised and available to all your COMPUTE nodes, right?


  • Start by following the instructions in the Grizzly Multi Node guide (Link Above) to create your CONTROLLER NODE – When you get to section 2.10 “Cinder” – Skip it and continue.
  • Now build yourself a BLOCK node, using sections 2.1 and 2.2 – just to get a box online, networked and with the right repositories.
[important title=”Note 1″]In production I’d recommend you install the OS onto a 146 or 300GB drive array (RAID 1)[/important]
[important title=”Note 2″]You’re going to need some storage to serve from your CINDER node. I’d leave this storage unconfigured for now so the installation of Ubuntu doesn’t grab it and use it[/important]
  • You need 3 network interfaces for this node
  • Eth0 for OpenStack Management – 10.10.10.x has been used in the setup above
  • Eth1 for Public facing API – Internet connection – 192.168.100.x has been used above
  • Eth2 for iSCSI – I’m using 10.10.99.x

Your /etc/network/interfaces may look like this for example

# Not internet connected (used for OpenStack management)
# example address on the 10.10.10.x management subnet
auto eth0
iface eth0 inet static
  address 10.10.10.61
  netmask 255.255.255.0

# The primary network interface (public facing API / internet)
# example address on the 192.168.100.x public subnet
auto eth1
iface eth1 inet static
  address 192.168.100.61
  netmask 255.255.255.0
  gateway 192.168.100.1

# Not internet connected (used for iSCSI)
# example address on the 10.10.99.x iSCSI subnet
auto eth2
iface eth2 inet static
  address 10.10.99.61
  netmask 255.255.255.0
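Each interface sits on its own /24 subnet, so management, public API and iSCSI traffic stay separated. A throwaway helper (the addresses below are hypothetical, purely illustrative) to sanity-check that two addresses share a /24:

```shell
# Returns success when two dotted-quad addresses share the same /24,
# i.e. their first three octets match. Illustrative only.
same_subnet24() {
    [ "${1%.*}" = "${2%.*}" ]
}

same_subnet24 10.10.99.2 10.10.99.10 && echo "10.10.99.2 and 10.10.99.10: same /24"
same_subnet24 10.10.99.2 10.10.10.51 || echo "10.10.99.2 and 10.10.10.51: different /24"
```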
  • Follow the instructions to get the Grizzly repositories in place and the dist-upgrade done. Now your new block node is ready for the Cinder modules
  • apt-get install -y cinder-volume iscsitarget iscsitarget-dkms
  • Edit /etc/cinder/cinder.conf to make it look like so
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
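The sql_connection line above comes through truncated; the full form points at the cinder database on the controller’s MySQL server. A sketch, using a hypothetical controller management IP of 10.10.10.51 and a database named cinder (substitute your own host and credentials):

```ini
# hypothetical values – replace the host (and credentials) with your own
sql_connection = mysql://cinderUser:cinderPass@10.10.10.51/cinder
```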

  • Now edit /etc/cinder/api-paste.ini – scroll to the bottom and replace the following section
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host =
service_port = 5000
auth_host =
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
signing_dir = /var/lib/cinder
  • Now restart all the cinder services on your new CINDER/BLOCK node

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
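The one-liner above works, but parsing the output of ls is fragile; a plain glob does the same job safely. Here’s the selection logic demonstrated against mock init scripts in a temp directory – no real services are touched, and on the node itself you’d call sudo service "$name" restart inside the loop:

```shell
# Create mock init scripts so we can exercise the glob without root.
tmp=$(mktemp -d)
touch "$tmp/cinder-volume" "$tmp/cinder-api" "$tmp/unrelated-service"

restarted=""
for svc in "$tmp"/cinder-*; do
    name=$(basename "$svc")
    # On the real node this line would be: sudo service "$name" restart
    echo "would restart: $name"
    restarted="$restarted $name"
done

rm -rf "$tmp"
```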

  • Now synchronize the cinder settings with the MySQL db over on your controller node (run this on the CINDER node – it connects out to the controller’s database)

cinder-manage db sync

  • Notice how you get an error about it NOT being able to connect to the MySQL database? No problem – you need to install the Python MySQL client

apt-get install -y python-mysqldb

  • Now try running the db sync command again to register your settings and presence on the controller node
  • OK. Now it’s time to provision the storage you have earmarked for this Storage node
    • Bring the storage online
    • Build it in a RAID array of your choosing, remembering that RAID 1 is fast and RAID 5 is sloooowwww
    • Restart your node if you have to so that fdisk -l can see it
    • On my test platform I’m using a 20GB Drive
    • when I run fdisk -l it appears as – Disk /dev/sdb: 21.5 GB, 21474836480 bytes
    • OK. So we need to format this new piece of storage now and give it to CINDER
[important title=”Note 3″]Please replace /dev/sdb with whatever your storage has been discovered as with fdisk[/important]

fdisk /dev/sdb
# At the prompts type: n, p, 1, <Enter>, <Enter>, t, 8e, w
# (new primary partition using the whole disk, type 8e = Linux LVM, then write)
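As an aside, the apparent mismatch between my “20GB drive” and fdisk’s “21.5 GB” is just units: fdisk reports decimal gigabytes, and 21474836480 bytes is exactly 20 GiB. A quick check:

```shell
# fdisk prints decimal GB (powers of 1000); the "20GB" figure is
# binary GiB (powers of 1024). Same number of bytes, different units.
bytes=21474836480
gib=$((bytes / 1024 / 1024 / 1024))
gb=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / 1e9 }')
echo "$bytes bytes = $gib GiB = $gb GB"
```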

  • Now give this new partition to LVM to manage, naming the volume group correctly (cinder-volumes is the name cinder.conf expects)

pvcreate /dev/sdb1
vgcreate cinder-volumes /dev/sdb1
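One gotcha worth checking: the volume group name you pass to vgcreate must match the volume_group setting in cinder.conf, or cinder-volume won’t find its storage. A small sketch of that check, run here against a mock config file (on the node, read /etc/cinder/cinder.conf instead and compare against the output of sudo vgs):

```shell
# Mock cinder.conf standing in for /etc/cinder/cinder.conf
conf=$(mktemp)
printf 'volume_group = cinder-volumes\n' > "$conf"

# Pull out the volume_group value the way you would from the real file
vg_expected=$(awk -F' = ' '/^volume_group/ { print $2 }' "$conf")
echo "cinder.conf expects volume group: $vg_expected"

rm -f "$conf"
```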

Sorry to do this to you, but I’ve given up on OpenStack for now – well, the rolling out of it manually, that is. I’ve decided to use the FuelWeb ISO, available from Mirantis. This builds a cluster for you in some very easy steps, and seeing as OpenStack is pretty complicated, I’m going to take all the help I can get.


