Expanding a Ceph Cluster:

So Ceph, you've heard us talk about Ceph before, but Ceph is an infinitely scalable storage platform. It's self-healing; it rearranges its own data to keep everything safe and evenly balanced across all the hard drives in the many-node cluster that is a Ceph storage platform.

And these are great, these are all things that you want in a self-healing, self-balancing storage organism (if you will). But where this can get you into trouble is when you go to add a bunch of new space into the cluster. You can probably see where I'm going here; I like to use the analogy of a bathtub. Suspend your disbelief for a minute and picture this: you've got a bathtub, it's filled with water up to a certain level, and you need to add more water to it. So what do you do? You've got to add more volume to it.

Same thing with a Ceph cluster. So what do you do? You add another storage node. This is where I said suspend your disbelief: take your bathtub and just imagine that it's now a foot longer. You've added all that new volume to it; what happens? The water immediately rushes in and redistributes itself, right? For your bathtub, whatever, that's fine. But in a Ceph cluster, the same thing is going to happen. If you just put a node in and say "okay, add", all the data is immediately going to start rushing into the new node, the cluster starts moving data around, and all of a sudden your network pipes are full of, not client i/o into the cluster, but data redistributing itself into the new space.

This redistribution is what we want, eventually, but we don't want it to disrupt client i/o. So how do we stop this rush of data flying into the new server? Let's go back to our bathtub analogy for a minute: before you magically put that one-foot extension on the tub, imagine you build a little dam wall that holds the water back. We do the same thing in Ceph by setting a couple of flags that say "do not redistribute your data, leave it where it's at". So we build our little dam, we build the tub out, and all of a sudden we've got all this fresh new volume that we could put water (data) into. Okay, it's there, it's all stood up, perfect. Do we take the dam away? Nah, we just open a little door in it and let the data flow. So instead of a big rush of a flood, you get a nice gentle *flow noise*, and now I'm even again.
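The transcript doesn't name the flags, but as a rough sketch of that "build the dam" step with the standard Ceph CLI, the flags usually set for this are norebalance, nobackfill, and norecover (the full procedure, with any caveats for your version, is on the knowledge base):

```shell
# "Build the dam": pause data movement before adding new capacity.
# norebalance - don't rebalance placement groups onto new/changed OSDs
# nobackfill  - don't start backfill operations
# norecover   - don't start recovery operations
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set norecover

# Now add your new node/drives (OSDs) however you normally would,
# e.g. with ceph-volume or your orchestrator. The new OSDs come up
# and add capacity, but no data rushes onto them while these flags are set.
```

With the flags in place, the cluster will report the new OSDs as up and in, and `ceph -s` will show the flags as a health warning, which is expected while you finish the expansion.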

And that's really how you expand your Ceph cluster: you put a little temporary dam in by setting a couple of flags that say "don't move your data", you add your new capacity (drives, storage nodes, whatever it is, however you choose to expand your cluster), and then instead of tearing the whole dam down, you take the flags off and open a little door, and the data will just slowly trickle, trickle charge *laughs*, redistribute if you will, in the background while your client i/o remains untouched.
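The "little door" part of that can be sketched as throttling backfill before clearing the flags. This assumes a recent Ceph release where these options live in the central config store; the exact values to use depend on your cluster:

```shell
# Open a small door: limit how aggressively data moves, so client i/o wins.
# osd_max_backfills 1 means each OSD handles at most one backfill at a time.
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

# Then take the dam away: clear the flags and let data trickle
# onto the new OSDs in the background.
ceph osd unset norebalance
ceph osd unset nobackfill
ceph osd unset norecover

# Watch the rebalance progress while clients keep working normally.
ceph -s
```

Once the cluster reports HEALTH_OK again, the rebalance is done and you can raise those throttle values back up if you lowered them.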

Alright, and a fun fact on today's theme of bathtubs and storage servers: don't bring your Storinator in the tub with you. Make sure you keep your Ceph clusters and your tubs completely separated; they may be similar in analogy, but keep them separated. Okay, analogies and fun facts aside, the technical details on how to avoid the bathtub tidal wave when expanding a Ceph cluster are available on our knowledge base, knowledgebase.45drives.com. We'll put a link in the description.

And as always, questions, comments, anything at all, leave a comment or reach out on social media; we'd love to hear from you. Hope you enjoyed another Tuesday Tech Tip, and I'll see you next week.

Discover how 45Drives can work for you.

Contact us to discuss your storage needs and to find out why the Storinator is right for your business.

Contact 45Drives