Have you ever lain awake late at night, unable to sleep, replaying the small mistakes you made years ago and physically cringing? Or do you think back on the good stuff you did, consider one of your big wins, and go to sleep happy?
Looking back, going to a direct-wired architecture was one of the best decisions we ever made as a company.
If you're unfamiliar with direct-wired architecture, it means each drive has a dedicated, one-to-one connection to the system, giving every drive in the server its full throughput. The drives connect through a host bus adapter card; the LSI 9305 is our standard option at 45Drives, although others are available.
Check out this blog to find out what prompted us to go direct-wired and why it turned out to be such a great decision.
Back in January, our team released a new module inside our Houston Command Center for setting up storage pools, shares, snapshots, and more. This ZFS module has recently been updated and will be a part of our July 6th, 2021 launch of new Houston features/modules for Cockpit users.
Our lead engineer, Brett Kelly, gives us a rundown of what's new inside our ZFS module in this week's tech tip. He also gives us a sneak peek at the final module update before our July 6th release.
Remember, the theme behind all of these new features is to get everyone out of the command line and make setting up and managing your storage infrastructure a breeze!
ZFS is an advanced file system that offers many beneficial features such as pooled storage, data scrubbing, enormous capacity, and more.
One of the most beneficial features of ZFS is the way it caches reads and writes. ZFS allows for tiered caching of data through the use of memory, offering several caching levels for both reads and writes. These can be complicated if you're unfamiliar with them, so we're making it easy with a throwback to our ZFS Caching article.
In fact, this article was one of our most popular articles ever. Click the link below to learn all about how ZFS caches from an easy-to-understand, high-level perspective.
The answer is found at the bottom of this newsletter.
L2ARC can be used to improve the performance of random read workloads on your system.
In a ZFS system, a caching technique called the adaptive replacement cache (ARC) keeps as much of your dataset in RAM as possible. This allows frequently accessed data to be served quickly, much faster than having to go to the backing HDD array. It follows that more RAM means more ARC space, which in turn means more data can be cached.
As you'll have seen if you read our ZFS caching article, L2ARC lives on an SSD instead of in much faster RAM. An SSD is still far quicker than spinning disks and much cheaper than RAM, so when the ARC hit rate is low, adding an L2ARC can bring real performance benefits: instead of going straight to the HDDs for uncached data, the system checks RAM and then the SSD first. An L2ARC is usually worth considering when the ARC hit rate is below 90% and you already have 64+ GB of RAM.
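To make the tiering concrete, here is a toy Python sketch of that read path: check RAM first, then the SSD cache, and only then fall back to the HDD pool. This is an illustrative LRU model with made-up block names, not ZFS's actual ARC algorithm (the real ARC adaptively balances recently used and frequently used data).

```python
# Toy model of a tiered read path: RAM cache (like ARC) -> SSD cache (like
# L2ARC) -> HDD pool. Illustrative only; NOT ZFS's real caching algorithm.
from collections import OrderedDict

class TieredCache:
    def __init__(self, ram_size, ssd_size):
        self.ram = OrderedDict()   # small but fast (models ARC in RAM)
        self.ssd = OrderedDict()   # bigger, slower (models L2ARC on SSD)
        self.ram_size, self.ssd_size = ram_size, ssd_size
        self.hits = {"ram": 0, "ssd": 0, "hdd": 0}

    def read(self, block, hdd):
        if block in self.ram:              # fastest path: already in RAM
            self.ram.move_to_end(block)
            self.hits["ram"] += 1
            return self.ram[block]
        if block in self.ssd:              # second tier: the SSD cache
            self.hits["ssd"] += 1
            data = self.ssd.pop(block)
        else:                              # missed both caches: go to HDDs
            self.hits["hdd"] += 1
            data = hdd[block]
        self._promote(block, data)
        return data

    def _promote(self, block, data):
        # Put the block in RAM; blocks evicted from RAM spill to the SSD tier.
        self.ram[block] = data
        if len(self.ram) > self.ram_size:
            old_block, old_data = self.ram.popitem(last=False)
            self.ssd[old_block] = old_data
            if len(self.ssd) > self.ssd_size:
                self.ssd.popitem(last=False)

# Hypothetical workload: a tiny "HDD pool" and a few repeated reads.
hdd = {f"block{i}": f"data{i}" for i in range(10)}
cache = TieredCache(ram_size=2, ssd_size=4)
for name in ["block0", "block0", "block1", "block2", "block0"]:
    cache.read(name, hdd)
print(cache.hits)  # -> {'ram': 1, 'ssd': 1, 'hdd': 3}
```

The point of the sketch is the fallback order: every read that RAM or the SSD tier can answer is a read the slow spinning disks never see, which is exactly why a low ARC hit rate makes an L2ARC attractive.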
Awesome, right? Well, keep reading! The article below walks you through, in detail, the process of adding an L2ARC (cache) drive to your zpool.
CHIME is making headlines again! The telescope has detected hundreds of mysterious fast radio bursts in space - cool, right? But did you know they store their data on a Storinator? Learn more about CHIME here!
Want to be featured in The Direct Wire?
Send us your Storinator setup, pitch us a
or become a
We'd love to hear from you and feature you on our website and in The Direct Wire!
An overview of CephFS architecture!
Looking to learn the basics of CephFS? Check out this 4-part mini-series on our YouTube channel.
In these videos, 45Drives co-founder Doug Milburn and Lead R&D Engineer Brett Kelly sit down to discuss and give examples of CephFS architecture.
Watch them by clicking the link below.