As I’ve previously mentioned, I was fortunate enough to recently attend the joint SNIA Storage Developer Conference 2023 and Gestalt IT Storage Field Day 26 events. At these events we both heard directly from companies in *FD style and dove deep into the storage realm with an excellent collection of breakout sessions. One thing you did not hear much about at either event was the traditional spinning hard disk.

For all the new hotness such as AI/ML model building, spinning disk simply isn’t fast enough to keep up with the ingest without throwing literal racks of it at the problem. For edge use cases, such as the super cool keynote about HPE’s Spaceborne Computer systems aboard the International Space Station, or anything manufacturing related, the concern is that any storage medium with moving parts will quickly be damaged by its environment.

Next comes density. For the longest time we all wanted SSDs, but you largely couldn’t get them even into the terabyte range, and if you did, the cost per gigabyte was so astronomically higher that they only made sense for high-speed workloads. Today we are not only seeing flash-based disks priced economically in the multi-terabyte range, but with innovations in QLC such as what Solidigm has been up to lately, we’re seeing SSDs rival and surpass spinning disk in both capacity and price. Take for example the D5-P5336 from Solidigm; these disks range from 15.36 TB up to 61.44 TB (coming soon) in a single device. That seems insane to me, but at the same time this type of capacity is exactly what the market needs. Pair it with the D7-P5810 and the Cloud Storage Acceleration Layer (CSAL) software, as we saw during their SFD26 presentation, to create a tiered storage system, and you can achieve amazing read and write performance while maintaining a far lower cost than we’ve historically associated with high-speed, dense storage. Finally, modern flash storage offers a much greater level of energy efficiency.
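To make the tiering idea concrete, here is a toy sketch of the general pattern: a small, fast cache tier (think an SLC drive like the D7-P5810) absorbs incoming writes, then flushes them sequentially to a large QLC capacity tier, which prefers big sequential writes. This is purely a conceptual illustration of tiered storage, not the actual CSAL implementation or API.

```python
# Toy two-tier write path: writes land on a small fast tier first,
# then get flushed in sorted (sequential) order to the capacity tier.
class TieredStore:
    def __init__(self, cache_limit: int):
        self.cache_limit = cache_limit   # max blocks held in the fast tier
        self.cache = {}                  # fast tier: block id -> data
        self.capacity = {}               # large QLC-style capacity tier

    def write(self, block: int, data: bytes) -> None:
        self.cache[block] = data         # absorb the write on the fast tier
        if len(self.cache) > self.cache_limit:
            self.flush()

    def flush(self) -> None:
        # Drain the cache in sorted order so the capacity tier sees
        # large, sequential writes -- the pattern QLC handles best.
        for block in sorted(self.cache):
            self.capacity[block] = self.cache[block]
        self.cache.clear()

    def read(self, block: int) -> bytes:
        # Serve from the fast tier when possible, else the capacity tier.
        if block in self.cache:
            return self.cache[block]
        return self.capacity[block]

store = TieredStore(cache_limit=2)
store.write(3, b"c")
store.write(1, b"a")
store.write(2, b"b")                 # exceeds the cache limit -> flush
print(len(store.capacity))           # -> 3
```

Real implementations layer on crash consistency, wear awareness, and eviction policy, but the core economics are visible even in the sketch: the expensive fast media only needs to be large enough to absorb the write burst.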
Energy efficiency was a major topic of the conference: with sustainability now a core datacenter design constraint, storage power consumption is a big part of that conversation. At the end of the day, flash is to storage what LED bulbs are to your home’s lighting: simply better, cheaper, and the technological innovation we need.

Conclusion

In the end, where does that leave traditional spinning disks? I think the “cheap and deep” use case will stick around for now; think secondary backup storage or glacier-style object storage platforms. But if the current trend of SSD pricing heading toward rock bottom continues, even those will become less common.
I was recently invited to attend Gestalt IT’s Storage Field Day 26 event, held in conjunction with SNIA’s Storage Developer Conference in Fremont, CA. I’ll be honest: I’m not exactly a storage systems subject matter expert, but I am a long-time consumer of storage systems. Of late I would definitely consider myself “storage adjacent,” as my work has largely revolved around backups and data protection. I say that to say that going to an event like this was very much a kick into the deep end of the pool. There were things that blew my mind (CXL and DNA-based storage media), but also things completely in my lane, such as the session advocating for S3 to become a standards-based protocol. In this series of posts I’ll be covering the event and the presenting organizations, starting with some takeaways from SNIA’s Storage Developer Conference.

What Is Old Is New Again

If you are like me and storage is at best a secondary technology for your attention, you may have heard of Compute Express Link (CXL), but maybe not of the adjacent term that was everywhere at the event: storage class memory. Having sat through a number of sessions and read the excellent primer by Andy Banta, I’ve come to think of them as two sides of the same coin: CXL puts expanded memory onto the PCIe bus, making it possible to radically scale the amount of RAM available to a system, while storage class memory is heavily cached, memory-like media packaged and sold as storage on the PCIe bus. So why does this matter? It comes down to which resource you need to expand. If you need more RAM available per processor, the CXL path is more appropriate. If you need ultra-fast storage, either to meet a specific workload need or to act as part of a tiered storage system such as the proposed Software-Enabled Flash (SEF) standard envisions, storage class memory may be more your target. For both, it is definitely early days when it comes to actually seeing them in the wild.
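The "which resource do you need to expand" decision above can be sketched as a simple latency/cost trade-off. The latency and relative-cost figures below are rough ballpark assumptions for illustration only, not vendor specifications; the point is just that each tier is the cheapest option within its latency class.

```python
# Pick the cheapest tier that still meets a workload's latency target.
# Numbers are illustrative assumptions, not measured or quoted specs.
TIERS = [
    # (name, approx access latency in ns, relative cost per GB)
    ("DRAM",            100,  10.0),
    ("CXL memory",      250,   6.0),
    ("SCM / SLC SSD", 10_000,  2.0),
    ("QLC NVMe SSD", 100_000,  0.5),
]

def pick_tier(max_latency_ns: int) -> str:
    """Return the cheapest tier whose latency meets the target."""
    candidates = [t for t in TIERS if t[1] <= max_latency_ns]
    if not candidates:
        raise ValueError("no tier meets the latency target")
    return min(candidates, key=lambda t: t[2])[0]

print(pick_tier(500))      # memory-class working set -> "CXL memory"
print(pick_tier(50_000))   # hot storage tier -> "SCM / SLC SSD"
```

If the working set must behave like RAM, CXL wins on cost over adding more DRAM; if it only needs to behave like very fast storage, storage class memory or an SLC tier wins instead, which is exactly the split described above.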
CXL is rapidly working its way through versions of the standard, with the forthcoming 3.0 release being the one I think will start to see more widespread adoption by providers and even wider use cases. Storage class memory first came onto the scene with Intel’s Optane memory products, which were quickly killed off around the spin-out of what eventually became Solidigm, possibly because the idea was too expensive and too soon. That said, with the just-announced D7-P5810 from the aforementioned Solidigm, it appears they are ready to try again.

What was interesting to me, hearing all the talk about CXL and about memory/flash tiered systems such as the proposed Software-Enabled Flash standard, is that both are new approaches to old conversations we’ve been having in the virtualization space for years. CXL is born from the observation that there is usually a need for more RAM capacity than processing power, and it essentially creates a software-defined memory scale-out capability. Sound familiar? Conversely, Software-Enabled Flash, and any of the other concepts that allow for tiering storage class memory or even the various classes of SSD, involves very much the same considerations that those of us buying CS-series Nimble arrays were weighing last decade. In the end, a lot of the software-defined conversations in the modern storage space are the same ones we’ve been having for the past 10-15 years; we just now want to do things much, much faster.

Conclusion

In the end, the technologies around the storage industry continue to evolve, giving us the ability to read and write data in unfathomable quantities while continuing to get faster and smaller. But while the technology to create storage changes, the ways we consume it, present it, and leverage its capabilities seem very cyclical to me. I’m very curious to see where we go from here.
In the next post I will look at how companies such as Solidigm are pushing the traditional hard disk further into the recycling bin of history.
With most SaaS services used in business, built-in backup and recovery is a rare addition. In this Gestalt IT Tech Talk, recorded at the recent Storage Field Day event, Stephen Foskett and W. Curtis Preston, aka Mr. Backup, explore the state of data protection in the SaaS world. SaaS applications like Office 365, G Suite, and similar offerings provide only a scant level of recoverability, which is at the core of frequent user data loss. The rapid rise of ransomware attacks lends extra urgency to mitigating this. Listen to the conversation to learn how companies can avert a potential crisis by adopting an alternate path.