#SymLink: VMware Edge Cloud Orchestrator adapts VMware’s technology for SMBs and edge environments, focusing on minimal resource use and SaaS-based management. @k00laidIT #K00laidIT #VMware #EFD3
https://www.koolaid.info/vmware-edge-cloud-orchestrator-vmware-for-the-very-small-masses/
VMware Edge Cloud Orchestrator: VMware For The Very Small Masses

Not everybody who needs on-premises virtualization and wants to use VMware vSphere needs full blown VCF. ~Me That leading quote jumps off the page just as much as VMware’s presenters, Alan Renou…

koolaid.info
#SymLink: Jim Jones explores how edge computing tackles challenges like hardware efficiency, environmental resilience, and legacy support, enhancing data processing at remote sites. @k00laidIT #OnLogic #K00laidIT #EFD3
https://www.koolaid.info/what-problems-are-modern-edge-computing-solving-for/
What Problems Are Modern Edge Computing Solving For? - koolaid.info

I was recently honored to be selected as a delegate at the Futurum Group‘s Edge Field Day 3 event in Santa Clara, California. There we heard from companies about their solutions and products that pertain to this smaller but ever-growing segment of the enterprise IT landscape. In the next few posts I’ll cover those companies’ visions specifically, but before we get to that I think it is worth discussing the common issues they are all trying to solve. While each of these companies is differentiated in its solution to the needs of computing in remote places, there were many commonalities in what they see as the challenges before them.

What is Edge Computing

In the datacenter or the cloud, the edge is that firewall or gateway sitting between your widespread storage and application estate and the big, bad Internet. Those working in edge computing are more concerned with ensuring that applications and data are available in a distributed manner that allows processing to be completed where the data is created and then siphoned upstream for collation and analysis. Before we start defining problems it’s worth calling out some normal use cases for edge computing. As we said, edge computing exists outside of our normal boundaries. These use cases can be anything from an evolution of how we handle remote retail locations and traditional Remote Office, Branch Office (ROBO) deployments to more modern scenarios including manufacturing, agricultural monitoring and management, and even smart car enablement. Across all of these use cases, one solution commonality I found during the course of the presentations is that most of the software-based solutions either already had a defined partnership with OnLogic or are in the process of forming one, because OnLogic has solved the hardware problem for most if not all of the edge use cases.
It was a great presentation that highlighted a company that puts customer and partner success above all else, finding solutions to some frankly hard problems.

Problem #1: Be Lightweight

The first thing consistently stated was that the physical devices you deploy in these remote locations, regardless of the use case, need to be lightweight. In the datacenter it is common to consider hundreds of GB of RAM and dozens of CPU cores per host the norm, along with access to specialized cooling and 3-phase power. In edge deployments this often comes down to a couple of small consumer units thrown on a shelf in a closet or embedded in industrial controllers; in neither case do you have the space or support systems to allow for that kind of power and cooling. Designs today are often built around ultra-small-form-factor units such as those from OnLogic, purpose-built to provide sufficient capability in an efficient manner.

Problem #2: Resilience

When you deploy computing to the edge you will most likely find yourself dealing with environmental concerns you don’t typically face in the datacenter. First, these locations often rely on connectivity such as WiFi or 5G that will not always be available, so you have to design both systems and applications to handle being offline for periods of time. Next is the true environmental concern: these compute and storage nodes are often deployed in locations that deal with complications like extreme heat or moisture. This is another place where OnLogic truly shines, providing all the power the solution needs in highly resilient system designs that are often fanless and sealed, allowing deployment wherever needed. Finally, because these devices can be just sitting on a shelf in a closet, physical loss through theft or disaster is a very real possibility. The devices should therefore be well secured and monitored so any loss can be mitigated and reported.
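The offline-tolerance requirement described above can be sketched as a simple store-and-forward pattern: the edge process persists readings locally first and only drains the buffer upstream when the link is up. This is an illustrative sketch of the design idea, not any vendor’s implementation; the `send_upstream` callback and the SQLite schema are assumptions invented for the example.

```python
import json
import sqlite3

class StoreAndForward:
    """Buffer edge telemetry locally; drain upstream when connectivity returns."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)"
        )

    def record(self, reading: dict):
        # Always write locally first, so an offline period loses nothing.
        self.db.execute(
            "INSERT INTO outbox (payload) VALUES (?)", (json.dumps(reading),)
        )
        self.db.commit()

    def drain(self, send_upstream) -> int:
        """Ship buffered rows in order; stop at the first failure (link down)."""
        sent = 0
        rows = self.db.execute(
            "SELECT id, payload FROM outbox ORDER BY id"
        ).fetchall()
        for row_id, payload in rows:
            if not send_upstream(json.loads(payload)):
                break  # link still down; the row stays queued for the next cycle
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            sent += 1
        self.db.commit()
        return sent
```

A periodic timer would call `drain()` with whatever transport the site uses; because every reading hits durable local storage before any network attempt, a multi-hour WiFi or 5G outage costs nothing but latency.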
Problem #3: Dealing with the Past, Building for the Future

Today’s edge computing is in many ways an evolution of the old ROBO solutions we as IT pros have been deploying for decades at this point. Whether it’s an SD-WAN device with an IP phone, printer, and laptop behind it for home workers, or a small Windows server running SQL Server and various applications for point of sale and inventory management, edge computing vendors have figured out that their solutions often arrive where something came before them, and they have to support a path to migrate legacy systems. In the two days at EFD3, every single company providing edge compute supplied mixed support for both virtual machines and containers. The virtual machines are designed to be transient in most solutions: a short-term way to move workloads onto modern hardware while the applications they host are modernized piece by piece into more cloud-native options. I’ll make an argument, from my experience, that this short-term solution will often become more long-term than anybody would like, simply due to how enterprise systems tend to crawl, but I’ll save that for another post. Once those short-term workloads are migrated, the bread and butter of any of these edge computing solutions is containerized applications, both to meet security requirements and to keep the applications as lightweight as possible. While Kubernetes is what you hear most about in the datacenter, at the edge the preference seems to lean toward treating each host as an island and letting the applications provide their own resiliency between hosts and the cloud.

Problem #4: Work at Scale

The last common problem to be covered in this post is that these solutions need to work at scale.
For all the talk in the datacenter over the years about breaking down silos and minimizing support effort, I was very impressed to see the three different management overlays for potentially globally distributed edge nodes and how they can...

koolaid.info
Jim Jones shares his perspective on the evolving landscape of data storage, noting the diminishing number of use cases for spinning disk technology in his latest blog post #K00laidIT #SNIA #Solidigm #SDC23 #SFD26 https://www.koolaid.info/spinning-disk-use-cases-are-getting-smaller/
Spinning Disk Use Cases Are Getting Smaller - koolaid.info

As I’ve previously mentioned, I was fortunate enough to recently attend the joint SNIA Storage Developer Conference 2023 and Gestalt IT Storage Field Day 26 event. At these events we heard directly from companies in *FD style and also dove deep into the storage realm with an excellent collection of breakout sessions. One thing you did not hear much about at either event was traditional spinning hard disks. For all the new hotness such as AI/ML model building, spinning disk simply isn’t fast enough to keep up with the ingest without throwing literal racks of it at the problem. For edge use cases, such as the super cool keynote about the Spaceborne Computer systems from HPE on the International Space Station, or anything manufacturing related, the concern is that any kind of storage medium that moves will quickly be damaged by its environment. Next comes density. For the longest time we all wanted SSDs, but you largely weren’t going to get even into the terabyte range, and if you did, the cost per gigabyte was so astronomically high it wasn’t practical except for high-speed workloads. Today we are not only seeing flash-based disks economically in the multi-terabyte range, but with innovations in QLC such as what Solidigm is up to lately, we’re seeing SSDs rival and surpass spinning disk in both capacity and price. Take for example the D5-P5336 from Solidigm; these disks range from 15.36 TB up to 61.44 TB (coming soon) in a single device. That seems insane to me, but at the same time this type of capacity is what the market needs. Pair it with the D7-P5810 via the Cloud Storage Acceleration Layer software, as we saw during their SFD26 presentation, to create a tiered storage system, and you can achieve amazing read and write performance at a far lower cost than we have associated with high-speed, dense storage in the past. Finally, modern flash storage offers a much greater level of energy efficiency.
This was a major topic of the conference: with sustainability now a core datacenter architectural design constraint, storage power consumption matters more than ever. At the end of the day, flash is to storage as LED bulbs are to your home’s lighting: just better, cheaper, and the technological innovation we need.

Conclusion

So where does that leave traditional spinning disks? I think the “cheap and deep” use case will remain for now; think secondary backup storage or glacier-style object storage platforms. But if the current trend of SSD pricing falling toward rock bottom continues, even those will become less common.
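The tiered arrangement described above, a fast SLC-class device like the D7-P5810 absorbing writes in front of dense QLC capacity, can be sketched as a write-back buffer that destages sequentially, which is the write pattern QLC media handles best. This is a toy illustration of the tiering idea only; the class, its thresholds, and the in-memory “tiers” are invented for the example and are not CSAL’s actual API.

```python
class TieredWriteBuffer:
    """Toy write-back tier: a fast SLC-like buffer destages to QLC-like capacity."""

    def __init__(self, fast_capacity: int):
        self.fast_capacity = fast_capacity   # blocks the fast tier can hold
        self.fast = {}                       # write buffer (SLC-like tier)
        self.capacity = {}                   # dense tier (QLC-like)

    def write(self, lba: int, data: bytes):
        self.fast[lba] = data                # writes land on the fast tier first
        if len(self.fast) >= self.fast_capacity:
            self.destage()

    def destage(self):
        # Flushing in LBA order turns scattered random writes into one
        # sequential stream before they ever touch the dense tier.
        for lba in sorted(self.fast):
            self.capacity[lba] = self.fast[lba]
        self.fast.clear()

    def read(self, lba: int):
        # Serve from the fast tier if the block is still buffered there.
        return self.fast.get(lba, self.capacity.get(lba))
```

The design point is the same one the CSAL presentation made: the application sees low-latency writes from the fast device, while the capacity tier only ever receives large sequential destages.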

koolaid.info
#SymLink: The article "Spinning Disk Use Cases Are Getting Smaller" considers the declining relevance of HDDs in the face of SSDs and cloud storage, with a focus on the remaining niches for spinning disk technology. @k00laidIT #Solidigm #SNIA #K00laidIT #SFD26 #SDC23
https://www.koolaid.info/spinning-disk-use-cases-are-getting-smaller/
Spinning Disk Use Cases Are Getting Smaller - koolaid.info
koolaid.info
#SymLink: The article "Thinking About… Storage in 2023" on www.koolaid.info discusses potential future trends and innovations in digital storage, exploring how businesses and individuals could adapt to these changes. @k00laidIT #Solidigm #SNIA #K00laidIT #SFD26
https://www.koolaid.info/thinking-about-storage-in-2023/
Thinking About… Storage in 2023 - koolaid.info

I was recently invited to attend Gestalt IT’s Storage Field Day 26 event in conjunction with SNIA’s Storage Developer Conference in Fremont, CA. Through the SFD26 event we heard from a number of presenting companies. I’ll be honest: I’m not exactly a storage systems subject matter expert, but I am a long-time consumer of storage systems. Of late I would definitely consider myself “storage adjacent,” as my work has largely revolved around backups and data protection. I say that to say that going to an event like this was very much a kick into the deep end of the pool. There were things that blew my mind (CXL and DNA-based storage media) but also things completely in my lane, such as the session advocating for S3 to become a standards-based protocol. In this series of posts I’ll be covering the event and the presenting organizations, starting with some takeaways from SNIA’s Storage Developer Conference.

What Is Old Is New Again

If you are like me and storage is at best a secondary technology for your attention, you may have heard of Compute Express Link (CXL) but maybe not of the adjacent term that was everywhere at the event: storage class memory. Having sat through a number of sessions and read the excellent primer by Andy Banta, I’ve come to think of them as two sides of the same coin: CXL puts expanded memory onto the PCIe bus, making it possible to radically scale the amount of RAM available to a system, while storage class memory is heavily cached RAM packaged on the PCIe bus and sold as storage. So why does this matter? It comes down to which resource you need to expand. If you need more RAM available per computing processor, the CXL path is more appropriate. If you need ultra-fast storage, either to meet a specific workload need or to act as part of a tiered storage system such as the proposed SEF standard, storage class memory may be more your target. In both cases it is definitely early days for actually seeing either in the wild.
CXL is rapidly working its way through the versions of the standard, with the forthcoming 3.0 release being the one I think will start to see more widespread adoption by providers and even wider use cases. Storage class memory first came onto the scene with Intel’s Optane memory products, which were then quickly killed off around the spin-out of what eventually became Solidigm, possibly because it was an idea that was too expensive and too soon. That said, with the just-announced D7-P5810 from the aforementioned Solidigm, it appears they are ready to try again. What was interesting to me about all the talk of CXL and memory/flash tiered systems, such as the proposed Software Enabled Flash standard, is that they are both new approaches to old conversations we’ve been having in the virtualization space for years. CXL is born from the idea that, as always, there is usually a need for more RAM capacity than processing power, and it essentially creates a software-defined memory scale-out capability. Sound familiar? Conversely, Software Enabled Flash and the other concepts that allow tiering of storage class memory, or even the various classes of SSD, involve very much the same considerations those of us buying CS-series Nimble arrays last decade were weighing. In the end, a lot of the software-defined conversations present in the modern storage space are the same ones we’ve been having for the past 10-15 years; we just now want to do things much, much faster.

Conclusion

The technologies around the storage industry continue to evolve, giving us the ability to read and write data in unfathomable quantities while continuing to make it faster and smaller. But while the technology to create storage changes, the ways we consume it, present it, and leverage its capabilities seem very cyclical to me. I’m very curious to see where we go from here.
In the next post I will look at how companies such as Solidigm are pushing the traditional hard disk further into the recycling bin of history.

koolaid.info