Want to make your storage sing? Then make sure it’s designed for composability


SPONSORED While the market numbers are buoyant, most organizations already have a good understanding of the benefits of adopting hyperconverged infrastructure (HCI). By tightly integrating compute, storage, and networking into a single platform, HCI delivers simplicity, scalability, and easier management, whether organizations are building large-scale data centers, banking systems, or enterprise-wide e-commerce. In addition, the OPEX, subscription-style pricing that HCI providers have widely adopted tends to win approval from the finance department.

However, in a standard HCI configuration, individual components may be underutilized by some workloads. If some applications are more computationally intensive, you can always add racks specially tuned for them. For more data-intensive applications, you can add extra SSDs to some racks, perhaps sitting alongside the GPU-enriched SKUs you’ve ordered for your AI workloads.

Then, before you know it, bubbling beneath the surface of this placid pool of similar-looking boxes is what Scott Hamilton, senior director of product management and marketing at Western Digital, describes as a “SKU-nami”.

“The more you have, the harder it is to manage and predict,” he argues. Conversely, with fewer options and less granularity, you can end up with a lot of stranded resources.

The direction in which modern workloads are moving makes this problem even harder. The move to cloud and cloud-native applications places more emphasis on flexibility and scalability. At the same time, the new generation of AI and machine learning applications relies on vast amounts of data (although this will vary depending on whether the focus is on training or inference at any given time).

These are the macro trends. On a more micro level, NVMe SSDs have increased the amount of data that can be served up to the processing cores of a system, resulting in better CPU utilization. But why confine this to individual servers? “NVMe led the way. Now the standard allows NVMe over Fabrics, which basically means I disaggregate my NVMe storage, which was directly connected to the CPU over PCIe, and expand it out onto a fabric so that it can be shared,” says Hamilton.
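For readers who want a sense of what that sharing looks like in practice, here is a minimal sketch of discovering and connecting to an NVMe over Fabrics target from a Linux host using the standard nvme-cli tool, wrapped in Python. The address, port and subsystem NQN are placeholders, not values from this article, and the commands assume nvme-cli is installed and run with root privileges.

```python
# Illustrative sketch: attach shared, fabric-connected NVMe storage to a host
# with nvme-cli. The target address and NQN below are placeholders.
import subprocess

TARGET_ADDR = "192.0.2.10"   # hypothetical fabric-attached storage target
TARGET_PORT = "4420"         # conventional NVMe-oF service port
TRANSPORT = "tcp"            # could also be "rdma" on an RDMA-capable fabric

def run(cmd):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Ask the target which NVMe subsystems it exposes over the fabric.
print(run(["nvme", "discover", "-t", TRANSPORT, "-a", TARGET_ADDR, "-s", TARGET_PORT]))

# Connect to one advertised subsystem; it then appears as a local /dev/nvmeXnY
# device even though the flash physically sits in a shared enclosure.
SUBSYS_NQN = "nqn.2018-01.example:shared-flash"  # placeholder NQN
run(["nvme", "connect", "-t", TRANSPORT, "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT])

# Verify the remote namespace now shows up alongside any local NVMe drives.
print(run(["nvme", "list"]))
```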

NVMe sets the tone

Once you’ve taken the plunge into sharing an NVMe resource pool, the next obvious step is to ask yourself: “Wouldn’t it be great if I could disaggregate all of my resources, whether it’s compute, GPUs or storage, fast or high capacity, and make it all shareable?”

These disaggregated components can then be composed into new logical entities that are precisely tailored to specific workloads and projects. “If it’s dynamic, you can put those resources back into the pool when they’re no longer needed, whether it’s over time, days, or projects.”
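To make the compose-and-release idea concrete, here is a purely conceptual Python sketch of a shared resource pool from which logical systems are claimed and later handed back. It is an illustration of the pattern only, not Western Digital’s software or API.

```python
# Conceptual sketch of compose/decompose: resources live in a shared pool,
# get bound into a logical system for a workload, and return when released.
from dataclasses import dataclass, field

@dataclass
class Resource:
    kind: str      # e.g. "cpu", "gpu", "nvme-flash", "disk"
    ident: str
    in_use: bool = False

@dataclass
class Pool:
    resources: list = field(default_factory=list)

    def compose(self, wants: dict) -> list:
        """Claim the requested number of free resources of each kind."""
        claimed = []
        for kind, count in wants.items():
            free = [r for r in self.resources if r.kind == kind and not r.in_use]
            if len(free) < count:
                raise RuntimeError(f"not enough free {kind} resources")
            for r in free[:count]:
                r.in_use = True
                claimed.append(r)
        return claimed

    def release(self, claimed: list) -> None:
        """Return a composed system's resources to the shared pool."""
        for r in claimed:
            r.in_use = False

pool = Pool([Resource("cpu", f"cpu-{i}") for i in range(4)] +
            [Resource("nvme-flash", f"flash-{i}") for i in range(10)])
db_node = pool.compose({"cpu": 2, "nvme-flash": 4})   # tailor to a workload
pool.release(db_node)                                  # hand it all back later
```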

This is the vision Western Digital set out in 2018, when the storage giant announced its proposal for an open composable disaggregated infrastructure (CDI), complete with an API. Hamilton describes the initiative as a framework that allows communication between all resources, “which are peers in a CDI model, as opposed to an HCI model, where the CPU is the hub and everything else is below it.”
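Because every resource is a peer in this model, a device such as a flash enclosure can answer API queries directly rather than sitting behind a CPU-centric controller. The short sketch below shows the general shape of querying such a REST interface; the host name and endpoint paths are assumptions made for illustration, so consult the Open Composable API documentation for the actual resource model.

```python
# Sketch only: the device address and endpoint paths are hypothetical,
# illustrating the idea of querying a fabric-attached device directly.
import json
import urllib.request

BASE = "http://openflex-enclosure.example.local"  # hypothetical device address

def get(path):
    with urllib.request.urlopen(BASE + path) as resp:
        return json.load(resp)

# In a peer-to-peer CDI model, the enclosure itself answers inventory queries.
devices = get("/query/devices")        # hypothetical inventory endpoint
for dev in devices.get("members", []):
    print(dev.get("id"), dev.get("type"), dev.get("capacity"))
```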

The 2018 CDI announcement was accompanied by the OpenFlex E3000 fabric enclosure, a 3U device that houses up to 10 OpenFlex F3200 flash devices, each offering up to 61.4TB of capacity, up to 2.1 million write IOPS, and 11.5GB/s of write bandwidth.

“We also demonstrated a disk attached to the fabric,” adds Hamilton. “It was managed in terms of NVMe namespaces, which are like volumes in the world of disks. Everything can be managed the same way – at least for storage, whether it’s flash or disk. It just has different features and prices.”
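As a small illustration of managing storage through namespaces, the sketch below enumerates the namespaces a connected NVMe controller exposes, again via nvme-cli. The device path is a placeholder for whatever controller the earlier fabric connection created.

```python
# Quick sketch: NVMe namespaces (roughly, volumes) on a fabric-attached device
# are enumerated the same way as on a local drive. Device path is a placeholder.
import subprocess

DEVICE = "/dev/nvme1"  # hypothetical controller created by the earlier connect

# List the namespace IDs the controller exposes.
print(subprocess.run(["nvme", "list-ns", DEVICE],
                     check=True, capture_output=True, text=True).stdout)

# Inspect one namespace's size and block format, much as you would a volume.
print(subprocess.run(["nvme", "id-ns", DEVICE + "n1"],
                     check=True, capture_output=True, text=True).stdout)
```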

So, in Western Digital’s vision, composability and disaggregation don’t just apply to production workloads, but also to colder storage and, ultimately, to an entire infrastructure in which everything is disaggregated, attached to the fabric, and composable.

Being open makes things “a little more complicated,” Hamilton admits, but working with a large ecosystem will accelerate the development of that infrastructure. “You have to work with the NIC partners, you have to work with the switching partners… we partner with a lot of people in this ecosystem, whether it’s Broadcom, whether it’s Mellanox, which is now Nvidia, which then gets into the GPU arena.”

The results of CDI pilots and proofs of concept now show exactly what the approach can offer.

Composability? What’s the score?

Breaking storage out of the box for general enterprise workloads, such as Oracle or SQL Server applications, should enable enterprises to remove the data bottlenecks found in HCI systems, improving response times and speeding up queries. Additionally, any time another HCI system is added simply to resolve a storage shortfall, organizations can face additional licensing costs. By disaggregating the infrastructure, they can avoid this expense, scaling out (cheaper) storage independently of (more expensive) compute and ensuring better CPU utilization.
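A quick back-of-the-envelope calculation shows the shape of that cost argument. All of the prices and capacities below are made-up illustrative figures, not numbers from Western Digital or this article; the point is structural rather than the specific totals.

```python
# Illustrative arithmetic only: hypothetical list prices and capacities.
import math

HCI_NODE_COST = 40_000        # hypothetical: server plus HCI/hypervisor licences
HCI_NODE_STORAGE_TB = 20      # hypothetical usable capacity added per HCI node
JBOF_COST = 25_000            # hypothetical fabric-attached flash shelf
JBOF_STORAGE_TB = 60          # hypothetical usable capacity per shelf

extra_tb_needed = 120         # storage shortfall, with compute already adequate

hci_nodes = math.ceil(extra_tb_needed / HCI_NODE_STORAGE_TB)
jbof_shelves = math.ceil(extra_tb_needed / JBOF_STORAGE_TB)

print(f"Scaling via HCI nodes: {hci_nodes} nodes, ${hci_nodes * HCI_NODE_COST:,}")
print(f"Scaling via disaggregated flash: {jbof_shelves} shelves, "
      f"${jbof_shelves * JBOF_COST:,}")
# With disaggregation you buy only the resource you are short of, rather than
# licensed compute you do not need.
```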

Likewise, Western Digital is working with “a big software company” on a customer use case to improve the experience of managing service tickets. The company is evaluating a software-defined storage (SDS) approach, using OpenFlex devices with SSDs, to handle huge volumes of customer and service data. As Hamilton explains, “What they’re trying to do is reduce latency, which allows them to differentiate themselves and make their overall system faster and more cost-effective.”

On a larger scale, Western Digital is working on a customer use case with a telecommunications provider that has adopted HCI for its content delivery network. “It was just getting too expensive” as the company added customers to individual points of presence, and the HCI model it had deployed meant it was not making adequate use of the existing storage in each of the servers.

“We offer them disaggregated flash storage,” says Hamilton. “So it’s very fast, low latency, but it’s shared across the fabric, and then they can have their compute nodes. We can offer them much higher efficiency.” Some projections suggest the CDI approach could deliver a 10-fold improvement in dollars per gigabit per second.

Not all CDI applications need to focus on composability and on-the-fly orchestration. According to Hamilton, the approach also lends itself to building more static, but highly efficient and cost-effective, infrastructure. “You can think of disaggregation as a great building block and then do something very specific.”

For example, France’s Brain and Spine Institute (ICM) has put the OpenFlex platform to work in its research infrastructure. The system handles large amounts of data captured from a range of medical imaging tools, including MRI systems and digital light-sheet microscopes that generate up to 21TB of data per hour. Transferring that data to researchers placed such a strain on the previous infrastructure that researchers often had to work with lower-resolution images. Even with this workaround, it could take four hours to copy data to storage. The alternative of adding local storage to the workstations was a no-go because the Institute lacked physical space at its Paris headquarters.
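A quick bit of arithmetic on the figures quoted above helps explain the pressure: 21TB of microscope output per hour works out to a sustained ingest rate of roughly 5.8GB/s, which is why moving captures over the old setup took hours.

```python
# Sanity check on the data rate implied by 21 TB of imaging data per hour.
TB_PER_HOUR = 21
bytes_per_hour = TB_PER_HOUR * 10**12
gb_per_second = bytes_per_hour / 3600 / 10**9
print(f"{TB_PER_HOUR} TB/hour ≈ {gb_per_second:.1f} GB/s sustained")
```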

The ICM opted for a centralized OpenFlex-based system, initially serving 10 workstations, with plans to eventually scale to more than 50. The system delivers full high-resolution images with latencies of 34 microseconds or less, allowing scientists to analyze more images at four times the resolution the previous system allowed. Other benefits include removing the need to support multiple storage servers across the campus, while additional microscopes can be added without having to deploy extra local storage.

In conclusion, HCI opened people’s eyes to the possibilities of software-defined storage. However, the tight integration of its components reduces flexibility and can hit performance and budgets. In response, Western Digital designed its disaggregated CDI platform to provide that greater flexibility, with an approach that feeds into whatever SDS or orchestration layer customers may already have in place. “You can see that software-defined storage and its adoption over time is increasing rapidly… and CDI is benefiting,” says Hamilton.

And while not everyone needs to orchestrate their infrastructure on the fly today, they do need the flexibility to tailor their infrastructure more precisely to their current and future workloads. At the end of the day, Hamilton says, “if I don’t disaggregate, then I’m not really composing.”

This article is sponsored by Western Digital.

