Kalray NGenea
Next Generation
Global Data Platform
Build smarter, more efficient, and energy-wise data center
infrastructures for data-centric applications
Kalray provides game-changing data processors and acceleration cards, unstructured data management software, and high-performance storage for data-intensive workloads.
Supercharge your storage for your most demanding data-centric applications.
Unify storage tiers and enable remote teams to collaborate effectively.
Obtain faster and better insights from your fast-growing data sets.
“Dell Technologies and Kalray have joined forces to deliver an optimized HPC storage solution that delivers the throughput and capacity needed to manage rapid data growth and increased demands from data-intensive HPC and AI workloads like the Duos RIP system.”
Elliott Berger, Sr. Solution Architect, Data Science Infrastructure Solutions at Dell Technologies
Kalray’s technologies enable customers to design the most powerful data-centric applications in expanding sectors such as Media & Entertainment, Life Sciences, Research, Manufacturing and Telecommunications.
We enable customers to scale infrastructures to efficiently meet the performance and capacity requirements of data-intensive workloads and AI applications.
We support complex existing and new environments, including multi-location, multi-silo, multi-cloud, and on-premises infrastructures, both old and new.
Our products are designed to deliver the best TCO efficiency: our DPUs and storage solutions deliver the highest performance per watt and per dollar spent.
Our solutions do not create data or vendor lock-in: they feature open and pluggable architectures without proprietary formats.
NGenea unifies heterogeneous storage tiers to maximize the ROI of existing infrastructures while leveraging new storage environments, on premises or in the cloud. NGenea’s automated data management ensures data is available wherever it is needed across the global workflow.
NGenea is a global unstructured data management and storage solution for data-intensive workloads. NGenea combines storage tiers to enable better insights and accelerated workflows while automating data management, ensuring that data is available wherever it is needed across the global workflow. NGenea supports all major cloud vendors, including AWS, Azure, and GCP, as well as on-premises deployments.
SOLUTION BRIEF – NGenea
SOLUTION BRIEF – Dell Technologies & Kalray NGenea
Top 5 Reasons To Choose Dell Technologies & Kalray NGenea
NG-Hub is an easy-to-use web interface featuring an array of tools that allow centralized control of all storage within a global namespace. It provides users with a digestible view of their data and global search functionality to perform a single search across multiple sites. Users have control of monitoring, management, and bandwidth allocation, with the ability to customize and automate sequences at the click of a button.
DATA SHEET – NG-Hub Data Management Solutions
As storage requirements keep growing astronomically, customers seek to lengthen the life cycle of their storage systems…
NGenea’s NG-Stor is a high-performance storage tier for the most data-intensive workloads. Powered by a proven high-performance parallel file system, trusted by thousands of organizations worldwide, NG-Stor can easily manage petabytes of data and billions of files, all under a single global namespace.
DATA SHEET – NG-Stor High Performance Computing
Innovative AI/ML, HPC, and Video applications require Tier-0 storage with lower latency and higher IOPS & throughput to maximize the processing cycles of their compute infrastructure…
NGenea can further accelerate data-intensive workloads through integrations with NG-Box, Kalray’s all-NVMe flash array. As such, NGenea can meet the storage requirements of the most demanding HPC, AI/ML and post-production workloads.
DATA SHEET – NG-Box Accelerate Your Workloads
Generative AI refers to AI models capable of generating new content, be it text, graphics, music, or more. These models are typically powered by deep learning techniques like GANs (Generative Adversarial Networks) or RNNs (Recurrent Neural Networks). They can generate content that closely resembles human-created data. Generative AI significantly augments capabilities to produce novel, data-driven outputs and has found applications in a wide range of industries.
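To make the GAN concept mentioned above concrete, here is a minimal, purely illustrative PyTorch sketch of a toy generator and discriminator with a single adversarial training step on random placeholder data; the network sizes and data are assumptions for illustration only, not a production generative model.

```python
# Minimal, illustrative GAN sketch (assumes PyTorch is installed).
# Toy generator/discriminator with one adversarial training step on random data.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, data_dim) * 2 - 1   # placeholder "real" data
noise = torch.randn(32, latent_dim)
fake_batch = generator(noise)

# Discriminator step: learn to separate real from generated samples.
d_opt.zero_grad()
d_loss = loss_fn(discriminator(real_batch), torch.ones(32, 1)) + \
         loss_fn(discriminator(fake_batch.detach()), torch.zeros(32, 1))
d_loss.backward()
d_opt.step()

# Generator step: learn to fool the discriminator.
g_opt.zero_grad()
g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
g_loss.backward()
g_opt.step()
```

In practice, models of this kind are trained over many iterations on very large data sets, which is precisely what drives the storage demands discussed in this section.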
Kalray NGenea addresses the storage challenges of Generative AI: NG-Hub, NGenea’s data management layer, enables automated data transfer between on-premises and cloud storage and supports all data sizes. Intelligent caching minimizes data transfer costs and ensures instant data access. NG-Stor features a high-performance parallel file system that can easily manage petabytes of data, supporting the most demanding AI models. As such, Kalray NGenea provides the best performance, scalability and efficiency for Generative AI data management.
Generative AI applications require fast and seamless data access, especially when dealing with large data sets. The sheer volume of data required to train and fine-tune generative models can strain traditional storage systems, leading to performance bottlenecks and prolonged model training times. To make things even more challenging, data sets typically include a mix of large and small files and objects. Additionally, data format compatibility can be a hurdle, as generative AI models often require data in its native format for optimal performance. Lastly, managing and scaling storage infrastructures to accommodate the ever-growing data demands of generative AI applications can be complex and resource-intensive. Hybrid cloud offers the benefit of virtually unlimited scalability but requires fast data movement between on-premises and cloud infrastructures.
New workloads are entering the enterprise storage space: next-gen processors power innovative AI/ML, HPC, and Video applications. Enterprises need to maximize the processing cycles of their expensive compute infrastructure through low-latency, high-performance Tier-0 storage.
Powered by a proven high-performance parallel file system trusted by thousands of organizations worldwide, NG-Stor can easily manage petabytes of data and billions of files, all under a single global namespace. NG-Stor meets the storage requirements of the most demanding HPC, AI/ML and post-production workloads. It leverages the fastest components (NVMe) to feed the most data-hungry compute processors and delivers the highest throughput & IOPS/$ and lowest latency.
Quickly and easily provide applications and users access to any data in the global environment, regardless of where they’re located.
Cloud storage continues to be a big topic for enterprises: a key use case is global collaboration, to support distributed applications and remote teams. Global collaboration requires a single global namespace that allows remote users and applications to instantly access and share data across all storage tiers.
NGenea’s NG-Hub enables organizations to securely share data between remote teams, across sites. It creates a single namespace with supported (S3) CSPs and provides instant data access for distributed applications and remote users. NGenea features real-time availability, powerful search and instant data access. It supports on-premises clouds as well as AWS, Azure and GCP.
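As a rough sketch of the single-namespace idea (and not the NG-Hub implementation itself), the Python example below merges object listings from several S3-compatible endpoints into one logical view using boto3; the endpoint URLs, site names and bucket names are hypothetical placeholders.

```python
# Illustrative sketch only: merge object listings from several S3-compatible
# endpoints into a single logical view. Endpoints and buckets are hypothetical.
import boto3

# Hypothetical sites: an on-premises S3-compatible store and a public cloud bucket.
SITES = [
    {"name": "on-prem", "endpoint": "https://s3.onprem.example", "bucket": "projects"},
    {"name": "aws",     "endpoint": None,                        "bucket": "projects-cloud"},
]

def unified_listing(prefix=""):
    """Return a merged {logical_path: site} view across all configured sites."""
    namespace = {}
    for site in SITES:
        s3 = boto3.client("s3", endpoint_url=site["endpoint"])
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=site["bucket"], Prefix=prefix):
            for obj in page.get("Contents", []):
                # Present every object under one logical namespace, tagged with its site.
                namespace[f"/{site['name']}/{obj['Key']}"] = site["name"]
    return namespace

if __name__ == "__main__":
    for path, site in sorted(unified_listing().items()):
        print(f"{path}  (stored at: {site})")
```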
Gain control to fully and easily leverage resources no matter where they’re located. Burst to the cloud when needed and scale back when done.
Cloud bursting for data-intensive workloads requires organizations to quickly serve relevant data to compute instances in the Cloud. NGenea’s NG-Hub enables organizations to automatically transfer data from on-premises storage into the cloud, without manual data movement. There is no limit on the number of cloud compute instances and intelligent caching minimizes data transfer costs. NG-Hub provides each cloud compute instance with an easy access protocol, adheres to on-premises ACLs and security and only retrieves required data to reduce egress costs.
When on-premises infrastructures reach peak capacity, organizations “burst” extra workloads to public cloud services. This is a convenient and cost-effective method of supporting workloads with fluctuating demand patterns. Public cloud services enable organizations to easily scale up or down to meet workload demands and are available worldwide. Cloud bursting allows organizations to reduce additional investment in on-premises infrastructure while leveraging the scale and flexibility of public cloud solutions. As such, they avoid service interruptions to business-critical applications caused by sudden workload spikes.
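The Python sketch below illustrates the generic on-demand retrieval and caching pattern described above: fetch only the objects a cloud compute instance actually needs and keep a local copy so each object is transferred at most once. The bucket name, cache directory and object keys are hypothetical, and this is not the NG-Hub implementation.

```python
# Illustrative sketch of on-demand retrieval with a local cache, so that each
# object is transferred at most once. Bucket and paths are hypothetical.
import os
import boto3

BUCKET = "onprem-dataset-mirror"   # hypothetical staging bucket
CACHE_DIR = "/scratch/cache"       # hypothetical local cache on the compute instance

s3 = boto3.client("s3")

def fetch(key: str) -> str:
    """Return a local path for `key`, downloading it only on first access."""
    local_path = os.path.join(CACHE_DIR, key)
    if not os.path.exists(local_path):           # cache miss: retrieve once
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(BUCKET, key, local_path)
    return local_path                            # cache hit: no extra transfer

# A workload asks only for the objects it needs; unused data is never moved.
for key in ["frames/shot_001.exr", "frames/shot_002.exr"]:
    print("ready:", fetch(key))
```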
The move from HDD to SSD created a whole new generation of much faster all-flash arrays. But traditional SSDs use SATA interfaces, which were designed for HDDs and not optimized for fast I/O. Today’s generation of NVMe SSDs is much faster than traditional (SATA) SSDs, as the NVMe interface unlocks high IOPS, high bandwidth and ultra-low latency.
However, NVMe flash is more difficult to deploy at scale, as traditional storage controllers cannot handle NVMe performance and become bottlenecks. Some vendors have designed scale-out NVMe solutions based on x86, but those lack the convenience of traditional arrays, which customers have come to rely on over the past decades.
Kalray’s NG-Box is a disaggregated NVMe storage array that was designed from the ground up to leverage the full potential of NVMe flash devices at massive scale, while ensuring the lowest storage Total Cost of Ownership (TCO). Featuring the Kalray K200-LP Smart Storage Controllers, NG-Box enables customers to deploy NVMe with the convenience of traditional storage array architectures, without performance or durability trade-offs.