
PM+CS Summit 2021 Presentation Abstracts

2021 Persistent Memory + Computational Storage Summit Presentations


Opening Remarks and State of the Union

Jim Pappas, SNIA Board of Directors, Director of Technology Initiatives, Intel Corporation

Abstract

Join Jim Pappas as he kicks off our first virtual Summit with an update on the status of persistent memory and computational storage at SNIA, and why expanding the Summit to include computational storage makes sense.


Future of Persistent Memory, DRAM, and SSD Form Factors Aligned with New System Architectures

Arthur Sainio, Director, Product Marketing, SMART Modular Technologies 

Abstract

The options for memory expansion and system acceleration are growing and starting to align with emergent serial and fabric-attached architectures. Application responsiveness and system performance are key values for end users. Modern data workloads require real-time processing of large datasets resident in main memory, but memory capacity has not scaled with the number of CPU cores available in modern servers. The new serial and fabric-attached interconnect architectures (CXL, OpenCAPI, Gen-Z) enable the use of other form factors, such as EDSFF E1.S and E3.S, for memory expansion and acceleration use cases. Application and hardware requirements are driving the selection of form factors best suited for adding cache-coherent serial or fabric-attached memory.


The Persistent Memory Connection - How to Attach PM in Computing Systems?

Jonathan Hinkle, Executive Director and Distinguished Researcher, Systems Architecture, Lenovo

Abstract

Persistent Memory is being used today to speed a wide range of applications, from traditional workloads like database and storage to emerging AI and edge workloads. As with many new technologies, early use of PM adapted it to existing system implementations to ease first adoption, from NVDIMM-N to 3D XPoint NVMe drives. With further progress, it is now clearer how to optimize the means of interfacing to PM in computing systems for maximum benefit. This session will chart the journey from early implementations to near-future architectures that will truly unlock the strong value of PM, and share some real results from next-generation hardware prototyping.


NVMe Computational Storage: A New Hope for Accelerators and DPUs

Stephen Bates, Chief Technology Officer, Eideticom

Abstract

Computational Storage offers the promise of a vendor-neutral, open-standard approach to moving parts of applications from a host to an accelerator. In this talk we will look at the promise and potential of the work being done within SNIA and NVMe to standardize how host CPUs communicate with and offload tasks to NVMe Computational Storage devices. This is the first time a device-level open standard is being developed for computation devices. We predict this will enable a vendor-neutral ecosystem around accelerators, DPUs, and Computational Storage devices that will lead to a new level of adoption. We will lay out our vision of this and Eideticom's role, products, and customer successes in this new and exciting world!


Dynamic Trends in Non-Volatile Memory Technologies

Tom Coughlin, President, Coughlin Associates and Jim Handy, General Director, Objective Analysis

Abstract

Non-volatile memories such as magnetic random access memory (MRAM), resistive RAM (RRAM), and phase change memory (PCM), including 3D XPoint, are rolling out in products serving enterprise, data center, edge, and endpoint applications. This talk will give projections on the growth of these new memory products, both discrete and embedded, with examples of current products. It will also discuss the role of memory and storage networks that enable emerging uses of these memories and will transform computing architectures.


What Does the Future Hold for Persistent Memory? A Panel Discussion

Moderator: Dave Eggleston, Principal, Intuitive Cognition Consulting

Abstract

Micron’s announcement that it is ending its work on 3D XPoint raises interesting questions about the future of PMEM. A select group of experts will tackle pointed topics concerning future challenges and opportunities for PMEM. This wide-ranging panel discussion will wrestle with the systems, applications, architectures, technologies, key players, and motivations in the evolving PMEM ecosystem. Don’t miss this timely event!

 


CXL 2.0 - Architecture and Benefits for Computational Storage

David Wang, Director, Memory Product Planning, Samsung Electronics

Abstract

Released in November 2020, the CXL™ 2.0 Specification added new features and functionality, including support for switching, enabling device fan-out, memory scaling, expansion, and the migration of resources; memory pooling for increased memory utilization efficiency; and support for persistent memory – all while maintaining full backwards compatibility with CXL 1.1 and 1.0.

The CXL 2.0 architecture enables computational storage. CXL brings persistent memory and storage together so that compute, memory, and storage no longer form a single fixed order. Rather than the traditional inline path from compute to memory to storage and back, CXL can go from compute directly to storage, or from compute to memory and from memory straight to storage.

In this presentation, attendees will learn about the CXL 2.0 architecture, as well as its benefits for computational storage.

 


Security in Computational Storage Drives  

David McIntyre, Director, Product Planning and Business Enablement, Samsung Corporation

Abstract

Data storage growth continues to increase the demand for additional compute resources to run applications in the cloud and at the edge. In-situ processing within a solid state drive, or an array of drives, provides a more efficient approach by bringing compute resources to the data: applications can be accelerated without sending the data to the host processor complex. With computational storage, stored data can also be made more resilient and secure, since less data is in flight between host compute and solid state drives. This presentation will provide an overview of security challenges and proposed solutions that address the policy requirements of cloud service providers and enterprise customers, who stand to benefit from this approach as data and storage requirements grow exponentially.


The Challenges of Measuring Persistent Memory Performance 

Eduardo Berrocal, Sr. Software Engineer, Intel and Keith Orsak, Master Technologist, Hewlett Packard Enterprise

Abstract

With multiple types of persistent memory in the market today, and many more potentially coming in the future, defining a way to consistently measure performance can be a challenge. The SNIA Solid State Storage (S3) Technical Work Group (TWG) has undertaken an effort to define and test persistent memory, in both drive and raw memory formats. The most recent effort is the Performance Test Specification (PTS). This talk will discuss the tests and cover: test methodology; platform setup; synthetic and real-world workloads; and the reporting format for test results. The PTS will enable an understanding of both block and byte-addressable memory performance for accurate real-world comparison.


Benefits of Computation in CSD, CSA, CSP - A Panel

Moderator: Mike Heumann, Managing Partner, G2M Communications

Abstract

Computational storage discussions raise interesting questions. Join our live panel discussion, where you can ask your questions and find out where our experts stand on subjects like why there are so many different form factors for computational storage; what the barriers to adoption are and how customers should approach them; and where computational storage goes in the future.



How Computational Storage Can Become a new Standard for Cloud Architectures

Jerome Gaysse, Senior Technology and Marketing Analyst, Silinnov Consulting

Abstract

The major challenge for cloud providers is to deal with infrastructures that provide performance, security, application availability, and scalability. Disaggregation and hyperconvergence are the current architecture trends for public and private cloud. What about computational storage?  This talk presents how computational storage can be used in a Cloud environment, complying with the infrastructure requirements listed above. A database acceleration use case based on both CSD and CSP will be highlighted, including architecture description and system benefits simulation results.

 


Recap of the Day and Closing Remarks

Tom Coughlin, President, Coughlin Associates and Jim Handy, General Director, Objective Analysis

Abstract

Missed a session?  Of course you can watch all sessions on demand at the end of the day, but don't miss the fastest 10-minute recap of key points and take-aways.


Birds-of-a-Feather - Alphabet Soup and Computational Storage

Moderators: Eli Tiomkin, Chair, SNIA Computational Storage SIG; Scott Shadley and Jason Molgaard, Co-Chairs, SNIA Computational Storage TWG

Abstract

In the evolving world of CPUs, GPUs, DPUs, SmartNICs, and PMEM, where and why is Computational Storage needed? Come to this BoF to discuss the alphabet soup, how these technologies work together, and how to satisfy different needs with different configurations. We will have some primer slides and look for input from attendees on the direction and value of each of these new solutions.


State of the Computational Storage Market - a Supplier's View

Scott Shadley, Co-Chair, SNIA Computational Storage Technical Work Group/VP Marketing, NGD Systems

Abstract

When researching the topic of Computational Storage, a lot of content continues to surface from vendors and, now more importantly, from the editors and authors of major outlets like EnterpriseAI and TechTarget. Even the analysts, like IDC, 451 Research, and most recently Gartner, are getting into the mix. The goal of this conversation is to look at some of the industry discussion, showcase where current vendors are in deployments with the use cases they have shared, and see where the logical integration point will occur, from standardization (SNIA, NVMe, etc.) and customer adoption to mass deployment. The compute infrastructure as we know it continues to evolve, first with the production and adoption of PM products, and now with the addition of Computational Storage solutions. But there is also a need to know what is right, wrong, deployable, and cause for concern. A walk through the evolution of all these items will leave the attendee with a better understanding of the market direction. Stay tuned and find out about all the hype.


Four Top Use Cases for Big Memory Today and Tomorrow

Dr. Charles Fan, CEO and Founder, MemVerge 

Abstract

Big memory computing consists of DRAM, persistent memory, and big memory software like Memory Machine from MemVerge, all working together in a new class of software-defined memory. During the first year since MemVerge unveiled big memory in May 2020, four strong use cases emerged: 1) in-memory databases, 2) cloud infrastructure, 3) animation and VFX, and 4) genomics. All share a critical need for composable memory with capacity, performance, availability, mobility, and security that can be tailored to the application. They also demand memory infrastructure with the capacity to handle massive data sets while avoiding the performance hit of IO to storage. In this session, Dr. Charles Fan will provide a detailed overview of the memory-related problems in each use case and how big memory addresses each challenge. He will also briefly review four use cases expected to emerge in the future.



Practical Computational Storage:  Performance, Value, and Limitations

Bradley Settlemyer, Sr. Research Scientist, Los Alamos National Labs   

Abstract

Ongoing standardization efforts for computational storage promise to make the offload of data intensive operations to near-storage compute available across a wide variety of storage devices and platforms. However, many technologists, storage architects and data center managers are still unclear on whether computational storage offers real benefits to practitioners. In this talk we describe Los Alamos National Laboratory’s ongoing efforts to deploy computational storage into the HPC data center. We focus first on describing the quantifiable performance benefits offered by computational storage; second, we present the economic case that motivates deploying computational storage into production; and third, we describe the practical limits that exist in applying computational storage to data center storage systems.



Persistent Memory on CXL

Andy Rudoff, Senior Software Engineer, Intel Corporation

Abstract

The emerging Compute Express Link (CXL) includes support for memory devices and provides a natural place to attach persistent memory (pmem). In this talk, Andy describes the additions made to the CXL 2.0 specification in order to support pmem. Device identification, configuration of interleave sets and namespaces, event reporting, and the programming model will be covered. Andy will describe how multiple standards like CXL, ACPI, and UEFI all come together to continue to provide the standard SNIA NVM Programming Model for pmem.


Why Distributed AI Needs Computational Storage

Michael Kagan, CTO, NVIDIA

Abstract

Artificial Intelligence is increasingly being used in every type of business and industry vertical including finance, telco, healthcare, manufacturing, automotive, and retail. The nature of AI is becoming distributed across multiple nodes in the data center but also across the cloud and edge. Traditional local and networked storage solutions often struggle to meet the needs of AI running on many different types of devices in various locations. Computational storage can solve the challenge of data locality for distributed AI. These solutions include smart storage devices, adding data processing to storage arrays, and deploying new types of compute processors in the storage, next to the storage, or even in the storage network.


Present and Future Uses for Computational Storage and Persistent Memory:  A Panel Discussion

Moderated by Dave Eggleston, Principal, Intuitive Cognition Consulting   

Abstract

As CS and PMEM products have moved from concepts into reality, there has been a similar migration in their application usages and value propositions. Join in as this group of experts debates why CS and PMEM are needed, the real-world problems they solve, the barriers still blocking the way, and the pots of gold over the horizon. Additionally, this lively panel will tussle over CPUs vs. GPUs vs. DPUs, cloud vs. edge adoption, scale-up vs. scale-out, and legacy vs. new interconnects, as well as your live audience questions!



Beyond Zoned Namespace - What Do Applications Want?

Chun Liu, Chief Architect, Futurewei Technologies 

Abstract

As data processing engines use more and more log semantics, it is natural to extend Zoned Namespace to provide a native log interface by introducing variable-size, byte-appendable, named zones. Using this newly introduced ZNSNLOG interface, the storage device enables not only lower write amplification and higher log-write performance, but also a more flexible and robust naming service. Given the trends toward a compute-storage disaggregation paradigm and more capable computational storage, the ZNSNLOG extension enables more opportunities for near-data processing.
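To make the append semantics concrete, here is a small host-side sketch of an append-only named log zone with a write pointer — the per-zone behavior ZNS provides today, and which the proposed ZNSNLOG extension would generalize to variable-size, byte-appendable, named zones. All type and function names here are illustrative, not taken from any specification.

```c
/* Host-side emulation of an append-only log zone. Writes are accepted
 * only at the write pointer, which is the core ZNS invariant; the
 * "name" field sketches the naming service the abstract describes.
 * Names and layout are illustrative, not from the NVMe ZNS spec. */
#include <stdint.h>
#include <string.h>

#define ZONE_CAP 4096   /* capacity of this toy zone, in bytes */

typedef struct {
    char     name[32];        /* named zone: log identifier */
    uint8_t  data[ZONE_CAP];  /* zone contents */
    uint32_t wp;              /* write pointer: next append offset */
} log_zone;

/* Append a variable-size record; returns the offset it landed at,
 * or -1 if the zone is full (caller must reset or open a new zone). */
static int64_t zone_append(log_zone *z, const void *rec, uint32_t len) {
    if (z->wp + len > ZONE_CAP)
        return -1;                  /* zone full */
    uint32_t off = z->wp;
    memcpy(z->data + off, rec, len);
    z->wp += len;                   /* advance write pointer */
    return off;
}
```

Because the device, not the host, assigns the landing offset, multiple writers can issue appends concurrently without coordinating on addresses — the property that makes zone append a natural fit for log-structured engines.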


A New Path to Better Data Movement within System Memory, Computational Memory with SDXI

Shyam Iyer, Chair, SNIA SDXI Technical Work Group; Distinguished Engineer, Dell 

Abstract

Today, computation associated with data in use occurs in system memory. As the system memory envelope expands to include different tiers and classes of memory, helped by memory fabrics, the data-in-use envelope increases as well. In many usage models, moving data to where the computation occurs is important. In other usage models, data copies are needed for compute scaling. Data movement is a resource-intensive operation used by a variety of software stacks and interfaces. Offloading data movement frees up chargeable compute cycles.

While the ability to use offloads has existed, they have not been easy to integrate into various user- and kernel-level software stacks. The industry needs a standard data movement offload interface. Additionally, a standard data movement interface that envisions offloading other computations involving system memory is highly desirable.

This talk will connect the audience with SNIA’s SDXI (Smart Data Acceleration Interface) TWG’s efforts in this space. SNIA’s SDXI TWG is standardizing a memory to memory data movement and acceleration interface.
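As a conceptual sketch of the descriptor-driven model that SDXI standardizes, the code below emulates a tiny memory-to-memory data mover in software: a producer fills in copy descriptors, and an "engine" consumes them and signals completion. The descriptor fields and function names are hypothetical simplifications; the actual SDXI descriptor layout and control interface are defined by the SDXI specification.

```c
/* Software emulation of a descriptor-based memory-to-memory data mover,
 * in the spirit of SNIA SDXI. Field and function names here are
 * illustrative only; the real descriptor format comes from the spec. */
#include <stddef.h>
#include <string.h>

typedef struct {
    void   *src;      /* source address */
    void   *dst;      /* destination address */
    size_t  len;      /* bytes to move */
    int     done;     /* completion flag, set by the "engine" */
} copy_desc;

/* The "engine": consumes a ring of descriptors and performs the copies
 * that would otherwise consume host CPU cycles. In hardware this runs
 * asynchronously; here it runs inline for clarity. */
static void dma_engine_run(copy_desc *ring, size_t n) {
    for (size_t i = 0; i < n; i++) {
        memcpy(ring[i].dst, ring[i].src, ring[i].len);
        ring[i].done = 1;   /* signal completion back to the producer */
    }
}
```

The value of standardizing this split is that user- and kernel-level software can post descriptors through one interface regardless of which vendor's engine, or a software fallback like the one above, actually moves the bytes.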

 


Security Impacts to a Changing Ecosystem

A Panel Discussion moderated by Jason Molgaard, Principal Storage Solutions Architect, Arm

Abstract

Computational Storage may introduce new attack surfaces for hackers. The threats themselves may be familiar, but they can now potentially be deployed in the storage device itself. Vendors and end users need to look hard at security to ensure secure deployments. This session will explore supply chain issues and implications, the state of specifications and standards related to this technology, and potential security opportunities of the technology.


CXL: Expanding the Ecosystem

A Panel Discussion moderated by Tom Coughlin, President, Coughlin Associates

Abstract

CXL is an open industry standard that provides high-bandwidth, low-latency interconnection. This panel will discuss how CXL works and how it changes the use of memory, accelerators, and memory management, as well as use cases, availability, and how to get involved.


Recap of the Day and Closing Remarks

Tom Coughlin, President, Coughlin Associates and Jim Handy, General Director, Objective Analysis

Abstract

With all sessions now available on-demand, join us for a recap of Day 2 highlights.