IBM at PyTorch 2023

San Francisco, CA, United States
This event has ended.

About

IBM is a proud sponsor of the 2023 PyTorch Conference in San Francisco on October 16-17. Join us to learn more about watsonx and its capabilities, and how open-source frameworks like PyTorch play a critical role in scaling enterprise AI. Here’s how:   

  • Visit the IBM booth Monday evening and Tuesday to view interactive demos of our conversational AI platform watsonx Assistant, and IBM’s geospatial AI foundation model developed in collaboration with NASA.   
  • Gather at the Women and Non-Binary in PyTorch Breakfast, hosted by Quiana Berry (Red Hat), on Tuesday morning to discuss navigating the AI revolution responsibly in the open-source community.   
  • Attend keynotes by IBM researchers Raghu Ganti on how to leverage PyTorch to scale AI training and inference, and Priya Nagpurkar on the value of open-source for the enterprise.   
  • Hear from IBM in poster presentations and lightning talks that discuss topics such as enabling generative AI on new hardware platforms, using PyTorch for geospatial AI model training, and scaling cloud-native PyTorch FSDP to 20B parameters for watsonx. 

We look forward to seeing you in San Francisco!

For presentation times of workshops, demos, and papers see the agenda section below.
Note: All times are displayed in your local time.

Read the IBM Developer blog post on IBM contributions at the conference

Agenda

  • Visit the IBM booth Monday evening and Tuesday to view interactive demos of our conversational AI platform watsonx Assistant, and IBM’s geospatial AI foundation model developed in collaboration with NASA.

  • This talk describes our journey enabling generative AI applications on a new hardware (HW) platform. We are working on running generative AI applications on IBM z, addressing both correctness and runtime performance, and we will share practical lessons for developers bringing PyTorch and its ecosystem to new HW. IBM z is unusual in that it uses big-endian byte order; most HW platforms are little-endian, and big-endian was not well supported in PyTorch and its pip packages. By fixing test and application failures, we added support for both endians, so that, for example, pre-trained models can be exchanged across any platform. Our 32 PRs eliminated all test failures on IBM z, and the ecosystem, such as the Hugging Face (HF) Transformers framework, now works well. We will share our experience enabling CI to keep the main branch healthy for a new HW, describe enabling HW acceleration features such as SIMD in the PyTorch runtime and TorchInductor, and briefly cover exploiting an in-core AI accelerator. Takeaways: enabling a new HW without test failures in PyTorch and its ecosystem, such as HF Transformers; adding CI for a new HW platform in the upstream; and enabling performance features for a new HW.

    Speaker: Kazuaki Ishizaki (IBM)
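    The core portability problem the abstract describes can be illustrated with a small, hypothetical sketch (not code from the talk): serializing float32 data with an explicit byte order, so that big-endian hosts such as IBM z and little-endian hosts can exchange the same checkpoint bytes. The helper names are illustrative.

    ```python
    import struct

    # Hypothetical helpers (illustrative only): always write float32 data
    # little-endian on disk, regardless of the host's native byte order.
    # A big-endian host (e.g. IBM z) and a little-endian host then read
    # and write byte-identical buffers.

    def to_portable_bytes(values):
        # '<f' forces little-endian float32 independent of the native order
        return struct.pack(f"<{len(values)}f", *values)

    def from_portable_bytes(data):
        n = len(data) // 4  # four bytes per float32 value
        return list(struct.unpack(f"<{n}f", data))

    blob = to_portable_bytes([1.0, 2.5, -3.0])
    restored = from_portable_bytes(blob)
    ```

    Because the byte order is pinned in the format string rather than inherited from the host, the round trip is stable across platforms.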

  • I am a middleware developer and architect who has worked in open source as a maintainer for many years; prior to that I was a middleware developer. I had a lot of success bringing architectural ideals and middleware-developer thought processes to my previous open-source project, Egeria, and I have just joined PyTorch full time. This session details my thoughts on how PyTorch, as a project, can be enhanced and made more accessible by using architectural and middleware development approaches. The idea is not to replace existing working processes, but to add enrichment on top of them. I hope the audience and community will buy into this approach and help me realise it; I think it is key to moving PyTorch from being research-centric towards wider adoption. The session will cover: my initial impressions as a newcomer to PyTorch; the need for a big-picture conceptual overview of PyTorch at a higher level than the code in the documentation, with many more overview diagrams; and the need for a glossary so that users have terms defined once and referred to consistently.

    Speaker: David Radley (IBM)

  • Computer hardware evolution has accelerated. In just the last few years we have seen new ideas for combining heterogeneous CPU cores, multiple types of memory, a renewed focus on power saving, expectations for dynamically changing resource properties, and enhancements to security and confidentiality for cloud workloads and data, in traditional applications as well as new AI model processing. This evolution forces us to revisit the resource management domain of Kubernetes, one of the most fundamental and complex areas of the system. Under the hood, Kubernetes relies on several components: the kubelet, runtimes, monitoring agents, the OS kernel, and the hardware itself. Focusing on the specifics of AI workloads and use cases, this presentation will touch on UX changes for end users, interfaces between Kubernetes stack components, and how different pluggable algorithms, implemented via code and new policies, can help achieve performance, power, resource-utilization, and prioritization goals.

    Speakers: Mike Brown (IBM) & Alexander Kanevskiy (Intel)

  • Navigating the AI Revolution Responsibly in the Open Source Community 

    As the AI revolution reshapes the technological landscape, it is imperative to prioritize diversity, equity, and inclusion in open source communities. This engaging breakfast session at the PyTorch Conference invites diverse open source leaders to embark on a reflective journey. Together, we will explore the room for change in the AI ecosystem, establish governance guardrails to shield vulnerable communities from potential harm, and underscore the critical need to protect marginalized voices. 

    With a compelling blend of thought-provoking insights and creative expression, the session will dive into real-world examples and strategies for ethical AI governance, and showcase initiatives that integrate inclusivity into AI development. Quiana Berry, a passionate advocate at the intersection of technology and social justice, will present a powerful poetry performance that encapsulates the nuances of the AI revolution and the importance of safeguarding diverse communities.

  • Visit the IBM booth Monday evening and Tuesday to view interactive demos of our conversational AI platform watsonx Assistant, and IBM’s geospatial AI foundation model developed in collaboration with NASA.

  • In this talk we will share lessons learned about PyTorch 2.0 compile after using it in IBM’s watsonx.ai stack, with NVIDIA GPUs and custom IBM accelerators, as the main inference acceleration solution. Specifically, we will cover the results of our latency and throughput experiments across a range of LLMs, spanning encoder-only, encoder-decoder, and decoder-only transformer architectures. We will present performance comparisons with other approaches in the field, as well as our collaboration with the core PyTorch team to fix bugs we encountered when using features such as dynamic shapes and CUDA graph trees. We will also discuss how we have been using the torch.compile() API to compile and run models on IBM’s AIU accelerator, and why we made that choice. Finally, we will cover the interaction of parallelism approaches, such as tensor parallel for bigger models, combined with compile for inference workloads.

    Speaker: Antoni Viros i Martin (IBM)
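    As a rough illustration of the dynamic-shapes feature the abstract mentions, here is a minimal sketch (assumptions, not the talk's code) of compiling a function so that inputs of varying length reuse one graph. The `backend="eager"` option skips code generation so the sketch runs without a compiler toolchain; real deployments would normally use the default inductor backend.

    ```python
    import torch

    # A trivial stand-in for a model's forward pass (illustrative only).
    def scaled_sum(x: torch.Tensor) -> torch.Tensor:
        return (x * 2.0).sum()

    # dynamic=True asks torch.compile to trace with symbolic shapes, so
    # different input sizes do not each trigger a recompilation.
    compiled = torch.compile(scaled_sum, backend="eager", dynamic=True)

    a = compiled(torch.ones(8))   # tensor(16.)
    b = compiled(torch.ones(16))  # tensor(32.)
    ```

    For LLM serving, this matters because batch size and sequence length change from request to request.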

  • As generative AI models grow larger and more complex, scaling them has become a critical challenge facing enterprises today. How can developers leverage PyTorch to maximize the value of these large, multi-billion-parameter models, making them run faster, more efficiently, and more affordably, both on-prem and in the cloud? This keynote will highlight the levers PyTorch FSDP provides to scale AI model training across hundreds of GPUs, and how IBM applied them to obtain state-of-the-art training throughput for models with up to 70 billion parameters. It will also discuss how we combined the latest advancements in PyTorch compile with a custom tensor-parallel implementation to significantly reduce inference latency.
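  • The FSDP wrapping the keynote refers to can be sketched minimally as follows. This is an assumption-laden illustration, not IBM's training code: it runs a single process on CPU with the gloo backend purely to show the API shape, whereas real large-model training spans hundreds of GPUs with NCCL and an auto-wrap policy.

    ```python
    import os
    import torch
    import torch.nn as nn
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    # Single-process process group (illustrative addresses/ports).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29501")
    dist.init_process_group("gloo", rank=0, world_size=1)

    # A toy model; FSDP shards its parameters, gradients, and
    # optimizer state across the ranks in the process group.
    model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 8))
    fsdp_model = FSDP(model)

    out = fsdp_model(torch.randn(4, 32))
    out.sum().backward()

    dist.destroy_process_group()
    ```

    With one rank there is nothing to shard across, but the same wrapping call is what distributes a multi-billion-parameter model when world_size is in the hundreds.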

  • Open-source communities accelerate innovation by empowering members to harness collective insights and build on a vast body of prior work. However, building a successful and responsible open-source community, and deciding how enterprise companies should contribute to it, requires a delicate balance. Priya Nagpurkar, who leads the strategy for AI and cloud platforms at IBM Research, will discuss what IBM looks for in open-source collaborators, how PyTorch advances IBM’s strategic goals, and the role open-source technologies will play in generative AI’s future.

    Speaker: Priya Nagpurkar (Vice President, Hybrid Cloud Platform and Developer Productivity, IBM Research)
