
Opening a new era in algorithmic development

IBM Research is turbocharging algorithm development for a world with quantum computing and AI.

Algorithms have always been IBM Research’s superpower.

Since the dawn of modern computing, IBM scientists have time and again redefined how humanity computes. Each breakthrough has led mathematicians to re-examine their toolkits and engineers to re-design their hardware. New industries have sprung up in their wake.

In every era, IBM Research has asked the same question: what new mathematics will pull the rest of the field forward? Today, we must revisit this lineage; the ground under computing is shifting once again, this time faster than ever.

While classical high-performance machines still carry the bulk of the world’s simulations and analytics, foundation-model AI is translating unstructured data, time series data, and image data into dense, information-rich representations. And quantum processors are beginning to manipulate information in ways beyond the ability of bits alone.

Taken separately, each technology is formidable. Together, they create a fertile landscape for a revolution in algorithmic invention and design.

Over the next several years, our scientists will focus on four tightly linked areas that underpin everything from drug discovery to supply-chain resilience: differential equations, combinatorial optimization, linear algebra, and stochastic processes.

While these topics may sound academic, they govern a significant share of the world’s thorniest questions: how to design vehicles in a way that reduces turbulence and improves aerodynamic efficiency, how to optimize the dispatch and routing of fleets of delivery trucks, how to effectively design complex systems like energy storage devices, and how to model uncertainty in the evolution of markets.

In every case we see an opportunity to re-imagine solutions to fundamental problems in mathematics and computing, considering new hardware, new data-driven representations, and AI-accelerated discovery of algorithms. Where that hardware isn’t available, we invent it.

A legacy of algorithmic excellence

Advancing computation has always required looking at information and utilizing compute hardware in creative new ways. Seventy years ago, Hans Peter Luhn saw the future of search decades before the internet existed. His hashing algorithms changed both software and storage hardware forever by inventing entirely new algorithmic tools to organize data.

Then in 1965, James Cooley and John Tukey turned a chalkboard discussion into an engine for modern signal processing with the invention of the fast Fourier transform (FFT). Their algorithm let a scientific computer perform a Fourier transform in on the order of N log N operations rather than N², unlocking real-time signal processing.
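
That saving is easy to see in code. The toy comparison below (using NumPy purely as an illustration, not tied to any historical implementation) pits a direct O(N²) discrete Fourier transform against NumPy's Cooley-Tukey-style FFT, which computes the identical result in O(N log N):

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) discrete Fourier transform: one full sum per output bin."""
    n = len(x)
    k = np.arange(n)
    # Outer product of indices yields every (bin, sample) phase term at once.
    twiddle = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return twiddle @ x

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)

# The FFT computes the same transform in O(N log N) operations.
assert np.allclose(naive_dft(signal), np.fft.fft(signal))
```

For a million-sample signal, the gap is roughly a million squared versus twenty million operations, which is the difference between overnight batch jobs and real-time processing.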

And in the early 2000s that same spirit powered the IBM Blue Gene supercomputers, which ran quantum chemistry codes on hundreds of thousands of energy-efficient cores. Blue Gene proved that energy efficiency, once dismissed as a side constraint, could be the very lever that scales performance. It ushered in a new era of computational materials science, enabling researchers to probe realistic size- and time-scales for the first time.

Today, we sit at the intersection of classical HPC with entirely new computing paradigms: AI and quantum. And in the spirit of our past advances, we must craft new algorithmic tools to process data and solve previously unsolvable problems with this hardware.

A process for algorithmic discovery

IBM Research is doubling down on the effort to explore four critical areas for algorithmic innovation.

Differential equations. Climate models, fluid dynamics, and epidemiology all rely on differential equations, but traditional solvers can require the world’s largest supercomputers. AI’s knack for learning high-fidelity surrogate models can greatly reduce that cost, while quantum algorithms promise access to properties of differential equations in Hilbert spaces of exponentially large dimensions, which are beyond the reach of classical computers.
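
As a minimal sketch of the surrogate-model idea, making no claim about any actual production model: a handful of expensive solver runs can train a cheap stand-in, here a simple polynomial fit playing the role of a learned surrogate, which then answers new parameter queries without re-running the solver:

```python
import numpy as np

def solve_decay(k, steps=1000):
    """Cheap explicit-Euler solver for dy/dt = -k*y, y(0) = 1, on t in [0, 1]."""
    dt = 1.0 / steps
    y = 1.0
    for _ in range(steps):
        y += dt * (-k * y)
    return y

# "Training data": run the expensive solver at a handful of parameter values.
ks = np.linspace(0.1, 2.0, 8)
ys = np.array([solve_decay(k) for k in ks])

# Surrogate: a cubic polynomial fit standing in for a learned model.
surrogate = np.poly1d(np.polyfit(ks, ys, deg=3))

# The surrogate answers parameter queries at negligible cost.
k_new = 1.3
assert abs(surrogate(k_new) - solve_decay(k_new)) < 2e-2
```

In practice the solver is a large PDE code and the surrogate a neural network, but the economics are the same: amortize a few costly simulations into a model that is cheap to query.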

Combinatorial optimization. AI can play an important role in exploiting problem structure and predicting approximate solutions for hard problems. Quantum circuits will provide further gains by exploiting the probabilistic mathematics of quantum mechanics, especially when those circuits are guided, pruned, and refined by machine-learned priors and algorithms.
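
A hedged illustration of the learned-prior idea, with a classic value-density score standing in for a trained model: the prior orders a greedy construction of a knapsack solution, and a small local search then refines it:

```python
import random

def knapsack_with_prior(values, weights, capacity, prior):
    """Greedy construction ordered by a per-item score (the 'prior'),
    followed by a one-swap local search. The prior stands in for a learned
    model that predicts which items belong in good solutions."""
    order = sorted(range(len(values)), key=lambda i: -prior[i])
    chosen, load = [], 0
    for i in order:
        if load + weights[i] <= capacity:
            chosen.append(i)
            load += weights[i]
    # Local search: swap an in-item for an out-item whenever it helps.
    improved = True
    while improved:
        improved = False
        outside = [i for i in range(len(values)) if i not in chosen]
        for i in chosen:
            for j in outside:
                if (load - weights[i] + weights[j] <= capacity
                        and values[j] > values[i]):
                    chosen.remove(i)
                    chosen.append(j)
                    load += weights[j] - weights[i]
                    improved = True
                    break
            if improved:
                break
    return chosen, sum(values[i] for i in chosen)

random.seed(1)
values = [random.randint(1, 50) for _ in range(20)]
weights = [random.randint(1, 20) for _ in range(20)]
# Value density plays the role of the learned prior in this sketch.
prior = [v / w for v, w in zip(values, weights)]
chosen, total = knapsack_with_prior(values, weights, 60, prior)
assert sum(weights[i] for i in chosen) <= 60
```

Swap the hand-crafted ratio for a model's predicted inclusion probabilities and the same skeleton becomes an ML-guided heuristic; quantum samplers could likewise feed candidate solutions into the refinement loop.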

Linear algebra. Linear algebra is the common language of science; AI-driven algorithm discovery can cut the cost of core computations, while faster factorization techniques and quantum-enhanced eigensolvers lift the ceiling on model size and precision.

Stochastic processes. Where uncertainty reigns supreme, new probabilistic AI kernels and quantum sampling schemes are finally taming the curse of dimensionality.
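
To see why sampling sidesteps the curse of dimensionality, consider this illustrative Monte Carlo sketch: a 100-dimensional expectation that no tensor-product quadrature grid could touch is estimated to within a percent from a modest number of samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# The expected squared norm of a standard Gaussian in d dimensions is
# exactly d. A grid with even 10 points per axis would need 10^100
# evaluations; Monte Carlo error shrinks like 1/sqrt(samples) in any
# dimension.
d, n_samples = 100, 100_000
x = rng.standard_normal((n_samples, d))
estimate = np.mean(np.sum(x**2, axis=1))

assert abs(estimate - d) / d < 0.01
```

Quantum sampling schemes aim at the same target from the other direction: drawing from distributions that are themselves expensive or intractable to sample classically.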

We believe these areas will unlock faster routes to more sustainable materials, lifesaving drugs, more accurate climate predictions, resilient global logistics, and cybersecurity strong enough for the post-quantum world.

But how do we discover new algorithms in these areas? To be considered an improvement over the state of the art, an algorithm must deliver a verifiable advantage in speed, accuracy, energy use, or cost.

That emphasis on verifiability drives our workflow. We will benchmark every candidate rigorously and publicly, celebrating the ideas that survive real-world scrutiny and discarding the ones that do not. And we will open the toolbox of papers, code, and best-practice guides so that universities, startups, and enterprises can push the field forward with us.

This process, and these algorithms, will also shape the computers of tomorrow. Hardware architects already study the workloads emerging from our labs as they plan cryogenic control stacks and heterogeneous accelerators. The feedback loop between algorithm and machine, so central to IBM’s history, has never been tighter.

Inventing what’s next in algorithms

This new age of algorithms has just begun, and IBM Research is paving the way with new approaches that combine classical, quantum, and AI computation to significantly outperform the state of the art on challenging problems.

We are already seeing exciting progress. Learned representations for time series signals are outperforming current statistical and ML algorithms on tasks such as classification, anomaly detection, search, and clustering. We are learning to solve inverse problems for computational fluid dynamics and enabling parametric design. We have tested promising new algorithms using quantum circuits to simplify the process of calculating ground state energies, and are beginning to see the first hypotheses of verifiable quantum advantage.

While rooted in our history, this is just the start of a new journey. Together, let’s write the next chapter in the story that began with punch cards and now unfolds in bits, neurons, and qubits.
