Limitations of Concurrency in Transaction Processing
Abstract
Given the pairwise probability of conflict p among transactions in a transaction processing system, together with the total number of concurrent transactions n, the effective level of concurrency E(n, p) is defined as the expected number of the n transactions that can run concurrently and actually do useful work. Using a random graph model of concurrency, we show for three general classes of concurrency control methods, examples of which are (1) standard locking, (2) strict priority scheduling, and (3) optimistic methods, that (1) E(n, p) ⩽ n(1 − p/2)^(n−1), (2) E(n, p) ⩽ (1 − (1 − p)^n)/p, and (3) 1 + ((1 − p)/p) ln(p(n − 1) + 1) ⩽ E(n, p) ⩽ 1 + (1/p) ln(p(n − 1) + 1). Thus, for fixed p, as n → ∞, (1) E → 0 for standard locking methods, (2) E ⩽ 1/p for strict priority scheduling methods, and (3) E → ∞ for optimistic methods. Also found are bounds on E in the case where conflicts are analyzed so as to maximize E. The predictions of the random graph model are confirmed by simulations of an abstract transaction processing system. In practice, though, there is a price to pay for the increased effective level of concurrency of methods (2) and (3): using these methods there is more wasted work (i.e., more steps executed by transactions that are later aborted). In response to this problem, three new concurrency control methods suggested by the random graph model analysis are developed. Two of these, called (a) running priority and (b) older or running priority, are shown by the simulation results to perform better than the previously known methods (1)–(3) for relatively large n or large p, in terms of achieving a high effective level of concurrency at a comparatively small cost in wasted work. © 1985, ACM. All rights reserved.
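As an illustrative aside (not part of the paper), the limiting behavior of the three bounds can be checked numerically with a short sketch; the function names and the sample values of n and p below are chosen here only for illustration.

import math

def bound_locking(n: int, p: float) -> float:
    # Upper bound on E(n, p) for standard locking: n * (1 - p/2)**(n - 1).
    return n * (1.0 - p / 2.0) ** (n - 1)

def bound_priority(n: int, p: float) -> float:
    # Upper bound on E(n, p) for strict priority scheduling: (1 - (1 - p)**n) / p.
    return (1.0 - (1.0 - p) ** n) / p

def bounds_optimistic(n: int, p: float) -> tuple:
    # Lower and upper bounds on E(n, p) for optimistic methods.
    log_term = math.log(p * (n - 1) + 1.0)
    return (1.0 + ((1.0 - p) / p) * log_term, 1.0 + (1.0 / p) * log_term)

if __name__ == "__main__":
    p = 0.05  # illustrative pairwise conflict probability
    for n in (10, 100, 1000):
        lo, hi = bounds_optimistic(n, p)
        print(f"n={n:5d}  locking <= {bound_locking(n, p):7.2f}  "
              f"priority <= {bound_priority(n, p):6.2f}  "
              f"optimistic in [{lo:6.2f}, {hi:6.2f}]")

For fixed p, the printed values show the locking bound decaying toward 0, the strict priority bound approaching 1/p (here 20), and the optimistic bounds growing logarithmically in n, matching the asymptotics stated in the abstract.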