Using the Zoom Allocator Package

Last updated: September 10, 2000, by Walter E. Brown

Introduction

The Zoom Allocator package provides, as its name suggests, a memory-management subsystem. Named PoolAllocator, this subsystem presents an interface whose design follows the model of the std::allocator class, while offering significantly improved performance for dynamic memory management. This performance benefit is achieved by reducing the average per-allocation overhead via the pooled (blocked) allocation strategies embedded in the package. The package is designed to be employed in the following contexts:
  1. As an implementation tool to provide, for users' classes, customized operators new and delete that achieve improved performance.
  2. As an allocator (in the STL sense) for use by standard library container classes and by any other classes directly employing the std::allocator interface.
The following sections will describe both these contexts in greater detail.

Customized operators new and delete

Consider, as an example of user code employing this package, the following definition of a very simple class C:
#include "Allocator/PoolAllocator.h"

class C  {

public:
   C( int val = 0 ) : x(val)  { ; }
   int  val()  { return x; }
   DeclarePoolAllocator(C,8)

private:
   int  x;
};  // C

DefinePoolAllocator(C,8)
This code begins by incorporating the package's required header, Allocator/PoolAllocator.h. The header supplies, for users' convenience, the macros DeclarePoolAllocator() and DefinePoolAllocator(). These macros combine, as illustrated above, to specify that the class being defined (here, class C) is to make use of the custom PoolAllocator class.

Each of the two macros takes the same two arguments. The first argument is the name of the class; the second is the allocation blocking factor, expressed as a multiple of sizeof(C). In the above example, the arguments specify that dynamic allocations for instances of class C take place eight at a time. As a result, allocation via the customized operator new incurs, on average, only slightly more than one-eighth the overhead of the default, general-purpose operator new. (There is nothing special about the number eight; it was chosen purely for purposes of illustration. Users should select a blocking factor based on their understanding of the problem at hand.)
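With these macros in place, no further changes are needed in user code that dynamically allocates instances of C; the customized operators are invoked automatically. The following minimal sketch (the main() function and the value 42 are purely illustrative) assumes the class C defined above:
#include <iostream>

int main()  {
   C *  p = new C(42);              // memory obtained via C's customized operator new
   std::cout << p->val() << std::endl;
   delete p;                        // memory returned via C's customized operator delete
   return 0;
}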

Actual performance improvement will heavily depend on the size of the class being defined, on the allocation blocking factor selected, and on the actual pattern of dynamic allocations and deallocations. No definitive studies have yet been performed; however, empirical observations strongly suggest that substantial performance improvements can be achieved.

In technical terms, incorporating the PoolAllocator mechanism into a class overloads that class's operator new and operator delete, including the throwing, non-throwing, and placement forms of these operators. Note, however, that the array forms, operator new[] and operator delete[] (and their variants), are not overloaded; arrays of the class are allocated and deallocated exactly as they would be by default.
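For reference, the customized member operators correspond to the usual C++ forms. The sketch below, written in terms of a hypothetical class D, merely indicates the kinds of signatures involved; the macros' actual expansion may differ in detail:
#include <cstddef>   // std::size_t
#include <new>       // std::nothrow_t

class D  {
public:
   // customized by the PoolAllocator macros:
   static void *  operator new( std::size_t size );                                  // throwing form
   static void *  operator new( std::size_t size, std::nothrow_t const & ) throw();  // non-throwing form
   static void *  operator new( std::size_t size, void * where ) throw();            // placement form
   static void    operator delete( void * p ) throw();
   static void    operator delete( void * p, std::nothrow_t const & ) throw();
   static void    operator delete( void * p, void * where ) throw();
   // not customized:  operator new[] and operator delete[] (and their variants)
   // retain their default behavior
};  // D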

std::allocator usage

Because the C++ library's default std::allocator does not, in general, employ whatever operators new and delete a class may furnish, special user code is required to enable use of these functions in connection with a container. This entails direct use of the PoolAllocator class, as illustrated below:
typedef  std::list<C, PoolAllocator<C,8*sizeof(C)> >  CList;
For best performance, the blocking factor supplied for a given class (here, eight for class C) should be the same in the DeclarePoolAllocator() macro, in the DefinePoolAllocator() macro, and in any explicit use of PoolAllocator<> for that class, as illustrated in this section.
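Assuming the class C and the macros shown earlier, such a container may then be used exactly as any other std::list; each node's memory is managed by the pool. The brief sketch below (the loop bound of five is illustrative only) demonstrates:
#include <list>
#include "Allocator/PoolAllocator.h"

typedef  std::list<C, PoolAllocator<C,8*sizeof(C)> >  CList;

int main()  {
   CList  clist;
   for( int i = 0;  i != 5;  ++i )
      clist.push_back( C(i) );     // node memory is obtained from the pool
   clist.clear();                  // node memory is handed back to the pool
   return 0;
}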