Memoization is a technique to cache the results of slow function calls and return the cached value, instead of re-evaluating the function, when it is called again with the same inputs. It is common to replace function calls with lookup tables when the same inputs occur over and over in a known, predictable way. Memoization is, at its core, an extension of this practice in which the lookup table also grows at runtime to cover new arguments not seen previously. A basic theoretical background can be found on Wikipedia or in any undergraduate-level computer science textbook.
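For comparison, such a runtime-growing lookup table can be maintained by hand. The following sketch is purely illustrative (the names cached_fcn and expensive_fcn are hypothetical, and only scalar numeric inputs are handled):

function y = cached_fcn (x)
  ## Hand-rolled memoization: remember results in a map keyed by the input.
  persistent cache = containers.Map ("KeyType", "double", "ValueType", "any");
  if (isKey (cache, x))
    y = cache(x);           ## input seen before: return the stored result
  else
    y = expensive_fcn (x);  ## input not seen before: compute and store it
    cache(x) = y;
  endif
endfunction

Octave's memoize function automates exactly this bookkeeping, including support for multiple arguments and outputs.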
Octave’s memoize function provides drop-in memoization functionality for any user function or Octave function, including compiled functions.
mem_fcn_handle = memoize (fcn_handle)
Create a memoized version mem_fcn_handle of function fcn_handle.
Each call to the memoized version mem_fcn_handle checks the inputs against an internally maintained table, and if the inputs have occurred previously, then the result of the function call is returned from the table itself instead of evaluating the full function again. This speeds up the execution of functions that are called with the same inputs multiple times.
For example, here we take a slow user-written function named slow_fcn and memoize it to a new handle cyc. The first executions of both versions take the same time, but subsequent executions of the memoized version return the previously computed value, reducing 2.4 seconds of runtime to only 2.4 milliseconds. The final check verifies that both versions returned the same result.
>> tic; p = slow_fcn (5040); toc
Elapsed time is 2.41244 seconds.
>> tic; p = slow_fcn (5040); toc
Elapsed time is 2.41542 seconds.
>> cyc = memoize (@slow_fcn);
>> tic; r = cyc (5040); toc
Elapsed time is 2.42609 seconds.
>> tic; r = cyc (5040); toc
Elapsed time is 0.00236511 seconds.
>> all (p == r)
ans = 1
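The body of slow_fcn is not shown here; any function that takes a few seconds will do. A hypothetical stand-in (whose actual runtime will differ from the timings above) could be:

function p = slow_fcn (n)
  ## Deliberately slow stand-in: count the primes up to n by naive trial division.
  p = 0;
  for k = 2:n
    k_is_prime = true;
    for d = 2:k-1
      if (mod (k, d) == 0)
        k_is_prime = false;
        break;
      endif
    endfor
    p += k_is_prime;
  endfor
endfunction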
See also: clearAllMemoizedCaches.
To memoize a function z = foo (x, y), use this general pattern:
foo2 = memoize (@(x, y) foo (x, y));
z = foo2 (x, y);
In the above example, the first line creates a memoized version foo2 of the function foo. For simple functions with only trivial wrapping, this line can also be shortened to:
foo2 = memoize (@foo);
The second line, z = foo2 (x, y);, calls that memoized version foo2 instead of the original function, allowing memoize to intercept the call and replace it with a value looked up from a table if the inputs have occurred before, instead of evaluating the original function again.
Note that this will not accelerate the first call to the function but only subsequent calls.
Due to the overhead incurred by memoize in creating and managing the lookup tables for each function, this technique is useful only for functions that take at least a couple of seconds to execute. Such functions can be replaced by table lookups taking only a millisecond or so, but if the original function itself takes only milliseconds, memoizing it will not speed it up.
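As an illustration of that overhead (a sketch only; exact timings are machine-dependent), memoizing a function that already runs in microseconds can even make it slower, because every call still pays for hashing the inputs and consulting the table:

fast = @(x) sin (x) + cos (x);   ## already cheap to evaluate
fast2 = memoize (fast);
tic; for i = 1:1000, y = fast (0.5); endfor; toc
tic; for i = 1:1000, y = fast2 (0.5); endfor; toc   ## lookup overhead may dominate here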
Recursive functions can be memoized as well, using a pattern like:
function z = foo (x, y)
  persistent foo2 = memoize (@foo);
  foo2.CacheSize = 1e6;
  ## Call the memoized version when recursing
  z = foo2 (x, y);
endfunction
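As a concrete instance of this pattern (the function name fib is illustrative), a naive recursive Fibonacci implementation becomes linear-time once its recursive calls go through a memoized handle:

function f = fib (n)
  persistent fibm = memoize (@fib);
  if (n <= 2)
    f = 1;
  else
    ## Recurse through the memoized handle so each value is computed only once.
    f = fibm (n-1) + fibm (n-2);
  endif
endfunction

With the memoized handle, a call like fib (50) performs on the order of 50 distinct evaluations instead of an exponential number of recursive calls.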
The CacheSize property can optionally be increased in anticipation of a large number of function calls, such as from inside a recursive function. If CacheSize is exceeded, the memoization tables are resized, causing a slowdown. Increasing CacheSize thus works like preallocation to speed up execution.
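The property can also be set on any memoized handle outside a recursive function; continuing the earlier example (the value 1e5 is arbitrary):

cyc = memoize (@slow_fcn);
cyc.CacheSize = 1e5;   ## room for up to 1e5 distinct inputs before the tables are resized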
The function clearAllMemoizedCaches clears the memoization tables when they are no longer needed.
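For example, after a batch of memoized computations has finished, the caches of every memoized function can be discarded in one call:

clearAllMemoizedCaches;   ## drop all cached input/output pairs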