Python Development, memoized functions.

Antti Karttunen antti.karttunen at gmail.com
Fri Jan 5 03:45:16 CET 2007


Alec Mihailovs wrote:

> Antti,
>
> I didn't mean to criticize your program.

I'm not offended if somebody criticizes my programs. Maybe that would 
even improve their quality...

> It certainly looks nice. And memoization is certainly a good thing for 
> many purposes.
>
> That, probably, can be done by creating a new object, something like a 
> function_with_cache (including the cache attribute) instead of a 
> function, and defining an instance of that object instead of defining 
> a function.
>
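In Python, that "function_with_cache" idea might be sketched roughly like this (the class and attribute names here are mine, purely illustrative):

```python
class MemoizedFunction:
    """A callable object carrying its own cache attribute.

    Sketch of the "function_with_cache" idea: instead of defining a
    plain function, we define an instance of an object that caches
    every value it has computed.
    """
    def __init__(self, func):
        self.func = func    # func takes (self, n) so it can recurse via the cache
        self.cache = {}

    def __call__(self, n):
        if n not in self.cache:
            self.cache[n] = self.func(self, n)
        return self.cache[n]

# Example: Fibonacci numbers (A000045) defined recursively.
fib = MemoizedFunction(lambda self, n: n if n < 2 else self(n - 1) + self(n - 2))
```

Each call goes through the cache, so repeated subproblems are computed only once.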
> I just wanted to say that in general, recursive programs produce a lot 
> of overhead related to multiple function calls. For example, to 
> calculate the millionth value, a recursive program makes (at least)
> two million function calls. And every such call is rather expensive -
> it requires a lot of clock cycles for stack allocation, moving
> variables from memory to the stack and back, etc. In many
> situations that can be avoided by including this recursion inside a 
> program (as I did in A000045_list example - that was just an example, 
> not a procedure that I would really use for that if I had to :)
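(In Python, such a non-recursive list-builder might look like the following; this is only my sketch, not the actual Maple code of your A000045_list example:)

```python
def A000045_list(n):
    """Return the first n Fibonacci numbers (A000045) without recursion.

    A single loop with two running values: no per-call stack
    overhead at all.
    """
    result = []
    a, b = 0, 1
    for _ in range(n):
        result.append(a)
        a, b = b, a + b
    return result
```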

Yes, I guess Binet's formula is the way to do it in real life?
(Provided that the system's floating-point implementation gives
accurate values for the required range!)
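For illustration, here is Binet's formula in Python; with IEEE double precision the rounded result stays exact only up to roughly n = 70, which is exactly the range caveat above:

```python
from math import sqrt

def fib_binet(n):
    """Fibonacci number via Binet's closed form.

    Exact only while floating point represents phi**n precisely
    enough; with 64-bit doubles that holds up to about n = 70.
    """
    phi = (1 + sqrt(5)) / 2     # golden ratio
    return round(phi**n / sqrt(5))
```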

>
> With my extensive Maple experience, I can say that it can be clearly 
> seen in Maple - practically any recursive procedure, rewritten as a 
> non-recursive procedure, looks longer, but works much faster.

In general, my experience agrees. Though just how much faster it is
depends on the system. Maybe Maple has especially heavy overhead on
function calls. Some Forth-based CPU would probably be at the other
extreme.

But maybe we could have the best of both worlds then? Let the function
compute the values quickly into a vector, but make that totally
transparent to the outside callers. That is, they don't need to
"preinitialize" it to any arbitrary limit; instead, the function itself
will, on demand, increase the length of the vector and compute a new
batch of values into it (_without_ recursion), for future consumption.
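A minimal sketch of what I mean, again for A000045 (the class name and the batch size of 64 are arbitrary choices of mine):

```python
class LazyList:
    """Cached vector that grows on demand, transparently to callers.

    Callers never preinitialize anything; when an index beyond the
    current length is requested, a new batch of values is computed
    into the vector by a plain loop, without recursion.
    """
    def __init__(self):
        self.values = [0, 1]            # seed values for Fibonacci

    def __call__(self, n):
        while len(self.values) <= n:
            # Grow in batches so that many future calls are also served.
            target = max(n + 1, len(self.values) + 64)
            for i in range(len(self.values), target):
                self.values.append(self.values[i - 1] + self.values[i - 2])
        return self.values[n]

fib = LazyList()
```

From the outside, `fib(n)` behaves exactly like a memoized function; internally, all the work is done iteratively into the vector.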

>
> Alec
>

Antti

More information about the SeqFan mailing list