Lisp HUG Maillist Archive

Old tricks now formalized?

Hi All,

I just ran across this on Lambda the Ultimate, from Simon Peyton Jones, and others... If you wade through the OCaml example showing how to do Map and Fold using these Generators, it looks very much like the "trick" I had to resort to years ago when I wrote my own Prolog in Lisp, given the absence of Continuations in Lisp. Essentially, it relies on throwing exceptions and intercepting them, changing out the exception handlers along the way. So what was once deemed a "hack" has now been blessed?

Simple Generators v. Lazy Evaluation

Oleg Kiselyov, Simon Peyton-Jones and Amr Sabry: Simple Generators:
Incremental stream processing, pervasive in practice, makes the best case for lazy evaluation. Lazy evaluation promotes modularity, letting us glue together separately developed stream producers, consumers and transformers. Lazy list processing has become a cardinal feature of Haskell. It also brings the worst in lazy evaluation: its incompatibility with effects and unpredictable and often extraordinary use of memory. Much of the Haskell programming lore are the ways to get around lazy evaluation.
We propose a programming style for incremental stream processing based on typed simple generators. It promotes modularity and decoupling of producers and consumers just like lazy evaluation. Simple generators, however, expose the implicit suspension and resumption inherent in lazy evaluation as computational effects, and hence are robust in the presence of other effects. Simple generators let us accurately reason about memory consumption and latency. The remarkable implementation simplicity and efficiency of simple generators strongly motivates investigating and pushing the limits of their expressiveness.
To substantiate our claims we give a new solution to the notorious pretty-printing problem. Like earlier solutions, it is linear, backtracking-free and with bounded latency. It is also modular, structured as a cascade of separately developed stream transducers, which makes it simpler to write, test and to precisely analyze latency, time and space consumption. It is compatible with effects including IO, letting us read the source document from a file, and format it as we read.
This is fascinating work that shows how to gain the benefits of lazy evaluation - decoupling of producers, transformers, and consumers of data, and producing only as much data as needed - in a strict, effectful setting that works well with resources that need to be disposed of once computation is done, e.g. file handles.
The basic idea is that of Common Lisp signal handling: use a hierarchical, dynamically-scoped chain of handler procedures, which get called - on the stack, without unwinding it - to parameterize code. In this case, the producer code (which e.g. reads a file character by character) is the parameterized code: every time data (a character) is produced, it calls the dynamically innermost handler procedure with the data (it yields the data to the handler). This handler is the data consumer (it could e.g. print the received character to the console). Through dynamic scoping, each handler may also have a super-handler, to which it may yield data. In this way, data flows containing multiple transformers can be composed.
I especially like the OCaml version of the code, which is just a page of code, implementing a dynamically-scoped chain of handlers. After that we can already write map and fold in this framework (fold using a loop and a state cell, notably.) There's more sample code.
This also ties in with mainstream yield.
By Manuel J. Simoni at 2013-02-21 13:30
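
To make the Common Lisp analogy concrete, here is a rough sketch of that
handler chain using the condition system. This is only an illustration, not
code from the paper; every name in it (YIELDED, YIELD, RESUME, PRODUCE-CHARS,
and so on) is invented for the example.

(define-condition yielded ()
  ((item :initarg :item :reader yielded-item)))

(defun yield (item)
  "Offer ITEM to the dynamically innermost YIELDED handler; the handler
hands control back by invoking the RESUME restart."
  (with-simple-restart (resume "Resume the producer.")
    (signal 'yielded :item item)))

(defun produce-chars (stream)
  "Producer: yield the characters of STREAM one at a time."
  (loop for ch = (read-char stream nil nil)
        while ch
        do (yield ch)))

(defun echo (stream)
  "Consumer: echo every yielded character to standard output.
HANDLER-BIND runs the handler on the producer's stack, without unwinding."
  (handler-bind ((yielded (lambda (c)
                            (write-char (yielded-item c))
                            (invoke-restart 'resume))))
    (produce-chars stream)))

(defun echo-upcased (stream)
  "Consumer plus a map transformer: the inner handler re-yields an
upcased character to its super-handler, the handler established outside it."
  (handler-bind ((yielded (lambda (c)              ; outer: the consumer
                            (write-char (yielded-item c))
                            (invoke-restart 'resume))))
    (handler-bind ((yielded (lambda (c)            ; inner: the map step
                              (yield (char-upcase (yielded-item c)))
                              (invoke-restart 'resume))))
      (produce-chars stream))))

(defun count-chars (stream)
  "Fold: count the yielded characters using a state cell in the consumer."
  (let ((n 0))
    (handler-bind ((yielded (lambda (c)
                              (declare (ignore c))
                              (incf n)
                              (invoke-restart 'resume))))
      (produce-chars stream))
    n))

;; (with-input-from-string (s "hello") (echo-upcased s))  ; prints HELLO
;; (with-input-from-string (s "hello") (count-chars s))   ; => 5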

Dr. David McClain
dbm@refined-audiometrics.com



Re: Old tricks now formalized?

Hello:

Just a note on generators. I implemented my own generators in Lisp,
but did not resort to fancy tricks. The only negative is that we don't
get to use YIELD because, as you said, we don't have built-in
continuations (and I'd rather not haul in a library for doing CPS
transformation).

My technique was simple: just use closures, and signal a particular
condition if/when the generator is exhausted.

For example:

;; Take 5 elements from the FIBS generator.
CL-USER> (take 5 (fibs))
(0 1 1 2 3)

;; Square each natural number (MAP)
CL-USER> (map-generator (lambda (x) (* x x))
                        (up-from 0))
#<GENERATOR {1006B6BB63}>

;; And then accumulate them (FOLD)
CL-USER> (accumulator (lambda (x y) (+ x y))
                      0
                      *)
#<GENERATOR {1006CE1853}>

;; What are the first three accumulations of the list of squares?
;; They should be 0, 0+0^2, 0+0^2+1^2
CL-USER> (list (next *) (next *) (next *))
(0 0 1)

;; Compute the next even accumulation.
CL-USER> (next-if #'evenp **)
14

;; Collect the next 50.
CL-USER> (take 50 ***)
(30 55 91 140 204 285 385 506 650 819 1015 1240 1496 1785 2109 2470 2870 3311
 3795 4324 4900 5525 6201 6930 7714 8555 9455 10416 11440 12529 13685 14910
 16206 17575 19019 20540 22140 23821 25585 27434 29370 31395 33511 35720 38024
 40425 42925 45526 48230 51039)

So that is how they are used, but how are they implemented? Well, we
resort to a rather low-level closure where we record state. For
example, for FIBS:

(defun fibs ()
  "Create a new generator for Fibonacci numbers."
  (generator
   (let ((x 0)
         (y 1))
     (lambda ()
       (prog1 x
         (psetq x y
                y (+ x y)))))))
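
The GENERATOR, NEXT, TAKE, NEXT-IF, UP-FROM, MAP-GENERATOR and ACCUMULATOR
used above all live in the linked file; a minimal version consistent with the
transcript could look something like the following. The details here are a
sketch (the class, constructor and condition arrangement are guesses), not
the actual library code.

(define-condition generator-exhausted (condition) ()
  (:documentation "Signaled when a generator has no more values."))

(defclass generator ()
  ((thunk :initarg :thunk :reader generator-thunk)))

(defun generator (thunk)
  "Wrap THUNK, a closure of no arguments, as a generator object."
  (make-instance 'generator :thunk thunk))

(defun next (gen)
  "Produce the next value of GEN.  The underlying closure is expected to
signal GENERATOR-EXHAUSTED when it runs out of values."
  (funcall (generator-thunk gen)))

(defun next-if (predicate gen)
  "Produce values from GEN until one satisfies PREDICATE."
  (loop for x = (next gen)
        when (funcall predicate x) return x))

(defun take (n gen)
  "Collect the next N values of GEN (fewer if it is exhausted first)."
  (let ((values '()))
    (handler-case
        (dotimes (i n)
          (push (next gen) values))
      (generator-exhausted ()))
    (nreverse values)))

(defun up-from (n)
  "Infinite generator of the integers N, N+1, N+2, ..."
  (generator
   (let ((i n))
     (lambda ()
       (prog1 i (incf i))))))

(defun map-generator (fn gen)
  "Generator applying FN to each value of GEN, one value per NEXT."
  (generator
   (lambda ()
     (funcall fn (next gen)))))

(defun accumulator (fn init gen)
  "Generator of the running left fold of FN over GEN: INIT, then
\(FN INIT x1), then (FN (FN INIT x1) x2), and so on."
  (generator
   (let ((acc init)
         (fresh t))
     (lambda ()
       (if fresh
           (progn (setf fresh nil) acc)
           (setf acc (funcall fn acc (next gen))))))))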

I don't particularly mind this style, and I think the compositional
benefits of generators are worth writing these kinds of closures, but
I do admit that it is somewhat difficult to encapsulate the "current
state" of a generator cleanly.

You can find all of the code for these here:
http://tinyurl.com/bzeluja (points to
https://bitbucket.org/tarballs_are_good/lisp-random/src/3db634111d35e788c6ea2f4a1b3ab38334e24cde/generators.lisp?at=default
). Other examples from there include finite generators, generators for
the lines of a file (useful if you don't want to read in a file all at
once, and don't want to have an explicit READ-LINE loop), and some
functional "transducers" (map, reduce, filter).

Cheers,

Robert Smith



_______________________________________________
Lisp Hug - the mailing list for LispWorks users
lisp-hug@lispworks.com
http://www.lispworks.com/support/lisp-hug.html


Updated at: 2020-12-10 08:35 UTC