Lazy Lists
Volume Number:5
Issue Number:10
Column Tag:Lisp Listener

(Mumble Paul (About Lisp))

By Paul Snively, Wheeling, IL

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

It’s been a while since anyone has written anything about any dialect of Lisp or, for that matter, anything that has anything to do with Lisp in MacTutor. This might lead one to the conclusion that nothing of any importance is going on in the Lisp world, or with Lisp as it relates specifically to the Macintosh. This is somewhat saddening, because nothing could be further from the truth.

The most obvious and most visible thing that’s happened in the Lisp world relative to the Macintosh is that Apple Computer, Inc. purchased the assets of Coral Software, Inc., in a move that appears to have surprised several people, myself included. It’s quite rare for Apple to out-and-out buy a third-party software-development house; my interpretation of the acquisition is that it demonstrates Apple’s corporate commitment to many different kinds of artificial intelligence research with respect to future system architectures, and that Apple uses Allegro CL on the Macintosh a great deal internally (Apple seems to be a stickler for having control of the development systems that it uses itself, which is why MPW exists in the first place).

I won’t go into great detail in this space about Allegro CL [for details, see MacTutor, March 1988]. Suffice it to say that the current version, as of this writing, is 1.2.2, and that when it ships through APDA it will include the base language, the Foreign Function Interface that will allow linking MPW C or Pascal code into your Lisp program, and the Standalone-Application Generator that will allow you to turnkey your Lisp program. Also available from APDA for twenty dollars will be the current version of Portable CommonLOOPS, about which I’ll have more to say a bit later.

The other big news in the Common Lisp community on the Macintosh is that ExperTelligence, Inc. has licensed Procyon Common Lisp from Procyon Research, Ltd. in the United Kingdom for distribution in the United States of America. The Procyon implementation has a great deal going for it, not the least of which is an almost religious adherence to the specification of Common Lisp set forth in Steele’s Common LISP: The Language, hereafter called CLtL, which is the canonical description of the language published in 1984. This is a two-edged sword: there is a fair amount of Common Lisp source out there that presumes a somewhat more liberal interpretation of CLtL than the Procyon implementation supports. This is rarely, if ever, damaging (the only example I’ve come across in actual practice is that FIND-SYMBOL wants an honest-to-goodness package object as its optional second parameter, whereas Allegro CL, and apparently most Common Lisp implementations, will allow a package name to be passed).

Another thing that can be considered a plus of the Procyon system is that it seems to know a fair amount about the Macintosh as far as data structures are concerned. For example, Procyon includes a pretty nifty structure editor. A structure editor, as opposed to a text editor, deals with Lisp data structures in the heap, rather than their textual representation in a file. One of the interesting things that it does is to treat bit-vectors, one of Common Lisp’s data types, as bitmaps, providing a fatbits-style editor to modify the contents of the bit-vector. It’s touches like this that make the Procyon system a pleasure to use.

In terms of system performance, Procyon and Allegro subjectively seem very close. Both companies, upon hearing that I intended to write about the two systems, made comments about how performance could be tweaked out of their systems, but I’m going to adopt here the attitude that there are really only two levels of performance for any given system: good enough and not good enough. It’s a subjective measurement anyway; FullWrite Professional’s performance is “good enough” to me, but I know a great many people who do not use it precisely because its performance is “not good enough” to them. To me, both Allegro CL and Procyon Common Lisp perform well enough.

Another thing about Procyon that you may or may not appreciate is the fact that it comes with a two-volume set of documentation, and that Procyon Common Lisp’s DOCUMENTATION function is indexed to the printed documentation (although, happily, lambda lists for functions are still available online). It’s a good thing that Procyon’s documentation is so comprehensive and large; the implementation offers quite a few extensions above and beyond CLtL’s standard definition.

Procyon includes several examples, both of generic Common Lisp programming and of several of the Procyon-specific extensions, such as the so-called “Common Graphics” system that implements Procyon’s interface to the Macintosh Toolbox and OS. Oddly, Procyon, an ostensibly complete Common Lisp implementation, does not ship with CLtL, a must for any Common Lisp programmer, nor does it ship with any of the standard Common Lisp-based texts such as Winston and Horn. It’s my opinion that this somewhat mars the educational value of an otherwise well-documented system.

In the 2.0 release that I received for evaluation, Procyon’s text editor is limited to a maximum of 32K, a limitation that both Procyon and ExperTelligence assure me will be removed, if it hasn’t been already by the time of this writing, or by the time it reaches the shelves. I’m also told that some items, such as a file compiler and an implementation of the forthcoming Common Lisp Object System (CLOS), are under development and will either be integrated into the system or available separately for a modest charge.

Speaking of CLOS, Procyon 2.0 does not ship with any kind of object system, which means that it can’t easily be extended to support new types of OS and/or Toolbox data structures and functions. There is, to be fair, a fair amount of documentation describing how to create custom stream types, since most Common Lisp I/O revolves around streams. But without an object system, adding new stream types seems needlessly complex and unwieldy, at least when compared to Allegro CL, which allows you to define subclasses of the basic *STREAM* class and then simply define a few methods to have a new stream type (a good example of this is given in Allegro’s Serial-Streams.Lisp source file, which defines a stream type that allows Allegro CL to do stream-oriented I/O through the serial ports).

Both Allegro CL and Procyon Common Lisp are good, solid, reliable implementations of the Common Lisp standard, a standard which seems to be of increasing importance in the world at large and the Macintosh world in particular. Both will at least consider running on a two-megabyte machine, although realism dictates at least four to get anything accomplished, and, as with all systems, eight is something of a dream. Procyon offers a few more tools, such as the structure editor, and seems somehow “Macier.” Allegro offers an object system, easier Toolbox and OS access, and feels “Lispier.” Allegro CL also now benefits somewhat from being supported by Apple Computer, Inc. Individual preference will be the final arbiter in terms of which system you find yourself using.

[In the late-breaking news department, ExperTelligence will have Procyon Common Lisp 2.1 available by the time you read this. 2.1 will remove the limitations noted above, and the product line is divided into three different levels, as shown:

            Personal       Professional   Developer
Retail:     $620.00        $1250.00       $2000.00
Education:  $445.00        $900.00        $1450.00
Includes:   Procyon 2.1    Procyon 2.1    Procyon 2.1
                           CLOS           CLOS
                                          FFI
                                          Runtime
                                          ID

CLOS = Common Lisp Object System, FFI = Foreign Function Interface, Runtime = Standalone Application Generator, and ID = Interface Designer.

If the Developer’s version of Procyon sounds intriguing, it should. As you may know, ExperTelligence’s Interface Designer was the precursor to NeXT’s Interface Builder, which is a significant element in why the NeXT computer is so easy to program. CLOS answers my criticisms about Procyon lacking a real object system, and the FFI and Runtime systems are great if you’re doing serious commercial work. ExperTelligence tells me I’ll receive a review copy of the Developer’s system once all components of it have cleared customs as they come from the United Kingdom, and I will report on them once I have them. Procyon 2.0 owners should call ExperTelligence at (805) 967-1797 for upgrade information.]

A little bit earlier I mentioned Portable CommonLOOPS in passing. For some time there has been something of a war raging among a variety of object-oriented programming systems that have been written in Lisp. Lisp Machines, Inc. (now defunct) had Object Lisp, which survives in Allegro CL. Symbolics, Inc. and MIT have Flavors, which is offered by a variety of vendors and is available as an option for Allegro CL. Xerox PARC has Portable CommonLOOPS, the current generation of what started as LOOPS, became CommonLOOPS when Common Lisp was defined, and was then made “portable” in the sense that it has been customized for several of the most popular implementations of Common Lisp, including Allegro CL for the Macintosh.

One reason that I think Portable CommonLOOPS is important is that it’s a very powerful, flexible object-oriented programming environment that runs very well in Allegro CL. The other, and perhaps more important, reason is that an ANSI committee is standardizing the Common Lisp language, that standard will include the Common Lisp Object System (CLOS), and each version of Portable CommonLOOPS released by Xerox PARC comes a little bit closer to CLOS--which means it comes a little bit closer to the future. Also, Portable CommonLOOPS is in the public domain, which means the price is right and you get source code. If you’re thinking about writing a large object-oriented program in Common Lisp, think strongly about writing it in Portable CommonLOOPS. That way it will stand a much better chance of surviving the inevitable transition to an ANSI-standard Common Lisp implementation. The Portable CommonLOOPS sources are available via anonymous FTP from a number of UNIX sites on the nets, as well as from APDA (my understanding is that they’ll charge $20.00 or so for the distribution).

One problem with Portable CommonLOOPS is that the source code is the only documentation, apart from a handful of release notes that are really only of use if you’re already part of the “intelligentsia” with respect to Portable CommonLOOPS. Luckily, since Portable CommonLOOPS is moving toward being a CLOS implementation, you can largely get away with reading CLOS documentation and applying it to Portable CommonLOOPS (which is the whole point anyway). To that end, I recommend Winston and Horn Third Edition to the person who needs a complete Common Lisp text that devotes some space to CLOS, and Object-Oriented Programming in Common Lisp to the person who already has a working knowledge of Common Lisp but is perhaps a CLOS tyro. This book, by Sonya Keene, takes the reader through most aspects of CLOS from the bare basics to advanced subjects, using examples that actually mean something, such as process locking and implementing I/O streams within CLOS. Keene, an employee of Symbolics, Inc. and a member of the ANSI committee responsible for the CLOS standard and specification, obviously knows her subject well and presents it in a highly coherent manner.

All that has been going on has not necessarily been going on in the Common Lisp world, however. Since I last wrote about Lisp in MacTutor, Semantic Microsystems, developers of MacScheme and MacScheme+Toolsmith, have also been busy. As of this writing, the current version of MacScheme+Toolsmith is 1.51. This version includes support for much of Inside Macintosh Volume V; a few new items in the menus, such as “Load…,” which allows you to load a Scheme file without first opening it and without typing the full pathname in the transcript window; and some new contributed code.

The first contribution is a very nice macro called EXTEND-SYNTAX that uses a pattern-matching system to allow the construction of other macros that extend Scheme’s syntax. This utility is described in The Scheme Programming Language by R. Kent Dybvig. It makes writing fairly complex macros much less painful than it normally is in Scheme. As an example, consider the LET special form. As most Scheme programmers realize, what LET really does is wrap its body in a LAMBDA expression whose parameters are the names being bound, and then apply that LAMBDA to the corresponding values. With EXTEND-SYNTAX, you would express this idea like this:

; Here’s how the let macro could have been defined:

(extend-syntax (lett)
               ((lett ((x e) ...) body ...)
                ((lambda (x ...) body ...)
                 e ...)))

In other words, we’re defining a new piece of syntax, LETT, that can accept some list of pairs (that’s what the “(x e) ...” means) and some body of expressions (the “body ...”), and that will expand into a call: a LAMBDA whose parameter list is made up of the first elements of the pairs (that is, the names) and whose body is the body expressions, applied to the second elements of the pairs (that is, the values). In other words,

(lett ((x 3) (y 4))
 (+ x y))

would expand to:

((lambda (x y)
   (+ x y))
 3 4)

This is just a simple example, but hopefully you can see how expressing macros as syntactical patterns that expand to other syntactical patterns makes macro-writing much easier.

The other major contribution shipped with 1.51 is a MacScheme implementation of Texas Instruments’ SCOOPS system. This is the object-oriented programming system that TI wrote for and ships with their own PC-Scheme system, and Semantic Microsystems’ own John Ulrich did the MacScheme version. If you’ve never tried your hand at object-oriented programming and you own MacScheme+Toolsmith 1.51 or later, take a look at SCOOPS. The MacScheme version leaves some fairly important optimizations out, and unfortunately uses EVAL, which means that standalone applications generated with it will not have any dead code removed, since the routine that finds dead code can’t rip anything out lest it be needed for EVALing at run-time. These shortcomings aside, SCOOPS remains a nice example of how object-oriented programming can be implemented, and seems to be a fairly powerful system.

All of this assumes that you have some Scheme experience. However, it sometimes seems that only people who went to MIT or Indiana University to study computer science have even heard of Scheme. Semantic Microsystems must feel the same way--they recently announced Scheme Express, a compact Scheme for those who want to learn the language while investing a minimal amount of money. Priced at $69.95, Scheme Express should fit the bill quite nicely. Scheme Express is apparently extremely small and correspondingly slow--it will fit into 512K and will be quite comfortable even on a Mac Plus, but it is said to run at about 25% of the speed of MacScheme. On the other hand, compiled Scheme Express programs are said to be one quarter of the size of compiled MacScheme programs.

My advice to anyone and everyone who is curious about Lisp at all is this: give Semantic Microsystems a call at (503) 643-4539. Chances are pretty good that you’ll hear the cheerful voice of Kathi Williams at the other end of the line, or perhaps you’ll speak to John Ulrich (if you do, ask him about object-oriented programming using lexical closures as objects. Then sit back and be prepared to learn a lot). And please tell either of them that I sent you. Then ask about Scheme Express. At $69.95, you can’t go too far wrong as far as a learning environment is concerned. If you do get Scheme Express, the next step is to run out and buy a copy of Structure and Interpretation of Computer Programs, by Abelson and Sussman.

Abelson and Sussman is MIT’s introductory computer-science text. Don’t let that scare you off; many students face it every year with no computer programming experience, let alone any Scheme experience. The book is a gem; the principles that it espouses and design theory that it purveys apply regardless of your background or language preferences. The code given happens to be in Scheme, and the exercises are geared toward being implemented in Scheme. So with Scheme Express and this book by your side, you are well-equipped to learn Lisp, and have invested about $100 or so. That’s not unreasonable, even if you’re only mildly curious. There is much for me to learn from Abelson and Sussman; you might discover that there is for you, too.

Abelson and Sussman isn’t your everyday intro text. Things start off easily enough with Chapter 1, Section 1 having you find square roots using Newton’s method.

Section 2 deals extensively with why some algorithms are “better” than others (“better” in most cases meaning significantly faster) and has you write a function that tests for the primality of some number. This section is worth paying a lot of attention to--it thoroughly describes the various “orders” of certain algorithms (O(n²), O(n), O(log n), etc.) and gives you some mental tools with which to reduce the order of your algorithm (that elusive O(log n) algorithm is much faster than that intuitively obvious O(n²) one).

Section 3 introduces the idea that functions are just another kind of data, and in particular that functions can take other functions as parameters, and/or can return functions as their values. This is one of Lisp’s most distinctive features as a language--there is no enforced distinction between code and data. Functions can be passed as arguments, returned as values, stored in variables, etc. All told, chapter 1 conveys a good grasp of procedural abstraction in only sixty-seven pages.

Chapter 2 is about data abstraction. Section 1 covers how you might implement arithmetic operators for rational numbers, such as 1/3. A later example describes interval arithmetic, such as is used every day by engineers who use electronic components that have tolerance ranges, such as a resistor that is within 5% of having 10 ohms of resistance.

Section 2 provides a lucid discussion of hierarchical data, such as trees, and covers such seemingly divergent examples as symbolic differentiation, representing sets, and Huffman encoding trees.

Section 3 revolves around the notion that the representation of data should be irrelevant to whatever uses the data, assuming that the published interface remains the same. The example of complex numbers is used; complex numbers can be expressed either in rectangular form (real part and imaginary part) or polar form (magnitude and angle). The user of the complex number system should neither know nor care which representation is being used (in fact, both might be used at different times). A complex arithmetic system is implemented to demonstrate this idea, and is then generalized to demonstrate data-directed programming.

Section 4 deals with defining generic operators that can manipulate data of varying types. The exercises in this section integrate the normal arithmetic functions with the rational and complex ones that you wrote in earlier exercises. Problems of data type coercion are dealt with. The example of adding symbolic algebra (adding and multiplying univariate polynomials) is given, demonstrating that by taking advantage of the generic arithmetic operators, the polynomial functions can operate on polynomials with coefficients of any type known by the generic arithmetic functions. If coercion was added according to a prior exercise, then the functions can also handle polynomials with differing types of coefficients. Rational functions (rational numbers whose numerators and denominators are polynomials) are also added to the generic arithmetic system.

Chapter 3 is perhaps the most interesting chapter in the book, in my opinion. It talks about issues of “Modularity, Objects, and State.” Section 1, interestingly, talks about assignment. It may seem rather strange to have a textbook not get around to mentioning assignment until 167 pages have gone by, but the thing that most people don’t seem to understand about Lisp is that it’s not one of the so-called “procedural” languages. It is rather one of the so-called “functional” languages. “Functional” doesn’t mean that the language does things; it means that everything in Lisp is a function that returns some value. There is an entire branch of computer science devoted to the exploration of this approach to programming (for example, there’s an interesting beast called the applicative-order Y-combinator that allows you to write recursive functions without naming them--that is, without using assignment--a task that, on the face of it, seems impossible).

Section 2 explains in depth exactly what “environments” are, and how they influence how expressions are evaluated, and their relationship to variables and internal definitions.

Section 3 details how mutable data can be used to represent data structures such as queues and tables. The examples given here are among the most interesting in the book, in my opinion. The first is a digital circuit simulator that allows you to build circuits out of AND gates, OR gates, and inverters, specifying a signal delay for each component, attaching probes to various wires, and then applying signals to others. The probes show the signal and the simulation time at which the signal appeared, so that you can determine, for example, the component configuration(s) that will enable a binary adder to work as far as the delays are concerned. The second example is a system supporting the propagation of constraints through a constraint network. One box might constrain one connector to be the product of the other two connectors, for example. Another constrains its single connector to have a constant value, and so on. By building networks of these constraints, arithmetic relationships (as opposed to functions, which are one-way) can be supported. For example, the relationship between Celsius and Fahrenheit temperatures is 9C = 5(F-32). Note that this relationship can be solved algebraically for either C in terms of F or F in terms of C, and in most programming languages, that is exactly what you must do: solve for one variable in terms of the other, and express that as a function in the language. With the constraint system, you can make a network out of two multiplication constraints, an additive constraint, and three constant constraints. You can then put a probe on the two connectors, C and F, and set one or the other to a value. The system is bidirectional; it doesn’t matter whether you provide a value for C or F, the probes will show the correct values of both as the various constraints are activated.

Section 4 is where life really gets interesting. The previous three sections have dealt extensively with the concepts underlying assignment and local state, and have employed rather crude, “homebrew” techniques to implement what amounts to object-oriented programming. Section 4 instead takes advantage of the fact that functions are simply another Lisp data type that can be stored in lists or variables for later use to implement a beautifully strange and strangely beautiful data structure called a stream. This is not a “stream” in the Common Lisp sense. Rather, an Abelson and Sussman stream is a collection of data that “feels” very much like a list, but has some special properties. First of all, you must be very careful about using assignment in the presence of streams. That is, a stream is not mutable in the same way that a list is. Secondly, streams can represent some data structures much more efficiently than a list can, in the sense that not all of their elements need ever be evaluated (I’ll say more about this later). For this same reason, a stream can represent some infinitely-long data structures that cannot be represented as lists at all. A good example of this last ability would be the stream of all prime numbers, which isn’t difficult to generate as a stream, but is quite impossible to generate as a list.

Chapter 4 conveys the importance and power of metalinguistic abstraction. That is, it points out that oftentimes the best approach to using a programming language to solve a problem is to use that language to implement another language in which the problem can be trivially solved. Previous applications of this idea (the language for simulating digital circuits and the language for expressing networks of constraints) are pointed out. In Section 1, Scheme’s own evaluator is rewritten in Scheme.

In Section 2, this new evaluator is modified to examine the effects of using normal-order evaluation as opposed to applicative-order evaluation, as well as to examine the results of using various methods of binding values to variables.

Section 3 discusses packages, which are environments that distinguish among variables having the same name. Their implementation and use to prevent name conflicts in a generic arithmetic system is explained.

Section 4 shows how metalinguistic abstraction can be used to create a programming language that follows an entirely different paradigm than Lisp’s. Logic programming, also known as declarative programming, is discussed. Much attention is paid to the issues surrounding logical expressions such as AND, OR, and NOT (not to be confused with the Lisp primitives of the same names) and their differences from mathematical logic.

Section 5 details the logical query system, and goes into great depth about pattern matching, rules, and unification (a generalization of pattern matching that makes the application of rules possible). Again the emphasis is on the usefulness of being able to implement a new language in Lisp that can then be used to solve the original problem.

Chapter 5, ironically, descends from what might seem the extreme high end of the programming spectrum--using a language to implement another language--to what might seem the low end of that same spectrum, namely creating processes built around the concept of having a certain number of registers, a handful of simple instructions, and a stack. Section 1 covers the simulation of an arbitrary “register machine.”

Section 2 describes the “explicit-control” evaluator. This differs from the evaluator of chapter 4 in that rather than being written metacircularly (that is, rather than being a Scheme evaluator written in Scheme) it is written in the language of the simulated register machine. Issues such as sequence evaluation and tail recursion optimization are covered, as are the handling of special forms, such as conditionals. A sample run of the evaluator is discussed, and then the issue of internal definitions is touched upon.

Section 3 discusses the compilation of Scheme programs for the simulated register machine. Compiling expressions, compiler data structures, code generation, interfacing to the evaluator, and lexical addressing are all presented in some depth.

Finally, Section 4 reveals some of the secrets of Lisp’s dynamic memory management, discussing how memory is allocated and how unused memory is automagically returned to the heap for later reuse.

The scope of Structure and Interpretation of Computer Programs may seem a bit daunting at first, but with a real Scheme such as Scheme Express in front of you, the secrets within this text come tumbling out at a sometimes alarming rate, and there is a great deal of pleasure to be had from mastering some aspect of programming that had eluded you before (at least, that has been my experience).

When talking about chapter 3, I mentioned streams in passing and promised to talk about them in more detail, which I will do, because you can do some pretty interesting things with them. Since I’m going to implement them from top to bottom in Common Lisp, I’d better call them lazy-lists instead of streams, since Common Lisp uses the term “stream” to describe an I/O sink. First of all, an important thing to understand is that lazy-lists are generally used to model some situation where there is a flow of data from point A to point B. As a result, there are a handful of standard operations that you might like to be able to apply to the contents of any given lazy-list. For example, there may be some function that you wish to apply to every element of the lazy-list. Similarly, there may be elements of some lazy-list that you wish to filter out of that lazy-list if they satisfy some predicate. It is also quite common to take the elements of a lazy-list and accumulate them together somehow. Another common function is to append one lazy-list to another.

Let’s define the lazy-list data structure and primitive operations first. Then we can define these higher-order operations on lazy-lists. A lazy-list is nothing more than a pair: the first element is anything; the second element is a function that, when evaluated, returns another lazy-list consisting of the next element in the lazy-list followed by a function that returns the next lazy-list, and so on. At any point along this process, the value returned could actually be the-empty-lazy-list, signifying the end of the lazy-list (although, as has been noted, this is not always necessary; some lazy-lists are infinitely long).

Lazy-lists are built using something very much like CONS. In fact, we’ll call the function LAZY-CONS. The difference between LAZY-CONS and CONS is that LAZY-CONS doesn’t immediately evaluate its second parameter. Instead it delays it. DELAY is a standard part of Scheme, but it isn’t in most Lisps. In Common Lisp, you would implement DELAY something like this:

(defun memoize (fun)
  (let ((already-evaled nil) (value nil))
    #'(lambda ()
        (if already-evaled
          value
          (progn
            (setf already-evaled t)
            (setf value (funcall fun)))))))

(defmacro delay (thing)
  `(memoize #'(lambda () ,thing)))

This is actually an optimized delay; the MEMOIZE function is a bizarre little thing that takes a function as a parameter and wraps it up in a neat little package that basically determines whether the function has ever been called before. If it hasn’t, it calls it and preserves (and returns) its returned value. If it has, it simply returns the stored value. In other words, any function that we pass to MEMOIZE will only get called once, assuming that what is actually called is the function that MEMOIZE returns.

DELAY, then, is a macro (so it doesn’t evaluate “thing,” which would defeat the whole purpose) that first makes a function that, when called, returns “thing” as its value (by wrapping “thing” in a lambda expression of no arguments), then passes that function to MEMOIZE, so that “thing” only gets evaluated when it absolutely has to, and even then only once.

Building a lazy-list, then, would look something like this:

;;Here is our lazy-list CONStructor:
(defmacro lazy-cons (thing lazy-list)
  `(cons ,thing (delay ,lazy-list)))

The flip side of building a lazy-list, of course, is accessing its elements. Since it’s like a list, we’ll write LAZY-CAR and LAZY-CDR to get at its components. LAZY-CAR is very straightforward:

;;Here is LAZY-CAR:
(defun lazy-car (lazy-list)
  (car lazy-list))

LAZY-CDR is only slightly more involved, since the remaining item in the lazy-list data structure has been delayed:

;;Here is LAZY-CDR:
(defun lazy-cdr (lazy-list)
  (force (cdr lazy-list)))

FORCE is used to return the value of the cdr of the lazy-list, and thanks to MEMOIZE, it will do so whether it’s the first time or the umpteenth time without re-evaluating the original object every time. Since the cdr of a lazy-list is a function, FORCE is simply:

(defun force (promise)
  (funcall promise))
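
Just to make the memoization concrete, here’s a quick sketch of DELAY and FORCE at work (the name *CHATTY-PROMISE* is mine, purely for illustration):

;;The PRINT only fires the first time the promise is forced; after that,
;;MEMOIZE simply hands back the saved value:
(defvar *chatty-promise* (delay (progn (print "evaluating...") 42)))
(force *chatty-promise*)   ; prints "evaluating..." and returns 42
(force *chatty-promise*)   ; just returns 42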

With these functions, we have the bare-bones basics of the lazy-list manipulation functions. First let’s define a couple of useful items:

;;Define the empty lazy list:
(defconstant the-empty-lazy-list '())

;;How do we know if a lazy-list is empty?
(defun empty-lazy-list-p (lazy-list)
  (eq the-empty-lazy-list lazy-list))

Once we’ve defined these, for the sake of termination testing, formulating the remaining higher-order lazy-list functions isn’t too difficult. For example, appending lazy-list l2 to lazy-list l1 works like this:

;;This is to lazy lists what Common Lisp’s APPEND is to normal lists.
(defun append-lazy-lists (l1 l2)
  (if (empty-lazy-list-p l1)
    l2
    (lazy-cons (lazy-car l1)
               (append-lazy-lists (lazy-cdr l1) l2))))
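
For instance, here’s a sketch of appending two small lazy-lists and peeking past the seam (the names *ONE-TWO* and *THREE-FOUR* are just for illustration):

;;Append two small lazy-lists and peek past the seam:
(defvar *one-two* (lazy-cons 1 (lazy-cons 2 the-empty-lazy-list)))
(defvar *three-four* (lazy-cons 3 (lazy-cons 4 the-empty-lazy-list)))
(lazy-car (lazy-cdr (lazy-cdr (append-lazy-lists *one-two* *three-four*))))
; => 3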

Accumulating the elements of a lazy-list looks like this:

;; This is a nice, generic accumulation function that takes a combiner
;; function (usually #'cons, #'interleave, or something like that), an
;; initial value (typically 0 or 1 for numeric accumulations or '() for
;; lists) and some lazy-list to apply all of this to.  Note that the
;; combiner receives the accumulated rest of the lazy-list as a delayed
;; promise, so it should FORCE its second argument when it actually
;; needs that value (INTERLEAVE, below, does exactly that).
(defun accumulate (combiner initial-value lazy-list)
  (if (empty-lazy-list-p lazy-list)
    initial-value
    (funcall combiner (lazy-car lazy-list)
             (delay (accumulate combiner
                                initial-value
                                (lazy-cdr lazy-list))))))

;;This function prevents infinite recursion when accumulating infinite
;;lazy-lists.
(defun interleave (l1 l2)
  (if (empty-lazy-list-p l1)
    (force l2)
    (lazy-cons (lazy-car l1)
               (interleave (force l2) (delay (lazy-cdr l1))))))

There are times when what you have is a lazy-list of lazy-lists, and you want to reduce that to a lazy-list of non-lazy-list elements. Using ACCUMULATE and INTERLEAVE, both defined above, it’s easy:

;;This handy thing uses ACCUMULATE to flatten a lazy-list of lazy-lists.
(defun flatten (lazy-list)
  (accumulate #'interleave the-empty-lazy-list lazy-list))

There are many times when you want a lazy-list that has been constructed by applying some function to every element of some other lazy-list. That’s what LAZY-MAP is for:

;;This maps some proc across every element of some lazy-list.
(defun lazy-map (proc lazy-list)
  (if (empty-lazy-list-p lazy-list)
    the-empty-lazy-list
    (lazy-cons (funcall proc (lazy-car lazy-list))
               (lazy-map proc (lazy-cdr lazy-list)))))

It’s also sometimes desirable to remove all elements from a lazy-list that don’t pass some test. FILTER is the function that does that:

;;This returns the lazy-list that contains all items that, when passed
;;to pred, return something non-NIL.
(defun filter (pred lazy-list)
  (cond ((empty-lazy-list-p lazy-list) the-empty-lazy-list)
        ((funcall pred (lazy-car lazy-list))
         (lazy-cons (lazy-car lazy-list)
                    (filter pred (lazy-cdr lazy-list))))
        (t (filter pred (lazy-cdr lazy-list)))))

The next function is useful on those occasions that you need to apply a function to a lazy-list that side-effects the lazy-list, rather than producing a new lazy-list:

;;This is an awful lot like LAZY-MAP, except that it doesn't accumulate
;;its results, which is a fancy way of saying that you use LAZY-MAP if
;;you need a function result and FOR-EACH if you need side-effects.
(defun for-each (proc lazy-list)
  (if (empty-lazy-list-p lazy-list)
    'done
    (progn (funcall proc (lazy-car lazy-list))
           (for-each proc (lazy-cdr lazy-list)))))

Finally, there’s FLATMAP, which is just the combination of FLATTEN and LAZY-MAP. LAZY-MAP often produces a lazy-list of lazy-lists that needs to be flattened, so this is just some syntactic sugar to make that easier:

;;Flattening the result of lazy-mapping is so useful and so common that
;;there's a whole separate function for it.
(defun flatmap (f s)
  (flatten (lazy-map f s)))

Once in a great while, it’s nice to convert a regular list to a lazy-list:

;;Sometimes (ok, rarely) it’s nice to convert a list to a lazy-list:
(defun lazy-list (list)
  (if (null list)
    the-empty-lazy-list
    (lazy-cons (car list) (lazy-list (cdr list)))))
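
To see a few of these pieces working together, here’s a short sketch using the definitions above (the name *SQUARES* is just for illustration):

;;LAZY-LIST, LAZY-MAP, and FILTER working together:
(defvar *squares* (lazy-map #'(lambda (x) (* x x))
                            (lazy-list '(1 2 3 4 5))))
(lazy-car *squares*)                  ; => 1
(lazy-car (lazy-cdr *squares*))       ; => 4
(lazy-car (filter #'evenp *squares*)) ; => 4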

Another fairly common operation is nested mapping. That is, several lazy-lists are generated and some function is mapped across them, and the results are accumulated according to some other function. It’s worth defining a fairly complex macro, COLLECT, that will make code that does this easier to read, because the expanded code can be pretty illegible, and contrary to popular belief, good Lisp programmers do care about their code being readable! The goal of the COLLECT macro is to take expressions of the form:

(collect <result>
 ((<v1> <set1>)
  .
  .
  .
  (<vn> <setn>))
 <restriction>)

and expand them to:

(lazy-map #'(lambda (tuple)
              (let ((<v1> (car tuple)) ... (<vn> (ca...dr tuple)))
                <result>))
          (filter #'(lambda (tuple)
                      (let ((<v1> (car tuple))
                            .
                            .
                            .
                            (<vn> (ca...dr tuple)))
                        <restriction>))
                  (flatmap
                   #'(lambda (<v1>)
                       (flatmap
                        #'(lambda (<v2>)
                            .
                            .
                            .
                            (lazy-map #'(lambda (<vn>)
                                          (list <v1> ... <vn>))
                                      <setn>))
                        .
                        .
                        .
                        <set2>))
                   <set1>)))

It’s probably not too hard to see why you’d want a macro like COLLECT to create these kinds of structures. Abelson and Sussman doesn’t list this macro; it’s left as an exercise for the reader. Here’s my Common Lisp solution to that exercise:

;;This is the tricky one.  The COLLECT macro makes nested mappings a tad
;;easier than they would be otherwise, but this is the most complex macro
;;I've ever had to write.  Here goes nothing:
(defmacro collect (result pairs &optional (restriction t))
  (let ((vars (mapcar #'car pairs))
        (sets (mapcar #'cadr pairs))
        (lets (genlets pairs)))
    `(lazy-map #'(lambda (tuple)
                   (let ,lets
                     ,result))
               (filter #'(lambda (tuple)
                           (let ,lets
                             ,restriction))
                       ,(genmaps vars sets)))))

;;Given a list of pairs, this creates a let body based on tuple.
(defun genlets (pairs)
  (do ((i (1- (length pairs)) (1- i))
       (result '() (cons (cons (car (nth i pairs))
                               (list (list 'nth i 'tuple)))
                         result)))
      ((< i 0) result)))

;;This beast generates the flatmap/lazy-map sequence for the vars and sets.
(defun genmaps (vars sets)
  (labels ((genmaps-1 (vars sets depth)
             (if (null (cdr sets))
               `(lazy-map #'(lambda (,(car (last vars)))
                              (list ,@vars))
                          ,(car sets))
               `(flatmap #'(lambda (,(nth depth vars))
                             ,(genmaps-1 vars (cdr sets) (1+ depth)))
                         ,(car sets)))))
    (genmaps-1 vars sets 0)))
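
To make the pieces concrete, here is a sketch of what a small COLLECT form expands into, assuming S1 and S2 are any two lazy-lists you happen to have lying around (the names are hypothetical):

;;The original form:
(collect (+ i j)
         ((i s1)
          (j s2))
         (< i j))
;;expands, via GENLETS and GENMAPS, into:
(lazy-map #'(lambda (tuple)
              (let ((i (nth 0 tuple)) (j (nth 1 tuple)))
                (+ i j)))
          (filter #'(lambda (tuple)
                      (let ((i (nth 0 tuple)) (j (nth 1 tuple)))
                        (< i j)))
                  (flatmap #'(lambda (i)
                               (lazy-map #'(lambda (j) (list i j))
                                         s2))
                           s1)))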

Infinitely long lazy-lists can be both interesting and useful:

;;An infinite lazy-list of 1’s:
(defconstant ones (lazy-cons 1 ones))

;;A function to add two lazy-lists together:
(defun add-lazy-lists (l1 l2)
  (cond ((empty-lazy-list-p l1) l2)
        ((empty-lazy-list-p l2) l1)
        (t
         (lazy-cons (+ (lazy-car l1) (lazy-car l2))
                    (add-lazy-lists (lazy-cdr l1) (lazy-cdr l2))))))

;;The lazy-list of all integers:
(defconstant integers (lazy-cons 1 (add-lazy-lists ones integers)))

;;A function to scale all elements of a lazy-list by some constant:
(defun scale-lazy-list (c lazy-list)
  (lazy-map #'(lambda (x) (* x c)) lazy-list))
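
Nothing blows up when these definitions are loaded, because only the first element of each lazy-list is computed eagerly; everything else stays tucked away behind DELAY. A quick sketch of poking at the front of INTEGERS:

;;Only the elements you actually ask for get computed:
(lazy-car integers)                       ; => 1
(lazy-car (lazy-cdr integers))            ; => 2
(lazy-car (scale-lazy-list 10 integers))  ; => 10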

There are many interesting things that can be done with these functions and data structures. It isn’t terribly difficult, for example, to define the lazy-list of all prime numbers:

;;Here's the sieve of Eratosthenes written using lazy-list functions:
(defun sieve (lazy-list)
  (lazy-cons
    (lazy-car lazy-list)
    (sieve (filter
             #'(lambda (x) (not (divisiblep x (lazy-car lazy-list))))
             (lazy-cdr lazy-list)))))

(defconstant primes (sieve (lazy-cdr integers)))
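
One loose end: DIVISIBLEP hasn't been defined anywhere above. Presumably it just asks whether one number divides evenly into another; a minimal definition would be:

;;DIVISIBLEP isn't defined elsewhere; presumably it means
;;"is x evenly divisible by y?":
(defun divisiblep (x y)
  (zerop (mod x y)))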

Theoretically, all prime numbers can be easily gotten from this data structure. In actual practice you’re likely to run out of memory asking for large ones. Speaking of asking for elements of lazy-lists, a LAZY-NTH function would be very helpful:

;;Like NTH for normal lists, except lazy:
(defun lazy-nth (n lazy-list)
  (if (= n 0)
    (lazy-car lazy-list)
    (lazy-nth (1- n) (lazy-cdr lazy-list))))
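
With LAZY-NTH in hand, you can ask for primes by position; each one is computed only when you first ask for it:

;;Asking for primes by position:
(lazy-nth 0 primes)   ; => 2
(lazy-nth 3 primes)   ; => 7
(lazy-nth 10 primes)  ; => 31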

The hairy COLLECT macro that I wrote is used for nested mappings, as I mentioned above. One simple example is generating the lazy-list of all triples that represent wins on a tic-tac-toe board whose nine squares are numbered so as to form a magic square.

Numbered that way, the board has the property that the numbers in all of the “win” directions add up to fifteen. So what we’re interested in generating is the lazy-list of all distinct positive integers i, j, and k less than or equal to n that sum to s, where n is 9 and s is 15. Let’s write TRIPLES using COLLECT:

;;Find all triples of i, j, and k <= n that sum to s.
(defun triples (n s)
  (collect (list i j k)
           ((i (enumerate-interval 1 n))
            (j (enumerate-interval 1 (1- i)))
            (k (enumerate-interval 1 (1- j))))
           (= (+ i j k) s)))

ENUMERATE-INTERVAL simply returns the lazy-list of integers from its first parameter to its second parameter, inclusive. It looks like this:

;;Return the lazy-list of integers from low to high:
(defun enumerate-interval (low high)
  (if (> low high)
    the-empty-lazy-list
    (lazy-cons low (enumerate-interval (1+ low) high))))
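
To actually look at the triples, it helps to have a small utility that isn’t part of the lazy-list library above: something to spill the first few elements of a lazy-list into an ordinary list. A sketch, along with a sample call:

;;A small helper (not part of the library above): spill up to the first
;;n elements of a lazy-list into an ordinary list.
(defun lazy-list->list (n lazy-list)
  (if (or (zerop n) (empty-lazy-list-p lazy-list))
    '()
    (cons (lazy-car lazy-list)
          (lazy-list->list (1- n) (lazy-cdr lazy-list)))))

;;There are exactly eight such triples; the order in which they appear
;;depends on how INTERLEAVE weaves the intermediate lazy-lists together.
(lazy-list->list 10 (triples 9 15))
;; => a list of eight triples such as (9 5 1), (8 6 1), (7 5 3), ...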

As you can see, lazy-lists can be a powerful medium for expressing some interesting mathematical and logical ideas. If there is interest, I can explore more uses in future articles. One other example could be a lazy-list-based solution to the classic eight-queens problem. Another, somewhat more practical, idea would be to discuss why lazy-lists are useful in the design and implementation of a forward- and backward-chaining inference engine for deductive information retrieval. As always, I’d love to hear from you readers about what you’d like to see covered, so please feel free to write or call!

That’s all for this time. Enjoy!

 
