Thursday, December 14, 2006

The music of languages

Paul Rubin on c.l.l:

In musical terms, Python is like a Beatles song, very popular
and easy to sway and dance to. Lisp is like a Bach fugue.

Thursday, March 30, 2006

Methodical application of chance to the resolution of a problem

Toward the end of the thirteenth century, Ramon Llull (Raimundo Lulio or Raymond Lully) invented the thinking machine. [...] The circumstances and objectives of this machine no longer interest us, but its guiding principle - the methodical application of chance to the resolution of a problem - still does.

Ramon Llull's Thinking Machine, Jorge Luis Borges (1899-1986).

(quoted in Concepts, Techniques and Models of Computer Programming, chapter on Relational Programming)

Saturday, February 04, 2006

Literate programming

One of the developers of AXIOM (a symbolic and algebraic computation system written in Lisp), on the Common Lisp Gardeners mailing list:

I AM advocating writing literate programs. Writing for people
is much harder than writing for the machine but ultimately it
is people who use your code and who need to maintain it. At
this point in my life I've written uncountably many lines of
Lisp code only to watch it die. I believe that literate programming
will help the code live.

How many programs have you written that have died?

Why do academics end up with 100 papers published in journals
and programmers end up with nothing?

Why aren't there journals that accept literate lisp programs?
With referees? At conferences?

How many times have you seen papers presented giving "results"
of programs you can't see and can't run? Why isn't the code
embedded in the paper?

Is the lisp garden going to turn into a javadoc-like, developer-only,
collection of code snippets? Or are we going to write thoroughly
documented, carefully structured programs that can be maintained
and modified?

Assume that the lisp garden eventually reproduces the java library
functionality. I have the sources for the java library. There is
nothing in these sources that is written 'for the person'. The
sources are all 'for the machine' or 'for the developer'. But
what about trying to understand whole subsystems like the SQL
commands, or the networking commands, or the swing commands?

Will we do the same thing with Lisp? Why? It should be the case
that the whole X3J13 document is interwoven with the actual code
that implements it. If I REALLY want to know how setf forms work
then I can read both the standard description and the actual
implementation code in the same chapter.

Why isn't the CLIM spec woven into the CLIM implementation code?
Why can't I look for how a line is drawn and see both the spec
and the implementation in the same place?

Why can't I learn about garbage collection by reading the source
code of my lisp implementation? Why doesn't it contain the text
of the book "Garbage Collection: Algorithms for Automatic Dynamic
Memory Management" for the algorithm used?

How come there is not "paper" on hashtables that contains the lisp
code so I can just drag-and-drop it onto my running lisp and have
a new hash table implementation with the documentation that explains
the theory?

It's the late 90s... we should be able to do this.

(throw 'soapbox-exit nil)

Friday, January 06, 2006

The nature of CL

Rainer Joswig on c.l.l:

CL is a pragmatic blend of several programming facilities
from the Lisp history.

CL is a dynamic, functional, procedural, compilation supporting, interactive, incrementally extendable, reflective, symbolic, object-oriented, standardized, programmable, late-binding programming language.

And some more.

Friday, December 23, 2005

Conciseness is power

Kaz Kylheku on c.l.l:

Greg Bacon wrote:

>> As an exercise, I wrote code to find anagrams in a one-word-per-line
>> dictionary. See
>> Practices/Kata/KataSix.rdoc. Within the equivalence classes, I sorted
>> the words in ascending order and sorted classes in descending order of
>> size. See the code below:
>> (defun anagram-list (hash)
>>   (let ((anagrams '()))
>>     (maphash (lambda (normal words)
>>                (when (>= (length words) 2)
>>                  (push words anagrams)))
>>              hash)
>>     anagrams))
>>
>> (defun find-anagrams (path)
>>   (let ((anagrams (make-hash-table :test #'equal)))
>>     (with-open-file (s path)
>>       (loop for word = (read-line s nil)
>>             while word do
>>             (push word (gethash (normal-form word) anagrams))))
>>     (anagram-list anagrams)))
>>
>> (defun chars-of (word)
>>   (loop for i from 0 below (length word) collect (char word i)))
>>
>> (defun normal-form (word)
>>   (let* ((chars (chars-of (string-downcase word)))
>>          (normal-order (sort chars #'char-lessp)))
>>     (coerce normal-order 'string)))
>>
>> (defun longest-first (a b)
>>   (let ((lena (length a))
>>         (lenb (length b)))
>>     (cond ((> lena lenb) t)
>>           ((< lena lenb) nil)
>>           (t (loop for aa in a
>>                    for bb in b do
>>                    (when (string-lessp bb aa) (return nil))
>>                    t)))))
>>
>> (let ((anagrams (find-anagrams #P"words721.txt")))
>>   (loop for group in anagrams do
>>         (setf group (sort group #'string-lessp)))
>>   (loop for group in (sort anagrams #'longest-first) do
>>         (format t "~{~a~^ ~}~%" group)))
>> For comparison, I wrote Perl code to do the same task:
>> #! /usr/bin/perl
>>
>> sub longest_first {
>>     my $bylen = scalar(@$b) <=> scalar(@$a);
>>     if ($bylen) {
>>         return $bylen;
>>     }
>>     else {
>>         my $byalpha;
>>         for (0 .. $#$a) {
>>             $byalpha = $a->[$_] cmp $b->[$_];
>>             return $byalpha if $byalpha;
>>         }
>>         0;
>>     }
>> }
>>
>> @ARGV = "words721.txt" unless @ARGV;
>>
>> my %group;
>> while (<>) {
>>     s/\s+\z//;
>>     push @{ $group{ join "", sort split //, $_ } } => $_;
>> }
>>
>> my @anagrams = map [ sort @$_ ], grep @$_ >= 2, values %group;
>>
>> for (sort longest_first @anagrams) {
>>     print join(" " => @$_), "\n";
>> }
>> The sort comparator is syntactically awkward and bulky, but in the other
>> areas, the Perl code seems more concise.

Conciseness is power, when it is achieved by actually removing
irrelevant details so that you can express the computational ideas more
directly. The Perl code does this for some things here. For instance,
while (<>) hides a lot of logic: looping over file names, opening the
files, and reading lines.

The Common Lisp language doesn't have this operator.

But you see, it doesn't have to. If you want that operator, you can
make it yourself.

If I made it myself, I probably wouldn't design it that way because it
relies on a global variable, namely the @ARGV. I would pass in the list
of files as a sequence. Or I could have it use some default list named
by a special variable. The programmer could override this special
variable by binding it to something else around the invocation of the
file-munging construct.

I could also design the file munging construct so that it can nest with
itself, by not using a fixed variable name for the line that is read:

(defmacro read-lines-from-files (line-string-var file-list-form) ...)

(read-lines-from-files (word '("words721.txt"))
  (push word (gethash (normal-form word) anagrams)))
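
Kaz gives only the signature of this macro; the following is a minimal sketch of one possible body, assuming the destructuring argument list implied by the usage above. It is a guess at the intent, not Kaz's implementation:

```lisp
;; Hypothetical sketch -- the post provides only the signature.
;; Binds LINE-VAR to each line of each file in FILE-LIST-FORM.
(defmacro read-lines-from-files ((line-var file-list-form) &body body)
  (let ((file (gensym "FILE"))
        (stream (gensym "STREAM")))
    `(dolist (,file ,file-list-form)
       (with-open-file (,stream ,file)
         (loop for ,line-var = (read-line ,stream nil)
               while ,line-var
               do (progn ,@body))))))
```

Gensyms for the file and stream variables keep the macro hygienic, so it nests with itself as Kaz requires.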

This is as concise as I would like it. It is /semantically/ concise and
clean. I deliberately give the macro a descriptive name, so people can
guess what it is without having to decipher its internals or read any
documentation.

Any further terseness, by designing cryptic nonsense like while (<>),
or "cheating" by using global variables and fixed variable bindings, is
bad design.

Also note how some of your Lisp is more verbose simply because you
broke up the computation and gave meaningful names to intermediate
results.

In the Perl, you have a nested sort and join thing over the characters
of the string.

In the Lisp code, I learn that to form the keys, words are transformed
into something you call a "normal form", which is formed by folding to
lower case and sorting the characters to a "normal order".

These little tidbits serve as documentation which give me a vocabulary
for talking about the program and help me understand what it does.

Even if I didn't know what STRING-DOWNCASE does, if it wasn't a
standard Lisp function but defined somewhere in your program, I could
probably guess what it does.

I only know that your Perl program performs string downcasing because I
have taken your word for it that it is equivalent to the Lisp code.

I'm guessing that it's done by:


I would never guess what this does.

Lastly, compare only the "longest first" functions between the two
programs. Perl doesn't have any substantial tricks which helped there.
There is that cute <=> thing which computes -1, 0 or 1.

That <=> operator inspires this idea: define a macro operator in Lisp
which evaluates two expressions, and then based on the result of
comparing their values, evaluates one of three clauses:

(defmacro less-greater-equal-case (left-form right-form less-form
greater-form equal-form)

Then you could do this:

(defun longest-first (a b)
  (less-greater-equal-case (length a) (length b)
    nil
    t
    (loop for aa in a
          for bb in b
          when (string-lessp bb aa)
            return nil
          finally (return t))))

Note, by the way, how I changed your LOOP slightly to take advantage of
the WHEN and FINALLY clauses.

Of course, the length of the LESS-GREATER-EQUAL-CASE operator counts
against the length of the program. Or should it? Is that fair? You can
put that into a library and reuse it.
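
The post leaves the macro's body open. A minimal sketch, under the assumption that each operand should be evaluated exactly once; this is one plausible implementation, not Kaz's:

```lisp
;; Hypothetical sketch of the three-way comparison macro from the post.
;; Evaluates LEFT-FORM and RIGHT-FORM once each, then evaluates exactly
;; one of the three clause forms.
(defmacro less-greater-equal-case (left-form right-form
                                   less-form greater-form equal-form)
  (let ((left (gensym "LEFT"))
        (right (gensym "RIGHT")))
    `(let ((,left ,left-form)
           (,right ,right-form))
       (cond ((< ,left ,right) ,less-form)
             ((> ,left ,right) ,greater-form)
             (t ,equal-form)))))
```

With this definition, the LONGEST-FIRST above compiles to a plain COND over the two lengths, with the LOOP handling the equal-length tiebreak.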

Wednesday, October 26, 2005

Cleaning the CL-USER namespace at startup

A recipe from Pascal Bourguignon on c.l.l:

I put this in my ~/.common.lisp which is loaded by all my ~/.clisprc,
~/.sbclrc, ~/openmcl-init.lisp, etc...

;; Clean the packages imported into COMMON-LISP-USER:
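
The snippet itself did not survive the transfer to this page. What follows is a hypothetical reconstruction of such a cleanup, assuming the goal is simply to make COMMON-LISP-USER use no package other than COMMON-LISP itself:

```lisp
;; Hypothetical reconstruction -- the original recipe was lost.
;; Stop COMMON-LISP-USER from using every package except COMMON-LISP.
(mapc (lambda (package)
        (unuse-package package "COMMON-LISP-USER"))
      (remove (find-package "COMMON-LISP")
              (copy-seq (package-use-list "COMMON-LISP-USER"))))
```

COPY-SEQ guards against mutating the list that PACKAGE-USE-LIST returns while UNUSE-PACKAGE is modifying the package's use list.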

Sunday, October 16, 2005

Lisp is a boon for noobs

Alan Crowe on comp.lang.lisp:

Pascal Costanza wrote an interesting account
of the life cycle of languages, focusing on the accumulation
of complications, using Java as an example.

Then he repeated the controversial bit:

> Lisp was never designed to be a simple language to be easy
> to learn for newbies or to program simple applications in
> it.

True. Lisp was designed by experts for their own personal use.

> There are lots of examples where you can explain
> concepts only in terms of other concepts that you can only explain later.
> That's why I stand by my claim that Lisp is harder to learn than other
> languages - because it is mostly concerned with extendibility that you
> can only really take advantage of as an expert Lisp programmer and that
> other languages don't even touch.

I'm unconvinced by this. I would be more persuaded by a
tracing of a causal chain from a concept through to it
causing trouble for a beginner. I think this would be a
difficult argument to make because the argument would not
work if it used a computer science concept that would cause
a beginner grief whatever programming language they were

Let me suggest a causal chain that goes the other way.

Lisp's greatest claim to fame is as the first practical
programmable programming language. Experts use Lisp for
meta-programming. This whole approach is hugely
controversial. You are working in a team and reading a
colleague's code. He has made use of defmacro, adding new
constructs to the language. What are the semantics of his
constructs? If he has named them using the jargon of the
application area, if they do `the right thing' by the
conventions of the application area, the semantics may work
out all right, but will you even get that far? How do you
parse a language containing constructs that you haven't seen before?

Lisp pushes a vigorous vision of how this is supposed to
work. It uses a fully parenthesised prefix notation. The
idea in other languages is to have key-words or reserved
words that direct the parsing. In other languages adding
new reserved words stops your colleagues from reading your
code because they cannot even parse it. In Lisp the
reserved words do not direct the parse. The parse is made
explicit with the parentheses. So you can add new "reserved
words" without making your code un-parsable by others.

Now let us shift focus to the beginner. He faces a problem.
How do you parse a language containing constructs that you
haven't seen before? Unlike the expert, the cause of his
problem is not that the constructs are newly minted. The
cause of the beginner's problem is that he is just starting
out on learning the language, so all the constructs are new
/to him/. Nevertheless it is essentially the same problem.

Indeed, a beginner may recognise all too well what the
parentheses are there for. They are training wheels on the
syntax. Instead of seeing "if form1 form2 form3" and racking
his brain for the rules about when an if has an else clause,
the beginner is spoon fed "(if form1 form2) form3" or
"(if form1 form2 form3)". He may feel patronized by being
given a toy language and demand to learn a proper one, in
which the reserved words guide the parse and there is a
grammar to learn.

The beginner may have a hard time grasping that the
parentheses are not training wheels and are not for him. The
point that they are there to help industrial teams exploit
meta-programming probably goes right over his head.

My claim is that the "by experts, for experts" nature of
Lisp accidently bestows a great boon on the beginner.

One can make the same claim about other aspects of the
syntax. Experts realise that writing `m.a(b)' instead of
`(m a b)' doesn't buy you anything. So they left it out of
Lisp. That is they left it out because they didn't want to
be bothered with it themselves, and they were designing the
language for their own use. But it doesn't help anybody! So
again the beginner gets a free ride. By using a tool intended
for experts, the beginner misses out on the spinach and
rhubarb of dumbed down languages, and that makes the
beginner's life more pleasant.

The same thing happens with arithmetic. The bignums and the
rationals are there because experts want to write algebra
systems. They are not there to make arithmetic in the
language work in school book fashion for the sake of
children learning the language. Nevertheless having the
arithmetic do the right thing is a substantial advantage
for any child looking for a first programming language.

I think that there is an unacknowledged emotional
undercurrent here. You don't let your child play with power
tools, electric drills, circular saws, etc. Why not?
Children are weaker than adults and would benefit more from
the power assistance. Indeed children's enthusiasm is often
dimmed by the time it takes to get worthwhile things done,
and access to power tools would help with this. So why not?

You could say that it is just obvious and get annoyed with
the stupid question. That is a mistake. There is a real
reason why not. Power tools really are dangerous. Digging
for the real reason is empowering. You are empowered to see
the limitations of metaphor.

When we ask "should beginners get power tools intended for
experts?" we naturally tend to "No!", fearing accidents and
trips to the hospital. But there is no danger of severed
limbs in tapping away at the computer keyboard in whatever
language. If there are drawbacks in giving a beginner Common
Lisp as a first programming language, those drawbacks need
to be made explicit and compared against the advantages.

Alan Crowe

Wednesday, September 28, 2005

The wonders of single dispatch

On the supposed complexity of CLOS, and Smalltalk's object model (comp.lang.lisp):

Pascal Costanza wrote:

>> Another example is the complexity of CLOS. In other OO
>> languages you have much fewer options (like just single
>> inheritance and single dispatch) which, I think, makes
>> learning such languages simpler because you don't have to
>> worry about not using features that aren't there in the
>> first place.

In the late 80's a friend tried to show me Smalltalk. He
typed in a few things, and it looked cool, so I wanted to
have a go. That was nearly 20 years ago, so I don't remember
any Smalltalk, but I do remember what I wanted to do, and in
Common Lisp it would look like this:

(defclass mod7 ()
  ((n :accessor n
      :initarg :n)))

I imagined that a simple example of an object would be a
number modulo 7. The object has internal state, the value of
the number, but it is /internal/, its objectness gives it a
distinctive meaning.

If I just press on with code in CL you will soon see what I
was trying to do

(defclass mod12 ()
  ((n :accessor n
      :initarg :n)))

(defvar 5mod7 (make-instance 'mod7 :n 5))
(defvar 6mod7 (make-instance 'mod7 :n 6))
(defvar 5mod12 (make-instance 'mod12 :n 5))
(defvar 6mod12 (make-instance 'mod12 :n 6))

(defmethod add ((x mod7) (y mod7))
  (make-instance 'mod7 :n (mod (+ (n x)
                                  (n y))
                               7)))

(defmethod add ((x mod12) (y mod12))
  (make-instance 'mod12 :n (mod (+ (n x)
                                   (n y))
                               12)))

(describe (add 5mod7 6mod7))
#<MOD7 ...> is an instance of class #<STANDARD-CLASS MOD7>:
 The following slots have :INSTANCE allocation:
  N    4

(describe (add 5mod12 6mod12))
#<MOD12 ...> is an instance of class #<STANDARD-CLASS MOD12>:
 The following slots have :INSTANCE allocation:
  N    11

It is a very simple idea: 5+6=4 mod 7 but 5+6=11 mod 12.

It was the simplest example I could think of to try out
object oriented programming. I wanted to try out having a
function do different things to objects of different kinds,
so I needed two kinds of objects, integers modulo 7 and
integers modulo twelve, and a function, addition, that could
plausibly behave differently on the two different kinds of object.

I'm not sure at what age children are taught arithmetic
modulo n. Arithmetic modulo 10 (i.e. 5+6 is one unit and one
ten) must be learned young (8? 9?).

I thought my example was childlike in its simplicity. I was
thunderstruck when there turned out to be a technical
difficulty with doing this in Smalltalk.

I thought that the differentness of the classes, with mod7
being different from mod12, was an inherent property of the
classes, yet somehow it seemed to be a relative property,
dependent on the object's position in the argument list,
visible if the object was the first argument, yet invisible
if the object was the second argument.

I found this incomprehensibly awful. It completely destroyed
the object metaphor. Nothing that I was familiar with
behaved like this. It was as though I had a bowl of apples
and oranges and I could pick up any single fruit and it would
stay an apple or an orange, but when I picked up a second
one its distinctive appleness or orangeness would vanish and
I would only have a generic fruit in my hand.

When I'm writing code I don't merely anthropomorphize my
functions, I decenter and become them.

(defun foo (n)
(cond ((evenp n) ...

NOT "foo checks whether n is even ...", BUT "First /I/ check
whether n is even ...".

So being unable to see whether my second argument was mod7
or mod12 felt like losing an eye. I fled in horror from this
hideous, mutilating language. The trauma of my first
encounter with single dispatch remains vivid in my memory
nearly twenty years later, the sub-basement in St Stephen's
Street, the grey light from the two windows behind me,...

I often see attempts to justify single dispatch by a concern
for human factors, claiming that it is easy to learn and simple to
understand. My experience could not be more different.

Alan Crowe

Tuesday, September 27, 2005

A discussion of static typing, with actual arguments, on comp.lang.lisp:

Matthias Buelow frightened me by writing:

>> I don't know why it seems to be difficult to understand
>> that for some application domains you'd need the compiler
>> to do as much work as possible to avoid any kind of
>> runtime error (and that implies static type checking),


>> while for other domains it might be more beneficial to use
>> a more dynamic approach.

>> Everything where human life or expensive property is at
>> risk would fall into the earlier domain (say, control
>> software for a nuclear reactor, the fly-by-wire software
>> of aircraft, car electronics).

Static type checking is the 2nd feeblest possible kind of
checking, just better than not bothering at all. It tells
you that values belong to the set from which code was
intended to select them, but not that it picked the correct one.

Static type checking tells me that compute-control-rod-motion
returns something belonging to the control-rod-motion type,
but not whether it moves the control rod in or out. I still
need to test whether the motions are in the correct
direction and of the correct amount.

What kind of type error would survive such testing? This is
not a rhetorical question. We know how a type error can slip
through testing: when there are paths in the code that are
not exercised. So here is the scenario:

There is a path in the reactor control software that has not
been tested. If we used a dynamically typed language we
might get a type error at run time, which might trigger the
emergency shut down. If we had used a statically typed
language we would not get a type error. Either way is really
scary. If we haven't tested that path through the code, then
the control software might be pulling the control out when
it is supposed to be pushing it in. If you didn't test it,
it doesn't work.

Static typing might be OK for crappy consumer software. If
fast forward makes your video recorder rewind, then you can
probably work around it by using fast rewind to fast
forward. Bugs, yeah, they happen.

Writing high integrity software implies going so far beyond
what static typing can offer that static typing offers no
technical benefit. The impact of static typing is purely
psychological: the code compiles, so it is half way to being
correct. Since it is human nature to feel that half a loaf
is better than none, this psychological aspect undermines safety.

Alan Crowe

Sunday, July 24, 2005

The size of Common Lisp

The size of Common Lisp (comp.lang.lisp, Rob MacLachlan, 1991):

So what about the size of Common Lisp? Is Lisp a VAX? Consider the
resemblance: Lisp, like the VAX, is said to be
-- Big (too big to run on a VAX)
-- Old (older than a VAX)
-- Slow
-- Sold by DEC

What lessons (if any) can language designers draw from the victory of RISC?
-- Choose primitive primitives and migrate complexity upward.
-- At any given level, implement only those features that are cost-effective.
-- Optimize operations with a high dynamic frequency in favor of rare ones.

Note that the lesson of RISC is *not* "choose simple systems." Considering
the additional complexity of the compiler (due to the optimization a RISC
processor demands), a RISC-based programming system is more complicated
overall. A solution should be as simple as possible, and no simpler.

Consider the minimalist-maximalist dimension of the system design space:
-- A minimalist design is as simple as possible (in hopes of low production
   cost.)
-- A maximalist design is as complex as is useful (in hopes of maximizing
   flexibility and ease of use.)

RISC was initially publicized using largely minimalist arguments:
"Our chip is faster because it has fewer instructions."

Although the "RISC philosophy" has won over almost all hardware architects,
the minimalist argument has been largely rejected as simplistic. This is
why you see RISC machines with hundreds of instructions. The key point is
that although a simple CPU design is inherently cheaper to design and
manufacture, it is not true that the simplest CPU chip invariably produces
the most cost-effective computer *system*.

A minimalist programming system is based on a small set of cheap operations.
Whenever you need another operation, you implement it using these cheap
operations. At least for simple problems, this results in programs that use
about as little hardware as possible. There are two major problems with
minimalist programming systems:
-- The minimalist design goals totally ignore the cost of programming.
-- The hardware efficiency advantage of minimalist systems is only clear-cut
with minimalist programming problems.

Today there is no such thing as a minimalist programming system. All
programming systems (assembler, C, whatever) have accepted some reduction
in hardware efficiency in exchange for an increase in ease of programming.
This is widely accepted as true, as a "good thing". [But programming systems
still do differ in their position on the minimalist-maximalist continuum.
More later...]

It is less widely acknowledged that minimalist systems aren't that much more
efficient for solving complex problems. Could you write X much more
efficiently in assembler than its current C implementation? Before you answer
"Hell yes!", I point out that I mean X11, xlib, resources, widgets, xtk and
motif, with total software pin-for-pin compatibility with the C version.

Of course, nobody would attempt to do such a thing. If someone wanted a
maximally efficient window system, they would start by throwing out huge
amounts of "unnecessary" functionality. Only after they have finished
crafting a minimal window system for their application would they consider
anything drastic like assembly coding.

Everybody agrees that X contains lots of unnecessary stuff. But given the
evident truth of this, it is amazingly difficult to find features that almost
everyone agrees are unnecessary. This is "design by committee" in a nutshell.
If you were writing a window system for just you to use, then you really could
throw out all the junk. But if you want your office-mate to help you, then
you may find he has his own ideas. Pretty soon you have keymaps and
environment variables. And he may want to draw splines.

The bottom line is that today's programming systems are used by many people.
This means that a programming system must please many people's tastes and
cater to many people's needs. This means there will be some mismatch between
your needs and the available offerings, but there is *no better way*. If you
want to buy a car, but you hate arm-rests, it is cheapest to buy a Ford off
the show-room floor and cut off the arm-rests.

Any system over a certain small complexity has to be a multi-person effort.
At this point, the minimalists are no longer offering an alternative -- they
are merely bitching that systems that complex *should not be*. There are some
things man was not meant to know... God's will, Dijkstra's will, whatever.

But suppose you did decide to reimplement on top of a minimalist system. If
so, there is a real risk that reimplementing "just what you need" will result
in a less efficient program. To be sure, if you were as wizardly as the X
implementors and spent as much time tuning as they did, then you could get the
same or better efficiency for your operation subset. But if you say, "2-d
trees are too hairy, I'll use a linear search", then you may be in for a
nasty surprise.

I think that in reality, the biggest efficiency advantage of minimalist
programming systems is not any inherent advantage of reimplementing all
the time, but rather that:
**** Minimalist operations provide a good Efficiency Model ****

An efficiency model is a tool the programmer uses to tell how expensive a
particular code fragment will be. In the current state of the art,
application of efficiency models is purely manual: you stare at the code.
If you see a lot of operations or a lot of repetitions, then you know the
code is expensive.

In C, the efficiency model is so straightforward that Jon Bentley can
publish an article telling you how to predict the performance of your C
program with 80% accuracy. In contrast, most of today's Lisp programmers
are working with internal efficiency models having perhaps 5% accuracy. And
every time somebody writes a Lisp program 20x slower than a C program,
people conclude "Lisp is terribly inefficient."

o.k., now I've stopped baffling you with bullshit about window systems, and
we're back to the Common Lisp v.s. Scheme v.s. C wars. And RISC v.s. CISC
too! Way to go...

I'll restate my RISC lessons here so that you don't have to page back:
-- Choose primitive primitives and migrate complexity upward.
-- At any given level, implement only those features that are cost-effective.
-- Optimize operations with a high dynamic frequency in favor of rare ones.

Remember those 100 instruction multi-million transistor RISCs? Well, the
reason that new instructions creep into RISC processors is one of the reasons
that new functions creep into Lisp. Integer multiplication was added to the
SPARC because some people use it a lot, and it is a pain for each of them to
design their own hardware multiplier and hook it into the motherboard.
Similarly, Common Lisp has hash-tables because some people use them a lot, and
it is a pain for everyone to have to design their own and hook them into the
garbage collector.

With either integer multiplication or hash-tables it is possible to come up
with high-level software solutions that don't require a soldering iron, but
you will pay a big efficiency hit.

The *other* reason why functions creep into Common Lisp is a strictly
maximalist one: to get economies of scale by standardizing the interfaces to
useful functionality. In a word, reuse. In fact, this is a lot of the reason
that CISCs got so complicated: not because the hairy instructions were much
faster, but because they were easier to use in assembly language.

RISC has to some degree punted on the interface standardization issue by
observing that only compilers and assemblers need to know the interface to the
hardware. In other words, they pushed the interface up a level. Programming
systems can do this too, but at some point, there has to be a top level that
programmers can recognize. This strongly encourages standardization of the
intermediate layers as well (to avoid duplication of effort).

Common Lisp is definitely a maximalist programming system. Does this mean
that it is a total loss for efficiency? I think not, but there are some
major efficiency challenges:
1] Make better efficiency models for Common Lisp. Add automatic compiler
efficiency notes that flag simple problems, and document advanced compiler
capabilities so that programmers can exploit them.
2] Eliminate unnecessary inefficiencies in Common Lisp by adding complexity
to the implementation:
- Compiler optimizations provide graceful performance degradation for
less-tuned programs.
- Generational GC makes consing cheap and eliminates offensive GC pauses.
3] Make simple Common Lisp programs comparably efficient to simple C
programs. This is the "delivery" problem.

The Python compiler for CMU Common Lisp has already addressed 1 and 2. A
case study is Scott Fahlman's "export" connectionist AI simulator. This is
a medium-size float-intensive application written by someone with a good
Lisp efficiency model. It was written well before Python's number-crunching
capabilities were designed, and was tuned to perform well in the
then-available Lisp systems.

When first compiled with Python, there were more than 100 efficiency notes.
Even with all these problems, the program ran 2x faster than the commercial
Lisp on the same machine (I won't say whether it was Franz or Lucid.)
After fixing type declaration problems revealed by these notes, it was 5x
faster. Interestingly, a version of this program had been recoded in C for
speed. The Python version took only 1.6 times as long as the MIPS C compiler.

Floating point? Who cares? That's "unnecessary"...

The point is, that with sufficient effort and complexity in the Lisp
implementation, users can fairly easily come within a factor of 2 of C, even
for problems that seem "un-Lispy". In other words, the efficiency model has
been improved from 5% to 50% accuracy.

As for 3 above (the delivery problem), there is still a lot of work to be done
there. Both Lucid and Franz are working on it, and I have some ideas too.
The problem is how to get a relatively small binary for a delivery system.
The difficulty is in figuring out how to do this without throwing the baby out
with the bath-water. People don't use Lisp instead of C because they like its
funky syntax. They use Lisp instead of C because it has lots of useful stuff
built in, and is easy to extend. Unfortunately, those are the exact same
reasons that "ld" doesn't work well on Lisp programs.

What I find more interesting than the prospect of making FORTRAN-in-Lisp as
fast as FORTRAN or C-in-Lisp as small as C, is the prospect of making
Lisp-in-Lisp as fast as possible. Now we have FORTRAN efficiency in Lisp, but
how about making efficient use of all the stuff that's in Common Lisp but not
FORTRAN? Although that's lots of stuff, I assure you that there are few
useless features. I recently read through the index of functions, macros and
special forms in CLtL, and it was rare to find an operation I had never used
(and I found none that I didn't know what they did.)

The real answer to "Why is Common Lisp so big?" is "Compared to what?"
If your system is very small compared to Common Lisp, then maybe you should
attack a more complex problem. Big systems easily exceed the size of
Common Lisp, making the penalty for having all that stuff around relatively
small.

As systems become more complex, the original CLtL I Common Lisp will come to
seem less and less complex. Remember ALGOL with no I/O? Fortunately, Common
Lisp is growing to keep up with progress in computer science.

Robert A. MacLachlan

Monday, July 18, 2005

Sparseness in language design

Kent Pitman on comp.lang.lisp, replying to a poster who writes:
> Are you familiar with Gauche Scheme?  You can define methods on types
> so that you can do something like:
> gosh> (define-method object-apply ((s <list>) (i <integer>))
>         (list-ref s i))
> #
> gosh> (let ((x '(a b c d)))
>         (x 2))
> c
> Or, of course, you could make it return the ith cdr, i concatenations
> of the list, etc.
> This is kind of neat, but I think that most of the time when you
> evaluate (a-list 2) it's a mistake, and I'd rather have it raise an
> error than fail silently. But I've never used it in practice.

This is the property I keep calling "sparseness" in language design.

In assembly language, it's common to make 0 be an invalid opcode because
0's happen a lot and if you execute non-code by accident, you'll hopefully
quickly find a 0 and the program will stop.

The analogy in higher level languages is to assure that you don't make
everything be "defined".

People with only a superficial familiarity with TECO used to fear that it
would just run forever if given a wrong program, since every single
character was an operator. But in practice it would quickly get arity
or other bounds errors. I'd bet the same is so of APL.

In C++, you often run with NULLs well past where you wish because they
match up typewise with pointers to objects even though NULL really has
a different "shape" than other pointer data of the same "type". When you
finally get in the debugger, finding the source of the NULL is sometimes
quite difficult.

In Common Lisp, changes to do things like make everything generic would be
a step in the direction of running longer after an error had happened, only
finding yourself in the debugger when the good debugging context was gone.

- - -

The other problem with this is that the BIG temptation is to want to do
just as you say--to define LIST. But CL's basic theory of design says
that you should never define things that are in public/shared packages since
other modules might be tempted to make incompatible definitions of the
same public/shared symbols. For user data, of course, it's a different
matter. But then, it's not hard to do

(defmethod ref ((v vector) (key fixnum)) (aref v key))

(defmethod ref ((h hash-table) (key t)) (gethash key h))

and then to define your own REF extensions in your own package.

CL doesn't have any way at all to define new things that can go in the
car of a list, so the issue you actually cite in your example doesn't
come up. You'd have to write (funcall a-list 2) in order for the example
to work in CL, so at that point doing (ref a-list 2) is just as easy
and requires no special language support.
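
Pitman's REF idea can be fleshed out in one's own package, just as he
suggests. A sketch; the package name and the extra LIST method are
illustrative additions:

```lisp
;; Define REF on our own symbol, in our own package, rather than
;; redefining anything in a public/shared package.
(defpackage :my-utils
  (:use :cl)
  (:export #:ref))
(in-package :my-utils)

;; One accessor name for several aggregate types.
(defgeneric ref (collection key))

(defmethod ref ((v vector) (key fixnum)) (aref v key))
(defmethod ref ((h hash-table) key) (gethash key h))
(defmethod ref ((l list) (key fixnum)) (nth key l))

;; (ref #(a b c) 1)  => B
;; (ref '(a b c) 2)  => C
```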

My weird (and properly-named) project
addresses the issue of finding syntactic common ground between CL and
Scheme. I didn't make much of a success of it, but I think it's tractable.
Maybe some day some bored soul will complete it...

Sunday, July 10, 2005

Emacs and Lisp mode: keyboard shortcuts

Indenting expressions with Emacs, and a few other shortcuts (Joe Marshall in c.l.l):

The obvious one is

M-C-q indent-sexp

Which will re-indent the sexp based on the parentheses and symbols
in the CAR of the list. If the resulting indentation is bogus, you
probably have an unbalanced paren or something.

But it is *very* worthwhile to learn these:

M-( insert-parentheses
Puts in a () pair and puts point between them.

M-C-f forward-sexp
M-C-b backward-sexp
Go forward or backward by s-exp.

M-C-k kill-sexp
M-C-del backward-kill-sexp
Delete an sexp.

M-C-t transpose-sexps
swaps the sexps on either side of point, moving point to behind
the last one. This sounds random, but it allows you to `drag'
sexps around in list structure. You can get an awful lot of
mileage out of this one.

M-C-@ mark-sexp
Puts mark at end of next sexp, leaves point where it is.
If your emacs is set up to highlight the current region, this
will highlight the sexp.

These next two are a bit esoteric, but also worth knowing:

C-M-u backward-up-list
Go to the beginning of the *containing* list structure.
So suppose you are in the body of a LET expression. C-M-u
will put point just before the open paren of the LET.

C-M-d down-list
Go to the beginning of the next *contained* list structure.

These aren't particularly lisp-specific, but are rather handy in lisp:

C-M-a beginning-of-defun
Takes you to the current top level defun. At this point, if you
type C-M-f (forward-sexp) you'll either end up at the end of
the current defun, or your parens aren't balanced.

C-M-/ dabbrev-completion
If you type the first few letters of a symbol and hit this,
it will attempt to complete the symbol. If it guesses wrong,
hit it again. You can save a lot of typing with this and still
use reasonably descriptive identifiers.

You should also be using font-lock-mode. I have mine set up for *extreme*
coloring (much more than the usual maximum decoration). I have found that
what happens is that I no longer pay much conscious attention to the color,
but if the color is `wrong', the code looks `funny' and I'm alerted to the
error right away.

I suggest taking a day and forcing yourself to use these commands.
It'll be slow going for a little while (an hour or two), but then
you'll get the hang of it and you'll wonder how you ever did without.

Friday, July 8, 2005

Symbols, Keywords and ASDF

By Ch. Stacy on comp.lang.lisp (Re: Difference between ' and : symbols):

"jonathon"  writes:
> I've noticed that with ASDF there are 2 ways of referring to a
> system. Some use the single quote for the name, others use the
> colon before the symbol. Both seem to work.
> What is the difference? Isn't a symbol evaluated to itself,
> while a single-quoted value is evaluated to that value?

Symbols do _not_ evaluate to themselves; they evaluate
to the value of the variable (if any) that they are naming.

Let's start out by reviewing how evaluation works.

lisp> x
=> {error: unbound variable X}
lisp> (setq x 12345)
=> 12345
lisp> x
=> 12345

Normally, all the arguments to a function are recursively
evaluated before the function is called on them.

lisp> (defun foo (a b)
(list a b))

lisp> (foo x (* x 2))
=> (12345 24690)

You can inhibit evaluation with the special operator QUOTE.
QUOTE looks like a function call, but it's not!

lisp> (quote hello)
=> HELLO

QUOTE is a "special operator". What's special is that
it does not obey the normal rule for Lisp evaluation.
QUOTE does not evaluate its arguments; instead,
it returns the arguments literally.

You can abbreviate QUOTE as the quote character (').

lisp> 'hello
=> HELLO

lisp> '(* x 2)
=> (* X 2)

Notice that in contrast to symbols, numbers (eg. 12345)
normally evaluate to themselves. So do strings.

lisp> 12345
=> 12345

lisp> "hello, world"
=> "hello, world"

By the way, if you quote a number or a string, it's totally
gratuitous, but you'll end up with the same result:

lisp> '12345
=> 12345

Numbers and strings _are_ "self-evaluating".
Symbols are _not_ self-evaluating.

lisp> x
=> 12345

Symbols only "evaluate to themselves" if you use QUOTE.
Otherwise, symbols evaluate to their variable value.

QUOTE breaks out of the normal evaluation rules,
and always makes its argument "evaluate to itself".
It's a way to introduce a literal value (for a symbol
or a list, which normally would be evaluated, and
thus otherwise would have turned into the resulting
value of a variable or a function call).

Before we proceed, there is another detail about
symbols that you ought to understand.

Every symbol is an object which has several attributes.
A symbol has a name, such as X, and the symbol might
also have a variable value binding, such as 12345.

A symbol also has an attribute called its "package",
which is part of its fully qualified name.
The syntax of a symbol's name is PACKAGE:NAME.

Beginners usually take no notice of the package.
This is because we usually just see the abbreviated "name"
part of the symbol's name. Lisp doesn't usually need
to print out the symbol's fully qualified name,
which would include the package prefix.

When you start up Lisp, the current package (analogous to
the idea of a current working directory in a file system)
is called "CL-USER". Any symbols that you make up will
be inside of ("interned in") this package. When you type FOO,
the Lisp READ function that is processing your input knows
that this meant CL-USER:FOO. And the PRINT function, likewise,
doesn't bother to print out the package prefix.
It simply prints the symbol's name as FOO.

(Let's never mind under what circumstances the fully
qualified name of the symbol would be printed, or when you
might have to input the package name. It has to do with
disambiguating the symbol in the presence of more than
one package, and something with having the current package
set to something different than the symbol's package.
It also has to do with whether the symbol has been
imported/exported from the package as an external symbol.
None of those details are relevant to your question.)

We can now come to explain the confusing things
from your experience with ASDF.

There is a special package named KEYWORD.
Symbols in the KEYWORD package are called "keywords".
Keywords are a kludge in the language:

* Symbols do not evaluate to themselves.
Except for keywords!

lisp> x
=> 12345
lisp> keyword:x
=> :X
lisp> :x
=> :X

As you can see, keywords are magic,
and somewhat unlike ordinary symbols.

Notice that the package name, KEYWORD, is not normally
printed, nor does it need to be entered. To indicate
the keyword package, leave off "KEYWORD" and just
use the delimiter (":") that would form the prefix.

Notice that the value of X and :X are not the same.
That's because X and :X are two distinct objects.
They are both symbols, and they happen to have the same
name as each other, but they are in different packages.

Keywords are a convenient kludge in the language that allows
you to easily input a symbol, useful for certain purposes,
that will never have a variable value. Rather, it will
always "evaluate to itself" (that is, to its own name, always).

Lisp "keywords" are employed for much the same purpose
that keywords serve in other programming languages.
To wit, they are used as unambiguous syntactic markers.
Many functions can take arguments specified by keyword.

lisp> (position #\C "aAbBcCdDC" :test #'char=)
=> 5
lisp> (position #\C "aAbBcCdDC" :test #'char= :from-end t)
=> 8

Keywords are "self-evaluating", thus like numbers
or strings, but unlike normal symbols in that regard.
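
The distinction can be checked directly at a REPL; a short sketch:

```lisp
;; X and :X are different symbols with the same name,
;; living in different packages.
(symbol-name 'x)      ; => "X"
(symbol-name :x)      ; => "X"
(eq 'x :x)            ; => NIL
(symbol-package :x)   ; => #<PACKAGE "KEYWORD">

;; Keywords are constants bound to themselves:
(symbol-value :x)     ; => :X
```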

The specific thing that confused you is why ASDF functions
accept either symbols or keywords as system names.

The general answer is that some programs work by maintaining
some sort of data table that maps one object to another object
for the purposes of the application.

Each symbol is a unique and distinct object, and could
therefore be used as the lookup key in such a table.
In source code, in order to refer to the symbol itself,
and not its variable value, you must QUOTE it.
(Otherwise, due to the rules of evaluation, you would be
referring to whatever variable value the symbol might have.)

A keyword is a unique and distinct object, and could
therefore be used as the lookup key in such a table.
You don't need to quote keywords. One of the essential
features of keywords is that you don't have to quote them.
And they always evaluate to the same thing: themselves.

You could quote your keywords (or numbers or strings)
if you wanted to. It does not change their value.

lisp> (quote 12345)
=> 12345
lisp> :x
=> :X
lisp> (quote :x)
=> :X

So why would anyone ever bother to quote a keyword?
They might do that if they were trying to emphasize to
someone that they were using the keyword as a piece of data,
as opposed to it being a syntactic marker in some function
call that accepts keyword-specified arguments.

lisp> (find-solution previous-run :method ':alpha-beta)
=> nil

lisp> (prune-directory "/My" :keep ':oldest)
Pruning all but the oldest files...12 kept, 69 deleted!

You still have one last confusing issue, specific to ASDF,
and this is where the real confusion comes!

We have learned that in table lookups, generally perhaps
one object is as good as another for the purpose of serving
as a unique identifier. The ASDF functions want you to pick
some identifier to be a "system name".

ASDF seems to accept either a keyword like :FOO, or a
symbol (eg. in the default CL-USER package) such as FOO.
If you experiment some more, you might discover that
ASDF will also accept strings, such as "FOO".

lisp> foo
=> {error: unbound variable FOO}
lisp> 'foo
=> FOO
lisp> ':FOO
=> :FOO
lisp> :foo
=> :FOO
lisp> (equal 'foo :foo)
=> NIL
lisp> (equal 'foo "foo")
=> NIL

Notice that FOO and :FOO are distinct symbols.
You could decide to use one or the other, but they should
not be interchangeable, since they are not the same object!

It seems as though you could use ASDF to manipulate three
different systems, one named FOO, one named :FOO, and
another named "FOO".

The confusing thing is that ASDF accepts any of those
identifiers, and somehow treats them all as referring
to the same thing. Clearly "FOO" and :FOO are not
the same object -- they are not even of the same type!

If you look at the source code for ASDF, you will learn
that ASDF accepts any of those three pieces of data as a
system identifier. But what ASDF really wants is a string.

Every symbol object has a name, which is a string.
When given a symbol, ASDF extracts that string from the
symbol and just uses the string itself.
Moreover, ASDF wants these strings to be lowercase.
(Symbol names are normally uppercase.)
Whether you handed ASDF a string, or whether ASDF extracted
the string from a symbol's name, it calls STRING-DOWNCASE on it.
That lowercase string becomes the actual name of the system.
ASDF is generous in accepting either that lowercase string,
an upper or mixed-case version of the string, or a symbol,
or a keyword, and in any event canonicalizes your input
in order to form the actual system name.
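
That canonicalization can be sketched in a couple of lines (a
simplification of what ASDF actually does internally, not its real code):

```lisp
;; Turn any of FOO, :FOO, or "FOO" into the canonical lowercase
;; string name, the way the text describes.  STRING extracts the
;; name of a symbol, or passes a string through unchanged.
(defun system-name (designator)
  (string-downcase (string designator)))

;; (system-name 'foo)   => "foo"
;; (system-name :foo)   => "foo"
;; (system-name "FOO")  => "foo"
```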

If you made it through all that explanation,
I think that should probably answer your question.

I'm going to finish this up here by responding to someone
else's posting where they tried to explain the mystery to you,
but maybe introduced some additional confusion.
This will also help complete the puzzle. Timofei Shatrov wrote:
> Thus, (in-package :asdf) works,
> but (in-package 'asdf) doesn't
> because (quote asdf) is not a string, symbol or character.

The above has nothing to do with why ASDF treats two
different symbols (FOO and :FOO) as the same identifier.
Rather, ASDF is massaging the value that you hand it,
transforming it into another piece of data.
ASDF is programmed to accept several different kinds
of objects as system identifiers, and it turns all of
them into the same thing.

I don't see how the above was supposed to answer the
ASDF question; I think the poster may be confused.
In the example above, with IN-PACKAGE, something else
entirely is going on.

IN-PACKAGE is a macro, not a function.

In Lisp, there are two kinds of operators that can be named
as the first element in an expression. The normal thing
is a function. Expressions (F x) where F names a function
follow the normal rules for recursive evaluation -- all the
arguments are evaluated, then the function named F is called.

But expressions of the form (M x) where M names a macro
follow some different rules of evaluation. Macro calls
look about the same as function calls, but are special.

Closely related to "macros" are "special forms".
They are conceptually similar to each other in that they
both resemble function calls, and they both introduce
new and arbitrary syntax and evaluation rules.

One special form that you know about already is QUOTE.
It does not evaluate its argument, but just returns
it literally.

lisp> (hello world)
=> {error: unbound variable WORLD}
(Maybe an undefined function HELLO, too,
but we didn't even get that far.)

lisp> '(hello world)
=> (HELLO WORLD)

IN-PACKAGE is not a function.
It is a macro, and is defined to process its arguments
according to its own idiosyncratic rules.
As it happens, it does _not_ evaluate its argument.

IN-PACKAGE accepts what it terms a "name", which it defines
as a "string designator". If you look up "string designator"
in the CLHS Glossary, you will see that this might either
be a string, or it might be a symbol.

As documented, IN-PACKAGE does not evaluate this string designator.
You can _not_ use some expression there which would _evaluate_ to
a string or a symbol. The argument must literally _already_ be
a string or a symbol.

Therefore, to set the current package to CL-USER,
you could write any of the following:

(in-package "CL-USER")
Because that's a string.
Notice that package names are always upper-case.

(in-package CL-USER)
Because that's a symbol, and IN-PACKAGE will look up
the name of the symbol, which is "CL-USER".

(in-package :CL-USER)
Because that's a symbol, and IN-PACKAGE will look up
the name of the symbol, which is "CL-USER".

(I hope that example wasn't too confusing!
The symbol CL-USER has a package, which is not mentioned
here and which is irrelevant. IN-PACKAGE is only interested
in extracting the "name" of the symbol, not its package.
And then we are using the resulting string for the purpose
of manipulating Lisp's "current package".)

You cannot write this:
(in-package 'CL-USER)

Because that argument is just shorthand for this:
(in-package (QUOTE CL-USER))

The transformation of 'CL-USER into (QUOTE CL-USER)
is not one of evaluation. It is performed at read-time,
when the source code is translated from characters into
Lisp objects. (In the typical sequence of events, read-time
precedes evaluation.) And in this case, there will be no
evaluation of that argument, because IN-PACKAGE is defined
to never evaluate its arguments. Plainly, a list is not
a string or a symbol, and so this will not be valid.

The only similarity between the ASDF functions and IN-PACKAGE
is that they both will accept either strings or symbols.
But how they process their arguments, and how they use
those arguments internally, are different.
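
The "string designator" coercion that IN-PACKAGE relies on can be
observed with the standard STRING function:

```lisp
;; All of these name the same package, because each argument
;; is a string designator that STRING coerces to "CL-USER".
(string "CL-USER")   ; => "CL-USER"
(string 'cl-user)    ; => "CL-USER"
(string :cl-user)    ; => "CL-USER"

;; A quoted form reads as the list (QUOTE CL-USER); a list is not
;; a string designator, so (string '(quote cl-user)) signals an error,
;; for the same reason (in-package 'cl-user) is invalid.
```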

Besides functions like OPERATE (aka OOS), ASDF also does have some
macros, notably DEFSYSTEM. Unlike functions, the DEFSYSTEM macro
is similar to IN-PACKAGE in that it doesn't evaluate its arguments.

I hope that clears a few things up!


Wednesday, June 29, 2005

Top-down vs Bottom-up design

Richard Feynman, on the construction of the Space Shuttle Challenger's main engine:
The engine is a much more complicated structure than the Solid
Rocket Booster, and a great deal more detailed engineering goes into
it. Generally, the engineering seems to be of high quality and
apparently considerable attention is paid to deficiencies and faults
found in operation.

The usual way that such engines are designed (for military or
civilian aircraft) may be called the component system, or bottom-up
design. First it is necessary to thoroughly understand the properties
and limitations of the materials to be used (for turbine blades, for
example), and tests are begun in experimental rigs to determine
those. With this knowledge larger component parts (such as bearings)
are designed and tested individually. As deficiencies and design
errors are noted they are corrected and verified with further
testing. Since one tests only parts at a time these tests and
modifications are not overly expensive. Finally one works up to the
final design of the entire engine, to the necessary
specifications. There is a good chance, by this time, that the engine
will generally succeed, or that any failures are easily isolated and
analyzed because the failure modes, limitations of materials, etc.,
are so well understood. There is a very good chance that the
modifications to the engine to get around the final difficulties are
not very hard to make, for most of the serious problems have already
been discovered and dealt with in the earlier, less expensive, stages
of the process.

The Space Shuttle Main Engine was handled in a different manner,
top down, we might say. The engine was designed and put together all
at once with relatively little detailed preliminary study of the
material and components. Then when troubles are found in the
bearings, turbine blades, coolant pipes, etc., it is more expensive
and difficult to discover the causes and make changes. For example,
cracks have been found in the turbine blades of the high pressure
oxygen turbopump. Are they caused by flaws in the material, the effect
of the oxygen atmosphere on the properties of the material, the
thermal stresses of startup or shutdown, the vibration and stresses of
steady running, or mainly at some resonance at certain speeds, etc.?
How long can we run from crack initiation to crack failure, and how
does this depend on power level? Using the completed engine as a test
bed to resolve such questions is extremely expensive. One does not
wish to lose an entire engine in order to find out where and how
failure occurs. Yet, an accurate knowledge of this information is
essential to acquire a confidence in the engine reliability in use.
Without detailed understanding, confidence can not be attained.

A further disadvantage of the top-down method is that, if an
understanding of a fault is obtained, a simple fix, such as a new
shape for the turbine housing, may be impossible to implement without
a redesign of the entire engine.

The Space Shuttle Main Engine is a very remarkable machine. It has
a greater ratio of thrust to weight than any previous engine. It is
built at the edge of, or outside of, previous engineering
experience. Therefore, as expected, many different kinds of flaws and
difficulties have turned up. Because, unfortunately, it was built in
the top-down manner, they are difficult to find and fix. The design
aim of a lifetime of 55 missions equivalent firings (27,000 seconds of
operation, either in a mission of 500 seconds, or on a test stand) has
not been obtained. The engine now requires very frequent maintenance
and replacement of important parts, such as turbopumps, bearings,
sheet metal housings, etc. The high-pressure fuel turbopump had to be
replaced every three or four mission equivalents (although that may
have been fixed, now) and the high pressure oxygen turbopump every
five or six. This is at most ten percent of the original
specification. But our main concern here is the determination of

Tuesday, June 28, 2005

The noise of 140 C++ programmers

c.l.l snippets :

Joe Marshall writes:
> It is somewhat horrifying to imagine the amount of damage 80 C++
> programmers could do.

I once worked on a project with 140 (yes, one hundred and forty)
C++ coders. Most of them were new hires working for a well-known
system integrator and didn't actually have enough training to be
productive. To allow them to create working code, the well-known SI
provided them with a fill-in-the-blanks interface -- written in
Microsoft Word. They threw a big party when the codebase reached a
million lines.

You just can't make this stuff up....

Patrick May

Friday, June 17, 2005

Packages, Systems, Modules

C. Stacy in c.l.l :

When people speak of "modules" in other languages,
they may be referring to a way of isolating programs
from each other. Modules are boundaries, possibly
representing compilation units. There are generally
ramifications for program linkage. Some languages may
also have features for supporting defined communication
paths between modules working across processors or networks.

Common Lisp doesn't have any "module system", and the
word "module" isn't really generally used in Lisp.

Common Lisp does have a trivial feature called modules.
It refers to a library claiming to have been loaded.
A module name (any random string) can appear in the
list stored in the global variable *MODULES*.
The function REQUIRE can take a module name,
and through some implementation manner, cause
the module to be loaded. The loader concludes
by calling the function PROVIDE, which does nothing
more than stick the given module name in *MODULES*.
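
The whole PROVIDE/REQUIRE mechanism is visible in two forms (the module
name here is illustrative):

```lisp
;; PROVIDE just records a name in *MODULES*; REQUIRE loads a module
;; only when its name is not already recorded there.
(provide "my-module")

;; The name is now on the list:
(member "my-module" *modules* :test #'string=)   ; => non-NIL
```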

I think REQUIRE/PROVIDE may be deprecated in the ANSI standard.

Let's get packages out of the way, and come back to modules later.

Common Lisp has "packages", which are nothing more than namespaces.
It is not a library; it is not code. It's a bag of symbol names.
Every symbol has a name, like CONS or FOO, and every symbol
also has a "home package". These form the fully qualified name
of the symbol. One of the most important things about a symbol
is that it has a unique name (with respect to its home package).
Every symbol's name really looks like PACKAGE:NAME.

When you are working with your code, you don't need to say
the name of the package. At any given moment, a current
package object is bound (as *PACKAGE*) for READ.
As you type in a symbol name, READ "interns" the symbol
in the current package. (That is, READ looks it up by name,
or else creates it.) You just say SHUTTLE rather than
having to spell out SPACE:SHUTTLE.
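
Interning can be watched explicitly with INTERN, which is what READ does
for you behind the scenes; a sketch using an illustrative SPACE package:

```lisp
(defpackage :space (:use :cl))

;; Looking up "SHUTTLE" in SPACE creates the symbol the first time;
;; the second value is NIL for a newly created symbol.
(intern "SHUTTLE" :space)
; => SPACE::SHUTTLE, NIL

;; The symbol remembers its home package:
(symbol-package (intern "SHUTTLE" :space))
; => #<PACKAGE "SPACE">
```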

Packages may export symbols, thus forming an API.
Packages may import symbols, individually or wholesale
from another package. Suppose that you are writing
some source code to be handed to the compiler,
or are typing at Lisp. Let's say the current package
is "CL-USER" (which is the default scratch workspace).
To reference the exported symbol LAUNCH from the SPACE
package, you can say SPACE:LAUNCH. But if you had only
written LAUNCH, you would have been referencing your
own distinct CL-USER:LAUNCH symbol. Let's suppose that
you would like to write LAUNCH and mean SPACE:LAUNCH.
You can "import" SPACE:LAUNCH into CL-USER.
LAUNCH now refers to SPACE:LAUNCH.

If you want to reach into some foreign package to reference
a symbol that has not been exported, you may freely perform
this violation: just use the double-colon syntax
(PACKAGE::NAME) to name the symbol,
even though it hasn't been exported.
You are now a bad person.
But if you really needed to do that,
Lisp isn't going to get in your way.

The reserved package containing the Common Lisp language
symbols is called COMMON-LISP. Most packages are used for
symbols representing Lisp code, so they "use" COMMON-LISP;
all the exported COMMON-LISP symbols are imported.
You can just write CONS instead of COMMON-LISP:CONS.
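
The exporting and importing described above is usually written down once
with DEFPACKAGE. A sketch, with illustrative package and symbol names:

```lisp
;; A SPACE package exporting LAUNCH, and a client package
;; that imports it, as in the text.
(defpackage :space
  (:use :cl)
  (:export #:launch))

(defpackage :mission-control
  (:use :cl)
  (:import-from :space #:launch))

(in-package :space)
(defun launch (ship) (list :launched ship))

(in-package :mission-control)
;; Here LAUNCH names SPACE:LAUNCH, thanks to the import.
(launch 'apollo)   ; => (:LAUNCHED APOLLO)
```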

Package names do not in themselves mean anything,
and there is no standard naming convention.
Packages can also have nicknames (aliases).
I like to name mine like "COM.DTPQ.WHATEVER".

Packages are not related to files in any way.
At any point in any source file, or when typing
at Lisp, the form IN-PACKAGE is used to establish
the current package (for the remainder of the
input stream, or until another IN-PACKAGE).

Packages are not related to modules or anything, either.
Packages are only about symbol names.

Now let's talk about "Systems".
Common Lisp doesn't have "systems", either,
but the Lisp Machine had a form called DEFSYSTEM.
This is for solving the problem that you have a
bunch of files that go together, and they need
to be compiled and loaded in some order with
some particular dependencies. DEFSYSTEM is
used to describe all of that. You may then call
the function MAKE-SYSTEM, which will analyze your
system dependencies, look at the files and timestamps,
and call the compiler appropriately. There is also
LOAD-SYSTEM, which loads up your system as specified.

The Lisp Machine DEFSYSTEM was very fancy and integrated with
the system version control and release and patching features.
Most Lisp implementations have copied this idea and provided
at least some simple DEFSYSTEM extension. There's a portable
one called MK-DEFSYSTEM. Some programs include their own.
The latest popular portable DEFSYSTEM critter is called ASDF.
(I guess this stands for "Another System Definition Facility").

Systems and Packages interact in the sense that a package has to
be defined before it is used, which is the sort of compile/load
dependency that is described by the DEFSYSTEM utility.
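
A minimal system definition, in modern ASDF spelling (system, file, and
dependency names are illustrative):

```lisp
;; my-app.asd -- load the package definitions first, then compile
;; foo.lisp before bar.lisp, pulling in one external library.
(asdf:defsystem "my-app"
  :depends-on ("cl-base64")
  :components ((:file "packages")
               (:file "foo" :depends-on ("packages"))
               (:file "bar" :depends-on ("foo"))))
```

Given this, (asdf:load-system "my-app") recompiles and loads whatever is
out of date, in dependency order.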

Now, back to "modules".

The REQUIRE/PROVIDE interface was intended as a least
common denominator, poor man's kind of LOAD-SYSTEM function.
Modules may or may not interact in any way with whatever
nice "systems" (eg. DEFSYSTEM) feature you may have.

Speaking of "features", there's another global variable
called *FEATURES*, which is just a list of symbols that
represent features that have somehow become available
to you. These are used with the #+ and #- reader macros.
This is the equivalent of #ifdef in other languages,
although you can also query *FEATURES* at runtime.
Yes, it's a hack. No, there's no standard for how
features should be named or how to prevent feature
name collisions.
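
A sketch of *FEATURES* in use, both at read time and at run time:

```lisp
;; Read-time conditionalization, the #ifdef equivalent:
;; only one of the two forms below survives reading.
(defun home-directory ()
  #+unix (user-homedir-pathname)
  #-unix (error "Don't know how to find home here."))

;; The same information is available at runtime:
(when (member :sbcl *features*)
  (format t "Running under SBCL~%"))
```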

Modules are a lot like features, except they are
relatively useless. There are no "module" objects.
A module is just a name (presumably of something or another),
that somehow got stuck onto the list in *MODULES*.

Sometimes a Lisp image is called a "world", by the way.
(But there are no interplanetary features implied.)
On the Lisp Machine, the entire operating system and
all of the applications were loaded into one gigantic
image (a single address space execution environment),
which was dumped out and booted from. Therefore,
some people still refer to a "Lisp image" as a "world".

So, in the world, you have some packages, which make up the
namespaces in which your symbols "live". You may also have
some modules (which are supposed to represent that you have
loaded some particular program or something). Modules are
a vacuous abstraction, and do not represent any language
boundary; that's the job of Packages and is only accomplished
lexically. Nothing strictly prevents anything in the Lisp world
from interacting with anything else in the same Lisp world.
(And there's only ever the one world.)

Along comes Linux, and especially Debian (and Gentoo).
They have this wonderful operating system feature called
the "package manager", which is used to automatically
load and manage packages of software on your machine.
You can just type a quick command to tell the system that
you want "Emacs", and presto -- Emacs is magically
installed on your machine and ready for use.

Wouldn't it be nice if you could do something like
that with your Lisp programs, from inside Lisp?

So some nice people created a program called the
Common Lisp Controller, currently ported to
Debian and Gentoo Linux.

Debian's notion of "package" has nothing to do with Lisp's
concept of a "package". Common Lisp Controller sort of
conceptually translates a Debian "package" into a Lisp
"module" or "system".

We'll need something that can register Lisp programs
and their dependancies. It will need to be able to
find the files, see what needs compiling, and load files.
It may need to understand that some systems depend
on first loading other systems.
This is a job for some DEFSYSTEM!
Specifically, they chose ASDF.

Next, some conventions are established for how to package
up arbitrary Lisp libraries for the Debian distribution:
having them use ASDF, and putting their "system definition"
(your program's files that contain the calls to ASDF)
in a particular place.

Finally, the Lisp implementations that want to fully play along
can come pre-built with a standard interface to the ASDF-based
distribution package scheme.

And the interface that was chosen was REQUIRE.
After all, we're not using "modules" for anything else!

So now you can log in to Debian, and use its package
manager to magically fetch some software you want.
At the shell, you type: # apt-get install cl-base64
and it gets installed on your machine.

Then, in Lisps such as SBCL, you can just fire
it up and say: (require "cl-base64")
and the "cl-base64" package is loaded by ASDF into your
Lisp world, all ready for you to use.

So, they have partially recycled the REQUIRE function
as a handy loading interface for these systems defined
by ASDF. By the way, there are no particularly interesting
standard semantics to the *MODULES* variable; it doesn't
get updated by ASDF when a system is loaded. ASDF decides
if something needs loading or reloading (or if a file needs
recompiling and loading) by looking at other information.

I suppose they could have also made REQUIRE go all the
way by bringing up a shell and doing the "apt-get" command
for you, but they didn't. Perhaps because you have to
be "root" to do that.

So I guess that's what "modules" are, now.
Not very related to "modules" in languages like MODULA.

Aside from extensions like ASDF and Common Lisp Controller,
Lisp doesn't really know or care very much about files.
It basically just has LOAD and COMPILE-FILE, and they
don't care what's in a file, nor relate to packages
or modules or anything else.

Lisp's packages are the language element that provides
logical separation of programs from one another,
through naming. It works out extremely well.

Rep, the Lisp implementation used by Sawfish, is a kind of Scheme.
The language is a lexically scoped "Lisp-1". Every closure is
associated with a "module" object, which provides a namespace
for resolving unbound variable names. Modules may reference other
modules, importing and exporting references. Modules are also
associated with files and dynamic loading and linking.
I don't know anything about rep.


Friday, June 03, 2005

Length of a function

(inspired by a remark on comp.lang.lisp)

Q: What is the ideal length of a Lisp function?

A: Long enough to reach the last closing parenthesis.

Length of a list

LIST-LENGTH will return nil for circular lists and (length list) for proper lists.
Barry Margolin on comp.lang.lisp:

Of course, this will typically work in the way Steven wishes to avoid, by walking the entire list. However, it's not necessary to build up a set of seen conses. The usual implementation is something like this:

(defun list-length (list)
  (if (null list)
      0
      (1+ (loop for slow on (cdr list)
                and fast = (cddr list) then (cddr fast)
                counting slow
                when (eq slow fast)
                  do (return-from list-length nil)))))

Rather than remembering every cons it has seen, it walks the list one step at a time and two steps at a time. If the fast one ever "laps" the slow one, the list must be circular.
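The same two-pointer scheme (Floyd's cycle detection) is easy to sketch outside Lisp. Here is a minimal Python version mirroring the quoted LIST-LENGTH; the names `Node` and `llength` are mine, not any library's:

```python
class Node:
    """A cons-like cell: a value plus a reference to the rest of the chain."""
    def __init__(self, value, rest=None):
        self.value = value
        self.rest = rest

def llength(head):
    """Return the chain's length, or None if the chain is circular.

    As in the quoted code: 'slow' walks one cell per step and 'fast'
    two, so on a circular chain fast eventually laps slow.
    """
    if head is None:
        return 0
    count = 0
    slow = head.rest
    fast = head.rest.rest if head.rest else None
    while slow is not None:
        if slow is fast:
            return None                      # fast lapped slow: circular
        count += 1
        slow = slow.rest
        fast = fast.rest.rest if fast and fast.rest else None
    return count + 1                         # tails of the rest, plus one

# A proper three-cell chain, then the same chain made circular.
a, b, c = Node(1), Node(2), Node(3)
a.rest, b.rest = b, c
proper_len = llength(a)
c.rest = a
circular_len = llength(a)
```

Like the Lisp version, this uses O(1) extra space instead of remembering every cell seen.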

Wednesday, June 01, 2005

The source in the Lisp image

Pascal Bourguignon on comp.lang.lisp: an example of attaching source code to functions/variables.

Well, you could use (fdefinition 'myfun) or (function-lambda-expression (symbol-function 'myfun)) but the implementations are not mandated to return anything helpful, so if you want to "edit the source" in the REPL, you should start with:

(defpackage "MY-COMMON-LISP"
  (:nicknames "MY-CL")
  (:use "COMMON-LISP")
  (:shadow "DEFUN" "DEFMACRO" "DEFVAR" "DEFPARAMETER" "DEFCONSTANT"
           #| later add DEFINE-CONDITION, DEFPACKAGE, ...|#))
(in-package "MY-COMMON-LISP")

(common-lisp:defparameter *sources* (make-hash-table :test (function equal)))

(common-lisp:defmacro defun (name &rest stuff)
  (setf (gethash (cons 'function name) *sources*) `(defun ,name ,@stuff))
  `(common-lisp:defun ,name ,@stuff))

(common-lisp:defmacro defmacro (name &rest stuff)
  (setf (gethash (cons 'function name) *sources*) `(defmacro ,name ,@stuff))
  `(common-lisp:defmacro ,name ,@stuff))

(common-lisp:defmacro defvar (name &rest stuff)
  (setf (gethash (cons 'variable name) *sources*) `(defvar ,name ,@stuff))
  `(common-lisp:defvar ,name ,@stuff))

(common-lisp:defmacro defparameter (name &rest stuff)
  (setf (gethash (cons 'variable name) *sources*) `(defparameter ,name ,@stuff))
  `(common-lisp:defparameter ,name ,@stuff))

(common-lisp:defmacro defconstant (name &rest stuff)
  (setf (gethash (cons 'variable name) *sources*) `(defconstant ,name ,@stuff))
  `(common-lisp:defconstant ,name ,@stuff))

(defun source (symbol kind) (gethash (cons kind symbol) *sources*))

(defun edit (symbol &optional (kind 'function))
  (let ((source (source symbol kind))
        (tempfile (make-pathname
                   :name (format nil "TEMP~8,'0X" (random #x100000000))
                   :type "LIST" :case :common)))
    ;; NOTE: Here we could loop while there is an error in the load.
    (progn (with-open-file (out tempfile
                                :direction :output
                                :if-exists :supersede
                                :if-does-not-exist :create)
             (format out ";; -*- mode: lisp -*-~%~
                          ;; Source for the ~A ~A~2%" kind symbol)
             (when source (print source out)))
           (ed tempfile)
           (load tempfile))
    (delete-file tempfile)))

(export 'source)
(export 'edit)

(export (let ((syms '()))
          (do-symbols (s "COMMON-LISP")
            (push (find-symbol (symbol-name s) *package*) syms))
          syms))

(defpackage "MY-COMMON-LISP-USER"
  (:nicknames "MY-CL-USER")
  (:use "MY-COMMON-LISP"))
(in-package "MY-COMMON-LISP-USER")

(defun fact (x) (if (< x 1) 1 (* x (fact (1- x)))))
(source 'fact 'function)
(defparameter fact '(--> (and (is sky blue) (is planet earth))
                         (is weather good)))
(source 'fact 'variable)
(edit 'fact)
(source 'fact 'function)
(edit 'fact 'variable)
(source 'fact 'variable)

Or, if you don't want to bother, do as everybody else, edit the source files in a unix file system with emacs, and only load them from the REPL.
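The core idea of the quoted code, capturing each definition's source at definition time so it can be edited later, can be mimicked in Python. A minimal sketch; `define`, `source`, and `_sources` are hypothetical names of my own, not a real library:

```python
# Hypothetical registry of definition sources, playing the role of the
# *SOURCES* hash table in the quoted Lisp code.
_sources = {}

def define(source_text):
    """Evaluate a top-level definition and remember its source text,
    the way the shadowing DEFUN/DEFVAR macros stash the form before
    expanding into the real CL:DEFUN/CL:DEFVAR."""
    before = set(globals())
    exec(source_text, globals())
    for name in set(globals()) - before:
        _sources[name] = source_text

def source(name):
    """Return the recorded source for NAME, like the SOURCE function."""
    return _sources.get(name)

define("def fact(x):\n    return 1 if x < 1 else x * fact(x - 1)")
```

After this, `source("fact")` returns the recorded text. Note the Lisp macros record the actual form rather than a string, which is what makes round-tripping through EDIT practical.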


CMUCL will give you a second REPL via telnet with:

(setf mp::*idle-process* mp::*initial-process*)
(mp::start-lisp-connection-listener :port 6789 :password "fred")

(said GP Lisper on comp.lang.lisp)

Monday, May 02, 2005

Separation of Church and State

"We have tried to design a language that separates church from state."

Kent Pitman on comp.lang.lisp

PS: the structure of the expression (f x)

Friday, April 08, 2005

SBCL's thread API

Curiously, I cannot find any complete reference on the subject. Here is what we have for now (updates planned); all of this comes from the SBCL manual and from SLIME's swank-sbcl.lisp:

package SB-THREAD

MAKE-MUTEX :name -> lock?
WITH-MUTEX lock body -> nil

MAKE-WAITQUEUE -> waitqueue

MAKE-THREAD fun -> integer
ALL-THREADS -> list of integers
%THREAD-STATE-SLOT thread -> {0:running, 1:stopping, 2:stopped, 3:dead}
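For comparison, the MAKE-MUTEX / WITH-MUTEX / MAKE-WAITQUEUE trio corresponds roughly to a lock plus a condition variable. A sketch of the same shape in Python's threading module (the producer/consumer names are illustrative, not SBCL semantics):

```python
import threading

# The lock/waitqueue pairing: a mutex plus a condition variable built on it,
# roughly what MAKE-MUTEX and MAKE-WAITQUEUE give you in SB-THREAD.
mutex = threading.Lock()
waitqueue = threading.Condition(mutex)
items = []

def producer():
    with mutex:                  # like WITH-MUTEX: acquire, run body, release
        items.append("work")
        waitqueue.notify()       # wake one thread blocked on the waitqueue

def consumer(results):
    with mutex:
        while not items:         # loop to guard against spurious wakeups
            waitqueue.wait()     # releases the mutex while waiting
        results.append(items.pop())

results = []
worker = threading.Thread(target=consumer, args=(results,))  # cf. MAKE-THREAD
worker.start()
producer()
worker.join()
```

The essential contract is the same in both APIs: waiting on the queue atomically releases the associated mutex and reacquires it before returning.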




package SB-SYS


Wednesday, March 23, 2005

Unwind-protect, Dynamic-wind, Deep binding, Transactions

These are old and very useful features (known elsewhere as [try ... finally]).

Unwind-protect belongs to Common Lisp and works well with second-class continuations (escapes).

Dynamic-wind tries to do the same thing in Scheme, but it has to contend with call/cc, that is, with continuations of in(de)finite extent...
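To make the correspondence concrete, UNWIND-PROTECT's guarantee, that the cleanup forms run however the protected body exits, is exactly try/finally. A small Python sketch (the `protected` helper is my own name):

```python
log = []

def protected(body):
    """Run BODY with a guaranteed cleanup, UNWIND-PROTECT style:
    the finally clause fires on normal return and on escape alike."""
    try:
        return body()
    finally:
        log.append("cleanup")    # runs on every exit path

value = protected(lambda: 42)    # normal exit: cleanup still runs
try:
    protected(lambda: 1 // 0)    # non-local exit via exception
except ZeroDivisionError:
    escaped = True
```

What dynamic-wind adds, and what makes it hard, is a "before" part that must also re-run if a captured continuation re-enters the body later; with only escapes, as in Common Lisp, the finally shape is enough.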

Two good resources.

I am also going to read Worlds in Collision: A Mostly Functional Model of Concurrency Control and Recovery and Shallow Binding in Lisp 1.5, by Henry Baker.

I don't know whether there is a connection with the *wind*s, but both deal with dynamic variables and transactions. If there is a connection, it is up to you to find it.

Ode to the C language

Presented by an anonymous Slashdotter...

May your signals all trap
May your references be bounded
All memory aligned
Floats to ints rounded


Non-zero is true
++ adds one
Arrays start with zero
And NULL is for none

For octal, use zero
0x means hex
= will set
== means test

use -> for a pointer
a dot if it's not
? : is confusing
use them a lot

a.out is your program
there's no U in foobar
And char (*(*x())[])() is a function returning a pointer to an array of pointers to functions returning char.

Friday, March 18, 2005

Fault Tolerant Shell

I have just read a funny paper on a "fault-tolerant" shell, FTSH.

The philosophy of the thing is to apply an equivalent of the Ethernet protocol to the distributed activities of a shell script or of a high-level programming language.

In other words (ahem): exception handling extended with timers, retry counts, and probes; associating with a protected block a set of mechanisms or heuristics for deciding when it should, finally, blow up.
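The shape is easy to mimic. A hedged Python sketch of such a "protected block" that retries a flaky action a bounded number of times before letting the failure through; `try_for`, `attempts`, and `delay` are illustrative names, not FTSH syntax:

```python
import time

def try_for(action, attempts=3, delay=0.0):
    """Run ACTION until it succeeds, retrying up to ATTEMPTS times.

    Once the retry budget is exhausted, the failure propagates:
    the block finally "blows up", as the heuristics decide it must.
    """
    for n in range(attempts):
        try:
            return action()
        except Exception:
            if n == attempts - 1:
                raise            # budget exhausted: let it blow up
            time.sleep(delay)    # back off before the next try

# A flaky action that fails twice before succeeding.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = try_for(flaky, attempts=5)
```

FTSH builds this policy into the language (with timeouts and exponential backoff); here it is just an ordinary higher-order function.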

Something to ponder...

A few slides before the main course.