Anton Vodonosov wrote:
The manual says: "the :version, :author, :description and other [defsystem] fields are not required, but they provide documentation and information for people that want to use this system." It also says, in the section about asdf:operate: "If a version argument is supplied, then operate also ensures that the system found satisfies it using the version-satisfies method."
From some other places in the manual we can even guess how version-satisfies works.
Still, I think it was too much to expect closer-mop to encode API compatibility information in the version identifier. Note also that in 2009 ASDF didn't consider version "0.6" as satisfying a requirement for "0.55":
Yes, that's because the version scheme in ASDF is not a sequence of period-separated integers, but a sequence of period-separated strings. That was in ASDF 1, and we didn't dare mess with it. Pascal was very kind about this oddity, and has tweaked his revision numbering to fit ASDF's constraints.
http://lists.common-lisp.net/pipermail/moptilities-devel/2009-December/date....
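For concreteness, here is roughly how such a version requirement is expressed and checked in ASDF (a sketch only; the exact VERSION-SATISFIES methods have varied between ASDF releases, so the comments describe the intended semantics rather than guaranteed results):

```lisp
;; A client system states a minimum version requirement on a dependency;
;; ASDF checks it with VERSION-SATISFIES when the system is loaded.
(asdf:defsystem "my-client"
  :depends-on ((:version "closer-mop" "0.61")))

;; The check can also be made explicitly. Under the semantics discussed
;; in this thread, a candidate version satisfies a requirement only when
;; the major components match and the remaining components compare >=.
(asdf:version-satisfies (asdf:find-system "closer-mop") "0.61")
```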
When Faré wrote up the design rationale for ASDF, one of the principles was that ASDF should ask the right person for the right information.
For example, the library should not dictate where files go -- that's the job of the library's installer (user). The library itself knows where its components are *relative to the installation* -- that's information the library implementor will have, which the installer will not have.
Similarly, if I am the library supplier, I am the one who knows when I have changed the API incompatibly. That is why major component version changes were intended *not* to satisfy version requirements that have different major components. This is reasonable, because it provides a channel of information from the library supplier, that can be checked automatically, and if the downstream is still compatible, it should be a relatively easy fix (but is it? I don't know if there's a good way to say "I'll accept version 0.55+ or 1.0+").
The only other option would be to have the library client provide both upper and lower bounds on the version numbers that the client will accept. But, except for cases of *known* upgrade incompatibility, this is information that the library client simply cannot have.
The rules here are simple and easy to understand: if you change the API in an incompatible way, bump the major version number.
Yes, this rules out the false modesty of version numbers like 0.0.145, which looks like an alpha, but in fact turns out to be the fifth full release, but that's a sacrifice we can all live with! ;-)
As my own devil's advocate, the counter-argument is that the state of the art in CL libraries is so poor that even getting a version number is unusual. But this seems to be a counsel of despair: it says that since we have a bad state of affairs now, we are doomed to live with it forever.
I suppose I could take a little while and write up a candidate "versioning systems with ASDF" node for the ASDF manual, and push it for consideration by the community.
Also, as we speak about versioning, I have been trying to use semantic versioning as described at http://semver.org/ and I don't think it is a silver bullet - it doesn't solve all problems.
Isn't the subtitle of /The Mythical Man-Month/ "No Silver Bullet"?
Yes, this doesn't solve every problem, but it might give you a useful warning if your upstream has changed under you.... Pascal's case is an odd one, because the bump to 1.0 is, in some sense, a false positive.
OTOH, it's not really so bad to have to take a look at a library that has seen no commits in three years.
Cheers, r
Hi Robert.
I would be interested to discuss library versioning.
Let's agree that this discussion is not about fixing the moptilities/closer-mop problem, which happens on already deployed versions of ASDF which we cannot undeploy or fix. The moptilities/closer-mop authors can negotiate one o
As for semantic versioning, it is good to distinguish API-compatible changes from API-incompatible ones; I fully agree.
But my point is that it's not enough to just bump the major version number, as semantic versioning suggests.
If the author of a "somelib" library wants to make an API-incompatible change, it is better to release a new ASDF system "somelib2" and put the code into a new package somelib2.
Consider a use case: my-application depends on library-a 1.0.0 and library-b 1.0.0. Both library-a and library-b depend on some basic-lib, version 1.0.0.
Now the basic-lib author makes an incompatible change and bumps the version to 2.0.0. Library-a 1.1.0 adopts basic-lib 2.0.0, while library-b remains unchanged.
Result: my-application is broken, because it now depends on both basic-lib 2.0.0 and basic-lib 1.0.0.
I think the approach of breaking the API and informing others about it via a changed version number is inadequate.
We should try to never break API compatibility. Once we have released something as library-a and its API is used by others, it is desirable that the thing named library-a fulfill that API forever.
In the above example, if the basic-lib author released the incompatible descendant as basic-lib2, with the code moved to package basic-lib2, then my-application could load basic-lib and basic-lib2 and they would peacefully coexist in the same lisp image.
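A minimal sketch of the coexistence Anton describes (basic-lib, basic-lib2, and frobnicate are hypothetical names used only for illustration):

```lisp
;; The incompatible successor lives in its own package, so both
;; generations can be loaded into the same image side by side.
(defpackage #:basic-lib
  (:use #:cl)
  (:export #:frobnicate))

(defpackage #:basic-lib2
  (:use #:cl)
  (:export #:frobnicate))

(defun basic-lib:frobnicate (x)             ; old API: one positional argument
  (list :old-api x))

(defun basic-lib2:frobnicate (x &key mode)  ; new, incompatible API
  (list :new-api x mode))

;; A client refers to each API by its package prefix and may use both:
(basic-lib:frobnicate 42)                ; => (:OLD-API 42)
(basic-lib2:frobnicate 42 :mode :fast)   ; => (:NEW-API 42 :FAST)
```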
I think this approach is the best in 90% of cases.
Best regards, - Anton
Anton Vodonosov wrote:
Hi Robert.
I would be interested to discuss library versioning.
Let's agree that this discussion is not about fixing the moptilities/closer-mop problem, which happens on already deployed versions of ASDF which we cannot undeploy or fix. The moptilities/closer-mop authors can negotiate one o
Absolutely. That's why I trimmed those parts of your email from my reply and changed the subject line.
As for semantic versioning, it is good to distinguish API-compatible changes from API-incompatible ones; I fully agree.
But my point is that it's not enough to just bump the major version number, as semantic versioning suggests.
If the author of a "somelib" library wants to make an API-incompatible change, it is better to release a new ASDF system "somelib2" and put the code into a new package somelib2.
Consider a use case: my-application depends on library-a 1.0.0 and library-b 1.0.0. Both library-a and library-b depend on some basic-lib, version 1.0.0.
Now the basic-lib author makes an incompatible change and bumps the version to 2.0.0. Library-a 1.1.0 adopts basic-lib 2.0.0, while library-b remains unchanged.
Result: my-application is broken, because it now depends on both basic-lib 2.0.0 and basic-lib 1.0.0.
I think the approach of breaking the API and informing others about it via a changed version number is inadequate.
We should try to never break API compatibility. Once we have released something as library-a and its API is used by others, it is desirable that the thing named library-a fulfill that API forever.
In the above example, if the basic-lib author released the incompatible descendant as basic-lib2, with the code moved to package basic-lib2, then my-application could load basic-lib and basic-lib2 and they would peacefully coexist in the same lisp image.
I think this approach is the best in 90% of cases.
I see your point here, but I think it's too radical. It's like having a purely functional programming language with no garbage-collection! We'd be left with heaps of unmaintained and unmaintainable versions of basic-lib floating around in the worst case.
Also, the two different libraries won't live happily in the same lisp image, unless they change package at every release (or we all adopt Ron Garret's lexically-scoped namespaces).
This approach seems like it will be such a monumental pain for the library maintainers, and anyone who wishes to upgrade his/her client program from basic-lib to basic-lib2, that it's unrealistic. [Indeed, some would say even my more modest semantic versioning proposal is too unrealistic!]
I'd argue that having two versions of the same library *in the same image* is too demanding a target. But two versions of the same library on the same machine is quite feasible, and indeed I do this myself, every day. I have different source trees for different projects, and with each source tree is associated a different ASDF configuration.
This is not a futuristic "wouldn't it be nice if..." situation, either: it's a simple requirement of my daily work environment. E.g., I have projects that rely on different versions of FiveAM (they are hosted differently, so it's not just a matter of demanding that everyone upgrade -- some I run, some I don't).
Best, r
19.11.2013, 23:41, "Robert P. Goldman" rpgoldman@sift.info:
it's too radical
It's not radical; actually, my proposal is very similar to yours.
It's like having a purely functional programming language
Yes, I see this as an FP analogy too and expect that avoiding mutations and destructive changes will simplify life for developers.
with no garbage-collection!
Why without garbage-collection? We are not leaking any resources.
We'd be left with heaps of unmaintained and unmaintainable versions of basic-lib floating around in the worst case.
The number of library versions does not change in the approach I propose, but versions with different APIs have different names.
And we deal only with the versions we use, and can forget about any other versions.
Also, the two different libraries won't live happily in the same lisp image, unless they change package
Wait, I do propose to change the package
at every release
at every API-incompatible change (in other words, for every new API).
This approach seems like it will be such a monumental pain for the library maintainers,
What pain do you mean? It's a zero-cost solution. It requires no additional effort from the library maintainer, and not even any special support from ASDF and other tools.
and anyone who wishes to upgrade his/her client program from basic-lib to basic-lib2
The client's job remains the same: if he wants to migrate to the new API, he rewrites parts of his code using the new functions. Nothing beyond that.
Moreover, as the client can have access to both APIs simultaneously, he can sometimes migrate partially: leave his old, tested code as is (using basic-lib), but call basic-lib2 in the places where he needs new functionality. So the client can benefit from new features without investing effort into rewriting and retesting code.
I'd argue that having two versions of the same library *in the same image* is too demanding a target.
It's not a target. The target is to not break clients. It's rather a tool to achieve that target, or a pleasant by-product of the cheap decision to name different things differently.
Please think about this approach a little more. IMHO this approach is convenient, and I would recommend it as the first thing to consider to anyone who is going to change the API of a public library.
Am I missing anything?
Best regards, - Anton
I think you're both right. :)
I have similar experience from migrating a large Java application through Jetty versions 6 to 9, and it was much less painful when the namespaces were changed (between versions 6 and 7, if I'm not mistaken), for the reasons mentioned by Anton. So I think it indeed makes sense to create, say, hunchentoot2 if there's going to be another serious incompatible API change, like the transition from 0.X to 1.X. (But a transition from 0.X to 1.X is an exception here, because it is assumed that until v1 the software is not stable.)
At the same time, if you make a small "local" API change, it often doesn't justify creating a new package & system, because there will be more total inconvenience for those who aren't affected (the majority) to migrate to the new version than for those who are affected to change their code.
So it all just boils down to common sense, some level of discipline, and the amount of change a system really needs to go through intrinsically. My guess is that for the vast majority of libraries such dramatic version transitions may happen once in several years, and it totally makes sense to me to do a new namespace/system for that.
Best regards,
--- Vsevolod Dyomkin +38-096-111-41-56 skype, twitter: vseloved
Just to chime in in the middle: There is no known solution to the so-called "DLL hell" problem. Libraries interact badly because of their interactions, not because one or the other is "bad." Even with the best of intentions, a library author cannot predict what changes will break existing clients and what changes won't, because that author doesn't know about all possible interactions. When APIs change, telling clients that they are now incompatible may be a lie, because they may not depend on the specific change. (For example, is the addition of a keyword argument an incompatible change or not? It may, or it may not be...)
You are basically trying to solve the halting problem for a program where you don't know significant parts of the program. ;)
There is a field of research about component-oriented programming where this was a hot topic for quite some time, and nothing ever came out of it. The only practical working solution was that of Microsoft COM, where you need to change a GUID when APIs change, and since it's a black box model, that covers a lot of ground. Common Lisp libraries are definitely not black box, so even this solution will probably not work that well. (Changing the name of the library or the system definition, as Vsevolod suggests, would be similar.)
If you want to give control to developers, you could provide a way that depends-on specifications are list designators, with some form of declarative way of precisely specifying which versions are compatible and which aren't. (Then you could describe situations like, compatible with everything up to and including 0.9.x, and everything above 1.0.0, but excluding 1.0.0 - a situation that actually occurred when Closer to MOP was incompatible with SBCL 1.0.0 for a brief moment in history... ;)
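Such a declarative spec might look something like the following. This is purely hypothetical syntax (`:version-ranges` and `:exclude` do not exist in ASDF, which supports only the single `:version` lower bound), shown only to make Pascal's scenario concrete:

```lisp
;; Hypothetical: a dependency entry enumerating acceptable version ranges.
;; NIL as a bound means "unbounded"; :exclude lists known-bad versions.
;; This would express: everything up to and including 0.9.x, and
;; everything from 1.0.0 upward, except 1.0.0 itself.
(asdf:defsystem "my-client"
  :depends-on ((:version-ranges "closer-mop"
                 ((nil . "0.9") ("1.0.0" . nil))
                 :exclude ("1.0.0"))))
```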
Pascal
Sent from my iPad
On Wed, 2013-11-20 at 18:54 +0100, Pascal Costanza wrote:
Just to chime in in the middle: There is no known solution to the so-called "DLL hell" problem. Libraries interact badly because of their interactions, not because one or the other is "bad." Even with the best of intentions, a library author cannot predict what changes will break existing clients and what changes won't, because that author doesn't know about all possible interactions. When APIs change, telling clients that they are now incompatible may be a lie, because they may not depend on the specific change. (For example, is the addition of a keyword argument an incompatible change or not? It may, or it may not be...)
Given the flexibility of CL, there are innocuous-looking changes that might break dependent code. For example, adding a new return value to a function is backwards-compatible if the latter is used via multiple-value-bind, but not if the user employs multiple-value-list plus destructuring-bind. That's perfectly legal CL and, in some cases, might be justifiable as the best solution; even simply adding a function is not backwards-compatible if the dependent code uses fboundp at runtime.
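Spelled out, the return-value hazard looks like this (parse-thing is a made-up function for illustration):

```lisp
;; Library, v1:
(defun parse-thing (s)
  (values s (length s)))

;; Suppose library v2 adds a third return value:
;; (defun parse-thing (s) (values s (length s) :extra))

;; This client is unaffected: MULTIPLE-VALUE-BIND silently ignores
;; any extra values.
(multiple-value-bind (thing len) (parse-thing "abc")
  (list thing len))    ; => ("abc" 3) under v1 and v2 alike

;; This client breaks under v2: the value list grows to three elements,
;; and DESTRUCTURING-BIND signals an error on the length mismatch.
(destructuring-bind (thing len) (multiple-value-list (parse-thing "abc"))
  (list thing len))
```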
If you want to give control to developers, you could provide a way that depends-on specifications are list designators, with some form of declarative way of precisely specifying which versions are compatible and which aren't. (Then you could describe situations like, compatible with everything up to and including 0.9.x, and everything above 1.0.0, but excluding 1.0.0 - a situation that actually occurred when Closer to MOP was incompatible with SBCL 1.0.0 for a brief moment in history... ;)
The Haskell people tried that with cabal, but their experience was that too stringent dependency specs make upgrades hell. Example:
FOO 1.5 depends on BAR <= 1.2 && >= 1.0
QUUX 0.7 depends on FOO <= 1.5 && >= 1.0, and on BAR <= 1.2 && >= 1.0
FOO 1.6 is released with a dependency on BAR <= 1.5 && >= 1.3
Now one cannot install QUUX any more, because its dependencies cannot be met: it depends directly on BAR <= 1.2 and indirectly on BAR >= 1.3. Users of QUUX will have to modify it locally and contact its developer to update the definition of QUUX or fix it.
In practice it seems that the best thing to do is have relaxed dependencies and rely on an integrator/distributor to put together packages and developers have to make sure that when they make a release, their library works with the most recent release of its dependencies. In other words, work with snapshots of the development "world" and never try to mix libraries from different ages.
Pascal Costanza wrote:
Just to chime in in the middle: There is no known solution to the so-called "DLL hell" problem. Libraries interact badly because of their interactions, not because one or the other is "bad." Even with the best of intentions, a library author cannot predict what changes will break existing clients and what changes won't, because that author doesn't know about all possible interactions. When APIs change, telling clients that they are now incompatible may be a lie, because they may not depend on the specific change. (For example, is the addition of a keyword argument an incompatible change or not? It may, or it may not be...)
You are basically trying to solve the halting problem for a program where you don't know significant parts of the program. ;)
I get it, but this is a "the better is the enemy of the good" argument. I know we can't *solve* the DLL hell problem. But that is not a good reason not to solve part of it. [Heck, I'm an AI guy -- *all* my problems are at least intractable, and none are solvable in the general case!]
When an API changes, telling a client it is incompatible is not a lie, it's a conservative approximation to the truth.
After all, telling a client that everything is fine (which is what we do now), is equally a lie.
So the best we can do is give a clue. And, along the lines of Faré's design principle, if I am changing the API, I am the only one who knows this. So I can signal it by bumping the major version number, causing an error the first time someone tries to load. Then we can check the error, and if it's not important, we simply update the versioning information and proceed.
Another thing that we could do would be to make the version errors continuable. I thought this was something that people would like, but I haven't seen any reaction positive or negative to this suggestion.
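A continuable check along those lines could be sketched as follows (CHECK-SYSTEM-VERSION is a hypothetical helper, not part of ASDF):

```lisp
;; Signal a continuable error when a version requirement is not met;
;; CERROR establishes a CONTINUE restart, so the user can choose to
;; proceed anyway from the debugger (or via CONTINUE in a handler).
(defun check-system-version (name required)
  (let* ((system (asdf:find-system name))
         (found  (asdf:component-version system)))
    (unless (asdf:version-satisfies system required)
      (cerror "Load ~A anyway."
              "System ~A is at version ~A, but version ~A is required."
              name found required))))
```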
If it *is* important, we may have saved the poor programmer a ton of effort, since s/he is otherwise unlikely to get an error message that says "the API has changed; have you checked with your library supplier?" After all, emitting such an error message is equivalent to solving the halting problem.
There is a field of research about component-oriented programming where this was a hot topic for quite some time, and nothing ever came out of it. The only practical working solution was that of Microsoft COM, where you need to change a GUID when APIs change, and since it's a black box model, that covers a lot of ground. Common Lisp libraries are definitely not black box, so even this solution will probably not work that well. (Changing the name of the library or the system definition, as Vsevolod suggests, would be similar.)
If you want to give control to developers, you could provide a way that depends-on specifications are list designators, with some form of declarative way of precisely specifying which versions are compatible and which aren't. (Then you could describe situations like, compatible with everything up to and including 0.9.x, and everything above 1.0.0, but excluding 1.0.0 - a situation that actually occurred when Closer to MOP was incompatible with SBCL 1.0.0 for a brief moment in history... ;)
That we can do, and it would be useful, but it doesn't address the same problem as semantic versioning. Your suggestion involves the CLIENT developer reading the mind of the LIBRARY developer. It doesn't meet the need for the library developer to communicate, in a broadcast fashion, with all the clients (whom, in general, s/he will not know). It *does* provide a valuable way for the client to adapt to the library change and to record what the client developer has learned about the state of the library.
Best, R
Pascal Costanza pc@p-cos.net wrote:
Just to chime in in the middle: There is no known solution to the so-called "DLL hell" problem.
You're right of course, but in practice, I think we have a lot to learn from guix/nixos. Ultimately, I would like to see quicklisp and asdf melt into a beast like that...
On Wed, 2013-11-20 at 23:24 +0100, Didier Verna wrote:
Pascal Costanza pc@p-cos.net wrote:
Just to chime in in the middle: There is no known solution to the so-called "DLL hell" problem.
You're right of course, but in practice, I think we have a lot to learn from guix/nixos. Ultimately, I would like to see quicklisp and asdf melt into a beast like that...
Me too, but my impression is that there will be significant opposition to that.
good afternoon;
On 21 Nov 2013, at 12:02 PM, Stelian Ionescu wrote:
On Wed, 2013-11-20 at 23:24 +0100, Didier Verna wrote:
Pascal Costanza pc@p-cos.net wrote:
Just to chime in in the middle: There is no known solution to the so-called "DLL hell" problem.
You're right of course, but in practice, I think we have a lot to learn from guix/nixos. Ultimately, I would like to see quicklisp and asdf melt into a beast like that...
Me too, but my impression is that there will be significant opposition to that.
just as likely, insignificant opposition: there are those who would prefer that it do less, better, rather than more, ineffectually.
best regards, from berlin,
On Wed, Nov 20, 2013 at 12:54 PM, Pascal Costanza pc@p-cos.net wrote:
Just to chime in in the middle: There is no known solution to the so-called "DLL hell" problem.
Yes there is. http://nixos.org/nixos/
Libraries interact badly because of their interactions, not because one or the other is "bad." Even with the best of intentions, a library author cannot predict what changes will break existing clients and what changes won't, because that author doesn't know about all possible interactions. When APIs change, telling clients that they are now incompatible may be a lie, because they may not depend on the specific change. (For example, is the addition of a keyword argument an incompatible change or not? It may, or it may not be...)
You are basically trying to solve the halting problem for a program where you don't know significant parts of the program. ;)
Good analogy.
There is a field of research about component-oriented programming where this was a hot topic for quite some time, and nothing ever came out of it. The only practical working solution was that of Microsoft COM, where you need to change a GUID when APIs change, and since it's a black box model, that covers a lot of ground. Common Lisp libraries are definitely not black box, so even this solution will probably not work that well. (Changing the name of the library or the system definition, as Vsevolod suggests, would be similar.)
If you want to give control to developers, you could provide a way that depends-on specifications are list designators, with some form of declarative way of precisely specifying which versions are compatible and which aren't. (Then you could describe situations like, compatible with everything up to and including 0.9.x, and everything above 1.0.0, but excluding 1.0.0 - a situation that actually occurred when Closer to MOP was incompatible with SBCL 1.0.0 for a brief moment in history... ;)
I didn't see any response to my proposal of letting libraries declare how far backward compatible they purport to be.
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org A politician divides mankind into two classes: tools and enemies. — Nietzsche
Anton Vodonosov wrote:
19.11.2013, 23:41, "Robert P. Goldman" rpgoldman@sift.info:
it's too radical
It's not radical, actually my proposal is very similar to yours
I am not as optimistic about your approach, but it has two huge advantages:
1. Unlike my proposal, yours requires no infrastructure support (no enforcement in ASDF).
2. Your proposal requires no buy-in.
What I mean is that anyone can experiment with your approach to library versioning simply by following your guidelines for system construction, and by "branching" a system when its API changes, instead of trying to manage the issue with version numbering.
If you are right, your approach will provide value and convince people to adopt it.
I remain pessimistic, and I am not convinced by your argument that having old library versions is a no-cost solution. For those old library versions to be of any value, they will have to be maintained, and bug fixes will have to be backported (while the API remains constant). My guess is that this won't happen and those old versions will simply bit rot.
But this is an empirical question! Although pessimistic, I'd be happy to be wrong, and encourage you to go forth and test out your principles.
Lisp has always been a locus for experimenting with software engineering; I'm encouraged to see that we are still thinking about how to do it better.
Best, Robert
The proposal to put a new, incompatible version into a new package does not imply any additional maintenance of old versions.
And BTW, other versioning approaches do not prevent support of previous versions. These two questions are completely orthogonal.
If we speak about maintenance of old versions (under whatever versioning scheme):
People using old versions should understand that the development focus has shifted to the latest versions.
OTOH, if someone has a large application depending on, say, hunchentoot 0.13.0, and it is easier to accept his patch than for him to migrate to hunchentoot 1.2.18, then why not create a branch from 0.13.0 and commit his patch?
21.11.2013, 15:01, "Stelian Ionescu" sionescu@cddr.org:
Since CL library development isn't subsidized by generous companies - like in the Java, Python & Ruby world - the best we can do with limited resources is break an API, maintain the project name and simply require all users to forward-port their code.
How is requiring additional work from clients (which are probably other open source libraries) a resource saving?
There are examples where it takes years to forward-port some libraries, due to incompatible changes in their dependencies.
When we do not break clients, it saves the resources of their authors, and prevents the CL library world from degrading.
20.11.2013, 18:55, "Robert P. Goldman" rpgoldman@sift.info:
I am not convinced by your argument that having old library versions is a no-cost solution. For those old > library versions to be of any value, they will have to be maintained
The value of the old versions is the possibility for existing clients to remain working. Maintenance is not required.
But this is an empirical question! Although pessimistic, I'd be happy to be wrong, and encourage you to go forth and test out your principles.
Thanks! When I have a situation where an incompatible change is needed, I will definitely consider this option. I also encourage others who are about to make incompatible changes to consider this approach and ask advice from the community.
Best regards, - Anton
On Thu, 2013-11-21 at 17:51 +0400, Anton Vodonosov wrote:
The proposal to put new, incompatible version into new package does not imply any additional maintenance of old versions.
And BTW, other versioning approaches do not prevent supporting previous versions. These two questions are completely orthogonal.
As for maintenance of old versions (under whatever versioning scheme):
People using old versions should understand that the development focus has shifted to the latest versions.
OTOH, if someone has a large application depending on, say, hunchentoot 0.13.0, and it is easier to accept his patch than for him to migrate to hunchentoot 1.2.18 - why not: create a branch from 0.13.0 and commit his patch.
21.11.2013, 15:01, "Stelian Ionescu" sionescu@cddr.org:
Since CL library development isn't subsidized by generous companies - like in the Java, Python & Ruby world - the best we can do with limited resources is break an API, maintain the project name and simply require all users to forward-port their code.
How is requiring additional work from clients (which are probably other open source libraries) a resource saving?
Because the library developers' time is much more precious than the users' time, with few exceptions.
On Wed, 2013-11-20 at 08:54 -0600, Robert P. Goldman wrote:
Anton Vodonosov wrote:
19.11.2013, 23:41, "Robert P. Goldman" rpgoldman@sift.info:
it's too radical
It's not radical, actually my proposal is very similar to yours
I am not as optimistic about your approach, but it has two huge advantages:
1. Unlike my proposal, yours requires no infrastructure support (no enforcement in ASDF).
2. Your proposal requires no buy-in.
What I mean is that anyone can experiment with your approach to library versioning simply by following your guidelines for system construction, and by "branching" a system when its API changes, instead of trying to manage the issue with version numbering.
If you are right, your approach will provide value and convince people to adopt it.
I remain pessimistic, and I am not convinced by your argument that having old library versions is a no-cost solution. For those old library versions to be of any value, they will have to be maintained, and bug fixes will have to be backported (while the API remains constant). My guess is that this won't happen and those old versions will simply bit rot.
I agree. Since CL library development isn't subsidized by generous companies - like in the Java, Python & Ruby world - the best we can do with limited resources is break an API, maintain the project name and simply require all users to forward-port their code.
: antonv Note also, in 2009 ASDF didn't consider version "0.6" as satisfying requirement for "0.55":
: rpgoldman Yes, that's because the version scheme in ASDF is not a sequence of period-separated integers, but a sequence of period-separated strings.
I suppose you mean the opposite: "the version scheme in ASDF is a sequence of period-separated integers, not a sequence of period-separated strings."
Once again, it looks like the model that Dan Barlow was aiming to emulate was the numbering of Linux dynamic libraries, which at least superficially looks the same as so-called "semantic versioning".
The intent was to help solve DLL hell for Lisp libraries, but was insufficiently documented, and was lost to the community when Dan Barlow left.
At the same time, I believe the solution was a bad fit then, and even worse now, because this versioning scheme is designed for binary releases of software developed in a centralized way. ASDF libraries are distributed as source, and more and more developed in a distributed way.
Thanks to #+foo, #.(if (find-symbol "FOO" :bar) ...) and (eval-when ...), CL can deal with source-level compatibility in ways that C cannot.
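To make the point concrete, here is a minimal sketch of that kind of read-time probing. The package BAR and the functions NEW-FROB and OLD-FROB are purely illustrative names (not a real library), and the package must already exist when this file is read:

```lisp
;; Hypothetical example: adapt to whichever API the loaded version of
;; package BAR provides, by probing for a symbol at read time rather
;; than checking a version number.
(defun frob-compatibly (x)
  #.(if (find-symbol "NEW-FROB" :bar)
        '(bar::new-frob x)    ; newer releases export NEW-FROB
        '(bar::old-frob x)))  ; older releases only have OLD-FROB
```

The same source file then compiles correctly against a range of versions of the dependency, which is exactly the situation of C source code probed with autoconf, and exactly what linker-style version numbers cannot express.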
When Faré wrote up the design rationale for ASDF, one of the principles was that ASDF should ask the right person for the right information.
Emphatically yes. But make that the design rationale for ASDF 2, not ASDF. I claim the successes and failures of this principle and its implementation, not Dan Barlow, though he did provide a starting point. See Robert's and my paper: "ASDF 2: More Cooperation, Less Coordination" http://common-lisp.net/project/asdf/ilc2010draft.pdf
For example, the library should not dictate where files go -- that's the job of the library's installer (user). The library itself knows where its components are *relative to the installation* -- that's information the library implementor will have, which the installer will not have.
Great example.
Similarly, if I am the library supplier, I am the one who knows when I have changed the API incompatibly. That is why major component version changes were intended *not* to satisfy version requirements that have different major components. This is reasonable, because it provides a channel of information from the library supplier, that can be checked automatically, and if the downstream is still compatible, it should be a relatively easy fix (but is it? I don't know if there's a good way to say "I'll accept version 0.55+ or 1.0+").
The only other option would be to have the library client provide both upper and lower bounds on the version numbers that the client will accept. But, except for cases of *known* upgrade incompatibility, this is information that the library client simply cannot have.
And here I stop you.
The client library knows which version of the supplier library it was designed to work with, and which version it was tested with. But it is not responsible for how the supplier will evolve his versions.
The supplier library knows which previous versions it is compatible with, and which versions it has broken compatibility with regard to features that he actively supports — as opposed to having made incompatible changes where no promise of compatibility was made, or where he explicitly reserved the right to make a change (including fixing bugs and removing long deprecated features).
Therefore, I contend that the supplier library, not the client library, should provide the information as to which older versions it is or isn't compatible enough with to satisfy (via version-satisfies) the requirement for a given version. An explicit requirement for a newer version it, by definition, fails to satisfy.
By default, it could be, as in ASDF 1, the "semantic versioning" rule: any newer version with the same major version number.
Or it could be "This version can stand in as a replacement for any older version whatsoever", as is the current ASDF 3 behavior, and as ASDF needs for itself.
Or it could be "This version can stand in as a replacement for any older version as far back as this given version", where each library supplies its own number.
In any case, it should be the supplier library's role to specify that, not the client library's, at least in general and as a first approximation. The added advantage is that libraries can then implement their own version parsing schemes that need not be the same as ASDF: just override version-satisfies and have it call your own functions to parse and compare version strings.
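What such an override might look like, as a sketch only: MYLIB-SYSTEM and MYLIB-VERSION<= are hypothetical names, and the exact extension point may differ from what ASDF ultimately documents. ASDF:VERSION-SATISFIES and ASDF:COMPONENT-VERSION are real.

```lisp
;; Hypothetical sketch: a supplier plugs its own version scheme into
;; ASDF by specializing VERSION-SATISFIES on its own system class.
(defclass mylib-system (asdf:system) ())

(defmethod asdf:version-satisfies ((system mylib-system) required-version)
  ;; Accept requirements from the oldest still-supported API version
  ;; up to the current version.  MYLIB-VERSION<= would implement the
  ;; library's own parsing and comparison of version strings.
  (and (mylib-version<= "1.3" required-version)
       (mylib-version<= required-version (asdf:component-version system))))

;; In mylib.asd:
;; (defsystem "mylib" :class mylib-system :version "2.1.4" ...)
```

The key design point is that the compatibility knowledge lives with the supplier, in the supplier's code, and clients keep writing plain (:version ...) dependencies.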
Yes, sometimes a client might insist on a specific old version range for a library, rather than the open-ended range of the versions being actively developed. But then what that client really wants is a fork of the library started at some old version, so maybe the solution isn't a matter of adding support in ASDF, but in encouraging authors to fork projects when needed — or encouraging them to find solutions that don't rely on explicit or implicit forking.
The rules here are simple and easy to understand: if you change the API in an incompatible way, bump the major version number.
Yes, this rules out the false modesty of version numbers like 0.0.145, which looks like an alpha, but in fact turns out to be the fifth full release, but that's a sacrifice we can all live with! ;-)
I think that makes a meaningful default, or at least option, that we should encourage people to use.
But that's just not what most library authors have done in practice, and not what ASDF has done itself (it has maintained backward compatibility from ASDF 1 to ASDF 2 to ASDF 3, modulo unavoidable bugs).
As my own devil's advocate, the counter-argument is that the state of the art in CL libraries is so poor that even getting a version number is unusual. But this seems to be a counsel of despair: it says that since we have a bad state of affairs now, we are doomed to live with it forever.
Or maybe we should also strive for more cooperation with less coordination: tweak ASDF's version handling so it becomes extensible, and document the protocol including the default and one non-default (which it seems we already have).
I suppose I could take a little while and write up a candidate "versioning systems with ASDF" node for the ASDF manual, and push it for consideration by the community.
Yes, that too.
Also, as we speak about versioning, I have been trying to use semantic versioning as described at http://semver.org/ and I don't think it is a silver bullet - it doesn't solve all problems.
Isn't "No Silver Bullet" the famous essay bundled with /The Mythical Man-Month/?
Yes, this doesn't solve every problem, but it might give you a useful warning if your upstream has changed under you.... Pascal's case is an odd one, because the bump to 1.0 is, in some sense, a false positive.
OTOH, it's not really so bad to have to take a look at a library that has seen no commits in three years.
I don't think we're doing badly. At least not in THIS regard.
That's what I think the problem is with Lisp libraries: http://fare.livejournal.com/169346.html Sometimes, cooperation DOES require coordination!
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org Gun: weapon of individual vs individual. Bomb: weapon of group vs group. That's why collectivists hate guns and love bombs.
On Tue, 19 Nov 2013, Faré wrote:
Thanks to #+foo, #.(if (find-symbol "FOO" :bar) ...) and (eval-when ...), CL can deal with source-level compatibility in ways that C cannot.
Moving a bit off-topic, GNU Autoconf takes probing for function availability and behavior further towards a science than perhaps any other system in use. [It's a shame that M4 and portable shell scripting were required for pragmatic reasons. That said, many lispers would learn some interesting macro lessons by studying Autoconf's use of M4.]
Also, runtime probing for functionality is actually often used in C/C++ programs, though usually not understood by most of the dev team. These probes frequently take the form of dlopen/dlsym and their Win32 equivalents. Also, many APIs like NPAPI, OpenGL, CUDA, and CPUID have their own extension registry systems.
FWIW, my little "read-macros" package demonstrated some functionality to simplify writing read-time conditional code without pushing everything on *features*.
http://git.tentpost.com/?p=lisp/read-macros.git
Back on-topic, semantic versioning systems such as advocated by GNU libtool try to provide a conservative estimate on portability. In such environments, it is not uncommon for a project to make a new release simply because a dependency bumped a version number. This is a primary motivation for the autoconf-style detection of behavior rather than trusting in names and version numbers. If behavior is detected, then a new version may simply require recompilation instead of a new source release.
One approach to solving "DLL-hell" is to essentially avoid shared libraries altogether. Significant progress can be made by having an application fully specify the revisions of all dependencies. Then these can be delivered atomically with the application, and a build system can assemble them. This is the path used by most commercial MSWin applications, several Java and Javascript frameworks, etc. Problems arise when two dependencies require different versions of a common third dependency, especially when that third dependency needs to have "singleton" behavior for consistency.
On Tue, 19 Nov 2013, Anton Vodonosov wrote:
But my point is that it's not enough to just bump the major version number, as semantic versioning suggests.
If the author of the "somelib" library wants to make an API-incompatible change, it is better to release a new ASDF system "somelib2" and put the code into a new package, somelib2.
This concept resonates with me. The existence of a new API version should not preclude further releases of older API versions. Branching, and thus API versions, should not be restricted to a simple linear progression. Most semantic versioning systems partially acknowledge this through the use of "minor" and "micro" version numbers. (A git-like tree would be better...)
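Anton's scheme, as I read it, amounts to something like the following (somelib and somelib2 are his illustrative names):

```lisp
;; The incompatible successor lives in its own package and its own
;; ASDF system, so existing clients keep loading the old API untouched.
(defpackage :somelib2
  (:use :cl)
  (:export #:frob))  ; the new, incompatible API

;; Old clients keep depending on system "somelib" and writing
;; (somelib:frob ...); new clients depend on system "somelib2" and
;; write (somelib2:frob ...).  Both can even coexist in one image.
```

Because CL packages are first-class namespaces, both API generations can be loaded side by side, which is precisely what a single linearly-versioned system name cannot offer.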
Towards this end, I had started investigating a set of macros and/or features to simplify the process of embedding version information directly into the CL package names themselves. Unfortunately, I didn't find any clean solution that met my goals. Here are a couple emails on the subject.
http://article.gmane.org/gmane.lisp.libcl.devel/110 http://article.gmane.org/gmane.lisp.libcl.devel/123 [Note: LibCL and its mailing lists are now defunct.]
Another possible source of ideas is the FreeBSD's new "pkg-ng" system. Apparently they found a nice solution to integrate binary packages with locally-compiled source ports.
All that said, semantic versioning is tried and true, easy to implement, and a useful improvement on the current ASDF status quo. Other approaches such as behavior testing and nonlinear versioning are harder to implement but should play nicely with a semantic versioning system. Thus I am all in favor of ASDF adopting a reasonable semantic system today.
Whatever we do, please implement an escape hatch for the end user to override the versioning system's idea of compatibility. These things often have obscure failure modes and/or prevent nuanced usage. Just like accessing a package's internal symbols with ::, it is nice to have a straightforward way to void the warranty and bypass the normal safety mechanisms.
- Daniel
On Thu, 2013-11-21 at 00:36 -0500, Daniel Herring wrote: [...]
FWIW, my little "read-macros" package demonstrated some functionality to simplify writing read-time conditional code without pushing everything on *features*.
Interesting.
Back on-topic, semantic versioning systems such as advocated by GNU libtool try to provide a conservative estimate on portability.
libtool combines the notions of API and ABI compatibility. We have it a bit easier.
[...]
On Tue, 19 Nov 2013, Anton Vodonosov wrote:
But my point is that it's not enough to just bump the major version number, as semantic versioning suggests.
If the author of the "somelib" library wants to make an API-incompatible change, it is better to release a new ASDF system "somelib2" and put the code into a new package, somelib2.
This concept resonates with me. The existence of a new API version should not preclude further releases of older API versions.
In practice nobody is going to take the time to maintain older versions except for a handful of very popular projects, and even in that case there will be significant social and economic pressure to avoid divergence and waste of resources.
[...]
Towards this end, I had started investigating a set of macros and/or features to simplify the process of embedding version information directly into the CL package names themselves. Unfortunately, I didn't find any clean solution that met my goals. Here are a couple emails on the subject.
http://article.gmane.org/gmane.lisp.libcl.devel/110 http://article.gmane.org/gmane.lisp.libcl.devel/123 [Note: LibCL and its mailing lists are now defunct.]
Some interesting ideas there.
[...]
All that said, semantic versioning is tried and true, easy to implement, and a useful improvement on the current ASDF status quo. Other approaches such as behavior testing and nonlinear are harder to implement and should play nicely with a semantic versioning system. Thus I am all in favor with ASDF adopting a reasonable semantic system today.
I agree.
Whatever we do, please implement an escape hatch for the end user to override the versioning system's idea of compatibility. These things often have obscure failure modes and/or prevent nuanced usage. Just like CL::internal symbols, it is nice to have a straightforward way to void the warranty and bypass the normal safety mechanisms.
I agree.
Stelian Ionescu wrote:
Whatever we do, please implement an escape hatch for the end user to override the versioning system's idea of compatibility. These things often have obscure failure modes and/or prevent nuanced usage. Just like CL::internal symbols, it is nice to have a straightforward way to void the warranty and bypass the normal safety mechanisms.
I agree.
I'm going to take that as a vote to implement a continuation restart for version mismatch errors. [Yes, I'm grasping at straws! ;-)]
Cheers, r
I'm going to take that as a vote to implement a continuation restart for version mismatch errors. [Yes, I'm grasping at straws! ;-)]
I vote NO to that.
If the client system specifies a minimum version, it means it, and any older version is an error that had better occur early than late.

If he specified a maximum version — he shouldn't, and ASDF *must not* provide a means to do that (and so, after all, I vote that lp#1183179 be resolved as "Invalid"). Backward compatibility, if specified, must be specified by the provider system, not the client system (and then defsystem should have a :backward-compatible-to keyword argument). And if the provider system declares that it doesn't support compatibility with the old API — it means it, too.

If somehow things are actually compatible, then either system has to be modified indeed, and/or you should be pulling newer, more compatible versions anyway. If what you really want is a fork of an old system, then use a fork of an old system, with its own name, e.g. "hunchentoot-0.13". If you're using extreme ways of being compatible with a wider range of versions of a system than is expressible through ASDF version constraints, then don't use ASDF version constraints and stick to your extreme ways.
I also vote against "semantic versioning" as a meaningful thing for Common Lisp code. It's a great notion for the binary release of C dynamic libraries that just doesn't mean anything for Lisp source (or even fasl) releases.
Clients should be able to specify an open-ended range with a minimum version, and only that. Providers should also be able to specify a range, with a minimum API-compatibility version, and only that — with the current version always being the maximum supported version.
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org A tautology is a thing which is tautological.
Faré wrote:
I'm going to take that as a vote to implement a continuation restart for version mismatch errors. [Yes, I'm grasping at straws! ;-)]
I vote NO to that.
Usually I find myself in agreement with you, but in this case I find myself two for two against, so I will share my rationale, and then propose a compromise solution:
1. If one wants a programming language + environment that won't permit you to expressly shoot yourself in the foot, then there's always Haskell, Pascal, and Ada, among others. I don't think it's our job to do that. I'm ok with providing protective warnings to keep the programmer from harming him or herself, but I don't think it's our job to prevent him/her from doing what s/he wants, assuming that it's clear that s/he understands the situation. Hence my suggestion that we allow people to continue through a version error.
2. In my experience, the errors one gets when upgrading a dependency that has changed its API are so odd, unpredictable, and difficult to trace to root cause that we should provide what help we can.
That said, there is clearly an *enormous* range of different opinions on this issue, and these are all reasoned opinions. So I suggest a compromise:
1. For those who like semantic versioning, revert to the behavior that an expressly indicated API upgrade by the supplier causes a condition to be signaled. So if I release floyd-warshall 2.0, systems requiring 1.x will see a condition.
For those who *don't* like semantic versioning, this condition will be a WARNING, and not an ERROR. Those who *really* like semantic versioning can establish a handler for this warning that will treat it as an error (possibly continuable).
2. #1 will meet the needs of those who want to be able to shoot themselves in the foot. Now ASDF will have two conditions for version mismatch: one for library too old, and one for library (possibly) too new. Since the first condition is almost always really bad, we don't provide a continuation for it, and Faré should be satisfied. Since the second condition is a WARNING, I'm satisfied that if a library developer wishes to signal a disruption in his/her API, I will be so informed.
3. I'm OK with upper and lower version constraints. I don't think it's a substitute for semantic versioning, since it doesn't allow information to flow from a library supplier to a library consumer. But I agree that it's valuable for the case where a library consumer has looked at the new version of the library, knows that the change is incompatible, and for some reason will not or has not yet adapted the client library.
Does this provide some minimal level of satisfaction to all? I think it's a reasonable compromise, allowing those who want semantic versioning in a stronger sense to have it, without unduly burdening those who don't.
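From the client side, the escalation Robert describes could look roughly like this. The condition class name VERSION-MAJOR-MISMATCH is purely illustrative — the proposal above is not implemented, so no such condition actually exists in ASDF:

```lisp
;; Sketch: a client that *does* want strict semantic versioning turns
;; the hypothetical major-version-mismatch WARNING into an ERROR.
(handler-bind ((version-major-mismatch       ; hypothetical condition class
                 (lambda (w) (error w))))    ; resignal the warning as an error
  (asdf:load-system "my-app"))
```

This is the standard CL idiom for escalating a warning, so the compromise costs nothing in ASDF beyond defining and documenting the condition class.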
Best, R
On Thu, Nov 21, 2013 at 12:45 PM, Robert P. Goldman rpgoldman@sift.info wrote:
Faré wrote:
I'm going to take that as a vote to implement a continuation restart for version mismatch errors. [Yes, I'm grasping at straws! ;-)]
I vote NO to that.
Usually I find myself in agreement with you, but in this case I find myself two for two against, so I will share my rationale, and then propose a compromise solution:
1. If one wants a programming language + environment that won't permit you to expressly shoot yourself in the foot, then there's always Haskell, Pascal, and Ada, among others. I don't think it's our job to do that. I'm ok with providing protective warnings to keep the programmer from harming him or herself, but I don't think it's our job to prevent him/her from doing what s/he wants, assuming that it's clear that s/he understands the situation. Hence my suggestion that we allow people to continue through a version error.
Well, make it a CERROR if you want, in case a user wants to find out exactly what breaks, and maybe identify the client system as having an overly strict version requirement. But really, if a system has a version constraint, (a) it ought to be there for a reason [I don't see developers gratuitously adding version constraints where not needed] and (b) why not upgrade the provider system?
2. In my experience, the errors one gets when upgrading a dependency that has changed its API are so odd, unpredictable, and difficult to trace to root cause that we should provide what help we can.
I think that telling the user about a version mismatch, and hence suggesting that he should upgrade the provider system, is already the greatest help we can provide.
That said, there is clearly an *enormous* range of different opinions on this issue, and these are all reasoned opinions. So I suggest a compromise:
1. For those who like semantic versioning, revert to the behavior that an expressly indicated API upgrade by the supplier causes a condition to be signaled. So if I release floyd-warshall 2.0, systems requiring 1.x will see a condition.
For those who *don't* like semantic versioning, this condition will be a WARNING, and not an ERROR. Those who *really* like semantic versioning can establish a handler for this warning that will treat it as an error (possibly continuable).
I contend that the provider system, not the consumer system, should specify whether it's using semantic versioning or not.
With my proposed :backward-compatible-to keyword, you would specify: (defsystem floyd-warshall :version "2.1.4" :backward-compatible-to "2.0" ...)
Alternatively, it could specify a function to compare versions: (defsystem floyd-warshall :version "2.1.4" :version-satisfies 'uiop:version-compatible-p)
The latter alternative has the advantage that people can specify version schemes different from dot-separated integers, if they want (in which case parse-defsystem should have some checks relaxed).
2. #1 will meet the needs of those who want to be able to shoot themselves in the foot. Now ASDF will have two conditions for version mismatch: one for library too old, and one for library (possibly) too new. Since the first condition is almost always really bad, we don't provide a continuation for it, and Faré should be satisfied. Since the second condition is a WARNING, I'm satisfied that if a library developer wishes to signal a disruption in his/her API, I will be so informed.
I don't think there should be a warning, but only a (c)error. I don't think the consumer should specify semantic versioning, but the provider, at which point a cerror is appropriate.
Now what if the provider at version 20 says it provides compatibility back to version 10, and the consumer is ready for version 15 but includes workarounds to support the system back to version 5? Maybe the consumer should be able to specify two versions: "I'm tested to work with version 15, but am supposed to work back to version 5." Then we check that the two ranges have a non-empty intersection.
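The intersection check sketched here could be very small. UIOP:VERSION< is a real UIOP function for comparing dot-separated integer versions; the function name and argument convention below are illustrative:

```lisp
;; Provider supports [compat-min, current]; consumer supports
;; [oldest-supported, tested].  Two closed intervals intersect iff
;; neither lower bound exceeds the other interval's upper bound.
(defun version-ranges-intersect-p (provider-compat-min provider-current
                                   consumer-oldest consumer-tested)
  (and (not (uiop:version< provider-current consumer-oldest))
       (not (uiop:version< consumer-tested provider-compat-min))))
```

With Faré's numbers, provider ["10", "20"] against consumer ["5", "15"] intersects in ["10", "15"], so the dependency would be accepted.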
3. I'm OK with upper and lower version constraints. I don't think it's a substitute for semantic versioning, since it doesn't allow information to flow from a library supplier to a library consumer. But I agree that it's valuable for the case where a library consumer has looked at the new version of the library, knows that the change is incompatible, and for some reason will not or has not yet adapted the client library.
I don't think the consumer should specify whether semantic versioning applies or not. It's not information the consumer has. Only the provider has this information.
If the consumer has decided that new versions of the library are not compatible, then either library has to be fixed to restore compatibility, or the provider library needs to be forked if the compatibility is not restorable and the consumer won't adopt the new API. Version blacklists can be useful, but should not include open-ended ranges: if you're not going to support future versions of the provider system, you should fork it to keep API support.
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org I'm pro-choice, or as I call it pro-Go-Mind-Your-Own-Goddamn-Fucking-Business. For it's always SOMEONE's choice. Here: not yours, the mother's.
Fare, I do not understand how you think about versioning.
Why do you say semantic versioning is for binary libraries?
I even suspect you mean something different than me, Robert, and the others, because semantic versioning as described at http://semver.org/ focuses on the distinction between API-compatible changes in libraries and changes which break the API.
On Thu, Nov 21, 2013 at 3:56 PM, Anton Vodonosov avodonosov@yandex.ru wrote:
Fare, I do not understand how you think about versioning.
Why do you say semantic versioning is for binary libraries?
Because that's what it was designed for: binary releases of C libraries, such as /usr/lib/libmagic.1.0.0
The version string identifies the version of the API, not the version of the source code. It is for use not by the compiler, but by the linker. By the time you link compiled C code, there can be no adaptation of code to the API; it's all hardwired. Therefore, you need strict binary compatibility, and that's what the linker version system tries to enforce.
When you compile, you can detect API discrepancies, etc. In the C world, you do that with autoconf, #ifdef, etc. The same piece of source code will happily compile against a wide range of versions of the C library, and usually doesn't check version numbers, only the availability of given functions and CPP macros. Most of my C programs can compile unmodified against Linux libc4, libc5, libc6 and whichever BSD libc, etc. Once compiled, though, the code can only link against an ABI-compatible version of its dependencies.
The situation with CL is much more like the situation with C source code: you load the code as either source or precompiled fasls, then you introspect for functions and macros if needed, and compile your own code. At no point is there distribution of precompiled provider libraries that you load together with precompiled consumer libraries that were compiled against possibly different versions of the provider libraries. Therefore, we don't care at all about linker versions.
I even suppose you mean something different than me, Robert and others. Because semantic versioning as described at http://semver.org/ focuses on distinction between API compatible changes in libraries, and changes which break API.
This framework just isn't meaningful in the context of Lisp code, or of any source-based code release system. And even in the context of a precompiled-code release system, it is primitive and stupid as compared to the one and only real solution, which is NixOS.
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org Pacifism is a shifty doctrine under which a man accepts the benefits of the social group without being willing to pay — and claims a halo for his dishonesty. — Robert Heinlein
Faré wrote:
I contend that the provider system, not the consumer system, should specify whether it's using semantic versioning or not.
With my proposed :backward-compatible-to keyword, you would specify: (defsystem floyd-warshall :version "2.1.4" :backward-compatible-to "2.0" ...)
Alternatively, it could specify a function to compare versions: (defsystem floyd-warshall :version "2.1.4" :version-satisfies 'uiop:version-compatible-p)
The latter alternative has the advantage that people can specify version schemes different from dot-separated integers, if they want (in which case parse-defsystem should have some checks relaxed).
I'll tell you honestly: even though I like semantic versioning, I will never do this. It's just too burdensome to track whether version intervals preserve API compatibility.
I can just barely manage to think about bumping a major version number when I know I'm breaking or going to break the API, but anything more is asking too much of the library supplier.
Indeed, given the number of ASDF systems in the wild that have no version number at all, even bumping the major version number seems to be asking a lot! So I'm not interested in adding support for :backward-compatible-to.
I like the :version-satisfies option, though...
On 21 Nov 2013, at 18:45, Robert P. Goldman rpgoldman@sift.info wrote:
Faré wrote:
I'm going to take that as a vote to implement a continuation restart for version mismatch errors. [Yes, I'm grasping at straws! ;-)]
I vote NO to that.
Usually I find myself in agreement with you, but in this case I find myself two for two against, so I will share my rationale, and then propose a compromise solution:
- If one wants a programming language + environment that won't permit you to expressly shoot yourself in the foot, then there's always Haskell, Pascal, and Ada, among others. I don't think it's our job to do that. I'm OK with providing protective warnings to keep programmers from harming themselves, but I don't think it's our job to prevent them from doing what they want, assuming it's clear that they understand the situation. Hence my suggestion that we allow people to continue through a version error.
- In my experience, the errors one gets when upgrading a dependency
that has changed its API are so odd, unpredictable, and difficult to trace to root cause that we should provide what help we can.
That said, there is clearly an *enormous* range of different opinions on this issue, and these are all reasoned opinions. So I suggest a compromise:
- For those who like semantic versioning, revert to the behavior that an expressly indicated API upgrade by the supplier causes a condition to be signaled. So if I release floyd-warshall 2.0, systems requiring 1.x will see a condition.
For those who *don't* like semantic versioning, this condition will be a WARNING, and not an ERROR. Those who *really* like semantic versioning can establish a handler for this warning that will treat it as an error (possibly continuable).
- #1 will meet the needs of those who want to be able to shoot
themselves in the foot. Now ASDF will have two conditions for version mismatch: one for library too old, and one for library (possibly) too new. Since the first condition is almost always really bad, we don't provide a continuation for it, and Faré should be satisfied. Since the second condition is a WARNING, I'm satisfied that if a library developer wishes to signal a disruption in his/her API, I will be so informed.
- I'm OK with upper and lower version constraints. I don't think it's
a substitute for semantic versioning, since it doesn't allow information to flow from a library supplier to a library consumer. But I agree that it's valuable for the case where a library consumer has looked at the new version of the library, knows that the change is incompatible, and for some reason will not or has not yet adapted the client library.
Does this provide some minimal level of satisfaction to all? I think it's a reasonable compromise, allowing those who want semantic versioning in a stronger sense to have it, without unduly burdening those who don't.
+1
For those who want strict handling of both cases, they can always set *break-on-signals* accordingly. (The other way around, relaxing a strict error into something to ignore, is less convenient.)
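For instance (a sketch only; the condition name version-mismatch-warning is hypothetical, since the compromise above doesn't fix a name), a strict consumer could tighten the handling either way:

```lisp
;; Option 1: promote the hypothetical warning to a full error.
(handler-bind ((version-mismatch-warning
                 (lambda (w) (error w))))
  (asdf:load-system :my-app))

;; Option 2: use the standard *break-on-signals* variable to drop
;; into the debugger whenever such a condition is signaled.
(let ((*break-on-signals* 'version-mismatch-warning))
  (asdf:load-system :my-app))
```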
Pascal
P.S.: Please don’t over-engineer this. People will not spend a lot of time thinking about what exactly happens when what part of their version numbers change how. We need to make the barriers for contributing libraries to the Lisp community lower, not higher.
-- Pascal Costanza The views expressed in this email are my own, and not those of my employer.
Semantic versioning does one good thing: it concentrates on the difference between API-compatible changes and API-incompatible changes.
This is an important concept, and it remains valid both for binary components and for source code.
But semantic versioning is not a complete solution. It allows one to detect an incompatibility problem, but it doesn't say how to solve or avoid it.
I wouldn't want to see library authors advised: "You can break API compatibility; in this case just increase the major number in the :version attribute of your asdf:defsystem, and the problem is solved."
It does not solve the problem. For example, I would not want to see a library like alexandria break compatibility by renaming half of its functions and reflecting this only in the :version attribute. Suppose postmodern adopted the new alexandria, but hunchentoot hasn't. As the new and old alexandria cannot be used simultaneously, my application built with hunchentoot and postmodern cannot benefit from improvements and fixes in the new postmodern until hunchentoot is updated.
Releasing an incompatible library version essentially splits the Lisp world into two sets: libraries depending on the new version and libraries depending on the old version. These two sets become incompatible and cannot be used in the same application.
Doing so would impose significant friction on the evolution of the Lisp world. Once you depend on a library from one set, you are locked into that set.
If we put the incompatible version into a new package (alexandria2 or carthage, whatever) and allow it to coexist with the old version, we allow other libraries to evolve freely, without constraining development agility.
I hope it is clear by now that the library maintainer is not expected to provide support for old versions. Just leaving the old versions available for use yields significant benefits.
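A minimal sketch of the coexistence idea (the package name alexandria2 is hypothetical, just a stand-in for the renamed release):

```lisp
;; The old API stays in the ALEXANDRIA package; the incompatible
;; revision lives in a new package, so both can be loaded in the
;; same image at the same time.
(defpackage :alexandria2
  (:use :cl)
  (:export #:compose))   ; the redesigned, incompatible function

;; hunchentoot keeps calling alexandria:compose, while postmodern
;; can refer to alexandria2:compose; neither steps on the other,
;; because the package prefixes keep the two APIs disjoint.
```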
We can generalize this to other forms of :version specifiers and :depends-on specifiers, be they list designators, open ranges, etc. In general it may be described as: the consumer specifies some constraints on the supplier libraries it can work with, and the supplier describes itself, so that the library loader, when it loads an application's dependencies, can solve a kind of constraint-satisfaction problem and choose, from the library versions available on the system, the versions that satisfy all the consumers in the application.
People can experiment with various forms of supplier properties/constraints specifiers: dot-separated numbers, lists of features, etc. But it's not enough to just describe the constraints. We also need guidelines, or a strategy, for developers to ensure it is possible to satisfy all the constraints. If my application's dependency tree contains two incompatible versions of a library (some-lib 1.0.0 and some-lib 2.0.0, if we use semantic versioning), then it is impossible to satisfy such requirements, unless these two versions can be loaded simultaneously.
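As a toy illustration of that constraint-satisfaction view (pick-version is a made-up helper; in the simplest one-library case the "solver" just intersects the consumers' constraints over the installed versions):

```lisp
(defun pick-version (available requirements satisfies-p)
  "From the list AVAILABLE of version strings, return one that
satisfies every consumer requirement in REQUIREMENTS according to
the predicate SATISFIES-P, or NIL if no installed version fits."
  (find-if (lambda (version)
             (every (lambda (req) (funcall satisfies-p version req))
                    requirements))
           available))

;; With a "same major, at least as new" predicate such as UIOP's
;; version-compatible-p:
;;   (pick-version '("1.2.0" "2.0.0" "2.1.4") '("2.0" "2.0.1")
;;                 #'uiop:version-compatible-p)  => "2.1.4"
;; With requirements '("1.0" "2.0") no single version fits => NIL,
;; which is exactly the unsatisfiable case described above.
```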
22.11.2013, 01:15, "Faré" fahree@gmail.com:
When you compile, you can detect API discrepancies, etc. In the C world, you do that with autoconf, #ifdef, etc. The same piece of source code will happily compile against a wide range of versions of the C library, and usually doesn't check version numbers, only the availability of given functions and CPP macros. Most of my C programs can compile unmodified against Linux libc4, libc5, libc6 and whichever BSD libc, etc.
No: consider when libc7 is released, and some functions are removed or change behaviour. It may break your C programs, because your code cannot be prepared for arbitrary future changes. That's the question of backward compatibility: how supplier components should evolve to allow existing clients to remain functional.
I think it is not bad to make :version-satisfies customizable. In this case I would also suggest making :version not necessarily a string, but allowing some other values, like a list of tags, etc.
Robert, if we want to continue from the version mismatch condition, the condition object might list the available alternatives: other versions of the required ASDF systems that were found. And if no alternatives are available, we can't continue.
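Sketched in code (the condition name, slots, and restart protocol here are invented for illustration, not ASDF's actual API):

```lisp
(define-condition version-mismatch (error)
  ((system    :initarg :system    :reader mismatch-system)
   (required  :initarg :required  :reader mismatch-required)
   (available :initarg :available :reader mismatch-available))
  (:report (lambda (c stream)
             (format stream
                     "~A does not satisfy version ~A~@[; other versions found: ~{~A~^, ~}~]"
                     (mismatch-system c) (mismatch-required c)
                     (mismatch-available c)))))

(defun signal-version-mismatch (system required available)
  "Signal a VERSION-MISMATCH.  A CONTINUE restart is offered only
when alternative versions exist to fall back on; with no
alternatives, the error is not continuable."
  (if available
      (with-simple-restart (continue "Load ~A despite the mismatch." system)
        (error 'version-mismatch :system system
               :required required :available available))
      (error 'version-mismatch :system system
             :required required :available nil)))
```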
Best regards, - Anton