"Robert Goldman" rpgoldman@sift.info writes:
On 15 Oct 2020, at 12:19, Mark Evenson wrote:
@Robert: would it be satisfactory to have the output of the ASDF tests available in a non-Jenkins environment? Do we need to run the ASDF tests under OSs other than Linux, that is, do you currently test other environments like macOS/Windows/FreeBSD?
I got started with Jenkins only because my company had it, so I was familiar with it, but I hadn't used the GitLab/GitHub frameworks. I'm not ready to configure GitLab/GitHub-style CI myself beyond crude bug fixing, but I'm quite comfortable with moving to it. Indeed, I believe it's simpler, because it can be merge-request aware, while doing that with Jenkins requires additional work.
[One cranky request --- if we set this up in that way, please use `gitlab-ci.yml` or some other *visible* filename, instead of the dotted filenames that are hidden by default. I'm not sure why people think hiding critical configurations is a good idea, but I don't. Thanks!]
I currently run the ASDF tests on macOS, but not using Jenkins; I do it by hand, since that's what my laptop runs. So running the macOS tests automatically would be a nice-to-have, but it's not critical.
I currently *don't* run the ASDF tests on Windows, but I should, since it's such a different platform. Dave Cooper has kindly offered me access to a Windows VM for testing, but I haven't gotten around to setting it up. It would be lovely if we could hook Windows testing into whatever CI framework we end up with.
I'm a huge fan of Gitlab CI and was getting tired of running all the tests locally whenever I made a change, so I started a Gitlab CI configuration for ASDF at https://gitlab.common-lisp.net/asdf/asdf/-/merge_requests/146 and modified it a bit more today. The recent modifications are:

- marking the upgrade tests as `allow_failure: true` (since a previous email made it sound like the old Jenkins config didn't even run them);
- increasing some timeouts in tests (I don't know what happened, but some tests started failing under CCL when run on cl.net runners);
- reordering things so that all regression tests run first;
- addressing a bug in the latest SBCL that was triggered by the cl.net infrastructure.
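To make the `allow_failure` / ordering changes concrete, here's a minimal sketch of what that shape looks like in a `.gitlab-ci.yml`. The job names, image, and make targets are illustrative placeholders, not the actual contents of !146:

```yaml
stages:
  - regression   # all regression tests run first
  - upgrade      # upgrade tests run afterwards

test-sbcl:                    # hypothetical job name
  stage: regression
  image: clfoundation/sbcl    # assumed image; substitute whatever !146 uses
  script:
    - make test               # illustrative invocation

upgrade-sbcl:                 # hypothetical job name
  stage: upgrade
  allow_failure: true         # pipeline stays green even if this job fails
  script:
    - make test-upgrade       # illustrative invocation
```

With `allow_failure: true`, a failing upgrade job is shown with a warning in the pipeline view but doesn't block the pipeline or the MR.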
If you want to take some radically different approach than I did, that's great! I just wanted to make sure everyone knew that an 80% solution exists and didn't waste their time coming up with something substantially similar. I think the biggest things left on !146 are deciding how often to run the tests (every push? only on MRs?) and adding more runners for licensed implementations or OSes.
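On the "every push vs. only MRs" question: Gitlab CI can express either policy with `workflow: rules` and its predefined variables. A sketch of one possible policy (the variables are standard GitLab ones; the policy itself is just a suggestion):

```yaml
workflow:
  rules:
    # Run a pipeline for every merge request...
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # ...and for pushes to the default branch, but not other pushes.
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

Anything that matches no rule simply doesn't get a pipeline, which keeps runner load down.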
Speaking of OSes, my research group has a Mac mini we've been using for our own CI purposes. By no means is it a speed demon, but it's lightly used so I can set up another runner on it for ASDF.
As for the licensed implementations: I'm not sure what the cl.net infrastructure looks like or how paranoid you want to be about protecting the license, but the easiest way forward seems to be to add a new VM-based runner with the implementation and license already installed in the VM, register that runner so that only the asdf/asdf project can use it, and tag it with the implementation name. We can then add a job that is triggered only when it runs in the asdf/asdf project, and have that job require the tag.
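A sketch of such a gated job, using "allegro" as a placeholder tag/implementation name (the `CI_PROJECT_PATH` check is standard GitLab; the script line is illustrative):

```yaml
test-allegro:                 # hypothetical job for a licensed implementation
  tags:
    - allegro                 # only runners registered with this tag pick it up
  rules:
    # Create the job only in the canonical project, never in forks:
    - if: $CI_PROJECT_PATH == "asdf/asdf"
  script:
    - make test               # illustrative invocation
```

Since forks have a different `CI_PROJECT_PATH`, their pipelines never create this job, so they can't even attempt to reach the tagged runner.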
As I said in another message, this would mean that forks can't use that runner (and MR tests are, unfortunately, run in the CI context of the fork). If it's OK for master to occasionally break, the changes from forks could be merged directly to master and breakage dealt with after that. If you want master to never break because of licensed implementations, then someone with write bits on asdf/asdf would need to pull the branch from the fork into asdf/asdf first.
Either way, the person pulling into asdf/asdf would have to do a sanity check to make sure the license isn't being exfiltrated, but I don't think that's any change from the previous approach (unless you were previously running the tests without external network access!).
-Eric