Hi Philipp,

 Thank you for your analysis and proposals so far! 

See my (probably dumb) questions below.


---- On Fri, 06 Oct 2023 02:28:19 -0400 Philipp Marek <philipp@marek.priv.at> wrote ---

>> for example I see this at Hetzner:
>>
>> AMD Ryzen™ 5 3600
>> CPU: 6 cores / 12 threads @ 3.6 GHz
>> Generation: Matisse (Zen 2)
>> RAM: 64 GB DDR4
>> Drives: 2 x 512 GB NVMe SSD
>>
>>
>> Would that be sufficient for our gitlab at least?

> It should be enough CPU and RAM;
> if we want to run local RAID1 (which I'd like to have!),
> storage could become a bit low.
>
> root@common-lisp:~# lsblk
> NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
> sda        8:0    0    2G  0 disk
> └─sda1     8:1    0  243M  0 part /boot
> sdb        8:16   0  200G  0 disk /srv
>                                   /home
>                                   /mnt/btrfs
>                                   /usr
>                                   /
> sdc        8:32   0  250G  0 disk /var
> vda      254:0    0   32G  0 disk [SWAP]
>
> Yeah, currently we have ~450GB in filesystems,
> _used_ is only 111GB + 149GB = 260GB,
> so there's a bit of reserve left.


Given the above, how much disk do you reckon we'd need to run local RAID1?  And I assume it has to be two separate drives? 
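
If I understand right, RAID1 just mirrors the drives, so 2 x 512 GB would net only ~512 GB usable against our current ~260 GB used. For my own benefit, here is my rough idea of what setting it up with mdadm might look like (device names are guesses on my part):

    # sketch only -- /dev/nvme0n1 and /dev/nvme1n1 are assumed names,
    # and creating the array wipes whatever is on those drives
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array across reboots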

By the way, the above Hetzner rig is currently offered for about EUR 45 per month.


>> Or for running a hypervisor with one VM for gitlab and maybe others
>> for other services?

> I'd suggest to use Docker instead - according to our measurements,
> a VM layer costs 15-25% of performance,
> and having everything available in one filesystem makes backups much
> easier.


I can imagine that if we have all the services Dockerized then in principle, among other benefits, it should be easier to migrate to different physical hosts in the future -- just push and pull a Docker image. Well, not really -- there will be loads of data to go along with each Docker image and service, plus orchestration to be replicated on the new physical host. But I think those are solved problems and overall made easier by containerizing stuff -- at least the local environment for each service isn't suddenly changing out from under it when moving to a new physical host with a newer or different Linux flavor.
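
For example (very much a sketch -- the volume name "gitlab-data" is hypothetical), I picture moving one service plus its data roughly like this:

    # on the old host: archive the service's named data volume
    docker run --rm -v gitlab-data:/data -v "$PWD":/backup debian \
        tar czf /backup/gitlab-data.tgz -C /data .
    # copy the tarball and the compose/config files over, then on the new host:
    docker volume create gitlab-data
    docker run --rm -v gitlab-data:/data -v "$PWD":/backup debian \
        tar xzf /backup/gitlab-data.tgz -C /data
    docker compose up -d   # same image and config, new physical host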

How about for Erik H's stated goal of being able to partition and delegate maintenance of different services to different volunteers? In the case of VMs, each VM would literally have one volunteer in charge of it. Could we do something similar with a Docker deployment? In that case, I'd expect the volunteer in charge of each container to become familiar with (and practice doing) backup, migration, and restore of their particular container. And for coordination of the several containers running our several services, do you think we could use something simple such as docker-compose, or would we be better off resorting to a "real" orchestration setup such as kubernetes?
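
Just to make that concrete, I'm picturing nothing fancier than a compose file along these lines (images, ports, and volume names here are placeholders, not a worked-out config):

    # docker-compose.yml -- hypothetical sketch
    services:
      gitlab:
        image: gitlab/gitlab-ce:latest
        hostname: gitlab.common-lisp.net
        ports:
          - "443:443"
          - "2222:22"     # git-over-ssh, kept off the host's own port 22
        volumes:
          - gitlab-config:/etc/gitlab
          - gitlab-data:/var/opt/gitlab
        restart: unless-stopped
      nginx:
        image: nginx:stable
        ports:
          - "80:80"
        restart: unless-stopped
    volumes:
      gitlab-config:
      gitlab-data:

Each volunteer would then mostly be touching their own service stanza and its volumes.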

And given the services we are aiming to offer, do you think we are talking about standard off-the-shelf Docker images (e.g. for gitlab, nginx, etc.), or are we looking at authoring and maintaining our own custom images (i.e. our own Dockerfiles)?
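
If we do end up needing our own images, I'd hope they could stay as thin layers over the official ones -- e.g. a hypothetical nginx customization might be no more than:

    # Dockerfile -- hypothetical thin layer over an off-the-shelf image
    FROM nginx:stable
    COPY common-lisp.net.conf /etc/nginx/conf.d/default.conf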


> We might need VMs for cross-builds anyway, though,
> if we want to support that at some time.
> (Or at least qemu-static-<arch> - might be easier to handle)


I'm not sure what qemu-static-<arch> means.
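
My best guess is that it refers to qemu-user-static, i.e. user-mode emulation registered via binfmt_misc, which (if I've understood it) would let foreign-architecture containers run on the amd64 host without a full VM -- roughly:

    # sketch, assuming qemu-user-static / binfmt_misc is what's meant
    docker run --privileged --rm tonistiigi/binfmt --install arm64
    # afterwards an arm64 image runs on the amd64 host under emulation:
    docker run --rm --platform linux/arm64 arm64v8/debian uname -m   # prints aarch64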

What platforms would you have in mind to support for cross-builds? 

I assume each platform would add to CPU/RAM/disk requirements. Query whether to size for that eventuality now or later.
 


>>> Can we define some common documentation area (perhaps using cryptpad.fr
>>> or a similar independent service) where we can create a list of tasks,
>>> edit/complete/document/check there in parallel, and start the migration
>>> of services to the new VM in the background
>> That sounds like a good idea.
>> I suppose our Dropbox would not be sufficient, as it doesn't
>> gracefully handle simultaneous fine-grained updates to a document...?

> I'm imagining that the people might meet every now and then (perhaps
> half an hour 2x to 7x a week?), and having a document that can
> *simultaneously* be modified by several people with changes being
> visible at once is a big benefit.

Philipp, could you go ahead and set up one document for this migration effort, in a place you find handy? If cryptpad will do what we need for now, then let it be cryptpad. The cryptpad would be not so much for discussion (this mailing list can play that role for now) but more of a live to-do list with up-to-date status (I don't know if that's formatted like an org-mode list or what).

> Another open point for me is general architecture.
>
> I guess we want to go with a single server (no HA pair),
> which should be fine for our use case.
>
> - But what about backups?
>   Do we keep the old place, or do we restart them too?
>
> - Do we want/need external monitoring?
>   If we do, run it on the backup machine or keep that one as just
>   a storage box?



In addition to HA (High Availability) and external monitoring, I can imagine a few reasons we might want to maintain separate hosts:

1. To have a dedicated build host just for running gitlab-runner and its executors -- so that heavy pipeline jobs wouldn't bog down our whole server. That brings up the question of whether we still support shell logins into the base OS (as we do now) for users as well as administrators. And do we still enable the "shell" executor for gitlab-runner, if everything is supposed to be dockerized and we're trying to avoid running things on the bare base OS? (See the runner config sketch after this list.)

 2. Maybe for a dedicated gitlab host as well, because that program is so freaking heavy. 

 3. And we might want hosts of different architectures, even Mac & Windows.
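
(Re point 1: for what it's worth, my understanding is that steering all jobs through the Docker executor is mostly a matter of the runner's own config -- a sketch, with the URL and token as placeholders:)

    # /etc/gitlab-runner/config.toml -- sketch only
    [[runners]]
      name     = "docker-builder"
      url      = "https://gitlab.common-lisp.net/"
      token    = "REDACTED"
      executor = "docker"
      [runners.docker]
        image      = "debian:bookworm"   # default image for jobs that don't name one
        privileged = false               # leave off unless we need docker-in-docker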

On that topic, we've had an offer of a donation of an Intel NUC to use as a permanent host (specs unknown). I also know of at least one Mac Mini M1 which could be donated as a build host. The idea of collecting hardware sounds nice - but is there a viable way to co-locate, provision network resources for, and physically care for such hardware that may come into the Foundation's possession?


Thanks again,

Dave