Hi all, hope this is the right place for this kind of request.

I'm writing a program that will be doing machine learning over graph data structures. In it I'll need to represent a number of matrices, some sparse, some definitely not, and perform fairly basic operations on them (add/subtract/multiply, pseudoinverse, probably some others). I've started a basic prototype using standard CL arrays, but I always knew I would need GSLL to compute a pseudoinverse.

My question is: would it be a reasonably sensible decision to just use marrays for the whole program, rather than worrying about converting to/from CL arrays all the time? Are there any places where CL arrays beat marrays, whether in memory usage, access speed, or anything else? I may need to scale this system up to N*N square matrices where N is on the order of hundreds of thousands or even millions, if that makes any difference to the recommendation.

Thanks in advance for any help!

Malcolm Reynolds
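P.S. For reference, here is roughly the kind of plain-CL prototype code I mean: hand-rolled dense operations over standard `double-float` arrays. This is purely illustrative; none of the names below (`make-matrix`, `matrix-add`, `matrix-multiply`) are GSLL functions, and this is the sort of code I'd be either rewriting against marrays or wrapping with conversions.

```lisp
;; Minimal sketch of the current prototype style: plain double-float
;; CL arrays and naive dense operations.  All names here are my own
;; illustration, not part of GSLL or any other library.

(defun make-matrix (rows cols &optional (initial 0d0))
  "Allocate a ROWS x COLS matrix of double-floats."
  (make-array (list rows cols)
              :element-type 'double-float
              :initial-element initial))

(defun matrix-add (a b)
  "Element-wise sum of two equally sized matrices."
  (let* ((rows (array-dimension a 0))
         (cols (array-dimension a 1))
         (c (make-matrix rows cols)))
    (dotimes (i rows c)
      (dotimes (j cols)
        (setf (aref c i j) (+ (aref a i j) (aref b i j)))))))

(defun matrix-multiply (a b)
  "Naive O(n^3) dense product; fine for prototyping, clearly not for
N on the order of hundreds of thousands."
  (let* ((rows (array-dimension a 0))
         (inner (array-dimension a 1))
         (cols (array-dimension b 1))
         (c (make-matrix rows cols)))
    (dotimes (i rows c)
      (dotimes (j cols)
        (let ((sum 0d0))
          (dotimes (k inner)
            (incf sum (* (aref a i k) (aref b k j))))
          (setf (aref c i j) sum))))))
```

The pseudoinverse is the piece I can't sensibly do in plain CL, which is why GSLL came into the picture in the first place.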