The problems seen here are due to M being symmetric positive semi-definite. Such a matrix is numerically singular, which can have a number of unfortunate effects.
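A toy example of such a matrix (a placeholder, not the M from the original question) makes the singularity easy to see:

```matlab
% A symmetric positive SEMI-definite matrix is (numerically) singular:
B = randn(6, 4);
M = B*B';      % symmetric, size 6, but only rank 4
fprintf('rank = %d, rcond = %g\n', rank(M), rcond(M))
% rcond is (essentially) zero: any algorithm that needs to
% invert or factorize M is in trouble
```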
EIG - 'chol' vs 'qz' and positive semi-definite M
In general, the 'qz' algorithm is more robust than the 'chol' algorithm: it works even if K and/or M are singular.
However, for symmetric problems (K symmetric, M symmetric positive definite), EIG promises that the outputs will have real eigenvalues and M-orthogonal eigenvectors (V'*M*V = I). The 'qz' algorithm, which works for any non-symmetric system, does not make this guarantee.
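As a small illustration of that promise (K and M below are placeholder matrices, not the ones from the original problem):

```matlab
% Small symmetric test problem (placeholder matrices)
n = 5;
A = randn(n); K = A + A';            % symmetric K
B = randn(n); M = B*B' + n*eye(n);   % symmetric positive definite M

% With 'chol', EIG returns real eigenvalues and M-orthogonal eigenvectors
[V, D] = eig(K, M, 'chol');
disp(norm(V'*M*V - eye(n)))   % should be near machine precision
disp(isreal(diag(D)))         % real spectrum for the symmetric problem
```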
This is where the 'chol' option comes in: The Cholesky decomposition of M is computed, and this is used to rewrite the generalized eigenvalue problem as a simple, symmetric eigenvalue problem:
R = chol(full(M));                  % Cholesky factor: M = R'*R, R upper triangular
RKR = R'\K/R;                       % reduced standard problem: inv(R')*K*inv(R)
[Veig,Deig] = eig((RKR + RKR')/2);  % symmetrize to remove roundoff asymmetry
[ome_eig,~] = sort(sqrt(diag(Deig)),'ascend');  % angular frequencies
frq_eig_chol_by_hand = ome_eig/2/pi;            % frequencies in Hz
You'll see that this is quite similar to what EIG computes with the 'chol' option.
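For example, comparing directly against EIG's 'chol' option (this assumes the K, M, R, and frq_eig_chol_by_hand from the snippet above):

```matlab
% Same frequencies directly from EIG with the 'chol' option
[V, D] = eig(full(K), full(M), 'chol');
frq_eig_chol = sort(sqrt(diag(D)), 'ascend')/2/pi;
% frq_eig_chol should agree with frq_eig_chol_by_hand to roundoff.
% Eigenvectors of the reduced problem map back via V = R\Veig.
```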
In principle, CHOL errors unless M is symmetric positive definite. For a symmetric positive semi-definite M, however, it is essentially a coin flip (decided by rounding errors) whether CHOL treats it as a badly conditioned symmetric positive definite matrix, or rejects it as indefinite (in which case EIG falls back to 'qz').
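You can check which way CHOL decides for a given M using its two-output form, which returns a flag instead of erroring:

```matlab
% Two-output CHOL does not error; p == 0 means M was accepted as
% (numerically) positive definite, p > 0 means it was rejected.
[R, p] = chol(full(M));
if p == 0
    disp('M accepted as positive definite (EIG can use ''chol'')')
else
    fprintf('M rejected as indefinite at pivot %d (fallback to ''qz'')\n', p)
end
```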
In short, the problem is that the 'chol' option solves a linear system with R = chol(M), which introduces large errors when M is ill-conditioned.
EIGS - M-orthogonal Krylov subspaces and cut-off choice
Now in EIGS(K, M, k, 'sm'), the algorithm used doesn't require inverting M: it only involves backslash with K and matrix multiplication with M. Internally, an M-orthogonal basis of a Krylov subspace is constructed; K is then projected onto that basis, and EIG is applied to the resulting smaller matrix. This is safe to do for a symmetric positive semi-definite M.
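A heavily simplified sketch of that projection idea (the real EIGS uses a Krylov-Schur iteration with restarts; K, M, and k below are placeholders for illustration):

```matlab
% Sketch: M-orthogonal Krylov basis Q built with backslash on K and
% multiplies with M only, then EIG on the small projected matrix.
n = 50; k = 4;                            % placeholder problem size
A = randn(n); K = A + A' + 2*n*eye(n);    % symmetric positive definite K
B = randn(n, n-2); M = B*B';              % symmetric positive SEMI-definite M

m = 2*k;                                  % heuristic subspace dimension
Q = zeros(n, m);
w = randn(n, 1);
for j = 1:m
    w = K \ (M*w);                             % shift-invert step ('sm')
    w = w - Q(:,1:j-1)*(Q(:,1:j-1)'*(M*w));    % M-orthogonalize against Q
    w = w / sqrt(w'*M*w);                      % M-normalize
    Q(:,j) = w;
end
T = Q'*K*Q;                               % K projected onto the basis
[Vs, Ds] = eig((T + T')/2);               % EIG on the small problem
% the Ritz values diag(Ds) approximate eigenvalues of K*v = lambda*M*v;
% note M is never inverted or factorized anywhere above
```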
So why does it fail for k == 18, in a way so similar to how EIG failed? Because in this case EIGS simply calls EIG(full(K), full(M)).
The subspace mentioned above has a size that is heuristically set to 2*k, which is usually much smaller than the size of the whole matrix (that is the main purpose of EIGS: quickly computing a few eigenvalues of a large matrix). When 2*k >= the matrix size, computing EIG directly is quicker and in almost all cases more accurate than running the explicit algorithm.
You can avoid this issue at k == 18 by setting the 'SubspaceDimension' Name-Value pair to any value less than 36. Use the 'Display' Name-Value pair to see whether EIGS calls into EIG or starts a Krylov-Schur iteration.
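For example (keeping the K and M from the original problem):

```matlab
% Cap the subspace below 36 to force the Krylov-Schur path at k == 18,
% and turn on diagnostics to see which code path EIGS takes
d = eigs(K, M, 18, 'sm', 'SubspaceDimension', 30, 'Display', true);
```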