Whether 'largestabs' or 'smallestabs' is faster depends on the problem: 'smallestabs' needs to factorize the first input, which can make it slower than 'largestabs'. However, the speed of convergence of the iteration that follows depends on the spectrum of the matrix being iterated with, and sometimes that spectrum is more favorable in the 'smallestabs' case (where we iterate with the inverse of the original matrix) than if the matrix is just shifted. Roughly speaking, this comes down to how large the ratios between the eigenvalues to be computed are: taking the inverse can spread the wanted eigenvalues apart relative to the rest of the spectrum, which speeds up convergence.
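To make the tradeoff concrete, here's an illustrative sketch with placeholder matrices A and B (my names, not from your problem; assumed sparse and symmetric, B positive definite):

k = 6;
d_small = eigs(A, B, k, 'smallestabs');  % factorizes A up front (shift-invert
                                         % about 0), then iterates with inv(A)*B
d_large = eigs(A, B, k, 'largestabs');   % skips factorizing A and iterates on
                                         % the pencil directly (B is typically
                                         % still Cholesky-factorized when s.p.d.)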
It's usually a good idea to keep a generalized eigenvalue problem in generalized form: reducing it to a standard problem (computing something like S\L) results in a nonsymmetric matrix, which rules out the symmetric solvers, which are usually faster and more accurate.
The problem here is that D + s*s' is a dense matrix, so forming it explicitly will make every operation involving it much slower.
I'd first try solving the reverse eigenproblem instead, (D + s*s')*x = mu*L*x with lambda = 1/mu, so that you now want to compute the second-largest eigenvalue in magnitude:
eigs(D+s*s', L, 2, "largestabs")
Here you can then use a function handle that applies S (i.e., D + s*s') to a vector, instead of forming the matrix explicitly.
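For instance, a minimal sketch (assuming D is stored as a sparse or diagonal matrix and s is a column vector; Sfun is just a name I'm making up here):

n = size(L, 1);
Sfun = @(x) D*x + s*(s'*x);              % applies D + s*s' to a vector without
                                         % forming the dense matrix
mu = eigs(Sfun, n, L, 2, 'largestabs');
lambda = 1./mu;                          % back to the eigenvalues of the
                                         % original problem

The grouping s*(s'*x) is the important part: s'*x is computed first as a scalar, so the rank-one term never materializes as an n-by-n matrix and each application costs O(n) on top of the D*x product.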
Note that I'm assuming here that L is symmetric positive definite. The approach will still work otherwise, but I'm not sure how well convergence will turn out in that case - generalized eigenvalue solvers are usually optimized for the case where the right-hand-side matrix is s.p.d., since that is often the mass matrix.
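If you're unsure whether L qualifies, the usual cheap test is an attempted Cholesky factorization:

assert(issymmetric(L));
[~, p] = chol(L);                        % p == 0 exactly when the
assert(p == 0);                          % factorization succeeded, i.e.
                                         % L is positive definite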