Full matrices are faster here, because your matrices are simply not sparse enough: you gain nothing from storing them as sparse.
na = 1000;
nb = 500;
B = sprandn(na, nb, 0.1);
C = sprandn(na, nb, 0.1);
So 10% non-zero? Are you kidding us? Those matrices are not even usably sparse. Just use full matrices. When you multiply them, fill-in will create matrices that have essentially no zeros anyway. So you would be storing the matrices as sparse, but they would really be full matrices. No gain from being sparse, yet all the drawbacks of sparse storage.
Drawbacks? Yes. Drawbacks. If you store a matrix with essentially no zeros in it as sparse, it will take longer to work with, and it will use MORE memory.
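To see the memory penalty, compare a nearly full matrix stored both ways (a rough sketch; exact byte counts vary by MATLAB version and platform):

```matlab
% A matrix that is roughly 90% non-zero, stored sparse and full.
S = sprandn(1000, 500, 0.9);   % sparse storage: value + row index per nonzero
F = full(S);                   % full storage: 8 bytes per element, period
whos S F
% A sparse double costs about 8 bytes for the value plus 8 bytes for the
% row index of each nonzero (plus column pointers), so once a matrix is
% more than about half full, the sparse copy is the LARGER of the two.
```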
Anyway, next, let's look at what you are trying to optimize. First, DON'T use tic and toc. Use timeit. No loop needed.
You want to optimize the product (a.*B)'*C.
Again, the way to test whatever you are doing is to use timeit.
timeit runs the code in a loop internally, and it deals with all the things it needs to do to give the best estimate of the time required. For example, the first few times you call any function take just a wee bit longer. (Sometimes called warmup, because the function needs to get into the cache.)
BB = full(B);
CC = full(C);
So it is faster to just work with full matrices. B and C are not very large, nor, indeed, very sparse. Sparse is just a mirage here.
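Here is a sketch of that comparison using timeit (a is assumed to be the rand(na,1) vector from the original code; timings will vary by machine):

```matlab
na = 1000;
nb = 500;
B = sprandn(na, nb, 0.1);
C = sprandn(na, nb, 0.1);
a = rand(na, 1);            % assumed: the vector from the question
BB = full(B);
CC = full(C);

tsparse = timeit(@() (a.*B)'*C);    % sparse operands
tfull   = timeit(@() (a.*BB)'*CC);  % full operands
fprintf('sparse: %g s, full: %g s\n', tsparse, tfull)
```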
Next, is there a more efficient way to compute (a.*B)'*C? a is a vector, so it is implicitly expanded across the columns of B; that is, row i of B is scaled by a(i). However, you can actually gain a little if you create a as a sparse diagonal matrix instead.
aa = spdiags(rand(na,1),0,na,na);
But, the time goes back up if you create B and C as sparse.
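Sketching that comparison (again, timings will vary by machine, and note that aa*B is mathematically the same row scaling as a.*B):

```matlab
na = 1000;  nb = 500;
B = sprandn(na, nb, 0.1);
C = sprandn(na, nb, 0.1);
BB = full(B);
CC = full(C);
aa = spdiags(rand(na,1), 0, na, na);  % a as a sparse diagonal matrix

tbest  = timeit(@() (aa*BB)'*CC);  % sparse diagonal times FULL matrices
tworse = timeit(@() (aa*B)'*C);    % same product, but B and C kept sparse
fprintf('full B,C: %g s, sparse B,C: %g s\n', tbest, tworse)
```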
The very funny thing is, if you are purely interested in speed here, the matrices you created as sparse should really have been full, and the only thing you wanted to be sparse was a vector, which SHOULD have been a sparse diagonal matrix.