# How to make symbolic computations fast?


Hello, I'm working on some calculations that require high precision at very large or very small values.

As far as I have searched, the only way is symbolic computation with "vpa". This approach is more precise, supporting up to 2^29 significant digits, but it is really slow. For example, consider estimating the modified Bessel function of the first kind.

```matlab
% use double precision data (not precise but fast)
tic
besseli(1,100)
toc
```

```
ans = 1.068369390338163e+42
Elapsed time is 0.000295 seconds.
```

```matlab
% use variable precision arithmetic data (precise but not fast)
tic
vpa(besseli(1,sym(100)),42)
toc
```

```
ans = 1068369390338162481206145763224295265446122.0
Elapsed time is 0.701682 seconds.
```

But I was wondering if there is some way to store data and use an interpolation function to speed this up. My questions are:

- Is there any way to speed up symbolic computation?
- Is there a memory-based method that reduces CPU consumption, such as look-up tables with interpolation of pre-stored data, for symbolic numbers?
- If that approach is somehow inefficient, what other method would be best?
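As a rough sketch of the "pre-stored data" idea (my own illustration, not an existing MATLAB feature): exact symbolic results can be memoized in a containers.Map, so each distinct input pays the vpa cost only once. Note that interpolating *between* cached values would throw away the extra digits, so a cache like this only helps when the same inputs recur.

```matlab
% Sketch (assumption): memoize vpa results keyed by the exact input.
cache = containers.Map('KeyType','char','ValueType','any');

x = sym(100);
key = char(x);                        % canonical string form of the input
if isKey(cache, key)
    y = cache(key);                   % fast path: reuse the stored result
else
    y = vpa(besseli(1, x), 42);       % slow path: one-time symbolic evaluation
    cache(key) = y;
end
```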

##### 4 Comments

Steven Lord
on 18 Aug 2020

### Answers (2)

John D'Errico
on 16 Aug 2020

Edited: John D'Errico
on 16 Aug 2020

No. Your conclusion that VPA is the only way is simply wrong. You can use my HPF toolbox instead; it is available for free download from the File Exchange. However, I'm sorry, but interpolation makes virtually no sense in any such context.

Using both tools to compute 1000 digits of e = exp(1) from a simple series approximation, I might do this:

```matlab
DefaultNumberOfDigits 1000
tic
eappr = hpf(2.5); % Be careful. I know 2.5 is exactly representable as a double.
den = hpf(2);
for k = 1:150
    den = den*(3*k)*(3*k+1)*(3*k+2);
    eappr = eappr + hpf((3*k + 2)^2 + 1)/den;
end
toc
exp(hpf(1)) - eappr
ans.Exponent
```

```
Elapsed time is 0.693297 seconds.
ans =
    6.e-1009
```

Now for vpa...

```matlab
digits 1000
tic
eappr = vpa(2.5);
den = vpa(2);
for k = 1:270
    den = den*(3*k)*(3*k+1)*(3*k+2);
    eappr = eappr + sym((3*k + 2)^2 + 1)/den;
end
toc
exp(vpa(1)) - eappr
```

```
Elapsed time is 0.516161 seconds.
ans =
    0.0
```

150 terms of that series are sufficient to compute 1000 digits of exp(1).

In both cases, the results are correct to the full precision required. VPA beat HPF on time by a little, but not by a lot.

I will add that I once used HPF to compute a million digits of pi.

I would also add that, much of the time, people are incorrect in thinking they need to use high precision arithmetic to do computations. It is often the case that they could have used logarithms to do essentially everything they really needed. But that is not always the case.
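For the Bessel example in the question, the log trick can stay entirely in double precision, because besseli accepts a third argument that returns an exponentially scaled result (besseli(nu,z,1) equals besseli(nu,z).*exp(-abs(real(z)))):

```matlab
% Work with log(besseli(1,100)) in plain double precision.
% besseli(nu,z,1) returns besseli(nu,z).*exp(-abs(real(z))), so no overflow
% occurs even for very large z.
z = 100;
logI = log(besseli(1, z, 1)) + z;   % natural log of besseli(1,100)
exp(logI)                           % ≈ 1.0684e+42, matching the direct call
```

Whether this is enough depends on how many digits the downstream computation actually needs.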

##### 14 Comments

Walter Roberson
on 19 Aug 2020

I wonder if the techniques in this paper would help:

Bjorn Gustavsson
on 20 Aug 2020

Pavel Holoborodko
on 29 Aug 2020

Edited: Pavel Holoborodko
on 29 Aug 2020

I am the developer of the Advanpix toolbox. You mentioned it in the comments, which is why I decided to respond.

There are several misconceptions regarding the computation of special functions (especially difficult ones, like Bessel or hypergeometric functions and the like).

(a) The first is that such functions are considered easy to compute. You frequently see suggestions like: just use series, Padé, Chebyshev, or some other approximation.

(b) The second is that well-established software (NAG, MATLAB, Python, Julia, etc.) provides high-quality implementations of such functions.

Both are false. We have been working in this area for quite some time and have published some of our findings, e.g. see [1,2]. Even double-precision software fails miserably to deliver proper accuracy.

When it comes to extended precision, the situation gets even more difficult. No fixed-degree approximation can be used; accuracy must be tracked on the fly during the computation, cancellation effects must be detected, overflow/underflow must be handled properly, and so on. Combine this with different kinds of arguments and orders (complex, purely imaginary), principal values, and branch cuts, and this can easily become the subject of solid research for several years.
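A tiny illustration (mine, not from the post) of why cancellation detection matters: summing the alternating Taylor series for exp(-30) term by term in double precision loses essentially all accuracy, even though every individual term is computed to about 16 digits:

```matlab
% Catastrophic cancellation: naive series for exp(-30) in double precision.
x = -30; s = 1; t = 1;
for k = 1:200
    t = t * x / k;      % k-th term x^k/k!, peaks near 1e12 in magnitude
    s = s + t;
end
s          % wrong: absolute rounding error ~1e12*eps swamps the true value
exp(-30)   % true value, about 9.36e-14
```

An extended-precision implementation has to detect such regimes and switch algorithms or raise the working precision automatically.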

Luckily you don't have to do all this, as we already did it for our toolbox. There is a reason why professionally developed software costs money: it simply saves you a ton of time by providing ready-to-use, high-quality functionality with guaranteed accuracy.

Another advantage of our toolbox over "free" software is performance. We are several orders of magnitude faster than VPA, HPF, MP, and even Maple, Mathematica, Julia, and Python [3]. Multi-threading is natively built in and used in all operations.

So, if you consider quality and performance, it becomes clear that there is no such thing as "free" software. Seemingly "free" software costs you a lot of time through longer execution times, painful setup, etc. But most importantly, your research might fail just because some "free" library provided sub-optimal accuracy in its computations. That is a pretty high cost.

Note 1. We didn't run a speed comparison with HPF, but the result is clear from the author's own claim: "As I showed, my own HPF is at least similar in speed to the use of VPA. There will possibly be some cases where one may be faster, some slower. At least they are of the same order of magnitude, of that I am happy." VPA is the slowest extended-precision software out there; we made a comprehensive comparison with it [2].

Note 2. Using your example, this is how the Advanpix toolbox computes it with 100 digits of accuracy on my computer:

```matlab
>> mp.Digits(100);
>> tic;besseli(1,mp(100));toc;
Elapsed time is 0.000923 seconds.
```

Links.

[3] (Japanese) https://qiita.com/hanaata/items/9a267bbf9f0547c807e7

##### 0 Comments
