Function: intnum
Section: sums
C-Name: intnum0
Prototype: V=GGEDGp
Help: intnum(X=a,b,expr,{tab}): numerical integration of expr from a to b with
 respect to X. Plus/minus infinity is coded as [+1]/[-1]. Finally, tab is
 either omitted (let the program choose the integration step), a positive
 integer m (choose integration step 1/2^m), or data precomputed with intnuminit.
Wrapper: (,,G)
Description:
  (gen,gen,gen,?gen):gen:prec intnum(${3 cookie}, ${3 wrapper}, $1, $2, $4, prec)
Doc: numerical integration
 of \var{expr} on $]a,b[$ with respect to $X$. The integrand may have values
 belonging to a vector space over the real numbers; in particular, it can be
 complex-valued or vector-valued. But it is assumed that the function is regular
 on $]a,b[$. If the endpoints $a$ and $b$ are finite and the function is regular
 there, the situation is simple:
 \bprog
 ? intnum(x = 0,1, x^2)
 %1 = 0.3333333333333333333333333333
 ? intnum(x = 0,Pi/2, [cos(x), sin(x)])
 %2 = [1.000000000000000000000000000, 1.000000000000000000000000000]
 @eprog\noindent
 An endpoint equal to $\pm\infty$ is coded as the single-component vector
 $[\pm1]$. You may set, e.g.\ \kbd{oo = [1]} or \kbd{INFINITY = [1]}; then
 \kbd{+oo}, \kbd{-oo}, \kbd{-INFINITY}, etc.\ behave as expected.
 \bprog
 ? oo = [1];  \\@com for clarity
 ? intnum(x = 1,+oo, 1/x^2)
 %2 = 1.000000000000000000000000000
 @eprog\noindent
 In basic usage, it is assumed that the function does not decrease
 exponentially fast at infinity:
 \bprog
 ? intnum(x=0,+oo, exp(-x))
   ***   at top-level: intnum(x=0,+oo,exp(-
   ***                 ^--------------------
   *** exp: exponent (expo) overflow
 @eprog\noindent
 We shall see in a moment how to avoid the last problem, after describing
 the last argument \var{tab}, which is both optional and technical. The
 routine evaluates the integrand at many sampling points, with weights that
 are mostly independent of the function being integrated. If \var{tab} is

 \item a positive integer $m$, we use $2^m$ sampling points, hopefully
 increasing accuracy. But note that the running time is roughly proportional
 to $2^m$. One may try consecutive values of $m$ until they give the same
 value up to an accepted error. If \var{tab} is omitted, the algorithm guesses
 a reasonable value for $m$ depending on the current precision only, which
 should be sufficient for regular functions. That value may be obtained from
 \tet{intnumstep}, and increased in case of difficulties.

 \item a set of integration tables as output by \tet{intnuminit},
 they are used directly. This is useful if several integrations of the same
 type are performed (on the same kind of interval and functions, for a given
 accuracy), in particular for multivariate integrals, since we then skip
 expensive precomputations.
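
 For instance (a sketch; outputs omitted), a single table may be reused for
 several integrals over the same interval:
 \bprog
 ? tab = intnuminit(-1, 1);        \\@com precompute once
 ? intnum(x = -1, 1, x^2, tab);    \\@com ...then reuse the table
 ? intnum(x = -1, 1, cos(x), tab)
 @eprog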

 \misctitle{Specifying the behavior at endpoints}
 This is done as follows. An endpoint $a$ is either given as such (a scalar,
 real or complex, or $[\pm1]$ for $\pm\infty$), or as a two component vector
 $[a,\alpha]$, to indicate the behavior of the integrand in a neighborhood
 of $a$.

 If $a$ is finite, the code $[a,\alpha]$ means the function has a
 singularity of the form $(x-a)^{\alpha}$, up to logarithms. (If $\alpha \ge
 0$, we only assume the function is regular, which is the default assumption.)
 If a wrong singularity exponent is used, the result will lose a catastrophic
 number of decimals:
 \bprog
 ? intnum(x=0, 1, x^(-1/2))         \\@com assume $x^{-1/2}$ is regular at 0
 %1 = 1.999999999999999999990291881
 ? intnum(x=[0,-1/2], 1, x^(-1/2))  \\@com no, it's not
 %2 = 2.000000000000000000000000000
 ? intnum(x=[0,-1/10], 1, x^(-1/2))
 %3 = 1.999999999999999999999946438 \\@com using a wrong exponent is bad
 @eprog

 If $a$ is $\pm\infty$, which is coded as $[\pm 1]$, the situation is more
 complicated, and $[[\pm1],\alpha]$ means:

 \item $\alpha=0$ (or no $\alpha$ at all, i.e.\ simply $[\pm1]$) assumes that
 the integrand tends to zero, but not exponentially fast, and does not
 oscillate as, e.g., $\sin(x)/x$ does.

 \item $\alpha>0$ assumes that the function tends to zero exponentially fast
 approximately as $\exp(-\alpha x)$. This includes oscillating but quickly
 decreasing functions such as $\exp(-x)\sin(x)$.
 \bprog
 ? oo = [1];
 ? intnum(x=0, +oo, exp(-2*x))
   ***   at top-level: intnum(x=0,+oo,exp(-
   ***                 ^--------------------
   *** exp: exponent (expo) overflow
 ? intnum(x=0, [+oo, 2], exp(-2*x))
 %1 = 0.5000000000000000000000000000 \\@com OK!
 ? intnum(x=0, [+oo, 4], exp(-2*x))
 %2 = 0.4999999999999999999961990984 \\@com wrong exponent $\Rightarrow$ imprecise result
 ? intnum(x=0, [+oo, 20], exp(-2*x))
 %3 = 0.4999524997739071283804510227 \\@com disaster
 @eprog

 \item $\alpha<-1$ assumes that the function tends to $0$ slowly, like
 $x^{\alpha}$. Here it is essential to give the correct $\alpha$, if possible,
 but on the other hand $\alpha\le -2$ is equivalent to $\alpha=0$, in other
 words to no $\alpha$ at all.
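
 For instance, $\int_1^\infty x^{-3/2}\,dx = 2$; a sketch (with \kbd{oo = [1]}
 as above) declaring the $x^{-3/2}$ behavior at infinity:
 \bprog
 ? oo = [1];
 ? intnum(x = 1, [oo, -3/2], x^(-3/2))  \\@com expect a result close to $2$
 @eprog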

 \smallskip The last two codes are reserved for oscillating functions.
 Let $k > 0$ be real and let $g(x)$ be a nonoscillating function tending
 slowly to $0$ (e.g.\ like a negative power of $x$); then

 \item $\alpha=k * I$ assumes that the function behaves like $\cos(kx)g(x)$.

 \item $\alpha=-k* I$ assumes that the function behaves like $\sin(kx)g(x)$.

 \noindent Here it is critical to give the exact value of $k$. If the
 oscillating part is not a pure sine or cosine, one must expand it into a
 Fourier series, use the above codings, and sum the resulting contributions;
 otherwise you will get nonsense. Note that $\cos(kx)$, and similarly
 $\sin(kx)$, means that exact function, not a translated version such as
 $\cos(kx+a)$.
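
 As an illustration (a sketch), take $k = 1$ and $g(x) = 1/(1+x^2)$, which
 tends slowly to $0$; the classical value $\int_0^\infty \cos(x)/(1+x^2)\,dx
 = \pi/(2e)$ provides a check:
 \bprog
 ? oo = [1];
 ? intnum(x = 0, [oo, I], cos(x)/(1+x^2)) - Pi/(2*exp(1))
 \\@com should be negligible
 @eprog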

 \misctitle{Note} If $f(x)=\cos(kx)g(x)$ where $g(x)$ tends to zero
 exponentially fast as $\exp(-\alpha x)$, it is up to the user to choose
 between $[[\pm1],\alpha]$ and $[[\pm1],k* I]$, but a good rule of thumb is that
 if the oscillations are much weaker than the exponential decrease, choose
 $[[\pm1],\alpha]$, otherwise choose $[[\pm1],k* I]$, although the latter can
 reasonably be used in all cases, while the former cannot. To take a specific
 example, in the inverse Mellin transform, the integrand is almost always a
 product of an exponentially decreasing and an oscillating factor. If we
 choose the oscillating type of integral we perhaps obtain the best results,
 at the expense of having to recompute our functions for a different value of
 the variable $z$ giving the transform, preventing us from using a function such
 as \kbd{intmellininvshort}. On the other hand, using the exponential type of
 integral, we obtain less accurate results, but we skip expensive
 recomputations. See \kbd{intmellininvshort} and \kbd{intfuncinit} for more
 explanations.

 \smallskip

 We shall now see many examples to get a feeling for what the various
 parameters achieve. All examples below assume precision is set to $105$
 decimal digits. We first type
 \bprog
 ? \p 105
 ? oo = [1]  \\@com for clarity
 @eprog

 \misctitle{Apparent singularities} Even if the function $f(x)$ represented
 by \var{expr} has no singularities, it may be important to define the
 function differently near special points. For instance, if $f(x) = 1
 /(\exp(x)-1) - \exp(-x)/x$, then $\int_0^\infty f(x)\,dx=\gamma$, Euler's
 constant \kbd{Euler}. But

 \bprog
 ? f(x) = 1/(exp(x)-1) - exp(-x)/x
 ? intnum(x = 0, [oo,1],  f(x)) - Euler
 %1 = 6.00... E-67
 @eprog\noindent
 thus only correct to $67$ decimal digits. This is because close to $0$ the
 function $f$ is computed with an enormous loss of accuracy.
 A better solution is

 \bprog
 ? f(x) = 1/(exp(x)-1)-exp(-x)/x
 ? F = truncate( f(t + O(t^7)) ); \\@com expansion around t = 0
 ? g(x) = if (x > 1e-18, f(x), subst(F,t,x))  \\@com note that $6 \cdot 18 > 105$
 ? intnum(x = 0, [oo,1],  g(x)) - Euler
 %2 = 0.E-106 \\@com perfect
 @eprog\noindent
 It is up to the user to determine constants such as the $10^{-18}$ and $7$
 used above.

 \misctitle{True singularities} With true singularities the result is worse.
 For instance

 \bprog
 ? intnum(x = 0, 1,  1/sqrt(x)) - 2
 %1 = -1.92... E-59 \\@com only $59$ correct decimals

 ? intnum(x = [0,-1/2], 1,  1/sqrt(x)) - 2
 %2 = 0.E-105 \\@com better
 @eprog

 \misctitle{Oscillating functions}

 \bprog
 ? intnum(x = 0, oo, sin(x) / x) - Pi/2
 %1 = 20.78.. \\@com nonsense
 ? intnum(x = 0, [oo,1], sin(x)/x) - Pi/2
 %2 = 0.004.. \\@com bad
 ? intnum(x = 0, [oo,-I], sin(x)/x) - Pi/2
 %3 = 0.E-105 \\@com perfect
 ? intnum(x = 0, [oo,-I], sin(2*x)/x) - Pi/2  \\@com oops, wrong $k$
 %4 = 0.07...
 ? intnum(x = 0, [oo,-2*I], sin(2*x)/x) - Pi/2
 %5 = 0.E-105 \\@com perfect

 ? intnum(x = 0, [oo,-I], sin(x)^3/x) - Pi/4
 %6 = 0.0092... \\@com bad
 ? sin(x)^3 - (3*sin(x)-sin(3*x))/4
 %7 = O(x^17)
 @eprog\noindent
 We may use the above linearization and compute two oscillating integrals with
 ``infinite endpoints'' \kbd{[oo, -I]} and \kbd{[oo, -3*I]} respectively, or
 notice the obvious change of variable, and reduce to the single integral
 ${1\over 2}\int_0^\infty \sin(x)/x\,dx$. We finish with some more complicated
 examples:

 \bprog
 ? intnum(x = 0, [oo,-I], (1-cos(x))/x^2) - Pi/2
 %1 = -0.0004... \\@com bad
 ? intnum(x = 0, 1, (1-cos(x))/x^2) \
 + intnum(x = 1, oo, 1/x^2) - intnum(x = 1, [oo,I], cos(x)/x^2) - Pi/2
 %2 = -2.18... E-106 \\@com OK

 ? intnum(x = 0, [oo, 1], sin(x)^3*exp(-x)) - 0.3
 %3 = 5.45... E-107 \\@com OK
 ? intnum(x = 0, [oo,-I], sin(x)^3*exp(-x)) - 0.3
 %4 = -1.33... E-89 \\@com lost 16 decimals. Try higher $m$:
 ? m = intnumstep()
 %5 = 7 \\@com the value of $m$ actually used above.
 ? tab = intnuminit(0,[oo,-I], m+1); \\@com try $m$ one higher.
 ? intnum(x = 0, oo, sin(x)^3*exp(-x), tab) - 0.3
 %6 = 5.45... E-107 \\@com OK this time.
 @eprog
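
 Concretely (a sketch), the linearization $\sin(x)^3 = (3\sin(x) -
 \sin(3x))/4$ mentioned above leads to two oscillating integrals:
 \bprog
 ? A = intnum(x = 0, [oo, -I], 3*sin(x)/(4*x));
 ? B = intnum(x = 0, [oo, -3*I], sin(3*x)/(4*x));
 ? A - B - Pi/4  \\@com should be negligible
 @eprog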

 \misctitle{Warning} Like \tet{sumalt}, \kbd{intnum} often assigns a
 reasonable value to diverging integrals. Use these values at your own risk!
 For example:

 \bprog
 ? intnum(x = 0, [oo, -I], x^2*sin(x))
 %1 = -2.0000000000...
 @eprog\noindent
 Note the formula
 $$ \int_0^\infty \sin(x)/x^s\,dx = \cos(\pi s/2) \Gamma(1-s)\;, $$
 a priori valid only for $0 < \Re(s) < 2$, but the right hand side provides an
 analytic continuation which may be evaluated at $s = -2$\dots
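
 One may check the right hand side directly at $s = -2$, where
 $\cos(\pi s/2)\,\Gamma(1-s) = \cos(-\pi)\,\Gamma(3) = -2$:
 \bprog
 ? s = -2; cos(Pi*s/2) * gamma(1-s)  \\@com indeed $-2$
 @eprog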

 \misctitle{Multivariate integration}
 Using successive univariate integration with respect to different formal
 parameters, it is immediate to do naive multivariate integration. But it is
 important to use a suitable \kbd{intnuminit} to precompute data for the
 \emph{internal} integrations at least!

 For example, to compute the double integral on the unit disc $x^2+y^2\le1$
 of the function $x^2+y^2$, we can write
 \bprog
 ? tab = intnuminit(-1,1);
 ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab)
 @eprog\noindent
 The first \var{tab} is essential, the second optional. Compare:

 \bprog
 ? tab = intnuminit(-1,1);
 time = 30 ms.
 ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2));
 time = 54,410 ms. \\@com slow
 ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab);
 time = 7,210 ms.  \\@com faster
 @eprog\noindent
 However, \kbd{intnuminit} is usually pessimistic when choosing the
 integration step $2^{-m}$. It is often possible to improve the speed by
 trial and error. Continuing the above example:
 \bprog
 ? test(M) =
 {
 tab = intnuminit(-1,1, M);
 intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2,tab), tab) - Pi/2
 }
 ? m = intnumstep() \\@com what value of $m$ did it take?
 %1 = 7
 ? test(m - 1)
 time = 1,790 ms.
 %2 = -2.05... E-104 \\@com $4 = 2^2$ times faster and still OK.
 ? test(m - 2)
 time = 430 ms.
 %3 = -1.11... E-104 \\@com $16 = 2^4$ times faster and still OK.
 ? test(m - 3)
 time = 120 ms.
 %4 = -7.23... E-60 \\@com $64 = 2^6$ times faster, lost $45$ decimals.
 @eprog

 \synt{intnum}{void *E, GEN (*eval)(void*,GEN), GEN a,GEN b,GEN tab, long prec},
 where an omitted \var{tab} is coded as \kbd{NULL}.
