
python - Integrating a multidimensional integral in scipy

Motivation: I have a multidimensional integral, which for completeness I have reproduced below. It comes from the computation of the second virial coefficient when there is significant anisotropy:

(The integral appeared here as an image; reconstructed from the code in the answer below, up to overall constant factors, it is)

\[ B \propto \int_0^\infty \! dr \int_0^\pi \! d\phi \int_0^{2\pi} \! d\theta \int_0^{2\pi} \! d\alpha \int_0^{2\pi} \! d\beta \int_0^{2\pi} \! d\gamma \, \left[ e^{-W(r,\theta,\phi,\alpha,\beta,\gamma)/kT} - 1 \right] r^2 \sin\theta \sin\beta \]

Here W is a function of all of the variables. It is a known function, one I can define a Python function for.

Programming Question: How do I get scipy to integrate this expression? I was thinking of chaining two triple quads (scipy.integrate.tplquad) together, but I'm worried about performance and accuracy. Is there a higher-dimensional integrator in scipy, one that can handle an arbitrary number of nested integrals? If not, what is the best way to do this?



1 Reply


With a higher-dimensional integral like this, Monte Carlo methods are often a useful technique: they converge on the answer as the inverse square root of the number of function evaluations, which in higher dimensions is better than you'll generally get out of even fairly sophisticated adaptive methods (unless you know something very specific about your integrand, e.g. symmetries that can be exploited).
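To put rough numbers on that (a standard textbook comparison, not specific to this integrand): Monte Carlo's error falls like $\sigma_f/\sqrt{N}$ regardless of dimension, while a tensor-product rule with $N$ total points in $d$ dimensions has spacing $h \sim N^{-1/d}$, so e.g. the trapezoidal rule's $O(h^2)$ error becomes $O(N^{-2/d})$. For the $d = 6$ integral here that is $N^{-1/3}$, already losing to Monte Carlo's $N^{-1/2}$.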

The mcint package performs a Monte Carlo integration. Below I run it with a non-trivial W that is nonetheless analytically integrable, so we know the answer we should be getting. (Note that I've truncated r to be from [0,1); you'll have to do some sort of log transform or other change of variables to get that semi-unbounded domain into something tractable for most numerical integrators; a sketch of one such transform follows the output below.)

import mcint
import random
import math

# A deliberately non-trivial W that is still analytically integrable,
# standing in for the real (physical) W
def w(r, theta, phi, alpha, beta, gamma):
    return -math.log(theta * beta)

def integrand(x):
    r     = x[0]
    theta = x[1]
    alpha = x[2]
    beta  = x[3]
    gamma = x[4]
    phi   = x[5]

    k = 1.
    T = 1.
    ww = w(r, theta, phi, alpha, beta, gamma)
    # Boltzmann factor minus one, times the r^2 sin(beta) sin(theta) volume factors
    return (math.exp(-ww/(k*T)) - 1.)*r*r*math.sin(beta)*math.sin(theta)

# Yields uniform samples over the (r-truncated) integration domain
def sampler():
    while True:
        r     = random.uniform(0.,1.)
        theta = random.uniform(0.,2.*math.pi)
        alpha = random.uniform(0.,2.*math.pi)
        beta  = random.uniform(0.,2.*math.pi)
        gamma = random.uniform(0.,2.*math.pi)
        phi   = random.uniform(0.,math.pi)
        yield (r, theta, alpha, beta, gamma, phi)


# Volume of the sampled domain: four angles over [0, 2 pi), phi over [0, pi), r over [0, 1)
domainsize = math.pow(2*math.pi,4)*math.pi*1
# With W = -log(theta*beta), exp(-W/(kT)) = theta*beta and everything factorizes:
# the r, phi, alpha, gamma integrals give 1/3, pi, 2 pi, 2 pi; the theta and beta
# integrals each give int_0^{2 pi} x sin(x) dx = -2 pi (the "-1" term vanishes),
# so the exact answer is (1/3)*pi*(2 pi)^2*(2 pi)^2 = 16 pi^5/3
expected = 16*math.pow(math.pi,5)/3.

for nmc in [1000, 10000, 100000, 1000000, 10000000, 100000000]:
    random.seed(1)   # reseed each time so the runs are reproducible
    result, error = mcint.integrate(integrand, sampler(), measure=domainsize, n=nmc)
    diff = abs(result - expected)

    print("Using n = ", nmc)
    print("Result = ", result, "estimated error = ", error)
    print("Known result = ", expected, " error = ", diff, " = ", 100.*diff/expected, "%")
    print(" ")

Running gives

Using n =  1000
Result =  1654.19633236 estimated error =  399.360391622
Known result =  1632.10498552  error =  22.0913468345  =  1.35354937522 %

Using n =  10000
Result =  1634.88583778 estimated error =  128.824988953
Known result =  1632.10498552  error =  2.78085225405  =  0.170384397984 %

Using n =  100000
Result =  1646.72936 estimated error =  41.3384733174
Known result =  1632.10498552  error =  14.6243744747  =  0.8960437352 %

Using n =  1000000
Result =  1640.67189792 estimated error =  13.0282663003
Known result =  1632.10498552  error =  8.56691239895  =  0.524899591322 %

Using n =  10000000
Result =  1635.52135088 estimated error =  4.12131562436
Known result =  1632.10498552  error =  3.41636536248  =  0.209322647304 %

Using n =  100000000
Result =  1631.5982799 estimated error =  1.30214644297
Known result =  1632.10498552  error =  0.506705620147  =  0.0310461413109 %
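
Note how the estimated error falls by roughly a factor of sqrt(10) for each tenfold increase in n, as advertised. As for the r truncation above: here is a minimal sketch of the usual change of variables, on a toy radial integrand rather than the real W. The substitution u = exp(-r), i.e. r = -log(u), maps r in [0, inf) onto u in (0, 1], with dr = du/u:

import math
import random

# Toy radial integrand: f(r) = r^2 exp(-r); its integral over [0, inf) is exactly 2
def f(r):
    return r*r*math.exp(-r)

# After substituting u = exp(-r):
#   int_0^inf f(r) dr = int_0^1 f(-log(u))/u du,
# a finite-domain integral that a uniform sampler can handle
def g(u):
    return f(-math.log(u))/u

random.seed(1)
n = 100000
# Plain Monte Carlo estimate over u in (0, 1); the tiny lower cutoff avoids log(0)
estimate = sum(g(random.uniform(1e-12, 1.)) for _ in range(n))/n
print(estimate)   # should come out close to 2.0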

You could greatly speed this up by vectorizing the random number generation and the integrand evaluation, as sketched below.
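
A minimal numpy-vectorized version of the same estimate (this bypasses mcint entirely and computes the sample mean and its standard error directly; numpy's default_rng is my choice here, any vectorized RNG will do):

import numpy as np

rng = np.random.default_rng(1)
n = 1000000

# Draw all the samples at once instead of one tuple at a time
r     = rng.uniform(0., 1., n)
theta = rng.uniform(0., 2.*np.pi, n)
beta  = rng.uniform(0., 2.*np.pi, n)
# alpha, gamma, phi don't enter this particular test W, so they only
# contribute their measure to domainsize and needn't be sampled

# Same integrand as above, with k = T = 1
ww = -np.log(theta*beta)
vals = (np.exp(-ww) - 1.)*r*r*np.sin(beta)*np.sin(theta)

domainsize = (2.*np.pi)**4*np.pi*1.
result = domainsize*np.mean(vals)
error  = domainsize*np.std(vals, ddof=1)/np.sqrt(n)   # standard error of the mean
print(result, "+/-", error)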

Of course, you can chain the triple integrals as you suggest:

import scipy.integrate
import math

def w(r, theta, phi, alpha, beta, gamma):
    return -math.log(theta * beta)

# tplquad integrates func(z, y, x) with x outermost and z innermost, so here
# gamma is the outer variable, alpha the middle, phi the inner;
# r, theta, beta are passed through via args
def integrand(phi, alpha, gamma, r, theta, beta):
    ww = w(r, theta, phi, alpha, beta, gamma)
    k = 1.
    T = 1.
    return (math.exp(-ww/(k*T)) - 1.)*r*r*math.sin(beta)*math.sin(theta)

# limits of integration (constants, wrapped as callables since tplquad's
# inner limits are functions of the outer variables)

def zero(x, y=0):
    return 0.

def one(x, y=0):
    return 1.

def pi(x, y=0):
    return math.pi

def twopi(x, y=0):
    return 2.*math.pi

# integrate over phi [0, Pi), alpha [0, 2 Pi), gamma [0, 2 Pi)
def secondIntegrals(r, theta, beta):
    res, err = scipy.integrate.tplquad(integrand, 0., 2.*math.pi, zero, twopi, zero, pi, args=(r, theta, beta))
    return res   # note the inner error estimate is thrown away

# integrate over r [0, 1), beta [0, 2 Pi), theta [0, 2 Pi)
def integral():
    return scipy.integrate.tplquad(secondIntegrals, 0., 2.*math.pi, zero, twopi, zero, one)

expected = 16*math.pow(math.pi,5)/3.
result, err = integral()
diff = abs(result - expected)

print("Result = ", result, " estimated error = ", err)
print("Known result = ", expected, " error = ", diff, " = ", 100.*diff/expected, "%")

This is slow but gives very good results for this simple case. Which approach is better will come down to how complicated your W is and what your accuracy requirements are: a simple (fast to evaluate) W with high accuracy requirements will push you toward this sort of nested quadrature, while a complicated (slow to evaluate) W with moderate accuracy requirements will push you toward MC techniques.

Result =  1632.10498552  estimated error =  3.59054059995e-11
Known result =  1632.10498552  error =  4.54747350886e-13  =  2.7862628625e-14 %
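
Also worth knowing: scipy.integrate.nquad handles an arbitrary number of nested integrals in a single call, which answers the original question directly. A minimal sketch reusing the integrand above (nquad does the same nested adaptive quadrature under the hood, so expect it to be comparably slow; it integrates over the function's arguments in order, first argument innermost):

import math
import scipy.integrate

# Ranges are listed in the same order as integrand's arguments:
# phi, alpha, gamma, r, theta, beta
result, err = scipy.integrate.nquad(
    integrand,
    [[0., math.pi],        # phi
     [0., 2.*math.pi],     # alpha
     [0., 2.*math.pi],     # gamma
     [0., 1.],             # r
     [0., 2.*math.pi],     # theta
     [0., 2.*math.pi]])    # beta
print(result, err)

(In recent SciPy versions, dblquad and tplquad also accept plain floats for the inner limits, so the zero/one/pi/twopi helpers above are no longer strictly necessary.)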
