Profiling

Profiling means timing your code so as to identify bottlenecks. If one function is taking up 99% of the time in your program, it is sensible to focus on optimizing that function first. It is a truism in computer science that we are generally hopeless at guessing where the bottlenecks in complex programs are, so we need to make use of profiling tools to help us.

Install profiling tools:

pip install --pre line-profiler
pip install psutil
pip install memory_profiler

References:

  1. http://scipy-lectures.github.com/advanced/optimizing/index.html
  2. http://pynash.org/2013/03/06/timing-and-profiling.html
! pip install --pre line-profiler &> /dev/null
! pip install psutil &> /dev/null
! pip install memory_profiler &> /dev/null

Create an IPython profile

$ ipython profile create

Add the extensions to .ipython/profile_default/ipython_config.py

c.TerminalIPythonApp.extensions = [
    'line_profiler',
    'memory_profiler',
]

Using the timeit module

We can measure the time taken by an arbitrary code block by reading a timer before and after the block and taking the difference.

def f(nsec=1.0):
    """Function sleeps for nsec seconds."""
    import time
    time.sleep(nsec)
import timeit

start = timeit.default_timer()
f()
elapsed = timeit.default_timer() - start
elapsed
1.0014
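
The timeit module can also run the statement for us and handle the repetition. A minimal sketch, where the number of runs is an arbitrary choice and the setup string imports f from the interactive namespace:

import timeit

# Run f(0.1) five times and report the total elapsed time in seconds.
total = timeit.timeit('f(0.1)', setup='from __main__ import f', number=5)
total / 5  # average seconds per call, roughly 0.1

# repeat() performs several independent trials; the minimum is the least noisy estimate.
min(timeit.repeat('f(0.1)', setup='from __main__ import f', number=5, repeat=3))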

In the IPython notebook, individual functions can also be timed using %timeit. Useful options to %timeit include:

  • -n <N>: execute the given statement <N> times in a loop. If this value is not given, a fitting value is chosen.
  • -r <R>: repeat the loop <R> times and take the best result. Default: 3.
%timeit f(0.5)
1 loops, best of 3: 500 ms per loop
%timeit -n2 -r4 f(0.5)
2 loops, best of 4: 501 ms per loop

We can also measure the time to execute an entire cell with %%time - this provides three readouts:

  • Wall time - time from start to finish of the call. This is all elapsed time including time slices used by other processes and time the process spends blocked (for example if it is waiting for I/O to complete).
  • User is the amount of CPU time spent executing the process, excluding operating system (kernel) calls.
  • Sys is the amount of CPU time spent in system calls within the kernel, as opposed to library code.
%%time

f(1)
f(0.5)
CPU times: user 543 µs, sys: 762 µs, total: 1.31 ms
Wall time: 1.5 s

Using cProfile

In the notebook, function-level profiling with cProfile can be done using %prun, which reports the following columns:

  • ncalls: the number of calls.
  • tottime: the total time spent in the given function, excluding time spent in calls to sub-functions.
  • percall: tottime divided by ncalls.
  • cumtime: the cumulative time spent in this function and all sub-functions (from invocation until exit); this figure is accurate even for recursive functions.
  • percall: cumtime divided by the number of primitive calls.
  • filename:lineno(function): where each function is defined.
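
Outside the notebook, the same statistics can be gathered with the cProfile and pstats modules directly. A minimal sketch, where work() is just a placeholder for the code being profiled:

import cProfile
import pstats

def work():
    # stand-in for the code you actually want to profile
    return sum(i**2 for i in range(100000))

pr = cProfile.Profile()
pr.enable()
work()
pr.disable()

# Sort by internal time and show the 10 most expensive functions,
# mirroring the %prun output shown below.
pstats.Stats(pr).sort_stats('time').print_stats(10)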

Profiling Newton iterations

def newton(z, f, fprime, max_iter=100, tol=1e-6):
    """The Newton-Raphson method."""
    for i in range(max_iter):
        step = f(z)/fprime(z)
        if abs(step) < tol:
            return i, z
        z -= step
    return i, z
import numpy as np
import matplotlib.pyplot as plt

def plot_newton_iters(p, pprime, n=200, extent=[-1,1,-1,1], cmap='hsv'):
    """Shows how long it takes to converge to a root using the Newton-Raphson method."""
    m = np.zeros((n,n))
    xmin, xmax, ymin, ymax = extent
    for r, x in enumerate(np.linspace(xmin, xmax, n)):
        for s, y in enumerate(np.linspace(ymin, ymax, n)):
            z = x + y*1j
            m[s, r] = newton(z, p, pprime)[0]
    plt.imshow(m, cmap=cmap, extent=extent)
def f(x):
    return x**3 - 1

def fprime(x):
    return 3*x**2
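
As a quick sanity check before profiling, newton should converge to the real cube root of unity from a nearby (arbitrarily chosen) starting point:

newton(1.5 + 0.5j, f, fprime)  # returns (iterations used, estimated root); the root is close to 1 + 0j
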
stats = %prun -r -q plot_newton_iters(f, fprime)

# Restrict to 10 lines
stats.sort_stats('time').print_stats(10);
      1088832 function calls (1088459 primitive calls) in 1.938 seconds

Ordered by: internal time
List reduced from 445 to 10 due to restriction <10>

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 40000    0.623    0.000    1.343    0.000 <ipython-input-8-3671b81b1850>:1(newton)
     1    0.519    0.519    1.938    1.938 <ipython-input-9-0773c96453fa>:1(plot_newton_iters)
324388    0.312    0.000    0.312    0.000 <ipython-input-10-dbc2ff3e5adf>:1(f)
324388    0.290    0.000    0.290    0.000 <ipython-input-10-dbc2ff3e5adf>:4(fprime)
 40004    0.072    0.000    0.072    0.000 {range}
324392    0.045    0.000    0.045    0.000 {abs}
   421    0.003    0.000    0.008    0.000 path.py:199(_update_values)
   201    0.003    0.000    0.007    0.000 function_base.py:9(linspace)
   837    0.003    0.000    0.004    0.000 weakref.py:47(__init__)
  2813    0.003    0.000    0.003    0.000 __init__.py:871(__getitem__)
# Restrict using regular expression match
stats.sort_stats('time').print_stats(r'ipython');
      1088832 function calls (1088459 primitive calls) in 1.938 seconds

Ordered by: internal time
List reduced from 445 to 4 due to restriction <'ipython'>

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 40000    0.623    0.000    1.343    0.000 <ipython-input-8-3671b81b1850>:1(newton)
     1    0.519    0.519    1.938    1.938 <ipython-input-9-0773c96453fa>:1(plot_newton_iters)
324388    0.312    0.000    0.312    0.000 <ipython-input-10-dbc2ff3e5adf>:1(f)
324388    0.290    0.000    0.290    0.000 <ipython-input-10-dbc2ff3e5adf>:4(fprime)

Using the line profiler

%load_ext line_profiler
lstats = %lprun -r -f plot_newton_iters plot_newton_iters(f, fprime)

lstats.print_stats()
Timer unit: 1e-06 s

Total time: 2.40384 s
File: <ipython-input-9-0773c96453fa>
Function: plot_newton_iters at line 1

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     1                                           def plot_newton_iters(p, pprime, n=200, extent=[-1,1,-1,1], cmap='hsv'):
     2                                               """Shows how long it takes to converge to a root using the Newton-Raphson method."""
     3         1           81     81.0      0.0      m = np.zeros((n,n))
     4         1            1      1.0      0.0      xmin, xmax, ymin, ymax = extent
     5       201          396      2.0      0.0      for r, x in enumerate(np.linspace(xmin, xmax, n)):
     6     40200        74400      1.9      3.1          for s, y in enumerate(np.linspace(ymin, ymax, n)):
     7     40000       466076     11.7     19.4              z = x + y*1j
     8     40000      1708697     42.7     71.1              m[s, r] = newton(z, p, pprime)[0]
     9         1       154191 154191.0      6.4      plt.imshow(m, cmap=cmap, extent=extent)
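
line_profiler can also be used on a standalone script: decorate the functions of interest with @profile and run the script through kernprof. A minimal sketch with an illustrative script and function name:

# fib_script.py
# The @profile decorator is injected by kernprof at run time, so no import is needed.
@profile
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == '__main__':
    fib(100000)

Run it from the shell with

$ kernprof -l -v fib_script.py

which writes fib_script.py.lprof and prints the line-by-line timings.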

Using the memory profiler

Sometimes the problem is that too much memory is being used, and we need to reduce it so that we can avoid disk churning (swapping of memory from RAM to hard disk). Two useful magic functions are %memit, which works like %timeit but shows space rather than time consumption, and %mprun, which is like %lprun for memory usage.

Note that %mprun requires the function to be evaluated to be defined in a file.

%load_ext memory_profiler
%memit np.random.random((1000, 1000))
peak memory: 90.66 MiB, increment: 7.66 MiB
%%file foo.py

def foo(n):
    phrase = 'repeat me'
    pmul = phrase * n                           # repetition by multiplication
    pjoi = ''.join([phrase for x in range(n)])  # repetition via join
    pinc = ''
    for x in range(n):
        pinc += phrase                          # repetition by incremental concatenation
    del pmul, pjoi, pinc
Overwriting foo.py
import foo

%mprun -f foo.foo foo.foo(10000);
('',)
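
memory_profiler can also report line-by-line memory use for a script run outside the notebook. A minimal sketch reusing the foo example (the file name is illustrative); the decorated function prints its report when the script is run normally:

# foo_standalone.py
from memory_profiler import profile

@profile
def foo(n):
    phrase = 'repeat me'
    pmul = phrase * n
    pjoi = ''.join([phrase for x in range(n)])
    pinc = ''
    for x in range(n):
        pinc += phrase
    del pmul, pjoi, pinc

if __name__ == '__main__':
    foo(10000)

$ python foo_standalone.py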
