How to reference a function at a specific point in NDSolve?

Published 2024-11-28 11:49:33


The problem:

I am trying to solve this differential equation:

K[x_, x1_] := 1;
NDSolve[{A''[x] == Integrate[K[x, x1] A[x1], {x1, 0, 1}], 
         A[0] == 0, A'[1] == 1}, A[x], x]

and I'm getting errors (Function::slotn and NDSolve::ndnum)
(it should return a numeric function that is equal to 3/16 x^2 + 5/8 x)

I am looking for a way to solve this differential equation: Is there a way to write it in a better form, such that NDSolve will understand it? Is there another function or package that can help?

Note 1: In my full problem, K[x, x1] is not 1 -- it depends (in a complex way) on x and x1.
Note 2: Naively differentiating both sides of the equation with respect to x won't work, because the integration limits are definite.

My first impression:

It seems that Mathematica doesn't like me referencing a point in A[x] -- the same errors occur when I'm doing this simplified version:

NDSolve[{A''[x] == A[0.5], A[0] == 0, A'[1] == 1}, A[x], x]

(it should return a numeric function that is equal to 2/11 x^2 + 7/11 x)

In this case one can avoid the problem by analytically solving A''[x] == c and then finding c, but in my first problem that doesn't seem to work -- it only transforms the differential equation into an integral one, which (N)DSolve can't solve afterwards.
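For what it's worth, the closed form quoted for this simplified version can be checked by exactly that undetermined-coefficients route. A quick sketch in Python (my own cross-check, not part of the question; exact rationals via the standard library):

```python
from fractions import Fraction as F

# Ansatz: A(x) = a2*x^2 + a1*x  (A(0) == 0 kills the constant term).
# A''(x) == 2*a2 is constant, so A''[x] == A[0.5] reads
#     2*a2 == a2*(1/2)**2 + a1*(1/2),
# and the boundary condition A'(1) == 1 reads 2*a2 + a1 == 1.
# Substituting a1 = 1 - 2*a2 and collecting the a2 terms:
a2 = F(1, 2) / (2 - F(1, 4) + 1)
a1 = 1 - 2 * a2

print(a2, a1)  # 2/11 7/11 -- i.e. A(x) = 2/11 x^2 + 7/11 x as quoted
```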

Comments (3)

香橙ぽ 2024-12-05 11:49:33


I can suggest a way to reduce your equation to an integral equation, which can be solved numerically by approximating its kernel with a matrix, thereby reducing the integration to matrix multiplication.

First, it is clear that the equation can be integrated twice over x, first from 1 to x, and then from 0 to x, so that:

A[x] == A[0] + A'[1] x + Integrate[intIntK[x, x1] A[x1], {x1, 0, 1}]

We can now discretize this equation, putting it on an equidistant grid:

A[i] == A[0] + A'[1] x[i] + delta Sum[intIntK[x[i], x[j]] A[j], {j, 1, n}]

Here, A[x] becomes a vector, the integrated kernel intIntK becomes a matrix, and the integration is replaced by a matrix multiplication. The problem is then reduced to a system of linear equations.

The easiest case (the one I will consider here) is when the kernel intIntK can be derived analytically - in this case the method will be quite fast. Here is the function to produce the integrated kernel as a pure function:

Clear[computeDoubleIntK]
computeDoubleIntK[kernelF_] :=
  Block[{x, x1},
   Function[
    Evaluate[
      Integrate[
         Integrate[kernelF[y, x1], {y, 1, x}] /. x -> y, {y, 0, x}] /. 
         {x -> #1, x1 -> #2}]]];

In our case:

In[99]:= K[x_,x1_]:=1;

In[100]:= kernel = computeDoubleIntK[K]
Out[100]= -#1+#1^2/2&
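This output is easy to sanity-check independently of Mathematica. A small Python sketch (mine, not from the answer; a midpoint rule stands in for the symbolic integration, and the inner integral Integrate[1, {y1, 1, y}] == y - 1 is done by hand):

```python
def int_int_k_numeric(x, n=1000):
    # Midpoint rule for Integrate[(y - 1), {y, 0, x}], where (y - 1) is the
    # inner integral of the constant kernel K == 1 from 1 to y.
    h = x / n
    return sum(((i + 0.5) * h - 1.0) * h for i in range(n))

# Compare against the pure function -#1 + #1^2/2 & at a few sample points.
checks = [abs(int_int_k_numeric(x) - (-x + x * x / 2)) for x in (0.3, 0.7, 1.0)]
print(max(checks))  # agrees to rounding error (midpoint is exact on linears)
```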

Here is the function to produce the kernel matrix and the r.h.s. vector:

 computeDiscreteKernelMatrixAndRHS[intkernel_, a0_, aprime1_ , 
    delta_, interval : {_, _}] :=
  Module[{grid, rhs, matrix},
    grid = Range[Sequence @@ interval, delta];
    rhs = a0 + aprime1*grid; (* constant plus a linear term *)
    matrix = 
      IdentityMatrix[Length[grid]] - delta*Outer[intkernel, grid, grid];
    {matrix, rhs}]

To give a very rough idea of how this may look (I use delta = 1/2 here):

In[101]:= computeDiscreteKernelMatrixAndRHS[kernel,0,1,1/2,{0,1}]

Out[101]= {{{1,0,0},{3/16,19/16,3/16},{1/4,1/4,5/4}},{0,1/2,1}}
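For readers following along without Mathematica, this 3x3 matrix and r.h.s. can be reproduced with exact rationals. A sketch (my own re-implementation; int_int_k is my name for the integrated kernel -#1 + #1^2/2 & from Out[100]):

```python
from fractions import Fraction as F

def int_int_k(x, x1):
    # The twice-integrated kernel for K == 1; x1 is unused in this case.
    return -x + x * x / 2

delta = F(1, 2)
grid = [F(0), F(1, 2), F(1)]

# matrix = IdentityMatrix[n] - delta * Outer[int_int_k, grid, grid]
matrix = [[(1 if i == j else 0) - delta * int_int_k(x, x1)
           for j, x1 in enumerate(grid)]
          for i, x in enumerate(grid)]
rhs = [0 + 1 * x for x in grid]  # a0 + aprime1*grid with a0 = 0, aprime1 = 1

print(matrix)  # [[1, 0, 0], [3/16, 19/16, 3/16], [1/4, 1/4, 5/4]]
print(rhs)     # [0, 1/2, 1] -- matching Out[101]
```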

We now need to solve the linear equation, and interpolate the result, which is done by the following function:

Clear[computeSolution];
computeSolution[intkernel_, a0_, aprime1_ , delta_, interval : {_, _}] :=
With[{grid = Range[Sequence @@ interval, delta]},
  Interpolation@Transpose[{
    grid,
    LinearSolve @@ 
      computeDiscreteKernelMatrixAndRHS[intkernel, a0, aprime1, delta,interval]
  }]]

Here I will call it with a delta = 0.1:

In[90]:= solA = computeSolution[kernel,0,1,0.1,{0,1}]
Out[90]= InterpolatingFunction[{{0.,1.}},<>] 

We now plot the result vs. the exact analytical solution found by @Sasha, as well as the error:

(plot: the numerical solution vs. the exact analytical solution found by @Sasha, and the error)

I intentionally chose delta large enough so that the errors are visible. If you choose delta to be, say, 0.01, the plots will be visually identical. Of course, the price of taking a smaller delta is the need to produce and solve larger matrices.
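As a cross-check on the size of these errors, the whole pipeline fits in a short pure-Python sketch (again my own re-implementation of the scheme, with a naive Gaussian elimination standing in for LinearSolve):

```python
delta, n = 0.1, 11                       # grid Range[0, 1, 0.1]
grid = [i * delta for i in range(n)]

def int_int_k(x, x1):
    # Twice-integrated kernel for K == 1 (computeDoubleIntK's -#1 + #1^2/2 &).
    return -x + x * x / 2.0

a0, aprime1 = 0.0, 1.0
A = [[(1.0 if i == j else 0.0) - delta * int_int_k(grid[i], grid[j])
      for j in range(n)] for i in range(n)]
b = [a0 + aprime1 * x for x in grid]

# Forward elimination with partial pivoting, then back substitution.
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
sol = [0.0] * n
for r in range(n - 1, -1, -1):
    s = b[r] - sum(A[r][c] * sol[c] for c in range(r + 1, n))
    sol[r] = s / A[r][r]

# Compare against the exact solution 3/16 x^2 + 5/8 x on the grid.
err = max(abs(s - (3 / 16 * x**2 + 5 / 8 * x)) for s, x in zip(sol, grid))
print(err)  # about 0.015: the O(delta) discrepancy visible at this coarse delta
```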

For kernels that can be obtained analytically, the main bottleneck will be in LinearSolve, but in practice it is pretty fast (for matrices that are not too large). When the kernel cannot be integrated analytically, the main bottleneck will be computing the kernel at many points (matrix creation). The matrix inverse has a larger asymptotic complexity, but this will start to play a role only for really large matrices - which are not necessary in this approach, since it can be combined with an iterative one - see below. You will typically define:

intK[x_?NumericQ, x1_?NumericQ] := NIntegrate[K[y, x1], {y, 1, x}]
intIntK[x_?NumericQ, x1_?NumericQ] := NIntegrate[intK[z, x1], {z, 0, x}]

As a way to speed it up in such cases, you can precompute the kernel intK on a grid and then interpolate, and the same for intIntK. This will however introduce additional errors, which you'll have to estimate (account for).

The grid itself need not be equidistant (I just used one for simplicity), but may (and probably should) be adaptive, and generally non-uniform.

As a final illustration, consider an equation with a non-trivial but symbolically integrable kernel:

In[146]:= sinkern =  computeDoubleIntK[50*Sin[Pi/2*(#1-#2)]&]

Out[146]= (100 (2 Sin[1/2 \[Pi] (-#1+#2)]+Sin[(\[Pi] #2)/2] 
     (-2+\[Pi] #1)))/\[Pi]^2&

In[157]:= solSin = computeSolution[sinkern,0,1,0.01,{0,1}]
Out[157]= InterpolatingFunction[{{0.,1.}},<>]

(plot: the solution solSin on the interval {0, 1})

Here are some checks:

In[163]:= Chop[{solSin[0],solSin'[1]}]
Out[163]= {0,1.}

In[153]:= 
diff[x_?NumericQ]:=
  solSin''[x] - NIntegrate[50*Sin[Pi/2*(#1-#2)]&[x,x1]*solSin[x1],{x1,0,1}];

In[162]:= diff/@Range[0,1,0.1]

Out[162]=  {-0.0675775,-0.0654974,-0.0632056,-0.0593575,-0.0540479,-0.0474074,
   -0.0395995,-0.0308166,-0.0212749,-0.0112093,0.000369261}

To conclude, I just want to stress that one has to perform a careful error-estimation analysis for this method, which I did not do.

EDIT

You can also use this method to get the initial approximate solution, and then iteratively improve it using FixedPoint or other means - in this way you will have a relatively fast convergence and will be able to reach the required precision without the need to construct and solve huge matrices.
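For the constant-kernel example, this iterative refinement can be seen in miniature: with K == 1 the twice-integrated equation collapses to a scalar fixed point for c = Integrate[A[x1], {x1, 0, 1}]. A toy Python sketch (my own reduction, not code from the answer):

```python
# With K == 1 the integral equation is A(x) = x + (x^2/2 - x)*c, and
# integrating that over [0, 1] gives c = 1/2 + c*(1/6 - 1/2),
# i.e. the contraction map c -> 1/2 - c/3 (ratio 1/3, so fast convergence).
c = 0.0                      # start from the linear approximation A(x) = x
for _ in range(30):
    c = 0.5 - c / 3.0

print(c)  # converges to 3/8 = 0.375, giving A(x) = 3/16 x^2 + 5/8 x
```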

宁愿没拥抱 2024-12-05 11:49:33


This is complementary to Leonid Shifrin's approach. We start with a linear function that interpolates the value and first derivative at the starting point. We use that in the integration with the given kernel function. We can then iterate, using each previous approximation in the integrated kernel that is used to make the next approximation.

I show an example below, using a more complicated kernel than just a constant function. I'll take it through two iterations and show tables of discrepancies.

kernel[x_, y_] := Sqrt[x]/(y^2 + 1/5)*Sin[x^2 + y]
intkern[x_?NumericQ, aa_] := 
 NIntegrate[kernel[x, y]*aa[y], {y, 0, 1}, MinRecursion -> 2, 
  AccuracyGoal -> 3]

Clear[a];
a0 = 0;
a1 = 1;
a[0][x_] := a0 + a1*x

soln1 = a[1][x] /. 
   First[NDSolve[{(a[1]^\[Prime]\[Prime])[x] == intkern[x, a[0]], 
      a[1][0] == a0, a[1][1] == a1}, a[1][x], {x, 0, 1}]];
a[1][x_] = soln1;

In[283]:= Table[a[1]''[x] - intkern[x, a[1]], {x, 0., 1, .1}]

Out[283]= {4.336808689942018*10^-19, 0.01145100326794241, \
0.01721655945379122, 0.02313249302884235, 0.02990900241909161, \
0.03778448183557359, 0.04676409320217928, 0.05657128568058478, \
0.06665818935524814, 0.07624149919589895, 0.08412643746245929}

In[285]:= 
soln2 = a[2][x] /. 
   First[NDSolve[{(a[2]^\[Prime]\[Prime])[x] == intkern[x, a[1]], 
      a[2][0] == a0, a[2][1] == a1}, a[2][x], {x, 0, 1}]];
a[2][x_] = soln2;

In[287]:= Table[a[2]''[x] - intkern[x, a[2]], {x, 0., 1, .1}]

Out[287]= {-2.168404344971009*10^-19, -0.001009606971360516, \
-0.00152476679745811, -0.002045817184941901, -0.002645356229312557, \
-0.003343218015068372, -0.004121109614310836, -0.004977453722712966, \
-0.005846840469889258, -0.006731367269472544, -0.007404971586975062}

So we have errors of less than 0.01 at this stage. Not too bad. One drawback is that it was fairly slow to get the second approximation. There may be ways to tune NDSolve to improve on that.

This is complementary to Leonid's method for two reasons.

(1) If this did not converge well because the initial linear approximation was not sufficiently close to the true result, one might instead begin with an approximation found by a finite differencing scheme. That would be akin to what he did.

(2) He pretty much indicated this himself, as a method that might follow his and produce refinements.

Daniel Lichtblau

榆西 2024-12-05 11:49:33


The way your equation is currently written, A''[x] == const, and that constant is independent of x. Hence the solution always has the form of a quadratic polynomial. Your problem then reduces to solving for the indeterminate coefficients:

In[13]:= A[x_] := a2 x^2 + a1 x + a0;

In[14]:= K[x_, x1_] := 1;

In[16]:= Solve[{A''[x] == Integrate[K[x, x1] A[x1], {x1, 0, 1}], 
  A[0] == 0, A'[1] == 1}, {a2, a1, a0}]

Out[16]= {{a2 -> 3/16, a1 -> 5/8, a0 -> 0}}

In[17]:= A[x] /. First[%]

Out[17]= (5 x)/8 + (3 x^2)/16
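The same reduction is easy to cross-check outside Mathematica with exact rationals. A quick Python sketch (mine; the elimination steps just mirror what Solve does here):

```python
from fractions import Fraction as F

# Ansatz A(x) = a2*x^2 + a1*x + a0, with a0 = 0 forced by A(0) == 0.
# A''(x) == 2*a2, and Integrate[A[x1], {x1, 0, 1}] == a2/3 + a1/2, so
#     2*a2 == a2/3 + a1/2    and    2*a2 + a1 == 1   (from A'(1) == 1).
# Substituting a1 = 1 - 2*a2 and collecting the a2 terms:
a2 = F(1, 2) / (2 - F(1, 3) + 1)
a1 = 1 - 2 * a2

print(a2, a1)  # 3/16 5/8 -- matching Out[16] and Out[17]
```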