Welcome to pacopy’s documentation!

pacopy.natural(problem, u0, lambda0, callback, lambda_stepsize0=0.1, lambda_stepsize_max=inf, lambda_stepsize_aggressiveness=2, max_newton_steps=5, newton_tol=1e-12, max_steps=inf, verbose=True, use_first_order_predictor=True, milestones=None)

Natural parameter continuation.
The most naive parameter continuation. This simply solves \(F(u, \lambda_0)=0\) using Newton’s method, then changes \(\lambda\) slightly and solves again using the previous solution as an initial guess. Cannot handle turning points.
Parameters:
 problem – Instance of the problem class
 u0 – Initial guess
 lambda0 – Initial parameter value
 callback – Callback function
 lambda_stepsize0 (float) – Initial step size
 lambda_stepsize_max (float) – Maximum step size
 lambda_stepsize_aggressiveness (float) – The step size is adapted after each step such that max_newton_steps is approximately exhausted. This parameter determines how aggressively the step size is increased if too few Newton steps were used.
 max_newton_steps (int) – Maximum number of Newton steps
 newton_tol (float) – Newton tolerance
 max_steps (int) – Maximum number of continuation steps
 verbose (bool) – Verbose output
 use_first_order_predictor (bool) – Once a solution has been found, one can use it to bootstrap the Newton process for the next iteration (order 0). Another possibility is to use \(u - s J^{-1}(u, \lambda) \frac{df}{d\lambda}\), a first-order approximation.
 milestones (Optional[Iterable[float]]) – Don’t step over these values.
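The overall scheme can be illustrated with a minimal, self-contained sketch for a scalar problem; the names (`natural_continuation`, `F`, `dF_du`) are illustrative and not part of pacopy’s API:

```python
# Minimal sketch of natural parameter continuation (scalar case, fixed
# step size, zeroth-order predictor). Illustrative only, not pacopy code.
import math

def natural_continuation(F, dF_du, u0, lmbda0, lmbda_stepsize=0.1,
                         max_steps=10, newton_tol=1e-12, max_newton_steps=5):
    """Solve F(u, lmbda) = 0 for successive lmbda values, reusing each
    solution as the initial guess for the next step."""
    u, lmbda = u0, lmbda0
    branch = []
    for _ in range(max_steps):
        # Newton corrector at fixed lmbda
        for _ in range(max_newton_steps):
            r = F(u, lmbda)
            if abs(r) < newton_tol:
                break
            u -= r / dF_du(u, lmbda)
        branch.append((lmbda, u))
        lmbda += lmbda_stepsize
    return branch

# Example: F(u, lmbda) = u - lmbda * cos(u) has a unique solution for
# every lmbda, so no turning points appear along the branch.
branch = natural_continuation(
    lambda u, lmbda: u - lmbda * math.cos(u),
    lambda u, lmbda: 1.0 + lmbda * math.sin(u),
    u0=0.0, lmbda0=0.0,
)
```

A first-order predictor would additionally shift `u` by the step size times a solve of the Jacobian system with \(\frac{df}{d\lambda}\) as the right-hand side before the corrector runs.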

pacopy.euler_newton(problem, u0, lmbda0, callback, max_steps=inf, verbose=True, newton_tol=1e-12, max_newton_steps=5, predictor_variant='tangent', corrector_variant='tangent', stepsize0=0.5, stepsize_max=inf, stepsize_aggressiveness=2, cos_alpha_min=0.9, theta0=1.0, adaptive_theta=False, converge_onto_zero_eigenvalue=False)

Pseudo-arclength continuation.
This implementation takes some inspiration from LOCA in that no single bordered system is solved, but the solution is constructed from two solves of the plain Jacobian system. This has several advantages, one being that preconditioners for the Jacobian can be reused.
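The two-solves construction is classical block elimination of the bordered system; a small NumPy sketch (all concrete numbers are illustrative, not from pacopy) shows that it reproduces the direct bordered solve:

```python
# Sketch: solve the bordered system
#     [ J        dF/dlmbda ] [ du     ]   [ -f ]
#     [ c^T      d         ] [ dlmbda ] = [ -g ]
# via two solves with the plain Jacobian J (block elimination), so any
# preconditioner for J can be reused. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 5
J = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned Jacobian
b = rng.standard_normal(n)   # dF/dlmbda
c = rng.standard_normal(n)   # du/ds (tangent, u-component)
d = 0.7                      # dlmbda/ds
f = rng.standard_normal(n)   # residual of F
g = 0.3                      # residual of the arclength equation

# Two solves with the same Jacobian:
z1 = np.linalg.solve(J, f)
z2 = np.linalg.solve(J, b)
dlmbda = (c @ z1 - g) / (d - c @ z2)
du = -z1 - dlmbda * z2

# Agrees with a direct solve of the full bordered system:
bordered = np.block([[J, b[:, None]], [c[None, :], np.array([[d]])]])
ref = np.linalg.solve(bordered, -np.concatenate([f, [g]]))
assert np.allclose(np.concatenate([du, [dlmbda]]), ref)
```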
Parameters:
 problem – Instance of the problem class
 u0 – Initial guess
 lmbda0 – Initial parameter value
 callback – Callback function
 max_steps (int) – Maximum number of continuation steps
 verbose (bool) – Verbose output
 newton_tol (float) – Newton tolerance
 max_newton_steps (int) – Maximum number of Newton steps
 predictor_variant (string) – "tangent" or "secant"
 corrector_variant (string) – "tangent" or "secant"
 stepsize0 (float) – Initial step size
 stepsize_max (float) – Maximum step size
 stepsize_aggressiveness (float) – The step size is adapted after each step such that max_newton_steps is approximately exhausted. This parameter determines how aggressively the step size is increased if too few Newton steps were used.
 cos_alpha_min (float) –
To make a plotted parameter-solution norm curve look smooth, subsequent predictors should have a small angle between them. The correct way to do this would be to look at
\[\begin{split}\cos(\alpha) = v_i^T v_{i+1} / \|v_i\| / \|v_{i+1}\|, \\ v_i := (\|u_i^p\| - \|u_i\|, \lambda_i^p - \lambda_i).\end{split}\]
Instead of taking the solution and predictor norms, the entire vectors are taken,
\[w_i := (u_i^p - u_i, \lambda_i^p - \lambda_i),\]
with the appropriate inner product in the product space \(U\times \mathbb{R}\). This results in the expression
\[\cos(\alpha) = \frac{\langle \frac{du_0}{ds}, \frac{du}{ds}\rangle + \frac{d\lambda_0}{ds} \frac{d\lambda}{ds}}{\sqrt{\left\|\frac{du_0}{ds}\right\|^2 + \left(\frac{d\lambda_0}{ds}\right)^2} \sqrt{\left\|\frac{du}{ds}\right\|^2 + \left(\frac{d\lambda}{ds}\right)^2}}\]
When using the tangent predictor, this can be written as
\[\cos(\alpha) = \frac{\langle \frac{du}{d\lambda}, \frac{du_0}{d\lambda}\rangle + 1}{\sqrt{\left\|\frac{du}{d\lambda}\right\|^2 + 1} \sqrt{\left\|\frac{du_0}{d\lambda}\right\|^2 + 1}}\]
When the \(+1\)s are removed, this is the expression that is used in LOCA (equation (2.25) in the LOCA book).
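The tangent-predictor expression above is a one-liner in code; the helper below is illustrative, not pacopy’s implementation, and `inner` stands for whatever inner product the solution space \(U\) carries:

```python
# cos(alpha) between two successive tangent predictors in U x R,
# following the tangent-predictor formula above. Illustrative sketch.
import numpy as np

def cos_alpha(dudlmbda_old, dudlmbda_new, inner):
    """The +1 terms account for the dlmbda/dlmbda = 1 component."""
    num = inner(dudlmbda_new, dudlmbda_old) + 1.0
    den = np.sqrt(inner(dudlmbda_new, dudlmbda_new) + 1.0) * np.sqrt(
        inner(dudlmbda_old, dudlmbda_old) + 1.0
    )
    return num / den

# Identical tangents give cos(alpha) = 1, i.e., no kink in the curve:
v = np.array([1.0, 2.0])
```

A continuation step would be rejected (and the step size reduced) whenever this value drops below `cos_alpha_min`.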
 theta0 (float) –
The arclength equation is
\[\left\|\frac{du}{ds}\right\|^2 + \left(\frac{d\lambda}{ds}\right)^2 = 1.\]
Quoting from LOCA: It is numerically advantageous for the relative magnitudes of the parameter and solution updates to be of similar order. In particular, the advantage of using arclength parameterization can be lost if the solution contribution to the arc length equation becomes very small. In this algorithm, a single scaling factor \(\theta\) is used for the solution contribution in order to provide some control over the relative contributions of \(\lambda\) and \(u\). The modified arc length equation is then
\[\left\|\frac{du}{ds}\right\|^2 + \theta^2 \left(\frac{d\lambda}{ds}\right)^2 = 1.\]
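In practice, enforcing the modified equation amounts to rescaling the tangent vector; a hypothetical helper (not part of pacopy) makes the normalization explicit:

```python
# Rescale a tangent (du, dlmbda) so it satisfies the theta-modified
# arclength equation  <du, du> + theta**2 * dlmbda**2 = 1.
# Illustrative sketch; `inner` is the inner product of U.
import numpy as np

def normalize_tangent(du, dlmbda, theta, inner):
    nrm = np.sqrt(inner(du, du) + theta**2 * dlmbda**2)
    return du / nrm, dlmbda / nrm

# theta > 1 weights the parameter component more heavily:
du, dlmbda = normalize_tangent(np.array([3.0, 4.0]), 1.0, 2.0, np.dot)
```

With `adaptive_theta=True`, \(\theta\) would be readjusted along the branch rather than kept at `theta0`.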