
Numerical Methods for ODEs


On the properties of matrices defining BVMs
Lidia Aceto & Donato Trigiante
(Dipartimento di Energetica, Università di Firenze, Italy)


When we approximate the solution of the initial value problem

$\displaystyle \left\{ \begin{array}{ll} y^{\prime}(t) = f(t,y(t)), & \qquad t \in [t_0,T] \\ y(t_0)= y_0 & \end{array} \right.$

by applying the same Boundary Value Method (BVM) [1], with $ (\nu, k-\nu)$-boundary conditions, at consecutive grid points of a uniform mesh with stepsize $ h$, we obtain a discrete problem which may be written in matrix form as follows:

$\displaystyle A \: {\bf y} - h B \: {\bf f} = {\bf v},
$

where the vectors $ {\bf y}$ and $ {\bf f}$ contain the unknowns, $ {\bf v}$ is a known vector and the coefficient matrices $ A$ and $ B$ are Toeplitz band matrices.
In this talk we shall present the conditions under which a Toeplitz band matrix is positive definite. Such a result provides a useful tool for studying the stability of BVMs in an alternative way: classically, this problem has been treated by means of the theory of difference equations. Here we shall prove that a necessary condition for obtaining, in each class of BVMs, an $ A_{\nu,k-\nu}$-stable method is that the matrices $ A$ and $ B$ arising from its application be positive definite. In particular, our attention will be focused on the following classes of methods: Generalized BDFs (GBDF) and Top Order Methods (TOMs).
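As an aside, a real (not necessarily symmetric) matrix $ M$ satisfies $ x^T M x > 0$ for every nonzero $ x$ exactly when its symmetric part $ (M+M^T)/2$ is positive definite, so the property can be checked numerically on the matrices produced by a BVM discretization. The sketch below builds a Toeplitz band matrix from given stencil coefficients and performs this check; the coefficients shown are hypothetical placeholders, not those of an actual GBDF or TOM.

\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_band(col_entries, row_entries, n):
    """n x n Toeplitz band matrix from the nonzero entries of its first
    column and first row (both lists start with the diagonal entry)."""
    col = np.zeros(n); row = np.zeros(n)
    col[:len(col_entries)] = col_entries
    row[:len(row_entries)] = row_entries
    return toeplitz(col, row)

def is_positive_definite(M, tol=1e-12):
    """x^T M x > 0 for all x != 0  iff  the symmetric part of M is
    positive definite (M need not be symmetric)."""
    return np.linalg.eigvalsh(0.5 * (M + M.T)).min() > tol

# hypothetical stencil coefficients, only to exercise the check;
# they are NOT the coefficients of an actual GBDF or TOM
A = toeplitz_band([1.5, -0.5], [1.5, -1.0], 12)
B = toeplitz_band([0.5, 0.25], [0.5, 0.25], 12)
print(is_positive_definite(A), is_positive_definite(B))
\end{verbatim}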


References

[1]
L. Brugnano, D. Trigiante, Solving Differential Problems by Multistep Initial and Boundary Value Methods, Gordon & Breach Science Publishers, Amsterdam, 1998.


Numerical methods for isodynamical matrix flows with application to balanced realization in control theory
Nicoletta Del Buono & Luciano Lopez & Carmen Mastroserio
(Dipartimento Interuniversitario di Matematica, Università di Bari, Italy)


Recently, several numerical methods have been proposed for solving isospectral problems, i.e. matrix differential systems whose solutions preserve the spectrum during the evolution. In this talk we consider matrix differential systems called isodynamical flows, in which only a component of the matrix solution preserves its eigenvalues during the evolution, and we propose procedures for their numerical solution.
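For orientation, the simpler isospectral case can be written in Lax (commutator) form $ L' = [B(L),L]$, whose exact flow is a similarity transformation and hence preserves the eigenvalues of $ L$; a generic integrator only preserves them up to its order of accuracy, which is the motivation for structure-preserving procedures. The sketch below uses the Toda-like choice of $ B$ and a plain fourth-order Runge-Kutta step purely as an illustration; neither is one of the methods proposed in the talk.

\begin{verbatim}
import numpy as np

def toda_B(L):
    """Skew-symmetric generator: strictly upper minus strictly lower part of L."""
    up = np.triu(L, 1)
    return up - up.T

def rhs(L):
    """Isospectral (Lax-pair) right-hand side L' = [B(L), L]."""
    B = toda_B(L)
    return B @ L - L @ B

def rk4_step(L, h):
    k1 = rhs(L); k2 = rhs(L + 0.5*h*k1)
    k3 = rhs(L + 0.5*h*k2); k4 = rhs(L + h*k3)
    return L + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# symmetric tridiagonal initial matrix (Toda-like flow)
L = np.diag([1.0, 2.0, 3.0, 4.0]) + np.diag([0.5]*3, 1) + np.diag([0.5]*3, -1)
ev0 = np.sort(np.linalg.eigvalsh(L))
h = 0.01
for _ in range(1000):
    L = rk4_step(L, h)
# eigenvalue drift of the generic integrator; a structure-preserving
# scheme would keep this at round-off level
print(np.abs(np.sort(np.linalg.eigvalsh(L)) - ev0).max())
\end{verbatim}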

Applications of such numerical procedures may be found in systems theory, in particular in balanced realization problems.


On some numerical methods for spectral computations in regular and non regular Sturm-Liouville problems
G. Gheri & Paolo Ghelardoni & Marco Marletta
(Dipartimento di Matematica Applicata, Università di Pisa, Italy)


In regular Sturm-Liouville problems (SLPs) the main difficulty in spectral computations arises from the approximation of eigenvalues of higher index, so some procedure specifically designed to correct the computed approximations seems to be necessary. In this framework the class of boundary value methods (BVMs) equipped with symmetry properties is a powerful tool: the symmetric schemes preserve the analyticity of the computed solutions with respect to a discretization parameter, which makes it possible to set up an effective spectral correction procedure.
With regard to non-regular SLPs, the presence of internal singularities is responsible for a loss of accuracy in some classical methods. In particular, in the case of an eigenvalue embedded in the continuous spectrum, the Magnus series method and the JWKB method exhibit rather poor approximation properties, and standard BVMs experience a degradation of their order of convergence. Nevertheless, some BVMs employed in a non-standard form behave as in the regular case. Furthermore, the symmetric schemes make it possible to obtain enhanced estimates even though the problem is $ \lambda$-nonlinear.
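To illustrate the difficulty that correction procedures are meant to address (and not the BVM-based approach of the talk), the sketch below applies a plain second-order central-difference scheme to the model problem $ -y''=\lambda y$, $ y(0)=y(\pi)=0$, whose exact eigenvalues are $ k^2$: the relative error of the computed eigenvalues grows rapidly with the index $ k$.

\begin{verbatim}
import numpy as np

# model regular Sturm-Liouville problem: -y'' = lambda*y, y(0)=y(pi)=0,
# exact eigenvalues lambda_k = k^2, k = 1, 2, ...
N = 200                            # number of interior grid points
h = np.pi / (N + 1)
main = 2.0 * np.ones(N) / h**2
off = -1.0 * np.ones(N - 1) / h**2
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam = np.sort(np.linalg.eigvalsh(T))
for k in [1, 5, 20, 80, 150]:
    exact = float(k * k)
    print(f"k = {k:4d}   relative error = {abs(lam[k-1] - exact)/exact:.2e}")
# the relative error grows with the index k: higher eigenvalues need
# either finer meshes or an asymptotic/algebraic correction
\end{verbatim}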


On the solutions of symmetric BVMs for linear Hamiltonian Boundary Value Problems
Felice Iavernaro & Pierluigi Amodio & Donato Trigiante
(Dipartimento di Matematica, Università di Bari, Italy)


We consider the application of a symmetric Boundary Value Method to solve the linear Hamiltonian Boundary Value Problem

$\displaystyle \left\{ \begin{array}{l} {\bf y}'=JS{\bf y}, \qquad t \in [t_0,t_f], \\ \mbox{boundary conditions at } t_0 \mbox{ and } t_f, \end{array} \right. \qquad J= \left( \begin{array}{rr} 0 & I \\ -I & 0 \end{array} \right),$ (3)

on a uniform mesh with stepsize $ h$ ($S$ is symmetric and positive definite). Denoting by $ {\bf y}(t_n)$ and $ {\bf y}_n$ the solutions of (3) and of the BVM respectively, for a method of order $ p$ it is easy to check that the Hamiltonian function $ \sigma=({\bf y}(t_n))^T S {\bf y}(t_n)$ and its approximation $ \sigma_n={\bf y}_n^T S {\bf y}_n$ satisfy the relation $ \sigma_n=\sigma+O(h^p)$; that is, the Hamiltonian function evaluated on the numerical solution converges with the same order as the underlying method.
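A quick numerical check of this kind of relation can be made with any method of order $ p$; the sketch below uses the classical fourth-order Runge-Kutta method on a small linear Hamiltonian system (the matrix $ S$ is an arbitrary illustrative choice, and the method is not one of the symmetric BVMs of the talk) and monitors $ \vert\sigma_n-\sigma\vert$ as $ h$ is halved.

\begin{verbatim}
import numpy as np

# linear Hamiltonian system  y' = J S y,  Hamiltonian  sigma(y) = y^T S y
# (S below is an arbitrary illustrative choice)
S = np.diag([4.0, 1.0])
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = J @ S

def rk4_step(y, h):
    f = lambda z: A @ z
    k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
    return y + (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)

y0 = np.array([1.0, 0.0])
sigma = y0 @ S @ y0
T = 10.0
for n in [100, 200, 400, 800]:
    h = T / n
    y = y0.copy()
    for _ in range(n):
        y = rk4_step(y, h)
    print(f"h = {h:6.4f}   |sigma_n - sigma| = {abs(y @ S @ y - sigma):.3e}")
# halving h reduces the Hamiltonian error at least by 2**p (p = 4 here);
# for this linear test the observed rate is in fact slightly higher
\end{verbatim}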

We first prove that symmetric schemes of even order (ETRs, ETR$ _2$s, TOMs) possess a superconvergence property, namely they produce a solution satisfying $ \sigma_n=\sigma+O(h^{p+2})$. After that, the continuous and the discrete problems are shown to be deeply interconnected according to the following symmetric properties:

(i)
the solution of (3), projected on the given mesh, may be regarded as a solution generated by a symmetric BVM of even order applied to a Hamiltonian problem obtained by perturbing (3) with a Hamiltonian term of size $ O(h^p)$;
(ii)
apart from the initial and final additional conditions, the solution of a symmetric BVM of even order applied to (3) may be regarded as the projection on the mesh of the solution of a Hamiltonian problem obtained by perturbing (3) with a Hamiltonian term of size $ O(h^p)$.
From this correspondence a number of interesting properties satisfied by the true solution may be directly transferred to the numerical one. The additional methods act as a perturbation of this favourable situation, but their effect disappears very quickly as we move towards the middle of the time integration interval.


Galerkin-type methods with Chebyshev nodes for initial value problems
A. Napoli & F. Costabile
(Università della Calabria, Italy)


For the numerical solution of systems of nonlinear first-order ordinary differential equations we employ polynomial Galerkin-type techniques to devise global methods.

The basic idea is to approximate $ y'$ by a linear combination $ y'_n$ of some system of orthogonal polynomials $ \phi_k$ of degree $ k$, and to determine the coefficients of $ y'_n$ by requiring that it provide a Galerkin-type approximation on the whole given interval, using an appropriate discrete inner product.

Some methods which use Chebyshev polynomials of the first and second kind are studied; they turn out to be collocation methods. The stability of the equivalent implicit Runge-Kutta methods is studied by classical techniques. An extension to the solution of second-order differential equations is outlined.
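To show the kind of global approximation involved (a plain Chebyshev-Lobatto collocation sketch on a linear scalar test equation, not the authors' Galerkin-type construction), the code below solves $ y'=\lambda y$, $ y(0)=1$ on $ [0,2]$ by collocating a single global polynomial at Chebyshev points; the differentiation matrix follows Trefethen's classical construction.

\begin{verbatim}
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points on [-1,1] and the associated
    differentiation matrix (Trefethen, Spectral Methods in MATLAB)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

lam, T_end, N = -1.0, 2.0, 16
D, x = cheb(N)
t = T_end * (1.0 - x) / 2.0        # map [-1,1] to [0,T_end]; t[0] = 0
Dt = -(2.0 / T_end) * D            # chain rule for the change of variable

# collocation of y' = lam*y at every node except t_0, plus y(t_0) = 1
M = Dt - lam * np.eye(N + 1)
M[0, :] = 0.0
M[0, 0] = 1.0
rhs = np.zeros(N + 1)
rhs[0] = 1.0
y = np.linalg.solve(M, rhs)
print(np.abs(y - np.exp(lam * t)).max())   # spectral accuracy expected
\end{verbatim}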


Adapting the parameters of the numerical method to an oscillatory behaviour
Beatrice Paternoster
(Dipartimento di Matematica e Informatica, Università di Salerno, Italy)


We have been considering the possibility of adapting the parameters of a numerical method for ODEs to the special form of the solution. In particular, in the case of second order ODEs with oscillatory solutions, if the locations of the frequencies are known in advance, we can tune the parameters to the frequencies to provide better approximations to the oscillations. We use and compare three different approaches. First, assuming that the dominant frequencies $ \omega_j$ are given a priori, we extend trigonometric and mixed collocation to two-step Runge-Kutta methods. Then we derive a phase-fitted Runge-Kutta-Nyström (RKN) method which is exact in phase for linear problems with periodic solutions. Finally, assuming only that the frequencies lie in a given small nonnegative interval $ [\omega_1, \omega_2]$, we adapt the parameters of the RKN method through least squares minimization.
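The payoff of tuning a method to a known frequency can already be seen on the linear model $ y''=-\omega^2 y$: replacing the coefficient $ (h\omega)^2$ of the standard two-step (Störmer/Verlet) recursion by $ 2-2\cos(\omega h)$ makes the recursion exact in phase for this model. The toy comparison below is only meant to illustrate the idea and is not one of the two-step Runge-Kutta or RKN constructions of the talk.

\begin{verbatim}
import numpy as np

omega, h, T = 5.0, 0.1, 20.0       # omega: assumed known dominant frequency
n = int(round(T / h))
y_exact = np.cos(omega * n * h)    # exact solution of y'' = -omega^2 y, y(0)=1, y'(0)=0

# (a) standard two-step recursion (Stormer/Verlet), parameters not adapted:
#     y_{k+1} = 2 y_k - y_{k-1} - (h*omega)^2 y_k
y0, y1 = 1.0, np.cos(omega * h)    # exact starting values
for _ in range(n - 1):
    y0, y1 = y1, 2.0 * y1 - y0 - (h * omega) ** 2 * y1
print(f"standard coefficient:  error = {abs(y1 - y_exact):.3e}")

# (b) frequency-adapted coefficient: replacing (h*omega)^2 by 2 - 2*cos(omega*h)
#     makes the recursion exact in phase for this linear model
y0, y1 = 1.0, np.cos(omega * h)
coeff = 2.0 - 2.0 * np.cos(omega * h)
for _ in range(n - 1):
    y0, y1 = y1, 2.0 * y1 - y0 - coeff * y1
print(f"adapted coefficient:   error = {abs(y1 - y_exact):.3e}")
\end{verbatim}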


Numerical treatment of second order neutral delay differential equations using deficient spline functions
Raffaella Pavani & Franca Calio' & Elena Marchetti
(Politecnico di Milano, Italy)


As is well known, it is standard practice first to reduce a $ q$th-order ordinary differential equation (ODE) to a first-order system. However, as pointed out in [5], ``the only exception is when the equation is second order, for which special numerical methods have been devised''. As we showed in two previous papers on second order neutral delay differential equations (NDDE), such a special method can be easily implemented using deficient spline functions. Therefore for NDDE it can be advisable not to resort to reduction, especially since such a reduction can be seriously problematic for some NDDE problems.

In particular we considered the following second order NDDE problem:

\begin{displaymath}\left\{
\begin{array}{l}
y^{\prime \prime }(x)=f\big(x,\,y(x),\,y(g(x)),\,y^{\prime }(g(x)),\,y^{\prime \prime }(g(x))\big), \qquad x\in \lbrack a,b], \\
y(x)=\varphi (x), \quad y^{\prime }(x)=\varphi ^{\prime }(x), \qquad x\in \lbrack \alpha ,a], \qquad \alpha =\inf_{x\in \lbrack a,b]}(g(x)).
\end{array}\right. \smallskip\end{displaymath}

As is well known, many robust and efficient delay differential equation solvers have recently been implemented and made available as public domain software. However, only a small subset of them handles neutral problems, and in any case reduction to a first-order system is always used. Our aim is therefore to make it as easy as possible to solve the above NDDE problem effectively, without reduction.

To this end we merge two classical techniques: the approximation of the solution by means of deficient spline functions and the use of a collocation method in order to compute the approximating function.
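As a much-simplified illustration of this combination (a first-order, non-neutral equation with constant delay and a $ C^1$ quadratic spline collocated at the right endpoint of each step, rather than the deficient splines for second order neutral problems studied in the papers below), the sketch solves $ y'(x)=-y(x-1)$ with $ y(x)=1$ for $ x\le 0$; the delayed term is evaluated on the spline pieces already computed.

\begin{verbatim}
import numpy as np

# toy problem: y'(x) = -y(x - 1),  y(x) = 1 for x <= 0  (constant delay 1);
# by the method of steps the exact solution gives y(2) = -0.5
h, x_end = 0.1, 2.0
segments = []          # (x_left, y, d, b): piece y + d*s + b*s^2, s = x - x_left

def spline_eval(x):
    """Evaluate the initial function / already computed spline pieces at x."""
    if x <= 0.0:
        return 1.0     # constant initial function
    xl, y, d, b = next(seg for seg in reversed(segments) if seg[0] <= x + 1e-14)
    s = x - xl
    return y + d * s + b * s * s

y, d, x = 1.0, -1.0, 0.0            # y(0) and the slope y'(0+) = -phi(-1)
while x < x_end - 1e-12:
    delayed = spline_eval(x + h - 1.0)
    b = (-delayed - d) / (2.0 * h)  # collocation of y' = -y(x-1) at x + h
    segments.append((x, y, d, b))
    y = y + d * h + b * h * h       # value and slope at the next knot (C^1 join)
    d = d + 2.0 * b * h
    x += h

print(f"spline value at x = 2: {y:.6f}   (exact: -0.5)")
\end{verbatim}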

In [2] we presented the numerical method in detail; moreover, we extended the convergence and consistency theorems given in [1] for the first-order problem to the second-order problem.

In [3] we improve the stability result: whereas for the first-order NDDE problem the method is stable only for spline order $ m<4$, for the second-order NDDE problem we prove stability for spline order $ m<5$.

A major disadvantage of special methods implemented for second order ODEs is that the accumulation of rounding errors is very fast, being proportional to $ 1/h^{2}$, where $ h$ is the stepsize used [4]. However, we show that our method for second order NDDE can use variable steps; therefore it converges efficiently even when relatively large stepsizes are used for most of the integration.

Our algorithm, implemented in MATLAB, turns out to be simple and flexible; moreover, for problems whose solution exhibits low regularity it provides an excellent numerical solution and may require fewer flops than other public domain software.


References

[1]
Bellen A. and Micula G., Spline approximations for neutral delay differential equations, Revue d'Analyse Numér. et de Théorie de l'Approx., 23 (1994), 117-125.

[2]
Calio' F., Marchetti E., Micula G., Pavani R., A new deficient spline functions collocation method for the second order delay differential equations, submitted.

[3]
Calio' F., Marchetti E., Pavani R., About the stability of numerical solution by deficient spline functions of second order neutral delay differential equations, in progress.

[4]
Henrici P., Discrete Variable Methods in Ordinary Differential Equations, Springer Verlag, 1971.

[5]
Lambert J.D., Numerical Methods for Ordinary Differential Systems, John Wiley & Sons, 1991.


An Algorithm for the Computation of the G-Singular Values of a Real Matrix
Tiziano Politi & Giovanni Di Lena & Giuseppe Piazza
(Dipartimento Interuniversitario di Matematica, Politecnico di Bari, Italy)


The well-known Singular Value Decomposition (SVD) theorem states that every matrix $ A$ can be written in the form

$\displaystyle A=U\Sigma V,
$

where $ U$ and $ V$ are orthogonal and $ \Sigma$ is diagonal with nonnegative entries. In this work we consider the problem of computing a special SVD-like decomposition of the matrix $ A$, called the $ G$-SVD. Given a real diagonal matrix $ G$ with diagonal elements equal to $ \pm 1$, the $ G$-SVD of a matrix is a decomposition of the same kind as the usual SVD, but the matrices $ U$ and $ V$ are orthogonal with respect to the metric defined by $ G$ rather than the usual Euclidean metric. The elements of the matrix $ \Sigma$ are called $ G$-singular values.

We study some theoretical aspects of the conditions for the existence of the $ G$-SVD and propose a qd-type algorithm to compute the $ G$-singular values.
The first step of the algorithm is the bidiagonalization of the matrix by means of $ G$-orthogonal matrices called hyperbolic Householder transforms; the second step is the numerical computation of the $ G$-singular values of the resulting bidiagonal matrix. We have also considered a modified algorithm which includes the computation of a shift at each step (as in the algorithms for the computation of ordinary singular values). Some numerical tests are given.
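For the record, a $ G$-orthogonal (hyperbolic) Householder transform can be written as $ H = I - 2\,v v^T G/(v^T G v)$, which satisfies $ H^T G H = G$ whenever $ v^T G v \neq 0$; the small sketch below just verifies this property for an arbitrary illustrative $ G$ and $ v$, and is not the bidiagonalization or qd-type algorithm of the talk.

\begin{verbatim}
import numpy as np

def hyperbolic_householder(v, G):
    """Hyperbolic (G-orthogonal) Householder transform
    H = I - 2 v v^T G / (v^T G v), defined when v^T G v != 0."""
    beta = v @ G @ v
    if abs(beta) < 1e-14:
        raise ValueError("v^T G v must be nonzero")
    return np.eye(len(v)) - (2.0 / beta) * np.outer(v, v) @ G

# signature matrix and an arbitrary vector (illustrative choices only)
G = np.diag([1.0, 1.0, -1.0])
v = np.array([2.0, -1.0, 0.5])
H = hyperbolic_householder(v, G)

# G-orthogonality: H^T G H = G (instead of H^T H = I for ordinary reflectors)
print(np.allclose(H.T @ G @ H, G))
\end{verbatim}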


Rounding error reduction in extrapolation methods
Giulia Spaletta & Mark Sofroniou
(Dipartimento di Matematica, Università di Bologna, Italy)


Extrapolation methods are very efficient when high accuracy is desired in a numerical solution of an ordinary differential equation.

An example of Hairer is used to demonstrate how high order methods can suffer from cumulative rounding error propagation.

A new formulation for reducing the effect of cumulative rounding errors will be outlined and numerical examples will be given to illustrate the benefits over the standard formalism.
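A standard device for limiting the accumulation of rounding errors in long sums of small increments is compensated (Kahan) summation, sketched below on a toy accumulation; it illustrates the kind of effect at stake, but is not claimed to be the specific reformulation presented in the talk.

\begin{verbatim}
import numpy as np

def naive_sum(values):
    s = 0.0
    for v in values:
        s += v
    return s

def kahan_sum(values):
    """Compensated summation: carry the rounding error of each addition
    along in a correction term c."""
    s, c = 0.0, 0.0
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y          # rounding error of s + y
        s = t
    return s

# many tiny increments, as produced by a high-order method taking many steps
increments = np.full(10**6, 0.1, dtype=np.float64)
exact = 1e5
print(f"naive:       error = {abs(naive_sum(increments) - exact):.3e}")
print(f"compensated: error = {abs(kahan_sum(increments) - exact):.3e}")
\end{verbatim}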

Finally, several features of a developmental implementation of extrapolation methods in Mathematica will be illustrated.


