DepositOnce Community: https://depositonce.tu-berlin.de/handle/11303/7266

Main Title: Computable convergence bounds for GMRES
URL: https://depositonce.tu-berlin.de/handle/11303/7299
Author(s): Liesen, Jörg
Abstract: The purpose of this paper is to derive new computable convergence bounds for GMRES. The new bounds depend on the initial guess and are thus conceptually different from standard "worst-case" bounds. Most importantly, approximations to the new bounds can be computed from information generated during the run of a certain GMRES implementation. The approximations allow predictions of how the algorithm will perform. Heuristics for such predictions are given. Numerical experiments illustrate the behavior of the new bounds as well as the use of the heuristics.

Main Title: Least squares residuals and minimal residual methods
URL: https://depositonce.tu-berlin.de/handle/11303/7298
Author(s): Liesen, Jörg; Rozlozník, Miroslav; Strakoš, Zdeněk
Abstract: We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods.
Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571--581], even the best orthogonalization technique used for computing the basis cannot compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than the Classical GMRES implementation due to Saad and Schultz [SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869].

Main Title: Convergence of GMRES for tridiagonal Toeplitz matrices
URL: https://depositonce.tu-berlin.de/handle/11303/7296
Author(s): Liesen, Jörg; Strakoš, Zdeněk
Abstract: We analyze the residuals of GMRES [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869] when the method is applied to tridiagonal Toeplitz matrices. We first derive formulas for the residuals as well as their norms when GMRES is applied to scaled Jordan blocks. This problem has been studied previously by Ipsen [BIT, 40 (2000), pp. 524--535] and Eiermann and Ernst [private communication, 2002], but we formulate and prove our results in a different way. We then extend the (lower) bidiagonal Jordan blocks to tridiagonal Toeplitz matrices and study extensions of our bidiagonal analysis to the tridiagonal case. Intuitively, when a scaled Jordan block is extended to a tridiagonal Toeplitz matrix by a superdiagonal of small modulus (compared to the modulus of the subdiagonal), the GMRES residual norms for both matrices and the same initial residual should be close to each other. We confirm and quantify this intuitive statement. We also demonstrate principal difficulties of any GMRES convergence analysis which is based on eigenvector expansion of the initial residual when the eigenvector matrix is ill-conditioned. Such analyses are complicated by a cancellation of possibly huge components due to close eigenvectors, which can prevent achieving well-justified conclusions.

Main Title: Orthogonal Hessenberg reduction and orthogonal Krylov subspace bases
URL: https://depositonce.tu-berlin.de/handle/11303/7295
Author(s): Liesen, Jörg; Saylor, Paul E.
Abstract: We study necessary and sufficient conditions that a nonsingular matrix A can be B-orthogonally reduced to upper Hessenberg form with small bandwidth. By this we mean the existence of a decomposition AV=VH, where H is upper Hessenberg with few nonzero bands, and the columns of V are orthogonal in an inner product generated by a hermitian positive definite matrix B. The classical example for such a decomposition is the matrix tridiagonalization performed by the hermitian Lanczos algorithm, also called the orthogonal reduction to tridiagonal form. Does there exist such a decomposition when A is nonhermitian? In this paper we completely answer this question. The related (but not equivalent) question of necessary and sufficient conditions on A for the existence of short-term recurrences for computing B-orthogonal Krylov subspace bases was completely answered by the fundamental theorem of Faber and Manteuffel [SIAM J. Numer. Anal., 21 (1984), pp. 352--362]. We give a detailed analysis of B-normality, the central condition in both the Faber--Manteuffel theorem and our main theorem, and show how the two theorems are related. Our approach uses only elementary linear algebra tools. We thereby provide new insights into the principles behind Krylov subspace methods that are not provided when more sophisticated tools are employed.

Main Title: Block-diagonal and constraint preconditioners for nonsymmetric indefinite linear systems
URL: https://depositonce.tu-berlin.de/handle/11303/7293
Author(s): Sturler, Eric de; Liesen, Jörg
Abstract: We study block-diagonal preconditioners and an efficient variant of constraint preconditioners for general two-by-two block linear systems with zero (2,2)-block. We derive block-diagonal preconditioners from a splitting of the (1,1)-block of the matrix. From the resulting preconditioned system we derive a smaller, so-called related system that yields the solution of the original problem. Solving the related system corresponds to an efficient implementation of constraint preconditioning. We analyze the properties of both classes of preconditioned matrices, in particular their spectra. Using analytical results, we show that the related system matrix has the more favorable spectrum, which in many applications translates into faster convergence for Krylov subspace methods. We show that fast convergence depends mainly on the quality of the splitting, a topic for which a substantial body of theory exists. Our analysis also provides a number of new relations between block-diagonal preconditioners and constraint preconditioners. For constrained problems, solving the related system produces iterates that satisfy the constraints exactly, just as for systems with a constraint preconditioner. Finally, for the Lagrange multiplier formulation of a constrained optimization problem we show how scaling nonlinear constraints can dramatically improve the convergence for linear systems in a Newton iteration. Our theoretical results are confirmed by numerical experiments on a constrained optimization problem.
We consider the general, nonsymmetric, nonsingular case. Our only additional requirement is the nonsingularity of the Schur-complement--type matrix derived from the splitting that defines the preconditioners. In particular, the (1,2)-block need not equal the transposed (2,1)-block, and the (1,1)-block might be indefinite or even singular. This is the first paper in a two-part sequence. In the second paper we will study the use of our preconditioners in a variety of applications.

Main Title: GMRES convergence analysis for a convection-diffusion model problem
URL: https://depositonce.tu-berlin.de/handle/11303/7292
Author(s): Liesen, Jörg; Strakoš, Zdenek
Abstract: When GMRES [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869] is applied to streamline upwind Petrov--Galerkin (SUPG) discretized convection-diffusion problems, it typically exhibits an initial period of slow convergence followed by a faster decrease of the residual norm. Several attempts have been made to understand this behavior. However, the existing analyses are based solely on the matrix of the discretized system and do not take into account any influence of the right-hand side (determined by the boundary conditions and/or source term in the PDE). Therefore they cannot explain the length of the initial period of slow convergence, which is right-hand side dependent.
We concentrate on a frequently used model problem with Dirichlet boundary conditions and with a constant velocity field parallel to one of the axes. Instead of the eigendecomposition of the system matrix, which is ill conditioned, we use its orthogonal transformation into a block-diagonal matrix with nonsymmetric tridiagonal Toeplitz blocks and offer an explanation of GMRES convergence. We show how the initial period of slow convergence is related to the boundary conditions and address the question of why the convergence accelerates in the second stage.

Main Title: When is the adjoint of a matrix a low degree rational function in the matrix?
URL: https://depositonce.tu-berlin.de/handle/11303/7290
Author(s): Liesen, Jörg
Abstract: We show that the adjoint $A^+$ of a matrix A with respect to a given inner product is a rational function in A if and only if A is normal with respect to the inner product. We consider such matrices and analyze the McMillan degrees of the rational functions r such that $A^+=r(A)$. We introduce the McMillan degree of A as the smallest among these degrees, characterize this degree in terms of the number and distribution of the eigenvalues of A, and compare the McMillan degree with the normal degree of A, which is defined as the smallest degree of a polynomial p for which $A^+=p(A)$. We show that unless the eigenvalues of A lie on a single circle in the complex plane, the ratio of the normal degree and the McMillan degree of A is bounded by a small constant that depends neither on the number nor on the distribution of the eigenvalues of A. Our analysis is motivated by applications in the area of short recurrence Krylov subspace methods.

Main Title: The Faber–Manteuffel theorem for linear operators
URL: https://depositonce.tu-berlin.de/handle/11303/7289
Author(s): Faber, Vance; Liesen, Jörg; Tichý, Petr
Abstract: A short recurrence for orthogonalizing Krylov subspace bases for a matrix A exists if and only if the adjoint of A is a low-degree polynomial in A (i.e., A is normal of low degree). In the area of iterative methods, this result is known as the Faber–Manteuffel theorem [V. Faber and T. Manteuffel, SIAM J. Numer. Anal., 21 (1984), pp. 352–362]. Motivated by the description given by J. Liesen and Z. Strakoš, we formulate this theorem here in terms of linear operators on finite dimensional Hilbert spaces and give two new proofs of the necessity part. We have chosen the linear operator rather than the matrix formulation because we found that a matrix-free proof is less technical. Of course, the linear operator result contains the Faber–Manteuffel theorem for matrices.

Main Title: On optimal short recurrences for generating orthogonal Krylov subspace bases
URL: https://depositonce.tu-berlin.de/handle/11303/7287
Author(s): Liesen, Jörg; Strakoš, Zdenek
Abstract: We analyze necessary and sufficient conditions on a nonsingular matrix A such that, for any initial vector $r_0$, an orthogonal basis of the Krylov subspaces ${\cal K}_n(A,r_0)$ is generated by a short recurrence. Orthogonality here is meant with respect to some unspecified positive definite inner product. This question is closely related to the question of the existence of optimal Krylov subspace solvers for linear algebraic systems, where optimal means the smallest possible error in the norm induced by the given inner product. The conditions on A we deal with were first derived and characterized more than 20 years ago by Faber and Manteuffel [SIAM J. Numer. Anal., 21 (1984), pp. 352–362]. Their main theorem is often quoted and appears to be widely known. Its details and underlying concepts, however, are quite intricate, with some subtleties not covered in the literature we are aware of. Our paper aims to present and clarify the existing important results in the context of the Faber–Manteuffel theorem. Furthermore, we review attempts to find an easier proof of the theorem and explain what remains to be done in order to complete that task.

Main Title: On best approximations of polynomials in matrices in the matrix 2-norm
URL: https://depositonce.tu-berlin.de/handle/11303/7286
Author(s): Liesen, Jörg; Tichý, Petr
Abstract: We show that certain matrix approximation problems in the matrix 2-norm have uniquely defined solutions, despite the lack of strict convexity of the matrix 2-norm. The problems we consider are generalizations of the ideal Arnoldi and ideal GMRES approximation problems introduced by Greenbaum and Trefethen [SIAM J. Sci. Comput., 15 (1994), pp. 359–368]. We also discuss general characterizations of best approximation in the matrix 2-norm and provide an example showing that a known sufficient condition for uniqueness in these characterizations is not necessary.

Main Title: Nonlinear eigenvalue problems with specified eigenvalues
URL: https://depositonce.tu-berlin.de/handle/11303/7281
Author(s): Karow, Michael; Kressner, Daniel; Mengi, Emre
Abstract: This work considers eigenvalue problems that are nonlinear in the eigenvalue parameter. Given such a nonlinear eigenvalue problem T, we are concerned with finding the minimal backward error such that T has a set of prescribed eigenvalues with prescribed algebraic multiplicities. We consider backward errors that only allow constant perturbations, which do not depend on the eigenvalue parameter. While the usual resolvent norm addresses this question for a single eigenvalue of multiplicity one, the general setting involving several eigenvalues is significantly more difficult. Under mild assumptions, we derive a singular value optimization characterization for the minimal perturbation that addresses the general case.

Main Title: Structured eigenvalue backward errors of matrix pencils and polynomials with palindromic structures
URL: https://depositonce.tu-berlin.de/handle/11303/7280
Author(s): Bora, Shreemayee; Karow, Michael; Mehl, Christian; Sharma, Punit
Abstract: We derive formulas for the backward error of an approximate eigenvalue of a *-palindromic matrix polynomial with respect to *-palindromic perturbations. Such formulas are also obtained for complex T-palindromic pencils and quadratic polynomials. When the T-palindromic polynomial is real, we derive the backward error of a real number considered as an approximate eigenvalue of the matrix polynomial with respect to real T-palindromic perturbations. In all cases the corresponding minimal structure preserving perturbations are obtained as well. The results are illustrated by numerical experiments, which show that in many cases there is a significant difference between the backward errors with respect to structure preserving and arbitrary perturbations.

Main Title: Structured eigenvalue backward errors of matrix pencils and polynomials with Hermitian and related structures
URL: https://depositonce.tu-berlin.de/handle/11303/7279
Author(s): Bora, Shreemayee; Karow, Michael; Mehl, Christian; Sharma, Punit
Abstract: We derive a formula for the backward error of a complex number λ when considered as an approximate eigenvalue of a Hermitian matrix pencil or polynomial with respect to Hermitian perturbations. The same formulas are also obtained for approximate eigenvalues of matrix pencils and polynomials with related structures like skew-Hermitian, *-even, and *-odd. Numerical experiments suggest that in many cases there is a significant difference between the backward errors with respect to perturbations that preserve structure and those with respect to arbitrary perturbations.

Main Title: On a perturbation bound for invariant subspaces of matrices
URL: https://depositonce.tu-berlin.de/handle/11303/7278
Author(s): Karow, Michael; Kressner, Daniel
Abstract: Given a nonsymmetric matrix A, we investigate the effect of perturbations on an invariant subspace of A. The result derived in this paper differs from Stewart's classical result and sometimes yields tighter bounds. Moreover, we provide norm estimates for the remainder terms in well-known perturbation expansions for invariant subspaces, eigenvectors, and eigenvalues.

Main Title: Perturbation theory for Hamiltonian matrices and the distance to bounded-realness
URL: https://depositonce.tu-berlin.de/handle/11303/7277
Author(s): Alam, Rafikul; Bora, Shreemayee; Karow, Michael; Mehrmann, Volker; Moro, Julio
Abstract: Motivated by the analysis of passive control systems, we undertake a detailed perturbation analysis of Hamiltonian matrices that have eigenvalues on the imaginary axis. We construct minimal Hamiltonian perturbations that move and coalesce eigenvalues of opposite sign characteristic to form multiple eigenvalues with mixed sign characteristics, which are then moved from the imaginary axis to specific locations in the complex plane by small Hamiltonian perturbations. We also present a numerical method to compute upper bounds for the minimal perturbations that move all eigenvalues of a given Hamiltonian matrix outside a vertical strip along the imaginary axis.

Main Title: μ-values and spectral value sets for linear perturbation classes defined by a scalar product
URL: https://depositonce.tu-berlin.de/handle/11303/7275
Author(s): Karow, Michael
Abstract: We study the variation of the spectrum of matrices under perturbations which are self- or skew-adjoint with respect to a scalar product. Computable formulas are given for the associated μ-values. The results can be used to calculate spectral value sets for the perturbation classes under consideration. We discuss the special case of complex Hamiltonian perturbations of a Hamiltonian matrix in detail.

Main Title: Structured pseudospectra for small perturbations
URL: https://depositonce.tu-berlin.de/handle/11303/7273
Author(s): Karow, Michael
Abstract: In this paper we study the shape and growth of structured pseudospectra for small matrix perturbations of the form $A \leadsto A_\Delta=A+B\Delta C$, $\Delta \in \boldsymbol{\Delta}$, $\|\Delta\|\leq \delta$. It is shown that the properly scaled pseudospectra components converge to nontrivial limit sets as $\delta$ tends to 0. We discuss the relationship of these limit sets with $\mu$-values and structured eigenvalue condition numbers for multiple eigenvalues.

Main Title: Structured pseudospectra and the condition of a nonderogatory eigenvalue
URL: https://depositonce.tu-berlin.de/handle/11303/7272
Author(s): Karow, Michael
Abstract: Let $\lambda$ be a nonderogatory eigenvalue of $A\in\mathbb{C}^{n\times n}$ of algebraic multiplicity m. The sensitivity of $\lambda$ with respect to matrix perturbations of the form $A\leadsto A+\Delta$, $\Delta\in\boldsymbol{\Delta}$, is measured by the structured condition number $\kappa_{\boldsymbol{\Delta}}(A,\lambda)$. Here $\boldsymbol{\Delta}$ denotes the set of admissible perturbations. However, if $\boldsymbol{\Delta}$ is not a vector space over $\mathbb{C}$, then $\kappa_{\boldsymbol{\Delta}}(A,\lambda)$ provides only incomplete information about the mobility of $\lambda$ under small perturbations from $\boldsymbol{\Delta}$. The full information is then given by the set $K_{\boldsymbol{\Delta}}(x,y)=\{y^*\Delta x : \Delta\in\boldsymbol{\Delta},\ \|\Delta\|\leq 1\}\subset\mathbb{C}$, which depends on $\boldsymbol{\Delta}$, a pair of normalized right and left eigenvectors $x,y$, and the norm $\|\cdot\|$ that measures the size of the perturbations. We always have $\kappa_{\boldsymbol{\Delta}}(A,\lambda)=\max\{|z|^{1/m} : z\in K_{\boldsymbol{\Delta}}(x,y)\}$. Furthermore, $K_{\boldsymbol{\Delta}}(x,y)$ determines the shape and growth of the $\boldsymbol{\Delta}$-structured pseudospectrum in a neighborhood of $\lambda$. In this paper we study the sets $K_{\boldsymbol{\Delta}}(x,y)$ and obtain methods for computing them. In doing so we obtain explicit formulae for structured eigenvalue condition numbers with respect to many important perturbation classes.

Main Title: Properties of worst-case GMRES
URL: https://depositonce.tu-berlin.de/handle/11303/7271
Author(s): Faber, Vance; Liesen, Jörg; Tichý, Petr
Abstract: In the convergence analysis of the GMRES method for a given matrix $A$, one quantity of interest is the largest possible residual norm that can be attained, at a given iteration step $k$, over all unit norm initial vectors. This quantity is called the worst-case GMRES residual norm for $A$ and $k$. We show that the worst-case behavior of GMRES for the matrices $A$ and $A^T$ is the same, and we analyze properties of initial vectors for which the worst-case residual norm is attained. In particular, we prove that such vectors satisfy a certain “cross equality.” We show that the worst-case GMRES polynomial may not be uniquely determined, and we consider the relation between the worst-case and the ideal GMRES approximations, giving new examples in which the inequality between the two quantities is strict at all iteration steps $k\geq 3$. Finally, we give a complete characterization of how the values of the approximation problems change in the context of worst-case and ideal GMRES for a real matrix, when one considers complex (rather than real) polynomials and initial vectors.

Main Title: A framework for deflated and augmented Krylov subspace methods
URL: https://depositonce.tu-berlin.de/handle/11303/7270
Author(s): Gaul, André; Gutknecht, Martin H.; Liesen, Jörg; Nabben, Reinhard
Abstract: We consider deflation and augmentation techniques for accelerating the convergence of Krylov subspace methods for the solution of nonsingular linear algebraic systems. Despite some formal similarity, the two techniques are conceptually different from preconditioning. Deflation (in the sense the term is used here) “removes” certain parts from the operator, making it singular, while augmentation adds a subspace to the Krylov subspace (often the one that is generated by the singular operator); in contrast, preconditioning changes the spectrum of the operator without making it singular. Deflation and augmentation have been used in a variety of methods and settings. Typically, deflation is combined with augmentation to compensate for the singularity of the operator, but both techniques can be applied separately. We introduce a framework of Krylov subspace methods that satisfy a Galerkin condition. It includes the families of orthogonal residual and minimal residual methods. We show that in this framework augmentation can be achieved either explicitly or, equivalently, implicitly by projecting the residuals appropriately and correcting the approximate solutions in a final step. We study conditions for a breakdown of the deflated methods, and we show several possibilities to avoid such breakdowns for the deflated minimum residual (MinRes) method. Numerical experiments illustrate properties of different variants of deflated MinRes analyzed in this paper.

Main Title: On Chebyshev polynomials of matrices
URL: https://depositonce.tu-berlin.de/handle/11303/7269
Author(s): Liesen, Jörg; Faber, Vance; Tichý, Petr
Abstract: The mth Chebyshev polynomial of a square matrix A is the monic polynomial that minimizes the matrix 2-norm of $p(A)$ over all monic polynomials $p(z)$ of degree m. This polynomial is uniquely defined if m is less than the degree of the minimal polynomial of A. We study general properties of Chebyshev polynomials of matrices, which in some cases turn out to be generalizations of well-known properties of Chebyshev polynomials of compact sets in the complex plane. We also derive explicit formulas for the Chebyshev polynomials of certain classes of matrices, and explore the relation between Chebyshev polynomials of one of these matrix classes and Chebyshev polynomials of lemniscatic regions in the complex plane.
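The minimization defining a Chebyshev polynomial of a matrix can be checked directly in one small special case. The following sketch is our own illustration, not code from the paper above: for the 4-by-4 nilpotent Jordan block S (ones on the superdiagonal) and degree m = 2 < 4, the minimizer is $p(z) = z^2$ with minimal value $\|S^2\|_2 = 1$, because the column of $p(S)$ containing the leading coefficient has norm $\sqrt{1 + c_0^2 + c_1^2} \geq 1$, so nonzero lower-order coefficients can only increase the norm. The helper `norm2` (power iteration on $A^T A$) is our own small utility, not a library routine.

```python
import random

def matmul(A, B):
    # plain list-of-lists matrix product
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(c, A):
    return [[c * a for a in row] for row in A]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def norm2(A, iters=200):
    # spectral norm estimate via power iteration on A^T A
    rows, cols = len(A), len(A[0])
    v = [random.random() + 0.1 for _ in range(cols)]
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        u = [sum(A[i][j] * w[i] for i in range(rows)) for j in range(cols)]
        s = sum(x * x for x in u) ** 0.5
        v = [x / s for x in u]
    w = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    return sum(x * x for x in w) ** 0.5

random.seed(1)
n = 4
# nilpotent Jordan block: ones on the superdiagonal, zeros elsewhere
S = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]
Sm = matmul(S, S)   # p(S) for the candidate p(z) = z^2
best = norm2(Sm)    # equals 1, the minimal attainable value

# random monic competitors z^2 + c1*z + c0 never do better
for _ in range(20):
    c0, c1 = random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)
    P = matadd(Sm, matadd(scal(c1, S), scal(c0, eye(n))))
    assert norm2(P) >= best - 1e-6
```

The same column-norm argument works for any degree m less than the block size n, which is consistent with the uniqueness condition (m below the degree of the minimal polynomial) stated in the abstract.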