Title: Adaptive Stochastic Galerkin FEM with Hierarchical Tensor Representations
Authors: Eigel, Martin; Pfeffer, Max; Schneider, Reinhold
Type: Research Paper
Date issued: 2015-11-27
Date deposited: 2021-12-17
ISSN: 2197-8085
Handle: https://depositonce.tu-berlin.de/handle/11303/15834
DOI: http://dx.doi.org/10.14279/depositonce-14607
Language: English
Subject: 510 Mathematics
Keywords: partial differential equations with random coefficients; tensor representation; tensor train; uncertainty quantification; stochastic finite element methods; operator equations; adaptive methods; ALS; low-rank; reduced basis methods

Abstract: The solution of PDEs with stochastic data commonly leads to very high-dimensional algebraic problems, e.g. when multiplicative noise is present. The Stochastic Galerkin FEM considered in this paper then suffers from the curse of dimensionality, which is directly related to the number of random variables required for an adequate representation of the random fields in the PDE. With the presented new approach, we circumvent this major complexity obstacle by combining two highly efficient model reduction strategies: a modern low-rank representation of the problem in the tensor train format, and a refinement algorithm based on a posteriori error estimates that adaptively adjusts the different discretizations employed. The adaptive adjustment includes refinement of the FE mesh based on a residual estimator, a problem-adapted stochastic discretization in anisotropic Legendre Wiener chaos, and a successive increase of the tensor rank. Computable a posteriori error estimators are derived for all error terms emanating from the discretizations and from the iterative solution of the problem with a preconditioned ALS scheme. Strikingly, the tensor structure of the problem can be exploited to evaluate all error terms very efficiently. A set of benchmark problems illustrates the performance of the adaptive algorithm with higher-order FE. Moreover, the influence of the tensor rank on the approximation quality is investigated.
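The tensor train (TT) format central to the abstract factorizes a high-dimensional coefficient tensor into a chain of three-way cores, so that storage grows linearly rather than exponentially in the number of dimensions. A minimal sketch of the classical TT-SVD construction in NumPy — not the paper's preconditioned ALS solver; the function names and the truncation-by-fixed-rank strategy here are illustrative assumptions:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a full tensor into tensor-train cores via successive truncated SVDs."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))          # truncate to the prescribed TT rank
        u, s, vt = u[:, :r_new], s[:r_new], vt[:r_new, :]
        cores.append(u.reshape(rank, dims[k], r_new))
        rank = r_new
        # carry the remaining factor forward and unfold along the next mode
        mat = (np.diag(s) @ vt).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.squeeze(axis=(0, -1))
```

For a tensor of exact low TT rank the decomposition is lossless; in the adaptive setting described above, the rank would instead be increased successively under the control of an error estimator.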