Abstract: Over the past two decades, significant strides have been made in the theory and algorithms for optimization of systems governed by PDEs. These advances in PDE-constrained optimization have enabled many applications of optimal design and control to complex deterministic systems in computational science and engineering. A major remaining challenge for PDE-constrained optimization is accounting for uncertainty in the PDE models. Many PDE models are characterized by uncertain parameters due to a lack of knowledge or intrinsic variability of the inputs, including initial or boundary conditions, sources, coefficients, or geometry. It is important to incorporate this uncertainty into the optimization problem to make the optimal solution more robust and reliable. Indeed, recent years have witnessed rapid growth in research on PDE-constrained optimization under uncertainty. A key challenge is solving PDE-constrained optimization problems under high- or infinite-dimensional uncertainty, owing to the curse of dimensionality faced by most deterministic approximation methods and the slow convergence of statistical (Monte Carlo) methods. In this talk, two cases of uncertainty will be considered: an infinite-dimensional lognormal random field and a random field parametrized by uniformly distributed random variables. I will present sparse polynomial (Taylor, Legendre, and Hermite) approximations for such problems and show theoretical results on dimension-independent convergence rates, which depend only on the sparsity or regularity of the control objective with respect to the uncertain parameter, not on the nominal parameter dimension, thus effectively breaking the curse of dimensionality and achieving fast convergence for sparse problems. I will also present sparse interpolation and sparse quadrature for pointwise evaluation and the computation of statistical moments, respectively, which retain the dimension-independent convergence property under certain mild assumptions.
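
As a minimal sketch of the setting described above, written in generic notation that is not taken from the talk itself, the problem and the sparse approximation it targets might be written as follows; the symbols z, u, y, q, R, P_nu, and the index set Lambda_N are placeholders introduced purely for illustration.

% Minimal sketch (amsmath and amssymb assumed); notation is illustrative only.
% Find a control z minimizing the expected objective, subject to a PDE constraint
% whose state u depends on the uncertain parameter y (e.g., a lognormal random
% field or a field parametrized by uniformly distributed random variables).
\begin{equation*}
  \min_{z} \; \mathbb{E}_{y}\bigl[\, J\bigl(u(y), z\bigr) \,\bigr]
  \quad \text{subject to} \quad
  R\bigl(u(y), z; y\bigr) = 0 \ \text{(the governing PDE)}.
\end{equation*}
% Sparse polynomial (e.g., Taylor, Legendre, or Hermite) approximation of a
% parameter-to-objective map q(y): only the N multi-indices \nu in a sparse set
% \Lambda_N are retained, and the convergence rate in N depends on the sparsity
% or regularity of q in y rather than on the nominal dimension of y.
\begin{equation*}
  q(y) \;\approx\; \sum_{\nu \in \Lambda_N} c_{\nu}\, P_{\nu}(y),
  \qquad |\Lambda_N| = N.
\end{equation*}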