Titles and Abstracts (updating)

**Speaker:** Weizhu Bao (National University of Singapore)

**Title:** Analysis and computation for ground states in degenerate quantum gas

**Abstract:** The ground state of a degenerate quantum gas is defined as the minimizer of an energy functional under a constraint, which is an infinite-dimensional nonconvex minimization problem. In this talk, I will discuss the existence and uniqueness as well as non-existence of the ground state under different parameter regimes; present asymptotic approximations in some limiting parameter regimes; and propose some efficient and accurate numerical methods for computing the ground states. Extensions to degenerate quantum gases in a rotating frame, with nonlocal interaction and with spin-orbit coupling will be discussed.

**Speaker:** Emmanuel Candes (Stanford)

**Title:** Recent progress on the phase retrieval problem: solving quadratic equations via Wirtinger flows

**Abstract:** In many imaging problems such as X-ray crystallography, detectors can only record the intensity or magnitude of a diffracted wave, as opposed to measuring its phase. This means that we need to solve quadratic equations, which is notoriously NP-hard, as opposed to linear ones. The focus of the talk is on a novel non-convex algorithm, which is provably exact for solving such problems. This algorithm, dubbed the Wirtinger flow algorithm, finds the solution to randomized quadratic systems from a number of equations (samples) and a number of flops that are both optimal. At a high level, the algorithm can be interpreted as a sort of stochastic gradient scheme, starting from a guess obtained by means of a spectral method. We demonstrate that the Wirtinger flow reconstruction degrades gracefully as the signal-to-noise ratio decreases. The empirical performance will be demonstrated on phase retrieval problems, which are at the center of spectacular current research efforts, collectively known under the name of coherent diffraction imaging, aimed, among other things, at determining the 3D structure of large protein complexes.

This is joint work with Xiaodong Li, Mahdi Soltanolkotabi and Yuxin Chen.
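As a rough companion to the abstract above, here is a toy real-valued sketch of the two ingredients it describes: a spectral initialization followed by gradient iterations on the quadratic-measurement loss. This is an illustrative simplification under assumed Gaussian sampling vectors, not the authors' algorithm; the actual Wirtinger flow works over complex-valued signals and uses a carefully designed step-size schedule.

```python
import numpy as np

def wirtinger_flow_sketch(A, y, iters=2000, mu=0.1):
    """Toy real-valued analogue of Wirtinger flow for y_i = (a_i^T x)^2.

    Spectral initialization (leading eigenvector of a data matrix),
    then plain gradient descent on the quartic least-squares loss.
    """
    m, n = A.shape
    # Spectral initialization: leading eigenvector of (1/m) * sum_i y_i a_i a_i^T
    Y = (A.T * y) @ A / m
    _, V = np.linalg.eigh(Y)
    z = V[:, -1] * np.sqrt(y.mean())  # scale estimate: E[y] ~ ||x||^2
    z0_sq = z @ z
    for _ in range(iters):
        Az = A @ z
        # Gradient of (1/4m) * sum_i ((a_i^T z)^2 - y_i)^2
        grad = A.T @ ((Az**2 - y) * Az) / m
        z = z - (mu / z0_sq) * grad
    return z
```

For a well-oversampled random instance (e.g. 400 Gaussian rows, 20 unknowns), the iterate converges to the true signal up to the unavoidable global sign.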

**Speaker:** Raymond Chan (CUHK)

**Title:** A Two-Stage Image Segmentation Method Using a Convex Variant of the Mumford-Shah Model and Thresholding

**Abstract:** The Mumford-Shah model is one of the most important image segmentation models, and has been studied extensively in the last twenty years. In this talk, we propose a two-stage segmentation method based on the Mumford-Shah model. The first stage of our method is to find a smooth solution g to a convex variant of the Mumford-Shah model. Once g is obtained, then in the second stage the segmentation is done by thresholding g into different phases. The thresholds can be given by the users or obtained automatically using any clustering method. Because of the convexity of the model, g can be computed efficiently by techniques like the split-Bregman algorithm or the Chambolle-Pock method. We prove that our method is convergent and that the solution g is always unique. In our method, there is no need to specify the number of segments K (K>=2) before finding g. We can obtain any K-phase segmentation by choosing (K-1) thresholds after g is found in the first stage; and in the second stage there is no need to recompute g if the thresholds are changed to reveal different segmentation features in the image. Experimental results show that our two-stage method performs better than many standard two-phase or multi-phase segmentation methods for very general images, including anti-mass, tubular, MRI, noisy, and blurry images, and for very general noise models such as Gaussian, Poisson and multiplicative Gamma noise.
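The second stage described above amounts to simple thresholding once g is available. A minimal sketch, assuming g has already been computed by the convex first-stage model (which is the substantive part of the method):

```python
import numpy as np

def threshold_segmentation(g, thresholds):
    """Partition a smooth image g into K = len(thresholds) + 1 phases.

    Changing the thresholds re-segments instantly; there is no need
    to recompute g.
    """
    return np.digitize(g, np.sort(thresholds))

g = np.array([[0.1, 0.4],
              [0.6, 0.9]])
labels = threshold_segmentation(g, [0.3, 0.7])  # a 3-phase segmentation
```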

**Speaker:** Tony Chan (HKUST)

**Title:** Four color theorem for image segmentation

**Abstract:** Image segmentation is an essential problem in imaging science. One of the most successful segmentation models is the piecewise constant Mumford-Shah minimization model. This minimization problem is however difficult to carry out, mainly due to the non-convexity of the energy. Recent advances based on convex relaxation methods are capable of estimating almost perfectly the geometry of the regions to be segmented when the mean intensity and the number of segmented regions are known a priori. The next important challenge is to provide a tight approximation of the optimal geometry, mean intensity and number of regions simultaneously while keeping the computational time and memory usage reasonable. In this work, we propose a new algorithm that combines convex relaxation methods with the four color theorem to deal with the unsupervised segmentation problem. More precisely, the proposed algorithm can segment any a priori unknown number of regions with only four intensity functions and four indicator (labeling) functions. The number of regions in our segmentation model is decided by one parameter that controls the regularization strength of the geometry, i.e., the total length of the boundary of all the regions. The segmented image function can take as many constant values as needed. We will present the details of the new model as well as the numerical techniques used to solve it.

This talk is based on joint work with: Xavier Bresson, Xue-Cheng Tai and Ruiliang Zhang.

**Speaker:** Xiao-Jun Chen (HKPU)

**Title:** Nonsmooth, Nonconvex Regularized Minimization for Sparse Approximations

**Abstract:** Minimization problems with nonsmooth, nonconvex, perhaps even non-Lipschitz regularization terms have wide applications in image restoration, signal reconstruction and variable selection. For minimization with a class of regularization terms including the SCAD, MCP, logistic, fraction, hard thresholding and non-Lipschitz L_{p} penalties, we show that finding a global optimal solution is strongly NP-hard. On the other hand, we present lower bounds on the nonzero entries of every local optimal solution. Such lower bounds can be used to classify zero and nonzero entries in local optimal solutions and to select regularization parameters for desirable sparsity of solutions. Moreover, we show that smoothing methods are efficient for solving such regularized minimization problems. In particular, we introduce a smoothing SQP method which can find an affine-scaled ϵ-stationary point from any starting point with complexity O(ϵ^{-2}), and a smoothing trust region Newton method which can find a point satisfying the affine-scaled second order necessary condition from any starting point. Examples with the six widely used nonsmooth nonconvex regularization terms are presented to illustrate the theory and algorithms.
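As a generic illustration of the smoothing idea (the particular surrogate and the function names below are my own choices, not the authors' construction), a non-Lipschitz penalty such as |t|^p can be replaced by a smooth approximation with parameter mu, whose gradient is well defined everywhere, including at t = 0:

```python
import numpy as np

def smoothed_lp(t, p=0.5, mu=1e-3):
    """Smooth surrogate for the non-Lipschitz penalty |t|^p (0 < p < 1)."""
    return (t**2 + mu**2) ** (p / 2)

def smoothed_lp_grad(t, p=0.5, mu=1e-3):
    """Gradient of the surrogate; finite at t = 0, unlike d|t|^p/dt."""
    return p * t * (t**2 + mu**2) ** (p / 2 - 1)
```

As mu decreases, the surrogate approaches |t|^p away from the origin while staying differentiable, which is what a smoothing method exploits.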

**Speaker:** Yuhong Dai (AMSS, CAS)

**Title:** Efficient Projected Gradient Methods for A Class of $L_0$ Constrained Optimization

**Abstract:** Sparse optimization has attracted increasing attention in numerous areas such as compressed sensing, financial optimization and image processing. In this paper, we consider a special class of $L_0$ constrained optimization, which involves box constraints and a single linear constraint. An efficient approach is provided for calculating the projection onto the feasible set, after a careful analysis of the projection subproblem. Then we present several types of projected gradient methods for this special class of $L_0$ constrained optimization. Global convergence of the methods is established under suitable assumptions. Computational results on signal recovery and enhanced indexation demonstrate that the proposed projected gradient methods are efficient in terms of both solution quality and speed.

This is a joint work with Fengmin Xu, Zhihua Zhao and Zongben Xu.
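For context on the building blocks mentioned above: if one drops the box and linear constraints (precisely the ingredients that make the authors' projection subproblem nontrivial), the projection onto a pure sparsity constraint has a simple closed form, and a projected gradient loop takes a few lines. A hypothetical sketch:

```python
import numpy as np

def project_l0(x, s):
    """Euclidean projection onto {z : ||z||_0 <= s}: keep the s largest |entries|."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

def projected_gradient(grad_f, x0, s, step=0.4, iters=100):
    """Generic projected gradient loop for min f(x) subject to ||x||_0 <= s."""
    x = x0
    for _ in range(iters):
        x = project_l0(x - step * grad_f(x), s)
    return x
```

For example, minimizing ||x - b||^2 over 2-sparse vectors with b = (3, -1, 2) returns (3, 0, 2).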

**Speaker:** Bjorn Engquist (University of Texas at Austin)

**Title:** Remarks on sparsity and complexity in numerical analysis

**Abstract:** Fast numerical algorithms are typically connected with sparse representation of functions or operators. We will discuss a couple of cases for which extra information allows for sparse representation and low computational complexity. One example is sparse representation of multiscale functions based on information theory. Another is compression of solution operators in the simulation of wave propagation.

**Speaker:** Jianqing Fan (Princeton University)

**Title:** Statistical Computing and Spurious Discoveries in Big Data

**Abstract:** We will first introduce the "divide-and-conquer" approach, popularly used in Big-Data statistical computing. This method can be very powerful when combined with a variable screening approach. We then address the issue of spurious discoveries in big data arising from data-mining techniques. We derive the distributions of the maximum spurious correlations given a certain number of predictors, and propose a multiplier bootstrap method to approximate the unknown distributions. Our approach is then applied to constructing the upper confidence limit for the maximum spurious correlation and to testing the exogeneity of covariates. The former provides a baseline for guarding against false discoveries due to data mining, and the latter tests whether our fundamental assumptions for high-dimensional model selection are statistically valid. Our techniques and results are illustrated by numerical examples.

**Speaker:** Jinyan Fan (Shanghai Jiaotong University)

**Title:** Completely positive tensor decomposition

**Abstract:** A symmetric tensor that has a symmetric nonnegative decomposition is called a completely positive tensor. In this talk, we discuss the completely positive tensor decomposition problem. A semidefinite algorithm is presented for checking whether a symmetric tensor is completely positive. If it is not completely positive, a certificate of this can be obtained; if it is completely positive, a nonnegative decomposition can be obtained.

**Speaker:** Michael Ferris (University of Wisconsin)

**Title:** Modeling and Optimization within Interacting Systems

**Abstract:** We consider models built up from a collection of optimizations within an interacting physical, economic or virtual system. We show how optimization and equilibrium concepts can be deployed and resulting models solved within an extended mathematical programming framework. Adaptive multi-period look-ahead policies based on streaming data or offline data collections will be demonstrated. Examples are drawn from land use modeling, economics, and risk analysis. The interplay between stochasticity, complementarity and optimization will be highlighted.

**Speaker:** Patrick Flandrin (CNRS & ENS de Lyon, France)

**Title:** On spectrogram zeros

**Abstract:** In close connection with time-frequency uncertainty relations, spectrograms have some built-in redundancy which constrains the landscape of their surface, thus leading to simplified sparse descriptions based on a reduced number of salient features. As a complement to a previous study devoted to spectrogram local maxima (and, hence, to sparse representations such as reassigned and/or synchrosqueezed transforms), this talk will focus on spectrogram zeros in the generic case of white Gaussian noise. A detailed study of the phase of the short-time Fourier transform will first evidence universal geometrical structures in the vicinity of zeros, namely singularities and dislocations. It will then be shown that attaching a Voronoi tessellation to zeros considered as a 2D spatial process in the time-frequency plane ends up with a constrained distribution of cells that reflects uncertainty and paves the way for identifying signal components in noisy observations on the basis of geometrical criteria.

**Speaker:** Don Goldfarb (Columbia University)

**Title:** Low-rank Matrix and Tensor Recovery: Theory and Algorithms

**Abstract:** Recovering a low-rank matrix or tensor from incomplete or corrupted observations is a recurring problem in signal processing and machine learning. To exploit the structure of data that is intrinsically more than two-dimensional, convex models such as low-rank completion and robust principal component analysis (RPCA) for matrices have been extended to tensors. In this talk, we rigorously establish recovery guarantees for both tensor completion and tensor RPCA. We demonstrate that using the most popular convex relaxation of the tensor Tucker rank can be substantially sub-optimal in terms of the number of observations needed for exact recovery. We introduce a very simple, new convex relaxation which is shown to be much better, both theoretically and empirically. Moreover, we propose algorithms to solve both low-rank matrix and tensor recovery models based on the alternating direction augmented Lagrangian (ADAL), Frank-Wolfe and proximal gradient methods. Finally, we empirically investigate the recoverability properties of these convex models, and the computational performance of our algorithms on both simulated and real data.

This is joint work with Cun Mu, Bo Huang and Tony Qin (current and former IEOR PhD students at Columbia University) and John Wright (E.E. faculty member at Columbia University).

**Speaker:** Tiande Guo (University of CAS)

**Title:** An Optimization Model for Variable Dimension and Its Applications in Fingerprint Recognition

**Abstract:** Optimization models with variable dimension, especially optimization problems in a dynamic environment (DE), have many applications, arising from the vulnerable nature of construction schedules, resource limitations, multiple objectives, project uncertainties, business priorities, image processing and pattern recognition. The goal is not only to locate an optimum, but also to track the moving optimum as closely as possible. The presented optimization problem is very challenging, as it includes many prediction and optimization elements. In this talk, a variable dimension optimization model is first established. Secondly, some algorithms are introduced to search for solutions of the above optimization problem. Finally, some applications of this optimization model in fingerprint recognition are proposed. Simulation experiments show that the models and algorithms outperform traditional algorithms for such recognition problems.

**Speaker:** Thomas Yizhao Hou (Caltech)

**Title:** Intrinsic Sparse Mode Decomposition of High Dimensional Random Fields with Application to Stochastic Elliptic PDEs

**Abstract:** Inspired by recent developments in data science, we introduce an intrinsic sparse mode decomposition method for high dimensional random fields. This sparse representation of the random field allows us to break a high dimensional stochastic field into many spatially localized modes with low stochastic dimension, which enables us to break the curse of dimensionality in our local solvers. To obtain such a representation, we first decompose the covariance function into a low-rank part plus a sparse part. We then extract the spatially localized modes from the sparse part by minimizing a surrogate of the total area of the support. Moreover, we provide an efficient algorithm to solve this problem. As one example of application, we apply this technique to solve stochastic elliptic PDEs with high dimensional stochastic coefficients. Various combinations of local and global solvers achieve different levels of accuracy and efficiency. Complexity and accuracy analysis will be demonstrated. At the end of the talk, I will also briefly discuss other applications of the intrinsic sparse mode extraction. This is a joint work with Qin Li and Pengchuan Zhang.

**Speaker:** Norden E. Huang (National Central University)

**Title:** A New Holo-Hilbert Spectral Analysis for Nonlinear and Nonstationary Data

**Abstract:** The spectrum is an extremely useful tool for data analysis. Physically, the idea is to project data of arbitrary length onto a finite frequency domain covering 1/T to the Nyquist value, with T the total data length and the Nyquist value half of the sampling frequency. With this projection, the statistical properties can be studied in detail. To implement this idea mathematically, we have to define frequency. A ready choice is the Fourier transform. For more than 200 years, the Fourier transform has dominated the field because of its simplicity. The power of the Fourier transform has led people to use it without proper consideration of its limitations: the stationarity and linearity assumptions. In fact, if the data violate any of these assumptions, the answer provided is neither mathematically nor physically meaningful. The most serious flaw of Fourier spectral analysis is that it converts all processes to additive ones. To extract information and to depict the effects of nonlinear multiplicative processes that include inter-scale coupling (phase-amplitude and phase-locked modulations), we have to find new methods to analyze the temporal variations in the amplitude part of the Intrinsic Mode Functions and to represent them appropriately in frequency space other than as a trend. With the new Holo-Hilbert Spectral Analysis (HHSA) proposed here, we accomplish this objective by adding new dimensions to the spectrum to accommodate the FM and AM variations simultaneously. This full-information spectral representation will give us a spectrum that can represent all the possible processes: additive and multiplicative, intra- and inter-mode, stationary and nonstationary, linear and nonlinear interactions. This new approach also makes possible a new index for quantifying the inter-mode degree of nonlinearity. Examples of various nonstationary and nonlinear processes will be used to illustrate the new view of spectral representation.

**Speaker:** Bingyi Jing (HKUST)

**Title:** Fused-Lasso with non-convex penalty

**Abstract:** In this talk, we will investigate feature selection using non-convex penalties. In particular, we study the minimax concave penalty (MCP) in the context of image reconstruction. We will illustrate that the new method gives much sharper images along the edges and better contrast than alternatives. Furthermore, we will try to develop algorithms for using these non-convex penalties.
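For concreteness, here is a sketch of the MCP penalty and its scalar thresholding (proximal) operator, the property that makes MCP attractive computationally: it acts like soft-thresholding near zero but leaves large coefficients unbiased. This is a generic illustration (assuming gamma > 1), not the talk's reconstruction algorithm:

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP penalty: quadratically flattened |t| that becomes constant for |t| > gamma*lam."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a**2 / (2 * gamma),
                    0.5 * gamma * lam**2)

def mcp_threshold(z, lam, gamma):
    """Proximal (thresholding) operator of MCP for gamma > 1.

    Soft-thresholding with a rescaling for small |z|; identity (no bias)
    for |z| > gamma*lam.
    """
    small = np.abs(z) <= gamma * lam
    soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0) / (1 - 1 / gamma)
    return np.where(small, soft, z)
```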

**Speaker:** Ruo Li (Peking University)

**Title:** Finding 13-Moment System Beyond Grad

**Abstract:** We point out that the thermodynamic equilibrium is not an interior point of the hyperbolicity region of Grad's 13-moment system. With a compact expansion of the phase density, which is more compact than Grad's expansion, we derive a modified 13-moment system. The new 13-moment system admits the thermodynamic equilibrium as an interior point of its hyperbolicity region. We deduce a concise criterion ensuring hyperbolicity, so that the hyperbolicity region can be quantitatively depicted.

**Speaker:** Tom Zhi-Quan Luo (University of Minnesota)

**Title:** Guaranteed Matrix Completion via Nonconvex Factorization

**Abstract:** Matrix factorization is a popular approach for large-scale low-rank matrix completion. For instance, it is a basic ingredient of many proposed solutions for the notable Netflix Prize competition. In this approach, the unknown low-rank matrix is expressed as the product of two much smaller matrices so that the low-rank property is automatically fulfilled. The resulting optimization problem, even at huge sizes, can be solved (to local optima) very efficiently through standard optimization algorithms such as alternating minimization and stochastic gradient descent. However, due to the nonconvexity caused by the factorization model, there is limited theoretical understanding of whether these algorithms will generate a good solution. In this paper, we establish a theoretical guarantee that the factorization-based formulation correctly recovers the low-rank matrix. In particular, we show that under conditions similar to those in previous works, many standard optimization algorithms converge to the global optima of the factorization-based formulation and thus recover the true matrix. Our result applies to most locally convergent algorithms such as gradient descent, stochastic gradient descent and block coordinate descent type methods (including alternating minimization).
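A minimal sketch of the factorization approach the abstract analyzes: write the unknown matrix as U V^T and run gradient descent on the residual over the observed entries. The step size, initialization scale and iteration count below are illustrative assumptions, not tuned choices from the paper:

```python
import numpy as np

def factorized_completion(M, mask, r, step=0.2, iters=3000, seed=0):
    """Sketch: minimize ||P_Omega(U V^T - M)||_F^2 by gradient descent on U, V.

    M is the (partially known) matrix, mask is 1 on observed entries,
    r is the target rank of the factorization.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, r))
    V = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        R = mask * (U @ V.T - M)          # residual on observed entries only
        U, V = U - step * R @ V, V - step * R.T @ U
    return U @ V.T
```

Despite the nonconvexity, on benign random instances (e.g. a rank-1 matrix with most entries observed) plain gradient descent from a small random initialization fits the observed entries essentially exactly, which is the phenomenon the talk's theory explains.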

**Speaker:** Nelson Maculan (Federal University of Rio de Janeiro)

**Title:** Two Applications of Mixed Integer Nonlinear Programming (MINLP) in R^{n}: the Euclidean Steiner Tree Problem and Covering a Solid with Different Spheres

**Abstract:** 1-The Euclidean Steiner tree problem (ESTP) in R^{n} consists of finding a tree of minimal Euclidean length that spans a given set of points in R^{n}, with or without additional points. Only a few papers consider the exact solution of the ESTP in R^{n} (n>2), and there are just two works that consider a mathematical programming formulation for the ESTP. One of them presented a convex mixed-integer formulation that could be implemented in a branch-and-bound (B&B) algorithm. This work presents techniques to improve the performance of the B&B algorithm for this formulation.

2-We present a mathematical programming model for the problem of covering solids by spheres of different radii. Given a set of spheres, possibly with different diameters, and a solid, the goal is to locate the spheres in such a way that their union covers the solid, using the smallest possible number of spheres from this set. This problem has an application in the radio-surgical treatment planning known as Gamma Knife and can be formulated as a non-convex optimization problem with quadratic constraints and a linear objective function.

**Speaker:** Stanley Osher (UCLA)

**Title:** The impact of L1 optimization in Nonlinear PDE

**Abstract:** At this time, almost everyone interested in finding sparse solutions to discrete equations is aware that l1 optimization plays a key role. However, the fact that L1 optimization, using the techniques developed for l1, is a very powerful tool in nonlinear PDE and numerical analysis is less widely known. H. Brezis showed in 1974 that adding an L1-type penalty to calculus-of-variations problems that lead to elliptic equations plus a signum-type term gives solutions with compact support. I will discuss this, along with numerical aspects and applications to Schrodinger equations, obstacle problems, numerical homogenization and certain high-dimensional PDEs.

(joint work with many people, including J. Darbon, V. Ozolins, R. Caflisch, H. Schaeffer, W. Feldman, C. Hauck and G. Tian)

**Speaker:** Alfio Quarteroni (EPFL)

**Title:** Reduced order models for simulation and control in fluid dynamics

**Abstract:** Parametrized problems governed by PDEs occur in several applied contexts ranging from optimal control and/or design problems to inverse identification problems. Parameters may arise from physical coefficients, the geometrical configuration, or the control/design variables themselves. Solving these problems is rather challenging because of their large scale and iterative nature. Projection-based reduced-order models (ROMs) provide efficient strategies to tackle parametrized optimization and control problems, thanks to Offline/Online computational stratagems, a posteriori error estimates, and the use of low-dimensional approximation spaces.

In this talk I will recall the mathematical concepts behind Reduced Order Models, present some new results, and address a few examples for the efficient simulation and optimization of flow problems arising in the haemodynamics context.

**Speaker:** Zuowei Shen (National University of Singapore)

**Title:** Image Restoration: Wavelet Frame Approach, Total Variation and Beyond

**Abstract:** This talk is about wavelet frame-based image and video restoration. We start with some of the main ideas of wavelet frame based image restoration. Some applications of wavelet frame based models to image analysis and restoration will be shown. Examples of such applications include image and video inpainting, denoising, decomposition, image deblurring and blind deblurring, segmentation, CT image reconstruction, 3D reconstruction in electron microscopy, etc. In all of these applications, spline wavelet frames derived from the unitary extension principle are used. We will discuss the ideas behind frame based models for image restoration. We will also establish connections between wavelet frame based methods and various PDE based methods, including the total variation model, nonlinear diffusion PDE based methods, and the Mumford-Shah model.

**Speaker:** Zuoqiang Shi (Tsinghua University)

**Title:** Data-Driven Time-Frequency Analysis

**Abstract:** In this talk, I will present a data-driven time-frequency analysis method to study the instantaneous frequency of nonlinear and nonstationary data. This method is inspired by the Empirical Mode Decomposition method (EMD) and the recently developed compressed sensing theory. The main idea is to look for the sparsest representation of multiscale data within the dictionary consisting of intrinsic mode functions. This problem can be formulated as a nonlinear optimization problem. In order to solve this optimization problem, we propose a nonlinear matching pursuit method by generalizing the classical matching pursuit. Further, we provide a convergence analysis of the nonlinear matching pursuit method under certain scale separation assumptions. We also apply our method to some real problems, and the results are very encouraging.
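The classical matching pursuit that the proposed nonlinear version generalizes can be sketched in a few lines for a fixed finite dictionary with unit-norm columns (the talk's dictionary of intrinsic mode functions is not of this form, which is why the nonlinear extension is needed):

```python
import numpy as np

def matching_pursuit(D, y, k):
    """Classical matching pursuit: greedily select k atoms from dictionary D.

    D has unit-norm columns; at each step the atom most correlated
    with the current residual is chosen and its contribution removed.
    """
    r = y.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        c = D.T @ r                 # correlations with the residual
        j = np.argmax(np.abs(c))    # best-matching atom
        coef[j] += c[j]
        r -= c[j] * D[:, j]
    return coef, r
```

For an orthonormal dictionary, k greedy steps recover a k-sparse signal exactly.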

**Speaker:** Defeng Sun (National University of Singapore)

**Title:** Aug2QP: A two-phase augmented Lagrangian method for high-dimensional convex quadratic programming

**Abstract:** In this talk, we aim to solve high dimensional convex quadratic programming (QP) problems with a large number of linear equality and inequality constraints. In order to solve the targeted problems to a desired accuracy efficiently, we shall introduce a two-phase augmented Lagrangian method, with Phase I to generate a reasonably good initial point and Phase II to obtain an accurate solution fast. More specifically, in Phase I, we identify a class of convex composite quadratic programming problems and introduce a symmetric Gauss-Seidel (sGS) technique for handling their regularized counterparts. This technique allows us to design a novel sGS based semi-proximal augmented Lagrangian method for the purpose of finding a solution of low to medium accuracy.

Then, in Phase II, a proximal augmented Lagrangian algorithm, with the inner subproblems to be solved intelligently by combining an inexact version of Nesterov's accelerated proximal gradient (APG) method with the sGS technique, is proposed to obtain a more accurate solution efficiently. Preliminary numerical results including the comparison between our approach and the state-of-the-art solver (Gurobi) will be presented to demonstrate the high efficiency of our proposed algorithm for large scale problems.

**Speaker:** Kim Chuan Toh (National University of Singapore)

**Title:** An Inexact Accelerated Block Coordinate Descent Method for Least Squares Semidefinite Programming

**Abstract:** We consider least squares semidefinite programming (LSSDP) where the primal matrix variable must satisfy given linear equality and inequality constraints, and must also lie in the intersection of the cone of positive semidefinite matrices and a simple polyhedral set. We propose an inexact accelerated block coordinate descent (ABCD) method for solving the problem via its dual, which can be reformulated as a convex composite minimization problem whose objective is the sum of a coupled quadratic function involving four blocks of variables and two separable non-smooth functions involving only the first and second block, respectively. Our ABCD method has $O(1/k^2)$ iteration complexity if the subproblems are solved progressively more accurately. The design of our ABCD method relies on recent advances in the symmetric Gauss-Seidel technique for solving a convex minimization problem whose objective is the sum of a multi-block quadratic function and a non-smooth function involving only the first block. Extensive numerical experiments on various classes of over 600 large scale LSSDP problems demonstrate that our ABCD method not only can solve the problems to high accuracy, but is also far more efficient than (a) the well-known BCGD (block coordinate gradient descent) method, (b) the ARBCGD (an enhanced version of the accelerated randomized BCGD) method, and (c) the APG (accelerated proximal gradient) method.

**Speaker:** Xiao-Ping Wang (HKUST)

**Title:** An Efficient Numerical Method for Large Scale Simulation of Two-Phase Flows with Moving Contact Lines

**Abstract:** The moving contact line (MCL) problem, where the fluid–fluid interface intersects the solid wall, can be described by a phase field model consisting of the coupled Cahn–Hilliard and Navier–Stokes equations with the generalized Navier boundary condition. We have designed an efficient numerical method for the model based on convex splitting and a projection method. Accurate simulations of the MCL problem require very fine meshes, and the computation in 3D is challenging because of the complex computational domain and boundary condition. Thus the use of high performance computers and scalable parallel algorithms becomes indispensable. In our algorithm, a geometric additive Schwarz preconditioned GMRES iterative method is used to solve the discretized systems in parallel. Numerical experiments show that the algorithm is efficient and scalable on a large number of processors.

**Speaker:** Yang Wang (HKUST)

**Title:** Sub-linear Algorithm for Recovering Sparse Fourier Series

**Abstract:** We present new deterministic algorithms for the sparse Fourier transform problem, in which we seek to identify $k \leq N$ significant Fourier coefficients from a signal of bandwidth $N$. Previous deterministic algorithms exhibit quadratic runtime scaling, while our algorithm scales linearly with $k$ in the average case in the noiseless setting. We also present a multi-scale algorithm for noisy signals which proves to be extremely robust and efficient. This multi-scale algorithm is based on the beta-expansion scheme for robust A/D conversion. We also present the first efficient algorithm for ultra-high dimensional signals.

**Speaker:** Zaiwen Wen (Peking University)

**Title:** Towards an optimal scalability in computing exterior eigenpairs of large matrices

**Abstract:** Almost all existing algorithms for computing eigenpairs of large-scale sparse matrices consist of two main iterative steps: a subspace update step and a Rayleigh-Ritz projection step. In this paper, we propose and study an augmented Rayleigh-Ritz procedure to accelerate convergence. Utilizing this augmented Rayleigh-Ritz step, we construct two block algorithms (in contrast to Krylov subspace algorithms), each with a distinct subspace update step. Since the Rayleigh-Ritz step, including orthogonalization, is arguably the bottleneck for scalability, our key design principle is for the algorithms to approach a certain optimal scalability under favorable conditions. That is, ideally they should call the Rayleigh-Ritz procedure only once to attain a good accuracy, while the rest of the main operations are embarrassingly parallelizable under a suitable data mapping scheme. One of the proposed algorithms is a modified and accelerated subspace iteration method; the other uses a recently proposed Gauss-Newton method for subspace updating. Extensive computational experiments, without explicit code parallelization, are performed to evaluate the performance of the algorithms in comparison to a few state-of-the-art eigensolvers. Numerical results clearly show strong potential for the proposed algorithms to reach a high level of scalability on a wide range of problems.
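To fix ideas, the two-step template the abstract refers to looks as follows in its most basic form: block power iterations with orthogonalization for the subspace update, then a Rayleigh-Ritz projection onto the computed subspace. The augmented Rayleigh-Ritz procedure and the Gauss-Newton update of the paper are refinements of this baseline sketch:

```python
import numpy as np

def subspace_iteration(A, k, iters=50, seed=0):
    """Basic subspace iteration for the k dominant eigenpairs of symmetric A.

    Subspace update: block power step followed by QR orthogonalization.
    Final step: Rayleigh-Ritz projection onto the computed subspace.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((A.shape[0], k))
    for _ in range(iters):
        X, _ = np.linalg.qr(A @ X)       # subspace update step
    H = X.T @ A @ X                      # small projected eigenproblem
    w, S = np.linalg.eigh(H)             # Rayleigh-Ritz step
    return w, X @ S                      # Ritz values and Ritz vectors
```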

**Speaker:** Hau-Tieng Wu (University of Toronto)

**Title:** Adaptive concentration of time-frequency representation

**Abstract:** Periodicity and trend are features describing an observed sequence, and extracting these features is an important issue in many scientific fields. However, it is not an easy task for existing methods to analyze the dynamic periodicity and trend simultaneously, and the adaptivity of the analysis to such dynamics and its robustness to heteroscedastic, dependent errors are in general not guaranteed. These tasks become even more challenging when there exist multiple periodic components.

While the synchrosqueezing transform (SST) has been used to study this kind of signal, and several of its properties have been theoretically justified, including robustness to noise, de-trending, and real-time implementation, analyzing signals with fast-varying instantaneous frequency remains challenging. In this talk, in addition to surveying current progress on SST, we propose the "adaptive harmonic model" to accommodate oscillatory signals with fast-varying instantaneous frequency. We then introduce a convex optimization approach to study this kind of signal. If time permits, we will also discuss some recent clinical applications showing how to integrate time-frequency analysis with manifold learning to achieve dynamical feature extraction.

**Speaker:** Zhaohua Wu (Florida State University)

**Title:** Fast Multidimensional Ensemble Empirical Mode Decomposition Using a Data Compression Technique

**Abstract:** The process of obtaining key information on climate variability and change from large climate datasets often involves large computational costs and removal of noise from the data. In this study, the authors accelerate the computation of a newly developed, multidimensional temporal–spatial analysis method, namely multidimensional ensemble empirical mode decomposition (MEEMD), for climate studies. The original MEEMD uses ensemble empirical mode decomposition (EEMD) to decompose the time series at each grid point and then pieces together the temporal–spatial evolution of climate variability and change on naturally separated time scales, which is computationally expensive.

To accelerate the algorithm, the original MEEMD is modified by 1) using principal component analysis (PCA) to transform the original temporal–spatial multidimensional climate data into principal components (PCs) and corresponding empirical orthogonal functions (EOFs); 2) retaining only a small fraction of the PCs and EOFs that contain spatially and temporally coherent structures; 3) decomposing the PCs into oscillatory components on naturally separated time scales; and 4) obtaining the original MEEMD components on naturally separated time scales by summing the contributions of similar time scales from different pairs of EOFs and PCs. The study analyzes extended reconstructed sea surface temperature (ERSST) data to validate the accelerated (fast) MEEMD. It is demonstrated that, for the ERSST climate data, the fast MEEMD can 1) compress the data by one to two orders of magnitude and 2) speed up the original MEEMD algorithm by one to two orders of magnitude.
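
The compression in steps 1)-2) can be sketched with a plain SVD-based PCA on a synthetic field (a toy illustration of our own, not the authors' code; the EEMD decomposition of step 3) is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)          # time axis
space = np.linspace(0.0, 1.0, 200)       # spatial axis

# Synthetic "climate" field: two coherent spatio-temporal patterns + noise.
field = (np.outer(np.sin(2 * np.pi * t), np.cos(np.pi * space))
         + 0.5 * np.outer(np.cos(np.pi * t), np.sin(2 * np.pi * space))
         + 0.01 * rng.standard_normal((500, 200)))

# Steps 1)-2): PCA via SVD; columns of U*s are the PCs, rows of Vt the EOFs.
U, s, Vt = np.linalg.svd(field, full_matrices=False)
k = 2                                    # keep only the coherent structures
pcs, eofs = U[:, :k] * s[:k], Vt[:k]

# Step 3) would decompose each retained PC by EEMD; step 4) sums the
# matched-scale PC*EOF contributions. Here we only check the compression.
rel_err = np.linalg.norm(field - pcs @ eofs) / np.linalg.norm(field)
print(pcs.size + eofs.size, field.size)  # ~1,400 numbers vs 100,000
print(rel_err)                           # small: the field compresses well
```

Keeping two PC/EOF pairs stores roughly 1.4% of the original numbers, the "one to two orders of magnitude" compression regime described above.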

**Speaker:** Jack Xin (UC Irvine)

**Title:** Transformed L1 penalty and applications in compressed sensing

**Abstract:** We introduce the transformed L1 penalty (TL1), a non-convex Lipschitz continuous sparsity-promoting penalty function which interpolates the L0 and L1 norms through a non-negative parameter `a', similar to Lp, 0<p<1. TL1 recovers L0 under the restricted isometry property (RIP), and can be decomposed as a difference of two non-negative Lipschitz continuous convex functions. We study the difference of convex functions algorithm (DCA) for TL1 (DCATL1) in computing TL1-regularized problems in compressed sensing (CS). For a wide range of sensing matrices of various degrees of incoherence (RIP and non-RIP), DCATL1 consistently ranks among the top performers compared with state-of-the-art algorithms based on Lp and L1, and is the least sensitive to incoherence (RIP).

TL1 has a closed-form iterative thresholding representation for all values of the parameter `a'. In the sub-critical regime, the thresholding function is continuous, similar to the soft thresholding of L1, while in the super-critical regime, the thresholding function has a jump discontinuity, similar to that of L1/2 (half) thresholding.

An adaptive iterative thresholding algorithm utilizing both thresholding functions outperforms hard and half thresholding algorithms on CS test problems, especially in the coherent sensing regime.
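
As a small illustration of the interpolation property (our own toy sketch, not the authors' code), the TL1 penalty as commonly defined, rho_a(t) = (a+1)|t|/(a+|t|), approaches the L1 norm as `a` grows and the L0 count as `a` shrinks:

```python
import numpy as np

def tl1(x, a):
    # Transformed L1 penalty, summed componentwise:
    # rho_a(t) = (a + 1) * |t| / (a + |t|), for a > 0
    t = np.abs(x)
    return float(np.sum((a + 1.0) * t / (a + t)))

x = np.array([0.0, 0.5, -2.0, 3.0])
print(tl1(x, 1e6))   # -> ~5.5, the L1 norm of x
print(tl1(x, 1e-8))  # -> ~3.0, the number of nonzeros (L0)
```

Unlike Lp with 0<p<1, the penalty stays Lipschitz near the origin for every fixed `a`, which is what makes the DCA decomposition above possible.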

This is joint work with Shuai Zhang at UC Irvine.

**Speaker:** Naihua Xiu (Beijing Jiaotong University)

**Title:** On Solutions of Sparsity Constrained Optimization

**Abstract:** In this talk, we are concerned with the existence of solutions of sparsity constrained optimization (SCO). Based on the expressions of the tangent cone and normal cone of the sparsity constraint, we present and characterize two first-order necessary optimality conditions for SCO: T-stationarity and N-stationarity. We then give second-order necessary and sufficient optimality conditions for SCO. Finally, we point out some extensions and future work.

**Speaker:** Dachuan Xu (Beijing University of Technology)

**Title:** A Complex Semidefinite Programming Rounding Approximation Algorithm for the Balanced Max-3-Uncut Problem

**Abstract:** In this paper, we consider the balanced Max-3-Uncut problem, which has several applications in the design of VLSI circuits. We propose a complex discrete linear program for the balanced Max-3-Uncut problem. Applying the complex semidefinite programming rounding technique, we present a 0.3456-approximation algorithm by further integrating a greedy swapping process after the rounding step. One ingredient of our analysis that differs from previous work on the traditional Max-3-Cut is the introduction and analysis of a bivariate function rather than a univariate one.

(Joint work with Chenchen Wu, Donglei Du, and Wen-qing Xu)

**Speaker:** Yuesheng Xu (Sun Yat-Sen University)

**Title:** Approximate Sparsity Models for Signal/Image Reconstruction

**Abstract:** We shall present approximate sparsity models for signal/image reconstruction and fixed-point proximity algorithms for solving the resulting non-convex non-smooth optimization problems.

We shall discuss a specific application in image inpainting. Convergence of the algorithms will be considered and numerical examples will be discussed.

**Speaker:** Zhiqiang Xu (AMSS, CAS)

**Title:** Phase Retrieval for Sparse Signals

**Abstract:** The aim of this talk is to build up a theoretical framework for the recovery of sparse signals from the magnitudes of the measurements. We first investigate the minimal number of measurements needed for the successful recovery of sparse signals without phase information. We completely settle the minimality question for the real case and give a lower bound for the complex case. We then study the recovery performance of L1 minimization. In particular, we present a null space property which, to our knowledge, is the first sufficient and necessary condition for the success of L1 minimization for sparse phase retrieval. We finally extend the RIP to a stronger version and show that one can recover sparse signals from the magnitudes of the measurements by L1 minimization provided the measurement matrix satisfies this strong RIP. This talk is based on joint work with Yang Wang and Vladislav Voroninski.

**Speaker:** Zongben Xu (Xi’an Jiaotong University)

**Title:** High-order Compressive Sensing: A Target and Background

**Abstract:** Various applications, e.g., video surveillance, hyper-spectral image processing and dynamic MR image reconstruction, can be cast as a high-order compressive sensing (hrdCS) problem in which the signals to be processed are high-order tensors in a target-plus-background separation form. As highlighted in the 2nd-order case (namely, the low-rank decomposition of matrices), sparsity measures have been central in modeling and solving such hrdCS problems. The existing approaches measure the sparsity of a tensor by unfolding it into different matrix forms and then using matrix sparsity. Such matricization methodologies fail to exploit the global sparse structure and to effectively eliminate the spatio-temporal redundancy of the tensor. In this talk we introduce a rational sparsity measure for any high-order tensor in terms of the number of fundamental Kronecker basis elements. The introduced measure unifies the sparsity notions adopted extensively in the 1st-order case (namely, the number of nonzero components of a vector) and the 2nd-order case (namely, the rank of a matrix), and also well characterizes the global sparse structure of a high-order tensor. With the new sparsity measure, we define an hrdCS model based on the target and background separation framework. Unlike the existing models, we model the target and background tensors respectively with their essential priors such as sparsity, smoothness and similarity. The well-known alternating direction method of multipliers (ADMM) is then employed to solve the hrdCS model. To lay the theoretical foundation, we establish a recovery theory for hrdCS based on a tensor RIP, prove a convergence result for ADMM, and provide extensive simulated and real-world experiments on video surveillance and hyper-spectral image reconstruction which support the superiority of the hrdCS model over existing state-of-the-art methods.

**Speaker:** Yinyu Ye (Stanford University)

**Title:** Concave Penalized Sparse Regression and Random Permutation for ADMM

**Abstract:** We present two results related to Sparse Regression and Optimization:

1) We consider the folded concave penalized sparse linear regression (FCPSLR) model. We provide a spectrum of conditions on the parameters of the folded concave penalties under which any local solution satisfying the standard necessary optimality conditions recovers the sparse or oracle solution. These results are independent of the algorithms adopted to solve the model.

2) We illustrate that the direct three-block extension of the alternating direction method of multipliers (ADMM) for solving linear equations is not necessarily convergent, although convergence was established 40 years ago for the one- and two-block cases. However, we prove that if, in each step, one randomly and independently permutes the updating order of the blocks, followed by the regular multiplier update, the method converges in expectation when solving square systems of linear equations.
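
The non-convergence can be observed numerically. Below is a toy sketch of our own (not code from the talk) applying the direct three-block ADMM with zero objective to A x = 0, using the well-known 3x3 counterexample matrix of Chen, He, Ye and Yuan; under a fixed cyclic update order the iterates blow up, and randomly permuting the block order each sweep is the remedy the talk analyzes:

```python
import numpy as np

# Counterexample: solve A x = 0 (zero objective, three one-variable
# blocks) by the direct three-block ADMM extension.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 2.0, 2.0]])
beta = 1.0
rng = np.random.default_rng(0)
x, lam = rng.standard_normal(3), rng.standard_normal(3)

norms = []
for _ in range(2000):
    for i in (0, 1, 2):                 # fixed cyclic order, no permutation
        a_i = A[:, i]
        r = A @ x - a_i * x[i]          # constraint residual excluding block i
        # Minimize (beta/2)*||a_i*x_i + r + lam/beta||^2 over x_i:
        x[i] = -a_i @ (r + lam / beta) / (a_i @ a_i)
    lam = lam + beta * (A @ x)          # regular multiplier update
    norms.append(np.linalg.norm(np.concatenate([x, lam])))

# Divergence: the iterate norm grows without bound under the cyclic order.
# Drawing a fresh random permutation of (0, 1, 2) at every sweep, as in
# result 2), restores convergence in expectation.
print(norms[-1] / norms[0])
```

Swapping the fixed tuple `(0, 1, 2)` for `rng.permutation(3)` inside the loop gives the randomly permuted variant.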

Joint work with Hongcheng Liu/Tao Yao/Runze Li and Ruoyu Sun/Tom Luo.

**Speaker:** Wotao Yin (UCLA)

**Title:** Three-Operator Splitting and its Optimization Applications

**Abstract:** Operator splitting schemes have been successfully used in computational sciences to reduce complex problems into a series of simpler subproblems. Since the 1950s, these schemes have been widely used to solve problems in PDE and control. Recently, large-scale optimization problems in machine learning, signal processing, and imaging have created a resurgence of interest in operator-splitting based algorithms because they often have simple descriptions, are easy to code, and have (nearly) state-of-the-art performance on large-scale optimization problems. Although operator splitting techniques were introduced over 60 years ago, their importance has significantly increased in the past decade.

This talk introduces a new operator-splitting scheme for solving a variety of problems that reduce to a monotone inclusion of three operators, one of which is cocoercive. Our scheme is very simple, and it does not reduce to any existing splitting scheme; rather, it recovers the existing forward-backward, Douglas-Rachford, and forward-Douglas-Rachford splitting schemes as special cases.

Our new splitting scheme leads to a set of new and simple algorithms for a variety of other problems, including 3-set split feasibility problems, 3-objective minimization problems, and doubly and multiply regularized problems, as well as the simplest extension of the classic ADMM from 2 to 3 blocks of variables. In addition to the basic scheme, we introduce several modifications and enhancements that can improve the convergence rate in practice, including an acceleration that achieves the optimal rate of convergence for strongly monotone inclusions. Finally, we evaluate the algorithm on several applications.
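
A minimal sketch of a three-operator iteration on a toy problem of our own choosing (an illustration, not code from the talk): for min_x f(x) + g(x) + h(x) with h smooth, each sweep takes a backward (proximal) step on g, a reflected backward step on f that also incorporates the gradient of h, and a correction of the governing point z. Here f is a scaled L1 norm, g the indicator of the nonnegative orthant, and h a quadratic:

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy three-term problem: f = lam*||x||_1, g = indicator of {x >= 0},
# h = 0.5*||x - c||^2 (smooth, with cocoercive gradient).
c = np.array([2.0, -1.0, 0.3])
lam, gamma = 0.5, 1.0

z = np.zeros_like(c)
for _ in range(500):
    xg = np.maximum(z, 0.0)                  # backward step on g (projection)
    xf = prox_l1(2 * xg - z - gamma * (xg - c), gamma * lam)  # f plus grad h
    z = z + xf - xg                          # correction of the governing point

# Closed-form solution of this toy problem: x* = max(c - lam, 0).
print(xg)   # -> approx [1.5, 0.0, 0.0]
```

Setting g (or f) to zero collapses the iteration to forward-backward splitting; dropping the gradient term recovers Douglas-Rachford, which is the sense in which the scheme generalizes both.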

This is joint work with Damek Davis.

**Speaker:** Pingwen Zhang (Peking University)

**Title:** Big Data Analysis in Wind Power Forecasting

**Abstract:** Wind power forecasting is a very challenging task due to the erratic nature of wind and the nonlinear relationships between the variables involved. In this talk, we introduce a few mathematical problems in wind power forecasting. First, a wind speed prediction system is designed based on the WRF software and datasets from global numerical weather forecasting agencies. A post-processing model is then developed to improve the accuracy of the predicted wind speed. By using various techniques in machine learning, we establish a robust model for wind power forecasting. We are able to achieve very competitive accuracy even compared with several known results worldwide; for example, our root-mean-square errors for 24-hour, 48-hour and 72-hour forecasts for a wind farm in Jiangsu Province in September 2014 are 10.87%, 12.59% and 13.21%, respectively. Our system is going to be deployed in wind farms with thousands of wind turbines all over China managed by our collaborator --- the Keywind Technology Co., Ltd., Shenyang. This is joint work with Pengyu Qian, Qinwu Xu, Zaiwen Wen and Junzi Zhang.

**Speaker:** Hongkai Zhao (UC Irvine)

**Title:** Intrinsic complexity of differential operators

**Abstract:** Approximate separable representation of the Green's functions of differential operators (with appropriate boundary conditions) is a basic and important question in the analysis of differential equations and the development of efficient numerical algorithms. It can reveal the intrinsic complexity, or degrees of freedom, of the corresponding differential equation. Computationally, being able to approximate a Green's function as a sum of few separable terms is equivalent to the existence of a low-rank approximation of the corresponding discretized system, which can be exploited for matrix compression and fast solution techniques such as the fast multipole method and direct matrix inverse solvers. In this talk, two types of differential operators will be discussed. One is the coercive elliptic operator in divergence form, whose Green's function is highly separable. The other is the Helmholtz equation in the high-frequency limit, for which we developed a new approach to study the approximate separability of the Green's function based on a geometric characterization of the relation between two Green's functions and a tight dimension estimate for the best linear subspace approximating a set of almost orthogonal vectors. We derive both lower and upper bounds and show their sharpness and implications for computational setups that are commonly used in practice. This is joint work with Bjorn Engquist.
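
The low-rank point can be illustrated on the simplest coercive example (our own sketch, not from the talk): sampling the Green's function G(x,y) = min(x,y)(1 - max(x,y)) of -u'' on (0,1) with Dirichlet conditions and thresholding its singular values exposes a numerical rank far below the grid size:

```python
import numpy as np

# Sample the Green's function of -u'' on (0, 1), Dirichlet boundary
# conditions, on an n x n grid.
n = 200
x = np.arange(1, n + 1) / (n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
G = np.minimum(X, Y) * (1.0 - np.maximum(X, Y))

# Singular values of the sampled kernel decay rapidly (roughly like 1/k^2
# for this operator), so a modest number of separable terms captures G.
s = np.linalg.svd(G, compute_uv=False)
rank = int(np.sum(s > 1e-3 * s[0]))
print(rank, n)   # numerical rank is far below the grid size n
```

For the high-frequency Helmholtz Green's function studied in the talk, no such uniform low-rank structure survives, which is exactly what the lower bounds quantify.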

**Speaker:** Haomin Zhou (GaTech)

**Title:** Adaptive Local Iterative Filtering for Instantaneous Frequency Analysis And Applications

**Abstract:** In this talk I will present an Adaptive Local Iterative Filtering (ALIF) algorithm for instantaneous frequency analysis of nonlinear and non-stationary signals for which classical FFT or wavelet-based methods cannot achieve satisfactory results. ALIF is inspired by the empirical mode decomposition (EMD) method pioneered by Huang and collaborators. The goal of EMD is to decompose a signal into intrinsic mode functions (IMFs) that admit better instantaneous frequency analysis. ALIF is an adaptive, local, and stable alternative to EMD. Adaptivity is ensured by varying the filter lengths according to the signal itself, while locality is guaranteed by means of smooth filters with compact supports produced from Fokker-Planck equations. The performance and stability of the algorithm are demonstrated by numerical examples. We also briefly present a new algorithm that combines ALIF with the heterogeneous multiscale method (HMM) to solve dynamical systems with multiple scales. This presentation is based on joint work with Antonio Cicone (L'Aquila), Jingfang Liu (LinkedIn) and Seong Jun Kim (Math, Georgia Tech).

**Speaker:** Jun Zou (CUHK)

**Title:** Direct and Indirect Sampling-type Methods for Inverse Medium and Obstacle Scattering

**Abstract:** We shall discuss some recent developments in the numerical solution of inverse medium and obstacle scattering problems. Several direct and indirect sampling-type methods are addressed, which are rather efficient and robust against noise in the observation data. Mathematical justifications and numerical efficiency of the methods are also presented.