Applications of Matrix Computational Methods in the Analysis of Modern Data (AMCMD) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2017

Room: HG E 41

Chair: Raja Velu

625 The epsilon-algorithm in matrix computations [abstract]
Abstract: The epsilon-algorithm is designed to accelerate iterative algorithms in matrix computations. In this talk, the algorithm is explained in connection with Krylov subspace methods and the matrix completion problem.
Walter Gander
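
For readers unfamiliar with the method, below is a minimal Python sketch of the classical scalar epsilon-algorithm (Wynn's recursion) applied to the partial sums of a slowly converging series. It is purely illustrative background for the talk and is not the matrix formulation or the Krylov-subspace connection discussed by the speaker.

```python
import numpy as np

def wynn_epsilon(seq):
    """Accelerate a scalar sequence S_0, ..., S_{m-1} with Wynn's
    epsilon-algorithm:  eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1/(eps_k^{(n+1)} - eps_k^{(n)}).
    Returns the top-row values of the even columns (column 0 is the
    original sequence; later even columns are increasingly accelerated).
    Zero differences (an already-converged sequence) are not handled here."""
    m = len(seq)
    eps = [np.zeros(m + 1), np.array(seq, dtype=float)]   # columns k = -1 and k = 0
    for k in range(1, m):
        prev2, prev1 = eps[-2], eps[-1]
        eps.append(prev2[1:len(prev1)] + 1.0 / (prev1[1:] - prev1[:-1]))
    return [eps[k + 1][0] for k in range(0, m, 2)]

# Example: partial sums of the alternating series ln(2) = 1 - 1/2 + 1/3 - ...
partial = np.cumsum([(-1) ** i / (i + 1) for i in range(10)])
print(partial[-1])                        # raw partial sum, still far from ln(2)
print(wynn_epsilon(list(partial))[-1])    # accelerated value, much closer to ln(2)
print(np.log(2))
```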
39 Clustering Mixed-Attribute Data using Random Walk [abstract]
Abstract: Most clustering algorithms rely in some fundamental way on a measure of similarity or distance, either between the objects themselves or between objects and cluster centroids. When the dataset contains mixed attributes, defining a suitable measure can be problematic. This paper presents a general graph-based method for clustering mixed-attribute datasets that does not require any explicit measure of similarity or distance. Empirical results on a range of well-known datasets, using several evaluation measures, show that the method achieves performance competitive with traditional clustering algorithms that require explicit calculation of distance or similarity, as well as with more recently proposed clustering algorithms based on matrix factorization.
Andrew Skabar
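
The abstract does not describe the construction itself, so the sketch below only illustrates the general idea of clustering mixed-attribute data through a random walk on an object / attribute-value graph instead of an explicit distance. The bipartite incidence encoding, the equal-width binning of numeric attributes, the two-step walk affinity, and the spectral bipartition are all assumptions made for illustration; they are not the authors' algorithm.

```python
import numpy as np

def mixed_to_incidence(objects, numeric_bins=3):
    """One-hot encode each (attribute, value) pair as a graph node.
    Categorical values are used as-is; numeric attributes are discretised
    into equal-width bins so they, too, become value nodes."""
    blocks = []
    for col in zip(*objects):                       # attribute-wise view
        col = np.asarray(col)
        if col.dtype.kind in "fi":                  # numeric attribute -> bin it
            edges = np.linspace(col.min(), col.max(), numeric_bins + 1)
            codes = np.clip(np.digitize(col, edges[1:-1]), 0, numeric_bins - 1)
        else:                                       # categorical attribute
            _, codes = np.unique(col, return_inverse=True)
        blocks.append(np.eye(codes.max() + 1)[codes])
    return np.hstack(blocks)                        # objects x value-nodes

def random_walk_bipartition(objects):
    """Split objects into two clusters via a two-step random walk on the
    object / attribute-value bipartite graph (a spectral bipartition);
    no object-to-object distance is ever defined explicitly."""
    B = mixed_to_incidence(objects)
    P = B / B.sum(axis=1, keepdims=True)            # object -> value-node step
    W = P @ P.T                                     # two-step object -> object walk
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))                 # symmetric normalisation
    _, vecs = np.linalg.eigh(S)
    fiedler = vecs[:, -2] / np.sqrt(d)              # 2nd-largest eigenvector
    return (fiedler > 0).astype(int)

# Toy mixed-attribute data: (colour, weight)
data = [("red", 1.0), ("red", 1.2), ("red", 1.1),
        ("blue", 5.0), ("blue", 5.3), ("red", 5.1)]
print(random_walk_bipartition(data))   # e.g. [1 1 1 0 0 1]; label values may flip
```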
274 Regularized Computation of Oscillatory Integrals with Stationary Points [abstract]
Abstract: The ability to compute integrals of rapidly oscillating functions is crucial for solving many problems in optics, electrodynamics, quantum mechanics, nuclear physics, and many other areas. The article considers a method for computing oscillatory integrals by transforming the problem into the numerical solution of a system of ordinary differential equations. Using Levin's collocation method, we reduce the problem to solving a system of linear algebraic equations. When the phase function has stationary points (points where its derivative vanishes on the interval of integration), solving the corresponding system becomes an ill-posed problem. The regularized algorithm presented in the article provides a stable method for integrating rapidly oscillating functions in the presence of stationary points. The performance and high accuracy of the algorithm are illustrated by various examples.
Konstantin P. Lovetskiy, Leonid A. Sevastianov and Nikolai Ed. Nikolaev
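
As background, here is a minimal sketch of Levin collocation for I = ∫ f(x) exp(iωg(x)) dx with a regularized solve of the collocation system: one seeks p with p'(x) + iωg'(x)p(x) = f(x), so that I = p(b)exp(iωg(b)) - p(a)exp(iωg(a)). The Chebyshev basis, the truncated-SVD least-squares solve (via lstsq's rcond), and the test integrand are illustrative assumptions and need not match the regularization developed by the authors.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def levin_integral(f, g, dg, a, b, omega, n=64, rcond=1e-10):
    """Approximate ∫_a^b f(x) exp(i*omega*g(x)) dx by Levin collocation.
    The collocation system for p' + i*omega*g'(x)*p = f is solved with a
    truncated-SVD least-squares fit, which keeps the solve stable when g'
    has zeros (stationary points) and the system is ill-conditioned."""
    t = np.cos(np.pi * np.arange(n) / (n - 1))        # Chebyshev-Lobatto nodes in [-1, 1]
    x = 0.5 * (a + b) + 0.5 * (b - a) * t             # mapped to [a, b]
    T = C.chebvander(t, n - 1)                        # T_k(t_j)
    dT = np.zeros_like(T)
    for k in range(1, n):                             # d/dt T_k at the nodes
        dT[:, k] = C.chebval(t, C.chebder([0] * k + [1]))
    dT *= 2.0 / (b - a)                               # chain rule dt/dx
    A = dT + 1j * omega * dg(x)[:, None] * T          # collocation matrix
    c, *_ = np.linalg.lstsq(A, f(x).astype(complex), rcond=rcond)
    p = lambda s: C.chebval(2.0 * (s - a) / (b - a) - 1.0, c)
    return p(b) * np.exp(1j * omega * g(b)) - p(a) * np.exp(1j * omega * g(a))

# Example with a stationary point of g(x) = x**2 at x = 0:
#   ∫_{-1}^{1} cos(x) exp(i*50*x**2) dx
val = levin_integral(np.cos, lambda x: x**2, lambda x: 2 * x, -1.0, 1.0, 50.0)

# Brute-force trapezoid reference on a very fine grid, for comparison only
xs = np.linspace(-1.0, 1.0, 200001)
fx = np.cos(xs) * np.exp(1j * 50.0 * xs**2)
ref = np.sum((fx[1:] + fx[:-1]) / 2) * (xs[1] - xs[0])
print(val, ref)    # the two estimates should roughly agree; accuracy improves with n
```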
536 Optimizing the SVD Bidiagonalization Process for a Batch of Small Matrices [abstract]
Abstract: A challenging class of problems arising in many GPU applications, called batched problems, involves linear algebra operations on many small matrices. We design batched BLAS (Basic Linear Algebra Subroutines) routines, in particular the Level-2 BLAS GEMV and the Level-3 BLAS GEMM routines, to solve them. Our batched BLAS design employs device functions and big-tile settings, and we use auto-tuning to optimize different instances of the GEMV routines. We illustrate the approach by progressively optimizing the batched bidiagonalization on a K40c GPU. The optimization techniques in this paper are applicable to other two-sided factorizations as well.
Tingxing Dong, Azzam Haidar, Stanimire Tomov and Jack Dongarra
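
For context only, the sketch below performs Golub-Kahan (Householder) bidiagonalization matrix-by-matrix over a batch of small matrices in plain NumPy. The explicit Python loop over the batch merely stands in for the batched GEMV/GEMM GPU kernels the paper optimizes; it is not the authors' implementation, and the batch and matrix sizes are arbitrary.

```python
import numpy as np

def householder(x):
    """Unit vector v such that (I - 2 v v^T) x is a multiple of e_1."""
    v = np.array(x, dtype=float)
    s = 1.0 if v[0] >= 0 else -1.0
    v[0] += s * np.linalg.norm(x)
    nrm = np.linalg.norm(v)
    return v / nrm if nrm > 0 else v

def bidiagonalize(A):
    """Golub-Kahan bidiagonalization of one m x n matrix (m >= n) with
    alternating left/right Householder reflections.  Returns a matrix whose
    top n x n block is upper bidiagonal (up to roundoff) and which has the
    same singular values as A.  The rank-1 updates below are the GEMV/GEMM
    work that a GPU implementation would group into batched BLAS calls."""
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        v = householder(B[k:, k])                      # zero B[k+1:, k]
        B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
        if k < n - 2:
            w = householder(B[k, k + 1:])              # zero B[k, k+2:]
            B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ w, w)
    return B

# A batch of 100 small 16 x 8 matrices, processed one by one
batch = np.random.default_rng(0).standard_normal((100, 16, 8))
Bs = np.stack([bidiagonalize(A) for A in batch])

# Sanity check: the two-sided orthogonal reflections preserve singular values
print(np.allclose(np.linalg.svd(batch, compute_uv=False),
                  np.linalg.svd(Bs, compute_uv=False)))
```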