GigaTensor
GigaTensor: scaling tensor analysis up by 100 times - algorithms and discoveries.
Many data are modeled as tensors, or multidimensional arrays. Examples include the (subject, verb, object) predicates in knowledge bases, hyperlinks and anchor texts in Web graphs, sensor streams (time, location, and type), social networks over time, and DBLP conference-author-keyword relations. Tensor decomposition is an important data mining tool with applications including clustering, trend detection, and anomaly detection. However, current tensor decomposition algorithms do not scale to large tensors with dimensions in the billions and hundreds of millions of nonzeros: the largest tensors analyzed in the literature have dimensions only in the thousands and hundreds of thousands of nonzeros. Consider a knowledge base tensor consisting of about 26 million noun-phrases. The intermediate data explosion problem, associated with naive implementations of tensor decomposition algorithms, would require materializing and storing a matrix whose largest dimension would be ≈ 7 × 10^14; this amounts to 10 Petabytes, or equivalently a few data centers' worth of storage, thereby rendering the naive tensor analysis of this knowledge base practically impossible. In this paper, we propose GIGATENSOR, a scalable distributed algorithm for large-scale tensor decomposition. GIGATENSOR exploits the sparsity of real-world tensors and avoids the intermediate data explosion problem by carefully redesigning the tensor decomposition algorithm. Extensive experiments show that our proposed GIGATENSOR solves problems 100 times larger than existing methods can handle. Furthermore, we employ GIGATENSOR to analyze a very large real-world knowledge base tensor and present our findings, which include the discovery of potential synonyms among millions of noun-phrases (e.g. the noun 'pollutant' and the noun-phrase 'greenhouse gases').
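The intermediate data explosion described above arises in the ALS update for CP decomposition, which multiplies the matricized tensor by a Khatri-Rao product (the MTTKRP step): forming the Khatri-Rao matrix explicitly costs J·K × R storage. A common way to avoid this, in the spirit of (but not identical to) GigaTensor's MapReduce formulation, is to accumulate the result directly over the tensor's nonzeros. The sketch below is illustrative, not the paper's actual distributed implementation; the function name and data layout (COO coordinates plus values) are assumptions.

```python
import numpy as np

def mttkrp_mode1(coords, vals, B, C, I):
    """Mode-1 MTTKRP for a sparse I x J x K tensor given in COO form.

    Accumulates M[i, :] += X[i, j, k] * (B[j, :] * C[k, :]) over the
    nonzeros, so the huge (J*K) x R Khatri-Rao matrix is never formed.
    This is an illustrative single-machine sketch, not GigaTensor itself.
    """
    R = B.shape[1]
    M = np.zeros((I, R))
    for (i, j, k), v in zip(coords, vals):
        # Each nonzero contributes an R-vector to one row of M;
        # cost is O(nnz * R) instead of O(J * K * R).
        M[i, :] += v * (B[j, :] * C[k, :])
    return M
```

The memory footprint is proportional to the number of nonzeros and the factor matrices, which is what makes sparse, billion-scale tensors tractable at all.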
References in zbMATH (referenced in 6 articles)
- Kaya, Oguz; Robert, Yves: Computing dense tensor decompositions with optimal dimension trees (2019)
- Vervliet, Nico; Debals, Otto; De Lathauwer, Lieven: Exploiting efficient representations in large-scale tensor decompositions (2019)
- Harrison, A. P.; Joseph, D.: High performance rearrangement and multiplication routines for sparse tensor arithmetic (2018)
- Kaya, Oguz; Uçar, Bora: Parallel Candecomp/Parafac decomposition of sparse tensors using dimension trees (2018)
- Chen, Yannan; Qi, Liqun; Wang, Qun: Computing extreme eigenvalues of large scale Hankel tensors (2016)
- Papalexakis, Evangelos E.; Mitchell, Tom M.; Sidiropoulos, Nicholas D.; Faloutsos, Christos; Talukdar, Partha Pratim; Murphy, Brian: Turbo-SMT: parallel coupled sparse matrix-tensor factorizations and applications (2016)