When algorithm selection meets Bi-linear Learning to Rank: accuracy and inference time trade off with candidates expansion

dc.contributor.author: Yuan, Jing
dc.contributor.author: Geissler, Christian
dc.contributor.author: Shao, Weijia
dc.contributor.author: Lommatzsch, Andreas
dc.contributor.author: Jain, Brijnesh
dc.date.accessioned: 2021-03-15T11:13:24Z
dc.date.available: 2021-03-15T11:13:24Z
dc.date.issued: 2020-10-09
dc.description.abstract: Algorithm selection (AS) tasks are dedicated to finding the optimal algorithm for an unseen problem instance. Given the meta-features of problem instances and the landmark performances of algorithms, machine learning (ML) approaches are applied to solve AS problems. However, the standard training process of benchmark ML approaches in AS either requires training a separate model for every algorithm or relies on a sparse one-hot encoding as the algorithms' representation. To avoid these intermediate steps and learn the mapping function directly, we borrow the learning-to-rank framework from recommender systems (RS) and embed bi-linear factorization to model the algorithms' performances in AS. This Bi-linear Learning to Rank (BLR) has proven competent in several AS scenarios and is therefore also proposed as a benchmark approach. From the evaluation perspective of modern AS challenges, precisely predicting the performance is usually the evaluation goal. Although an approach's inference time also contributes to the running-time cost, it is often overlooked in the evaluation process. This paper therefore advocates the multi-objective evaluation metric Adjusted Ratio of Root Ratios (A3R) to balance the trade-off between accuracy and inference time in AS. With respect to A3R, BLR outperforms the other benchmarks when the candidate range is expanded to TOP 3. The benefit of this candidates expansion results from accumulating the best observed performance during the AS process. We take a further step in the experiments to demonstrate the advantage of such TOP-K expansion and illustrate that it can serve as a supplement to the convention of TOP 1 selection during evaluation.
dc.description.sponsorship: TU Berlin, Open-Access-Mittel – 2020
dc.description.sponsorship: BMBF, 01IS16046, CODA: Cognitive Data Analytics Framework
dc.identifier.eissn: 2364-4168
dc.identifier.issn: 2364-415X
dc.identifier.uri: https://depositonce.tu-berlin.de/handle/11303/12824
dc.identifier.uri: http://dx.doi.org/10.14279/depositonce-11624
dc.language.iso: en
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject.ddc: 004 Datenverarbeitung; Informatik
dc.subject.other: algorithm selection
dc.subject.other: bi-linear learning to rank
dc.subject.other: candidates expansion
dc.subject.other: multi-objective evaluation
dc.subject.other: machine learning
dc.title: When algorithm selection meets Bi-linear Learning to Rank: accuracy and inference time trade off with candidates expansion
dc.type: Article
dc.type.version: publishedVersion
dcterms.bibliographicCitation.doi: 10.1007/s41060-020-00229-x
dcterms.bibliographicCitation.journaltitle: International Journal of Data Science and Analytics
dcterms.bibliographicCitation.originalpublishername: Springer Nature
dcterms.bibliographicCitation.originalpublisherplace: London [u.a.]
tub.accessrights.dnb: free
tub.affiliation: Fak. 4 Elektrotechnik und Informatik::Inst. Wirtschaftsinformatik und Quantitative Methoden::FG Agententechnologien in betrieblichen Anwendungen und der Telekommunikation (AOT)
tub.affiliation.faculty: Fak. 4 Elektrotechnik und Informatik
tub.affiliation.group: FG Agententechnologien in betrieblichen Anwendungen und der Telekommunikation (AOT)
tub.affiliation.institute: Inst. Wirtschaftsinformatik und Quantitative Methoden
tub.publisher.universityorinstitution: Technische Universität Berlin
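
For readers of this record, a minimal sketch of the two ideas named in the abstract may help: a bi-linear scoring function over instance meta-features and algorithm representations, and an accuracy/runtime trade-off metric in the spirit of A3R. Everything below is illustrative only; the variable names, dimensions, pairwise hinge update, and the exponent in the A3R helper are assumptions, not the authors' implementation or the paper's exact formulation.

```python
# Illustrative sketch only: a bi-linear scoring function of the kind the
# abstract describes (meta-features x interaction matrix x algorithm vector),
# plus one commonly cited form of the A3R trade-off metric. All names and
# hyperparameters are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_instances, n_algorithms = 50, 8
d_meta, d_algo = 10, 8                            # meta-feature / algorithm-representation dims

X = rng.normal(size=(n_instances, d_meta))        # instance meta-features
A = np.eye(n_algorithms)                          # here: one-hot algorithm vectors
W = rng.normal(scale=0.1, size=(d_meta, d_algo))  # bi-linear interaction matrix

def bilinear_scores(X, A, W):
    """Score every (instance, algorithm) pair as x^T W a."""
    return X @ W @ A.T                            # shape: (n_instances, n_algorithms)

# Toy "ground-truth" performances used to rank algorithms per instance.
perf = rng.uniform(size=(n_instances, n_algorithms))

# One pass of a pairwise ranking update (hinge-style, assumed): push the score
# of the better algorithm above that of the worse one for each instance.
lr = 0.01
for i in range(n_instances):
    best, worst = np.argmax(perf[i]), np.argmin(perf[i])
    scores_i = bilinear_scores(X[i:i + 1], A, W)[0]
    if scores_i[best] - scores_i[worst] < 1.0:    # hinge margin violated
        W += lr * np.outer(X[i], A[best] - A[worst])

def a3r(acc, acc_ref, time, time_ref, p=1 / 64):
    """One common A3R-style formulation: accuracy ratio divided by a damped
    runtime ratio; larger is better. The exponent p is an assumption."""
    return (acc / acc_ref) / (time / time_ref) ** p

print(bilinear_scores(X, A, W)[:2])               # scores for the first two instances
print(a3r(acc=0.92, acc_ref=0.90, time=0.4, time_ref=1.0))
```

The bi-linear form x^T W a is what allows a single interaction matrix to serve all algorithms at once instead of training one model per algorithm, and the damping exponent in the A3R helper controls how strongly runtime (here, inference time) penalizes the accuracy ratio.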

Files

Yuan_etal_When_2020.pdf (1.35 MB, Adobe Portable Document Format)