Special Issue: Novel Approaches for Distributed Intelligent Systems

Block size, parallelism and predictive performance: finding the sweet spot in distributed learning

Pages 379-398 | Received 10 Feb 2023, Accepted 12 Jun 2023, Published online: 27 Jun 2023

Abstract

As distributed, multi-organization machine learning emerges, new challenges must be addressed, such as heterogeneous or low-quality data and real-time delivery. In this paper, we use a distributed learning environment to analyze the relationship between block size, parallelism, and predictor quality. Specifically, the goal is to find the optimal block size and the best heuristic for building distributed ensembles. We evaluate three heuristics and five block sizes on four publicly available datasets. The results show that using fewer but better base models matches or outperforms a standard Random Forest, and that 32 MB is the best block size.
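For intuition, below is a minimal sketch of the block-based ensemble idea the abstract describes, in a scikit-learn setting: the training data is split into fixed-size blocks, one base model is trained per block (the blocks are independent, so this step parallelizes naturally), and a simple validation-score heuristic keeps only the better base models. The row-count block size (a stand-in for byte-based sizes such as 32 MB), the keep_fraction parameter, the selection heuristic, and the function names train_block_ensemble and predict_majority are all illustrative assumptions, not the paper's actual method.

# Sketch: block-partitioned ensemble with a "fewer but better" selection
# heuristic. All parameters and the heuristic itself are assumptions; the
# paper's concrete heuristics and distributed runtime are not given here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def train_block_ensemble(X, y, block_rows, keep_fraction=0.5, seed=0):
    """Train one tree per data block, then keep only the best-scoring trees."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    # Split the shuffled data into fixed-size blocks (row counts stand in
    # for the paper's byte-based block sizes).
    blocks = [idx[i:i + block_rows] for i in range(0, len(X), block_rows)]

    scored = []
    for block in blocks:  # each iteration is independent -> parallelizable
        Xb, yb = X[block], y[block]
        X_tr, X_val, y_tr, y_val = train_test_split(
            Xb, yb, test_size=0.2, random_state=seed)
        tree = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
        scored.append((tree.score(X_val, y_val), tree))

    # "Fewer but better": retain only the top-scoring base models.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    n_keep = max(1, int(len(scored) * keep_fraction))
    return [tree for _, tree in scored[:n_keep]]


def predict_majority(trees, X):
    """Combine the kept trees by majority vote."""
    votes = np.stack([t.predict(X) for t in trees])
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)


if __name__ == "__main__":
    X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
    ensemble = train_block_ensemble(X, y, block_rows=2_000)
    acc = (predict_majority(ensemble, X) == y).mean()
    print("kept models:", len(ensemble), "train accuracy:", acc)

Under this scheme, the block size directly controls the trade-off the paper studies: smaller blocks yield more base models and more parallelism, but each model sees less data; larger blocks yield fewer, individually stronger models.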

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by FCT – Fundação para a Ciência e Tecnologia within projects [grant number UIDB/04728/2020], [grant number EXPL/CCI-COM/0706/2021], and [grant number CPCA-IAC/AV/475278/2022].

