DIOPT: Extremely Fast Classification Using Lookups and Optimal Feature Discretization
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). ORCID iD: 0000-0003-3461-7079
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013).
2020 (English). In: 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

For low-dimensional classification problems we propose the novel DIOPT approach, which constructs a discretized feature space. Predictions for all cells in this space are obtained from a reference classifier, and the class labels are stored in a lookup table generated by enumerating the complete space. This yields extremely high classification throughput, as inference consists only of discretizing the relevant features and reading the class label from the lookup table index corresponding to the concatenation of the discretized feature bin indices. Since the size of the lookup table is limited by memory constraints, the selection of optimal features and their respective discretization levels is paramount. We propose a supervised discretization approach that strives for maximal class separation of the discretized features, and further employ a purpose-built memetic algorithm to search for the optimal selection of features and discretization levels. The inference run time and classification accuracy of DIOPT are compared to benchmark random forest and decision tree classifiers on several publicly available data sets. Orders-of-magnitude improvements are recorded in classification run time, with insignificant or modest degradation in classification accuracy for many of the evaluated binary classification tasks.
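The lookup-table inference described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses uniform bins and a 1-nearest-neighbour stand-in as the reference classifier, whereas DIOPT optimizes feature selection and per-feature discretization levels with a supervised, class-separation-driven memetic algorithm and uses stronger reference models.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(1000, 2))   # two low-dimensional features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # toy binary labels

def ref_predict(q):
    """Stand-in reference classifier (1-nearest neighbour on the training set)."""
    return y[np.argmin(np.sum((X - q) ** 2, axis=1))]

# Uniform bin edges per feature, for illustration only.
n_bins = (16, 16)
edges = [np.linspace(0.0, 1.0, b + 1)[1:-1] for b in n_bins]

# Build the lookup table by enumerating every cell of the discretized
# feature space and storing the reference classifier's label for its centre.
centres = [np.linspace(0.0, 1.0, b + 1)[:-1] + 0.5 / b for b in n_bins]
grid = np.stack(np.meshgrid(*centres, indexing="ij"), axis=-1).reshape(-1, 2)
lut = np.array([ref_predict(c) for c in grid])

def diopt_predict(x):
    """Inference: discretize each feature, concatenate the bin indices
    into a flat table index, and do a single table read."""
    i = np.searchsorted(edges[0], x[0])
    j = np.searchsorted(edges[1], x[1])
    return lut[i * n_bins[1] + j]
```

Inference thus costs two binary searches and one memory read, independent of the reference model's complexity; the price is a table whose size grows as the product of the per-feature bin counts, which is why the feature and bin selection is the optimization target.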

Place, publisher, year, edition, pages
IEEE, 2020.
Series
IEEE International Joint Conference on Neural Networks (IJCNN), ISSN 2161-4393
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kau:diva-83709
DOI: 10.1109/IJCNN48605.2020.9207037
ISI: 000626021403072
Scopus ID: 2-s2.0-85089746323
ISBN: 978-1-7281-6926-2 (print)
OAI: oai:DiVA.org:kau-83709
DiVA id: diva2:1545308
Conference
International Joint Conference on Neural Networks (IJCNN), held as part of the IEEE World Congress on Computational Intelligence (IEEE WCCI), July 19-24, 2020, online
Available from: 2021-04-19. Created: 2021-04-19. Last updated: 2025-10-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Garcia, Johan; Korhonen, Topi

Search in DiVA

By author/editor
Garcia, Johan; Korhonen, Topi
By organisation
Department of Mathematics and Computer Science (from 2013)
Computer and Information Sciences

Search outside of DiVA

Google; Google Scholar
