Mixed Reality-Based 6D-Pose Annotation System for Robot Manipulation in Retail Environments
Karlstad University.
Ritsumeikan University, Japan.
Ritsumeikan University, Japan.
Panasonic Corporation, Japan.
2024 (English). In: Proceedings of the 2024 IEEE/SICE International Symposium on System Integration (SII), Institute of Electrical and Electronics Engineers (IEEE), 2024, pp. 1425-1432. Conference paper, published paper (refereed).
Abstract [en]

Robot manipulation in retail environments is a challenging task due to the need for large amounts of annotated data for accurate 6D-pose estimation of items. Onsite data collection, additional manual annotation, and model fine-tuning are often required when deploying robots in new environments, as varying lighting conditions, clutter, and occlusions can significantly diminish performance. We therefore propose a system that annotates the 6D pose of items using mixed reality (MR) to enhance the robustness of robot manipulation in retail environments. Our main contribution is a system that can display the 6D-pose estimation results of a trained model from multiple perspectives in MR and enable onsite (re-)annotation of incorrectly inferred item poses using hand gestures. In an extensive quantitative experiment, the proposed system is compared to a PC-based annotation system that uses a mouse and the robot camera’s point cloud. Our experimental results indicate that MR can increase the accuracy of pose annotation, in particular by reducing position errors.
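For readers unfamiliar with the terminology, the "6D pose" being annotated is a 3-DoF position plus a 3-DoF orientation, and the "position error" the abstract refers to is typically the Euclidean distance between an annotated and a ground-truth position. The sketch below is purely illustrative (the class and function names are hypothetical, not from the paper):

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6D:
    """A 6D pose: 3-DoF position plus 3-DoF orientation (unit quaternion)."""
    x: float   # position in metres
    y: float
    z: float
    qw: float  # orientation as a quaternion (w, x, y, z)
    qx: float
    qy: float
    qz: float

def position_error(a: Pose6D, b: Pose6D) -> float:
    """Euclidean distance between the two poses' positions."""
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

# Hypothetical ground-truth vs. annotated pose of one retail item.
gt = Pose6D(0.50, 0.20, 0.10, 1.0, 0.0, 0.0, 0.0)
annotated = Pose6D(0.52, 0.20, 0.11, 1.0, 0.0, 0.0, 0.0)
print(round(position_error(gt, annotated), 4))  # → 0.0224
```

A full evaluation of such a system would also measure orientation error (e.g. the angle between the two quaternions), but the abstract highlights position error as the metric MR annotation improves most.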

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024. p. 1425-1432
Keywords [en]
Mammals, Annotation systems, Data collection, Fine tuning, Large amounts, Lighting conditions, Manual annotation, Mixed reality, Pose-estimation, Robot manipulation, Varying lighting
National Category
Robotics and automation
Research subject
Electrical Engineering
Identifiers
URN: urn:nbn:se:kau:diva-99148
DOI: 10.1109/SII58957.2024.10417443
Scopus ID: 2-s2.0-85186266074
ISBN: 979-8-3503-1208-9 (print)
ISBN: 979-8-3503-1207-2 (electronic)
OAI: oai:DiVA.org:kau-99148
DiVA, id: diva2:1848444
Conference
IEEE/SICE International Symposium on System Integration, Ha Long, Vietnam, January 8-11, 2024.
Available from: 2024-04-03. Created: 2024-04-03. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Solis, Jorge

Organisation
Karlstad University, Department of Engineering and Physics (from 2013)