Riku is a Ph.D. student at the Human-Computer Interaction Institute (HCII), School of Computer Science, Carnegie Mellon University, advised by Prof. Mayank Goel. Before coming to CMU, he received his B.E. and M.S. at The University of Tokyo, where he worked with Prof. Hiroshi Saruwatari on DNN-based real-time voice conversion and with Prof. Masahiko Inami on auditory interventions for behavioral change. He has also worked as a research intern with Prof. Yang Zhang on mm-wave interactive sensing and with Dr. Fabrice Matulic on pen+tablet interaction.

His research focuses on developing and evaluating high-impact human-centered systems. He believes that ubiquitously sensing users' context and intervening at suitable moments are key to changing our daily lives in a pervasive manner. His recent work spans human behavior analysis, wearable sensing systems, human-AI collaboration, and persuasive technologies. He often adopts a wide spectrum of ML techniques to achieve smart multimodal interactions, including computer vision, speech recognition and synthesis, and natural language processing. He publishes at ACM CHI, CSCW, IUI, and other venues.

Curriculum Vitae (updated June 2022)
Google Scholar




The figure below maps my main work to date.

Peer-Reviewed Conferences & Journals

RGBDGaze: Gaze Tracking on Smartphones with RGB and Depth Data

Riku Arakawa, Mayank Goel, Chris Harrison, Karan Ahuja

ACM ICMI 2022 (To Appear)

VoLearn: A Cross-Modal Operable Motion-Learning System Combined with Virtual Avatar and Auditory Feedback

Chengshuo Xia, Xinrui Fang, Riku Arakawa, Yuta Sugiura

PACM IMWUT 2022 (Ubicomp)

Paper Video 

VocabEncounter: NMT-powered Vocabulary Learning by Presenting Computer-Generated Usages of Foreign Words into Users' Daily Lives

Riku Arakawa*, Hiromu Yakura*, Sosuke Kobayashi (*: equal contribution)

ACM CHI 2022

Paper Slide Video 

BeParrot: Efficient Interface for Transcribing Unclear Speech via Respeaking

Riku Arakawa*, Hiromu Yakura*, Masataka Goto (*: equal contribution)

ACM IUI 2022

Paper Slide Video Blog (Japanese)

Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation

Riku Arakawa, Zendai Kashino, Shinnosuke Takamichi, Adrien Verhulst, Masahiko Inami



Reaction or Speculation: Building Computational Support for Users in Catching-Up Series Based on an Emerging Media Consumption Phenomenon

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)


Paper Slide Blog (Japanese)

Mindless Attractor: A False-Positive Resistant Intervention for Drawing Attention Using Auditory Perturbation

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)

ACM CHI 2021 (Honorable Mention Award, top 5%)

Paper Video Blog (Japanese)

Hand with Sensing Sphere: Body-Centered Spatial Interactions with a Hand-Worn Spherical Camera

Riku Arakawa, Azumi Maekawa, Zendai Kashino, Masahiko Inami

ACM SUI 2020

Paper Video

INWARD: A Computer-Supported Tool for Video-Reflection Improves Efficiency and Effectiveness in Executive Coaching

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)

ACM CHI 2020

Paper Slide Blog (Japanese)

PenSight: Enhanced Interaction with a Pen-Top Camera

Fabrice Matulic, Riku Arakawa, Brian Vogel, Daniel Vogel

ACM CHI 2020 (Best Paper Award, top 1%)

Paper Video

REsCUE: A framework for REal-time feedback on behavioral CUEs using multimodal anomaly detection

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)

ACM CHI 2019, Travel Grant by NEC C&C Foundation

Paper Slide Blog (Japanese)

Workshops, Posters, and Demos

Human-AI communication for human-human communication: Applying interpretable unsupervised anomaly detection to executive coaching

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)

Communication in Human-AI Interaction Workshop (CHAI’22) at IJCAI 2022

Paper Slide

AI for human assessment: What do professional assessors need?

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)

Workshop on Trust and Reliance in AI-Human Teams (TRAIT’22) at CHI 2022

Paper Poster

Low-Cost Millimeter-Wave Interactive Sensing through Origami Reflectors

Riku Arakawa, Yang Zhang

1st CHIIoT Workshop (Best Paper Award)

Paper Video

Exploration of Reinforcement Learning for Event Camera using Car-like Robots

Riku Arakawa*, Shintaro Shiba* (*: equal contribution)

IEEE ICRA 2020 Unconventional Sensors in Robotics workshop

Paper Video

BulkScreen: Saliency-Based Automatic Shape Representation of Digital Images with a Vertical Pin-Array Screen

Riku Arakawa*, Yudai Tanaka*, Hiromu Kawarasaki, Kiyosu Maeda (*: equal contribution)

ACM TEI 2020 Work-in-Progress

Paper Video

TransVoice: Real-Time Voice Conversion for Augmenting Near-Field Speech Communication

Riku Arakawa, Shinnosuke Takamichi, Hiroshi Saruwatari

ACM UIST 2019 Poster

Paper Poster

Implementation of DNN-based real-time voice conversion and its improvements by audio data augmentation and mask-shaped device

Riku Arakawa, Shinnosuke Takamichi, Hiroshi Saruwatari

The 10th ISCA Speech Synthesis Workshop

Paper Video Poster

DQN-TAMER: Human-in-the-Loop Reinforcement Learning with Intractable Feedback

Riku Arakawa, Sosuke Kobayashi, Yuya Unno, Yuta Tsuboi, Shin-ichi Maeda

IEEE ICRA 2019 RT-DUNE workshop (Extended Abstract)

Paper Poster

Honors and Awards

Scholarship and Grant-in-Aid

Academic Honors and Awards

Programming & Science Competition Awards

Other Awards

Talks (selected)

I am happy to give talks, especially to junior high and high school students. This stems from my own experience of discovering the joy of science and technology at seminars I attended while in high school.

Media Coverage


[first name].[family name]1996 [at]