Riku is a first-year Ph.D. student at the Human-Computer Interaction Institute (HCII), School of Computer Science, Carnegie Mellon University. He is advised by Prof. Mayank Goel.

Riku received his B.E. and M.S. at The University of Tokyo, where he worked with Prof. Hiroshi Saruwatari on DNN-based real-time voice conversion and with Prof. Masahiko Inami on auditory intervention for behavioral change. He also worked as a research intern with Prof. Yang Zhang on mm-wave interactive sensing, and with Dr. Fabrice Matulic on pen+tablet interaction.

His research focuses on developing and evaluating high-impact, human-centered systems. He believes that interactions which ubiquitously sense users’ context and intervene appropriately are key to changing our daily lives in a pervasive manner. His recent work spans human behavior analysis, wearable sensing systems, human-AI collaboration, and persuasive technologies. He often adopts a wide spectrum of ML techniques to achieve smart interactions, such as computer vision, speech synthesis, and natural language processing. He publishes papers at ACM CHI, CSCW, and other venues.

Curriculum Vitae (updated 2021.09)
Google Scholar




More projects are on the way.

Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation

Riku Arakawa, Zendai Kashino, Shinnosuke Takamichi, Adrien Verhulst, Masahiko Inami



Reaction or Speculation: Building Computational Support for Users in Catching-Up Series Based on an Emerging Media Consumption Phenomenon

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)


Paper Blog (Japanese)

Mindless Attractor: A False-Positive Resistant Intervention for Drawing Attention Using Auditory Perturbation

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)

ACM CHI 2021 (Honorable Mention Award, top 5%)

Paper Video Blog (Japanese)

Independent Control of Supernumerary Appendages Exploiting Upper Limb Redundancy

Hideki Shimobayashi, Tomoya Sasaki, Arata Horie, Riku Arakawa, Zendai Kashino, Masahiko Inami

Augmented Humans 2021

Paper Video

Hand with Sensing Sphere: Body-Centered Spatial Interactions with a Hand-Worn Spherical Camera

Riku Arakawa, Azumi Maekawa, Zendai Kashino, Masahiko Inami

ACM SUI 2020

Paper Video

Mimicker-in-the-Browser: A Novel Interaction Using Mimicry to Augment the Browsing Experience

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)



INWARD: A Computer-Supported Tool for Video-Reflection Improves Efficiency and Effectiveness in Executive Coaching

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)

ACM CHI 2020

Paper Slide Blog (Japanese)

PenSight: Enhanced Interaction with a Pen-Top Camera

Fabrice Matulic, Riku Arakawa, Brian Vogel, Daniel Vogel

ACM CHI 2020 (Best Paper Award, top 1%)

Paper Video

REsCUE: A framework for REal-time feedback on behavioral CUEs using multimodal anomaly detection

Riku Arakawa*, Hiromu Yakura* (*: equal contribution)

ACM CHI 2019, Travel Grant by NEC C&C Foundation

Paper Slide Blog (Japanese)

Workshops, Posters, and Demos

Low-Cost Millimeter-Wave Interactive Sensing through Origami Reflectors

Riku Arakawa, Yang Zhang

1st CHIIoT Workshop (Best Paper Award)

Paper Video

Exploration of Reinforcement Learning for Event Camera using Car-like Robots

Riku Arakawa*, Shintaro Shiba* (*: equal contribution)

IEEE ICRA 2020 Unconventional Sensors in Robotics workshop

Paper Video

BulkScreen: Saliency-Based Automatic Shape Representation of Digital Images with a Vertical Pin-Array Screen

Riku Arakawa*, Yudai Tanaka*, Hiromu Kawarasaki, Kiyosu Maeda (*: equal contribution)

ACM TEI 2020 Work-in-Progress

Paper Video

TransVoice: Real-Time Voice Conversion for Augmenting Near-Field Speech Communication

Riku Arakawa, Shinnosuke Takamichi, Hiroshi Saruwatari

ACM UIST 2019 Poster

Paper Poster

Implementation of DNN-based real-time voice conversion and its improvements by audio data augmentation and mask-shaped device

Riku Arakawa, Shinnosuke Takamichi, Hiroshi Saruwatari

The 10th ISCA Speech Synthesis Workshop

Paper Video Poster

DQN-TAMER: Human-in-the-Loop Reinforcement Learning with Intractable Feedback

Riku Arakawa, Sosuke Kobayashi, Yuya Unno, Yuta Tsuboi, Shin-ichi Maeda

IEEE ICRA 2019 RT-DUNE workshop (Extended Abstract)

Paper Poster


Exhibiting is a good opportunity to gain insight, so I actively showcase what we have created.

Kininaruki - Be in"tree"sted in

International Virtual Reality Contest 2019 at SCIENCE AGORA, Tokyo, 2 days (Laval Virtual Award)

Also exhibited at Laval Virtual 2020 (held online)

Paper Video


SXSW 2019, Austin, 4 days



I like writing code and developing products as a team, so I actively undertake work and start projects with my friends.

European Data Incubator (2020)

Analysis of missing data in smart meter readings (my role: lead data scientist).
Funded under the European Union’s Horizon 2020 programme.

White Paper Project

Juken 7 calculation practice apps (2019)

Entrusted development: link
iOS applications with math questions for exam preparation. Android versions are in development.
Two members (my role: project management, software development).

JEPX Price Checker (2018)

Entrusted development: link
A service that lets you see at a glance the power trading prices on the Japan Electric Power Exchange (JEPX) and receive them daily via email.
Three members (my role: project management, server development).

Event Camera Library (2018)

IPA Mitou Advanced Program.
Open-source C++ library for handling event cameras.
Two core members and other contributors.


Your Pacifier (2017)

Smart pacifier with a built-in humidity sensor for babies and their parents.
Stanford Health Hackathon, 3rd prize & design prize
James Dyson Award 2018, 1st prize in Japan
Six members.

Paper Video Project

LoverDuck (2017)

Duck-shaped device that detects anomalies in the bathtub.
JPHacks 2017 Innovator Award & three company prizes
Five members.

Slide Video 

Honors and Awards

Scholarship and Grant-in-Aid

Academic Honors and Awards

Programming & Science Competition Awards

Other Awards

Invited Talks (selected)

I am happy to give talks, especially to junior and senior high school students. This stems from my personal experience of being exposed to the joy of science and technology at seminars I attended in high school.



[first name].[family name]1996 [at]