Figure 4. Processing procedure for data collected in
gesture-recognition experiment. (ML: machine learning; SVM:
support-vector machine)
For gesture recognition, the proposed GRW with three NFPSUs was placed
on the tester’s wrist to obtain mechanical signals related to hand
gestures. Data-processing methods were employed to calculate the
characteristics of each gesture signal, as shown in Figure 4.
Light was launched into the three sensors and collected using a CMOS
camera. The time-varying output data—in the form of a CMOS image
(1280×720 pixels)—were then transferred to and processed by a
computer. By extracting the change in the gray level of the CMOS images
over time, we obtained the time-domain output light intensities of all
three sensor channels as they varied with different gestures. (The
time-domain signal was automatically segmented using a
change-point-detection algorithm or a threshold-based method.)
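The gray-level extraction and threshold-based segmentation described above could be sketched as follows. This is a minimal illustration using NumPy, not the authors' code: the frame array layout, the region-of-interest convention, and the function names are all assumptions.

```python
import numpy as np

def intensity_trace(frames, roi):
    """Mean gray level of one sensor's region of interest in each frame.

    frames: (n_frames, height, width) array of grayscale CMOS images
    roi:    (row_slice, col_slice) covering one sensor's output spot
    """
    rows, cols = roi
    return frames[:, rows, cols].mean(axis=(1, 2))

def segment_by_threshold(trace, baseline, threshold, pad=5):
    """Return (start, stop) frame indices where the trace deviates from
    the baseline by more than `threshold`, padded by a few frames."""
    active = np.abs(trace - baseline) > threshold
    idx = np.flatnonzero(active)
    if idx.size == 0:
        return None  # no gesture detected in this window
    start = max(idx[0] - pad, 0)
    stop = min(idx[-1] + pad + 1, len(trace))
    return start, stop
```

A change-point detector (e.g., one that flags jumps in the local mean) could replace `segment_by_threshold` when the baseline drifts, which is presumably why the authors mention both options.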
Subsequently, data consolidation was achieved by an end-to-end merge of
the time-domain data obtained from the three sensors. Each consolidated
record was then labeled with its gesture and collected into a dataset of
integrated time-domain signals containing all gestures and their
corresponding labels. Because some degree of random motion is
inevitable for a GRW when a person is wearing it, a machine learning
algorithm for support-vector classification (SVC) was introduced to
relearn the tester’s gestures every time the wearing condition changed.
In this study, an SVM classification model was trained using the
consolidated database and was subsequently used as a classifier to
detect gestures. Once the real-time gesture-related data were collected,
the trained SVC model was used to classify the real-time data and return
the predicted gesture.
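The consolidation, training, and prediction steps could look like the following sketch, using scikit-learn's `SVC`. The synthetic traces, trace length, and gesture count are illustrative assumptions standing in for the real NFPSU data.

```python
import numpy as np
from sklearn.svm import SVC

def consolidate(ch1, ch2, ch3):
    """End-to-end merge of the three sensor traces into one feature vector."""
    return np.concatenate([ch1, ch2, ch3])

# Illustrative dataset: several recordings per gesture, fixed trace length.
rng = np.random.default_rng(0)
n_per_gesture, trace_len, n_gestures = 20, 50, 3
X, y = [], []
for g in range(n_gestures):
    for _ in range(n_per_gesture):
        # Synthetic stand-in for the three NFPSU traces of gesture g
        chans = [g + 0.1 * rng.standard_normal(trace_len) for _ in range(3)]
        X.append(consolidate(*chans))
        y.append(g)
X, y = np.array(X), np.array(y)

# Retrained from the consolidated dataset whenever the wearing
# condition changes, as described in the text.
clf = SVC(kernel="rbf")
clf.fit(X, y)

# Real-time prediction on a newly segmented, consolidated trace
trace = 1 + 0.1 * rng.standard_normal(trace_len)
new_sample = consolidate(trace, trace, trace)
predicted_gesture = clf.predict(new_sample.reshape(1, -1))[0]
```

Retraining on each re-wearing keeps the classifier matched to the current sensor placement, which is cheap here because the training set is small.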
Figure 5a shows the mechanical signals obtained by our GRW for
twelve fundamental gestures. The effects of different gestures on the
output intensity are readily visible. The cross section of the wrist was
altered by the gesture-related movement of even a single tendon, and the
wearing conditions of the GRW changed accordingly. Even for similar
gestures (e.g., Gestures 1 and 2), notable differences were observed in
the corresponding time-varying output of NFPSU 2; this can be attributed
to the high sensitivity of the NFPSUs. Although introducing a
machine-learning algorithm effectively addresses the inevitable problem
of random wristband motion, disturbances in the GRW sensor outputs
caused by sliding between the sensors and the skin surface may occur
during long-term wear, reducing the recognition accuracy. However, the
position-independent response of the NFPSU means that the effects of
sliding on the results are insignificant. Consequently, a stable output
of the NFPSUs during long-term wear is achieved even with very few
sensors.