A. Liu Cheng, H. Bier, and G. Latorre publish a paper on Actuation Confirmation and Negation via Facial-Identity and -Expression Recognition

Alexander Liu Cheng, Henriette Bier, and Galoget Latorre publish a paper on "Actuation Confirmation and Negation via Facial-Identity and -Expression Recognition" on the occasion of the IEEE 3rd Ecuador Technical Chapters Meeting 2018 (http://sites.ieee.org/etcm-2018/) conference held in Cuenca, Ecuador (15-19 Oct. 2018).

This paper presents the implementation of a facial-identity and -expression recognition mechanism that confirms or negates physical and/or computational actuations in an intelligent built-environment. The mechanism is built via Google Brain’s TensorFlow (for facial-identity recognition) and Google Cloud Platform’s Cloud Vision API (for facial-gesture recognition), and it is integrated into the ongoing development of an intelligent built-environment framework, viz. Design-to-Robotic-Production & -Operation (D2RP&O), conceived at Delft University of Technology (TUD). The present work builds on the inherited technological ecosystem and technical functionality of the Design-to-Robotic-Operation (D2RO) component of said framework, and its implementation is validated via two scenarios, one physical and one computational.

In the first scenario, which builds on an inherited adaptive mechanism, if building-skin components perceive a rise in interior temperature, natural ventilation is promoted by increasing their degree of aperture. This measure is confirmed or negated by a corresponding facial expression on the part of the user, which serves as an intuitive override / feedback channel for the intelligent building-skin mechanism’s decision-making process.

In the second scenario, which builds on another inherited mechanism, if an accidental fall is detected and the user remains collapsed, whether conscious or unconscious, a series of automated emergency notifications (e.g., SMS, email) is sent to family members and/or caretakers by dedicated mechanisms in the intelligent built-environment. The precision of this measure and its execution are confirmed by (a) identity detection of the victim and (b) recognition of a reflexive facial gesture of pain and/or displeasure.
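To illustrate the confirmation/negation logic described above, the following is a minimal, hypothetical sketch in Python. It assumes expression signals graded on the likelihood scale that Google Cloud Vision's face detection returns (e.g., `joy_likelihood`, `sorrow_likelihood`, `anger_likelihood`); the specific thresholds, function names, and decision rules here are illustrative assumptions, not the paper's published implementation.

```python
from enum import IntEnum


class Likelihood(IntEnum):
    # Mirrors the ordering of Cloud Vision's Likelihood enum values.
    UNKNOWN = 0
    VERY_UNLIKELY = 1
    UNLIKELY = 2
    POSSIBLE = 3
    LIKELY = 4
    VERY_LIKELY = 5


def confirm_or_negate(joy: Likelihood,
                      sorrow: Likelihood,
                      anger: Likelihood) -> str:
    """First scenario (hypothetical rule): interpret the user's facial
    expression as feedback on a pending actuation, e.g., a building-skin
    aperture change. A clearly positive expression confirms it, a clearly
    negative one negates it, and anything else leaves it pending."""
    if joy >= Likelihood.LIKELY:
        return "confirm"
    if sorrow >= Likelihood.LIKELY or anger >= Likelihood.LIKELY:
        return "negate"
    return "pending"


def authorize_emergency_notification(identity_match: bool,
                                     pain_gesture: Likelihood) -> bool:
    """Second scenario (hypothetical rule): dispatch automated emergency
    notifications only when (a) the collapsed person's identity is
    recognized AND (b) a reflexive gesture of pain/displeasure is seen."""
    return identity_match and pain_gesture >= Likelihood.POSSIBLE
```

In a deployment, the likelihood values would come from the Cloud Vision face-detection response for a frame captured by the environment's camera, while `identity_match` would come from the TensorFlow-based identity recognizer; the two-factor check in the second function reflects the paper's requirement that both identity and expression confirm the emergency dispatch.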
The work presented in this paper promotes a considered relationship between the architecture of the built-environment and the Information and Communication Technologies (ICTs) embedded and/or deployed within it.