Experiment Model for Brain Signals in HCI

Model Overview

The model we discuss here was created in an iterative process during initial passes over the surveyed literature: a superset of reported attributes was created, similar concepts (or identical concepts with different names) were grouped together, and categories were defined in accordance with the typical section structure of the papers. This process resulted in the following categories: Technical aspects of recording, Task description, Participants, Experiment flow, Data processing, and Brain signal integration.

In the tables below, we present the experiment model for HCI research with brain signals. We describe each category and list the attributes contained within it. For each attribute, we give a definition and present an example taken from the surveyed literature. We tried to select examples that are biased towards more detailed documentation, but individual examples may still lack certain information.

We invite the community to contribute to future versions of the experiment model.

Technical Aspects of Recording

| Attribute | Description | Example |
| --- | --- | --- |
| Type of Sensor | For a given brain sensing modality, report the manufacturer and the specification of the sensor chain employed. | “The EEG was recorded using a NeuroScan system with 32 Ag-AgCl electrodes” (Lee et al., 2014) |
| Sensor Position | Report where on the scalp electrodes are positioned. For EEG, this is most often done in terms of the 10-20 positioning system or its refinements. For fNIRS, the placement of transmitters and receivers has to be distinguished and the respective distances need to be reported. | “Electrodes were positioned according to the extended 10-20 system on CPz, POz, Oz, Iz, O1 and O2” (missing reference) |
| Sampling Rate | Report the number of samples recorded per second in Hz. | “A sampling rate of EEG signals was set as 300 Hz.” (Terasawa et al., 2017) |
| Measurement Quality | For EEG, the threshold for the maximum impedance level (in kΩ) is often reported. For fNIRS, no standardized quality measurement exists; different devices provide different ways of measurement (e.g., photon count). | “electrode impedance was below 5 KΩ” (Vi et al., 2014) |
| Reference | Specific to the EEG signal, it is customary to report the electrode to which the recording was referenced. | “Two electrodes were located at both earlobes as reference and ground.” (Terasawa et al., 2017) |
| Auxiliary signals | The brain sensing modality may not be the only signal captured during the experiment. Often, other sensors, such as eye trackers or heart rate monitors, are employed. For these, similar information as for the brain sensing modality may be reported, especially the specific type of sensor and its placement. | “Eye positions were measured with an embedded infrared eye-tracking module: aGlass DKI from 7invensun (https://www.7invensun.com)” (Ma et al., 2018) |
| Synchronization with stimuli and other signals | To analyze a continuous stream of brain signal data, it needs to be synchronized with the events of the experiment (and potentially any other signal sources). This can be done through timestamps, trigger signals, light sensors, or other means, and the method may be reported to determine the precision of the achieved synchronization (see the sketch below this table). | “A parallel port connection between recording PC and experimental PC synchronized the EEG recording with the experimental events, such as the sound onset and button press.” (Glatz et al., 2018) |
| Recording Environment | This attribute reports where and under what conditions the experiment was conducted. Relevant conditions include the control of light, sound, and electromagnetic fields, as well as the positioning of the participant. This attribute is often illustrated through a photo or video of the environment. | “#Scanners was presented in an intimate 6 person capacity cinema, within a caravan […]. The space had no windows, low lighting, plush seating, an eight foot projected image, and stereo speakers. Figure 3 [shows a] participant wearing the EEG device, experiencing #Scanners inside the caravan.” (Pike et al., 2016) |
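
Synchronization is often implemented in software. As a concrete illustration, the following Python sketch aligns an EEG stream with experiment event markers on a common clock using Lab Streaming Layer (LSL) timestamps. This is an illustrative assumption, not the parallel-port method of the quoted paper: it presumes that both the amplifier and the experiment software publish LSL streams of type `EEG` and `Markers`, and uses the `pylsl` client.

```python
# Hypothetical sketch: timestamp-based synchronization of an EEG stream and
# an event marker stream via Lab Streaming Layer (assumed setup, not the
# parallel-port trigger method quoted above).
from pylsl import StreamInlet, resolve_stream

eeg_inlet = StreamInlet(resolve_stream('type', 'EEG')[0])
marker_inlet = StreamInlet(resolve_stream('type', 'Markers')[0])

samples, events = [], []
while len(events) < 10:  # record until ten experiment events have arrived
    sample, ts = eeg_inlet.pull_sample(timeout=0.0)
    if sample is not None:
        # time_correction() maps the remote stream clock onto the local clock,
        # which is what makes EEG samples and event markers comparable
        samples.append((ts + eeg_inlet.time_correction(), sample))
    event, ets = marker_inlet.pull_sample(timeout=0.0)
    if event is not None:
        events.append((ets + marker_inlet.time_correction(), event[0]))
```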

Task Description

| Attribute | Description | Example |
| --- | --- | --- |
| Participant Restraints | This attribute relates to any instructions or physical restraints that were in place during the experiment to avoid artifacts or other undesired effects influencing the signal. | “the participants were instructed to refrain from excessive movement by keeping arms at rest on the table in a position that allowed them to reach the keyboard without excessive movement.” (Crk et al., 2015) |
| Output devices | Describes through which devices (e.g., computer screen, mobile phone, etc.) information and material are communicated to the user. As many brain activity patterns are sensitive to the specific characteristics of the stimulation, details of the presentation may be reported. | “The […] game stimulus was run on a powerful high-end gaming PC (CPU: Intel® Core™ i7-6850K @ 3.60 GHz; RAM: 32 GB; GPU: NVIDIA Geforce GTX 1080) and displayed on a 27-inch BenQ ZOWIE XL2720 144 hz gaming monitor at a 1920x1080 resolution.” (Terkildsen & Makransky, 2019) |
| Input devices | Describes through which devices (besides the brain signal itself) the user communicates commands and other types of input to the system. | “HMD-mounted Leap Motion (https://www.leapmotion.com) to track participants’ hands.” (Škola & Liarokapis, 2019) |
| User Input | Describes which kinds of commands and input users can enter into the system at which point of the task. May specify which input devices are used and whether there are any requirements or restrictions on the input. | “They had to respond to auditory notifications whenever one was presented, with a button press using either their left or right index fingers. Six notifications (i.e., 3 complementary pairs of verbal commands and auditory icons) were pre-assigned to a left index-finger press and the remaining six, to a right index-finger press.” (Glatz et al., 2018) |
| Middleware/Communication | For interactive applications or distributed recording setups, this attribute reports how the different parts communicate to exchange data, triggers, commands, etc. | “We wrote a custom Java bridge program to connect the headset to the Android OS and Unity application on the Game tablet. The Java program polled the headset 60 times a second for EEG power spectrum […] We connected the Calibrate tablet to the Game tablet using WiFi Direct […].” (Antle et al., 2018) |
| Framework/Technical platform | Reports what software or development toolkit (in what version) was used as the foundation to implement the task (e.g., PsychoPy, Unity, etc.). | “The scene was developed using Unity version 2017.3.0f3, for the representation of hands, the realistically looking hand models “Pepper Hands” from Leap Motion suite were used (visible in Figure 3).” (Škola & Liarokapis, 2019) |
| Task Functionality | Reports what functionality the involved software provides to the user (in the case of a working interactive application) and how it responds to different user input. For experiments which are based on or inspired by established paradigms (e.g., from cognitive psychology), this source may be reported (e.g., in reference to a source such as the Cognitive Atlas). | “the main task for the study is a multi-robot version of the task introduced in [27]. Participants remotely supervised two robots (the blue robot and the red robot) that were exploring different areas of a virtual environment. Participants were told that the two robots had collected information that needed to be transmitted back to the control center. [continues…]” (Solovey et al., 2012) |
| Architecture | For experiments which involve non-trivial custom software artifacts, this attribute reports the underlying software architecture, informing about the structure of and information flow between modules. | |
| Stimulus Material | For tasks which involve the repeated presentation of uniform stimuli (e.g., pictures to rate, text prompts to enter, etc.), this attribute reports the form of these stimuli (e.g., picture size, length, language, etc.) and their source. | “To prepare experimental materials, a dataset of notifications from the websites Notification Sounds and Appraw were collected. Seven musically trained raters were recruited to determine the melody complexity of the 40 notifications. […] (The stimuli can be downloaded at https://goo.gl/SnZrzG).” (Cherng et al., 2019) |
| Visualization provided? | This attribute reports visually (through screenshots or video) the task as shown to the user. If the task has multiple distinct parts, all of them may be visualized. If the task is not in English, the visualization may be accompanied by a translation. | |
| Timing | Reports on if and how the task is (partially) paced by an internal clock, for example for controlling the duration of stimulus presentation or the time for responding to a prompt (see the sketch below this table). | “Each trial began with a black screen for 3s, followed by a fixation dot in the center of the screen for 200ms. After that, the screen remains clear for 200ms before one of four stimuli was displayed for 300ms.” (Vi et al., 2014) |
| Code for task provided? | Reports on whether the task is provided as source code or an executable file and under what license. If custom hardware is involved, this could also include a blueprint or a circuit diagram. | |
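
The quoted Timing example translates almost directly into code. Below is a minimal PsychoPy sketch (PsychoPy is one of the frameworks named under Framework/Technical platform) reproducing that trial structure. The stimulus itself is a hypothetical placeholder, and for frame-accurate timing one would count `win.flip()` calls rather than use `core.wait()`.

```python
# Minimal sketch of the paced trial quoted from Vi et al. (2014): 3 s black
# screen, 200 ms fixation dot, 200 ms blank, 300 ms stimulus. The stimulus
# content and the trial count are placeholders, not the original materials.
from psychopy import visual, core

win = visual.Window(fullscr=True, color='black')
fixation = visual.Circle(win, radius=0.01, fillColor='white', lineColor='white')
stimulus = visual.TextStim(win, text='X')  # placeholder for one of four stimuli

for _ in range(10):          # number of trials is arbitrary here
    win.flip()               # black screen
    core.wait(3.0)
    fixation.draw()
    win.flip()               # fixation dot
    core.wait(0.2)
    win.flip()               # blank screen
    core.wait(0.2)
    stimulus.draw()
    win.flip()               # stimulus
    core.wait(0.3)
win.close()
```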

Participants

| Attribute | Description | Example |
| --- | --- | --- |
| Recruitment strategy | How were study participants recruited, e.g., through social media, in class, etc.? | “A snowball procedure was used to gather the sample of study participants. The study was advertised via university courses, email and social media.” (Johnson et al., 2015) |
| Incentives | What compensation (if any) was offered to study participants, e.g., money, class credit, etc.? What were the criteria for being eligible for the compensation? | “Participants received monetary compensation for their participation (10 Euro).” (Putze et al., 2017) |
| Age | How old are the participants (mean and standard deviation)? | “mean age 24.53 (SD: 3.00)” (Frey et al., 2016) |
| Gender | With what gender do participants identify (relative frequencies)? | “2 females and 9 males” (Ma et al., 2018) |
| Occupation | What is the profession or - in case of students - the field of study of the participants? | “Data were collected from 34 computer science undergraduates at the first two authors’ institution” (Crk et al., 2015) |
| Inclusion or exclusion criteria | Were there rules on which participants were eligible to take part in the experiment, and what were these criteria (e.g., handedness, disabilities, caffeine consumption, etc.)? | “Each of the individuals was enrolled in at least one computer science course” (Crk et al., 2015) |
| Approval of ethics committee | Was the study approved by an ethics committee? If so, by which one? | “the experimental protocol was approved by the University Research Ethics Committee prior to data collection.” (Burns & Fairclough, 2015) |

Experiment Flow

Data Processing

| Attribute | Description | Example |
| --- | --- | --- |
| Derivation of labels | Outside of neurofeedback applications, the recorded brain signal data is assigned to one of multiple groups or given a continuous value. This attribute may report how the label is derived from the collected data (e.g., defined by the experiment structure, by questionnaire responses, or by external ratings). | “We considered the mean of the three NASA-TLX parameters (effort, mental demand and frustration) to evaluate the overall mental workload. The average score was thresholded at the mean value of 2 (since the used scale was 0–4) to quantize or characterize a parameter block as inducing low/high workload.” (Bilalpur et al., 2018) |
| Data transformation | This attribute refers to all processing steps which transform raw data while keeping it in the original time-domain representation. Examples of such transformation steps are re-referencing, baseline normalization, downsampling, etc. | “the common average was subtracted from all EEG channels.” (Lampe et al., 2014) |
| Filtering | This attribute reports any filtering of the data. This may include the type of filter applied as well as necessary parameters, such as the filter order. | “EEG data was first low-pass filtered with a cutoff frequency of 50hz and high-pass filtered with a cutoff frequency of 0.16hz, both using a third-order butterworth filter” (Rodrigue et al., 2015) |
| Windowing | This attribute reports how segments of data are aligned (e.g., locked to an event in the experiment), how long they are, and with which window function they are extracted (the first sketch below this table illustrates the transformation, filtering, and windowing steps). | “The data was then segmented into 1.5-second epochs, overlapping each previous epoch by 50%” (Rodrigue et al., 2015) |
| Artifact cleaning | Reports through which algorithms (beyond filtering) artifacts were removed and which artifacts were targeted. | “Independent Component Analysis (ICA) is applied […] The components are first filtered using a band-pass filter with cut off frequencies 1 - 6 Hz. Choosing the component with the highest energy [and] applying a high-pass filter with a cut off frequency of 20 Hz.” (Jarvis et al., 2011) |
| Hyperparameter optimization | For machine learning models, this attribute reports how the hyperparameters of the model were chosen (e.g., through grid search) and which hyperparameters were chosen in the final model. This also includes other parameters of the processing pipeline which are optimized (e.g., in preprocessing). | “A grid search was performed to optimize sigma for all participants, the remaining parameters were left as default.” (Rodrigue et al., 2015) |
| Outlier handling | Reports any methods for excluding certain samples, windows, or sessions based on the contained data or other external factors. | “Any rest or trial period with 20 percent or higher error rate is considered noisy and can be excluded from the analysis” (Crk et al., 2015) |
| Feature extraction | This attribute reports on how a feature vector for classification or regression is calculated from the preprocessed data. | “[W]e partitioned each data window into smaller segments of 50 ms length. We then used the signal mean of the segment, calculated on the band-pass filtered signal, with cutoff frequencies at 4 and 13 Hz (i.e. θ- and α-bands).” (Putze et al., 2017) |
| Feature selection | This attribute reports on procedures to reduce the number of features automatically. | “we performed a feature selection using the Fisher ratio as selection criterion. The number k of selected features […] was a tuning parameter in the range between 5 and 50.” (Putze et al., 2017) |
| Learning model | This attribute reports on the specific machine learning model that is employed (if any) to perform classification or regression. | “the Neural Network Toolbox of MATLAB was used to create an artificial neural network (ANN) with 198 inputs, 20 hidden neurons and 4 outputs. The patternnet function, which creates a feed-forward neural network, was used. […]” (Lampe et al., 2014) |
| Evaluation procedure | For machine learning models, this attribute reports how they were evaluated to assess their performance. This involves the exact metric used for assessment as well as the approach to (sometimes repeatedly) determine test and training data sets (the second sketch below this table illustrates feature selection, hyperparameter optimization, and cross-validated evaluation). | “To assess the classifiers’ performance on the calibration data, we used 4-fold cross-validation (CV). […] The performance was measured using the area under the receiver-operating characteristic curve (AUROCC).” (Frey et al., 2016) |
| Processing code provided? | Reports if the code for processing the brain signal data is released with the paper or in a separate repository. If the code cannot be provided, it is possible, as a substitute, to report the employed frameworks (e.g., EEGLAB). | “The full classification pipeline is implemented in Python. For EEG processing, we use the MNE toolbox [17]. For machine learning and evaluation algorithms, we use scikit [28] and custom routines build on numpy and scipy.” (Putze et al., 2017) |
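
To make the preprocessing attributes concrete, here is a minimal sketch of the data transformation, filtering, and windowing steps in MNE-Python (the toolbox named in the Putze et al. example). The data is simulated and the parameter values are borrowed from the quoted examples; this is an illustration, not any paper's actual pipeline.

```python
# Hypothetical preprocessing sketch with MNE-Python on simulated data.
# Parameters mirror the quoted examples: common average reference,
# 0.16-50 Hz band-pass, 1.5 s windows with 50% overlap.
import numpy as np
import mne

sfreq = 300.0  # Hz, sampling rate as in Terasawa et al. (2017)
info = mne.create_info(ch_names=['Cz', 'Pz', 'Oz'], sfreq=sfreq, ch_types='eeg')
raw = mne.io.RawArray(np.random.randn(3, int(sfreq * 60)) * 1e-5, info)

raw.set_eeg_reference('average')        # data transformation: re-referencing
raw.filter(l_freq=0.16, h_freq=50.0)    # filtering: band-pass
epochs = mne.make_fixed_length_epochs(raw, duration=1.5, overlap=0.75)  # windowing
X = epochs.get_data()                   # shape: (n_windows, n_channels, n_samples)
```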
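Similarly, feature selection, hyperparameter optimization, and the evaluation procedure can be sketched with scikit-learn. The feature matrix, the choice of an SVM, and all parameter grids below are assumptions made for illustration; the quoted papers used their own models and settings (k-best selection in a 5-50 range, grid search, 4-fold CV, AUROCC).

```python
# Hypothetical evaluation sketch with scikit-learn on random data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X = np.random.randn(120, 60)       # assumed: one feature vector per window
y = np.random.randint(0, 2, 120)   # assumed: one binary label per window

pipe = Pipeline([
    ('select', SelectKBest(f_classif)),  # ANOVA F-score, akin to a Fisher ratio
    ('clf', SVC(kernel='rbf')),
])
# Hyperparameter optimization: grid search over the number of features k and
# the RBF width, nested in the pipeline so selection is refit per training fold.
grid = GridSearchCV(pipe,
                    {'select__k': [5, 20, 50], 'clf__gamma': ['scale', 0.01]},
                    scoring='roc_auc', cv=4)
# Evaluation procedure: outer 4-fold cross-validation scored with AUROCC.
scores = cross_val_score(grid, X, y, scoring='roc_auc', cv=4)
print('AUROCC: %.2f +/- %.2f' % (scores.mean(), scores.std()))
```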

Brain Signal Integration

| Attribute | Description | Example |
| --- | --- | --- |
| Brain Input effect | This attribute describes how the output of the brain input processing influences the design, the behavior, or the content of the application or experimental paradigm. | “when the system was confident that the user was in a state of low or high workload, one UAV would be added or removed, respectively. After a UAV was added or removed, there was a 20 second period where no more vehicles were added or removed.” (Afergan et al., 2014) |
| Type of integration | This attribute describes the algorithmic implementation of the brain signal integration, i.e., whether an explicit conditional statement, an influence diagram, a state graph, or a different way of behavior modeling was used (see the sketch below this table). | “[The self-correction algorithm] inspects the probability distribution […] and picks the now highest scoring class […]. [W]e only used the second best class if its re-normalized confidence […] is above a certain threshold T […]. Otherwise, the user was asked to repeat the input.” (Putze et al., 2015) |
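
As a toy illustration of a conditional integration rule, the following sketch combines the two quoted patterns: a classifier posterior is acted upon only when it exceeds a confidence threshold, and the application then adapts the task. All names and values are made up for illustration and do not reproduce either paper's implementation.

```python
# Hypothetical sketch of confidence-gated brain signal integration.
T = 0.8  # assumed confidence threshold, in the spirit of Putze et al. (2015)

def integrate(p_high_workload: float, n_uavs: int) -> int:
    """Map a workload posterior to a task adaptation (cf. Afergan et al., 2014)."""
    if p_high_workload >= T:         # confidently overloaded: remove a vehicle
        return max(1, n_uavs - 1)
    if p_high_workload <= 1 - T:     # confidently underloaded: add a vehicle
        return n_uavs + 1
    return n_uavs                    # uncertain: leave the task unchanged

print(integrate(0.9, 3))  # -> 2
```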

References

  1. Lee, Y.-C., Lin, W.-C., King, J.-T., Ko, L.-W., Huang, Y.-T., & Cherng, F.-Y. (2014). An EEG-Based Approach for Evaluating Audio Notifications Under Ambient Sounds. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3817–3826. https://doi.org/10.1145/2556288.2557076
  2. Terasawa, N., Tanaka, H., Sakti, S., & Nakamura, S. (2017). Tracking Liking State in Brain Activity While Watching Multiple Movies. Proceedings of the 19th ACM International Conference on Multimodal Interaction, 321–325. https://doi.org/10.1145/3136755.3136772
  3. Vi, C. T., Jamil, I., Coyle, D., & Subramanian, S. (2014). Error Related Negativity in Observing Interactive Tasks. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3787–3796. https://doi.org/10.1145/2556288.2557015
  4. Ma, X., Yao, Z., Wang, Y., Pei, W., & Chen, H. (2018). Combining Brain-Computer Interface and Eye Tracking for High-Speed Text Entry in Virtual Reality. 23rd International Conference on Intelligent User Interfaces, 263–267. https://doi.org/10.1145/3172944.3172988
  5. Glatz, C., Krupenia, S. S., Bülthoff, H. H., & Chuang, L. L. (2018). Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 472:1–472:13. https://doi.org/10.1145/3173574.3174046
  6. Pike, M., Ramchurn, R., Benford, S., & Wilson, M. L. (2016). #Scanners: Exploring the Control of Adaptive Films Using Brain-Computer Interaction. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5385–5396. https://doi.org/10.1145/2858036.2858276
  7. Crk, I., Kluthe, T., & Stefik, A. (2015). Understanding Programming Expertise: An Empirical Study of Phasic Brain Wave Changes. ACM Trans. Comput.-Hum. Interact., 23(1), 2:1–2:29. https://doi.org/10.1145/2829945
  8. Terkildsen, T., & Makransky, G. (2019). Measuring presence in video games: An investigation of the potential use of physiological measures as indicators of presence. International Journal of Human-Computer Studies, 126, 64–80. https://doi.org/10.1016/j.ijhcs.2019.02.006
  9. Škola, F., & Liarokapis, F. (2019). Examining and Enhancing the Illusory Touch Perception in Virtual Reality Using Non-Invasive Brain Stimulation. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 247:1–247:12. https://doi.org/10.1145/3290605.3300477
  10. Antle, A. N., Chesick, L., & McLaren, E.-S. (2018). Opening Up the Design Space of Neurofeedback Brain–Computer Interfaces for Children. ACM Trans. Comput.-Hum. Interact., 24(6), 38:1–38:33. https://doi.org/10.1145/3131607
  11. Solovey, E., Schermerhorn, P., Scheutz, M., Sassaroli, A., Fantini, S., & Jacob, R. (2012). Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2193–2202. https://doi.org/10.1145/2207676.2208372
  12. Cherng, F.-Y., Lee, Y.-C., King, J.-T., & Lin, W.-C. (2019). Measuring the Influences of Musical Parameters on Cognitive and Behavioral Responses to Audio Notifications Using EEG and Large-scale Online Studies. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 409:1–409:12. https://doi.org/10.1145/3290605.3300639
  13. Johnson, D., Wyeth, P., Clark, M., & Watling, C. (2015). Cooperative Game Play with Avatars and Agents: Differences in Brain Activity and the Experience of Play. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 3721–3730. https://doi.org/10.1145/2702123.2702468
  14. Putze, F., Schünemann, M., Schultz, T., & Stuerzlinger, W. (2017). Automatic Classification of Auto-correction Errors in Predictive Text Entry Based on EEG and Context Information. Proceedings of the 19th ACM International Conference on Multimodal Interaction, 137–145. https://doi.org/10.1145/3136755.3136784
  15. Frey, J., Daniel, M., Castet, J., Hachet, M., & Lotte, F. (2016). Framework for Electroencephalography-based Evaluation of User Experience. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2283–2294. https://doi.org/10.1145/2858036.2858525
  16. Burns, C. G., & Fairclough, S. H. (2015). Use of auditory event-related potentials to measure immersion during a computer game. International Journal of Human-Computer Studies, 73, 107–114. https://doi.org/10.1016/j.ijhcs.2014.09.002
  17. Bilalpur, M., Kankanhalli, M., Winkler, S., & Subramanian, R. (2018). EEG-based Evaluation of Cognitive Workload Induced by Acoustic Parameters for Data Sonification. Proceedings of the 20th ACM International Conference on Multimodal Interaction, 315–323. https://doi.org/10.1145/3242969.3243016
  18. Lampe, T., Fiederer, L. D. J., Voelker, M., Knorr, A., Riedmiller, M., & Ball, T. (2014). A Brain-computer Interface for High-level Remote Control of an Autonomous, Reinforcement-learning-based Robotic System for Reaching and Grasping. Proceedings of the 19th International Conference on Intelligent User Interfaces, 83–88. https://doi.org/10.1145/2557500.2557533
  19. Rodrigue, M., Son, J., Giesbrecht, B., Turk, M., & Höllerer, T. (2015). Spatio-Temporal Detection of Divided Attention in Reading Applications Using EEG and Eye Tracking. Proceedings of the 20th International Conference on Intelligent User Interfaces, 121–125. https://doi.org/10.1145/2678025.2701382
  20. Jarvis, J., Putze, F., Heger, D., & Schultz, T. (2011). Multimodal Person Independent Recognition of Workload Related Biosignal Patterns. Proceedings of the 13th International Conference on Multimodal Interfaces, 205–208. https://doi.org/10.1145/2070481.2070516
  21. Afergan, D., Peck, E. M., Solovey, E. T., Jenkins, A., Hincks, S. W., Brown, E. T., Chang, R., & Jacob, R. J. K. (2014). Dynamic Difficulty Using Brain Metrics of Workload. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, 3797–3806. https://doi.org/10.1145/2556288.2557230
  22. Putze, F., Amma, C., & Schultz, T. (2015). Design and Evaluation of a Self-Correcting Gesture Interface Based on Error Potentials from EEG. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 3375–3384. https://doi.org/10.1145/2702123.2702184

Contribute

To contribute to the Model Overview, please consider submitting a pull request to our Experiment Model for Brain Signals in HCI GitHub. You can suggest changes to existing attributes, descriptions, or examples, or add new attributes to expand the experiment model. If you do so, please provide a name, description, and example from the literature (including references). For editing, provide your changes to the Experiment Model File, add references to the BibTeX file, and open a pull request with your changes. Every pull request will be open for discussion.