Multimodal Data Acquisition
In experimental setups, data from multiple sources are often combined, depending on the research question or the function of the system being developed. Combining sources increases the complexity of both design and implementation and introduces additional considerations, such as timing, synchronization, and alignment of signals.
This approach is known as multimodal data acquisition and refers to the collection of data from more than one device. The devices may be of the same type or of different types. For example, a setup may involve two portable EEG devices recording from different participants, or the combination of eye-tracking and EEG data from a single participant.
Synchronization is achieved either through hardware-based mechanisms, such as shared triggers or clocks, or through software-based approaches that align data streams using timestamps. Multimodal setups may support offline analysis or real-time processing, depending on how the data are used.
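The software-based approach can be illustrated with a minimal sketch. The example below is plain Python with hypothetical names, not any framework's API: it estimates the offset between a remote device clock and the local clock from a single round-trip probe (NTP-style, assuming symmetric network delay) and uses that offset to map remote timestamps onto the local timeline. Frameworks such as LSL perform this estimation continuously rather than once.

```python
# Minimal sketch of timestamp-based alignment between two clocks.
# All names are hypothetical; real frameworks repeat this continuously
# to track clock drift.

def estimate_offset(t_local_send, t_remote, t_local_recv):
    """Estimate the remote-minus-local clock offset from one round trip.

    t_local_send / t_local_recv: local clock when the probe was sent
    and when the reply arrived; t_remote: remote clock value reported
    in the reply. Assumes the network delay is symmetric.
    """
    midpoint = (t_local_send + t_local_recv) / 2.0
    return t_remote - midpoint

def to_local_time(t_remote_stamp, offset):
    """Map a remote timestamp onto the local timeline."""
    return t_remote_stamp - offset

# Example: the remote clock runs 5.0 s ahead of the local clock.
offset = estimate_offset(t_local_send=100.0, t_remote=105.1, t_local_recv=100.2)
print(round(offset, 3))              # 5.0
print(to_local_time(110.0, offset))  # 105.0 on the local timeline
```

With offsets estimated per device, samples from independently clocked devices can be placed on one shared time axis before analysis.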
Lab Streaming Layer (LSL)
The Lab Streaming Layer (LSL) is a software framework designed to support the collection and synchronization of multimodal time-series data across multiple devices and computers. It provides a common mechanism for streaming data from heterogeneous sources over a local network, while assigning time stamps to each sample and aligning all streams on a shared temporal reference.
LSL follows a publish–subscribe model, where devices or applications publish data streams and other applications can discover, subscribe to, and record these streams. Each stream carries both data and metadata, allowing different modalities (for example EEG, eye tracking, motion, or physiological signals) to be handled in a unified way. Time synchronization is performed in software using continuous clock offset estimation between networked systems, enabling alignment of data streams even when devices run on independent clocks.
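The publish–subscribe model described above can be sketched as a small in-process analogy. This is plain Python with hypothetical names, not the LSL API itself (in pylsl, the corresponding pieces are StreamInfo, StreamOutlet, StreamInlet, and resolve-by-property discovery): producers publish streams carrying metadata, and consumers discover streams by a metadata property and read time-stamped samples.

```python
# In-process analogy of a publish-subscribe streaming model
# (hypothetical names, not the LSL API).
from dataclasses import dataclass, field

@dataclass
class Stream:
    name: str
    type: str            # modality, e.g. "EEG" or "Gaze"
    srate: float         # nominal sampling rate in Hz
    samples: list = field(default_factory=list)

    def push(self, sample, timestamp):
        # Each sample carries its own timestamp, as in LSL.
        self.samples.append((timestamp, sample))

registry = {}

def publish(stream):
    registry[stream.name] = stream

def resolve(prop, value):
    # Discover streams by a metadata property,
    # analogous to resolve-by-property discovery in LSL.
    return [s for s in registry.values() if getattr(s, prop) == value]

# A producer publishes; a consumer discovers by type and reads.
eeg = Stream(name="BioSemi", type="EEG", srate=2048.0)
publish(eeg)
eeg.push([0.1, 0.2], timestamp=12.000)
eeg.push([0.3, 0.4], timestamp=12.0005)

found = resolve("type", "EEG")
print(found[0].name)        # BioSemi
print(found[0].samples[0])  # (12.0, [0.1, 0.2])
```

Because discovery works on metadata rather than on device identity, a consumer can subscribe to "all EEG streams" without knowing which hardware produced them.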
LSL supports both offline recording and real-time access to synchronized data, and is used across a wide range of experimental setups involving distributed acquisition, multiple sampling rates, and mixed sensing modalities. A detailed description of the LSL architecture, synchronization mechanisms, design decisions, and practical considerations is provided in the LSL paper referenced at the end of this page (Kothe et al., 2025).
Device-agnostic design
LSL is designed to be device-agnostic, meaning that it does not depend on specific hardware vendors, device types, or sensing modalities. From LSL’s perspective, all data sources are treated uniformly as time-stamped streams with associated metadata, regardless of how the data were generated or which device produced them.
This design allows devices of different types, manufacturers, and sampling characteristics to coexist within the same recording setup. EEG systems, eye trackers, motion sensors, physiological sensors, software-generated markers, and simulated data can all be represented using the same streaming model. As a result, acquisition, recording, and downstream processing components do not need to be tailored to individual devices, as long as they expose their data through LSL.
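The practical consequence of the device-agnostic design can be sketched as follows: a single recording routine only assumes time-stamped samples plus minimal metadata, so it handles EEG samples, gaze coordinates, and event markers identically. This is a hypothetical plain-Python illustration, not LSL's recording implementation.

```python
# A device-agnostic recorder sketch (hypothetical names): it assumes only
# time-stamped samples and minimal metadata, not any particular device.

def record(streams):
    """Collect all samples from all streams into one time-sorted log."""
    log = []
    for meta, samples in streams:
        for timestamp, value in samples:
            log.append((timestamp, meta["name"], meta["type"], value))
    # Merge all modalities on the shared time axis.
    return sorted(log, key=lambda entry: entry[0])

eeg    = ({"name": "BioSemi", "type": "EEG"},
          [(12.000, [0.1, 0.2]), (12.001, [0.3, 0.4])])
gaze   = ({"name": "Pupil",   "type": "Gaze"},
          [(12.0004, (512, 384))])
marker = ({"name": "Task",    "type": "Markers"},
          [(12.0008, "stimulus_on")])

log = record([eeg, gaze, marker])
for entry in log:
    print(entry)
```

The same routine works for any future stream as long as it exposes the same minimal interface, which is the property the surrounding text attributes to LSL-based setups.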
LSL for infrastructure devices
All devices available in the infrastructure are compatible with Lab Streaming Layer (LSL). Some devices support LSL directly through manufacturer-provided integrations, such as Bitbrain EEG, Artinis fNIRS, and Pupil eye trackers. For other devices, custom LSL integrations are used.
For the BioSemi ActiveTwo Mk2 system, a dedicated LSL streaming application was developed during the CogniX project. This open-source tool enables streaming BioSemi data into LSL-based setups and is available here:
Beyond the infrastructure-specific tools, the wider open-source community maintains a large collection of applications, integrations, and utilities built around LSL. These tools support a broad range of devices and use cases:
Example project utilizing LSL
A recent project at the Interactive Technologies Lab, CogniX, used LSL to combine EEG and eye-tracking data within a single experimental setup. LSL was also used to stream real-time processing outputs to an external Unity application. This enabled interactive behavior based on the participant’s gaze direction (from eye tracking) and real-time classification of cognitive activity derived from motor imagery patterns in EEG signals.
For a more detailed description of the CogniX project, see:
References
Christian Kothe, Seyed Yahya Shirazi, Tristan Stenner, David Medine, Chadwick Boulay, Matthew I. Grivich, Fiorenzo Artoni, Tim Mullen, Arnaud Delorme, Scott Makeig. The lab streaming layer for synchronized multimodal recording. Imaging Neuroscience, 2025; 3: IMAG.a.136.