Digital Fingerprinting of Complex Liquids Using a Reconfigurable Multi-Sensor System with Foundation Models
Abstract
Combining chemical sensor arrays with machine learning enables the design of intelligent systems that perform complex sensing tasks and unveil properties not directly accessible through conventional analytical chemistry. However, personalized and portable sensor systems are typically unsuitable for generating extensive data sets, limiting the ability to train large models in the chemical sensing realm. Foundation models have demonstrated unprecedented zero-shot learning capabilities across various data structures and modalities, in particular for language and vision. We explore transfer learning from such models by providing a framework to create effective data representations for chemical sensors, and we describe a novel, generalizable approach to AI-assisted chemical sensing. We demonstrate the translation of signals produced by remarkably simple and portable multi-sensor systems into visual fingerprints of liquid samples under test, and show that a pipeline incorporating pretrained vision models yields >95% correct class identification in four unrelated chemical sensing tasks with limited domain-specific training measurements. Our approach matches or outperforms expert-curated sensor signal features, generalizing the data processing for ease of use and broad applicability in interpreting multi-signal outputs for generic sensing applications.
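The abstract does not specify how sensor signals are rendered as visual fingerprints; one established technique for encoding a time series as an image, which a pipeline like the one described could use upstream of a pretrained vision model, is the Gramian Angular Field (GAF). A minimal NumPy sketch, assuming three hypothetical sensor channels mapped to the planes of an RGB-like image (the paper's actual encoding may differ):

```python
import numpy as np

def gramian_angular_field(signal):
    """Encode a 1-D signal as a summation Gramian Angular Field image."""
    s = np.asarray(signal, dtype=float)
    # Rescale to [-1, 1], the valid domain of arccos
    s = 2.0 * (s - s.min()) / (s.max() - s.min()) - 1.0
    phi = np.arccos(np.clip(s, -1.0, 1.0))
    # Summation GAF: G[i, j] = cos(phi_i + phi_j), a symmetric matrix
    return np.cos(phi[:, None] + phi[None, :])

def fingerprint(channels):
    """Stack one GAF per sensor channel into a multi-plane image."""
    return np.stack([gramian_angular_field(c) for c in channels], axis=-1)

rng = np.random.default_rng(0)
signals = rng.normal(size=(3, 64))  # 3 hypothetical sensor channels, 64 samples
img = fingerprint(signals)
print(img.shape)  # (64, 64, 3)
```

The resulting image could then be passed to a frozen, pretrained vision backbone for feature extraction, with a lightweight classifier trained on the limited domain-specific measurements.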