Komplexity AI / ML

Research in Complex Systems, Artificial Intelligence, and Machine Learning

Next-Generation Language Models

We are developing new approaches to large-scale language modeling aimed at improving interpretability, transparency, reliability, and energy efficiency. Our work draws on principles from statistical physics, information theory, and complexity science to rethink how AI systems represent and communicate knowledge. Supported by an NSF ACCESS allocation of computational resources, we train and evaluate these architectures at scale, exploring new directions in model reliability and adaptive reasoning.


Foundations in Complexity Science

Our research program spans complexity science, artificial intelligence, and statistical physics. We are particularly interested in phase transitions, critical phenomena, and information flow in networks, as well as their applications to reinforcement learning, uncertainty quantification, and optimization. Our work draws on statistical mechanics, information theory, and computational mechanics to better understand how systems organize, adapt, and compute. We also maintain active interests in sparsification and quantization methods for efficient machine learning, energy-efficient swarm optimization, and the role of cascades in both social and computational systems.