Preprints

This page lists preprints and work currently under peer review, covering cross-modal learning, optimization, and adversarial machine learning.

2024

“6-Dimensional Contrastive Learning: Beyond Traditional CLIP Training”

Authors: Stephen Mander, et al.
Venue: arXiv preprint (submitted to ICLR 2025)
Status: Under Review

Links: arXiv | PDF | Code | Project

Abstract: This paper introduces a novel framework for training CLIP models with 6-dimensional loss functions, enabling more sophisticated cross-modal alignment and improved representation learning. Our approach demonstrates significant improvements over traditional CLIP training across multiple benchmarks.

Key Contributions:

  • Novel 6-dimensional loss function framework for contrastive learning

  • Comprehensive analysis of multi-dimensional embedding spaces

  • Advanced CKA (Centered Kernel Alignment) analysis for model comparison

  • Open-source framework with 18+ loss function variants
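Centered Kernel Alignment, mentioned above, measures how similar two sets of representations of the same inputs are. As a rough pure-Python illustration of linear CKA (my own sketch, not the paper's implementation), on feature matrices X (n×p) and Y (n×q):

```python
def center_columns(M):
    """Subtract each column's mean so features are zero-centered."""
    n = len(M)
    means = [sum(row[j] for row in M) / n for j in range(len(M[0]))]
    return [[row[j] - means[j] for j in range(len(row))] for row in M]

def cross_fro(A, B):
    """Frobenius norm of A^T B (A, B share the same number of rows)."""
    p, q = len(A[0]), len(B[0])
    total = 0.0
    for i in range(p):
        for j in range(q):
            entry = sum(A[r][i] * B[r][j] for r in range(len(A)))
            total += entry * entry
    return total ** 0.5

def linear_cka(X, Y):
    """Linear CKA: ||Xc^T Yc||_F^2 / (||Xc^T Xc||_F * ||Yc^T Yc||_F)."""
    Xc, Yc = center_columns(X), center_columns(Y)
    return cross_fro(Xc, Yc) ** 2 / (cross_fro(Xc, Xc) * cross_fro(Yc, Yc))
```

CKA is 1 for identical representations and is invariant to isotropic scaling, which is what makes it useful for comparing models trained under different losses.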

Technical Highlights:

  • Mathematical Innovation: Rigorous 6D loss formulations with numerical stability

  • Performance Engineering: CuPy implementation for large tensor operations

  • Evaluation Framework: Comprehensive test suite with 95%+ coverage

  • Visualization Tools: Advanced 6D embedding analysis with hypercube representations
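The 6-D loss formulations themselves are in the paper, but the numerical-stability concern is the same one that arises in the standard pairwise CLIP objective: the softmax over similarity logits should go through a log-sum-exp with the maximum subtracted. A minimal sketch of that baseline loss (my own illustration, not the released code):

```python
import math

def logsumexp(xs):
    """Stable log(sum(exp(x))): subtract the max before exponentiating."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def clip_loss(sim):
    """Symmetric InfoNCE over an n x n image-text similarity matrix.

    Matched pairs sit on the diagonal; each row (and column) is a
    softmax classification problem whose correct class is diagonal.
    """
    n = len(sim)
    img2txt = sum(logsumexp(sim[i]) - sim[i][i] for i in range(n)) / n
    txt2img = sum(logsumexp([sim[i][j] for i in range(n)]) - sim[j][j]
                  for j in range(n)) / n
    return 0.5 * (img2txt + txt2img)
```

With sharp diagonal similarities the loss approaches zero; with a uniform matrix it equals log(n), the chance-level value.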

Expected Impact:

  • Theoretical Advancement: New understanding of high-dimensional contrastive learning

  • Practical Applications: Improved vision-language models for multimodal AI

  • Community Resources: Comprehensive framework for contrastive learning research


“Scaling Laws in Multi-Modal Contrastive Autoencoders”

Authors: Stephen Mander, et al.
Venue: arXiv preprint (submitted to Nature Machine Intelligence)
Status: Under Review

Links: arXiv | PDF | Analysis Code

Abstract: We investigate fundamental scaling relationships governing multi-modal contrastive autoencoders, deriving theoretical bounds and validating them through extensive empirical studies across different architectural choices and data modalities.

Key Contributions:

  • Theoretical derivation of scaling laws for contrastive learning systems

  • Empirical validation across multiple domains and architectures

  • Power-law characterization of performance vs. computational cost relationships

  • Practical guidelines for efficient large-scale contrastive training
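A power law L ≈ a·C^(−b) becomes linear in log-log space, so the exponent b can be read off an ordinary least-squares fit of log L against log C. A small stdlib-only sketch of that fitting step on synthetic data (an illustration of the method, not the paper's experiments):

```python
import math

def fit_power_law(C, L):
    """Fit L = a * C**(-b) by linear regression on (log C, log L).

    The slope of the log-log fit is -b; the intercept is log a.
    """
    xs = [math.log(c) for c in C]
    ys = [math.log(l) for l in L]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Synthetic loss-vs-compute curve with exponent b = 0.5
compute = [1e3, 1e4, 1e5, 1e6]
loss = [4.0 * c ** -0.5 for c in compute]
a, b = fit_power_law(compute, loss)
```

On real training curves the fit is noisy and the interesting questions are where the power law breaks, but the estimation machinery is exactly this.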

Research Innovation:

  • Mathematical Framework: Information-theoretic analysis of representation capacity

  • Empirical Validation: Large-scale experiments across multiple data modalities

  • Optimization Insights: Understanding of scaling-aware algorithm design

  • Hardware Efficiency: Guidelines for accelerator-aware training strategies


“Tactile Linear Sum Assignment: ML-Guided Optimization Through Haptic Interfaces”

Authors: Stephen Mander, et al.
Venue: arXiv preprint (submitted to CHI 2025)
Status: Under Review

Links: arXiv | PDF | Demo Code | Project

Abstract: This work explores the intersection of tactile computing and mathematical optimization, investigating whether haptic feedback can improve human understanding and algorithmic performance in linear sum assignment problems.

Key Contributions:

  • Novel tactile interface for optimization problem visualization

  • Machine learning models that learn from human optimization strategies

  • Hybrid human-AI optimization frameworks

  • Comprehensive evaluation of tactile vs. traditional optimization approaches

Methodological Innovation:

  • Interface Design: Haptic feedback systems for cost matrix manipulation

  • Learning Framework: Neural networks trained on human optimization sessions

  • Hybrid Algorithms: Integration of learned strategies with traditional assignment methods

  • Evaluation Metrics: Novel measures for human-AI optimization collaboration
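For context on the underlying problem: linear sum assignment selects one entry per row and column of a cost matrix so the total cost is minimal. Production solvers use the Hungarian algorithm (e.g. SciPy's linear_sum_assignment), but a brute-force reference for small matrices, handy for checking the output of an interactive interface, can look like:

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Try every row->column assignment (n! of them) and keep the best.

    Only sensible for small n; it serves as ground truth against
    which faster or human-guided solvers can be validated.
    """
    n = len(cost)
    best_cols, best_cost = None, float("inf")
    for cols in permutations(range(n)):
        total = sum(cost[r][c] for r, c in enumerate(cols))
        if total < best_cost:
            best_cols, best_cost = cols, total
    return list(best_cols), best_cost

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = brute_force_assignment(cost)
```

Here row 0 takes column 1, row 1 takes column 0, row 2 takes column 2, for a total cost of 5.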


2023

“SuperDARN-Chord: Distributed Atmospheric Data Processing at Scale”

Authors: Stephen Mander, et al.
Venue: arXiv preprint (submitted to Journal of Atmospheric Science)
Status: Under Review

Links: arXiv | PDF | Framework | Project

Abstract: We present a distributed processing framework for SuperDARN atmospheric data that enables real-time analysis of radio wave propagation patterns across global ionospheric monitoring networks.

Key Contributions:

  • Scalable distributed processing architecture for atmospheric data

  • Real-time analysis of radio wave propagation patterns

  • Integration with existing SuperDARN infrastructure

  • Open-source framework for atmospheric data processing

Technical Innovation:

  • Distributed Systems: Chord-based DHT for atmospheric data management

  • Real-time Processing: Low-latency analysis of ionospheric measurements

  • Scalability: Efficient handling of multi-terabyte atmospheric datasets

  • Integration: Seamless connection with existing SuperDARN networks
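The "Chord" in the framework's name refers to the classic DHT design: node identifiers and data keys are hashed onto a ring, and each key is owned by its successor, the first node clockwise from it. A minimal consistent-hashing lookup illustrating that routing idea (a toy, not the framework's actual implementation):

```python
import bisect
import hashlib

RING_BITS = 16  # small ring for illustration; Chord uses 160-bit SHA-1 IDs

def ring_hash(name):
    """Map a string onto the identifier ring [0, 2**RING_BITS)."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** RING_BITS)

class ChordRing:
    """Key -> node lookup via the successor on a sorted hash ring."""

    def __init__(self, nodes):
        self.points = sorted((ring_hash(n), n) for n in nodes)

    def lookup(self, key):
        h = ring_hash(key)
        i = bisect.bisect_left(self.points, (h,))
        if i == len(self.points):  # past the last node: wrap around
            i = 0
        return self.points[i][1]

ring = ChordRing(["radar-a", "radar-b", "radar-c"])
storage_node = ring.lookup("scan-2023-06-01")
```

Because placement depends only on the hash, any participant can compute where a measurement lives without a central index, which is what makes the scheme attractive for a distributed sensor network.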


“LUStores: Intelligent Inventory Management for Academic Institutions”

Authors: Stephen Mander, et al.
Venue: arXiv preprint (submitted to ACM Computing Surveys)
Status: Under Review

Links: arXiv | PDF | System | Project

Abstract: LUStores presents an intelligent inventory management system specifically designed for academic institutions, incorporating machine learning for demand prediction and optimization algorithms for resource allocation.

Key Contributions:

  • Specialized inventory management system for academic environments

  • Machine learning-based demand prediction for research equipment

  • Optimization algorithms for multi-department resource allocation

  • Comprehensive evaluation in real academic institution settings

System Innovation:

  • Domain Specialization: Tailored for unique academic inventory requirements

  • Predictive Analytics: ML models for equipment demand forecasting

  • Optimization Framework: Efficient algorithms for resource allocation

  • Real-world Validation: Deployment and evaluation in academic settings
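As a toy stand-in for the demand-prediction component (the system's actual models are not described here), simple exponential smoothing produces a one-step-ahead forecast from a demand history:

```python
def exp_smooth_forecast(history, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing.

    Each observation pulls the running level toward itself by a
    fraction alpha; the final level is the next-period forecast.
    """
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

weekly_checkouts = [12, 15, 11, 14, 18, 16]  # hypothetical equipment demand
forecast = exp_smooth_forecast(weekly_checkouts)
```

Even this baseline is a useful yardstick: a learned demand model only earns its complexity if it beats smoothing on held-out weeks.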


Research Trajectory & Future Directions

Emerging Research Themes

My preprints trace an evolution across several interconnected areas:

Cross-Modal Learning Evolution:

  • From traditional CLIP training to advanced 6-dimensional frameworks

  • Integration of theoretical scaling laws with practical implementation

  • Development of novel evaluation methodologies for multimodal systems

Human-AI Collaboration:

  • Exploration of tactile interfaces for optimization problems

  • Investigation of human intuition in mathematical problem solving

  • Development of hybrid frameworks combining human insight with algorithmic efficiency

Systems & Applications:

  • Large-scale distributed processing for scientific data

  • Specialized systems for academic and research environments

  • Integration of theoretical advances with practical deployment challenges

Submission Targets (2025)

Target Venues:

  • ICLR 2025: 6-dimensional contrastive learning framework

  • NeurIPS 2025: Scaling laws in multimodal systems

  • CHI 2025: Tactile optimization interfaces

  • Nature Machine Intelligence: Theoretical scaling analysis


For published work, see Journal Articles and Conference Papers.

Last updated: Sep 16, 2025

Code and Data Availability

Associated code repositories for preprints and working papers:

6DIMCOCO Preprint

Code Repository: 6DIMCOCO
Language: Python
Status: Active

6-dimensional multi-objective continuous optimization research code


LSA Methodology

arXiv: 2025.XXXXX
Code Repository: LSA-Preprint
Language: R
Status: Under Review

Code and data for LSA methodology preprint

View Code Documentation →


PGD Analysis

arXiv: 2025.YYYYY
Code Repository: PGD-Analysis
Language: Python
Status: Under Review

Projected Gradient Descent analysis and implementation

View Code Documentation →
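The repository name refers to Projected Gradient Descent: take a gradient step, then project back onto the feasible set. A minimal sketch on a 1-D quadratic with a box constraint (a toy under my own assumptions, not the repository's code; in adversarial ML the same loop runs on input perturbations with an ε-ball projection):

```python
def pgd_minimize(grad, project, x0, lr=0.1, steps=200):
    """Projected gradient descent: step along -grad, then project."""
    x = x0
    for _ in range(steps):
        x = project(x - lr * grad(x))
    return x

# Minimize (x - 3)^2 subject to 0 <= x <= 1: the constrained
# optimum sits on the boundary at x = 1.
grad = lambda x: 2.0 * (x - 3.0)
project = lambda x: min(max(x, 0.0), 1.0)
x_star = pgd_minimize(grad, project, x0=0.0)
```

The projection is what distinguishes PGD from plain gradient descent: every iterate stays feasible, so the method converges to the constrained rather than the unconstrained optimum.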