A universal multi-state fMRI foundation model utilizing metadata-conditioned Diffusion Transformers (DiT) trained on 349,898 brain imaging sessions.
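Metadata conditioning in a DiT is commonly implemented with adaptive layer norm (adaLN): session metadata (e.g., acquisition state or site) is embedded into a vector that predicts per-layer scale and shift parameters applied to the normalized activations. A minimal NumPy sketch of that mechanism, with all names and dimensions as illustrative assumptions rather than the model's actual code:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each token's feature vector to zero mean, unit variance.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def adaln_modulate(x, cond, W_shift, W_scale):
    # DiT-style adaLN: the metadata embedding `cond` predicts a
    # shift and scale that modulate the normalized activations.
    shift = cond @ W_shift          # shape (d_model,)
    scale = cond @ W_scale          # shape (d_model,)
    return layer_norm(x) * (1 + scale) + shift

rng = np.random.default_rng(0)
d_cond, d_model, n_tokens = 8, 16, 4
x = rng.standard_normal((n_tokens, d_model))      # token activations
cond = rng.standard_normal(d_cond)                # embedded session metadata (hypothetical)
W_shift = rng.standard_normal((d_cond, d_model)) * 0.02
W_scale = rng.standard_normal((d_cond, d_model)) * 0.02
out = adaln_modulate(x, cond, W_shift, W_scale)
print(out.shape)  # (4, 16)
```

The small initialization of `W_shift`/`W_scale` mirrors the common practice of starting adaLN modulation near the identity so conditioning is learned gradually.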
Defensibility
Citations: 0
Co-authors: 6
Brain-DiT scores a 7 for defensibility, primarily due to data gravity and the steep engineering cost of fMRI data harmonization. While the code for a Diffusion Transformer (DiT) is reproducible, the curated dataset of ~350,000 sessions spanning 24 disparate datasets (resting-state, task, sleep, disease) is a massive barrier to entry; most competing fMRI models (such as Mind-Eye or earlier MAE-based models) rely on far smaller cohorts (e.g., HCP or UK Biobank alone). Six forks within three days of a zero-star repository suggest strong interest from the academic community, likely peers or collaborators.

Frontier labs such as OpenAI or Anthropic are unlikely to compete here: fMRI is a niche medical modality with limited direct consumer application compared to general-purpose LLMs or vision models. The primary risk is academic consolidation; if this becomes the 'BERT of brain imaging,' the moat will be community lock-in and the pre-trained weights. Platform-domination risk is low because medical data privacy and the specialized nature of neuroimaging pipelines (e.g., fMRIPrep, surface-based mapping) fall outside the core competency of generic cloud AI providers. The 3+ year displacement horizon reflects the significant effort required to aggregate, clean, and harmonize a larger multi-modal brain dataset.
Tech stack
Integration: reference_implementation
Readiness