Erase Persona, Forget Lore: Benchmarking Multimodal Copyright Unlearning in Large Vision Language Models
Provides a benchmark (and likely an associated evaluation methodology) for measuring how effectively large vision-language models (LVLMs) unlearn copyrighted visual content (e.g., characters and logos) after training, addressing weaknesses in existing approaches to evaluating cross-modal memorization and its removal.
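To make the evaluation idea concrete, here is a minimal sketch of how a forget/retain split might be scored: an unlearned model should fail on prompts about the removed copyrighted entities while keeping its accuracy on unrelated visual questions. All names (`unlearning_report`, `forget_acc`, `retain_acc`) and the toy data are illustrative assumptions, not the paper's actual metric or benchmark.

```python
# Hypothetical forget/retain scoring sketch for multimodal unlearning.
# Metric names and example data are assumptions, not taken from the paper.

def accuracy(predictions, labels):
    """Fraction of exact matches between model answers and references."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def unlearning_report(forget_preds, forget_labels, retain_preds, retain_labels):
    """An unlearned model should score low on the forget set (copyrighted
    characters/logos) while staying high on the retain set (general utility)."""
    forget_acc = accuracy(forget_preds, forget_labels)
    retain_acc = accuracy(retain_preds, retain_labels)
    return {
        "forget_acc": forget_acc,       # lower is better after unlearning
        "retain_acc": retain_acc,       # higher is better (utility preserved)
        "gap": retain_acc - forget_acc, # larger gap = cleaner removal
    }

# Toy example: the model still names 1 of 4 copyrighted characters,
# but answers 3 of 4 unrelated visual questions correctly.
report = unlearning_report(
    forget_preds=["Pikachu", "unknown", "unknown", "unknown"],
    forget_labels=["Pikachu", "Mario", "Elsa", "Batman"],
    retain_preds=["a dog", "a red car", "two people", "a tree"],
    retain_labels=["a dog", "a red car", "two people", "a cat"],
)
print(report)  # forget_acc=0.25, retain_acc=0.75, gap=0.5
```

In practice the predictions would come from querying the LVLM on image-question pairs; this sketch only shows the shape of the comparison, not the paper's benchmark protocol.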
by JuneHyoung Kwon