Multimodal Learning

All posts tagged 'Multimodal Learning'

3 posts total
Sorted chronologically

Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling

Paper link | Janus-Pro 7B: Dual-Encoder Multimodal LLM That Outsmarts Bigger Models. One-line summary (TL;DR): With the SigLIP understanding encoder fully separated from the VQ generation encoder, the 7 B …

31 min read
DeepSeek 2501.17811v1 Janus-Pro Dual-Encoder Multimodal Learning Vision-Language Models Text-to-Image Image Understanding Large Language Models Adapter Networks Visual Tokenization GenEval MMBench DPG-Bench DeepSeek-LLM Efficient Training Synthetic Data

DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding

Paper link | DeepSeek-VL2: a "small and fast, yet accurate even at high resolution" multimodal LLM. One-line summary (TL;DR): Through three design pillars, Dynamic Tiling × MLA-MoE × 800 B VL data, the 4.5 B …

31 min read
2412.10302v1 DeepSeek Multimodal Learning Vision-Language Models High-Resolution Image Processing Dynamic Tiling Mixture of Experts (MoE) KV-Cache Compression Multi-head Latent Attention (MLA) Visual Grounding OCR Parameter Efficiency LLM Inference Optimization Edge AI Open Source Models Document Understanding Infographic QA Chart and Table QA Visual Reasoning Multilingual VQA Conversational AI with Images
