
Computer Vision (134)

[2025-1] 황징아이 - Temporal Feature Alignment and Mutual Information Maximization for Video-Based Human Pose Estimation
Paper: https://arxiv.org/abs/2203.15227
Code: https://github.com/Pose-Group/FAMI-Pose (official implementation of the CVPR 2022 oral paper "Temporal Feature Alignment and Mutual Information Maximization for Video-Based Human Pose Estimation") 2025. 2. 8.
[2025-1] 유경석 - MAISI: Medical AI for Synthetic Imaging
Paper: https://arxiv.org/pdf/2409.11169v2
NVIDIA NIM: https://build.nvidia.com/nvidia/maisi (MAISI is a pre-trained volumetric (3D) CT latent diffusion generative model)
Abstract: MAISI (Medical AI for Synthetic Imaging) is a generative model for 3D computed tomography (CT) images. Volume compression network: enables high-resolution CT image generation. Latent diffusion model: supports flexible volume dimensions and voxel spacing. ControlNe.. 2025. 2. 8.
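The preview above outlines MAISI's pipeline: a volume compression network maps CT volumes into a compact latent space, a diffusion model generates in that latent space, and the decoder restores a high-resolution volume. The sketch below only illustrates that general "compress, denoise in latent space, decode" idea; every module, shape, and noise schedule here is a toy placeholder of my own, not MAISI's (or MONAI's) actual implementation.

```python
# Illustrative only: toy latent-diffusion volume generation, not MAISI code.
import torch
import torch.nn as nn

class ToyVolumeCompressionNet(nn.Module):
    """Stand-in for a volume compression network: a tiny 3D autoencoder."""
    def __init__(self, latent_ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, latent_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(latent_ch, latent_ch, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_ch, latent_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(latent_ch, 1, 4, stride=2, padding=1),
        )

class ToyDenoiser(nn.Module):
    """Stand-in for the latent-diffusion noise predictor (timestep conditioning omitted)."""
    def __init__(self, latent_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(latent_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, latent_ch, 3, padding=1),
        )

    def forward(self, z, t):
        return self.net(z)

@torch.no_grad()
def sample_volume(vcn, denoiser, latent_shape=(1, 8, 16, 16, 16), steps=50):
    """DDPM-style ancestral sampling in latent space, then decode to a CT-like volume."""
    betas = torch.linspace(1e-4, 2e-2, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    z = torch.randn(latent_shape)
    for t in reversed(range(steps)):
        eps = denoiser(z, t)
        mean = (z - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        z = mean + torch.sqrt(betas[t]) * noise
    return vcn.decoder(z)  # synthetic volume, shape (1, 1, 64, 64, 64) here

vcn, denoiser = ToyVolumeCompressionNet(), ToyDenoiser()
print(sample_volume(vcn, denoiser).shape)
```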
[2025-1] 김유현 - A Style-Based Generator Architecture for Generative Adversarial Networks
Paper: https://arxiv.org/abs/1812.04948 ("We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identit..")
0. Abstract: StyleGAN.. 2025. 2. 8.
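The core mechanism of the paper above is to pass the latent code through a mapping network and inject the resulting style vector into each synthesis layer via adaptive instance normalization (AdaIN). A minimal sketch of that mechanism follows; the layer sizes and the (1 + scale) form are arbitrary illustrative choices, not the official StyleGAN code.

```python
# Toy sketch of StyleGAN-style modulation: mapping network + AdaIN.
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """z -> w: an MLP that maps the latent code into an intermediate style space."""
    def __init__(self, dim=64, layers=4):
        super().__init__()
        blocks = []
        for _ in range(layers):
            blocks += [nn.Linear(dim, dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*blocks)

    def forward(self, z):
        return self.net(z)

class AdaIN(nn.Module):
    """Adaptive instance norm: per-channel scale and bias predicted from the style w."""
    def __init__(self, channels, w_dim=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(w_dim, channels * 2)

    def forward(self, x, w):
        scale, bias = self.affine(w).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        bias = bias[:, :, None, None]
        # The paper uses y_s * norm(x) + y_b; (1 + scale) just keeps an untrained
        # affine layer close to the identity in this toy example.
        return (1 + scale) * self.norm(x) + bias

z = torch.randn(2, 64)
w = MappingNetwork()(z)
feat = torch.randn(2, 32, 8, 8)     # feature map from one synthesis block
styled = AdaIN(32)(feat, w)
print(styled.shape)                 # torch.Size([2, 32, 8, 8])
```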
[2025-1] 김경훈 - SAM (Segment Anything Model)
Original paper: https://arxiv.org/abs/2304.02643 ("We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M license..")
Code: https://github.com/facebookresearch/segment-anything 2025. 2. 5.
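For reference, the linked segment-anything repository ships a small Python API; a rough automatic-mask-generation example looks like the sketch below. The checkpoint path and image file are placeholders: you need to install the package and download a SAM checkpoint from the repository first.

```python
# Minimal usage sketch of the official segment-anything package.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Placeholder paths: supply a downloaded checkpoint and a local image.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)   # list of dicts with 'segmentation', 'area', ...
print(len(masks))
```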