Uğur Güdükbay's Publications


Refining 3D Human Texture Estimation From a Single Image

Said Fahri Altindis, Adil Meric, Yusuf Dalva, Uğur Güdükbay, and Aysegul Dundar. Refining 3D Human Texture Estimation From a Single Image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(12):11464–11475, December 2024.

Download

[PDF] 

Abstract

Estimating 3D human texture from a single image is essential in graphics and vision. It requires learning a mapping function from input images of humans with diverse poses into the parametric (uv) space and reasonably hallucinating invisible parts. To achieve high-quality 3D human texture estimation, we propose a framework that adaptively samples the input by a deformable convolution whose offsets are learned via a deep neural network. Additionally, we describe a novel cycle consistency loss that improves view generalization. We further propose to train our framework with an uncertainty-based pixel-level image reconstruction loss, which enhances color fidelity. We compare our method against the state-of-the-art approaches and show significant qualitative and quantitative improvements.
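The adaptive sampling the abstract describes can be illustrated with a minimal sketch: a feature map is resampled at grid positions shifted by per-pixel offsets, with bilinear interpolation handling fractional coordinates. This is a simplified NumPy illustration of the general deformable-sampling idea, not the paper's implementation; in the actual framework the offsets would be predicted by a deep network, and all function names here are hypothetical.

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Sample a (H, W) array at fractional coordinates via bilinear interpolation."""
    H, W = img.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    dy = np.clip(ys - y0, 0.0, 1.0)  # fractional part, clamped at the border
    dx = np.clip(xs - x0, 0.0, 1.0)
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

def deformable_sample(feature, offsets):
    """Resample `feature` (H, W) at grid positions shifted by `offsets` (H, W, 2).

    In a deformable convolution, the (dy, dx) offsets would be the output of
    a learned offset-prediction network rather than fixed values.
    """
    H, W = feature.shape
    ys, xs = np.meshgrid(np.arange(H, dtype=float),
                         np.arange(W, dtype=float), indexing="ij")
    return bilinear_sample(feature, ys + offsets[..., 0], xs + offsets[..., 1])

feature = np.arange(16, dtype=float).reshape(4, 4)

# Zero offsets reproduce the input grid exactly.
identity = deformable_sample(feature, np.zeros((4, 4, 2)))

# A uniform half-pixel shift in x blends each pixel with its right neighbour.
shift = np.zeros((4, 4, 2))
shift[..., 1] = 0.5
shifted = deformable_sample(feature, shift)
```

A learned offset field lets the sampling grid deform toward informative image regions (e.g., following the body contour across poses), which is what makes the input sampling "adaptive" rather than fixed to a regular grid.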

BibTeX

@Article{AltindisMDGD24,
  author    = {Said Fahri Altindis and Adil Meric and Yusuf Dalva and 
               U{\u{g}}ur G{\"u}d{\"u}kbay and Aysegul Dundar},
  title     = {{Refining 3D Human Texture Estimation From a Single Image}},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume    = {46},
  number    = {12},
  month     = {December},
  year      = {2024},
  pages     = {11464--11475},
  abstract = {Estimating 3D human texture from a single image is essential in graphics and vision. It requires learning a mapping function from input images of humans with diverse poses into the parametric (uv) space and reasonably hallucinating invisible parts. To achieve high-quality 3D human texture estimation, we propose a framework that adaptively samples the input by a deformable convolution whose offsets are learned via a deep neural network. Additionally, we describe a novel cycle consistency loss that improves view generalization. We further propose to train our framework with an uncertainty-based pixel-level image reconstruction loss, which enhances color fidelity. We compare our method against the state-of-the-art approaches and show significant qualitative and quantitative improvements.},
  ee        = {https://ieeexplore.ieee.org/document/10672560},
  bib2html_dl_pdf = {http://www.cs.bilkent.edu.tr/~gudukbay/publications/papers/journal_articles/Altindis_et_al_IEEE_PAMI_2024.pdf},
  bib2html_pubtype = {Refereed Journal Articles},
  bib2html_rescat = {Computer Vision}, 
  bibsource = {DBLP, http://dblp.uni-trier.de}
}

Generated by bib2html.pl (written by Patrick Riley) on Sun Mar 16, 2025 14:21:39