Junho Park
I'm an AI researcher in the Vision Intelligence Lab, led by Jaechul Kim, at LG Electronics.
At LG Electronics, I've worked on Large-Scale Generative Datasets, Vision Foundation Models (e.g., Object Detection, Panoptic Segmentation, Depth Estimation, Pose Estimation, Face Recognition, and Person Re-Identification), and On-Device AI (e.g., Lightweight Modeling and Quantization).
I completed my Master's degree at Sogang University, advised by Suk-Ju Kang, and collaborated closely with Kyeongbo Kong.
At Sogang University, I've worked on 2D/3D Generative Models, Large Language Models, Pose/Gaze Estimation, Quantization, Image Restoration, and Machine Learning.
Additionally, I'm independently pursuing research on AR/VR, Embodied AI, and Robot Learning (e.g., Hand-Object Interaction and Egocentric Vision) with Taein Kwon.
I'm open to collaboration—feel free to reach out!
Email / CV / Scholar / LinkedIn / Github
EgoWorld: Translating Exocentric View to Egocentric View using Rich Exocentric Observations
Junho Park,
Andrew Sangwoo Ye,
Taein Kwon†
Under Review, 2025  
Project Page
/
Paper
We introduce EgoWorld, a novel two-stage framework that reconstructs the egocentric view from rich exocentric observations, including depth maps, 3D hand poses, and textual descriptions.
Describe Your Camera: Towards Implicit 3D-Aware Image Translation for Hand-Object Interaction
Junho Park*,
Yeieun Hwang*,
Suk-Ju Kang†
Under Review, 2025  
Paper (to appear)
We introduce TransHOI, a novel framework for implicit 3D-aware image translation of hand-object interaction, which generates images from different perspectives while preserving appearance details based on the user's description of the camera.
Programmable-Room: Interactive Textured 3D Room Meshes Generation Empowered by Large Language Models
Jihyun Kim*,
Junho Park*,
Kyeongbo Kong*,
Suk-Ju Kang†
IEEE TMM (Transactions on Multimedia), 2025  
Project Page
/
Paper
Programmable-Room interactively creates and edits textured 3D meshes given user-specified language instructions. Using pre-defined modules, it translates each instruction into Python code, which is then executed in order.
AttentionHand: Text-driven Controllable Hand Image Generation for 3D Hand Reconstruction in the Wild
Junho Park*,
Kyeongbo Kong*,
Suk-Ju Kang†
ECCV, 2024   (Oral Presentation)
Project Page
/
Paper
We propose AttentionHand, a novel method for text-driven controllable hand image generation. Additionally training on hand images generated by AttentionHand improves the performance of 3D hand mesh reconstruction.
Interactive 3D Room Generation for Virtual Reality via Compositional Programming
Jihyun Kim*,
Junho Park*,
Kyeongbo Kong*,
Suk-Ju Kang†
ECCV, 3rd Computer Vision for Metaverse Workshop, 2024   (Oral Presentation)
Paper
We introduce a novel framework, Interactive Room Programmer (IRP), which allows users to conveniently create and modify 3D indoor scenes using natural language.
Diffusion-based Interacting Hand Pose Transfer
Junho Park*,
Yeieun Hwang*,
Suk-Ju Kang†
ECCV, 8th Workshop on Observing and Understanding Hands in Action, 2024  
Paper
We propose IHPT, a new diffusion-based model designed to transfer interacting hand poses between source and target images.
Mixup-based Neural Network for Image Restoration and Structure Prediction from SEM Images
Junho Park,
Yubin Cho,
Yeieun Hwang,
Ami Ma,
QHwan Kim,
Kyu-Baik Chang,
Jaehoon Jeong,
Suk-Ju Kang†
IEEE TIM (Transactions on Instrumentation and Measurement), 2024  
Paper
We present a new SEM dataset and a two-stage deep learning method (comprising SEMixup and SEM-SPNet) that achieves state-of-the-art performance in SEM image restoration and structure prediction under diverse conditions.
A Novel Framework for Generating In-the-Wild 3D Hand Datasets
Junho Park*,
Kyeongbo Kong*,
Suk-Ju Kang†
ICCV, 7th Workshop on Observing and Understanding Hands in Action, 2023  
Paper
We propose a novel framework, HANDiffusion, which generates new 3D hand datasets with in-the-wild scenes.
Improving Gaze Tracking in Large Screens with Symmetric Gaze Angle Amplification and Optimization Technique
Joseph Kihoon Kim*,
Junho Park*,
Yeon-Kug Moon†,
Suk-Ju Kang†
IEEE Access, 2023  
Paper
We propose a novel gaze tracking method for large screens that uses a symmetric angle amplifying function and center-of-gravity correction to improve accuracy without personalized calibration, with applications in autonomous vehicles.