Hany Hamed

I am a reinforcement learning researcher focused on making RL agents generalize and adapt to unseen, diverse settings.

My recent work spans model-based and hierarchical RL, offline RL, and sim-to-real transfer: developing world models for planning, policies that generalize to new tasks, and agents that keep learning after deployment.

I earned my M.S. in Computer Science from KAIST in 2024, after a B.S. in Computer Science from Innopolis University.

Portrait of Hany Hamed

Highlighted Research

Full list available on Google Scholar.
*Equal Contribution

  1. Extendable Planning via Multiscale Diffusion

    Submitted to AAAI

  2. Dr. Strategy: Model-Based Generalist Agents with Strategic Dreaming

    ICML

Research Experience

  • Artificial and Mechanical Intelligence Lab, IIT Genoa, Italy

    Research Fellow · PI: Prof. Daniele Pucci  ·  co-PI: Giulio Romualdi & Giuseppe L’Erario
    — Present
    • Developing torque-control humanoid locomotion policies using reinforcement learning.
    • Designing and implementing RL experiments for ErgoCub with IsaacLab.
    • Building a Sim-to-Real transfer framework to enable robust humanoid locomotion.
  • University of Alberta & Amii (Remote)

    Research Assistant · PI: Prof. Rupam Mahmood (UoA)  ·  Colin Bellinger (NRC)
    — Present
    • Assisted in research on mask-based goal representation for robot learning, conducting experiments with multiple open-vocabulary object detection models to assess their effectiveness.
    • Investigated model-based (TD-MPC2) and model-free (SAC) reinforcement learning for Sim-to-Sim transfer with finetuning in the target domain.
  • Machine Learning & Mind Lab, KAIST, Daejeon, South Korea

    Graduate Student Research Assistant · PI: Prof. Sungjin Ahn
    • Researched zero-shot task generalization in model-based RL, leading to a state-of-the-art agent.
    • Explored intrinsic motivation in hierarchical model-based RL, focusing on how high-level policies guide low-level policies.
    • Worked on diffusion-based planning for long-horizon tasks while learning on short trajectories.
  • Unmanned Technology Laboratory, Innopolis University, Russia

    Junior Developer (Industry-Oriented)
    • Developed ROS packages for drone modules including gimbal control, RTK GPS, and payload control.
    • Integrated a Livox LiDAR with the lab’s drone for outdoor mapping.
    • Developed a handheld LiDAR device for indoor/outdoor mapping for an industrial partner.
  • Center of Robotics, Innopolis University, Russia

    Undergraduate Research Assistant · PI: Prof. Sergei Savin
    • Conducted RL experiments for a stabilizing control policy of a tensegrity hopper.
    • Designed and implemented a contactless differentiable physics simulator for tensegrity robots using Taichi.
    • Researched sim-to-real transfer for a three-prism tensegrity robot.

Education

  • School of Computing, KAIST

    M.S. in Computer Science, 2024
  • Computer Science, Innopolis University

    B.S. in Computer Science

Service

Reviewer

  • Conferences: ICLR 2024, 2025, IROS 2024, ACML 2024, Humanoids 2025, AAAI 2026
  • Workshops: ICML AutoRL 2024
  • Journals: IEEE RA-L 2024

Teaching Assistant

  • CS492: Deep Reinforcement Learning
    with Prof. Sungjin Ahn
  • Introduction to ROS
    with Dr. Geesara Kulathunga

Awards

  • 2024 – ICML Travel Grant
  • 2022–2024 – KAIST Graduate School Full Scholarship (Master’s)
  • Fall 2020 – Outstanding Achievements, Innopolis University
  • 2020, 2021 – Outstanding Contribution to Science, Innopolis University
  • June 2021 – 3rd Place, DOTS Competition (Bristol Robotics Lab & Toshiba)
  • 2018–2022 – Innopolis University Full Scholarship (Bachelor)