Who am I?
I am an active, sports-loving, self-taught millennial, passionate about technology, born in 1995. I graduated in Telecommunication Engineering from the University of the Basque Country (UPV/EHU) in 2017 and completed the subsequent M.Sc. in Telecommunication Engineering in 2019.
During my time at university, I was part of Formula Student Bizkaia, where I led the control and telemetry group (2015–2017), programming all of the car's control and communication systems (mainly in C and LabVIEW). Additionally, I pursued an internship in cybersecurity, researching post-quantum algorithms to ensure the privacy and pseudonymization of personal data.
In 2019, I embarked on my PhD journey at Tecnalia, under the supervision of Javier Del Ser and Esther Villar-Rodriguez, diving deep into the complexities of Reinforcement Learning (RL). My research specifically focused on sparse reward problems, emphasizing the importance of exploration in environments where feedback signals are limited. This challenge led me to explore various approaches, including Intrinsic Motivation and Imitation Learning techniques. In 2022, I undertook a research stay at the University of Edinburgh within the Autonomous Agents Research Group, under the supervision of Dr. Stefano V. Albrecht. A special mention goes to Lukas Schäfer, whose expertise, mentorship and passion for the subject were invaluable.
In 2023, I completed my PhD in Artificial Intelligence with a Cum Laude distinction. My research not only advanced my expertise in RL but also expanded my interests to Explainable AI, Metaheuristic Optimization, Generative Models, and Language Models.
Since 2024, I have been serving as a Lecturer at the University of Deusto, where I teach and mentor students in Artificial Intelligence, including overseeing PhD candidates and guiding both undergraduate and postgraduate students.
News
📄 Article Accepted at Neurocomputing
Our article "Using Offline Data to Speed-up Reinforcement Learning in Procedurally Generated Environments" has been accepted for publication in Neurocomputing. This paper builds upon findings presented at the ALA Workshop during the 2023 AAMAS conference, combining reinforcement learning and imitation learning for improved generalization and sample efficiency.
Impact Factor: 5.5 (Q1)
📄 Article Accepted at Results in Engineering
Our work "On the Black-box Explainability of Object Detection Models for Safe and Trustworthy Industrial Applications" has been accepted for publication in Results in Engineering. This study provides post-hoc explanations for object detectors, focusing on one-stage YOLO models applied to real-world data.
Impact Factor: 6.0 (Q1)
🎉 NeurIPS IMOL and OWA Workshops
Our works "Fostering Intrinsic Motivation in Reinforcement Learning with Pretrained Foundation Models" and "Words as Beacons: Guiding RL Agents with High-Level Language Prompts" have been accepted at the Intrinsically Motivated Open-ended Learning (IMOL) and Open-World Agents (OWA) workshops, respectively. These studies explore leveraging large language models (LLMs) and vision-language models (VLMs) to enhance reinforcement learning agents' learning and performance.