GSoC 2025

Robotics applications are typically distributed, made up of a collection of concurrent asynchronous components which communicate using some middleware (ROS messages, DDS…). Building robotics applications is a complex task. Integrating existing nodes or libraries that provide already-solved functionality, and using several tools, can increase software robustness and shorten development time. JdeRobot provides several tools, libraries and reusable nodes. They are written in C++, Python or JavaScript. They are ROS-friendly and fully compatible with ROS2 Humble (and Gazebo Harmonic).

Our community mainly works on three development areas:

  • Education in Robotics. RoboticsAcademy is our main project. It is a ROS-based framework to learn robotics and computer vision with drones, autonomous cars, etc. It is a collection of Python-programmed exercises and challenges for engineering students.

  • Robot Programming Tools. For instance, BT-Studio, for robot programming with Behavior Trees, and VisualCircuit, for visual robot programming with connected blocks, as in electronic circuits.

  • Machine Learning in Robotics. For instance, the BehaviorMetrics tool for assessing neural networks in end-to-end autonomous driving. Another example is the DetectionMetrics tool, for evaluating visual detection neural networks and algorithms.

Ideas list

This open source organization welcomes contributors in these topics:

Project #1: Robotics-Academy: support for solutions directly using ROS2 topics

Brief explanation: Robotics-Academy is a framework for learning robotics and computer vision. It consists of a collection of robot programming exercises. The students have to code in Python the behavior of a given (either simulated or real) robot to accomplish some task related to robotics or computer vision. It uses standard middleware and libraries such as ROS 2 or OpenCV.

Nowadays, Robotics Academy offers the student up to 26 exercises, and another 11 prototype exercises. All of them come ready to use in the RoboticsAcademy Docker image (RADI). The only requirement for students is to download the Docker image; all the dependencies are installed inside the RADI.

Currently, exercises can be solved using Python without any knowledge of ROS 2, thanks to our Hardware Abstraction Layer (HAL). However, some students have asked for the possibility of using (and learning) ROS 2 interfaces (topics and services) while coding the solution. This project will explore new ways of coding the exercise solution, where a few changes such as ROS 2 spin control or editor autocompletion are required.
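
As an illustration of the kind of solution this project would enable, here is a minimal sketch of an exercise solution written directly against ROS 2 topics with rclpy. The topic names /cam/image_raw and /cmd_vel are placeholders; the actual interfaces depend on each exercise.

```python
# Minimal sketch of an exercise solution using ROS 2 topics directly (rclpy).
# Topic names below are placeholders; each exercise defines its own interfaces.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist


class FollowLineSolution(Node):
    def __init__(self):
        super().__init__('follow_line_solution')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.sub = self.create_subscription(Image, '/cam/image_raw', self.on_image, 10)

    def on_image(self, msg: Image):
        cmd = Twist()
        cmd.linear.x = 1.0   # student logic: compute speeds from the image
        cmd.angular.z = 0.0
        self.pub.publish(cmd)


def main():
    rclpy.init()
    node = FollowLineSolution()
    rclpy.spin(node)  # spin control is one of the aspects this project must address
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```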

  • Skills required/preferred: React, Python, ROS2.
  • Difficulty rating: medium
  • Expected results: New way of coding your solutions in RoboticsAcademy using ROS 2 topics
  • Expected size: 90h
  • Mentors: Pedro Arias (pedro.ariasp AT upm.es) and Apoorv Garg (apoorvgarg.ms AT gmail.com)

Project #2: Robotics-Academy: CI & Testing

Brief explanation: Robotics-Academy is a framework for learning robotics and computer vision. It consists of a collection of robot programming exercises. The students have to code in Python the behavior of a given (either simulated or real) robot to accomplish some task related to robotics or computer vision.

Project #2 seeks to identify and address potential issues early on, improving the maintainability and reliability of the RoboticsAcademy software. This will be achieved through automated testing and continuous integration (CI) processes. The project also includes the development of a testing strategy and the implementation of a CI pipeline.
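
For instance, part of the test suite could consist of browser-level smoke tests run from the CI pipeline. Below is a minimal pytest + Selenium sketch, assuming a RADI container is already running and serving the exercise frontend; the URL and expected page title are placeholders, not the real RoboticsAcademy values.

```python
# Minimal pytest + Selenium smoke-test sketch.
# Assumes a RADI container is running and serving the frontend at FRONTEND_URL
# (URL and expected title are placeholders, not the real RoboticsAcademy values).
import pytest
from selenium import webdriver

FRONTEND_URL = "http://127.0.0.1:8000/exercises/follow_line"


@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()


def test_exercise_page_loads(driver):
    driver.get(FRONTEND_URL)
    assert "RoboticsAcademy" in driver.title  # placeholder check
```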

  • Skills required/preferred: Docker, GitHub actions, Python, JavaScript, pytest, Selenium
  • Difficulty rating: medium
  • Expected results: Tests on RAM (RoboticsApplicationManager), RI (RoboticsInfrastructure) and RA (RoboticsAcademy)
  • Expected size: 350h
  • Mentors: Pedro Arias (pedro.ariasp AT upm.es), Pawan Wadhwani (pawanw17 AT gmail.com) and Miguel Fernandez (miguel.fernandez.cortizas AT upm.es)

Project #3: Robotics-Academy: improve Computer Vision exercises, including with real cameras

Brief explanation: As of now, Robotics-Academy includes two deprecated Computer Vision exercises: ColorFilter and OpticalFlow Teleoperator. These exercises allow users to learn the foundations of image processing using the OpenCV library. A work in progress from the JdeRobot Computer Vision Working Group is multiplatform support for real cameras in RoboticsAcademy exercises (on Linux, Windows and macOS machines), using WebRTC.

The main objective of this project is to prepare several Computer Vision exercises in RoboticsAcademy (including updates of the previous two) which can be performed with a real camera from the browser.
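
As a reference for the kind of processing these exercises teach, a minimal OpenCV color-filter sketch could look like the following; the HSV thresholds and the camera index are arbitrary placeholders.

```python
# Minimal color-filter sketch with OpenCV (HSV bounds are arbitrary placeholders).
import cv2
import numpy as np

def filter_red(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 120, 70])
    upper = np.array([10, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

# Example usage with a local webcam (index 0); in the exercises the frames
# would instead arrive from the browser through WebRTC.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    filtered = filter_red(frame)
cap.release()
```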

  • Skills required/preferred: ComputerVision, OpenCV, React, Python
  • Difficulty rating: easy
  • Expected results: new Computer Vision exercises in Robotics-Academy, including some with real cameras.
  • Expected size: 90h
  • Mentors: David Pascual (d.pascualhe AT gmail.com) and David Pérez (david.perez.saura AT upm.es)

Project #4: Robotics-Academy: new exercise on End-to-End Visual Control of an Autonomous Vehicle using DeepLearning

Brief explanation: The goal of this project is to develop a new deep learning exercise focused on visual robotic control in the context of Robotics-Academy. We will create a web-based interface that allows users to upload a trained model, which will take camera input from a drone or car and output the linear speed and angular velocity of the vehicle. The controlled robot and its environment will be simulated using Gazebo. The objectives of this project include:

  • Updating the web interface to accept models trained with PyTorch/TensorFlow.
  • Building new widgets to monitor exercise results.
  • Preparing a simulated environment.
  • Coding the core application that feeds input data to the trained model and returns the results.
  • Training a naive model to demonstrate how the exercise can be solved.

This new exercise may utilize the infrastructure developed for the “Human detection” Deep Learning exercise. Videos showcasing one of our current web-based exercises and a visual control task solved using deep learning are also available.
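
The core application mentioned above would roughly follow an inference loop like the sketch below (a PyTorch example; the model file name, input size and normalization are placeholder assumptions).

```python
# Sketch of the inference step: camera image in, (linear speed, angular velocity) out.
# Model path, input size and normalization are placeholder assumptions.
import cv2
import numpy as np
import torch

model = torch.jit.load("student_model.pt")  # uploaded, already-trained model
model.eval()

def predict_command(frame_bgr):
    img = cv2.resize(frame_bgr, (224, 224)).astype(np.float32) / 255.0
    tensor = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)  # NCHW
    with torch.no_grad():
        v, w = model(tensor)[0].tolist()
    return v, w  # linear speed, angular velocity to send to the simulated robot
```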

  • Skills required/preferred: Python, Deep Learning, Gazebo, React, ROS2
  • Difficulty rating: medium
  • Expected results: a web-based exercise for robotic visual control using deep learning
  • Expected size: 350h
  • Mentors: David Pascual (d.pascualhe AT gmail.com) and Pankhuri Vanjani (pankhurivanjani AT gmail.com)

Project #5: VisualCircuit: Improving Functionality & Expanding the Block Library

Brief explanation: VisualCircuit allows users to program robotic intelligence using a visual language similar to electronic circuits, simplifying the creation of code for robotics applications such as Deep Learning, ROS, and more.

Over the past few years, we have focused on making VisualCircuit more robust by resolving Nested Blocks (multi-level blocks) with the Block Composition feature, developing a working prototype for dockerized execution of robotics applications directly from the browser, migrating the old POSIX IPC implementation to a cross-platform compatible Python Shared Memory implementation, and more.

Now, for GSoC 2025, the goal of the project is to further refine VisualCircuit by improving Shared Memory, Block Composition, and other features, developing more real-world robotics applications utilizing the latest functionalities of VC, expanding the block library to cater to a larger audience, and enhancing automated testing. Additionally, we aim to address other issues such as implementing Undo and Redo functionality, adding more keyboard shortcuts for faster circuit creation, and making various other improvements. You can read further about the tool on the website.
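
To give an idea of the cross-platform shared-memory mechanism mentioned above, here is a minimal sketch using Python's multiprocessing.shared_memory; the segment name and array shape are illustrative, not VisualCircuit's actual layout.

```python
# Minimal sketch of passing an image between two blocks through Python shared memory.
# Segment name and array shape are illustrative, not VisualCircuit's actual layout.
import numpy as np
from multiprocessing import shared_memory

shape, dtype = (480, 640, 3), np.uint8

# Writer block: create the segment and copy a frame into it.
shm = shared_memory.SharedMemory(create=True, size=int(np.prod(shape)), name="vc_camera")
frame = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
frame[:] = 0  # would be filled with camera data

# Reader block: attach to the same segment by name and read the frame.
shm_reader = shared_memory.SharedMemory(name="vc_camera")
view = np.ndarray(shape, dtype=dtype, buffer=shm_reader.buf)
print(view.mean())

shm_reader.close()
shm.close()
shm.unlink()  # only the creator removes the segment
```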

  • Skills required/preferred: ROS2, Gazebo, Python, TypeScript
  • Difficulty rating: medium
  • Expected results: an expanded block library for VisualCircuit, improved automated testing using GitHub Actions, real-world robotics applications developed with the latest functionalities of VC, and other major issues resolved.
  • Expected size: 175h
  • Mentors: Toshan Luktuke (toshan1603 AT gmail.com), Pankaj Borade and Suhas Gopal.

Project #6: Robotics Academy: using the Open3DEngine as robotics simulator

  • Brief explanation: Open 3D Engine (O3DE) is an Apache 2.0-licensed multi-platform 3D engine that enables developers and content creators to build AAA games, cinema-quality 3D worlds, and high-fidelity simulations. It also supports simulation of the most common robot sensors and actuators. The idea of this project is to integrate Open3DEngine into the RoboticsAcademy framework, with at least one exercise using it instead of Gazebo.
  • Skills required/preferred: C++ programming, ROS
  • Difficulty rating: Medium
  • Expected results: A new robotics exercise in RoboticsAcademy using the Open3DEngine
  • Expected size: 175h
  • Repository Link - Open 3D Engine, RoboticsAcademy, RoboticsInfrastructure
  • Mentors: José M. Cañas (josemaria.plaza AT gmail.com) and Jan Hanca (jan.hanca AT googlemail.com)

Project #7: Robotics Academy: improvement of Gazebo scenarios and robot models

Brief explanation: Currently Robotics Academy offers the student up to 26 exercises, and another 11 prototype exercises. The main goal of this project is to improve the current Gazebo scenarios and robot models for many of the exercises, making them more appealing and realistic without increasing the computing power required for rendering too much. For instance, the world model of the FollowLine exercise will be enhanced with a 3D race circuit, and several models will be reworked to be low poly (faster to simulate).

  • Skills required/preferred: experience with Gazebo, SDF, URDF, ROS2 and Python
  • Difficulty rating: easy
  • Expected results: New Gazebo scenarios
  • Expected size: small (~90h)
  • Mentors: Pedro Arias (pedro.ariasp AT upm.es) and Shashwat Dalakoti (shash.dal623 AT gmail.com)

Project #8: Robotics Academy: improvement of industrial robotics exercises with MoveIt2 and ROS2

Brief explanation: A few years ago we developed some exercises on industrial robots using MoveIt1 and ROS1 Noetic. A work in progress from the JdeRobot Industrial Robotics Working Group is RoboticsAcademy support for MoveIt2 and ROS2. A proof of concept has already been developed, and it opens the door to new applications with manipulators (pick-and-place, palletizing…). Developing an easy API for programming industrial robots, similar to solutions such as ABB's RAPID, is an open topic. The main goal of this project is to integrate and refine this support and to develop a new exercise on manipulation.
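
To convey the kind of easy, RAPID-like API the project could define on top of MoveIt2, here is a purely hypothetical sketch; the EasyArm class and its methods do not exist, they only illustrate the target level of abstraction.

```python
# Purely hypothetical sketch of a RAPID-like easy API on top of MoveIt2.
# Nothing here exists yet; it only illustrates the target level of abstraction.
class EasyArm:
    def move_joints(self, joints):                      # joint-space motion
        ...
    def move_linear(self, x, y, z, roll, pitch, yaw):   # Cartesian motion
        ...
    def open_gripper(self):
        ...
    def close_gripper(self):
        ...

# A pick-and-place exercise solution would then read almost like a RAPID program:
arm = EasyArm()
arm.move_linear(0.4, 0.0, 0.20, 3.14, 0.0, 0.0)  # approach
arm.move_linear(0.4, 0.0, 0.05, 3.14, 0.0, 0.0)  # descend
arm.close_gripper()                              # pick
arm.move_linear(0.4, 0.3, 0.20, 3.14, 0.0, 0.0)  # transfer
arm.open_gripper()                               # place
```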

  • Skills required/preferred: Gazebo, URDF, ROS2, MoveIt2
  • Difficulty rating: medium
  • Expected results: updated, working MoveIt2/ROS2 support and a new exercise on industrial robot manipulation
  • Expected size: medium (~175h)
  • Mentors: Diego Martín (diego.martin.martin AT gmail.com) and Pankhuri Vanjani (pankhurivanjani AT gmail.com)

Project #9: BT-Studio: a tool for programming robots with Behavior Trees

Brief explanation: Behavior Trees are a recent paradigm for organizing robot deliberation. There are brilliant open source projects such as BehaviorTree.CPP and PyTrees. Our BT-Studio has been created to provide a web-based graphical editor of Behavior Trees. It also includes a converter to Python, dockerized execution of the generated robotics application from the browser, and support for subtrees (a feature added in a previous GSoC-2024 project).

The goal of this proposed GSoC project is to develop a Behavior Tree library and a collection of example robotics applications, and to further refine the tool.
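
For context, a minimal behavior coded with py_trees (one of the open-source projects mentioned above, 2.x assumed) looks roughly like this; the ApproachTarget behaviour is a made-up example.

```python
# Minimal py_trees sketch: a made-up behaviour inside a sequence (py_trees 2.x assumed).
import py_trees


class ApproachTarget(py_trees.behaviour.Behaviour):
    def __init__(self, name="ApproachTarget"):
        super().__init__(name)

    def update(self):
        # would read sensors and command the robot here
        return py_trees.common.Status.RUNNING


root = py_trees.composites.Sequence(name="Mission", memory=True)
root.add_children([ApproachTarget(), py_trees.behaviours.Success(name="Done")])

tree = py_trees.trees.BehaviourTree(root)
tree.tick()  # in a robot application this would be ticked periodically
```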

  • Skills required/preferred: Python, ROS, Gazebo
  • Difficulty rating: medium
  • Expected results: a Behavior Tree library for BT-Studio and several example robotics applications developed with it.
  • Expected size: 175h
  • Mentors: José M. Cañas (josemaria.plaza AT gmail.com) and Oscar Martínez (oscar.robotics AT tutanota.com)

Project #10: Extend DetectionMetrics: GUI, CI Workflow, and Object Detection

Brief explanation: DetectionMetrics is a toolkit for evaluating perception models across frameworks and datasets. Past GSoC projects (Vinay Sharma, Jeevan Kumar) contributed to its first stable release, published in Sensors (Paniego et al., 2022). Recently, we revamped the tool to improve usability and installation—check it out here! Currently, DetectionMetrics functions as both a Python library and CLI, focusing on the quantitative evaluation of image and LiDAR segmentation models, with plans to expand into object detection. It supports PyTorch and TensorFlow models, along with multiple public datasets. Its modular design allows easy integration of new models and datasets. While redesigning the core functionality, some useful features were lost. This project aims to bring them back and enhance DetectionMetrics by:

  • Recovering object detection evaluation support, including metrics like mAP and IoU. This involves restoring compatibility with COCO-style and Pascal VOC-style datasets and implementing necessary pipeline adjustments.
  • Building a GUI for visualizing dataset samples (images, point clouds) and evaluation results (IoU, confusion matrices, etc.). Ideally, it will also provide an interactive interface for launching batched evaluation jobs. Possible tools: Streamlit, Gradio.
  • Setting up a Continuous Integration (CI) workflow to automate testing, documentation generation, and versioning using Sphinx, pytest, and GitHub Actions.

Since this new version of DetectionMetrics is still in its early stages, your contributions will have a lasting impact on a tool already featured in a scientific journal!
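
As a reminder of the kind of metric involved in the object-detection part, intersection over union (IoU) between two axis-aligned boxes can be computed as in this small sketch, with boxes given as (x1, y1, x2, y2).

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```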

  • Skills required/preferred: Python, GitHub Actions, PyTorch, Deep Learning
  • Difficulty rating: Medium
  • Expected results: A brand-new GUI, a CI workflow, and restored object-detection evaluation for DetectionMetrics
  • Expected size: Long (~350h)
  • Mentors: David Pascual Hernández (d.pascualhe AT gmail.com), Sergio Paniego (sergiopaniegoblanco AT gmail.com), Santiago Montiel Marín (santiago.montiel AT uah.es)

Application instructions for GSoC-2025

We welcome students to contact the relevant mentors before submitting their application on the official GSoC website. If in doubt about which project(s) to apply for, send a message to jderobot AT gmail.com. We recommend browsing previous GSoC student pages to look for ready-to-use projects, and to get an idea of the expected amount of work for a valid GSoC proposal.

Requirements

  • Git experience
  • C++ and Python programming experience (depending on the project)

Programming tests

Project       #1  #2  #3  #4  #5  #6  #7  #8  #9  #10
Academy (A)   X   X   X   X   X   X   X   X   X   X
Python (B)    X   X   X   X   X   X   X   X   X   X
ROS2 (C)      X   X   O   X   X   X   X   X   X   X
React (D)     X   O   X   X   O   O   *   *   O   O


Where:  
* Not applicable
X Mandatory
O Optative

Before accepting any proposal, all candidates have to complete these programming challenges:

Send us your information

AFTER doing the programming tests, fill in this web form with your information and challenge results. Then you are invited to ask the project mentors about the project details. We may require more information from you, such as:

  1. Contact details
    • Name and surname:
    • Country:
    • Email:
    • Public repository/ies:
    • Personal blog (optional):
    • Twitter/Identica/LinkedIn/others:
  2. Timeline
    • Split your project idea into smaller tasks. Estimate the time you think each task needs. Finally, draw a tentative project plan (timeline) with dates covering the whole GSoC period. Don’t forget to also include the days on which you don’t plan to code because of exams, holidays, etc.
    • Do you understand this is a serious commitment, equivalent to a full-time paid summer internship or summer job?
    • Do you have any known time conflicts during the official coding period?
  3. Studies
    • What is your School and degree?
    • Would your application contribute to your ongoing studies/degree? If so, how?
  4. Programming background
    • Computing experience: operating systems you use on a daily basis, known programming languages, hardware, etc.
    • Robot or Computer Vision programming experience:
    • Other software programming:
  5. GSoC participation
    • Have you participated in GSoC before?
    • How many times, which year, which project?
    • Have you applied but were not selected? When?
    • Have you submitted/will you submit another proposal for GSoC 2025 to a different org?

Previous GSoC students

  • Prajyot Jadhav (GSoC-2024) Robotics-Academy: migration to Gazebo Fortress
  • Mihir Gore (GSoC-2024) Robotics-Academy: improve Deep Learning based exercises
  • Pankaj Borade (GSoC-2024) VisualCircuit: block library
  • Óscar Martínez (GSoC-2024) BT-Studio: a tool for programming robots with Behavior Trees
  • Zebin Huang (GSoC-2024) End-to-end autonomous vehicle driving based on text-based instructions: research project regarding Autonomous Driving + LLMs
  • Pawan Wadhwani (GSoC-2023) Robotics Academy: migration to ROS2 Humble
  • Meiqi Zhao (GSoC 2023) Obstacle Avoidance for Autonomous Driving in CARLA Using Segmentation Deep Learning Models
  • Siddheshsingh Tanwar (GSoC 2023) Dockerization of Visual Circuit
  • Prakhar Bansal (GSoC 2023) RoboticsAcademy: Cross-Platform Desktop Application using ElectronJS
  • Apoorv Garg (GSoC-2022) Improvement of Web Templates of Robotics Academy exercises
  • Toshan Luktuke (GSoC-2022) Improvement of VisualCircuit web service
  • Nikhil Paliwal (GSoC-2022) Optimization of Deep Learning models for autonomous driving
  • Akshay Narisetti (GSoC-2022) Robotics Academy: improvement of autonomous driving exercises
  • Prakarsh Kaushik (GSoC-2022) Robotics Academy: consolidation of drone based exercises
  • Bhavesh Misra (GSoC-2022) Robotics Academy: improve Deep Learning based Human Detection exercise
  • Suhas Gopal (GSoC-2021) Shifting VisualCircuit to a web server
  • Utkarsh Mishra (GSoC-2021) Autonomous Driving drone with Gazebo using Deep Learning techniques
  • Siddharth Saha (GSoC-2021) Robotics Academy: multirobot version of the Amazon warehouse exercise in ROS2
  • Shashwat Dalakoti (GSoC-2021) Robotics-Academy: exercise using Deep Learning for Visual Detection
  • Arkajyoti Basak (GSoC-2021) Robotics Academy: new drone based exercises
  • Chandan Kumar (GSoC-2021) Robotics Academy: Migrating industrial robot manipulation exercises to web server
  • Muhammad Taha (GSoC-2020) VisualCircuit tool, digital electronics language for robot behaviors.
  • Sakshay Mahna (GSoC-2020) Robotics-Academy exercises on Evolutionary Robotics.
  • Shreyas Gokhale (GSoC-2020) Multi-Robot exercises for Robotics Academy In ROS2.
  • Yijia Wu (GSoC-2020) Vision-based Industrial Robot Manipulation with MoveIt.
  • Diego Charrez (GSoC-2020) Reinforcement Learning for Autonomous Driving with Gazebo and OpenAI gym.
  • Nikhil Khedekar (GSoC-2019) Migration to ROS of drones exercises on JdeRobot Academy
  • Shyngyskhan Abilkassov (GSoC-2019) Amazon warehouse exercise on JdeRobot Academy
  • Jeevan Kumar (GSoC-2019) Improving DetectionSuite DeepLearning tool
  • Baidyanath Kundu (GSoC-2019) A parameterized automata Library for VisualStates tool
  • Srinivasan Vijayraghavan (GSoC-2019) Running Python code on the web browser
  • Pankhuri Vanjani (GSoC-2019) Migration of JdeRobot tools to ROS 2
  • Pushkal Katara (GSoC-2018) VisualStates tool
  • Arsalan Akhter (GSoC-2018) Robotics-Academy
  • Hanqing Xie (GSoC-2018) Robotics-Academy
  • Sergio Paniego (GSoC-2018) PyOnArduino tool
  • Jianxiong Cai (GSoC-2018) Creating realistic 3D map from online SLAM result
  • Vinay Sharma (GSoC-2018) DeepLearning, DetectionSuite tool
  • Nigel Fernandez GSoC-2017
  • Okan Asik GSoC-2017, VisualStates tool
  • S.Mehdi Mohaimanian GSoC-2017
  • Raúl Pérula GSoC-2017, Scratch2JdeRobot tool
  • Lihang Li: GSoC-2015, Visual SLAM, RGBD, 3D Reconstruction
  • Andrei Militaru GSoC-2015, interoperation of ROS and JdeRobot
  • Satyaki Chakraborty GSoC-2015, Interconnection with Android Wear

How to increase your chances of being selected in GSoC-2025

If you put yourself in the shoes of the mentor who has to select the students, you’ll immediately realize that there are some behaviors that are usually rewarded. Here are some examples.

  1. Be proactive: Mentors are more likely to select students who openly discuss the existing ideas and/or propose their own. It is a bad idea to submit your idea only on the Google website without discussing it, because it won’t be noticed.

  2. Demonstrate your skills: Consider that mentors are contacted by several students applying for the same project. A way to show that you are the best candidate is to demonstrate that you are familiar with the software and that you can code. How? Browse the bug tracker (the GitHub issues of the JdeRobot projects), fix some bugs and propose your patch by submitting a pull request, and/or ask mentors to challenge you! Moreover, bug fixes are a great way to get familiar with the code.

  3. Demonstrate your intention to stay: Students who are likely to disappear after GSoC are less likely to be selected. This is because there is no point in developing something that won’t be maintained. Moreover, one of the goals of GSoC is to bring new developers into the community.

RTFM

Read the relevant information about GSoC in the wiki / web pages before asking. Most FAQs have been answered already!