GSoC 2023

Robotics applications are typically distributed, made up of a collection of concurrent asynchronous components which communicate using some middleware (ROS messages, DDS…). Building robotics applications is a complex task. Integrating existing nodes or libraries that already provide solved functionality, and using several tools, may increase software robustness and shorten development time. JdeRobot provides several tools, libraries and reusable nodes. They are written in C++, Python or JavaScript. They are ROS-friendly and fully compatible with ROS Noetic (and Gazebo 11).

Our community mainly works on four development areas:

  • Education in Robotics. RoboticsAcademy is our main project. It is a ROS-based framework to learn robotics and computer vision with drones, autonomous cars, etc. It is a collection of Python-programmed exercises and challenges for engineering students.

  • Robot Programming Tools. For instance, VisualCircuit, for programming robots visually with connected blocks, as in electronic circuits.

  • Machine Learning in Robotics. For instance, the BehaviorMetrics tool for the assessment of neural networks in end-to-end autonomous driving, RL-Studio, a library for training Reinforcement Learning algorithms for robot control, and DetectionMetrics, a tool for the evaluation of visual detection neural networks and algorithms.

  • Reconfigurable Computing in Robotics. FPGA-Robotics for programming robots with reconfigurable computing (FPGAs) using open tools such as IceStudio and F4PGA, plus Verilog-based reusable blocks for robotics applications.

Selected contributors

In 2023, the contributors selected for Google Summer of Code were the following:

  • Pawan Wadhwani (Project #2): Robotics Academy: migration to ROS2 Humble

  • Meiqi Zhao (Project #8): Obstacle Avoidance for Autonomous Driving in CARLA Using Segmentation Deep Learning Models

  • Siddheshsingh Tanwar (Project #7): Dockerization of Visual Circuit

  • Prakhar Bansal (Project #1): RoboticsAcademy: Cross-Platform Desktop Application using ElectronJS

Ideas list

This open source organization welcomes contributors to work on these topics:

Project #1: RoboticsAcademy: Cross-Platform Desktop Application using ElectronJS

Brief Explanation: Robotics-Academy is a framework for learning robotics and computer vision. It consists of a collection of robot programming exercises. The students have to code in Python the behavior of a given (either simulated or real) robot to fit some task related to robotics or computer vision. It uses standard middleware and libraries such as ROS or OpenCV.

Currently, the platform offers its resources through a website, making it difficult for users to access and manage the content. The objective of this project is to develop a cross-platform desktop application for Robotics Academy using ElectronJS. The application will provide a convenient and user-friendly interface to access and manage the content, making it easier and more efficient for users to learn and explore the world of robotics.

The scope of this project is to develop a cross-platform desktop application for Robotics Academy using ElectronJS, integrating the existing resources and content into the application and implementing features such as resource management and a user-friendly interface. The project will also include thorough testing and bug fixing to ensure the stability and usability of the application, as well as documentation with clear instructions for future maintenance and development.

  • Skills required/preferred: Proficiency in JavaScript and ElectronJS framework
  • Difficulty rating: medium
  • Expected results: desktop application for Robotics Academy
  • Expected size: 175h
  • Mentors: Apoorv Garg (apoorvgarg.ms AT gmail.com) and David Roldán (david.roldan AT urjc.es)

Project #2: Robotics Academy: migration to ROS2 Humble

Brief Explanation: Robotics-Academy is a framework for learning robotics and computer vision. It consists of a collection of robot programming exercises. The students have to code in Python the behavior of a given (either simulated or real) robot to fit some task related to robotics or computer vision. It uses standard middleware and libraries such as ROS or OpenCV.

Nowadays, Robotics Academy offers students up to 26 exercises, plus another 11 prototype exercises. All of them come ready to use in the RoboticsAcademy Docker Image (RADI). The only requirement for the students is to download the Docker image; all the dependencies are installed inside the RADI.

The RADI is one key point of the platform and Project #2 aims to keep improving it. One main component of the RADI is ROS. Currently, the RADI is based on ROS Noetic, whose end of life (May 2025) is getting closer. The main goal of the project is to migrate the RADI to ROS2; moving from a ROS 1 distribution to a ROS 2 distribution is a major improvement. This migration will take the form of a couple of exercises running in ROS2 with the new RADI.
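
As a rough illustration of the migration target, the sketch below shows a minimal ROS 2 (rclpy) node of the kind the migrated exercises would rely on. The topic name used here is only an assumption for illustration, not necessarily the actual RADI topic.

```python
# Minimal rclpy node sketch; '/camera/image_raw' is an assumed topic name.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class CameraListener(Node):
    def __init__(self):
        super().__init__('camera_listener')
        # In ROS 2 the subscription belongs to the node itself,
        # instead of the global rospy.Subscriber used with ROS Noetic.
        self.subscription = self.create_subscription(
            Image, '/camera/image_raw', self.image_callback, 10)

    def image_callback(self, msg):
        self.get_logger().info(f'Received image: {msg.width}x{msg.height}')


def main():
    rclpy.init()
    node = CameraListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```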

Some ROS2 Foxy exercise prototypes were implemented in the past. For more information, have a look at the GSoC 2021 project, the corresponding academy exercises and the FollowPerson exercise on ROS2 Foxy.

Besides, Project #2 may also explore some RADI-related topics such as size optimization, a multi-rosdistro Docker image or hardware acceleration.

The project is bound to Project #3, and both students will work together on the new RADI.

  • Skills required/preferred: Docker, ROS2, Python and ROS.
  • Difficulty rating: medium
  • Expected results: New docker image based on ROS2 Humble.
  • Expected size: 350h
  • Mentors: Pedro Arias (pedro.ariasp AT upm.es) and Luis Roberto Morales (lr.morales.iglesias AT gmail.com)

Project #3: Robotics Academy: migration to Gazebo Fortress

Brief Explanation: Robotics-Academy is a framework for learning robotics and computer vision. It consists of a collection of robot programming exercises. The students have to code in Python the behavior of a given (either simulated or real) robot to fit some task related to robotics or computer vision. It uses standard middleware and libraries such as ROS or OpenCV.

Nowadays, Robotics Academy offers students up to 26 exercises, plus another 11 prototype exercises. All of them come ready to use in the RoboticsAcademy Docker Image (RADI). The only requirement for the students is to download the Docker image; all the dependencies are installed inside the RADI.

The RADI is one key point of the platform and Project #3 aims to keep improving it. One main component of the RADI is Gazebo. Currently, the RADI is based on Gazebo 11, whose end of life (September 2025) is getting closer. The main goal of the project is to migrate the RADI to Gazebo Fortress. This migration will take the form of a couple of exercises running in ROS2 with the new RADI.

The project is bound to Project #2, and both students will work together on the new RADI.

  • Skills required/preferred: Docker, Gazebo Fortress, Python and ROS/ROS2.
  • Difficulty rating: medium
  • Expected results: New docker image based on Gazebo Fortress.
  • Expected size: 175h
  • Mentors: Pedro Arias (pedro.ariasp AT upm.es) and Arkajyoti Basak (arkajbasak121 AT gmail.com)

Project #4: Robotics-Academy: improve Deep Learning based Human Detection exercise

Brief Explanation: Robotics-Academy is a framework for learning robotics and computer vision. It consists of a collection of robot programming exercises. The students have to code in Python the behavior of a given (either simulated or real) robot to fit some task related to robotics or computer vision. It uses standard middleware and libraries such as ROS or OpenCV.

The idea for this project is to improve the “Human detection” Deep Learning exercise at Robotics-Academy, developed during GSoC-2021. Instead of asking the user to code the solution in a web-based editor, the user will have to upload a deep learning model that takes video frames as input and outputs the image coordinates of the detected humans. One goal of this project is support for neural network models in the open ONNX format, which is becoming a standard (a minimal inference sketch follows the list below). Fluent exercise execution is also a goal, hopefully taking advantage of the GPU of the user’s machine from the RoboticsAcademy Docker container. An automatic evaluator to test the performance of the network provided by the user will also be explored. All of the above functionalities have, to some extent, already been implemented in the exercise. The goal is to improve/refine them, either on the existing implementation or from scratch. In short, the scope of improvements in the exercise includes:

  • Custom-train or find an enhanced DL model that detects humans specifically. The pre-processing and post-processing stages would have to be adapted to the input and output structure of the new model.
  • Enhancing the model benchmarking part in terms of its interpretability, use case, accuracy, and visual appeal to the user.
  • Enabling GPU support while executing the exercise from the docker container.
  • Fluent exercise execution.
  • Other improvements are also welcome. This may include adding/modifying features and making the exercise more user-friendly and easier to use.
  • Skills required/preferred: Python, OpenCV, PyTorch/Tensorflow
  • Difficulty rating: medium
  • Expected results: a web-based exercise for solving a visual detection task using deep learning
  • Expected size: 175h
  • Mentors: David Pascual (d.pascualhe AT gmail.com) and Shashwat Dalakoti (shash.dal623 AT gmail.com)
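
As a hedged illustration of the ONNX support mentioned above, the snippet below shows how a user-supplied detection model could be run with ONNX Runtime. The model file name, input shape and provider list are assumptions for illustration, not the exercise's actual implementation.

```python
# Hypothetical sketch of running a user-supplied detection model with ONNX Runtime.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    'person_detector.onnx',                      # model uploaded by the user (assumed name)
    providers=['CUDAExecutionProvider',          # use the GPU if it is available...
               'CPUExecutionProvider'])          # ...otherwise fall back to the CPU

input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 480, 640).astype(np.float32)  # stand-in for a video frame

outputs = session.run(None, {input_name: frame})
print('Raw detection outputs:', [o.shape for o in outputs])
```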

Project #5: Robotics-Academy: new exercise using Deep Learning for Visual Control

Brief Explanation: Robotics-Academy is a framework for learning robotics and computer vision. It consists of a collection of robot programming exercises. The students have to code in Python the behavior of a given (either simulated or real) robot to fit some task related to robotics or computer vision. It uses standard middleware and libraries such as ROS or OpenCV.

The idea for this project is to develop a new deep learning exercise for visual robotic control within the Robotics-Academy context. We will build a web-based interface that allows the user to upload a trained model that takes as input the images from the camera installed on a drone or a car, and outputs the linear speed and angular velocity of the vehicle. The controlled robot and its environment will be simulated using Gazebo. In this project, we will:

  • Update the web interface for accepting models trained with PyTorch/Tensorflow as input.
  • Build new widgets for monitoring results for the particular exercise.
  • Get a simulated environment ready.
  • Code the core application that will feed the trained model with input data and send back the results.
  • Train a naive model that allows us to show how the exercise can be solved.

This new exercise may reuse the infrastructure developed for the “Human detection” Deep Learning exercise.
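
The sketch below illustrates the model interface the new exercise expects: a camera image in, linear and angular velocity out. It is a toy PyTorch model written only for illustration, not a proposed architecture.

```python
# Toy visual-control model: image -> (linear speed, angular velocity).
import torch
import torch.nn as nn


class NaiveVisualControl(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 2)  # [linear speed, angular velocity]

    def forward(self, image):
        return self.head(self.features(image))


model = NaiveVisualControl()
camera_frame = torch.rand(1, 3, 120, 160)   # stand-in for a camera image
v, w = model(camera_frame)[0]
print(f'v={v.item():.3f} m/s, w={w.item():.3f} rad/s')
```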

  • Skills required/preferred: Python, OpenCV, PyTorch/Tensorflow, Gazebo
  • Difficulty rating: medium
  • Expected results: a web-based exercise for robotic visual control using deep learning
  • Expected size: 175h
  • Mentors: David Pascual ( d.pascualhe AT gmail.com ) and Pankhuri Vanjani (pankhurivanjani AT gmail.com)

Project #6: Robotics-Academy: support for raw robotics applications, without template

Brief Explanation: Robotics-Academy is a framework for learning robotics and computer vision. It consists of a collection of robot programming exercises. The students have to code in Python the behavior of a given (either simulated or real) robot to fit some task related to robotics or computer vision. It uses standard middleware and libraries such as ROS or OpenCV.

For each exercise there is a webpage (exercise.html) and a Python template (exercise.py), both connected through websockets. The template provides an easy Python API for accessing robot sensors and actuators, a Hardware Abstraction Layer (HAL-API) that avoids the explicit management of ROS topics. It also provides a computing pattern based on a sequential part of the code, which is executed once at the very beginning of the application, and an iterative part, which is executed inside an infinite loop at a controlled frequency. That template is combined on the fly with the user code to generate the robot application, whose execution may be safely interrupted at any time. The goal of this project is to also support robotics applications that do not use any template. The user will be required to program the robotics application from scratch, just using the available ROS topics.
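
For reference, a minimal sketch of the template's computing pattern is shown below. The HAL object here is a hypothetical stand-in written for illustration, since the real HAL-API method names vary between exercises.

```python
# Sketch of the sequential/iterative pattern the current templates provide.
import time


class FakeHAL:
    """Stand-in for the real Hardware Abstraction Layer (HAL-API)."""
    def getImage(self):
        return 'camera frame'

    def setV(self, v):
        print(f'linear speed set to {v} m/s')


HAL = FakeHAL()

# Sequential part: executed once at the very beginning of the application.
target_speed = 0.5

# Iterative part: executed inside an infinite loop at a controlled frequency.
while True:
    image = HAL.getImage()   # read robot sensors through the HAL-API
    HAL.setV(target_speed)   # command the actuators, no explicit ROS topics
    time.sleep(0.1)          # roughly 10 Hz iteration rate
```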

  • Skills required/preferred: Python, JavaScript, HTML, CSS
  • Difficulty rating: medium
  • Expected results: users may program their applications from scratch using ROS topics directly
  • Expected size: 350h
  • Mentors: David Roldán (david.roldan AT urjc.es) and David María (dmariaa70 AT gmail.com)

Project #7: Dockerization of Visual Circuit

Brief Explanation: VisualCircuit allows users to program robotic intelligence using a visual language which consists of blocks and wires, as in electronic circuits. Last year's work focused on migrating the old POSIX IPC implementation to a cross-platform Python shared memory implementation. In addition, new functions for sharing data, application demos and support for Finite State Machines were added. A web service of VisualCircuit also exists. Currently, the process to build a VisualCircuit application is simple: Edit Application on the Web → Download Python Application File → Run Locally. You can read further about the tool on the website.

The aim for this year is to make VisualCircuit easy to host and deploy for a larger audience. To that end, we want to build a Docker image of the application. The final goal of this Docker implementation is to smoothly bundle VisualCircuit along with Robotics Academy. Aside from this, we want to make the process of adding blocks to VisualCircuit easier by creating a GitHub repository where people can contribute their designs; these designs will then be verified by maintainers and merged. Any new Robotics Academy applications made using VisualCircuit would be a welcome bonus.

  • Skills required/preferred: Python, Docker, ROS, React and some basic JavaScript knowledge
  • Difficulty rating: medium
  • Expected results: A dockerized version of Visual Circuit, a new repository and workflow for adding blocks to Visual Circuit, good documentation for both of the above.
  • Expected size: 175h
  • Mentors: Toshan Luktuke (toshan1603 AT gmail.com) and Suhas Gopal (suhas.g96.sg AT gmail.com)

Project #8: Obstacle Avoidance for Autonomous Driving in CARLA Using Segmentation Deep Learning Models

Brief Explanation: BehaviorMetrics is a tool used to compare different autonomous driving architectures.

It uses CARLA, a well-known and really powerful autonomous driving simulator, and the CARLA ROS Bridge. We are currently able to drive a car autonomously on different circuits using deep learning models that do the robot control processing, trained with imitation learning.

For this project, we would like to improve the stack of autonomous driving tasks supported, adding obstacle avoidance based on segmentation. We would like to develop models able to avoid obstacles on an end-to-end basis, avoiding different types of objects in a range of situations.
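
As a hedged sketch of the pipeline this project targets, the snippet below uses the CARLA Python API to attach a semantic segmentation camera to a vehicle and drive it from the camera callback. The constant control command stands in for the end-to-end model output, and the host, port, vehicle blueprint and sensor placement are assumptions.

```python
# Sketch: semantic segmentation camera feeding a (placeholder) driving policy in CARLA.
import carla

client = carla.Client('localhost', 2000)   # assumed CARLA server address
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

vehicle = world.spawn_actor(blueprints.filter('vehicle.tesla.model3')[0],
                            world.get_map().get_spawn_points()[0])
camera = world.spawn_actor(blueprints.find('sensor.camera.semantic_segmentation'),
                           carla.Transform(carla.Location(x=1.5, z=2.4)),
                           attach_to=vehicle)


def on_segmentation(image):
    # A real model would map the segmentation image to controls end to end;
    # here a constant command stands in for the network output.
    throttle, steer = 0.3, 0.0
    vehicle.apply_control(carla.VehicleControl(throttle=throttle, steer=steer))


camera.listen(on_segmentation)
```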


  • Skills required/preferred: Python and deep learning knowledge (Tensorflow/PyTorch).
  • Difficulty rating: medium
  • Expected results: new autonomous driving deep learning models to avoid obstacles.
  • Expected size: 350h
  • Mentors: Sergio Paniego Blanco (sergiopaniegoblanco AT gmail.com) and Nikhil Paliwal ( nikhil.paliwal14 AT gmail.com )

Project #9: DeepLearning models for autonomous drone piloting and support in BehaviorMetrics

Brief Explanation: BehaviorMetrics is a tool used to compare different autonomous driving architectures. It uses Gazebo, a well-known and really powerful robotics simulator, and ROS Noetic. During a past GSoC 2021 project, we explored the addition of drone support to Behavior Metrics. We were able to add this support for the IRIS drone, so we would now like to explore adding autonomous drone piloting techniques based on deep learning for this configuration.

We would like to generate new drone scenarios especially suited for the robot control problem, train deep learning models based on state-of-the-art architectures and refine this support in the BehaviorMetrics stack. Drone 3D control is a step forward from 2D car control, as there is an additional degree of freedom and the onboard camera orientation typically oscillates significantly while the vehicle is moving in 3D.
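
To make that extra degree of freedom concrete, the snippet below contrasts a 2D car command with a 3D drone command using geometry_msgs/Twist; the actual command interface used by Behavior Metrics and the drone wrapper may differ, so treat this as an illustrative assumption.

```python
# 2D car command vs 3D drone command, using geometry_msgs/Twist for illustration.
from geometry_msgs.msg import Twist

car_cmd = Twist()
car_cmd.linear.x = 2.0      # forward speed (m/s)
car_cmd.angular.z = 0.1     # steering / yaw rate (rad/s)

drone_cmd = Twist()
drone_cmd.linear.x = 1.0    # forward speed (m/s)
drone_cmd.linear.z = 0.5    # vertical speed (m/s): the additional 3D degree of freedom
drone_cmd.angular.z = 0.1   # yaw rate (rad/s)
```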

  • Skills required/preferred: Python and deep learning knowledge (specially PyTorch).
  • Difficulty rating: hard
  • Expected results: broader support for autonomous drone piloting using Gazebo and PyTorch.
  • Expected size: 350 hours
  • Mentors: Sergio Paniego Blanco (sergiopaniegoblanco AT gmail.com) and Nikhil Paliwal ( nikhil.paliwal14 AT gmail.com )

Project #10: Robotics Academy: improvement of Gazebo scenarios of existing exercises

Brief Explanation: Currently Robotics Academy offers students up to 26 exercises, plus another 11 prototype exercises. Most of them are based on ROS1 Noetic and Gazebo 11; there are also several prototypes based on ROS2 Foxy. The main goal of this project is to improve the current Gazebo scenarios and robot models for many of the exercises, making them more appealing and realistic without increasing the computing power required for rendering too much. For instance, the Formula 1 car model in the FollowLine and ObstacleAvoidance exercises will be enhanced to have Ackermann steering. This project will build on a previous GSoC 2022 project.
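
To make the Ackermann enhancement concrete, the sketch below computes the different inner and outer front-wheel angles from a single commanded steering angle; the wheelbase and track width values are made up for illustration, not those of the actual Formula 1 model.

```python
# Ackermann steering geometry: inner and outer front wheels turn by different angles.
import math

WHEELBASE = 1.6     # metres (hypothetical value)
TRACK_WIDTH = 1.2   # metres (hypothetical value)


def ackermann_angles(steering_angle):
    """Return (inner, outer) wheel angles in radians for a bicycle-model steering angle."""
    if abs(steering_angle) < 1e-6:
        return 0.0, 0.0
    radius = WHEELBASE / math.tan(steering_angle)   # turning radius of the bicycle model
    half_track = math.copysign(TRACK_WIDTH / 2, radius)
    inner = math.atan(WHEELBASE / (radius - half_track))   # inner wheel turns more
    outer = math.atan(WHEELBASE / (radius + half_track))   # outer wheel turns less
    return inner, outer


print(ackermann_angles(math.radians(15)))
```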

  • Skills required/preferred: C++ and Python programming skills, experience with Gazebo, SDF, URDF. Good to know: ROS2
  • Difficulty rating: medium
  • Expected results: New Gazebo scenarios for 10 exercises
  • Expected size: 175 hours
  • Mentors: José María Cañas (josemaria.plaza AT gmail.com) and Shreyas Gokhale (shreyas6gokhale AT gmail.com).

Application instructions for GSoC-2023

We welcome students to contact the relevant mentors before submitting their application on the official GSoC website. If in doubt about which project(s) to apply for, send a message to jderobot AT gmail.com. We recommend browsing previous GSoC student pages to look for ready-to-use projects, and to get an idea of the expected amount of work for a valid GSoC proposal.

Requirements

  • Git experience
  • C++ and Python programming experience (depending on the project)

Programming tests

Test           #1   #2   #3   #4   #5   #6   #7   #8   #9   #10
Academy (A)     X    X    X    X    X    X    X    X    X    X
C++ (B)         O    O    O    O    O    O    O    O    O    X
Python (C)      X    X    X    X    X    X    X    X    X    X
ROS2 (D)        O    X    X    X    X    X    O    -    -    X
React (E)       X    O    O    O    O    X    X    -    -    O

Where:
  - Not applicable
  X Mandatory
  O Optional

Before accepting any proposal, all candidates have to complete the programming challenges indicated in the table above for their chosen project.

Send us your information

AFTER doing the programming tests, fill in this web form with your information and challenge results. Then you are invited to ask the project mentors about the project details. We may require more information from you, such as the following:

  1. Contact details
    • Name and surname:
    • Country:
    • Email:
    • Public repository/ies:
    • Personal blog (optional):
    • Twitter/Identica/LinkedIn/others:
  2. Timeline
    • Now split your project idea into smaller tasks. Quantify the time you think each task needs. Finally, draw a tentative project plan (timeline) including dates, covering the whole GSoC period. Don’t forget to also include the days on which you don’t plan to code because of exams, holidays, etc.
    • Do you understand this is a serious commitment, equivalent to a full-time paid summer internship or summer job?
    • Do you have any known time conflicts during the official coding period?
  3. Studies
    • What is your School and degree?
    • Would your application contribute to your ongoing studies/degree? If so, how?
  4. Programming background
    • Computing experience: operating systems you use on a daily basis, known programming languages, hardware, etc.
    • Robot or Computer Vision programming experience:
    • Other software programming:
  5. GSoC participation
    • Have you participated in GSoC before?
    • How many times, which year, which project?
    • Have you applied before but not been selected? When?
    • Have you submitted/will you submit another proposal for GSoC 2023 to a different org?

Previous GSoC students

  • Apoorv Garg (GSoC-2022) Improvement of Web Templates of Robotics Academy exercises
  • Toshan Luktuke (GSoC-2022) Improvement of VisualCircuit web service
  • Nikhil Paliwal (GSoC-2022) Optimization of Deep Learning models for autonomous driving
  • Akshay Narisetti (GSoC-2022) Robotics Academy: improvement of autonomous driving exercises
  • Prakarsh Kaushik (GSoC-2022) Robotics Academy: consolidation of drone based exercises
  • Bhavesh Misra (GSoC-2022) Robotics Academy: improve Deep Learning based Human Detection exercise
  • Suhas Gopal (GSoC-2021) Shifting VisualCircuit to a web server
  • Utkarsh Mishra (GSoC-2021) Autonomous Driving drone with Gazebo using Deep Learning techniques
  • Siddharth Saha (GSoC-2021) Robotics Academy: multirobot version of the Amazon warehouse exercise in ROS2
  • Shashwat Dalakoti (GSoC-2021) Robotics-Academy: exercise using Deep Learning for Visual Detection
  • Arkajyoti Basak (GSoC-2021) Robotics Academy: new drone based exercises
  • Chandan Kumar (GSoC-2021) Robotics Academy: Migrating industrial robot manipulation exercises to web server
  • Muhammad Taha (GSoC-2020) VisualCircuit tool, digital electronics language for robot behaviors.
  • Sakshay Mahna (GSoC-2020) Robotics-Academy exercises on Evolutionary Robotics.
  • Shreyas Gokhale (GSoC-2020) Multi-Robot exercises for Robotics Academy In ROS2.
  • Yijia Wu (GSoC-2020) Vision-based Industrial Robot Manipulation with MoveIt.
  • Diego Charrez (GSoC-2020) Reinforcement Learning for Autonomous Driving with Gazebo and OpenAI gym.
  • Nikhil Khedekar (GSoC-2019) Migration to ROS of drones exercises on JdeRobot Academy
  • Shyngyskhan Abilkassov (GSoC-2019) Amazon warehouse exercise on JdeRobot Academy
  • Jeevan Kumar (GSoC-2019) Improving DetectionSuite DeepLearning tool
  • Baidyanath Kundu (GSoC-2019) A parameterized automata Library for VisualStates tool
  • Srinivasan Vijayraghavan (GSoC-2019) Running Python code on the web browser
  • Pankhuri Vanjani (GSoC-2019) Migration of JdeRobot tools to ROS 2
  • Pushkal Katara (GSoC-2018) VisualStates tool
  • Arsalan Akhter (GSoC-2018) Robotics-Academy
  • Hanqing Xie (GSoC-2018) Robotics-Academy
  • Sergio Paniego (GSoC-2018) PyOnArduino tool
  • Jianxiong Cai (GSoC-2018) Creating realistic 3D map from online SLAM result
  • Vinay Sharma (GSoC-2018) DeepLearning, DetectionSuite tool
  • Nigel Fernandez GSoC-2017
  • Okan Asik GSoC-2017, VisualStates tool
  • S.Mehdi Mohaimanian GSoC-2017
  • Raúl Pérula GSoC-2017, Scratch2JdeRobot tool
  • Lihang Li: GSoC-2015, Visual SLAM, RGBD, 3D Reconstruction
  • Andrei Militaru GSoC-2015, interoperation of ROS and JdeRobot
  • Satyaki Chakraborty GSoC-2015, Interconnection with Android Wear

How to increase your chances of being selected in GSoC-2023

If you put yourself in the shoes of the mentor that has to select the students, you’ll immediately realize that there are some behaviors that are usually rewarded. Here are some examples.

  1. Be proactive: Mentors are more likely to select students that openly discuss the existing ideas and/or propose their own. It is a bad idea to submit your idea only on the Google website without discussing it, because it won’t be noticed.

  2. Demonstrate your skills: Consider that mentors are contacted by several students applying for the same project. A way to show that you are the best candidate is to demonstrate that you are familiar with the software and that you can code. How? Browse the bug tracker (the issues in the JdeRobot GitHub projects), fix some bugs and propose your patch by submitting a Pull Request, and/or ask mentors to challenge you! Moreover, bug fixes are a great way to get familiar with the code.

  3. Demonstrate your intention to stay: Students that are likely to disappear after GSoC are less likely to be selected. This is because there is no point in developing something that won’t be maintained. Moreover, one of the aims of GSoC is to bring new developers into the community.

RTFM

Read the relevant information about GSoC in the wiki / web pages before asking. Most FAQs have been answered already!