Course projects

Deep Learning

This project combined deep learning and computer vision. I trained a person re-identification network and applied it to multi-object tracking on the MOT dataset. I used a Spatial Transformer Network to learn better features for the appearance descriptor used in Deep-SORT, which matches detections by cosine similarity. I first trained the classifier to distinguish individual people and then applied it to the tracking problem using the Deep-SORT algorithm.
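The appearance-matching step can be sketched as follows. This is a minimal illustration of cosine-distance association between track and detection embeddings, not the project's actual code; the function name and toy vectors are made up for the example.

```python
import numpy as np

def cosine_distance_matrix(track_features, detection_features):
    """Pairwise cosine distances between track and detection embeddings.

    Deep-SORT associates tracks with detections by comparing L2-normalized
    appearance embeddings; a smaller distance means a better appearance match.
    """
    a = np.asarray(track_features, dtype=float)
    b = np.asarray(detection_features, dtype=float)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T  # shape: (num_tracks, num_detections)

# Two tracks, one detection: the detection should match track 0.
tracks = [[1.0, 0.0], [0.0, 1.0]]
dets = [[0.9, 0.1]]
d = cosine_distance_matrix(tracks, dets)
```

The resulting matrix feeds a gating and assignment step (Hungarian matching in Deep-SORT) that decides which detection continues which track.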

My report is attached:

Link to report


Experimental Robotics with ROS

In this project, the goal was to assemble a mini rover and integrate it with sensors to perform SLAM and target localization. Working with two other team members, I implemented Hector SLAM using a 2D lidar and localized a tag in the environment after detecting it with a camera. The camera was also programmed to continuously observe the target while the rover was moving.
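Localizing a camera-detected tag in the SLAM map amounts to chaining frame transforms. The sketch below is illustrative only; the transform names and numbers are assumptions, and in practice this is done with ROS tf rather than raw matrices.

```python
import numpy as np

def tag_in_map(T_map_base, T_base_cam, p_cam):
    """Express a tag position detected in the camera frame in the map frame.

    T_map_base: 4x4 pose of the rover base in the SLAM map (e.g. from Hector SLAM).
    T_base_cam: 4x4 fixed mount transform of the camera on the rover.
    p_cam:      tag position [x, y, z] in the camera frame.
    """
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coordinates
    return (T_map_base @ T_base_cam @ p)[:3]

# Rover at (2, 0) in the map, camera mounted 0.1 m forward of the base,
# tag seen 1 m ahead of the camera along its x-axis.
T_map_base = np.eye(4); T_map_base[0, 3] = 2.0
T_base_cam = np.eye(4); T_base_cam[0, 3] = 0.1
p = tag_in_map(T_map_base, T_base_cam, [1.0, 0.0, 0.0])
```

Because the map-frame position no longer depends on where the rover happens to be, the camera can keep re-aiming at the same point while the rover moves.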

Our report is attached:

Link to report


Robotic Interactions and Manipulation

In this project, I programmed the Sawyer robot manipulator to perform autonomous grasping of cardboard blocks using a magnet gripper. The goal was to develop a gripping strategy for MBZIRC 2020 Challenge II. Sawyer has a camera on its end-effector, which was used to identify the tag on the block. I then estimated the distance to the block and applied inverse kinematics to reach the final position and pose. The system could successfully complete a reach, grab, return, and release pipeline.
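The distance-estimation step can be illustrated with the pinhole camera model: a tag of known physical size whose apparent size in pixels is measured gives the range directly. The function name and numbers below are hypothetical, a sketch rather than the project's implementation.

```python
def distance_from_tag(tag_side_m, tag_side_px, focal_px):
    """Pinhole-camera range estimate from the apparent size of a fiducial tag.

    distance = focal_length_px * real_size_m / apparent_size_px
    """
    return focal_px * tag_side_m / tag_side_px

# A 5 cm tag appearing 100 px wide, with a 600 px focal length,
# is about 0.3 m from the camera.
d = distance_from_tag(0.05, 100.0, 600.0)
```

The estimated range, combined with the tag's pixel coordinates, gives a 3D target that the inverse-kinematics solver converts into joint angles for the reach.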

My report is attached:

Link to report


Bayesian Robotics

This project involved Bayesian estimation theory. Two other team members and I built a Gazebo simulation containing a field of obstacles and two drones, with the first drone chasing the second, the intruder. It simulated MBZIRC 2020 Challenge I, in which the goal is to autonomously approach an intruder drone and capture the load it carries using another drone. We implemented an Extended Kalman Filter to estimate the drones' locations and color-based perception to identify the target ball. The simulated sensors included GPS and a stereo camera. The estimated drone locations were compared against ground truth from the simulation environment.
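One predict/update cycle of the filter can be sketched as below. Since a GPS position measurement is linear in the state, the EKF equations reduce to the standard Kalman form for this 1D constant-velocity toy model; the state layout and noise values are assumptions for illustration, not the project's tuned filter.

```python
import numpy as np

def ekf_step(x, P, z, dt, q=0.1, r=1.0):
    """One predict/update cycle estimating a drone's 1D position and velocity.

    x: state [position, velocity], P: state covariance, z: GPS position reading.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # GPS measures position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    # Predict: propagate the state and grow the uncertainty
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the GPS measurement
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, np.array([1.0]), dt=0.1)
```

In the project the same structure ran per drone with nonlinear motion and stereo-camera measurement models, which is where the EKF's linearization actually comes into play.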

Our report is attached:

Link to report