CIT 581 SDR Final project blog update – May 1, 2020

This is the final project update post. In this blog post, we will discuss:

  1. Simulation GUI
  2. Demo
  3. Result analysis and Validation
  4. Conclusion and Future plans

Simulation GUI

As discussed in earlier updates, ours is a simulation-based project. To present the simulation, we created a simple GUI with the following features (a rough sketch of the layout is given after the list):

  1. A drop-down menu to select the room number from which the document/package has to be picked up.
  2. A drop-down menu to select the drop-off room number.
  3. Buttons to – i) start the simulation, ii) pause the simulation, iii) reset the simulation.
  4. A map of the entire floor on the left, which shows – i) the bot’s movements, ii) the waypoints (shown in yellow).
  5. A zoomed-in view of the bot on the right as it moves and completes the objective.
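
Just to give a feel for how these pieces fit together, here is a minimal, illustrative sketch of such a control panel in Python using Tkinter and Matplotlib. The room list, layout, and button callbacks are placeholder assumptions, not our exact implementation.

    # Minimal GUI sketch (illustrative only): two room drop-downs, Start/Pause/Reset
    # buttons, and a Matplotlib canvas for the floor map and the zoomed view.
    import tkinter as tk
    from tkinter import ttk
    from matplotlib.figure import Figure
    from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

    ROOMS = ["112", "121"]  # assumed room list for illustration

    root = tk.Tk()
    root.title("iBoiler Simulation")

    controls = ttk.Frame(root)
    controls.pack(side=tk.TOP, fill=tk.X)

    pickup = ttk.Combobox(controls, values=ROOMS, state="readonly")
    dropoff = ttk.Combobox(controls, values=ROOMS, state="readonly")
    pickup.set(ROOMS[0])
    dropoff.set(ROOMS[1])
    ttk.Label(controls, text="Pick-up room").pack(side=tk.LEFT)
    pickup.pack(side=tk.LEFT, padx=4)
    ttk.Label(controls, text="Drop-off room").pack(side=tk.LEFT)
    dropoff.pack(side=tk.LEFT, padx=4)

    def start():
        print("start simulation:", pickup.get(), "->", dropoff.get())

    def pause():
        print("pause simulation")

    def reset():
        print("reset simulation")

    for label, cmd in [("Begin Simulation", start), ("Pause", pause), ("Reset", reset)]:
        ttk.Button(controls, text=label, command=cmd).pack(side=tk.LEFT, padx=4)

    # Floor map on the left, zoomed-in view of the bot on the right.
    fig = Figure(figsize=(8, 4))
    floor_ax, zoom_ax = fig.subplots(1, 2)
    floor_ax.set_title("Floor map")
    zoom_ax.set_title("Zoomed view")
    FigureCanvasTkAgg(fig, master=root).get_tk_widget().pack(fill=tk.BOTH, expand=True)

    root.mainloop()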

Demo of the simulation

For the demo, we will choose room 112 as the pick-up room and room 121 as the drop-off room.

Rooms 112 and 121 are shown in the image above.

Once we hit ‘Begin Simulation’, the iBoiler will perform the following tasks (a rough sketch of this loop is given after the list):

  1. Plan the path from the bot’s current location to room 112
  2. Split the planned path into waypoints
  3. Start autonomous navigation, driving from one waypoint to the next until it reaches room 112
  4. Plan the path from the bot’s current location to room 121
  5. Split the planned path into waypoints
  6. Start autonomous navigation, driving from one waypoint to the next until it reaches room 121
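
The sketch below shows, in broad strokes, how this plan–split–navigate loop can be expressed. The straight-line placeholder planner, the waypoint spacing, and the room coordinates are assumptions for illustration only, not the actual iBoiler code.

    # Illustrative sketch of the plan -> split -> navigate loop.
    def split_into_waypoints(path, spacing=10):
        """Keep every `spacing`-th point of the planned path, plus the goal."""
        waypoints = path[::spacing]
        if waypoints[-1] != path[-1]:
            waypoints.append(path[-1])
        return waypoints

    def plan_path(start, goal):
        """Placeholder straight-line planner (the real planner follows the corridors)."""
        steps = max(abs(goal[0] - start[0]), abs(goal[1] - start[1]), 1)
        return [(round(start[0] + (goal[0] - start[0]) * t / steps),
                 round(start[1] + (goal[1] - start[1]) * t / steps))
                for t in range(steps + 1)]

    def navigate(bot_position, waypoints):
        """Drive from one waypoint to the next; the real bot re-calibrates at each stop."""
        for wp in waypoints:
            bot_position = wp
        return bot_position

    ROOM = {"112": (5, 40), "121": (60, 12)}   # assumed grid coordinates
    bot = (80, 12)                             # assumed starting position (extreme right)

    for destination in ("112", "121"):         # pick-up leg, then drop-off leg
        path = plan_path(bot, ROOM[destination])
        waypoints = split_into_waypoints(path)
        bot = navigate(bot, waypoints)
        print(f"Reached room {destination} via {len(waypoints)} waypoints")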

In the above screen grab, we can observe how the bot planned the path from its current location to room 121. We can also see how the bot splits the path into waypoints (shown by yellow stars).

In the above screen capture, we can see how the bot has navigated from its initial position (extreme right) to i) room 112 and then ii) room 121. The waypoints are shown as yellow circles.

A snippet of the demo. Apologies for the poor video quality.

Result analysis and Validation

During the simulation, we have access to the following live-updating graphs. From the top two graphs, we can analyse the velocities of the left and right wheels. We can observe how the bot accelerates between waypoints, stops once it reaches a waypoint, and then accelerates again.
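
As a rough illustration of that stop-and-go pattern, a simple trapezoidal velocity profile between waypoints could look like the sketch below. The acceleration and speed limits are assumed values, not the ones used in our simulator.

    # Illustrative trapezoidal wheel-velocity profile between two waypoints
    # (assumed limits; not the simulator's actual drive model).
    def velocity_profile(distance, v_max=1.0, accel=0.5, dt=0.1):
        """Accelerate, cruise, then decelerate so the bot stops at the waypoint."""
        velocities, v, travelled = [], 0.0, 0.0
        while travelled < distance:
            braking_distance = v * v / (2 * accel)
            if distance - travelled <= braking_distance:
                v = max(v - accel * dt, 0.0)      # slow down near the waypoint
            else:
                v = min(v + accel * dt, v_max)    # speed up / cruise
            travelled += v * dt
            velocities.append(v)
            if v == 0.0:
                break
        return velocities

    # For straight segments both wheels follow the same profile;
    # turning would add a differential between the left and right wheels.
    left = right = velocity_profile(distance=5.0)
    print(f"{len(left)} samples, peak velocity {max(left):.2f} m/s")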

From the 3rd graph, we can see how the estimation error varies. The bot calibrates its location every time it reaches a waypoint. The error is calculated as the Euclidean distance between the bot’s true location and its estimated location.

From the 4th graph, we can see how the navigation error varies. The navigation error is calculated as the Euclidean distance between the bot’s true location and the waypoint’s location. At waypoint #7 (the destination waypoint – Room 121), the navigation error was around 4 units of Euclidean distance, which means the bot reached the destination with an error of about 4 units. We estimate that in the real world this would correspond to an error of 5-6 feet; in other words, the bot reached within about 6 feet of the destination.
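
Both error metrics boil down to the same calculation – the Euclidean distance between two positions. A small sketch is shown below; the grid-to-feet scale factor and the example coordinates are our own rough assumptions, not measured values.

    # Euclidean error metrics used for validation (the feet-per-unit scale
    # is a rough approximation we chose, not a measured calibration).
    import math

    FEET_PER_GRID_UNIT = 1.4  # assumed scale: ~4 units of error corresponds to roughly 5-6 feet

    def euclidean(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def estimation_error(true_position, estimated_position):
        """Error between where the bot really is and where it thinks it is."""
        return euclidean(true_position, estimated_position)

    def navigation_error(true_position, waypoint):
        """Error between where the bot really is and the waypoint it was aiming for."""
        return euclidean(true_position, waypoint)

    # Example: destination waypoint (Room 121) with a navigation error of ~4 units.
    err = navigation_error(true_position=(62, 15), waypoint=(60, 12))
    print(f"navigation error: {err:.1f} units, about {err * FEET_PER_GRID_UNIT:.1f} feet")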

These metrics serve as validation parameters for us, and we declare the run of the algorithms and the bot’s simulation a success.

Conclusion and Future plans

Conclusion

  • We have achieved most of the goals – mapping & familiarizing and autonomous navigation – that we set at the beginning of the project.
    (Except for the physical robot implementation)
  • As we changed our project scope and switched to a simulation-based project, we also learned how to simulate a robot’s performance.

Limitations

Since the simulation is only able to plot integer coordinates, at any given moment we can only see the bot at an integer coordinate. In reality, each calculation from the bot’s drive system outputs a decimal value, which has to be floored to an integer for plotting (a small illustration follows). The bot also has to stop at each waypoint before leaving for the next one. This is inefficient and would have to be resolved in a physical implementation.
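
The snippet below shows roughly what that flooring step looks like; the pose values are made up for illustration.

    # The drive system produces continuous coordinates, but the plot can only show
    # integer grid cells, so the pose is floored before drawing (values are made up).
    import math

    true_pose = (42.73, 17.28)                    # continuous output of the drive model
    plotted_cell = (math.floor(true_pose[0]),     # what the simulation can actually show
                    math.floor(true_pose[1]))
    print(true_pose, "->", plotted_cell)          # (42.73, 17.28) -> (42, 17)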

Future Works

  • Actual physical robot implementation
  • Adding features – such as patrolling, autonomous sleeping when idle, delivery scheduling, etc.
  • Making the bot more intelligent

With that we come to the end of the final blog update. We had a ton of fun while developing this system, and we learnt a lot about the challenges of building an autonomous system using indoor navigation methods. The thing that challenged us the most was differentiating between the information the bot is ‘allowed’ to know and the information that we, as ‘validation sources’, know. In the real world this is simple, as the bot is a separate physical entity; because this was a simulation, it was a mighty challenge.

Nevertheless, we managed to achieve almost all of the initially set objectives. We are also attaching the video of our final presentation as it has a comprehensive summary of our entire work flow and the final demo. Hope you enjoy!

Thank you.
