Mini project – Putting everything together
Due: 2019.11.29 End of day
G. Fainekos – Intro to Robotics – Fall 2019
The goal of the mini-project is to put together several of the algorithms we have done in class in order to
navigate a robot from its initial position to a goal position. We will assume a differential drive robot
mounted with a range sensor (either a Lidar or a Kinect). A typical such robotic system is shown in Fig. 1.
Figure 1. Left: TurtleBot with a Kinect sensor. Right: Simulated TurtleBot in Gazebo.
The mini-project has two required components plus a bonus:
1. Navigate the robot with accurate global position information (Easy task)
2. Navigate the robot without a global positioning sensor (Harder task)
3. Bonus: Run Part 2 using the TurtleBot simulator in Gazebo
Part I (70pt)
In this part, you are given an environment as in Fig. 2. Your goal is to construct a path for the robot in its free workspace and then follow that path with your favorite motion controller. The easiest way to construct a path is to use a PRM; however, any other path planning method can be used (you will have to implement a method within the template class for Part 1). Similarly, any motion control algorithm can be used. Matlab provides the control algorithm robotics.PurePursuit that you can use. Figure 2 shows the path returned by the PRM and the resulting robot trajectory using the Pure Pursuit control algorithm.
Figure 2. Left: The simple environment, the PRM path and final position of the robot over the goal position. Right: The resulting
robot trajectory.
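For orientation, a minimal Part I skeleton using these two built-in classes could look like the sketch below (R2019a-era API; here "map" is assumed to be a robotics.BinaryOccupancyGrid, "start"/"goal" are [x y] row vectors, and the node count and controller settings are illustrative, not required values):

    % Sketch: PRM planning followed by Pure Pursuit tracking.
    prm = robotics.PRM(map, 200);            % roadmap with 200 random nodes
    prm.ConnectionDistance = 2;              % max edge length (meters)
    waypoints = findpath(prm, start, goal);  % N-by-2 path; empty if none found

    controller = robotics.PurePursuit;
    controller.Waypoints = waypoints;
    controller.DesiredLinearVelocity = 0.3;  % m/s
    controller.LookaheadDistance = 0.5;      % m
    % Inside the control loop, with pose = [x y theta]:
    [v, omega] = controller(pose);           % velocity commands for the robot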
Warning: When the robot follows the path, there is no guarantee that the path will not pass too close to the obstacles. Hence, the robot might collide with the obstacles as in Fig. 3. It is advised that you use the range sensor to detect and avoid obstacles, and/or modify the workspace before generating the graph and computing a path, as in Fig. 3.
Figure 3. Left: Robot collides with an obstacle while following the shortest path on the PRM graph. Right: A PRM graph in the
modified robot workspace.
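One common way to realize the second option is to inflate the obstacles by the robot radius before building the roadmap, so that every PRM edge keeps clearance from the walls. A sketch, assuming "map" is a robotics.BinaryOccupancyGrid (the 0.3 m radius is a guess matching the clearance assumption below):

    % Sketch: inflate obstacles before planning, plan on the inflated map.
    inflatedMap = copy(map);    % keep the original map for simulation/sensing
    inflate(inflatedMap, 0.3);  % grow every obstacle by 0.3 m
    prm = robotics.PRM(inflatedMap, 200);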
Assumptions
1. No start or goal point will be within distance 0.3 from the obstacle boundaries.
Part II (30pt)
Unfortunately, for indoor, underground, and extraterrestrial robotic applications, we typically do not have a global positioning system. But we may have an accurate map, and we may know an estimate of our initial position. Therefore, for this part of the mini project you will use the map to localize the robot using the range sensor measurements. There are two parts: A. (25pt) you have an estimate of your initial position; B. (5pt) you do not know anything about your initial position.
You can decide to use some of the built-in Matlab classes:
1. robotics.MonteCarloLocalization (See also Monte Carlo Localization Algorithm at Matlab Documentation > Robotics System Toolbox > Ground Vehicle Algorithms). This class specifically addresses the localization problem for ground vehicles (differential drive robots) using a binary occupancy grid map.
2. robotics.ParticleFilter (See also Particle Filter Workflow and Track a Car-Like Robot Using
Particle Filter at Matlab Documentation > Robotics System Toolbox > Ground Vehicle Algorithms).
This class gives you the components to design your own particle filter.
Of course, you can also decide not to use any of the built-in classes and just implement the pseudocode presented in class.
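If you use the built-in classes, a minimal setup might look like the following sketch (R2019a-era API; "map" is the known occupancy grid, and "poseEstimate", "odomPose", "ranges", and "angles" are hypothetical variables supplied by your control loop):

    % Sketch: Monte Carlo Localization with a likelihood field sensor model.
    sensorModel = robotics.LikelihoodFieldSensorModel;
    sensorModel.Map = map;                       % the known map
    sensorModel.RandomMeasurementWeight = 0;     % per Assumption 2a below
    sensorModel.ExpectedMeasurementWeight = 1;   % per Assumption 2a below

    mcl = robotics.MonteCarloLocalization;
    mcl.SensorModel = sensorModel;
    mcl.MotionModel = robotics.OdometryMotionModel; % noise: see Notes below
    mcl.InitialPose = poseEstimate;              % Part II.A: rough estimate
    mcl.GlobalLocalization = false;              % set true for Part II.B

    % In the control loop, feed the odometry pose and the range readings:
    [isUpdated, estPose, estCov] = mcl(odomPose, ranges, angles);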
For this part of the project, you will also have to use an odometry model to try to predict your future
position based on the control inputs that you compute. You will have to use the class
DiffDriveModelEulerSimulator provided with the mini project package.
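For reference, the Euler step that such an odometry model typically implements is just the differential drive kinematics; the sketch below shows the idea (the actual interface of the provided DiffDriveModelEulerSimulator may differ):

    % One Euler step of the differential drive kinematics.
    % pose = [x y theta], v = linear velocity, omega = angular velocity.
    function pose = odomStep(pose, v, omega, dt)
        pose(1) = pose(1) + v * cos(pose(3)) * dt;  % x
        pose(2) = pose(2) + v * sin(pose(3)) * dt;  % y
        pose(3) = pose(3) + omega * dt;             % heading
    end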
Figure 4. Left: Robot trajectory; Center: Robot trajectory as predicted by the MCL; Right: PRM path after localization.
Assumptions
1. No start or goal point will be within distance 0.3 from the obstacle boundaries.
2. Keep in mind that the lidar takes noisy measurements, but it does not produce erroneous measurements (e.g., phantom objects or entirely missed surfaces).
a. In the terminology of the robotics.LikelihoodFieldSensorModel class this
means that the probability of random measurements is zero, while the probability of
expected measurements is 1.
Notes
1. Even though the odometry model (DiffDriveModelEulerSimulator) and the motion model (DiffDriveRobotSimulator) implement the same dynamics, this does not mean that the particle propagation through the odometry model should be noiseless. That is, if you are using the Monte Carlo Localization (robotics.MonteCarloLocalization) algorithm, the built-in odometry model object (robotics.OdometryMotionModel) must account for noise (property Noise). If you do not, then the particles will quickly converge to a single point, thus degrading the performance of the filter.
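For instance (the noise magnitudes below are illustrative assumptions, not tuned values):

    % Sketch: a nonzero Noise vector keeps the particle cloud spread out.
    motionModel = robotics.OdometryMotionModel;
    motionModel.Noise = [0.2 0.2 0.2 0.2];  % rotational/translational error weights
    mcl.MotionModel = motionModel;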
Bonus Part [5pt]
Use ROS and connect to Gazebo as described in
https://www.mathworks.com/support/product/robotics/v3-installation-instructions.html
or in the Matlab documentation: Localize TurtleBot Using Monte Carlo Localization (at Matlab Examples
> Robotics System Toolbox > Ground Vehicle Algorithms).
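At its core, the MATLAB side of the connection reduces to a few calls, sketched below (the master URI and topic names are placeholders that depend on your Gazebo VM and TurtleBot version):

    % Sketch: connect MATLAB to the TurtleBot Gazebo simulation over ROS.
    rosinit('http://192.168.178.128:11311');   % master URI of the Gazebo VM
    laserSub = rossubscriber('/scan');         % range sensor topic
    [velPub, velMsg] = rospublisher('/mobile_base/commands/velocity');
    scan = receive(laserSub, 3);               % wait up to 3 s for a scan
    velMsg.Linear.X = 0.2;                     % forward velocity command
    send(velPub, velMsg);
    rosshutdown;                               % disconnect when done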
Files and templates provided
The following files are provided:
1. project_init_script_part1.m : It initializes the data for Part I and runs the
simulation.
2. ProjectControlLoopPart1.m : The template class for creating your solution. The current file implements a go-to-point algorithm for a differential drive robot (a minimal sketch of such a controller appears after this list). It does not check for collisions.
3. project_init_script_part2.m : It initializes the data for Part II and runs the
simulation.
4. ProjectControlLoopPart2.m : The template class for creating your solution. The current file implements a go-to-point algorithm for a differential drive robot. It does not check for collisions, and it does not try to localize the robot.
5. exampleMaps.mat : Data file with sample maps
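For reference, a go-to-point controller of the kind implemented in the two templates can be as simple as the following sketch (the gains are illustrative; the actual template code may differ):

    % Proportional go-to-point controller for a differential drive robot.
    % pose = [x y theta], goal = [x y]; returns linear and angular velocity.
    function [v, omega] = goToPoint(pose, goal)
        dx = goal(1) - pose(1);
        dy = goal(2) - pose(2);
        e = atan2(dy, dx) - pose(3);        % heading error
        e = atan2(sin(e), cos(e));          % wrap to [-pi, pi]
        v = min(0.5, 0.8 * hypot(dx, dy));  % slow down near the goal
        omega = 2.0 * e;                    % turn toward the goal
    end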
Other files:
1. DiffDriveModelEulerSimulator.m : A simple class for simulating a differential drive robot. It can be used within ProjectControlLoopPart2 in Part II for computing odometry (the predicted future robot position if you only knew the inputs to the robot).
2. DiffDriveRobotSimulator.m : A class for simulating a differential drive robot with a
range sensor.
3. DiffDriveSimRangeSensor.m : A class simulating a range sensor used in the class
DiffDriveRobotSimulator.m.
Grading
Part I: 70pt; Out of 100% for the 70pt:
• 10% Coding style
• If you reach the goal and stop in the goal neighborhood, you receive all points.
Penalties:
• If you do not reach the goal, you are penalized according to how far you were from the goal at the end of the simulation.
• A collision results in a penalty of 5 pt. After a collision, the robot is reinitialized with a different set of random initial and goal positions. The penalty accumulates with every collision.
Part II: 30pt; Out of 100% for the 30pt:
• 10% Coding style
• Since the performance of the algorithm is stochastic, you only need to reach the goal once out of 5 attempts. Otherwise, you are scored by how close you got to the goal.
Deliverable
• Submit a zip file called miniproject_ASUID.zip with the following:
o ProjectControlLoopPart1.txt (just change .m to .txt)
o ProjectControlLoopPart2.txt (just change .m to .txt)
o TurtleBotControlLoop.txt (submit a script which can control the TurtleBot – no restrictions, but you can use as a base the code in Localize TurtleBot Using Monte Carlo Localization)
