1 Project description
Imagine a warehouse in which a fleet of robots is designed to pick up packages from certain locations and deliver them to certain ports. They should deliver the packages as efficiently as
possible, and at the same time, must not bump into each other or into obstacles (no collisions).
In this project, you are going to design environment specific controllers for these robots.
There are three environments called small, medium and large – see the appendix. Each environment comes with two fixed goal positions, one for each robot.
For each environment, you need to come up with 2 policies (π1, π2), one for each robot. A policy for a robot is a function from states to actions, with the following definitions of states and actions:
- A state is a tuple (l1, l2), where li is the current location of robot i.
- The actions are: up, down, right, left, nil, with their usual meaning.
We will test your policies on some random initial states, and measure their quality by the total number of actions the two robots take to arrive at their respective goal locations.
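To make the policy definition concrete, here is a minimal sketch of a policy for robot 1 written as a plain Python function. Note that the goal coordinates and the row/column orientation below are illustrative assumptions, not taken from the project skeleton: it greedily reduces Manhattan distance on an obstacle-free grid and ignores robot 2 entirely, so it is not collision-safe.

```python
# Illustrative sketch only: a policy for robot 1 that greedily moves
# toward a hypothetical goal, assuming (row, col) coordinates where
# "down" increases the row and "right" increases the column.
# It ignores obstacles and the other robot, so it is NOT collision-safe.

GOAL_1 = (5, 5)  # made-up goal position for robot 1

def pi1(state):
    """Map a joint state ((r1, c1), (r2, c2)) to an action for robot 1."""
    (r, c), _ = state  # robot 2's position is ignored in this sketch
    gr, gc = GOAL_1
    if r < gr:
        return "down"
    if r > gr:
        return "up"
    if c < gc:
        return "right"
    if c > gc:
        return "left"
    return "nil"  # already at the goal
```

A real submission would additionally consult the map for obstacles and the other robot's position to avoid collisions, for example by planning shortest paths that treat occupied cells as blocked.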
2 Project details
This project comes with a simulator and a code skeleton.
2.1 File Description
- game.py: provides the class Env which takes as input a map and a set of goals, and the class Game which takes as input a set of initial positions, a group of agents and an instantiated environment.
- base.py: provides several helper functions as well as the BaseAgent class which restricts the inputs of an agent to only include a name and an instantiated environment.
- run.py: parses the given commandline. You can also see how the agent will be tested in the main function.
- animator.py: visualizes the whole process; you do not have to look into this file.
- map/: provides three maps named small.map, medium.map, large.map. Your agent will only be tested on these three given maps (i.e., no hidden map will be used for evaluation). Each map file is also associated with a set of goals and a constant MAX_NUM_STEP representing the maximum number of steps allowed in this map.
- agent.py: a simple demo agent.
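As a rough illustration of how a custom agent might be structured, the sketch below subclasses a BaseAgent-style class. The interface shown here (the constructor arguments and the get_action method name) is an assumption for the sake of the example; consult base.py and the demo in agent.py for the actual class and the exact method the simulator calls. A stand-in BaseAgent is included only so the sketch is self-contained.

```python
# Sketch only: the real BaseAgent lives in base.py; this stand-in just
# mirrors the description "a name and an instantiated environment" so
# the example runs on its own.
class BaseAgent:
    def __init__(self, name, env):
        self.name = name
        self.env = env

class GreedyAgent(BaseAgent):
    """Illustrative agent: moves greedily toward a fixed goal,
    ignoring obstacles and the other robot (not collision-safe)."""

    def __init__(self, name, env, goal):
        super().__init__(name, env)
        self.goal = goal  # (row, col); assumes "down" increases the row

    def get_action(self, position):
        # Hypothetical hook: the real method name/signature is in base.py.
        r, c = position
        gr, gc = self.goal
        if r != gr:
            return "down" if r < gr else "up"
        if c != gc:
            return "right" if c < gc else "left"
        return "nil"
```

An environment-specific controller for this project would replace the greedy rule with logic tailored to each of the three maps.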
2.2 Environment Set-up
- Check that Anaconda has been installed on your PC: conda -V
- Create a virtual environment for this project with Python 3.8 and activate it: conda create -n mapf python=3.8 followed by conda activate mapf
- Install the dependency packages: pip install -r requirements.txt
2.3 Run the Code
- Activate the environment: conda activate mapf
- You are given a simple agent in agent.py. To run it: python run.py --agents p1 p2 --map empty --goals 5_5 1_5 --vis
This command launches two agents named 'p1' and 'p2' on the map named 'empty', with goals specified as (5,5) and (1,5) respectively. The --vis option turns on the visualizer. Note that the visualizer is quite slow for large maps, so you may want to turn off the --vis option there.