Problem Set 2 introduces the basic building blocks of image processing. The key skills we want you to demonstrate are loading and manipulating images, producing useful output from images, and understanding the structural and semantic aspects of what makes an image. For this and future assignments, we will give you a general description of the problem.
It is up to you to think through and implement a solution using what you have learned from the lectures and readings. You will also be expected to write a report on your approach and lessons learned.
- Use Hough tools to search for and find lines and circles in an image.
- Use the results from the Hough algorithms to identify basic shapes.
- Use template matching to identify shapes.
- Understand the Fourier Transform and its applications to images.
- Address the presence of distortion / noise in an image.
Methods to be used
You may use image processing functions to find color channels and load images. Don’t forget that these functions take a variety of parameters, and you may need to experiment with them. Certain functions may not be allowed; they are specified in the assignment’s autograder Ed post.
Refer to this problem set’s autograder post for a list of banned function calls.
Please do not use absolute paths in your submission code. All paths should be relative to the submission directory. Any submission containing absolute paths risks receiving a penalty!
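A quick sanity check of what this means in practice; the folder name `input_images` here is an assumption, use whatever layout the starter code actually provides:

```python
import os

# Build the path relative to the submission directory; "input_images" is an
# assumed folder name -- adapt it to the starter code's actual layout.
img_path = os.path.join("input_images", "scene_tl_1.png")

# A path like "/home/me/ps2/scene_tl_1.png" would fail this check.
assert not os.path.isabs(img_path)
```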
Obtaining the Starter Files:
Obtain the starter code from Canvas under Files.
Your main programming task is to complete the API described in the file ps2.py. The driver program experiment.py illustrates the intended use and will output the files needed for the writeup. Additionally, there is a file ps2_test.py that you can use to test your implementation.
Create ps2_report.pdf – a PDF file that shows all your output for the problem set, including images labeled appropriately (by filename, e.g. ps2-1-a-1.png) so it is clear which section they are for, and the small number of written responses needed to answer some of the questions (as indicated). For a guide on how to showcase your results, please refer to the LaTeX template for PS2.
How to Submit
Two assignments have been created on Gradescope: PS2_report for the report, and PS2_code for the code, where you need to submit ps2.py and experiment.py.
1. HOUGH TRANSFORMS [10 POINTS]
1.a. Traffic Light
First, you are given a generic traffic light to detect in a scene. For the sake of the problem, assume that traffic lights appear as shown below: red, yellow, and green lights stacked vertically. You may also assume that the traffic light is not occluded.
Your goal is to determine the state of each traffic light and its position in a scene.
Position is measured from the center of the traffic light. Given the symmetry of this image, the position of the traffic light matches the center of the yellow circle.
Complete ps2.py so that traffic_light_detection returns the traffic light center coordinates (x, y), i.e. (col, row), and the color of the light that is activated (‘red’, ‘yellow’, or ‘green’). Read the function description for more details.
The traffic light scenes we test will be randomly generated, like the following pictures and the examples in the GitHub repo.
For the sake of simplicity, we are using a basic color scheme, but assume that the scene may contain objects and backgrounds of different colors [relevant for parts 2 and 3]. The shape of the traffic light will not change, nor will the size of the individual lights relative to the traffic light. The radius of the lights can reasonably be expected to be between 10 and 30 pixels. There will only be one traffic light per scene, but its size and location will be generated at random (that is, a traffic light could appear in the sky or in the road; no assumptions should be made about its logical position). While the traffic light will not be occluded, objects in the background may be.
Complete traffic_light_detection(img_in, radii_range).
Place the coordinates using cv2.putText before saving the output images. Input: scene_tl_1.png.
Output: ps2-1-a-1.png
1.b. Construction sign, one per scene [5 points]
Now that you have detected a basic traffific light, see if you can detect road signs. Below is the construction sign that you would see in the United States (apologies to those outside the United States).