CSE 5523. Homework 2
Problem 1. Run a linear SVM on the two class dataset given online (you
can use a standard toolbox). Compare its performance to that of the least
squares linear classifier.
Instructions: download train79.mat (and test79.mat), which contain images of digits. Each image is a 28×28 matrix of grayscale pixel values, stored as a row vector of length 784 (= 28 × 28). You are given 1000 images of the digit 7 and 1000 images of the digit 9. These are stored as a single 2000×784 matrix in the file 79.mat; the first 1000 rows are sevens, the rest are nines. Download that file and type "load 79.mat" in Matlab. The matrix d79 contains the data. You can visualize a digit by typing, e.g., the following:
x = reshape(d79(1234,:), 28, 28);
y = x(:, 28:-1:1);
imagesc(y); colormap(gray);
This bit of code displays the digit in row 1234 (which is a 9, since it lies past the first 1000 rows).
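For the least-squares baseline in Problem 1, the core computation can be sketched as follows. This is a Python/NumPy stand-in for the Matlab workflow; the synthetic Gaussian blobs below are only a placeholder for the real d79 matrix, and the ±1 label encoding is one common convention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 2000x784 digit matrix: two Gaussian blobs.
n_per_class = 100
X7 = rng.normal(loc=-1.0, scale=1.0, size=(n_per_class, 20))
X9 = rng.normal(loc=+1.0, scale=1.0, size=(n_per_class, 20))
X = np.vstack([X7, X9])
y = np.concatenate([-np.ones(n_per_class), np.ones(n_per_class)])  # 7 -> -1, 9 -> +1

# Least-squares linear classifier: append a bias column, solve w = pinv(X) y.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.linalg.pinv(Xb) @ y

# Classify by the sign of the linear score.
pred = np.sign(Xb @ w)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.3f}")
```

An SVM (from a toolbox, as the problem allows) would be fit to the same X, y and compared on held-out test accuracy.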
Problem 2.
Implement the least-squares classifier using gradient descent (do not use standard toolboxes). Compare your results to the standard least-squares classifier obtained using the pseudo-inverse.
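One useful sanity check for a gradient-descent implementation is that it converges to the pseudo-inverse solution. A minimal NumPy sketch (the synthetic data, step size, and iteration count are illustrative assumptions, not prescribed values):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

# Closed-form least-squares solution via the pseudo-inverse.
w_pinv = np.linalg.pinv(X) @ y

# Gradient descent on the mean squared loss L(w) = ||Xw - y||^2 / n.
w = np.zeros(10)
lr = 0.1
for _ in range(5000):
    grad = (2.0 / len(y)) * X.T @ (X @ w - y)
    w -= lr * grad

print("max |w_gd - w_pinv| =", np.max(np.abs(w - w_pinv)))
```

Since X has full column rank here, both procedures target the same unique minimizer, so the two weight vectors should agree to numerical precision.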
Problem 3. Reduce the dimension of the dataset (both train and test) to 400 using Principal Component Analysis (we have not discussed it yet, but you can use a standard toolbox). Apply linear regression and an SVM (using a large value of the parameter C) to 50, 100, 150, …, 2000 training examples (i.e., 25, 50, …, 1000 from each class; you can choose them at random). Plot the error on the test set. Observations?
Problem 4. Use gradient descent (instead of the explicit solution) for linear
regression in Problem 3. For 50, 200, 400, 1000 and 2000 training examples
plot the dependence of the test error on the number of iterations. What do
you observe?
Problem 5. Implement a kernel machine with a Gaussian kernel (choose the bandwidth by appropriate cross-validation). You can train it to have zero (square) loss. Specifically, construct a kernel matrix K (with Kij = k(xi, xj)) and find the coefficients by the formula α = K⁻¹y, where y is the (column) vector of labels. The final classifier has the form Σi αi k(xi, x). Apply it to the digit data and report the results.
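The construction above translates almost line for line into NumPy. In this sketch the bandwidth value and the toy data are placeholder choices; on the real data the bandwidth should come from cross-validation as stated:

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    # k(x, x') = exp(-||x - x'||^2 / (2 * bandwidth^2))
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 5))        # toy training points
y = np.sign(rng.normal(size=50))    # toy +-1 labels

# Kernel matrix K_ij = k(x_i, x_j); coefficients alpha = K^{-1} y.
K = gaussian_kernel(X, X, bandwidth=1.0)
alpha = np.linalg.solve(K, y)       # solve K alpha = y instead of inverting K

# Final classifier f(x) = sum_i alpha_i k(x_i, x). On the training
# points this interpolates the labels, i.e. zero squared loss.
f_train = K @ alpha
print("max training residual:", np.max(np.abs(f_train - y)))
```

For distinct inputs the Gaussian kernel matrix is positive definite, so the linear system has a unique solution and the training residual is (numerically) zero.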
Problem 6. Apply (1) decision trees, (2) bagged and (3) boosted decision
trees to the digit dataset. (You may use the standard libraries of Matlab or
download Matlab code from the Web.) Use appropriate cross-validation on
the training set. Compare performance.
Problem 7.
1. Implement PCA and apply it to the digit data, reducing the dimension
to two. Visualize the data after dimensionality reduction using colors for
different classes.
2. Produce pictures of "eigendigits" for the dataset combining both classes and for each class separately. Observations?
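Both parts of Problem 7 reduce to one eigendecomposition of the data covariance: the top-2 components give the 2-D coordinates for the scatter plot, and each component reshaped to 28×28 is an "eigendigit". A NumPy sketch (the random matrix is a placeholder for the digit data, and 10 eigendigits is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 784))          # placeholder for the digit matrix

# PCA via eigendecomposition of the sample covariance matrix.
mu = X.mean(axis=0)
Xc = X - mu
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)         # eigh returns ascending eigenvalues
order = np.argsort(vals)[::-1]           # reorder: largest variance first
vecs = vecs[:, order]

# 2-D coordinates for the scatter plot (color by class when plotting).
Z = Xc @ vecs[:, :2]

# "Eigendigits": leading principal components reshaped to 28x28 images.
eigendigits = vecs[:, :10].T.reshape(10, 28, 28)
print(Z.shape, eigendigits.shape)
```

Running the same code on the sevens alone and the nines alone gives the per-class eigendigits.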
Problem 8.
Apply k-means clustering to the digits data set for k = 2, 5, 10, 50. How well
does it identify the different digits? (Note that clustering is unsupervised –
how do you compare classification and clustering results?)
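One common answer to the comparison question is cluster purity: assign each cluster its majority true class, then score the resulting labeling like a classifier. A self-contained sketch with Lloyd's algorithm (pure NumPy; the two separated blobs stand in for the sevens and nines, and the purity measure is one choice among several, e.g. adjusted Rand index):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    # Lloyd's algorithm: alternate nearest-center assignment / mean update.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(5)
# Two well-separated blobs standing in for the two digit classes.
X = np.vstack([rng.normal(-3, 1, (100, 10)), rng.normal(3, 1, (100, 10))])
true = np.concatenate([np.zeros(100, int), np.ones(100, int)])

labels = kmeans(X, k=2)

# Purity: give each cluster its majority true class, then score accuracy.
purity = sum(np.bincount(true[labels == j]).max()
             for j in np.unique(labels)) / len(X)
print(f"purity: {purity:.3f}")
```

For k > 2 the same purity computation still applies: several clusters may map to the same digit, which is exactly how larger k can "identify" sub-styles within a digit.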