
# NUMERICAL OPTIMISATION ASSIGNMENT 3

Submit by 4pm on Wednesday 17/03.

Marta Betcke

## EXERCISE 1

(a) Implement the BFGS method by modifying the descentLineSearch function. More help is provided inside MATLAB Grader.

(b) Make your implementation efficient, i.e. avoid explicitly forming the inverse Hessian matrix H_k. Copy the code lines implementing the update of H_k into your report and briefly explain what makes your implementation efficient.

Submit your solution via TurnitIn. [5pt]
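The efficiency point in 1(b) is that the inverse-Hessian form of the BFGS formula needs only matrix-vector and outer products, never a factorisation or an explicit matrix inverse. A Python/NumPy sketch of one standard way to write the update (the assignment itself is in MATLAB; the function name here is mine):

```python
import numpy as np

def bfgs_update_inverse(H, s, y):
    """One BFGS update of the inverse Hessian approximation H_k:

        H_{k+1} = (I - rho s y^T) H_k (I - rho y s^T) + rho s s^T,
        rho = 1 / (y^T s),

    expanded so that only O(n^2) matrix-vector and outer products appear;
    no linear system is solved and no matrix is ever inverted.
    """
    rho = 1.0 / (y @ s)
    Hy = H @ y                                   # O(n^2) matrix-vector product
    return (H
            - rho * np.outer(Hy, s)              # -rho H y s^T
            - rho * np.outer(s, Hy)              # -rho s y^T H  (H symmetric)
            + (rho**2 * (y @ Hy) + rho) * np.outer(s, s))
```

A quick correctness check is the secant equation: the updated matrix satisfies H_{k+1} y_k = s_k by construction.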

## EXERCISE 2 [DEMO]

(a) Implement the SR-1 method by modifying the trustRegion function. More help is provided inside MATLAB Grader. Note: here you are not expected to provide an efficient implementation, as it would require changes to solverCM2dSubspaceExt which are out of scope at this point. [0pt]

(b) Apply your implementation of the SR-1 method to minimise the Rosenbrock function

    f(x, y) = 100(y − x²)² + (1 − x)²,

    starting from the initial points x0 = (1.2, 1.2)ᵀ and x0 = (−1.2, 1)ᵀ. Choose a relatively small stopping tolerance tol = 1e-12. Motivate your choices of the relevant parameters: the trust region radius ∆ and the step acceptance threshold η. Visualise the iterates over the function contours. [0pt]

(c) Analyse the quality and definiteness of the Hessian approximations computed by SR-1 in (b). Hint: compare the approximation to the corresponding true Hessian.

(d) Investigate convergence of the SR-1 iterates in (b) a posteriori and include relevant error plots. [0pt] What are the empirical convergence rates and how did you obtain them? Do they agree with the theoretical predictions? Paraphrase the relevant theoretical result. [0pt]
(e) Can global convergence of SR-1 be expected or not, and of what type, and why? Paraphrase the relevant theoretical result. [0pt]
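For reference, the rank-one update behind 2(a) and the definiteness question in 2(c) can be sketched as follows. Python/NumPy is used purely for illustration (the assignment code is MATLAB), and the skipping safeguard follows the standard treatment in Nocedal & Wright:

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """One SR-1 update of the Hessian approximation B_k:

        B_{k+1} = B_k + (y - B_k s)(y - B_k s)^T / ((y - B_k s)^T s).

    The update is skipped when the denominator is tiny relative to
    ||s|| * ||y - B_k s||, the usual safeguard keeping it well defined.
    """
    v = y - B @ s
    denom = v @ s
    if abs(denom) <= r * np.linalg.norm(s) * np.linalg.norm(v):
        return B                                 # skip the update
    return B + np.outer(v, v) / denom

def is_positive_definite(B):
    """Unlike BFGS, SR-1 iterates need not stay positive definite;
    check via the eigenvalues of the symmetric matrix B."""
    return bool(np.all(np.linalg.eigvalsh(B) > 0))
```

Like BFGS, an accepted SR-1 update satisfies the secant equation B_{k+1} s_k = y_k, which gives a quick sanity check.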

## EXERCISE 3

(a) Minimise the function

    f(x, y) = (x − 3y)² + x⁴

    using the BFGS (Ex 1a) and SR-1 (Ex 2a) methods starting from x0 = (10, 10)ᵀ. Visualise the paths traced by the iterates over the function contours. Which line search did you choose and why? Briefly motivate your choices of the relevant trust region and line search parameters. [5+5pt]

(b) Investigate convergence of the BFGS iterates in (a) a posteriori and include a relevant error plot. [5pt] What is the empirical convergence rate and how did you obtain it? Does it agree with the theoretical prediction? Paraphrase the relevant theoretical result. [5pt]

(c) Can global convergence of BFGS be expected or not, and of what type, and why? Paraphrase the relevant theoretical result. [5pt]

(d) Investigate convergence of the SR-1 iterates in (a) a posteriori and include a relevant error plot. [5pt] What is the empirical convergence rate and how did you obtain it? Does it agree with the theoretical prediction? Paraphrase the relevant theoretical results. [5pt]
(e) Both implementations have an option (which is active) to return the sequence of (inverse) Hessian approximations as a field of the info structure:

    (i) {H_k^BFGS}_{k≥0} when using BFGS,

    (ii) {B_k^SR1}_{k≥0} when using SR-1.

    Investigate the quality of the respective (inverse) Hessian approximations computed by BFGS and SR-1 in (a). In particular, plot

    (i) {‖I − H_k^BFGS ∇²f(x_k)‖₂}_{k≥0} and {‖I − (∇²f(x_k))⁻¹ B_k^SR1‖₂}_{k≥0},

    (ii) {‖B_k^SR1 − ∇²f(x_k)‖₂}_{k≥0} and {‖(H_k^BFGS)⁻¹ − ∇²f(x_k)‖₂}_{k≥0}.

    What do you observe? Do your observations agree with theoretical predictions? Paraphrase the relevant theoretical results. [10pt]

Submit your solution via TurnitIn.
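Parts (b), (d) and (e) all call for quantities that are easy to compute once the iterates and the info structure are available: an a-posteriori rate estimate (e.g. by fitting q in e_{k+1} ≈ C e_k^q on a log scale) and the four norm sequences above. A Python/NumPy sketch of both (function names are mine; the assignment itself is in MATLAB):

```python
import numpy as np

def empirical_rate(errors):
    """Estimate the convergence order q from errors e_k = ||x_k - x*||,
    assuming e_{k+1} ~ C * e_k^q, via a least-squares fit of
    log e_{k+1} = q * log e_k + log C."""
    e = np.asarray(errors, dtype=float)
    q, _ = np.polyfit(np.log(e[:-1]), np.log(e[1:]), 1)
    return q

def approximation_quality(Hs, Bs, hessians):
    """The four norm sequences from part (e): Hs are BFGS inverse-Hessian
    approximations, Bs are SR-1 Hessian approximations, hessians are the
    true Hessians at the iterates (all lists of square arrays)."""
    I = np.eye(hessians[0].shape[0])
    bfgs_rel = [np.linalg.norm(I - H @ G, 2) for H, G in zip(Hs, hessians)]
    sr1_rel = [np.linalg.norm(I - np.linalg.solve(G, B), 2)
               for B, G in zip(Bs, hessians)]
    sr1_abs = [np.linalg.norm(B - G, 2) for B, G in zip(Bs, hessians)]
    bfgs_abs = [np.linalg.norm(np.linalg.inv(H) - G, 2)
                for H, G in zip(Hs, hessians)]
    return bfgs_rel, sr1_rel, sr1_abs, bfgs_abs
```

On a linearly convergent error sequence the fit returns q ≈ 1, on a quadratically convergent one q ≈ 2; when the approximation equals the true (inverse) Hessian, all four norms vanish.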

## EXERCISE 4

Implement the Gauss-Newton method for the solution of nonlinear least squares problems. As Gauss-Newton is a line search method, it is easiest to implement by modifying the descentLineSearch function. More help is provided in MATLAB Grader.
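The Gauss-Newton direction is the Newton step with the Hessian of the least-squares objective approximated by JᵀJ. A minimal Python/NumPy sketch of the direction computation (the function name is mine; the assignment code is MATLAB):

```python
import numpy as np

def gauss_newton_step(J, r):
    """Gauss-Newton direction for residuals r with Jacobian J: the solution of

        (J^T J) p = -J^T r,

    i.e. the Newton step with the Hessian approximated by J^T J (the
    second-order residual terms are dropped). Computed here as the
    least-squares solution of J p ~ -r, which is better conditioned
    than forming J^T J explicitly.
    """
    p, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return p
```

A line-search loop in the style of descentLineSearch would then choose a step length along p at each iterate, which is why modifying that function is the natural route.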

## EXERCISE 5 [DEMO]

Implement the Levenberg-Marquardt method using the trustRegion shell. As the solver for the constrained quadratic subproblem, solverCMLM, adapt Algorithm 4.3 in Nocedal & Wright (which, in contrast to the dogleg or 2d subspace methods, is not an approximate but an exact method). In particular, pay attention to how the problem simplifies in the Levenberg-Marquardt case. [0pt]
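The simplification alluded to is that the Levenberg-Marquardt model Hessian JᵀJ is positive semidefinite, so JᵀJ + λI is positive definite for every λ > 0; the "hard case" of the general trust-region subproblem cannot occur, and Algorithm 4.3 reduces to a one-dimensional Newton iteration on φ(λ) = 1/‖p(λ)‖ − 1/∆. A Python/NumPy sketch under these assumptions (the function name and signature are mine, not the Grader's solverCMLM interface):

```python
import numpy as np

def solver_cm_lm(J, r, delta, lam0=1.0, tol=1e-10, maxit=50):
    """Exact trust-region subproblem solver for Levenberg-Marquardt:

        min_p  0.5 * ||J p + r||^2   s.t.  ||p|| <= delta.

    Newton iteration on phi(lam) = 1/||p(lam)|| - 1/delta, in the
    Cholesky-based form of Algorithm 4.3 in Nocedal & Wright.
    """
    A = J.T @ J
    g = J.T @ r
    n = A.shape[0]
    # Unconstrained (Gauss-Newton) minimiser; accept it if inside the region.
    p = np.linalg.lstsq(J, -r, rcond=None)[0]
    if np.linalg.norm(p) <= delta:
        return p
    lam = lam0
    for _ in range(maxit):
        L = np.linalg.cholesky(A + lam * np.eye(n))   # PD for every lam > 0
        p = np.linalg.solve(L.T, np.linalg.solve(L, -g))
        q = np.linalg.solve(L, p)                     # L q = p
        pn = np.linalg.norm(p)
        # Newton update for lam (Nocedal & Wright, Algorithm 4.3).
        lam += (pn / np.linalg.norm(q))**2 * (pn - delta) / delta
        lam = max(lam, 1e-12)                         # keep A + lam*I PD
        if abs(pn - delta) <= tol * delta:
            break
    return p
```

For a boundary solution the returned p satisfies ‖p‖ = ∆; e.g. with J = I and r = (3, 4)ᵀ the unconstrained step has norm 5, and the iteration converges to λ = 4 with p = −r/5.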

## EXERCISE 6

Consider the following model

φ(x1, x2, x3; t) = (x1 + x2 t²) exp(−x3 t)

with parameters (x1, x2, x3). Simulate the noisy measurements by sampling this model for a fixed choice of parameters (x1, x2, x3) = (3, 150, 2) at 200 equispaced points ti ∈ (0, 4], and then adding Gaussian noise n(ti) ∼ N(0, σ²) drawn from a normal distribution with zero mean and standard deviation equal to 5% of the maximal amplitude of the sampled model signal, i.e. σ = 0.05 max_{ti} |φ(ti)|:

φ̃(x1, x2, x3; ti) = φ(x1, x2, x3; ti) + n(ti).
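One possible simulation along these lines, sketched in Python/NumPy for illustration (the assignment is in MATLAB; the seed and the exact grid are my choices — any 200-point equispaced grid in (0, 4] works):

```python
import numpy as np

def phi(x, t):
    """Model phi(x1, x2, x3; t) = (x1 + x2 t^2) exp(-x3 t)."""
    x1, x2, x3 = x
    return (x1 + x2 * t**2) * np.exp(-x3 * t)

rng = np.random.default_rng(0)            # seed fixed only for reproducibility
t = np.linspace(0.02, 4.0, 200)           # 200 equispaced points in (0, 4]
x_true = np.array([3.0, 150.0, 2.0])
clean = phi(x_true, t)
sigma = 0.05 * np.max(np.abs(clean))      # 5% of the maximal amplitude
data = clean + rng.normal(0.0, sigma, size=t.shape)
```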

(a) Formulate the least-squares problem for fitting the model φ and derive its Jacobian. [5pt]

(b) Estimate the parameters (x1, x2, x3) from your simulated measurements using

    (i) the Gauss-Newton (GN) method (implemented in Ex 4);

    (ii) the Levenberg-Marquardt (LM) method (implementation provided on Moodle).

    Visualise the fit by plotting the estimated signal versus the measurements. Briefly motivate your choices of all relevant parameters. [5+5pt]

(c) Investigate convergence of the GN and LM methods a posteriori and include relevant error plots. What are the empirical convergence rates and how did you obtain them? Do they agree with the theoretical predictions? Paraphrase the relevant theoretical results. Hint: evaluate the factor

    ‖[JᵀJ(x⋆)]⁻¹ H(x⋆)‖.

    [10pt]


(d) Can global convergence of the GN and LM methods be expected or not, and of what type, and why? Paraphrase the relevant theoretical results. [5+5pt]
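For 6(a), a common sanity check is to compare the hand-derived Jacobian against finite differences. The partials below follow from differentiating φ directly; the Python/NumPy sketch and function name are mine:

```python
import numpy as np

def jacobian(x, t):
    """Jacobian of the residuals r_i(x) = phi(x; t_i) - data_i with respect
    to (x1, x2, x3); the data term is constant, so this is the Jacobian of phi:

        dphi/dx1 = exp(-x3 t),
        dphi/dx2 = t^2 exp(-x3 t),
        dphi/dx3 = -t (x1 + x2 t^2) exp(-x3 t).
    """
    x1, x2, x3 = x
    e = np.exp(-x3 * t)
    return np.column_stack([e, t**2 * e, -t * (x1 + x2 * t**2) * e])
```

Comparing each column against a central finite difference of φ at the true parameters is a cheap way to catch sign or index errors before running GN or LM.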