Sat Mar 28 10:23:00 2026

gradient_descent_test():
  numpy version: 1.26.4
  python version: 3.10.12
  gradient_descent() seeks the minimizer of a function
  using the method of gradient descent.

gradient_descent_linear_test():
  gradient_descent_linear() approximates the solution of a linear
  problem A*x=b by minimizing ||Ax-b|| using gradient descent.

  Learning rate        = 0.02
  Stepsize tolerance   = 1e-06
  Maximum iterations   = 10000
  Number of iterations = 9999

  Estimated solution: [ 61.27030372 -39.05883784]
  Exact solution:     [ 61.272 -39.062]
  ||x-x_exact||   = 0.003588404159077953
  ||A*x-b||       = 2.7368885715508395
  ||A*x_exact-b|| = 2.73688879364872

gradient_descent_nonlinear_test():
  Seek a local minimizer of a scalar function quartic(x).
  The minimizer is probably in the interval [ -2.0, 2.0 ].
  Use a very simple version of the gradient descent method.

  Graphics saved as "quartic.png"

  it      x             f(x)          f'(x)

   0  -1.6          21.2672      -18.968
   1   0.41698125   19.781952     -1.7558345
   2   0.59065042   19.438596     -2.0767315
   3   0.79249415   19.069192     -1.3581648
   4   0.92198967   18.966951     -0.1059085
   5   0.93203348   18.96652       0.020890668
   6   0.93005236   18.966504     -0.0044759221
   7   0.93047683   18.966503      0.00094422104
   8   0.93038728   18.966503     -0.0001998583
   9   0.93040624   18.966503      4.2273081e-05
  10   0.93040223   18.966503     -8.9427395e-06
  11   0.93040307   18.966503      1.891749e-06
  12   0.9304029    18.966503     -4.0018366e-07
  13   0.93040293   18.966503      8.4655383e-08
  14   0.93040293   18.966503     -1.7908117e-08
  15   0.93040293   18.966503      3.7883074e-09
  16   0.93040293   18.966503     -8.0138385e-10
  17   0.93040293   18.966503      1.6952484e-10
  18   0.93040293   18.966503     -3.5861092e-11
  19   0.93040293   18.966503      7.5868201e-12
  20   0.93040293   18.966503     -1.6049384e-12
  21   0.93040293   18.966503      3.4017233e-13
  22   0.93040293   18.966503     -7.283063e-14
  23   0.93040293   18.966503      1.5099033e-14
  24   0.93040293   18.966503     -3.5527137e-15
  25   0.93040293   18.966503      8.8817842e-16
  26   0.93040293   18.966503      0
  27   0.93040293   18.966503      0
  28   0.93040293   18.966503      0
  29   0.93040293   18.966503      0
  30   0.93040293   18.966503      0
  31   0.93040293   18.966503      0
  32
       0.93040293   18.966503      0
  33   0.93040293   18.966503      0
  34   0.93040293   18.966503      0
  35   0.93040293   18.966503      0
  36   0.93040293   18.966503      0
  37   0.93040293   18.966503      0
  38   0.93040293   18.966503      0
  39   0.93040293   18.966503      0
  40   0.93040293   18.966503      0

  40 gradient descent steps were taken.

  Initial x = -1.6
    f(x)  = 21.2672
    f'(x) = -18.968000000000007
  Final x   = 0.9304029265558517
    f(x)  = 18.9665029834295
    f'(x) = 0.0

  Graphics saved as "quartic_minimizer.png"

gradient_descent_vector_x_test():
  Seek minimizer (x,y) of a scalar function z(x,y).

  Initial x,y = [1. 1.5]
    f(x,y)  = 4.866666666666667
    f'(x,y) = [2.3 4. ]
  Final x,y   = [ 0.00634006 -0.01481108]
    f(x,y)  = 0.0002058559054571092
    f'(x,y) = [ 0.0105481  -0.02328209]

gradient_descent_vector_f_test():
  Seek minimizer of vector function f(x).

  it     ||x||         ||f(x)||      ||J(x)||

   0   0            58.456136     20.322401
   1   0.2095833    23.306394     20.230019
   2   0.33544226   10.61703      20.224214
   3   0.41112011    6.0130914    20.22475
   4   0.45672694    4.3199587    20.225238
   5   0.48433465    3.6747938    20.226044
   6   0.50118626    3.4069799    20.22701
   7   0.51162759    3.2750993    20.228166
   8   0.51826553    3.1922467    20.229501
   9   0.52266348    3.1271417    20.231015
  10   0.52575865    3.0685298    20.232705
  11   0.52811252    3.0123663    20.23457
  12   0.5300612     2.9572029    20.236609
  13   0.53180555    2.9025265    20.238819
  14   0.53346532    2.8481614    20.241199
  15   0.53511159    2.7940537    20.243749
  16   0.53678623    2.7401943    20.246466
  17   0.53851357    2.6865905    20.249349
  18   0.54030737    2.6332562    20.252396
  19   0.54217505    2.5802079    20.255605
  20   0.54412017    2.5274639    20.258975
  21   0.54614395    2.4750431    20.262502
  22   0.54824619    2.4229654    20.266185
  23   0.55042577    2.3712511    20.270021
  24   0.552681      2.3199211    20.274008
  25   0.55500985    2.2689967    20.278142
  26   0.55741       2.2184997    20.282421
  27   0.55987895    2.168452     20.286841
  28   0.5624141     2.1188757    20.291399
  29   0.56501271    2.0697931    20.296091
  30   0.56767197    2.0212267    20.300915
  31   0.570389      1.9731988    20.305865
  32   0.57316086    1.9257314    20.310939
  33   0.57598455    1.8788467    20.316131
  34   0.57885703    1.8325664    20.321438
  35   0.58177523    1.7869117    20.326854
  36   0.58473605
                     1.7419035    20.332377
  37   0.58773636    1.6975623    20.338
  38   0.59077303    1.6539076    20.343718
  39   0.59384289    1.6109585    20.349528
  40   0.59694281    1.5687333    20.355423
  41   0.60006961    1.5272492    20.361399
  42   0.60322018    1.4865227    20.36745
  43   0.60639137    1.4465692    20.373571
  44   0.6095801     1.4074032    20.379756
  45   0.61278328    1.3690378    20.386
  46   0.61599788    1.3314853    20.392297
  47   0.6192209     1.2947564    20.398641
  48   0.62244939    1.2588609    20.405028
  49   0.62568044    1.2238071    20.411451
  50   0.62891121    1.1896019    20.417904
  51   0.63213891    1.1562513    20.424382
  52   0.63536084    1.1237594    20.43088
  53   0.63857433    1.0921293    20.437392
  54   0.64177682    1.0613626    20.443912
  55   0.64496582    1.0314597    20.450435
  56   0.6481389     1.0024195    20.456955
  57   0.65129374    0.97423974   20.463467
  58   0.65442811    0.94691668   20.469967
  59   0.65753984    0.92044546   20.476448
  60   0.66062689    0.89481994   20.482906
  61   0.66368728    0.87003284   20.489336
  62   0.66671916    0.84607569   20.495734
  63   0.66972073    0.82293897   20.502094
  64   0.67269034    0.80061211   20.508413
  65   0.67562641    0.77908359   20.514686
  66   0.67852745    0.75834094   20.520909
  67   0.68139209    0.73837086   20.527079
  68   0.68421904    0.71915928   20.53319
  69   0.68700712    0.70069136   20.539241
  70   0.68975523    0.68295165   20.545227
  71   0.69246239    0.6659241    20.551146
  72   0.69512769    0.64959213   20.556994
  73   0.69775032    0.63393872   20.562769
  74   0.70032955    0.61894646   20.568468
  75   0.70286476    0.60459761   20.574089
  76   0.7053554     0.59087418   20.579629
  77   0.707801      0.57775798   20.585086
  78   0.71020118    0.56523068   20.590459
  79   0.71255564    0.55327386   20.595746
  80   0.71486413    0.5418691    20.600946
  81   0.7171265     0.53099798   20.606056
  82   0.71934267    0.52064216   20.611077
  83   0.7215126     0.51078342   20.616007

  Initial x = (0,0,0)
    ||f(x)|| = 58.456135561607546
    ||J(x)|| = 20.322401432901575
  Final x   = (0.496451,0.001604,-0.52356)
    ||f(x)|| = 0.5107834195360512
    ||J(x)|| = 20.616006885006687

gradient_descent_test():
  Normal end of execution.

Sat Mar 28 10:23:02 2026
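The tests above all use fixed-step gradient descent governed by three parameters: a learning rate, a step-size tolerance, and an iteration cap. As a minimal sketch of that scheme (not the test code itself), the loop below minimizes ||A*x-b|| by following the gradient 2*A'*(A*x-b); the matrix A and vector b here are illustrative choices of my own, not the data behind the numbers in the log:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.02, tol=1e-06, max_iter=10000):
    """Fixed-step gradient descent: x <- x - lr * grad(x).

    Stops when the step size ||lr * grad(x)|| drops below tol,
    or after max_iter iterations."""
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        step = lr * grad(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x, it + 1

# Illustrative least-squares problem: minimize f(x) = ||A x - b||^2,
# whose gradient is 2 A^T (A x - b).
A = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])
b = np.array([4.0, 5.0, 2.0])

x, n_iter = gradient_descent(lambda x: 2.0 * A.T @ (A @ x - b), np.zeros(2))

# Compare against the exact least-squares solution, which for this
# A and b is [1.3, 1.2].
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x, x_exact, n_iter)
```

A fixed learning rate only converges here because 0.02 is below 2/L, where L is the largest eigenvalue of 2*A'*A; a rate that is too large diverges, and one that is too small can exhaust the iteration cap before meeting the tolerance, which is presumably why the linear test above reports 9999 iterations against a cap of 10000.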