08-Jan-2023 09:14:20

gradient_descent_test():
  MATLAB/Octave version 4.2.2
  gradient_descent() uses derivative information to iteratively estimate the minimizer of a function.

gradient_descent_data_fitting_test():
  gradient_descent_data_fitting() approximates the solution of a data fitting problem using gradient descent.
  Maximum iterations = 100
  Learning rate = 0.03
  Stepsize tolerance = 0.001

  c = 0.99727     0.64872     0.49408
  c = 0.189666   -0.158893    0.067324     ans = 1.2193
  c = 0.62841     0.27985     0.21979      ans = 0.63893
  c = 0.414580    0.066021    0.072505     ans = 0.33636
  c = 0.541339    0.192780    0.085023     ans = 0.17970
  c = 0.489303    0.140744    0.016862     ans = 0.10031
  c = 0.5301354   0.1815764  -0.0063354    ans = 0.062231
  c = 0.521786    0.173227   -0.050404     ans = 0.045623
  c = 0.538574    0.190015   -0.081032     ans = 0.038753
  c = 0.54166     0.19310    -0.11628      ans = 0.035514
  c = 0.55138     0.20282    -0.14681      ans = 0.033478
  c = 0.55711     0.20856    -0.17760      ans = 0.031847
  c = 0.56445     0.21589    -0.20615      ans = 0.030379
  c = 0.57048     0.22192    -0.23387      ans = 0.029004
  c = 0.57675     0.22819    -0.26011      ans = 0.027698
  c = 0.58248     0.23392    -0.28530      ans = 0.026452
  c = 0.58808     0.23952    -0.30928      ans = 0.025263
  c = 0.59336     0.24480    -0.33223      ans = 0.024128
  c = 0.59845     0.24989    -0.35412      ans = 0.023044
  c = 0.60328     0.25472    -0.37504      ans = 0.022009
  c = 0.60791     0.25935    -0.39502      ans = 0.021020
  c = 0.61232     0.26376    -0.41410      ans = 0.020075
  c = 0.61654     0.26798    -0.43232      ans = 0.019173
  c = 0.62056     0.27200    -0.44972      ans = 0.018312
  c = 0.62441     0.27585    -0.46634      ans = 0.017489
  c = 0.62808     0.27952    -0.48222      ans = 0.016703
  c = 0.63159     0.28303    -0.49738      ans = 0.015953
  c = 0.63494     0.28638    -0.51186      ans = 0.015236
  c = 0.63814     0.28958    -0.52569      ans = 0.014551
  c = 0.64120     0.29264    -0.53890      ans = 0.013897
  c = 0.64412     0.29556    -0.55151      ans = 0.013273
  c = 0.64691     0.29835    -0.56356      ans = 0.012677
  c = 0.64957     0.30101    -0.57507      ans = 0.012107
  c = 0.65211     0.30355    -0.58606      ans = 0.011563
  c = 0.65454     0.30598    -0.59655      ans = 0.011043
  c = 0.65686     0.30830    -0.60658      ans = 0.010547
  c = 0.65908     0.31052    -0.61615      ans = 0.010073
  c = 0.66119     0.31263    -0.62529      ans = 0.0096208
  c = 0.66321     0.31466    -0.63403      ans = 0.0091885
  c = 0.66514     0.31659    -0.64237      ans = 0.0087756
  c = 0.66699     0.31843    -0.65033      ans = 0.0083813
  c = 0.66875     0.32019    -0.65794      ans = 0.0080047
  c = 0.67043     0.32187    -0.66521      ans = 0.0076450
  c = 0.67204     0.32348    -0.67215      ans = 0.0073015
  c = 0.67357     0.32501    -0.67877      ans = 0.0069735
  c = 0.67503     0.32648    -0.68510      ans = 0.0066601
  c = 0.67643     0.32787    -0.69115      ans = 0.0063609
  c = 0.67777     0.32921    -0.69692      ans = 0.0060751
  c = 0.67905     0.33049    -0.70244      ans = 0.0058021
  c = 0.68026     0.33171    -0.70770      ans = 0.0055414
  c = 0.68143     0.33287    -0.71273      ans = 0.0052924
  c = 0.68254     0.33398    -0.71754      ans = 0.0050546
  c = 0.68360     0.33504    -0.72212      ans = 0.0048275
  c = 0.68462     0.33606    -0.72651      ans = 0.0046106
  c = 0.68558     0.33703    -0.73069      ans = 0.0044034
  c = 0.68651     0.33795    -0.73469      ans = 0.0042056
  c = 0.68739     0.33883    -0.73851      ans = 0.0040166
  c = 0.68824     0.33968    -0.74215      ans = 0.0038361
  c = 0.68904     0.34048    -0.74563      ans = 0.0036638
  c = 0.68981     0.34125    -0.74896      ans = 0.0034992
  c = 0.69055     0.34199    -0.75214      ans = 0.0033419
  c = 0.69125     0.34269    -0.75517      ans = 0.0031918
  c = 0.69192     0.34336    -0.75807      ans = 0.0030484
  c = 0.69256     0.34400    -0.76083      ans = 0.0029114
  c = 0.69317     0.34461    -0.76348      ans = 0.0027806
  c = 0.69376     0.34520    -0.76600      ans = 0.0026556
  c = 0.69431     0.34575    -0.76841      ans = 0.0025363
  c = 0.69485     0.34629    -0.77071      ans = 0.0024223
  c = 0.69535     0.34680    -0.77291      ans = 0.0023135
  c = 0.69584     0.34728    -0.77501      ans = 0.0022096
  c = 0.69630     0.34775    -0.77702      ans = 0.0021103
  c = 0.69675     0.34819    -0.77893      ans = 0.0020155
  c = 0.69717     0.34861    -0.78076      ans = 0.0019249
  c = 0.69758     0.34902    -0.78251      ans = 0.0018384
  c = 0.69796     0.34940    -0.78418      ans = 0.0017558
  c = 0.69833     0.34977    -0.78577      ans = 0.0016769
  c = 0.69868     0.35012    -0.78729      ans = 0.0016016
  c = 0.69902     0.35046    -0.78875      ans = 0.0015296
  c = 0.69934     0.35078    -0.79014      ans = 0.0014609
  c = 0.69965     0.35109    -0.79146      ans = 0.0013952
  c = 0.69994     0.35138    -0.79273      ans = 0.0013325
  c = 0.70022     0.35166    -0.79394      ans = 0.0012727
  c = 0.70049     0.35193    -0.79509      ans = 0.0012155
  c = 0.70074     0.35218    -0.79620      ans = 0.0011609
  c = 0.70099     0.35243    -0.79725      ans = 0.0011087
  c = 0.70122     0.35266    -0.79826      ans = 0.0010589
  c = 0.70144     0.35288    -0.79922      ans = 0.0010113
  c = 0.70165     0.35310    -0.80014      ans = 9.6588e-04

  Number of iterations = 87
  Estimated solution: 0.70165  0.35310  -0.80014
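
  The "ans" values printed above appear to be the norm of each parameter update (for example, ||c1 - c0|| = 1.2193), and the run stops once that stepsize falls below the 0.001 tolerance, here after 87 iterations. As a rough illustration of the same loop, the following Octave/MATLAB sketch fits an assumed quadratic model to synthetic data by gradient descent; the model, the data, and the analytic gradient are invented for the example and are not the problem solved by gradient_descent_data_fitting().

  % A minimal sketch (not the library routine): fit an assumed quadratic model
  % y ~ c(1) + c(2)*x + c(3)*x.^2 to synthetic data by gradient descent.
  x = linspace ( 0.0, 1.0, 21 )';
  y = 0.70 + 0.35 * x - 0.80 * x.^2 + 0.01 * randn ( 21, 1 );   % assumed data
  lr = 0.03;                  % learning rate, as in the log above
  tol = 0.001;                % stepsize tolerance
  itmax = 100;                % maximum iterations
  c = rand ( 3, 1 );          % random starting guess, like the first "c" above
  for it = 1 : itmax
    r = ( c(1) + c(2) * x + c(3) * x.^2 ) - y;              % residuals
    g = [ sum ( r ); sum ( r .* x ); sum ( r .* x.^2 ) ];   % gradient of 0.5*sum(r.^2)
    cnew = c - lr * g;                                      % gradient descent step
    step = norm ( cnew - c );                               % the quantity printed as "ans"
    c = cnew;
    if ( step < tol )
      break
    end
  end
  fprintf ( 1, '  %d iterations, c = %g  %g  %g\n', it, c(1), c(2), c(3) );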

gradient_descent_linear_test():
  gradient_descent_linear() approximates the solution of a least squares problem: it seeks an x that approximately satisfies A*x=b by minimizing ||A*x-b|| using gradient descent.
  Learning rate = 0.02
  Stepsize tolerance = 1e-06
  Maximum iterations = 10000

  Number of iterations = 10000
  Estimated solution: 61.270  -39.059
  Exact solution:     61.272  -39.062
  Error = 0.0035884
  Residual for estimated solution = 2.73689
  Residual for exact solution = 2.73689
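
  Minimizing ||A*x-b|| by gradient descent means stepping against the gradient of 0.5*||A*x-b||^2, which is A'*(A*x-b). The sketch below illustrates that update on a small made-up 4x2 system (not the system used by gradient_descent_linear()), while reusing the learning rate, tolerance, and iteration limit shown in the log:

  % A minimal sketch (not the library routine): solve a small least squares
  % problem min ||A*x-b|| by gradient descent and compare with backslash.
  A = [ 1 1; 1 2; 1 3; 1 4 ];   % assumed overdetermined system
  b = [ 6; 5; 7; 10 ];          % assumed right-hand side
  lr = 0.02;                    % learning rate, as in the log above
  tol = 1.0e-06;                % stepsize tolerance
  itmax = 10000;                % maximum iterations
  x = zeros ( 2, 1 );
  for it = 1 : itmax
    g = A' * ( A * x - b );     % gradient of 0.5*||A*x-b||^2
    xnew = x - lr * g;          % gradient descent step
    if ( norm ( xnew - x ) < tol )
      x = xnew;
      break
    end
    x = xnew;
  end
  x_exact = A \ b;              % exact least squares solution for comparison
  fprintf ( 1, '  %d iterations, error = %g, residual = %g\n', ...
    it, norm ( x - x_exact ), norm ( A * x - b ) );

  Because the stepsize is a fixed fraction of the gradient, convergence is geometric but slow, which is consistent with the 10000 iterations reported in the log above.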

gradient_descent_nonlinear_test():
  Seek local minimizer of a scalar function quartic(x).
  Minimizer is probably in the interval [-2,2].
  Use a very simple version of the gradient descent method.
  Graphics saved in "quartic.png"

  it    x              f(x)         f'(x)
   0   -1.5            19.625      -14
   1   -0.12625        19.810502     1.9939015
   2   -0.32375095     19.278963     3.3185368
   3   -0.64364068     18.042511     4.0159801
   4   -1.0059325      16.994351     0.90423354
   5   -1.0827668      16.976666    -0.49321323
   6   -1.0409012      16.973035     0.30488295
   7   -1.0667752      16.971318    -0.17779907
   8   -1.0516878      16.970804     0.10777215
   9   -1.0608327      16.970601    -0.063938188
  10   -1.0554073      16.970532     0.038442935
  11   -1.0586693      16.970507    -0.022934387
  12   -1.0567233      16.970498     0.013747163
  13   -1.0578897      16.970495    -0.0082171272
  14   -1.0571925      16.970493     0.0049199442
  15   -1.05761        16.970493    -0.0029428151
  16   -1.0573603      16.970493     0.0017612783
  17   -1.0575097      16.970493    -0.0010537468
  18   -1.0574203      16.970493     0.00063057739
  19   -1.0574738      16.970493    -0.0003772979
  20   -1.0574418      16.970493     0.00022576883
  21   -1.0574609      16.970493    -0.00013509008
  22   -1.0574495      16.970493     8.0834172e-05
  23   -1.0574563      16.970493    -4.8368132e-05
  24   -1.0574522      16.970493     2.894196e-05
  25   -1.0574547      16.970493    -1.7317851e-05
  26   -1.0574532      16.970493     1.036243e-05
  27   -1.0574541      16.970493    -6.2005221e-06
  28   -1.0574536      16.970493     3.7101843e-06
  29   -1.0574539      16.970493    -2.2200481e-06
  30   -1.0574537      16.970493     1.3284018e-06
  31   -1.0574538      16.970493    -7.9487062e-07
  32   -1.0574537      16.970493     4.756237e-07
  33   -1.0574538      16.970493    -2.8459711e-07
  34   -1.0574538      16.970493     1.7029329e-07
  35   -1.0574538      16.970493    -1.0189773e-07
  36   -1.0574538      16.970493     6.0972157e-08
  37   -1.0574538      16.970493    -3.6483675e-08
  38   -1.0574538      16.970493     2.1830596e-08
  39   -1.0574538      16.970493    -1.3062689e-08
  40   -1.0574538      16.970493     7.8162721e-09

  Initial x = -1.5, f(x) = 19.625, f'(x) = -14
  Final x = -1.05745, f(x) = 16.9705, f'(x) = 7.81627e-09
  Graphics saved in "quartic_minimizer.png"

gradient_descent_stochastic_test():
  Seek minimizer of vector function f(x).

  it  j  ||x||        ||f(x)||    ||J(x)||
   0  0  0            58.456136   20.322401
   1  2  0.002        58.454637   20.229928
   2  2  0.00150075   58.454543   20.224129
   3  3  0.20944489   23.362616   20.224129
   4  3  0.33510658   10.729522   20.224129
   5  2  0.33510716   10.729516   20.223773
   6  3  0.41050466   6.1816024   20.223773
   7  3  0.45574328   4.5443534   20.223773
   8  3  0.48288647   3.9549437   20.223773
   9  1  0.48294474   3.8987024   20.223863
  10  1  0.48311924   3.8425754   20.224134
  11  2  0.48311918   3.8425753   20.224112
  12  1  0.48340944   3.7865641   20.224563
  13  1  0.48381489   3.730671    20.225193
  14  1  0.48433485   3.6748996   20.226002
  15  2  0.48433492   3.6748994   20.226034
  16  1  0.48496863   3.6192543   20.227021
  17  1  0.4857152    3.5637418   20.228185
  18  2  0.48571523   3.5637417   20.228203
  19  3  0.50190767   3.3515989   20.228203
  20  2  0.50190761   3.3515988   20.228164
  21  2  0.50190763   3.3515988   20.22817
  22  2  0.50190762   3.3515988   20.228169
  23  1  0.50273838   3.2962338   20.229509
  24  1  0.50367646   3.2410166   20.231026
  25  2  0.50367649   3.2410166   20.231041
  26  1  0.50472077   3.1859565   20.232733
  27  1  0.50587004   3.1310639   20.234598
  28  2  0.50587006   3.1310639   20.234613
  29  1  0.50712296   3.0763503   20.236651
  30  3  0.51674196   2.9999978   20.236651
  31  1  0.51807184   2.9454802   20.238861
  32  3  0.52383102   2.9179956   20.238861
  33  1  0.5252441    2.8636856   20.241241
  34  2  0.52524404   2.8636854   20.241201
  35  2  0.52524405   2.8636854   20.241205
  36  2  0.52524405   2.8636854   20.241204
  37  3  0.5286912    2.8537921   20.241204
  38  2  0.52869118   2.8537921   20.241198
  39  3  0.53075992   2.8502304   20.241198
  40  3  0.53200133   2.8489482   20.241198
  41  1  0.53349157   2.7948608   20.243747
  42  3  0.53423416   2.7943995   20.243747
  43  1  0.53581576   2.7405469   20.246464
  44  2  0.53581574   2.7405468   20.246461
  45  1  0.53748906   2.6869448   20.249344
  46  3  0.53793149   2.6867791   20.249344
  47  2  0.5379315    2.6867791   20.249344
  48  3  0.53819697   2.6867194   20.249344
  49  2  0.53819697   2.6867194   20.249344
  50  2  0.53819697   2.6867194   20.249344
  51  1  0.53995785   2.6333854   20.25239
  52  3  0.54011639   2.633364    20.25239
  53  3  0.54021152   2.6333563   20.25239
  54  3  0.54026859   2.6333535   20.25239
  55  2  0.54026859   2.6333535   20.25239
  56  2  0.54026859   2.6333535   20.25239
  57  3  0.54030284   2.6333525   20.25239
  58  3  0.54032339   2.6333521   20.25239
  59  3  0.54033572   2.633352    20.25239
  60  2  0.54033572   2.633352    20.25239
  61  1  0.54218315   2.5803036   20.255599
  62  3  0.5421903    2.5803035   20.255599
  63  2  0.5421903    2.5803035   20.255599
  64  1  0.54412348   2.527559    20.258968
  65  1  0.54614031   2.4751377   20.262496
  66  1  0.54823857   2.4230594   20.266178
  67  3  0.54824215   2.4230593   20.266178
  68  2  0.54824215   2.4230593   20.266178
  69  3  0.54824429   2.4230593   20.266178
  70  3  0.54824558   2.4230593   20.266178
  71  1  0.55042294   2.3713444   20.270014
  72  3  0.55042349   2.3713444   20.270014
  73  3  0.55042382   2.3713444   20.270014
  74  2  0.55042382   2.3713444   20.270014
  75  3  0.55042401   2.3713444   20.270014
  76  3  0.55042413   2.3713444   20.270014
  77  2  0.55042413   2.3713444   20.270014
  78  3  0.5504242    2.3713444   20.270014
  79  1  0.55267828   2.3200137   20.274
  80  1  0.55500659   2.2690886   20.278134
  81  3  0.5550062    2.2690886   20.278134
  82  2  0.5550062    2.2690886   20.278134
  83  3  0.55500597   2.2690886   20.278134

  Initial x = (0,0,0), ||f(x)|| = 58.4561, ||J(x)|| = (20.3224)
  Final x = (0.184088,0.00160079,-0.523584), ||f(x)|| = 2.26909, ||J(x)|| = (20.2781)
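
  In the stochastic trace above, the "j" column records which component of f(x) was sampled at each iteration, which is why many steps barely change x. A minimal sketch of that idea, assuming a simple linear residual function f(x) = A*x - b and a fixed learning rate (the actual f(x), learning rate, and stopping rule of gradient_descent_stochastic() are not shown in the log):

  % A minimal sketch (not the library routine): stochastic gradient descent on
  % F(x) = 0.5 * sum_j f_j(x)^2, sampling one component f_j per iteration.
  A = [ 3 1 0; 1 4 1; 0 1 5 ];  % assumed residual function f(x) = A*x - b
  b = [ 1; 2; 3 ];
  lr = 0.02;                    % assumed learning rate
  itmax = 500;                  % assumed iteration count
  x = zeros ( 3, 1 );           % start from the origin, as in the log
  for it = 1 : itmax
    j = randi ( 3 );            % pick one component of f at random
    fj = A(j,:) * x - b(j);     % f_j(x)
    g = fj * A(j,:)';           % gradient of 0.5 * f_j(x)^2
    x = x - lr * g;             % stochastic gradient step
  end
  fprintf ( 1, '  after %d steps, ||x|| = %g, ||f(x)|| = %g\n', ...
    it, norm ( x ), norm ( A * x - b ) );

  Sampling one component per step makes each iteration cheap but noisy, which matches the slow, uneven decrease of ||f(x)|| in the table above.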

gradient_descent_vector_x_test():
  Seek minimizer of a function z(x,y).
  Initial x,y = (1,1.5), f(x,y) = 4.86667, f'(x,y) = (2.3,4)
  Final x,y = (0.00634006,-0.0148111), f(x,y) = 0.000205856, f'(x,y) = (0.0105481,-0.0232821)

gradient_descent_vector_f_test():
  Seek minimizer of vector function f(x).

  it  ||x||        ||f(x)||     ||J(x)||
   0  0            58.456136    20.322401
   1  0.2095833    23.306394    20.230019
   2  0.33544226   10.61703     20.224214
   3  0.41112011   6.0130914    20.22475
   4  0.45672694   4.3199587    20.225238
   5  0.48433465   3.6747938    20.226044
   6  0.50118626   3.4069799    20.22701
   7  0.51162759   3.2750993    20.228166
   8  0.51826553   3.1922467    20.229501
   9  0.52266348   3.1271417    20.231015
  10  0.52575865   3.0685298    20.232705
  11  0.52811252   3.0123663    20.23457
  12  0.5300612    2.9572029    20.236609
  13  0.53180555   2.9025265    20.238819
  14  0.53346532   2.8481614    20.241199
  15  0.53511159   2.7940537    20.243749
  16  0.53678623   2.7401943    20.246466
  17  0.53851357   2.6865905    20.249349
  18  0.54030737   2.6332562    20.252396
  19  0.54217505   2.5802079    20.255605
  20  0.54412017   2.5274639    20.258975
  21  0.54614395   2.4750431    20.262502
  22  0.54824619   2.4229654    20.266185
  23  0.55042577   2.3712511    20.270021
  24  0.552681     2.3199211    20.274008
  25  0.55500985   2.2689967    20.278142
  26  0.55741      2.2184997    20.282421
  27  0.55987895   2.168452     20.286841
  28  0.5624141    2.1188757    20.291399
  29  0.56501271   2.0697931    20.296091
  30  0.56767197   2.0212267    20.300915
  31  0.570389     1.9731988    20.305865
  32  0.57316086   1.9257314    20.310939
  33  0.57598455   1.8788467    20.316131
  34  0.57885703   1.8325664    20.321438
  35  0.58177523   1.7869117    20.326854
  36  0.58473605   1.7419035    20.332377
  37  0.58773636   1.6975623    20.338
  38  0.59077303   1.6539076    20.343718
  39  0.59384289   1.6109585    20.349528
  40  0.59694281   1.5687333    20.355423
  41  0.60006961   1.5272492    20.361399
  42  0.60322018   1.4865227    20.36745
  43  0.60639137   1.4465692    20.373571
  44  0.6095801    1.4074032    20.379756
  45  0.61278328   1.3690378    20.386
  46  0.61599788   1.3314853    20.392297
  47  0.6192209    1.2947564    20.398641
  48  0.62244939   1.2588609    20.405028
  49  0.62568044   1.2238071    20.411451
  50  0.62891121   1.1896019    20.417904
  51  0.63213891   1.1562513    20.424382
  52  0.63536084   1.1237594    20.43088
  53  0.63857433   1.0921293    20.437392
  54  0.64177682   1.0613626    20.443912
  55  0.64496582   1.0314597    20.450435
  56  0.6481389    1.0024195    20.456955
  57  0.65129374   0.97423974   20.463467
  58  0.65442811   0.94691668   20.469967
  59  0.65753984   0.92044546   20.476448
  60  0.66062689   0.89481994   20.482906
  61  0.66368728   0.87003284   20.489336
  62  0.66671916   0.84607569   20.495734
  63  0.66972073   0.82293897   20.502094
  64  0.67269034   0.80061211   20.508413
  65  0.67562641   0.77908359   20.514686
  66  0.67852745   0.75834094   20.520909
  67  0.68139209   0.73837086   20.527079
  68  0.68421904   0.71915928   20.53319
  69  0.68700712   0.70069136   20.539241
  70  0.68975523   0.68295165   20.545227
  71  0.69246239   0.6659241    20.551146
  72  0.69512769   0.64959213   20.556994
  73  0.69775032   0.63393872   20.562769
  74  0.70032955   0.61894646   20.568468
  75  0.70286476   0.60459761   20.574089
  76  0.7053554    0.59087418   20.579629
  77  0.707801     0.57775798   20.585086
  78  0.71020118   0.56523068   20.590459
  79  0.71255564   0.55327386   20.595746
  80  0.71486413   0.5418691    20.600946
  81  0.7171265    0.53099798   20.606056
  82  0.71934267   0.52064216   20.611077
  83  0.7215126    0.51078342   20.616007

  Initial x = (0,0,0), ||f(x)|| = 58.4561, ||J(x)|| = (20.3224)
  Final x = (0.496451,0.001604,-0.52356), ||f(x)|| = 0.510783, ||J(x)|| = (20.616)

gradient_descent_test():
  Normal end of execution.

08-Jan-2023 09:14:21
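
  The two vector tests above, gradient_descent_vector_x() and gradient_descent_vector_f(), follow the same pattern: step against the gradient, which for minimizing 0.5*||f(x)||^2 is J(x)'*f(x). A minimal sketch, assuming a made-up residual function f and its Jacobian J (the test's actual f, z(x,y), learning rate, and stopping rule are not shown in the log):

  % A minimal sketch (not the library routine): gradient descent on
  % phi(x) = 0.5*||f(x)||^2 using the Jacobian, x <- x - lr * J(x)'*f(x).
  f = @(x) [ x(1) - 1; x(2) + x(1) * x(3); x(3) - 2 ];   % assumed residual function
  J = @(x) [ 1,    0, 0;
             x(3), 1, x(1);
             0,    0, 1 ];                               % its Jacobian
  lr = 0.05;                   % assumed learning rate
  tol = 1.0e-06;               % stepsize tolerance
  itmax = 5000;                % maximum iterations
  x = zeros ( 3, 1 );          % start from the origin, as in the log
  for it = 1 : itmax
    g = J ( x )' * f ( x );    % gradient of 0.5*||f(x)||^2
    xnew = x - lr * g;
    if ( norm ( xnew - x ) < tol )
      x = xnew;
      break
    end
    x = xnew;
  end
  fprintf ( 1, '  %d iterations, x = (%g,%g,%g), ||f(x)|| = %g\n', ...
    it, x(1), x(2), x(3), norm ( f ( x ) ) );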