# OPTIMAL_CONTROL_1D: Optimal Control of a 1D System

OPTIMAL_CONTROL_1D is a MATLAB program which implements an algorithm for finding the optimal control for a 1D system.

The program is intended as a demonstration and teaching aid.

The form of this problem is as follows. We are given

• an interval [a,b],
• a right hand side function f(x),
• a "target" function u_hat(x).

Consider the following two point boundary value problem:

```
- d/dx ( q(x) * du(x)/dx ) = f(x)
u(a) = 0; u(b) = 0
```

There are two unknowns in this problem, the functions u(x) and q(x). Let us suppose that u(x) represents a state variable, or the value of some important physical quantity, while the function q(x) is a function that we are allowed to select. Because it represents our ability to "control" the problem, we call the function q(x) the control function.

As long as the function q(x) satisfies certain conditions (differentiability would be nice, for instance, though it will turn out not to be essential), the two point boundary value problem can be solved for u(x) as soon as we have chosen q(x). In this case, u(x) depends on q(x), and we can think of the state function u(x) as a "function" of the control function q(x). To emphasize this, we may sometimes write u as u(q).
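To make this dependence concrete, here is a small sketch of solving the state equation for a given control q(x). The MATLAB program itself uses finite elements; this illustration (with hypothetical names, not taken from the program) uses central finite differences instead:

```python
import numpy as np

def solve_state(q, f, a=0.0, b=1.0, n=200):
    """Solve -(q(x) u'(x))' = f(x), u(a) = u(b) = 0, by central finite
    differences on a uniform grid.  Illustrative sketch only; the
    OPTIMAL_CONTROL_1D program uses a finite element discretization."""
    x = np.linspace(a, b, n + 1)
    h = x[1] - x[0]
    # q evaluated at the cell midpoints x_{i+1/2}
    qm = q(0.5 * (x[:-1] + x[1:]))
    # Tridiagonal system for the interior unknowns u_1 .. u_{n-1}
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = qm[i] + qm[i + 1]
        if i > 0:
            A[i, i - 1] = -qm[i]
        if i < n - 2:
            A[i, i + 1] = -qm[i + 1]
    rhs = h**2 * f(x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u
```

Each choice of q produces a different state u(q); calling `solve_state` with two different controls and the same f makes the dependence visible.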

We now pose the optimal control problem. We seek the control function q(x) (out of some space of allowable choices) which is "optimal". A control function which is optimal is termed, of course, an optimal control function. But what is our definition of "optimal"? Optimality depends on what we want to happen, and how we measure that. For this example code, we are going to want the solution u(q) to match, as closely as possible, the prescribed function u_hat(x). Note that there might be many control functions which achieve a perfect match - in that case, they would all be "optimal". More commonly, no control function achieves a perfect match, and then we would be interested in the control function that achieved the best approximation. Thus, for our problem, we define an optimal control function q(x) as one that minimizes the difference between the corresponding state function u(x) and the target solution u_hat(x).

We can write

```
J(u) = 1/2 sqrt ( integral ( a < x < b ) ( u(x) - u_hat(x) )^2 dx )
```

so that our optimal control problem becomes:

Find a control function q that minimizes J(u(q)).

The function J(u) is known as the cost function. Many other cost functions are possible, depending on the application.

It is common to modify the optimal control problem to include what is called "the cost of the control". In other words, it is often the case that a very reasonable approximation of u_hat(x) can be obtained with a control function q(x) of relatively small norm, while better approximations are only slightly better, and come at the cost of a great increase in the norm of the control function q(x).

If the cost of the control function is to be included, this can be done by reformulating the original cost function to include a scaled value of the norm of the control function. A typical formulation might be:

```
J(u,q) = 1/2   * sqrt ( integral ( a < x < b ) ( u(x) - u_hat(x) )^2 dx )
       + alpha * sqrt ( integral ( a < x < b ) ( q(x)            )^2 dx )
```

The form of the cost function will vary from problem to problem.
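A penalized cost of this general shape is easy to evaluate numerically. The sketch below (not part of the MATLAB program; the function name and quadrature choice are ours) approximates both integrals with the trapezoid rule:

```python
import numpy as np

def penalized_cost(u, q, u_hat, a=0.0, b=1.0, alpha=0.01, n=1000):
    """Evaluate J(u,q) = 1/2 sqrt( int (u - u_hat)^2 dx )
                       + alpha * sqrt( int q^2 dx )
    by the trapezoid rule.  Illustrative sketch only."""
    x = np.linspace(a, b, n + 1)
    mismatch = np.trapz((u(x) - u_hat(x))**2, x)
    control = np.trapz(q(x)**2, x)
    return 0.5 * np.sqrt(mismatch) + alpha * np.sqrt(control)
```

When u matches u_hat exactly, the first term vanishes and only the scaled control norm remains, which is exactly the trade-off the penalty is meant to express.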

### The Example Problem

For the particular problem considered in our example, we have the data:

```
[a,b] = [0,1]
```
and
```
u_hat(x) = x ( 1 - x^2 )
```
and
```
f(x) = 15x^4 - 3x^2 + 6x
```

The cost function used for the example problem does not take the square root of the integrals, and the "cost of control" portion of the cost involves the derivative of q(x):

```
J(u,q) = 1/2   * integral ( a < x < b ) ( u(x) - u_hat(x) )^2 dx
       + alpha * integral ( a < x < b ) ( dq(x)/dx        )^2 dx
```

For this problem, if the coefficient alpha in the cost function J(u,q) is zero, then the optimal control function q(x) can be shown analytically to be:

```
q(x) = x^3 + 1
```

The code optimal_control_1d_driver carries out an iterative procedure to determine the optimal control function. The value of alpha is a parameter that can be set by the user. If it is not zero, then the computed control function will differ from the exact solution known for alpha = 0.
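The alpha = 0 claim can be checked symbolically: if a perfect match is possible, the target u_hat must itself satisfy the state equation for the optimal q, so differentiating recovers the right hand side implied by this data. A quick sketch using sympy (not part of the MATLAB program):

```python
import sympy as sp

x = sp.symbols('x')
u_hat = x * (1 - x**2)   # target state from the example data
q = x**3 + 1             # claimed optimal control for alpha = 0

# If q is optimal with alpha = 0, the state u(q) matches u_hat exactly,
# so u_hat must satisfy the state equation -(q u')' = f.  Recover f:
f = sp.expand(-sp.diff(q * sp.diff(u_hat, x), x))
print(f)   # 15*x**4 - 3*x**2 + 6*x

# The homogeneous boundary conditions hold as well:
assert u_hat.subs(x, 0) == 0 and u_hat.subs(x, 1) == 0
```

This confirms that q(x) = x^3 + 1 drives the state exactly onto the target, which is why it is optimal when the control penalty is switched off.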

### Languages:

OPTIMAL_CONTROL_1D is available in a MATLAB version.

### Related Data and Programs:

ONED, a MATLAB library which contains functions useful for 1D finite element calculations.

STOCHASTIC_GRADIENT_ND_NOISE, a MATLAB program which solves an optimal control problem involving a functional over a system with stochastic noise.

### Author:

Jeff Borggaard, John Burkardt, Catalin Trenchea, Clayton Webster

### Reference:

1. Jack Macki, Aaron Strauss,
Introduction to Optimal Control Theory,
Springer, 1982,
ISBN: 038790624X,
LC: QA402.3.M317.

### Source Code:

• cost_function.m, evaluates quantities associated with the cost function.
• finite_element.m, sets up and solves the finite element system.
• geometry_1d.m, defines the finite element geometry.
• graphics.m, displays the computed and optimal functions.
• oned_bilinear.m, integrates kernel(x) * basis function(x) * test function(x).
• oned_f_int.m, computes the integral of f(x) times a test function.
• oned_gauss.m, sets Gauss integration points on (-1,1).
• oned_mesh.m, generates a mesh with a prescribed density; the routine returns elements of the same type (linear or quadratic) as the input xb, e_connb.
• oned_shape.m, computes test functions and derivatives for a Lagrange C0 element, given element coordinates and Gauss points (assumes all nodes are uniformly distributed in the element).
• optimal_control_1d.m, solves the optimal control problem given a finite element mesh, control function Q, right hand side F, and U-HAT function.
• timestamp.m, prints the YMDHMS date as a timestamp.

### Examples and Tests:

You can go up one level to the MATLAB source codes.

Last revised on 29 November 2011.