TRACKING OF NON-STANDARD TRAJECTORIES USING MPC METHODS WITH CONSTRAINTS HANDLING ALGORITHM

Abstract: In recent decades, Model-Based Predictive Control (MPC) has shown advantages over other control methods, such as its ability to handle constraints and to optimize inputs with respect to a value function. However, the complexity of implementing the MPC algorithm on real mechatronic systems remains one of the major challenges. Traditional predictive control approaches assume regulation to zero or a step change of the set point. Nevertheless, many systems need to track more complicated setpoint trajectories. Robotics and driverless transport networks are currently developing rapidly, so programming given vehicle trajectories is a relevant problem. In this article, the authors present an alternative solution for tracking non-standard trajectories in fields such as robotics and mechatronics, which could be used in self-driving cars, drones, rockets, robot arms, and other automated factory systems.


Introduction
The general concept of control systems is based on the feedback loop. Usually, controllers perform conventional steps, such as gathering actual outputs of the system, comparing them with the requested result, and designing control to stabilize the system.
The technique that governs how a system is controlled is called a control law. Designing a suitable control law is therefore central to systems engineering, since many objectives must be considered. PID is one of the most popular and widely used controllers in industry. However, there are more advanced methods such as Model-based Predictive Control (MPC), which is still being actively investigated. MPC is not a single algorithm with a fixed set of equations; it is a control philosophy with various realizations, such as DMC, GPC, PFC, RHC, etc. The key ideas of predictive control are:
• explicit use of a model to predict the process output of the system;
• computation of a control sequence that minimizes an objective function;
• the receding horizon idea: at each iteration the horizon is moved forward, and only the first control signal of the sequence computed at that iteration is applied.
The advantages of MPC over other methods include:
• control can be designed with limited control-theory knowledge, since the steps are intuitive and the tuning is relatively easy;
• it can be applied to simple dynamics as well as to complex ones;
• it handles the multivariable case;
• it provides feedforward compensation of measurable disturbances in a natural way;
• constraints can be incorporated during the design process;
• it is very beneficial when future references are known.

Part 1. Problem Definition
Along with its benefits, the MPC approach also has drawbacks. Even though this type of control law is assumed to be easy to implement, its derivation becomes more complicated when constraints are applied to the system. In extended cases, MPC can also require considerable computational effort. Nevertheless, the main difficulty is the need for a precise model of the system, because the model is the prior knowledge on which all decisions are based.
In addition, conventional predictive control assumes that the tracked set point is either zero or a step change. Many systems have more complicated setpoint trajectories, so the traditional MPC approach cannot be applied directly and needs to be modified. This project deals with the problems that occur when advance knowledge of future setpoint changes is available and the algorithm used to track the reference signal is a Model-based Predictive Controller; specifically, Dual-Mode MPC is implemented to carry out the experiments needed to support the proposed solutions.

Part 2. Model-Based Predictive Control
MPC is a control approach in which the operating control input is computed at every iteration by minimizing a value function. This turns out to be a finite-horizon LQR problem: the state at each step is taken as the initial point, and from this information the optimal input sequence is generated to implement a one-step-ahead algorithm. The optimization yields the optimal control vector, of which only the first element is applied to the system. Hence, the basic distinction of MPC from other control laws is that it does not use pre-calculated values from an off-line control law [1].
MPC is closely related to optimal control. The main idea of MPC is to use a dynamic model to predict the plant's future values and to minimize a cost function in order to obtain the optimal solution. The controller then takes one step ahead and starts over. MPC thus uses past data to predict the future behavior of the system and applies the best input at each step; this dynamic, repeated optimization differentiates model-based predictive control from other control laws.
For instance, the PID control law is not recomputed at each step and makes no future predictions. The basic PID approach is to choose gains at the start and to keep applying the same gains throughout the whole process. Conversely, MPC estimates the future over a specific horizon at each step.
In other words, Model Predictive Control uses the same principle as a human being, applying the action expected to give the best outcome. For example, when a human drives a car, the objective is to get home as soon as possible without breaking the speed limit. Since the driver cannot see home from the workplace, the whole route cannot be predicted; the best one can do is predict as far as one can see, the so-called prediction horizon, and make the best decision at each moment while moving ahead.

Receding Horizon
Optimization-based control refers to the online calculation of an optimal future path as part of the feedback stabilization of a (usually nonlinear) system. The main idea is the receding horizon control method: the optimal predicted path is calculated from the current state to the desired state over a finite horizon N, used only for the current iteration, and then recalculated based on the new data. The basic principle of the receding horizon is illustrated in Figure 1.
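The receding horizon principle can be illustrated with a minimal NumPy sketch. The plant matrices, horizon length, and cost weights below are illustrative assumptions (a discrete double integrator with Q = I, R = I), not values from the paper; the point is the loop structure: optimize over N steps, apply only the first input, then re-optimize from the new state.

```python
import numpy as np

# Minimal receding-horizon loop for an assumed double-integrator system
# x(k+1) = A x(k) + B u(k).  The "optimization" is an unconstrained
# finite-horizon least-squares solve with Q = I and R = I.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N = 10  # prediction horizon

def predict_matrices(A, B, N):
    """Stack the predictions so that x_pred = Px @ x0 + Pu @ u_seq."""
    n, m = B.shape
    Px = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Pu = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Pu[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return Px, Pu

Px, Pu = predict_matrices(A, B, N)
x = np.array([[5.0], [0.0]])
for k in range(30):
    # closed-form minimizer of ||Px x + Pu u||^2 + ||u||^2 (single input, m = 1)
    u_seq = np.linalg.solve(Pu.T @ Pu + np.eye(N), -Pu.T @ Px @ x)
    u = u_seq[:1]            # receding horizon: apply only the first input
    x = A @ x + B @ u        # plant step; then re-optimize from the new state

print(np.round(x.ravel(), 3))  # the state is regulated toward the origin
```

Only the first element of each optimized sequence reaches the plant; the rest of the sequence is discarded and recomputed at the next sample, exactly as the receding horizon idea prescribes.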

The horizon must contain all important dynamics, such as the settling time; otherwise, performance can be poor and some significant points can be lost or unobserved.
Constraints are very common in real life, but they are very difficult to handle. Traditional methods aim to handle the constraints off-line [2].

Controllability and observability
The terms stabilizability and detectability are used frequently. The basic concepts of controllability, observability, stabilizability, and detectability can be described as follows:
Controllability: a control system is said to be controllable if, for any given finite time interval [t0, tf], it is possible to find some input signal u(t), t ∈ [t0, tf], that will steer any given initial state x(t0) to any final state x(tf).
Observability: a control system is said to be observable if, for any t > t0, it is possible to determine the state of the system x(t0) through measurements of y(t) and u(t) on [t0, t].
Stabilizability: a linear system is said to be stabilizable if all its unstable modes, if any, are controllable.
Detectability: A linear system is said to be detectable if all its unstable modes, if any, are observable [3].
Theorem: a control system is controllable if and only if the rank of the controllability matrix C, shown in equation 2.1, is equal to n:

C = [B  AB  A^2 B  ...  A^(n-1) B].   (2.1)
The significance of stabilizability is that even if some modes are not controllable, provided they are stable or asymptotically stable, these specific modes will stay bounded or decay to zero [3].
Theorem: a control system is observable if and only if the rank of the observability matrix O, shown in equation 2.2, is equal to n:

O = [C; CA; CA^2; ...; CA^(n-1)].   (2.2)

The significance of detectability is that even if some modes are unstable, at least this behavior can be observed.
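The two rank tests above are easy to check numerically. The system matrices A, B, C below are an illustrative assumption (a two-state chain with actuation on the second state and measurement of the first), not a model from the paper.

```python
import numpy as np

# Rank tests for the controllability (2.1) and observability (2.2) theorems,
# sketched with NumPy for an assumed two-state system.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Controllability matrix [B, AB, ..., A^(n-1) B]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
# Observability matrix [C; CA; ...; CA^(n-1)]
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

print(np.linalg.matrix_rank(ctrb) == n)  # True: the pair (A, B) is controllable
print(np.linalg.matrix_rank(obsv) == n)  # True: the pair (C, A) is observable
```

If either rank were below n, the corresponding weaker property (stabilizability or detectability) would still have to be verified mode by mode.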

Existing model overview
Numerous studies on the computational efficiency of trajectory tracking are currently available. Jiao and Wang [8] looked into an event-triggered trajectory tracking control method based on the guidance-control framework for fully actuated surface vessels. The suggested method results in fewer controller executions and less calculation and signal transfer. An event-triggered reset controller with a nonlinear disturbance observer was created by Wang and Zhang [9]. It has been demonstrated that the suggested plan is significantly simpler to adopt and can save resources more effectively. In order to track unicycle robots, Sun et al. [10] suggested two event-based adaptive prediction horizon MPC algorithms that took into account the computational complexity at each triggering instant. The outcomes demonstrated that the methods are successful in reducing the computational load [11].
Additionally, a number of studies addressed employing meta-heuristic algorithms to solve MPC optimization problems. Three different meta-heuristic optimization strategies were covered by Merabti et al. [12] for the optimization of nonlinear MPC for tracking control of a mobile robot. MPC and path planning were developed by Falcone et al. [13] using a bicycle vehicle model. An alternative MPC was created by Yakub and Mori [14] based on the Borelli principle paired with a feedforward controller. However, these investigations relied on trial-and-error techniques to calculate the MPC controller gain value. Additionally, the linearization of the vehicle's dynamic model, which the MPC controller uses, is done at a chosen speed [15].

Part 3. Designing MPC
The basic aim of model-based predictive control is to provide controls u so that the system meets all control goals, such as the output y tracking a set point r and regulation of the state x to zero. Furthermore, the controller should be tuned so that the cost is minimized and the performance is maximized. Finally, the system must satisfy all constraints (Trodden, 2015). The process of controlling via MPC is therefore as follows:
- measure the state or output of the system at the current iteration;
- solve an optimization problem to obtain the best input, which is then applied to the real device; the optimization uses a dynamic model of the system to predict its behavior for a limited time ahead from the current state, and chooses the forecast that optimizes the performance measure while satisfying all constraints;
- apply the optimal control;
- repeat at the next iteration [4].
The graphical design of the control process through MPC is illustrated in Figure 2 below. Discrete-time linear state-space models are convenient if the system of interest is sampled at discrete times. If the sampling rate is selected correctly, the behavior between the samples can be safely ignored, and the model describes exclusively the behavior at the sample times. The linear, time-invariant, discrete-time model is shown in equation 3.1:

x(k+1) = A x(k) + B u(k),   (3.1)
where the state x(k) is measurable at every step k and u(k) is a vector of manipulated variables to be determined by the controller to regulate x to 0 while minimizing a stage cost. From an initial state x0, the cost function is shown in equation 3.2:

V = sum_{k=0}^{inf} ( x(k)' Q x(k) + u(k)' R u(k) ),   (3.2)

which is subject to (3.1) and has a unique optimal solution if R is positive definite (p.d.), Q is positive semidefinite (p.s.d.), the pair (A, B) is stabilizable, and the pair (Q^(1/2), A) is detectable (Rawlings and Mayne, 2012). This is an infinite-horizon LQR problem, where Q and R are weighting matrices, which are commonly p.d.
Positive definite (p.d., Q > 0) means that all eigenvalues of the matrix are positive, whereas positive semidefinite (p.s.d., Q ≥ 0) means that all eigenvalues are non-negative, with some possibly equal to zero.
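These eigenvalue conditions can be checked directly. The example matrices R and Q below are assumptions chosen to exhibit each case, not the paper's weighting matrices.

```python
import numpy as np

# Eigenvalue checks for the p.d. / p.s.d. definitions above;
# the example matrices are illustrative assumptions.
def is_pd(M):
    """Positive definite: all eigenvalues strictly positive (symmetric M)."""
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

def is_psd(M, tol=1e-10):
    """Positive semidefinite: all eigenvalues non-negative (up to tolerance)."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

R = np.array([[2.0, 0.0], [0.0, 1.0]])   # p.d.: eigenvalues 2 and 1
Q = np.array([[1.0, 1.0], [1.0, 1.0]])   # p.s.d.: eigenvalues 2 and 0

print(is_pd(R), is_psd(Q), is_pd(Q))  # True True False
```

Requiring R > 0 is what later guarantees a strictly convex quadratic program with a unique minimizer.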
An infinite-horizon control law assumes an infinite sequence of decision variables in the optimization problem, which is not achievable in reality because of its intractability [5].
Thereby, the prediction horizon length is reduced to N, so that the cost function transforms as demonstrated in equation 3.3:

V_N = x(N)' P x(N) + sum_{k=0}^{N-1} ( x(k)' Q x(k) + u(k)' R u(k) ),   (3.3)

where P is a terminal cost matrix. In this case only N control inputs are optimized from x0.
Hence, the obtained control sequence is u = (u(0), u(1), ..., u(N-1)), and recursive application of the model (3.1) leads to the stacked predictions in 3.4:

x = Px x(0) + Pu u,   (3.4)

where x = (x(1), ..., x(N)) and the prediction matrices Px, Pu are built from powers of A and products with B. The problem is then reformulated as minimizing V_N over u subject to (3.4), so that, at the initial x0, the finite-horizon LQR task can be written as in 3.7:

V_N = (1/2) u' H u + x0' F' u + const,   (3.7)

where H is the Hessian matrix, H = Pu' Qbar Pu + Rbar and F = Pu' Qbar Px, with Qbar = diag(Q, ..., Q, P) and Rbar = diag(R, ..., R); R > 0 leads to H > 0.
In order to solve this optimization task, it is necessary to take the gradient of the cost function and set it to 0, which determines the input sequence described in (3.8):

grad V_N = H u + F x0 = 0.   (3.8)

In the case of H > 0, the resulting input u* is the unique global minimum. The Hessian matrix H is not inverted directly, because it can be ill-conditioned or even non-invertible; instead, the pseudo-inverse technique is used, as in equation 3.9:

u* = -H^+ F x0.   (3.9)

The same computation performed at the initial x0 is repeated at every step, so the overall procedure is a receding-horizon feedback in which only the first control input from u* is applied to the system. The horizon N is chosen to be as small as affordable, because the longer the horizon, the higher the computational cost and the worse the conditioning of the stacked Q and P matrices [6].
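The condensed construction of H and F and the pseudo-inverse solve can be sketched as follows. The system matrices, weights, and horizon are illustrative assumptions; Qbar stacks Q along the horizon with the terminal cost P in the last block, matching the finite-horizon cost above.

```python
import numpy as np

# Condensed finite-horizon LQR (equations 3.7-3.9) for an assumed
# 2-state, 1-input system; all numeric values are illustrative.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2); R = np.array([[0.1]]); P = 10 * np.eye(2)
N = 5
n, m = B.shape

# Prediction matrices: x_pred = Px @ x0 + Pu @ u
Px = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Pu = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Pu[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q); Qbar[-n:, -n:] = P   # terminal weight P in last block
Rbar = np.kron(np.eye(N), R)

H = Pu.T @ Qbar @ Pu + Rbar        # Hessian of the quadratic cost (3.7)
F = Pu.T @ Qbar @ Px               # linear term multiplying x0

x0 = np.array([[1.0], [0.0]])
u_star = -np.linalg.pinv(H) @ F @ x0   # pseudo-inverse solve (3.9)
u0 = u_star[:m]                        # receding horizon: apply first input only
print(u0.shape)  # (1, 1)
```

Because R > 0 makes H positive definite here, the pseudo-inverse coincides with the ordinary inverse and the stationarity condition (3.8) is satisfied exactly.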
However, a very small horizon can cause a significant difference between the predicted and the actual closed-loop behavior, and hence bad prediction and performance, as illustrated in Figure 4. Sometimes the resulting control law may not even stabilize the closed loop. One method to handle this problem is to set a terminal state constraint x(N) = 0: the last predicted state is forced to be zero and remains zero for every subsequent step. However, this approach is very crude and can lead to bad robustness with respect to modelling errors and disturbances; it can also cause problems in constraints handling.
Instead, another approach, the so-called Dual-Mode MPC, can be used to improve the performance by using two modes: optimized control inputs over the horizon, followed by a fixed stabilizing feedback law beyond it.
However, control in the presence of constraints is difficult, even for linear systems. The trouble is that optimal operation usually means operating at the limits, as seen in Figure 5 (operating near limits) [4]. The linear inequality method is used to handle the constraints: the input and state constraints are rewritten in the form G u ≤ h. The control law is linear when there are no constraints, but it becomes non-linear in the presence of limits. In this case, the feasible region may not contain the unconstrained minimum, and the best choice is the point of the feasible region nearest to the unconstrained minimum. Therefore, it is required to recalculate the input at every step by solving a constrained Quadratic Program. A constrained system can be locally stable only for some x0, and to ensure local stability it is necessary for the value function to act as a Lyapunov function decaying asymptotically to 0; this means the value function is positive definite and decreases monotonically. If all these criteria are met, the extremely attractive property of recursive feasibility is also guaranteed [7]. Recursive feasibility can be reached through an invariant set: a set S is said to be invariant if x(0) ∈ S implies x(k) ∈ S for all k > 0. The basic objective is to keep the state inside such a set, as represented in Figure 6.

Part 5. Trajectory tracking

A conventional model-based predictive control algorithm, which is used to track a set point, expects either regulation or a standard step change. In this approach, the set point is assumed to be a static variable. Instead of this traditional method, however, the setpoint can be chosen to be not a scalar variable but a vector. This vector can consist of the basic points of the trajectory to be tracked, as in 5.1:

r = (r(1), r(2), ..., r(n)),   (5.1)

where n is the number of basic points of the trajectory.
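The constrained QP step can be illustrated with a deliberately simple solver. The Hessian, linear term, and box limits below are assumptions, and projected gradient descent stands in for a production QP solver; it is used here only because box constraints make the projection a simple clip.

```python
import numpy as np

# Sketch of the constrained QP step: minimize 0.5 u'Hu + f'u subject to
# |u| <= u_max, solved by projected gradient descent (illustrative only;
# a dedicated QP solver would be used in practice).
H = np.array([[2.0, 0.5], [0.5, 1.0]])   # p.d. Hessian
f = np.array([[-3.0], [1.0]])            # linear term, e.g. F @ x0
u_max = 1.0

u = np.zeros((2, 1))
step = 1.0 / np.max(np.linalg.eigvalsh(H))  # step from the largest curvature
for _ in range(500):
    u = u - step * (H @ u + f)              # gradient step on the quadratic
    u = np.clip(u, -u_max, u_max)           # project onto the feasible box

print(np.round(u.ravel(), 3))
```

The unconstrained minimizer of this example lies outside the box, so the constrained optimum sits on the boundary, exactly the "operating at limits" behavior described above.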
Figure 7 illustrates the key idea of this approach: some basic points are taken from the required speed trajectory. These points form a vector of setpoints, which can then be used in the MPC algorithm.
Thereby, the setpoint r to be tracked becomes a vector of points, and the output must reach y = r(i) for each basic point. Thus, the steady-state target optimization (SSTO) computes a steady-state pair (x_ss(i), u_ss(i)) for each provided setpoint r(i), as shown in Figure 8. In this case it is necessary to regulate the state error x − x_ss and the output error y − r(i) to zero, as described in 5.2:

x_ss = A x_ss + B u_ss,   C x_ss = r(i).   (5.2)

The system is at the equilibrium when x(k+1) = x(k) = x_ss, and the output steers to the setpoint r(i) without any error only when C x_ss = r(i). This leads to the following linear system:

T [x_ss; u_ss] = [0; r(i)],   where T = [A − I, B; C, 0].

As can be seen, the matrix T does not change between setpoints, which makes the method very easy to implement with only a few modifications to the algorithm.
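The SSTO computation above can be sketched directly: since T is fixed, the same linear system is solved for each basic point r(i) of the setpoint vector. The model matrices and the setpoint values are illustrative assumptions.

```python
import numpy as np

# Steady-state targets for a vector of setpoints (SSTO): solve
# [A - I, B; C, 0] [x_ss; u_ss] = [0; r(i)] for each basic point r(i).
# The model and the setpoint values are assumptions for illustration.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
n, m = B.shape

# T is the same for every setpoint, so it is built once and reused.
T = np.block([[A - np.eye(n), B], [C, np.zeros((C.shape[0], m))]])

r_vec = [0.5, 1.0, 1.5]   # basic points of the trajectory, as in (5.1)
targets = []
for r in r_vec:
    rhs = np.vstack([np.zeros((n, 1)), [[r]]])
    z = np.linalg.solve(T, rhs)
    targets.append((z[:n], z[n:]))  # (x_ss, u_ss) for this basic point

x_ss, u_ss = targets[0]
print(np.allclose(A @ x_ss + B @ u_ss, x_ss))  # equilibrium holds: True
print(np.allclose(C @ x_ss, 0.5))              # output matches r(1): True
```

Each pair (x_ss(i), u_ss(i)) then serves as the regulation target for the MPC error dynamics while the corresponding basic point is active.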

Conclusion
This paper presents a realization of Model-based Predictive Control with a constraints handling algorithm. Furthermore, the regular MPC logic is refined so that the controller can track a setpoint trajectory, and the realization of this approach was executed without violating the constraints; all goals were achieved successfully. All the mathematical computations, explanations, and reasoning were provided throughout the paper. The approach and methodology presented in this article could be applied to different areas where control systems are needed, e.g., Mechatronics, Robotics, IoT, etc.