Techniques used in this work
Least squares method
The least squares method is typically used to estimate regression
parameters by minimizing the sum of squared errors (SSE). Let the
experimental region, \( \mathbf{R} \), be the \( i \)-dimensional
hyperspace made up of all possible values that each input variable can
take:
\begin{equation}
\mathbf{R} : \left\{ f\left(x_{i}\right) \ \middle|\ x_{i} \in \left[ x_{i}^{\min},\ x_{i}^{\max} \right] \right\} \nonumber
\end{equation}
In this work, the SSE is given by:
\begin{equation}
SSE = \sum_{j=1}^{n} \left( Y\left(\mathbf{R}\right) - Z\left(\mathbf{R}\right) \right)^{2} \tag{1}
\end{equation}
where \( \mathbf{Y} = f\left(\mathbf{R}\right) \) is the response of the
function to approximate, whose optimum is sought, and \( \mathbf{Z} \)
is the response of the model or function to superimpose, which has
desired and well-established optimality properties, i.e., it is convex
and has a global optimum.
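As an illustration only, the following Python sketch fits a convex
quadratic model to a one-dimensional target function by ordinary least
squares; \texttt{np.polyfit} minimizes the SSE of Eq. (1). The helper
name, the grid size, and the choice of a quadratic model are
hypothetical, not taken from the method described here.
\begin{verbatim}
import numpy as np

# Hypothetical 1-D illustration: approximate a target function f on a
# grid over [x_min, x_max] with a quadratic model
# z(x) = a*x**2 + b*x + c fitted by least squares.
def fit_quadratic(f, x_min, x_max, n_points=50):
    x = np.linspace(x_min, x_max, n_points)  # grid over the region R
    y = f(x)                                 # response Y of the target
    a, b, c = np.polyfit(x, y, deg=2)        # coefficients minimizing SSE
    z = a * x**2 + b * x + c                 # response Z of the model
    sse = np.sum((y - z) ** 2)               # Eq. (1)
    return (a, b, c), sse
\end{verbatim}
For example, \texttt{fit\_quadratic(np.cosh, -1.0, 1.0)} superimposes a
convex parabola on \( \cosh \) over \( [-1, 1] \).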
Experimental region discretization
To generate the grid of experimental points used in the proposed method,
a discretization size, or step size, \( \Delta x \), can be chosen when
the input variables are initialized with non-integer values. This step
size can be user-defined; its use is illustrated later in the
evaluation of the method on global optimization test functions.
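A minimal sketch of one possible grid construction follows, assuming
per-variable bounds given as pairs and NumPy available; the helper name
and the example bounds are hypothetical.
\begin{verbatim}
import numpy as np

def discretize_region(bounds, dx):
    """bounds: one (x_min, x_max) pair per input variable;
    dx: user-defined step size."""
    axes = [np.arange(lo, hi + dx, dx) for (lo, hi) in bounds]
    mesh = np.meshgrid(*axes, indexing="ij")
    # Each row of the result is one experimental point in R.
    return np.stack([m.ravel() for m in mesh], axis=-1)

# Example: two input variables on [-1, 1] with step size 0.5
# gives a 5 x 5 grid of 25 experimental points.
points = discretize_region([(-1.0, 1.0), (-1.0, 1.0)], dx=0.5)
\end{verbatim}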
Multiple starting points
The multiple starting points technique is a heuristic frequently used to
increase the chance of finding an attractive solution close to the
global optimum. When a local optimization method is used, it is executed
many times from different starting points to increase the chance of
convergence to a competitive solution [7].
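The sketch below illustrates the idea for a generic objective with box
bounds, using a local method from SciPy; the number of starts, the
random initialization, and the helper name are illustrative choices
rather than the settings used in this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def multistart(objective, bounds, n_starts=20, seed=0):
    """Run a local optimizer from several random starting points
    and keep the best local optimum found."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                      # random start
        res = minimize(objective, x0, bounds=bounds)  # local search
        if best is None or res.fun < best.fun:
            best = res                                # keep the best
    return best
\end{verbatim}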