The motion recognition layer is a key component of the proposed model. It is responsible for identifying the diverse sports in which athletes train, using a predefined collection of motion data. The underlying algorithm in this layer is the GSAAN [45] model for action recognition. For action recognition, GSAAN is trained on a set of predefined motion data to recognize and classify different actions based on their characteristic features. GSAAN is a deep learning framework that can be used for computer-aided soccer teaching and training. The framework comprises two main stages: the first generates the graph structure for soccer training, and the second performs the teaching for the soccer game. In the first stage, the graph structure is generated by sampling points on the surface and connecting them to form a graph; the sampled points serve as nodes, while the edges represent the connections between them. In the second stage, GSAAN is used to teach the soccer game. Specifically, GSAAN aggregates information from the neighbouring nodes on the graph and then applies attention mechanisms to the aggregated data to compute new features for each node. The updated features are then used to compute the optimal values for soccer teaching and training. The GSAAN framework can be trained with supervised learning, where the training data comprise input–output pairs of soccer game states whose corresponding gain values are derived using Eq. (5).
$$O(i) = \mathrm{Sample}\left[ {O(i),\,l} \right]$$
(5)
where \(O(i)\) is the set of training neighbours sampled for node \(i\), and \(l\) denotes the feature vectors for the nodes. Next, an attention mechanism assigns weights to the sampled neighbours based on their relevance to the target node. The attention weight for neighbour \(j\) with respect to node \(i\) is computed in Eq. (6):
$$a(ij) = soft\max \left( {f(a) \ast \left( {X(i) + X(j)} \right)} \right)$$
(6)
where \(f(a)\) denotes a learnable function that maps the concatenation of the feature vectors of nodes \(i\) and \(j\) to a scalar value, and softmax is a function that normalizes the attention weights for each node \(i\). The aggregated representation for node \(i\) is derived in Eq. (7):
$$h\_i^{(l + 1)} = g\_agg\left[ {\sum\nolimits_{j} {a(ij)\,W^{(l)} h\_j^{(l)} } } \right]$$
(7)
where \(g\_agg\) signifies a non-linear activation function, \(W^{(l)}\) denotes the learnable weight matrix for layer \(l\), and \(h\_j^{(l)}\) signifies the hidden state of neighbour \(j\) at layer \(l\). The output of the GSAAN network is typically obtained by passing the final hidden-state matrix through a linear layer and a softmax function. A probability distribution over the nodes in the graph, in terms of their degrees, is given in Eq. (8):
$$p\_i = \deg (i)\Big/\sum\nolimits_{j} {\deg (j)}$$
(8)
where \(p\_i\) denotes the distribution over the nodes, \(\deg (i)\) is the degree of node \(i\), and \(\sum\nolimits_{j} {\deg (j)}\) is the sum of the degrees of all nodes in the graph. Teaching with the GSAAN network for the soccer game consists of skill improvement over time in the game. For teaching the soccer game, the network is trained using Eq. (9):
$$R(t) = R(0) + \sum\limits_{i = 1}^{t} {\Delta R(i)}$$
(9)
where \(R(t)\) denotes the player's skill level at time \(t\), \(R(0)\) denotes the player's skill level at the beginning of training, and \(\Delta R(i)\) is the change in skill level at each discrete time step. The cross-entropy loss is expressed using Eq. (10):
$$L = - \frac{1}{N}\sum {Y \ast \log (Y)}$$
(10)
where \(L\) denotes the cross-entropy loss, \(N\) denotes the number of classes, and \(Y\) denotes the total number of players. By minimizing the overall loss through adjustment of the network parameters, the output is derived using Eq. (11):
$$PC = ArgMax(W_{out} * Context + B_{out} )$$
(11)
where \(PC\) denotes the predicted class of the output, \(W_{out}\) denotes the weight matrix for the linear transformation, and \(B_{out}\) denotes the bias term. Finally, the index of the predicted class with the highest score indicates the predicted outcome of GSAAN. During testing, the model does not update its parameters; instead, it uses the weights learned during training to make predictions for new inputs efficiently. Training and testing in this layer are carried out with GSAAN to obtain better accuracy. However, GSAAN does not incorporate an optimization method for computing the optimal parameters for accurate teaching and training. The weight parameters \(h\_i^{(l + 1)}\) and \(R(t)\) of GSAAN are therefore optimized using Artificial Rabbits Optimization.
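As a concrete illustration, the neighbour sampling, attention, and aggregation of Eqs. (5)–(8) can be sketched in NumPy. This is a minimal sketch, not the authors' implementation: the feature dimension, the choice of ReLU for \(g\_agg\), and the random scoring vector standing in for the learnable \(f(a)\) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gsaan_layer(X, adj, W, l=2):
    """One illustrative GSAAN-style node update.

    X   : (n, d) node-feature matrix
    adj : dict, adj[i] = list of neighbours of node i
    W   : (d, d) learnable weight matrix W^(l)
    l   : number of neighbours sampled per node (Eq. 5)
    """
    n, d = X.shape
    f_a = rng.standard_normal(d)  # stand-in for the learnable scoring function f(a)
    H = np.zeros_like(X)
    for i in range(n):
        nbrs = rng.choice(adj[i], size=min(l, len(adj[i])), replace=False)  # Eq. (5)
        scores = np.array([f_a @ (X[i] + X[j]) for j in nbrs])
        a = softmax(scores)                                                 # Eq. (6)
        agg = sum(aij * (W @ X[j]) for aij, j in zip(a, nbrs))
        H[i] = np.maximum(agg, 0.0)           # g_agg assumed to be ReLU,    Eq. (7)
    return H

def degree_distribution(adj):
    """Degree-based probability distribution over nodes (Eq. 8)."""
    deg = np.array([len(adj[i]) for i in sorted(adj)], dtype=float)
    return deg / deg.sum()
```

On a toy triangle graph, every node has degree two, so Eq. (8) yields a uniform distribution, and the ReLU keeps all updated features non-negative.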
Optimized GSAAN using artificial rabbits optimization
The Artificial Rabbits Optimization algorithm [46] was inspired by rabbits' inherent survival strategies. Rabbits use a foraging strategy called detour foraging to obtain food away from their nests. They dig burrows around the nests and sporadically conceal themselves inside one of them to evade hunters and predators; this technique is known as random hiding. Rabbits hide according to their energy: they search for food near the nests when they have energy, and hide in the nests when their energy is low. The Artificial Rabbits Optimization algorithm optimizes the weight parameters of GSAAN, \(h\_i^{(l + 1)}\) and \(R(t)\), using the following steps:
Step 1: Initialization
The population is initialized in the parameter space on the basis of the Artificial Rabbits Optimization algorithm, using a random uniform distribution, and each individual's fitness is evaluated. This is characterized in Eq. (12).
$$G = \left[ \begin{gathered} G_{1} \hfill \\ \vdots \hfill \\ G_{j} \hfill \\ \vdots \hfill \\ G_{o} \hfill \\ \end{gathered} \right]_{o \times n} = \left[ \begin{gathered} G(z_{1} ) \hfill \\ \vdots \hfill \\ G(z_{j} ) \hfill \\ \vdots \hfill \\ G(z_{o} ) \hfill \\ \end{gathered} \right]_{o \times 1}$$
(12)
where \(G_{j}\) denotes the \(j\)th candidate in a population of size \(o\), and \(G(z_{1})\) through \(G(z_{o})\) denote the corresponding fitness values of the individuals.
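Step 1 and Eq. (12) amount to drawing \(o\) uniform candidates of dimension \(n\) and recording their fitness values. A minimal sketch, in which the bounds and the quadratic stand-in fitness are assumptions:

```python
import numpy as np

def init_population(o, n, bounds, fitness, seed=0):
    """Draw o uniform candidate vectors of dimension n and evaluate
    each one's fitness, mirroring the two columns of Eq. (12)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    G = rng.uniform(lo, hi, size=(o, n))       # population matrix (o x n)
    G_z = np.array([fitness(g) for g in G])    # fitness values G(z_1)..G(z_o)
    return G, G_z
```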
Step 2: Random generation
The weight parameters are generated at random after initialization. The best fitness value is then selected according to defined hyperparameter conditions.
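Selecting the best of the randomly generated candidates is a single reduction over the fitness values; a sketch, assuming minimization and illustrative bounds:

```python
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.uniform(-1.0, 1.0, size=(20, 4))   # randomly generated weight vectors
objective = lambda g: float(np.sum(g ** 2))         # placeholder fitness
scores = [objective(g) for g in candidates]
best = candidates[int(np.argmin(scores))]           # candidate with the best fitness
```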
Step 3: Evaluation of fitness function
A random solution is created from the initial assessments. The fitness function is evaluated through the parameter-optimization values to optimize the loop parameters \(h\_i^{(l + 1)}\) and \(R(t)\). It is exhibited in Eq. (13):
$$Fitness\,\,Function = optimizing\,\,\left[ {h\_i^{(l + 1)} {\text{ and }}R(t)} \right]$$
(13)
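In practice, Eq. (13) means scoring a candidate setting of \(h\_i^{(l + 1)}\) and \(R(t)\) by the loss GSAAN incurs with those values. The sketch below uses a squared-error stand-in objective, since running the real GSAAN forward pass is out of scope here:

```python
import numpy as np

def fitness(candidate, reference):
    """Stand-in for Eq. (13): score a flattened parameter vector
    (h_i^{(l+1)} and R(t) concatenated) against a reference; the real
    objective would be the GSAAN training loss under these parameters."""
    return float(np.sum((candidate - reference) ** 2))
```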
Step 4: Transition from exploration to exploitation using \(h\_i^{(l + 1)}\)
In the Artificial Rabbits Optimization, rabbits tend to exhibit continual detour foraging in the initial iteration stages. However, as the search progresses, they shift their behavior and engage in random hiding more frequently. This adaptation helps the rabbits strike a balance between exploration and exploitation during the optimization process, ensuring they conserve their energy effectively. The energy is computed using Eq. (14):
$$Ey = z_{JJ} \left( {l + \beta_{1} \left[ {z_{best} (l) - z_{bestup} (l)} \right]} \right)$$
(14)
where \(Ey\) denotes the energy of the rabbits, \(z_{JJ}\) is the population member in its adolescence, \(z_{bestup}\) signifies the current best group-member position, \(z_{best}\) represents the current best optimal solution, and \(\beta_{1}\) denotes a random number drawn from a uniform distribution in the range [0, 1].
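Reading Eq. (14) as a detour-foraging step that moves the young member \(z_{JJ}\) by a random fraction of the gap between the best solution and the best group position (one plausible interpretation of the notation), the update is:

```python
import numpy as np

def detour_forage(z_jj, z_best, z_bestup, rng):
    """Detour-foraging/energy update of Eq. (14), with beta_1 ~ U[0, 1]."""
    beta1 = rng.uniform(0.0, 1.0)
    return z_jj + beta1 * (z_best - z_bestup)
```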
Step 5: Random hiding using \(R(t)\)
In the face of potential threats from predators, rabbits instinctively take measures to ensure their survival. In the Artificial Rabbits Optimization algorithm, each rabbit, during every iteration, creates multiple burrows scattered across the dimensions of the search space and randomly chooses one of them as a hiding spot. This strategic behavior of creating and choosing burrows reduces the risk of being captured, increasing the rabbit's chances of survival during the optimization process. Hiding is performed using Eq. (15):
$$c_{j,s} (u) = y_{j} (u) + I \times h_{s} \times y_{j} (v),\quad j = 1, \ldots ,o$$
(15)
where \(I\) is the hiding parameter and \(c_{j,s} (u)\) denotes the randomly selected burrow for hiding the \(j\)th rabbit.
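A sketch of the random-hiding move of Eq. (15), in which the burrow index \(s\) and the position index \(v\) are drawn at random; the hiding parameter \(I\) and the vector \(h\) are assumptions here:

```python
import numpy as np

def random_hide(Y, j, I, h, rng):
    """Random-hiding candidate of Eq. (15):
    c_{j,s}(u) = y_j(u) + I * h_s * y_j(v)."""
    o, dim = Y.shape
    s = int(rng.integers(len(h)))   # randomly selected burrow index
    v = int(rng.integers(dim))      # random coordinate of rabbit j's position
    return Y[j] + I * h[s] * Y[j, v]
```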
Step 6: Natural repulsion for optimizing \(h\_i^{(l + 1)}\) and \(R(t)\) using artificial rabbits optimization
The survival strategies that rabbits employ in nature served as the inspiration for the Artificial Rabbits Optimization approach. The approach consists of two steps, energy shrink and random hiding, which are used to find the time-integral absolute error in the control loop. The loop parameters are optimized here by Eq. (16):
$$K_{tt} = L_{q} \cdot D_{p}$$
(16)
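Putting Steps 1–6 together, the whole loop can be sketched compactly: uniform initialization, an energy term that shifts the swarm from detour foraging to random hiding as iterations progress, and greedy acceptance of improvements. The energy schedule and move details follow the common Artificial Rabbits Optimization formulation and are assumptions rather than the authors' exact settings; a sphere function stands in for the GSAAN fitness of Eq. (13).

```python
import numpy as np

def aro_optimize(fitness, dim, pop=20, iters=200, bounds=(-1.0, 1.0), seed=0):
    """Minimal Artificial Rabbits Optimization sketch (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    G = rng.uniform(lo, hi, size=(pop, dim))          # Step 1: initialization
    fit = np.array([fitness(g) for g in G])
    for t in range(iters):
        # energy shrinks over time, switching exploration -> exploitation
        energy = 4.0 * (1.0 - t / iters) * np.log(1.0 / rng.uniform(1e-12, 1.0))
        for j in range(pop):
            if energy > 1.0:                          # Step 4: detour foraging
                k = int(rng.integers(pop))
                cand = G[j] + rng.standard_normal() * (G[k] - G[j])
            else:                                     # Step 5: random hiding
                burrow = G[j] + rng.uniform() * rng.standard_normal(dim) * G[j]
                cand = G[j] + rng.uniform() * (burrow - G[j])
            cand = np.clip(cand, lo, hi)
            f = fitness(cand)
            if f < fit[j]:                            # greedy acceptance
                G[j], fit[j] = cand, f
    return G[int(np.argmin(fit))], float(fit.min())
```

With the stand-in sphere objective, the greedy acceptance rule guarantees that the best fitness never worsens, and the loop drives it well below the fitness of a typical random initial candidate.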