Accurate classification of rehabilitation exercises for musculoskeletal disorders using artificial intelligence (AI)
Department of ECE, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur 603203, Chengalpattu, Chennai, Tamil Nadu, India
Department of Physiotherapy, Faculty of Health and Medical Sciences, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur 603203, Chengalpattu, Chennai, Tamil Nadu, India
Corresponding author email: vijayakp@srmist.edu.in
Pages: 767-773. DOI: https://doi.org/10.18280/ts.400237
Received: November 6, 2022 | Accepted: December 10, 2022 | Published: April 30, 2023
Open Access
Abstract:
Musculoskeletal pain is one of the major health issues facing the information technology (IT) industry and healthcare professionals. Current IT jobs require people to sit and work in one place for long periods (around 3-4 hours at a stretch). This can cause severe pain in the hip, neck and shoulder and can even lead to paralysis. This work fuses three-dimensional (3D) poses into planar projections to accurately classify images of trunk extension, trunk flexion, wrist extension and wrist flexion exercise positions. Because poses captured in different planes are easily misdetected during this projection process, deep learning algorithms are well suited to improving recognition accuracy and processing speed. A dataset of 200 images of wrist and trunk extension and flexion positions in various planes was created. The performance of the proposed deep learning algorithm is compared with a CNN trained on accelerometer data, a DNN with RGB images, a CNN-GRU with Kinect depth images, a deep hybrid CNN with keyframe images of body parts, an attention-based spatial transformation network (STN), and a multi-scale CNN with Grad-CAM images. The results demonstrate the effectiveness of the proposed deep learning system for musculoskeletal rehabilitation: it identifies the completion of rehabilitation activities with a training accuracy of 98.12% and a validation accuracy of 95%. Our method can track and improve the effectiveness of patients' rehabilitation training with more satisfactory accuracy than several other state-of-the-art CNN-based architectures.
Keywords:
Artificial intelligence (AI), deep neural network, work-related musculoskeletal disorders, home rehabilitation training, upper body exercises
1. Introduction
In a developing country like India, industries are growing rapidly and the number of people employed in the IT sector is increasing significantly. IT workers are among the most vulnerable parts of the workforce and are at risk of exploitation; even in developed countries, their safety and health are at risk. In rural areas, there is still a lack of human resources, rehabilitation and training facilities. This has contributed to the worldwide spread of MSDs [1]. The number one health issue faced by IT workers is work-related musculoskeletal disorder (WMSD). It affects the muscles, ligaments and nervous system of the body. IT staff work with a static body posture and insufficient movement of smaller body parts such as the wrists and fingers. This way of working causes symptoms such as joint stiffness, redness and swelling of the affected body part, possible changes in the color of the skin surface, and may also result in reduced sweating [2]. In addition, psychosocial risk factors such as work-related stress, lack of support from co-workers or supervisors, excessive mental strain and lack of appreciation for the work performed further drive the development of WMSD in this field. Many of these disorders can lead to a neurological disorder, Parkinson's disease, which is increasing rapidly among IT personnel [3].
In 2018/19, the average prevalence of work-related musculoskeletal disorders per 100,000 workers was 1.13% in industry, 1.83% in the construction sector and 1.5% in health and social services [4]. The Tamil Nadu University of Physical Education and Sports recently conducted a study of 296 health professionals. About 156 of the 296 had to work in an uncomfortable position, which is a high risk factor for WMSD; 205 of 296 reported low back pain and 137 of 296 required consultation with a surgeon for further treatment [5]. Another regional study conducted in Surat, involving 100 people, assessed scapular asymmetry in MNC personnel; 58 people were affected by scapular asymmetry, which can lead to MSD [6].
Previous studies recognized and analyzed movement positions using sensor-based approaches such as inertial measurement units (IMUs) and accelerometers, Kinect, and RGB cameras. There are still some problems with intelligently applied sensing tools in upper limb rehabilitation [7]. Image-based analysis using attention-based multi-scale deep neural networks, GRUs and CNNs [8-10] underlies existing motion image classification methods. Compared with multi-sensor setups, single-sensor data suffers from larger errors, higher noise and lower recognition accuracy. Moreover, self-monitoring of patients' rehabilitation remains a complex task [11, 12]. Methods that recognize upper limb rehabilitation activities from surface electromyography signals extract time-domain, frequency-domain, time-frequency and entropy features from physiological signals [13, 14]. Figure 1 shows various deep learning techniques used in image-based motion analysis.
Figure 1. Various existing deep learning methods for motion position classification
The contributions of this article are as follows:
The article uses state-of-the-art technology to provide the convenience of smart rehabilitation for MSD patients in the current COVID-19 situation.
Grad-CAM RGB image analysis is less accurate, and most existing image analysis methods require external hardware to classify motion positions. In the proposed system, RGB images captured by any camera are analyzed directly and classification results are provided.
A new method is proposed to classify images of different movement poses captured from different planar projection angles.
The model can classify and evaluate various rehabilitation activities in minimal time.
Newer advances such as connected care and artificial intelligence, 3D printing, and prosthetics that mimic human movement are encouraging. For patients with musculoskeletal problems, the availability of micro- and macro-level data can serve as a catalyst for providing care tailored to behavioral, cultural, genetic and psychological needs [15]. Healthcare providers will benefit greatly from practical algorithms coupled with biomarkers that can indicate when and how to intervene. Exoskeletons, 3D printing and virtual rehabilitation promise precise, low-cost interventions for patients, but are still in the early stages of development. Modern simulators, tools and systems, such as microprocessor-based and computer-aided devices with real-time monitoring, are now part of motor rehabilitation technology [16]. Robotic rehabilitation devices are becoming increasingly popular in the treatment of limb injuries, and an important area of research concerns the device properties that make them useful after severe traumatic events [17]. Correct and consistent use of mechanotherapy equipment is effective.
The rest of the paper is organized as follows: Section 2 reviews the literature related to AI-based exercise therapy systems. Section 3 discusses the proposed approach, including data augmentation and preprocessing steps, hyperparameter tuning, and a hybrid deep learning approach for model training. Section 4 presents the results and a discussion of the various rehabilitation activities studied. Section 5 presents conclusions and recommendations for future research.
2. Related works
An accelerometer worn by the user has been used to track mobility and record upper body data; 24 models covering 12 full motion positions and 12 partial motion poses were evaluated [1]. The Kinect software development kit has been used for automatic 3D joint angle assessment, with standard deviations of the mean absolute error ranging from ±5.6 and ±5.1 degrees to ±8.5 and ±8.1 degrees [2]. An automated system based on RGB-depth cameras has been developed to detect and recognize activities involving the upper limbs [6]. In the context of WMSD, exercise therapy recommended by most physiotherapists is a very cost-effective and accessible rehabilitation technique. A smart fitness system has been proposed that tracks activity level, number of repetitions and total exercise time, using a custom dataset with a total of 420 samples [8].
Recent studies have shown excellent accuracy in estimating movement positions, and most of this work uses sensor systems. Acceleration, roll, vibration, rotation and degrees of freedom of movement can be detected and measured using passive sensors [10]. Table 1 lists several existing motion position recognition methods.
Table 1. Various existing motion position analysis methods
Reference | Purpose/objective of the work | Method | Measured parameters | Achievements |
[1] | To help the patient verify that each of twelve hand and arm rehabilitation exercises has been performed correctly. | 15-layer CNN on accelerometer sensor data | Wrist extension, wrist flexion, pronation, supination, radial deviation, ulnar deviation, arm flexion, arm extension, arm to chest, arm away from trunk, shoulder abduction, shoulder adduction. | An accuracy of 98.61% was achieved using the data collected from the accelerometer sensor. |
[8] | The system can monitor the type, count and cycle of exercises. | Deep neural networks | The body movements identified include four dumbbell exercises and three leg exercises, such as dumbbell curl, lateral raise, dumbbell shoulder press, dumbbell raise, sit-up, standing calf raise and heel lift. | Deep neural networks learn well on small datasets, with an accuracy of 97.61% for action recognition and over 96% for SVM. |
[9] | An RGB-depth camera captures images of the patient's exercises and provides guidance/feedback to the patient without human intervention. | CNN-GRU | Measures upper body movements such as shoulder and elbow flexion, extension and abduction. | CNN-GRU model with 100% accuracy. |
[10] | Fitness movement is estimated using real-time images and DL algorithms; the effectiveness of the proposed system is then assessed by simulation. | Deep hybrid convolutional networks | The collected data enables efficient movement analysis and possible diagnosis of injuries. | Accuracy of 97.80% was achieved. |
[11] | Compare training poses with images of standard poses and provide patient feedback and guidance. | Spatial transformation network (STN) with attention-based multi-scale CNN | Body posture assessment in exercises over eight segments of the human body: the left upper arm, left forearm, left thigh, left lower leg, right upper arm, right forearm, right thigh and right lower leg. | The average accuracy of ST-AMCNN pose matching across different parts of the human body is 70.02%. |
3. Methodology
Given the current state and complexity of functional rehabilitation of the upper limb in MSDs, this study uses deep learning methods to process the collected postural data, complete the categorization of upper extremity rehabilitation activities for MSD patients, and support clinicians in developing the most appropriate rehabilitation and support program. The following subsections discuss the functions of the individual stages of the proposed system. Figure 2 shows the structure of the proposed method.
The workflow of the proposed system is as follows.
Using the image data generator feature, a moving-pose image dataset collected from volunteers and online sources is imported from a local directory.
During preprocessing, the input images are resized to 200x200 and split into an 80% training dataset and a 20% validation dataset.
The CNN DenseNet architecture is used to train the model to classify moving-pose images from the input data.
The results are generated by importing the test image data from the appropriate directory.
Each phase of the proposed system is discussed in the following subsections.
3.1 Data augmentation and pre-processing
In this phase, data were collected from 15 volunteers (12 men and 3 women) aged 25 to 40 years. Additional RGB images of upper body exercises were taken from open-source online databases. Example images are shown in Figure 3. Each image is labeled according to its class. Our work includes four categories: wrist flexion (WF), wrist extension (WE), trunk extension (TE) and trunk flexion (TF).
Figure 2. Block diagram of the proposed model
Figure 3. Sample images
Figure 4. Distribution of the different classes in the training and validation datasets
These images are then resized to 200x200 so that the entire image dataset can be processed by the CNN architecture. A total of 200 samples were used in this work. All images are split into 80% training data and 20% validation data, as shown in Figure 4.
The 80/20 training/validation split (the Pareto principle) is used to avoid overfitting the neural network. Prior work suggests that the proportion of samples retained for validation should be proportional to the square root of the total number of independent variables; the chosen split therefore supports the generalization of our results.
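As an illustration of this pre-processing stage, the sketch below shows how the 200x200 resize and the 80/20 split could be set up with Keras' ImageDataGenerator. This is only a minimal sketch under assumptions: the directory name, batch size and sub-folder-per-class layout are illustrative and not taken from the paper.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (200, 200)      # every image is resized to 200x200
BATCH_SIZE = 16            # illustrative batch size (not stated in the paper)
DATA_DIR = "pose_dataset"  # hypothetical folder with one sub-folder per class (WF, WE, TE, TF)

datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # scale pixel values to [0, 1]
    validation_split=0.20,    # hold out 20% of the samples for validation
)

train_gen = datagen.flow_from_directory(
    DATA_DIR, target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", subset="training",
)
val_gen = datagen.flow_from_directory(
    DATA_DIR, target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", subset="validation",
    shuffle=False,            # keep file order fixed so labels align with predictions later
)
```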
3.2 Model training using CNN
Figure 5 shows the CNN design, using the dense architecture pipeline, for classifying movement positions.
Figure 5. Dense CNN architecture
The stages of training the CNN model are described in the remainder of this section.
3.2.1 Convolutional 2D layers
A convolutional layer filters the input image. In this design, the first stage produces an output of size 100x100 with 8 filters; a later stage produces a 25x25 output with 16 filters. To create feature maps from the pre-processed data, 2D convolution uses the following equation:
$f[x, y]=(g * k)[x, y]=\sum_m \sum_n k[m, n]\, g[x-m, y-n]$ (1)
The input image is represented as g, the kernel as k, the row and column indices of the kernel as m and n, and the row and column indices of the output matrix as x and y.
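To make Eq. (1) concrete, the short NumPy sketch below applies a kernel to an image with a plain, unpadded, stride-1 convolution. It is for illustration only; the actual Conv2D layers in the model use learned 3x3 kernels with strides and padding chosen to produce the output shapes in Table 2.

```python
import numpy as np

def conv2d(g: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Valid 2D convolution following Eq. (1): f[x, y] = sum_m sum_n k[m, n] * g[x-m, y-n]."""
    kh, kw = k.shape
    k_flipped = k[::-1, ::-1]  # flipping the kernel distinguishes convolution from correlation
    out = np.zeros((g.shape[0] - kh + 1, g.shape[1] - kw + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(k_flipped * g[x:x + kh, y:y + kw])
    return out

# Example: a 3x3 vertical-edge kernel applied to a random 8x8 "image"
feature_map = conv2d(np.random.rand(8, 8), np.array([[1, 0, -1]] * 3, dtype=float))
print(feature_map.shape)  # (6, 6)
```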
Downsampling/max pooling: This layer reduces the image size. The design contains two pooling layers: the first reduces the 100x100 feature map (from Conv2D) to 50x50, and the second reduces the 25x25 feature map (from Conv2D_1) to 12x12.
Flatten layer: Commonly used in the transition from convolutional layers to fully connected layers, the flatten layer reduces the multi-dimensional input to one dimension.
Dense layer: Based on the outputs of the convolutional layers, images are classified using dense layers. The neurons in each layer of the neural network compute a weighted sum of their inputs before passing it to a non-linear function called the activation function.
Table 2 summarizes the layers of the CNN and their parameters.
Table 2. Summary of the CNN model
Layer (type) | Output shape | Parameters |
Conv2D | 100, 100, 8 | 224 |
MaxPooling2D | 50, 50, 8 | 0 |
Conv2D_1 | 25, 25, 16 | 1168 |
MaxPooling2D_1 | 12, 12, 16 | 0 |
Flatten | 2304 | 0 |
Dense | 512 | 1180160 |
Dense_1 | 512 | 262656 |
Dense_2 | 4 | 2052 |
The parameters in the CNN architecture are computed using the following expressions:
$\text{Conv2D} = i \times k(h \times w) \times N + b$ (2)
where i is the number of filters, k(h*w) is the kernel size, N is the number of input channels, and b is the number of biases.
$\text{Dense} = P \times O + b$ (3)
where P is the number of outputs of the previous layer, O is the number of output channels, and b is the number of biases.
3.2.2 Convolutional layer details
Layer 1: The filter size multiplied by the number of filters gives the number of weights. Each filter in the network is 3x3 and layer 1 has 8 filters (8*3*3 = 72). Multiplying by the number of input channels (3) gives 72*3 = 216. The number of biases equals the number of filters, so the total number of parameters in layer 1 is 216 + 8 = 224.
Layer 2: Similarly, for Conv2D_1, 16*3*3 = 144; this value is multiplied by the number of filters in the previous layer (i.e., 8), giving 144*8 = 1152. The biases are then added, so the second layer has 1152 + 16 = 1168 parameters.
3.2.3 Dense layer
The parameter count is estimated as the number of inputs multiplied by the number of outputs of that layer, plus the biases. In our case, Dense: 2304*512+512 = 1,180,160; Dense_1: 512*512+512 = 262,656; Dense_2: 512*4+4 = 2,052.
The downsampling (pooling) and flatten layers perform only fixed computations; no learnable weights are involved, so there are no parameters to learn and their parameter counts are 0.
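For concreteness, the sketch below builds a sequential Keras model whose output shapes and parameter counts reproduce Table 2. The strides and 'same' padding are inferred from the reported shapes (the 200x200 input halved to 100x100 by the first Conv2D, and so on); they are assumptions, since the paper does not state them explicitly.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(200, 200, 3)),
    layers.Conv2D(8, (3, 3), strides=2, padding="same", activation="relu"),   # 224 parameters
    layers.MaxPooling2D(),                                                     # 0 parameters
    layers.Conv2D(16, (3, 3), strides=2, padding="same", activation="relu"),  # 1,168 parameters
    layers.MaxPooling2D(),                                                     # 0 parameters
    layers.Flatten(),                                                          # 12*12*16 = 2,304 features
    layers.Dense(512, activation="relu"),                                      # 1,180,160 parameters
    layers.Dense(512, activation="relu"),                                      # 262,656 parameters
    layers.Dense(4, activation="softmax"),                                     # 2,052 parameters, one unit per class
])
model.summary()  # the printed layer table should match Table 2
```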
4. Results and discussion
The performance results are presented in this section. Jupyter Notebook was used to implement each system featured in this study. The models are trained offline and can then be used to detect motion classes from RGB images. The proposed dense CNN is trained with the following hyperparameters: the adaptive moment estimation (Adam) optimizer for up to 200 epochs, the ReLU activation function for the hidden layers of the sequential model, and the softmax activation function at the output to predict the class of the input in the multiclass setting. Figure 6(a) shows the accuracy achieved over the epochs. Figure 6(b) shows the loss of the system over the epochs. Figure 6(c) compares loss and accuracy.
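A minimal training configuration matching the hyperparameters listed above (Adam optimizer, up to 200 epochs, softmax output for the four classes) might look as follows; the loss choice and the reuse of the data generators from the earlier sketch are assumptions.

```python
model.compile(
    optimizer="adam",                   # adaptive moment estimation (Adam)
    loss="categorical_crossentropy",    # multiclass objective paired with the softmax output
    metrics=["accuracy"],
)

history = model.fit(
    train_gen,                          # training generator from the pre-processing sketch
    validation_data=val_gen,
    epochs=200,                         # trained for up to 200 epochs
)
```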
The accuracy on the training and validation data increases gradually over the first 25 epochs before stabilizing. The subsequent trends show only slight changes in the training and validation losses and a slight improvement in accuracy. Similarly, the training and validation losses decrease sharply during the first 25 epochs, with some oscillation, and the errors become less and less frequent over time. The model is likely to classify the images in the test dataset well, since the training and validation losses converge closely, indicating that it does not overfit.
Figure 6(a). Accuracy vs. epochs
Figure 6(b). Loss vs. epochs
Figure 6(c). Loss and accuracy comparison results
4.1 Evaluation of performance indicators
The classification results of the model are summarized using a confusion matrix over the classes of the test data, as shown in Figure 7.
Figure 7. Classification of the different classes of validation images
Ten images in each category were used to generate a confusion matrix. The following expressions and equations are used to determine the performance metrics for this confusion matrix.
True Positive (TP): a correctly predicted positive result
True Negative (TN): a correctly predicted negative result
False Positive (FP): an incorrectly predicted positive result
False Negative (FN): an incorrectly predicted negative result
Use the following equations to estimate system performance metrics.
$\text{Accuracy}(\%)=\frac{TP+TN}{TP+TN+FP+FN} \times 100$ (4)
$\text{Precision}(\%)=\frac{TP}{TP+FP} \times 100$ (5)
$\text{Recall}(\%)=\frac{TP}{TP+FN} \times 100$ (6)
$F1\text{-Score}(\%)=\frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision}+\text{Recall}}$ (7)
Table 3 shows the classification report for the proposed method, computed using the formulas above for accuracy, precision, recall and F1 score.
Table 3. Classification report
Class name | Accuracy | Precision | Recall | F1 score |
Trunk extension (TE) | 100% | 100% | 100% | 100% |
Trunk flexion (TF) | 100% | 100% | 100% | 100% |
Wrist extension (WE) | 100% | 100% | 100% | 100% |
Wrist flexion (WF) | 90% | 90% | 90% | 90% |
4.2 Performance Comparison
This section compares the recognition accuracy of the proposed system with other existing methods. Figure 8 shows a comparison of the performance of the proposed system with different techniques.
Figure 8. Comparison of the performance of the proposed system with existing methods
Here we compare the proposed method with existing deep learning methods. One approach uses time-series data from accelerometer sensors with a CNN to recognize motion positions and achieves 98.61% accuracy [1]. DNN methods based on RGB images still need to be optimized for accuracy [8]. The CNN-GRU model training method can provide good prediction accuracy (100%), but it requires additional hardware to complete the process [9]. The deep hybrid convolutional network approach tracks only the movement of a single target limb and provides 97.80% accuracy [10]. The average accuracy of ST-AMCNN pose matching across different parts of the human body is 70.02%, and it requires more computation time to complete the assessment [11]. The proposed system achieves a recognition accuracy of 98.12% on real-time data. Our method outperforms other state-of-the-art traditional CNN designs in monitoring and improving the effectiveness of patient rehabilitation training and provides a higher level of user satisfaction.
5. Conclusion
The goal of physiotherapists is to make effective rehabilitation tools easy to use for a wide range of problems. Modern approaches, including deep learning, reduce the need for detailed expert knowledge as recognition capability increases. Most existing techniques use waveform sensor data [1], Kinect depth images, RGB fitness exercise images, body keyframe images [8-10] or spatial image transformation with gradient features [11] to classify exercise positions. In this work, RGB images of various movements were captured by a mobile smartphone camera to classify complex postures of rehabilitation exercises. Our method can track and improve the effectiveness of patients' rehabilitation training across different image plane projections with more satisfactory accuracy than several other state-of-the-art traditional CNN-based architectures. Despite using a smaller set of samples, the proposed method achieves acceptable recognition accuracy, and its accuracy can be further improved by training on a larger dataset.
Although the proposed method can effectively recognize and classify motion poses from RGB images, some limitations remain to be addressed in the future. Different parts of the body tend to overlap in different postures, so minor deviations made by the patient may not be located; real-time video monitoring algorithms with deeper image analysis are needed to refine this. Further development is also needed to identify emotions expressed by patients, such as fear and contempt. Since this framework is designed for home rehabilitation, sensitive data must be exchanged securely between patients around the world; in this context, we plan to implement our method on a blockchain through federated learning in the future.
Nomenclature
g | input image |
k | kernel |
m, n | kernel row and column indices |
x, y | output row and column indices |
i | number of filters |
k(h*w) | kernel size |
N | number of input channels |
b | number of biases |
P | number of outputs of the previous layer |
O | output channel |
TP | true positive |
TN | true negative |
FP | false positive |
FN | false negative |
References
[1] Nair, B.B., Sakthivel, N.R. (2022). A deep learning-based upper limb rehabilitation exercise recognition system. Arabian Journal for Science and Engineering. https://doi.org/10.1007/s13369-022-06702-y
[2] Rodrigues, P.B., Xiao, Y., Fukumura, Y.E., Awada, M., Aryal, A. (2022). Ergonomic assessment of office worker postures using automatic 3D joint angle assessment. Advanced Engineering Informatics, 52: 101596. https://doi.org/10.1016/j.aei.2022.101596
[3] Adem, H.M., Tessema, A.W., Simegn, G.L. (2022). Classification of Parkinson's disease based on EMG signals from different upper limb movements using a multiclass support vector machine. International Journal Bioautomation, 26(1): 109-125. https://doi.org/10.7546/ijba.2022.26.1.000849
[4] Govaerts, R., Tassignon, B., Ghillebert, J., Serrien, B., De Bock, S., Ampe, T., El Makrini, I., Vanderborght, B., Meeusen, R., De Pauw, K. (2021). Prevalence and incidence of work-related musculoskeletal disorders in secondary industries of 21st century Europe: A systematic review and meta-analysis. BMC Musculoskeletal Disorders, 22(1): 1-30. https://doi.org/10.1186/s12891-021-04615-9
[5] Shankar, C.M., Venkatesan, R. (2021). Work-related musculoskeletal disorders among male health professionals in a private Indian institution: Prevalence and associated risk factors. Indian Journal of Physiotherapy and Research, 3(1): 3-7. https://doi.org/10.4103/ijptr.ijptr_45_20
[6] Parikh, S.M., Mehta, J.N., Thakkar, M., Thakkar, N., Gauswami, M. (2022). Prevalence and risk factors of musculoskeletal disorders among class 4 workers of a tertiary care hospital in rural western India: A cross-sectional study. National Journal of Physiology, Pharmacy and Pharmacology, 12(12): 2131-2131. https://doi.org/10.5455/njppp.2022.12.03144202212052022
[7] Shen, C., Ning, X., Zhu, Q., Miao, S., Lu, H. (2022). Application and comparison of deep learning methods based on multimodal inertial data for the assessment of upper limb function.
[8] Chen, C. (2022). Machine learning research in smart fitness systems. Journal of Sensors, 2022: 6293856. https://doi.org/10.1155/2022/6293856
[9] Bijalwan, V., Semwal, V.B., Singh, G., Mandal, T.K. (2022). HDL-PSR: Modeling spatial features in post-stroke rehabilitation using a hybrid deep learning approach. Neural Processing Letters, 1-20. https://doi.org/10.1007/s11063-022-10744-6
[10] Cai, H. (2022). Application of intelligent real-time image processing in fitness motion detection under the Internet of Things. Journal of Supercomputing, 78(6): 7788-7804. https://doi.org/10.1007/s11227-021-04145-0
[11] Qiu, Y., Wang, J., Jin, Z., Chen, H., Zhang, M., Guo, L. (2022). Assessment of movement quality in rehabilitation training with deep learning-guided posture matching. Biomedical Signal Processing and Control, 72: 103323. https://doi.org/10.1016/j.bspc.2021.103323
[12] Ponnusamy, V., Coumaran, A., Shunmugam, A.S., Rajaram, K., Senthilvelavan, S. (2020). Smart glass: Real-time leaf disease detection using YOLO transfer learning. 2020 International Conference on Communication and Signal Processing (ICCSP), pp. 1150-1154. https://doi.org/10.1109/ICCSP48568.2020.9182146
[13] Zhang, C., Zou, J., Ma, Z., Wu, Q., Sheng, Z., Yan, Z. (2021). Recognition of upper limb function based on physiological signals and its application in limb rehabilitation. Traitement du Signal, 38(6): 1887-1894. https://doi.org/10.18280/ts.380633
[14] Zhang, C., Zou, J., Ma, Z. (2021). Recognition and analysis of limb rehabilitation signals based on wavelet transform. Traitement du Signal, 38(3): 689-697. https://doi.org/10.18280/ts.380316
[15] Ponnusamy, V., Marur, D.R., Dhanaskodi, D., Palaniappan, T. (2021). Deep learning-based X-ray baggage hazardous object detection - An FPGA implementation. Revue d'Intelligence Artificielle, 35(5): 431-435. https://doi.org/10.18280/ria.350510
[16] Kataria, S., Ravindran, V. (2022). Musculoskeletal care - at the confluence of data science, sensors, engineering and computation. BMC Musculoskeletal Disorders, 23(1): 1-11. https://doi.org/10.1186/s12891-022-05126-x
[17] Zhang, Y., Li, W., Yang, J., Liu, Z., Wu, L. (2022). Cutting-edge approaches and innovations in sports rehabilitation training: The effectiveness of new technologies. Education and Information Technologies, 1-18. https://doi.org/10.1007/s10639-022-11438-1