An intelligent method of energy monitoring and analysis based on image recognition technology (2023)


Lei Yu* | Mazar Ali | Imran Ali Khan | Tahira Maksooda | Yinling Wang | Haining Gao

Henan Provincial Engineering Laboratory of New Energy Conversion and Control Technology, Huanghuai University, Zhumadian 463000, China


Department of Computer Science, COMSATS University Islamabad, Abbottabad 22060, Pakistan


Corresponding author email:

yulei@huanghuai.edu.cn

Pages: 657-665

DOI: https://doi.org/10.18280/ts.400224

Received: December 25, 2022

Accepted: 6 March 2023

Published: April 30, 2023

Open Access

Abstract:

Intelligent energy monitoring and analysis based on image recognition technology can provide more accurate real-time data support for the energy system and improve energy efficiency and management. Such methods are sensitive to factors such as image quality, lighting, and viewing angle; when image quality is low, recognition performance may be poor. Some approaches, such as feature extraction and deep learning, require a large amount of computation and have relatively poor real-time performance, which can affect the timeliness of energy monitoring. Therefore, this study investigates an intelligent energy monitoring and analysis method based on image recognition technology. The dial of the energy monitoring instrument is preprocessed with brightness adjustment and the Hough transform. Once the pointer instrument's dial is extracted, connected domain analysis, a thinning algorithm, line fitting, and a pointer direction estimation mechanism are used to detect the pointer and calculate its angle. A method for recognizing the readings of energy monitoring instruments is then specified. The effectiveness of the proposed method was confirmed by analysis of the experimental results.

1. Introduction

With the rapid development of society and the growing demand for energy, the importance of energy monitoring and management is becoming more and more evident [1-5]. Intelligent energy monitoring and analysis based on image recognition technology can provide more accurate real-time data support for the energy system and improve the efficiency and level of energy management [6-13]. Against the background of Industry 4.0 and digital transformation, all areas of life are continuously evolving toward intelligence. The energy industry also needs to move with the times and use technical means such as image recognition to intelligently monitor and analyze energy. At all stages of energy production, transmission, and use, it is necessary to monitor equipment operation and safety hazards in real time [14-21]. Image recognition technology can quickly and accurately identify abnormal states of power equipment, thereby improving its operating efficiency and safety. It is therefore meaningful to study such methods.

Imran et al. [22] evaluated and compared the state-of-the-art You Only Look Once (YOLO) deep learning algorithm with traditional handcrafted text extraction and recognition features. Their image dataset contains 10,000 electricity meter images, further augmented by in-plane rotation and related data augmentation to make the deep learning algorithm resilient to these image variations. For training and evaluation, the dataset was labeled to create a ground truth for all images. The proposed method can be very helpful in reducing the time and effort currently associated with meter reading, i.e., staff visiting homes, photographing meters, and manually extracting readings from these images. To solve the problems of low efficiency and low quality of manual sorting in traditional meter recycling, Hu et al. [23] used digital image processing technology to build an automatic sorting system for meter recycling based on an artificial neural network. First, the basic requirements of the system were analyzed in detail, and then the principle and method of image recognition using an artificial neural network were presented. On this basis, the overall framework for automatic sorting of recycled electricity meters was built. Finally, the functional modules were designed and implemented, and an Azure database was built on the SQL Server platform to realize the system application of this research. The final application shows that the automatic sorting system built according to the research results features a simple interface and easy operation, can significantly improve the efficiency and quality of meter recycling and sorting, and has practical significance for the development of the State Grid industry. Bajaj and Yemula [24] proposed the Meter Image Capture and Processing System (MICAPS), an image-based data extraction technique. MICAPS is implemented by integrating a Raspberry Pi with a Raspberry Pi camera. Python code is used to configure the camera, take photos of the meter's seven-segment display, and send them to the cloud, where optical character recognition algorithms convert the seven-segment display into text. Kanagarathinam and Sekar [25] describe collecting sets of raw images of digital electricity meter screens and detecting and recognizing the seven-segment digits in the collected samples, which can help reduce the cost of Advanced Metering Infrastructure (AMI). The proposed dataset has great potential for electricity billing based on fully automatic optical character recognition (OCR).

At present, the image-based methods for recognizing energy monitoring instrument data in intelligent energy monitoring and analysis mainly include template matching, edge detection, feature extraction, and deep learning based methods. Most of these methods are sensitive to factors such as input image quality, lighting, and viewing angle, and recognition performance may be poor when image quality is low. Some methods, such as feature extraction and deep learning, require a large amount of computation and have relatively poor real-time performance, which can affect the timeliness of energy monitoring. In addition, existing methods work well for some types of energy monitoring instruments but may generalize poorly to new or different instrument types, requiring re-training or re-tuning of the model. For this purpose, this study investigates intelligent energy monitoring and analysis methods based on image recognition technology. Section 2 preprocesses the dials of energy monitoring instruments using brightness adjustment and the Hough transform. After the dial of the pointer instrument is extracted, Section 3 uses connected domain analysis, a thinning algorithm, line fitting, and a pointer direction estimation mechanism to detect the pointer and calculate its angle. Section 4 shows how to recognize the readings of the energy monitoring instrument. The effectiveness of the proposed method was confirmed by analysis of the experimental results.

2. Preprocessing of the energy monitoring instrument dial

Figure 1. General architecture of the intelligent energy monitoring and analysis system

In the actual intelligent energy monitoring and analysis scene shown in Figure 1, the complexity of the recording environment means that lighting easily affects the imaging of analog meters, which greatly interferes with the subsequent automatic identification and reading of the instruments. To improve the accuracy of automatic identification of instrument readings, image preprocessing is essential. In real scenes, it is often difficult to maintain constant lighting conditions, and adjusting the brightness helps to eliminate the effects of lighting changes and bring the image closer to the ideal state. Moreover, the image contrast can be improved to make the differences between the instrument dial, pointer, and scale more obvious, which facilitates later identification and positioning. The Hough transform is a method for detecting specific geometric shapes in images, such as lines and circles. With the Hough transform, the edge of the instrument dial can be efficiently detected, allowing the target to be accurately located. At the same time, straight lines in the image can be detected, which is useful for pointer positioning and detection and then serves as the basis for pointer angle calculation and reading calculation.

Histogram equalization (HE) and contrast-limited adaptive histogram equalization (CLAHE) can both be used to adjust the brightness of the energy monitor image. However, in the real scenario of intelligent energy monitoring and analysis, using CLAHE to adjust the brightness of the energy monitor image has advantages over HE, such as local brightness adjustment, prevention of excessive noise amplification, preservation of image detail, and avoidance of local saturation. These advantages help improve image quality and further improve the accuracy and performance of energy monitoring instrument image data recognition. The basic flow of the algorithm is as follows:

(1) Convert the input color image to luminance-chrominance space so that only the luminance component can be processed in subsequent steps without affecting the color information.

(2) Split Areas: Divide the brightness component image into multiple small areas (tiles).


Local histogram equalization: equalizes the brightness histogram of each small area. This step can improve the contrast of local regions while preserving detailed image information.

(3) For the brightness histogram of each small region, each gray level corresponds to a bin. If a bin's count is greater than the specified threshold, the pixels exceeding the threshold are clipped. Suppose the number of gray levels in each processed small area is expressed as $I_g$, the width of the small area as $Q_a$, and its height as $Q_b$; then the average number of pixels per gray level can be calculated using the following formula:

$X_{lQ}=\frac{Q_a \times Q_b}{I_g}$ (1)

The actual clipping threshold can be calculated from the clipping coefficient $I_{pj}$ using the following formula:

$V_g=I_{pj} \times X_{lQ}$ (2)

The clipped pixels are redistributed evenly. Suppose the redistribution step is expressed as V, the total number of clipped pixels as K, and the maximum gray level of the image as $V_c$; then the number of pixels assigned to each gray level, $x_l$, and the sliding interval V are given by:

$x_l=\frac{K}{I_g}$ (3)

$V=\frac{V_c}{K}$ (4)

(4) Contrast threshold: To avoid excessive noise amplification, a contrast threshold can be set. When performing local histogram equalization, histogram counts above the threshold are clipped, and the clipped amounts are evenly redistributed over the other gray levels so that the contrast stays within the set bound.

(5) Region fusion: Because each small region has undergone independent histogram equalization, unnatural edge transitions may appear. Therefore, a weighted average must be performed over adjacent regions to achieve a smooth transition effect.

Take the center point of each small region as a sample point. Let $o=(c-c_-)/(c_+-c_-)$ and $u=(g-g_-)/(g_+-g_-)$. Suppose the gray value of pixel (c, g) is expressed as C(n), and the gray values of the four adjacent sample points around the pixel are expressed as $C_{--}$, $C_{+-}$, $C_{-+}$, and $C_{++}$. Traversing the image, the following gives the grayscale bilinear interpolation formula:

$C(n)=o\left[u C_{--}(n)+(1-u) C_{+-}(n)\right]+(1-o)\left[u C_{-+}(n)+(1-u) C_{++}(n)\right]$ (5)

(6) Convert back to the original color space: combine the CLAHE-processed luminance component with the color components of the original image, then convert the image back to its original color space (e.g., RGB).

After performing the above steps, a brightness-adjusted image of the energy meter is obtained, which is used for subsequent image recognition and analysis.
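As a concrete illustration of Eqs. (1)-(4), the clipping quantities can be computed directly. The following is a minimal pure-Python sketch; the tile size, clipping coefficient, and clipped-pixel count are illustrative assumptions, not values from this paper:

```python
# Sketch of the CLAHE clipping quantities in Eqs. (1)-(4).
# Assumed example values: a 64x64 tile (Q_a x Q_b) with I_g = 256 gray
# levels and a clipping coefficient I_pj = 4 (all illustrative).
Q_a, Q_b = 64, 64        # tile width and height in pixels
I_g = 256                # number of gray levels in the tile
I_pj = 4                 # clipping coefficient

XlQ = (Q_a * Q_b) / I_g  # Eq. (1): average pixels per gray level
V_g = I_pj * XlQ         # Eq. (2): actual clipping threshold

# Suppose K pixels were clipped above V_g in this tile; they are
# redistributed evenly over the I_g gray levels (Eq. (3)), and the
# sliding interval follows from the maximum gray level V_c (Eq. (4)).
K = 512                  # illustrative number of clipped pixels
V_c = 255                # maximum gray level
x_l = K / I_g            # Eq. (3): pixels redistributed per gray level
V = V_c / K              # Eq. (4): sliding interval

print(XlQ, V_g, x_l, V)  # 16.0 64.0 2.0 0.498046875
```

In a production pipeline the same adjustment is typically done with a library routine (e.g., OpenCV's `createCLAHE`, whose `clipLimit` plays the role of $I_{pj}$), but the arithmetic above is what the clip limit amounts to per tile.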

Figure 2. Schematic of the Hough circle detection principle

In the real-world scenario of intelligent energy monitoring and analysis, the Hough transform is an effective geometric shape detection method for detecting the circular dial in images of energy monitoring instruments. For circle detection, the parameter space includes the coordinates of the center (o, u) and the radius f. The Hough transform finds the parameter combination most likely to represent the circle by discretizing and accumulating votes in the parameter space. Before performing the Hough transform, image edge detection must first be performed to extract the dial edge information. Then, according to the equation of the circle, each edge point of the image is treated as a point on the circumference, and a circle is drawn for it in parameter space with the center and radius as parameters. In the parameter space, the circles corresponding to all edge points are accumulated, and the parameter combination with the highest accumulated value is found, i.e., the detected circular target. Figure 2 is a schematic diagram of Hough circle detection, which locates circles using a voting mechanism. In particular, the standard circle equation $(c-o)^2+(g-u)^2=f^2$ is converted into the polar coordinate system, namely:

$o=c-f \cos \varphi ; u=g-f \sin \varphi$ (6)
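To make Eq. (6) concrete, the following minimal pure-Python sketch implements Hough circle-center voting for a known radius. The synthetic edge points and accumulator resolution are illustrative assumptions; a real implementation (e.g., OpenCV's `HoughCircles`) would also search over the radius and use a gradient-based accumulator:

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, n_angles=360):
    """Vote for circle centers: each edge point (c, g) casts votes along
    o = c - f*cos(phi), u = g - f*sin(phi)  (Eq. (6)) for a fixed radius f;
    the accumulator cell with the most votes is the detected center."""
    acc = Counter()
    for c, g in edge_points:
        for k in range(n_angles):
            phi = 2.0 * math.pi * k / n_angles
            o = round(c - radius * math.cos(phi))
            u = round(g - radius * math.sin(phi))
            acc[(o, u)] += 1
    return acc.most_common(1)[0][0]

# Synthetic dial edge: 36 points on a circle centered at (40, 30), radius 12.
points = [(40 + 12 * math.cos(t * math.pi / 18),
           30 + 12 * math.sin(t * math.pi / 18)) for t in range(36)]
center = hough_circle_center(points, 12)
print(center)
```

Every edge point's vote circle passes through the true center, so the accumulator peaks there even with moderate discretization.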

3. Detection of the energy monitoring instrument pointer

Many energy monitoring instruments use pointer readings, and the position and angle of the pointer directly determine the accuracy of the measurement. By detecting the pointer and calculating its angle, accurate data can be obtained from the image, providing a reliable basis for intelligent energy monitoring and analysis. Traditional data collection from energy monitoring meters usually relies on manual reading, which is inefficient and error-prone. Detecting the pointer and calculating its angle enables automated, intelligent data collection from energy monitoring instruments, improving the efficiency of data collection and reducing errors. Existing methods for pointer detection and pointer angle calculation have their advantages and disadvantages and can be affected by factors such as noise, lighting, and shape changes, leading to reduced detection and calculation accuracy. Therefore, existing methods need to be improved for practical applications.

For reliable recognition of image data from energy monitoring instruments in complex environments, this study uses connected domain analysis, a thinning algorithm, line fitting, and a pointer direction estimation mechanism to detect the pointer and calculate its angle after pointer extraction. Connected domain analysis can effectively detect pointer regions with similar color and grayscale characteristics, adapting to pointers of different shapes and sizes. The thinning algorithm extracts the skeleton of the pointer and better describes its geometric features. By fitting a line, the angle of the pointer can be calculated more accurately, improving the accuracy of data recognition. Combined with the pointer direction estimation mechanism, the pointer direction can be identified automatically, realizing automated and intelligent data collection from energy monitoring instruments. This effectively reduces errors caused by manual operations and improves the efficiency of energy monitoring and analysis.

This study breaks pointer detection and pointer angle calculation down into three steps: locating the pointer region and instrument center, pointer skeleton extraction and line fitting, and pointer direction estimation. In the first step, the goal is to determine the approximate position of the pointer in the instrument and the center of the dial. First, the image is preprocessed using image processing techniques to extract the edge of the dial. Connected domain analysis or contour detection methods are then used to find the dial region. The center position of the instrument is obtained by calculating the geometric center of the dial region or by finding the center using the circular Hough transform. Finally, by analyzing the color, grayscale, or edge information in the dial region, the pointer region can be roughly determined.

The second step performs pointer skeleton extraction and line fitting, with the aim of extracting the skeleton of the pointer and calculating its fitted line. First, the Zhang-Suen algorithm can be used to process the pointer region and obtain the pointer skeleton. This step removes variations in pointer width, making the skeleton one pixel wide. Then, a line is fitted to the pointer skeleton by least squares to represent the direction of the pointer. This line is used to calculate the angle between the pointer and the center of the instrument.

The Zhang-Suen algorithm is a classic thinning algorithm for binary images. It extracts the skeleton of an image by iteratively removing edge pixels. When using the Zhang-Suen algorithm to extract the skeleton of the pointer region, it is important to perform some preprocessing on the image before running the algorithm to improve its performance. This may include noise removal, smoothing, and sharpening. The algorithm can generate imperfect skeletons with small breaks and bifurcations. In practical applications, the skeleton can be optimized by post-processing operations (such as morphological operations), improving the accuracy of pointer detection and angle calculation.

When converting the pointer region image to a binary image, an appropriate binarization method must be selected. Different binarization methods can lead to different fitting effects, so the method should be chosen according to the actual situation. To avoid infinite loops, a maximum number of iterations can be set as the termination condition of the algorithm. When the number of iterations reaches this maximum, the iterative process should be stopped even if the image is still changing.
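The two sub-iterations of the Zhang-Suen algorithm described above can be sketched in pure Python as follows; the tiny test image (a 3-pixel-thick bar) is an illustrative assumption:

```python
def zhang_suen_thin(img):
    """Zhang-Suen thinning of a binary image given as a list of lists of 0/1.
    Iteratively peels boundary pixels until a one-pixel-wide skeleton remains."""
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel above
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two sub-iterations of one pass
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    P = neighbours(y, x)
                    B = sum(P)  # number of non-zero neighbours
                    A = sum(P[k] == 0 and P[(k + 1) % 8] == 1 for k in range(8))
                    if 2 <= B <= 6 and A == 1:
                        if step == 0:  # delete south-east boundary pixels
                            cond = P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0
                        else:          # delete north-west boundary pixels
                            cond = P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0
                        if cond:
                            to_delete.append((y, x))
            for y, x in to_delete:  # apply deletions in parallel
                img[y][x] = 0
                changed = True
    return img

# A 3-pixel-thick horizontal bar thins down to a single-pixel line.
bar = [[0]*10] + [[0] + [1]*8 + [0] for _ in range(3)] + [[0]*10]
skeleton = zhang_suen_thin(bar)
```

Library routines such as `skimage.morphology.thin` implement the same idea with more robust handling of branches and endpoints.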

In addition, a line is fitted to the pointer skeleton using the method of least squares, which minimizes the sum of the squared differences between the measured values of the skeleton points and the values predicted by the fitted line. Given i fitting points $(c_n, g_n)$, $n=1,2,\ldots,i$, the expression for the fitted line is:

$g(c)=o c+u$ (7)

The following formula gives the sum of squared differences:

$h^2=\sum_{n=1}^i\left(g_n-g\left(c_n\right)\right)^2=\sum_{n=1}^i\left(g_n-\left(o c_n+u\right)\right)^2$ (8)

The values of o and u that minimize $h^2$ are found by setting the partial derivatives of $h^2$ with respect to the unknowns o and u to zero. The following formulas give o and u:

$o=\frac{i \sum_{n=1}^i c_n g_n-\sum_{n=1}^i c_n \sum_{n=1}^i g_n}{i \sum_{n=1}^i c_n^2-\left(\sum_{n=1}^i c_n\right)^2}$ (9)

$u=\frac{\sum_{n=1}^i c_n^2 \sum_{n=1}^i g_n-\sum_{n=1}^i c_n \sum_{n=1}^i c_n g_n}{i \sum_{n=1}^i c_n^2-\left(\sum_{n=1}^i c_n\right)^2}$ (10)
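Eqs. (9) and (10) can be implemented directly; a small self-contained sketch (with hypothetical skeleton points) is:

```python
def fit_line(points):
    """Least-squares fit of g(c) = o*c + u following Eqs. (9)-(10)."""
    i = len(points)
    Sc = sum(c for c, _ in points)          # sum of c_n
    Sg = sum(g for _, g in points)          # sum of g_n
    Scc = sum(c * c for c, _ in points)     # sum of c_n^2
    Scg = sum(c * g for c, g in points)     # sum of c_n * g_n
    denom = i * Scc - Sc ** 2
    o = (i * Scg - Sc * Sg) / denom         # Eq. (9): slope
    u = (Scc * Sg - Sc * Scg) / denom       # Eq. (10): intercept
    return o, u

# Hypothetical skeleton points lying exactly on g = 2c + 1 are recovered exactly.
pts = [(0, 1), (1, 3), (2, 5), (3, 7)]
o, u = fit_line(pts)
print(o, u)  # 2.0 1.0
```

For noisy skeletons the same closed form applies; only the residuals grow.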

The third step is the pointer direction estimation mechanism, whose purpose is to determine the actual direction of the pointer and calculate its angle. From the fitted line obtained in the previous step, the angle between the line and the center of the instrument can be calculated. However, since the fitted line has two ends, it is necessary to further determine which end is the actual tip of the pointer. This can be done based on characteristics such as pointer length, color, and shape; for example, the tip of the pointer is usually shorter and darker in color. The exact angle of the pointer can then be obtained using the pointer direction evaluation mechanism shown in Table 1.

Table 1. Pointer direction evaluation mechanism

c_e vs. o_e | c_u vs. o_u | Quadrant / semi-axis | Angle
c_e > o_e | c_u = o_u | Positive c semi-axis | φ = 0°
c_e < o_e | c_u = o_u | Negative c semi-axis | φ = 180°
c_e = o_e | c_u > o_u | Positive u semi-axis | φ = 90°
c_e = o_e | c_u < o_u | Negative u semi-axis | φ = 270°
c_e > o_e | c_u > o_u | First quadrant | 0° < φ < 90°
c_e < o_e | c_u > o_u | Second quadrant | 90° < φ < 180°
c_e < o_e | c_u < o_u | Third quadrant | 180° < φ < 270°
c_e > o_e | c_u < o_u | Fourth quadrant | 270° < φ < 360°

Here $(o_e, o_u)$ denotes the instrument center and $(c_e, c_u)$ the pointer tip.
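The case analysis of Table 1 is equivalent to taking the four-quadrant arctangent of the tip's offset from the center. A minimal sketch follows; the coordinate names mirror the table and the sample points are hypothetical:

```python
import math

def pointer_angle(c_e, c_u, o_e, o_u):
    """Angle phi in [0, 360) of the pointer tip (c_e, c_u) relative to the
    dial center (o_e, o_u); reproduces the case table via atan2."""
    return math.degrees(math.atan2(c_u - o_u, c_e - o_e)) % 360.0

print(pointer_angle(5, 0, 0, 0))   # 0.0 (positive c semi-axis)
print(pointer_angle(-5, 0, 0, 0))  # negative c semi-axis (180 degrees)
print(pointer_angle(3, 4, 0, 0))   # first quadrant, between 0 and 90
```

Note that in image coordinates the u (row) axis points downward, so a real deployment may need to negate `c_u - o_u` to match the dial's visual orientation.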

4. Recognition of energy meter readings

Figure 3 shows the process of recognizing electricity meter readings. After the electricity meter dial has been preprocessed and the pointer detected, the scale of the meter must be extracted. When a circular dial is detected, the edge features of the dial are extracted using an edge detection algorithm such as the Sobel operator. In the resulting edge image, the scale lines are detected using the Hough transform. Detected scale lines are then filtered and classified by characteristics such as length and angle, and the angle of each filtered scale line relative to the center of the dial is calculated. It should be noted that the angle calculation method must be adapted to the actual situation in order to handle different types of instruments. Depending on the scale-line angle and the instrument range, the angle can be mapped to a real value. After the pointer angle is calculated, it is matched to the extracted scale values.

Figure 3. The process of recognizing electricity meter readings

Since the scale value usually appears near a major scale line, in this paper the target image is segmented according to the detected locations of the major scale lines to obtain the local area containing the scale value. That is, a search area is set near each major scale line to obtain the region where the scale value is located. The segmented local area requires some image preprocessing to improve recognition accuracy, including grayscale conversion, binarization, noise removal, and smoothing. The text area containing the scale value is then located in the preprocessed image, and optical character recognition (OCR) technology is used to recognize it. The recognized scale values are matched to the corresponding reference scale lines, which can be done by comparing the location of the scale-value area with the locations of the major scale lines. After this calibration is completed, the scale values corresponding to the major scale lines on the meter are obtained.

In this paper, after the scale value image is extracted, a wavelet-based fast character recognition algorithm is further used to recognize the scale value. The wavelet transform can effectively capture the local features of the image, which helps to improve the accuracy of character recognition. Appropriate wavelet basis functions (such as the Haar wavelet) and decomposition levels can be selected to meet recognition needs. The two-dimensional image data is converted into one-dimensional data by vertically and horizontally projecting the binarized character image. The projection data is decomposed into two wavelet layers, and feature information is extracted from the smooth components of each layer. This information is compared and analyzed against the feature information of character templates for fast and efficient character recognition.

Suppose the wavelet basis functions are expressed as $\Omega_{s,m}(c)$ and the wavelet coefficients as $G_{s,m}$. The expansion of any function r(c) in the space $V^2(F)$ is given by Eq. (11).

$r(c)=\sum_{s, m \in W} G_{s, m} \Omega_{s, m}(c)$ (11)

The condition for r(c) to belong to $V^2(F)$ is given by Eq. (12):

$\int_{-\infty}^{+\infty}|r(c)|^2 q c<+\infty, c \in F$ (12)

The scaling function used in the wavelet-based fast character recognition algorithm is given by Eq. (13):

$\theta(c)=\left\{\begin{array}{l}1,0 \leq c \leq 1 \\ 0, \text { other }\end{array}\right.$ (13)

The Haar wavelet function is used, as shown in Eq. (14):

$\Omega(c)=\theta(2 c)-\theta(2 c-1)$ (14)
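A one-level Haar decomposition built from θ and Ω above reduces a projection vector to smooth (pairwise average) and detail (pairwise difference) components. A pure-Python sketch with a hypothetical projection vector follows; the averaging normalization is a simplification of the usual 1/√2 convention:

```python
def haar_step(signal):
    """One level of a Haar wavelet transform: pairwise averages give the
    smooth component, pairwise differences the detail component."""
    smooth = [(signal[2*k] + signal[2*k + 1]) / 2 for k in range(len(signal) // 2)]
    detail = [(signal[2*k] - signal[2*k + 1]) / 2 for k in range(len(signal) // 2)]
    return smooth, detail

# Vertical projection of a hypothetical 8-column binary character image.
projection = [0, 2, 5, 5, 5, 5, 2, 0]
s1, d1 = haar_step(projection)  # first decomposition layer
s2, d2 = haar_step(s1)          # second decomposition layer
print(s1, s2)  # [1.0, 5.0, 5.0, 1.0] [3.0, 3.0]
```

The smooth components `s1` and `s2` of the two layers are the features that would be compared against the character templates.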

Suppose the number of scale divisions is expressed as j, the value of the scale line at the right end as X, and the value of the scale line at the left end as Y. The value of the i-th minor scale line, in the appropriate scale unit, is calculated by Eq. (15).

$S(i)=\frac{i-1}{j} \times(X-Y)+Y$ (15)
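Eq. (15) maps a minor scale-line index to its value; a direct sketch (endpoint values and division count are hypothetical):

```python
def scale_value(i, j, X, Y):
    """Eq. (15): value of the i-th minor scale line, where j is the number
    of divisions between the left scale line (value Y) and the right scale
    line (value X)."""
    return (i - 1) / j * (X - Y) + Y

# A dial segment running from Y = 0 to X = 100 with j = 10 divisions:
print(scale_value(1, 10, 100, 0))   # 0.0   (left end)
print(scale_value(11, 10, 100, 0))  # 100.0 (right end)
print(scale_value(6, 10, 100, 0))   # 50.0  (midpoint)
```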

The reading recognition algorithm based on pointer deflection relies mainly on accurate calculation of the pointer angle to obtain the correct reading. While this method allows relatively accurate identification in many cases, it still has some drawbacks. For example, perspective distortion can occur when the angle between the camera and the dial changes; this distortion affects the pointer angle calculation and therefore the accuracy of reading recognition. For a multi-pointer energy meter, the method must handle multiple pointers simultaneously, which increases the complexity of the recognition algorithm and may lead to recognition errors. Using the scale-line values as reference values for reading identification has the advantages of reducing angle calculation errors, improving stability, and adapting better to different instruments. In the application scenario of this paper, it can also support multi-pointer meters.

Based on the calibrated scale lines, the reading corresponding to each scale line can be further determined. Suppose the number of detected scale lines is expressed as i, the distance from point $(c_n, g_n)$ to the pointer line $Oc+Ug+G=0$ as $q_n$, and the pointer reading as $F_i$. The readings corresponding to the upper and lower scale lines closest to the pointer are expressed as $F_E$ and $F_V$, and the distances from the pointer to these two scale lines as $q_1$ and $q_0$. The distance between the pointer and a scale line can be calculated according to Eq. (16).

$q_n=\frac{\left|O c_n+U g_n+G\right|}{\sqrt{O^2+U^2}} ; n=1,2 \cdots i$ (16)

The formula for calculating the meter reading is given by Eq. (17).

$F_i=F_V+\frac{q_1}{q_0+q_1} \times\left(F_E-F_V\right)$ (17)
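Eqs. (16) and (17) can be sketched directly; the line coefficients and distances below are hypothetical:

```python
import math

def point_line_distance(c, g, O, U, G):
    """Eq. (16): distance from point (c, g) to the line O*c + U*g + G = 0."""
    return abs(O * c + U * g + G) / math.sqrt(O ** 2 + U ** 2)

def meter_reading(F_V, F_E, q0, q1):
    """Eq. (17): interpolate the reading between the nearest scale-line
    readings F_V and F_E using the pointer-to-scale-line distances q0, q1
    (as given in the text)."""
    return F_V + q1 / (q0 + q1) * (F_E - F_V)

# Distance from (3, 4) to the line c = 0, i.e. 1*c + 0*g + 0 = 0, is 3.
print(point_line_distance(3, 4, 1, 0, 0))  # 3.0
# A pointer midway between the 40 and 50 scale lines reads 45.
print(meter_reading(40, 50, 2.0, 2.0))     # 45.0
```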


5. Results and analysis of experiments

Observing the binary horizontal projection of the image from the energy monitor shown in Figure 4, it can be seen that there may be obvious fluctuations in the brightness of some areas. These fluctuations can be caused by uneven lighting, reflections from instrument materials, and other factors. In this case, relying solely on binary processing may not be able to fully extract effective information from the instrument, which affects the subsequent identification and analysis process. Therefore, in order to obtain more accurate readings from energy monitoring instruments, it is necessary to adjust the brightness of the image, eliminate the influence of uneven lighting, and increase the readability and accuracy of image recognition.

Table 2 compares the experimental results before and after image preprocessing. A more detailed analysis of the differences in experimental results before and after preprocessing can be made from four aspects: error analysis, time analysis, stability analysis, and practicality analysis. The table shows that the recognition error after preprocessing is generally low. This shows that image preprocessing, such as brightness adjustment and the Hough transform, can reduce uneven lighting and ambient noise in an image and make the instrument display more recognizable, which helps improve the accuracy of pointer detection and angle calculation and reduces recognition errors. The recognition time after preprocessing is generally shorter, possibly because preprocessing reduces noise and interference in the image, making it easier for the algorithm to extract accurate features. In addition, the processed image is clearer, which helps the algorithm quickly locate the pointer and dial, thus reducing recognition time. In the experimental results after preprocessing, the error and time fluctuations are small, showing that the preprocessing method is stable for instruments with different displayed values. This is critical for intelligent energy monitoring analysis, where stability is a key factor in practical applications that must process a large number of meters with different displayed values. The preprocessing approach not only yields small errors and short recognition times but also adapts well to instruments with different displayed values, making the method more practical for real applications and helping to improve the efficiency and accuracy of intelligent energy monitoring and analysis. In conclusion, image preprocessing such as brightness adjustment and the Hough transform significantly improves the recognition accuracy of energy monitoring instrument readings, reduces recognition time, and increases stability. In a real-world intelligent energy monitoring and analysis scenario, this preprocessing method is of great importance and helps to achieve efficient and accurate energy monitoring.

Figure 4. Horizontal projection of a binary image of an energy monitor

Table 2. Comparison of experimental results before and after image preprocessing

Displayed value | Recognized value (before) | Error (before) | Time (before) | Recognized value (after) | Error (after) | Time (after)
37 | 36.65 | 0.35 | 32 | 35.55 | 1.45 | 49
39 | 38.99 | 0.01 | 27 | 37.82 | 1.18 | 37
47 | 47.76 | -0.76 | 33 | 48.98 | -1.98 | 45
58 | 58.98 | -0.98 | 30 | 57.61 | 0.39 | 37
61 | 61.87 | -0.87 | 31 | 60.45 | 0.55 | 39
63 | 63.87 | -2.87 | 32 | 61.83 | -0.83 | 43
75 | 63.32 | -0.32 | 32 | 65.92 | -2.92 | 47
100 | 100.31 | -0.31 | 29 | 100.43 | -0.43 | 46


Figure 5. Running-time curves for different pointer detection and pointer angle calculation methods


Figure 6. The result of fitting a straight line to the pointer

Figure 5 shows the running-time curves of different pointer detection and pointer angle calculation methods. Observing the data, the running time of each pointer detection method varies with the value displayed on the instrument. The running time of the method proposed in this article remains relatively short across the displayed values, with small fluctuations, demonstrating the stability and computational speed of the method. The running time of the Canny-based method fluctuates greatly across the displayed values, and its total running time is much longer than that of the other methods, indicating that it may be less efficient for the pointer detection task. The running time of the template matching method is relatively stable, but for some displayed values it is high, suggesting that template matching may be less effective in some cases. The running time of the support vector machine (SVM) method varies only slightly with the displayed value, but its total running time is longer than that of the proposed method, indicating lower efficiency for pointer detection. In short, according to the running-time analysis, the proposed method offers greater efficiency and stability for pointer detection across the range of displayed instrument values, outperforming the Canny-based, template matching, and SVM methods.

Figure 6 shows the result of fitting a straight line to the pointer. The fitted line of the pointer skeleton lies very close to the actual pointer direction, indicating a good fit. In this work, the least squares method is used to fit the pointer skeleton: the goal is to obtain a straight line that minimizes the sum of the distances from the skeleton points to the line. This approach effectively suppresses the influence of image noise on the fitting result.
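The least-squares fit can be sketched as follows; this is a minimal example assuming the pointer skeleton has already been extracted as pixel coordinates, and the function name and sample points are illustrative:

```python
import math

def fit_pointer_line(points):
    """Least-squares fit of y = a*x + b through pointer-skeleton pixels,
    minimizing the vertical offsets of the points from the line.
    Returns (a, b, angle_deg); note that in image coordinates the y axis
    points downward, which flips the sign of the visual angle."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx
    if denom == 0:  # all x equal: a vertical pointer
        return math.inf, points[0][0], 90.0
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b, math.degrees(math.atan(a))

# Skeleton pixels lying exactly on y = 2x + 1:
a, b, angle = fit_pointer_line([(0, 1), (1, 3), (2, 5), (3, 7)])  # a=2, b=1
```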

In addition, this study compares the experimental results of different methods for recognizing the indications of energy meters; the comparison is presented in Table 3. Comparing the method proposed in this study with the Mask R-CNN and watershed methods, four conclusions can be drawn, concerning error, time, stability, and practicality. The table shows that the recognition error of the proposed method is generally low, indicating better recognition accuracy, while the errors of the Mask R-CNN and watershed methods are relatively large, possibly because of their sensitivity to lighting and environmental interference when processing energy meter images. The recognition time of the proposed method is generally shorter, implying higher efficiency in processing energy meter images, whereas the longer recognition times of the Mask R-CNN and watershed methods may reduce the efficiency of intelligent energy monitoring analysis. In the experimental results of the proposed method, the fluctuations in error and time are smaller, indicating better stability; the fluctuations of the Mask R-CNN and watershed methods are relatively large, which can make their recognition results unstable for instruments with different displayed values. Finally, the proposed method adapts well to meters with different displayed values while maintaining a low error and a short recognition time, whereas the Mask R-CNN and watershed methods adapt poorly and may require parameter tuning for each instrument. In summary, the proposed method offers higher accuracy, shorter recognition time, better stability, and greater practical value in recognizing energy meter indications than the traditional Mask R-CNN and watershed methods. In a real scenario of intelligent energy monitoring analysis, efficient and accurate energy monitoring can be achieved by adopting the method proposed in this paper.
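For a linearly graduated dial, the step from detected pointer angle to meter indication reduces to a linear interpolation over the dial arc. The sketch below illustrates this; the calibration angles and full-scale value are made-up examples, not constants from the paper:

```python
def angle_to_reading(angle_deg, zero_angle, full_angle, full_scale):
    """Map a pointer angle (degrees) to a meter reading, given the dial
    angles at reading 0 and at full scale (instrument-specific calibration)."""
    frac = (angle_deg - zero_angle) / (full_angle - zero_angle)
    return frac * full_scale

# A 0-100 dial whose pointer sweeps from 225 deg down to -45 deg:
reading = angle_to_reading(90.0, 225.0, -45.0, 100.0)  # mid-arc -> 50.0
```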

Finally, this paper compares the relative errors of different methods of recognizing electricity meter indications; the comparison is presented in Table 4. Comparing the Mask R-CNN method, the watershed method, and the proposed method before and after preprocessing, the following conclusions can be drawn. Comparing the errors before and after preprocessing shows that preprocessing significantly reduces the recognition error, underlining the importance of the preprocessing step for improving the accuracy of recognizing electricity meter readings. After preprocessing, the proposed method has a smaller error than the Mask R-CNN and watershed methods, demonstrating better recognition accuracy; by contrast, the errors of the Mask R-CNN and watershed methods are relatively large, probably because they are affected by factors such as lighting and environmental interference. In terms of error behavior, the preprocessed method fluctuates less and shows better stability, while the Mask R-CNN and watershed methods vary strongly, which can lead to unstable recognition results for instruments with different displayed values. The preprocessed method also adapts better to instruments with different displayed values, generally keeping the error small, whereas the Mask R-CNN and watershed methods adapt poorly and may require parameter tuning for each instrument. In summary, with preprocessing, the method proposed in this paper achieves higher accuracy and better stability in recognizing electricity meter indications, and has greater practical value than the traditional Mask R-CNN and watershed methods. In a real scenario of intelligent energy monitoring analysis, efficient and accurate energy monitoring can be achieved by adopting the proposed method.
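The error columns of Tables 3 and 4 appear to follow the convention "displayed (true) value minus recognized value". A minimal check of that convention against two sample rows of Table 3 (displayed values 37 and 39):

```python
def recognition_error(displayed, recognized):
    """Per-row recognition error: displayed (true) value minus recognized
    value, matching the apparent sign convention of Tables 3 and 4."""
    return [round(d - r, 2) for d, r in zip(displayed, recognized)]

displayed = [37, 39]
mask_rcnn = [35.55, 37.82]   # Mask R-CNN recognized values (Table 3)
proposed = [36.65, 38.99]    # proposed method, after preprocessing (Table 3)
e_mask = recognition_error(displayed, mask_rcnn)  # [1.45, 1.18]
e_prop = recognition_error(displayed, proposed)   # [0.35, 0.01]
```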

Table 3. Comparison of experimental results of different methods of recognizing indications of energy monitoring devices


| Displayed value | Recognized (proposed) | Error (proposed) | Time (proposed) | Recognized (Mask R-CNN) | Error (Mask R-CNN) | Time (Mask R-CNN) | Recognized (watershed) | Error (watershed) | Time (watershed) |
|---|---|---|---|---|---|---|---|---|---|
| 30 |  |  |  |  |  | 199 |  |  | 47 |
| 37 | 36.65 | 0.35 | 32 | 35.55 | 1.45 | 191 | 36.11 | 0.89 | 43 |
| 39 | 38.99 | 0.01 | 27 | 37.82 | 1.18 | 187 | 37.21 | 1.79 | 49 |
| 47 | 47.76 | -0.76 | 33 | 48.98 | -1.98 | 173 | 48.87 | -1.87 | 37 |
| 58 | 58.98 | -0.98 | 30 | 57.68 | 0.32 | 177 | 57.78 | 0.22 | 45 |
| 61 | 61.87 | -0.87 | 31 | 60.01 | 0.99 | 176 | 60.89 | 0.11 | 37 |
| 63 | 63.87 | -2.87 | 32 | 61.82 | -0.82 | 183 | 61.23 | -0.23 | 39 |
| 75 | 63.32 | -0.32 | 32 | 65.92 | -2.92 | 184 | 63.68 | -0.68 | 43 |
| 100 | 100.31 | -0.31 | 29 | 97.77 | 2.23 | 179 | 99.81 | 0.19 | 47 |

Table 4. Comparison of relative errors of various methods of recognizing electricity meter readings

| Displayed value | Mask R-CNN | Watershed | Before preprocessing | After preprocessing |
|---|---|---|---|---|
| 37 | 1.45 | 0.89 | 1.45 | 0.35 |
| 39 | 1.18 | 1.79 | 1.18 | 0.01 |
| 47 | -1.98 | -1.87 | -1.98 | -0.76 |
| 58 | 0.32 | 0.22 | 0.39 | -0.98 |
| 61 | 0.99 | 0.11 | 0.55 | -0.87 |
| 63 | -0.82 | -0.23 | -0.83 | -2.87 |
| 75 | -2.92 | -0.68 | -2.92 | -0.32 |
| 100 | 2.23 | 0.19 | -0.43 | -0.31 |

6. Conclusion

In this study, a method of intelligent energy monitoring analysis based on image recognition technology is investigated. The energy meter dial is preprocessed with brightness adjustment and the Hough transform. After the pointer dial is extracted, connected-component analysis, a thinning algorithm, line fitting, and a pointer direction estimation mechanism are used to detect the pointer and calculate its angle, and a method for recognizing the indications of electricity meters is proposed. Comparison and verification of the experimental results before and after image preprocessing show that brightness adjustment and the Hough transform can significantly improve the accuracy of recognizing electricity meter readings, shorten the recognition time, and increase the stability of recognition. Run-time curves of the different pointer detection and pointer angle calculation methods are given, showing that the proposed method achieves higher performance and stability in pointer detection tasks across different displayed instrument values, outperforming the Canny, template matching, and SVM methods. The results of fitting a straight line to the pointer are presented, verifying the suitability of the least squares method for fitting the pointer skeleton. In addition, this study compares the experimental results of different methods for recognizing energy meter readings, showing that the proposed method has greater practical value than the Mask R-CNN and watershed methods. Finally, the relative errors of the different recognition methods are compared; the experimental results show that, after preprocessing, the method proposed in this study enables efficient and accurate energy monitoring.

Acknowledgements

This research was funded by the Henan Provincial Life Sciences Foundation (Grant No. 222300420239) and the Youth Project of the National Research Fund Cultivation Project of Huanghuai University (Grant No. XKPY-202104).
