Search Results (175)

Search Parameters:
Journal = AI

Opinion
What Is the Role of AI for Digital Twins?
AI 2023, 4(3), 721-728; https://doi.org/10.3390/ai4030038 - 01 Sep 2023
Abstract
The concept of a digital twin is intriguing as it presents an innovative approach to solving numerous real-world challenges. Initially emerging from the domains of manufacturing and engineering, digital twin research has transcended its origins and now finds applications across a wide range of disciplines. This multidisciplinary expansion has impressively demonstrated the potential of digital twin research. While the simulation aspect of a digital twin is often emphasized, the role of artificial intelligence (AI) and machine learning (ML) is severely understudied. For this reason, in this paper, we highlight the pivotal role of AI and ML for digital twin research. By recognizing that a digital twin is a component of a broader Digital Twin System (DTS), we can fully grasp the diverse applications of AI and ML. In this paper, we explore six AI techniques—(1) optimization (model creation), (2) optimization (model updating), (3) generative modeling, (4) data analytics, (5) predictive analytics and (6) decision making—and their potential to advance applications in health, climate science, and sustainability.

Article
Privacy-Preserving Convolutional Bi-LSTM Network for Robust Analysis of Encrypted Time-Series Medical Images
AI 2023, 4(3), 706-720; https://doi.org/10.3390/ai4030037 - 28 Aug 2023
Abstract
Deep learning (DL) algorithms can improve healthcare applications. DL has improved medical imaging diagnosis, therapy, and illness management. The use of deep learning algorithms on sensitive medical images presents privacy and data security problems. Improving medical imaging while protecting patient anonymity is difficult. Thus, privacy-preserving approaches for deep learning model training and inference are gaining popularity. These image sequences are analyzed using state-of-the-art computer-aided detection/diagnosis (CAD) techniques. Algorithms that upload medical images to servers pose privacy issues. This article presents a convolutional Bi-LSTM network to assess fully homomorphically encrypted (HE) time-series medical images. From secret image sequences, convolutional blocks learn to extract selective spatial features, and Bi-LSTM-based analytical sequence layers learn to encode temporal data. A weighted unit and sequence voting layer uses spatial information with varying weights to boost efficiency and reduce incorrect diagnoses. Two rigorous benchmarks—the CheXpert and BreaKHis public datasets—illustrate the framework’s efficacy. The technique outperforms numerous rival methods with an accuracy above 0.99 for both datasets. These results demonstrate that the proposed framework can extract visual representations and sequential dynamics from encrypted medical image sequences, protecting privacy while attaining good medical image analysis performance.
(This article belongs to the Topic Explainable AI for Health)
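As a rough illustration of the convolutional Bi-LSTM idea described in the abstract, the sketch below runs a per-frame CNN, a bidirectional LSTM over the sequence, and a simple vote over per-step logits. It is not the authors' code: the homomorphic-encryption layer is omitted, the vote is uniform rather than learned, and the layer sizes, sequence length, and class count are placeholder assumptions.

```python
# Minimal sketch (not the authors' implementation): a convolutional Bi-LSTM
# classifying a time series of medical images. Homomorphic encryption is
# omitted and the weighted voting is simplified to a uniform vote.
import torch
import torch.nn as nn

class ConvBiLSTM(nn.Module):
    def __init__(self, n_classes=2, feat_dim=128, hidden=64):
        super().__init__()
        # Convolutional blocks: extract spatial features from each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # Bi-LSTM: encode temporal dependencies across the image sequence.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)   # per-time-step logits

    def forward(self, x):                  # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)    # (b, t, feat_dim)
        seq, _ = self.lstm(feats)                            # (b, t, 2*hidden)
        logits = self.head(seq)                              # (b, t, n_classes)
        weights = torch.softmax(logits.new_ones(t), dim=0)   # uniform vote here
        return (logits * weights.view(1, t, 1)).sum(dim=1)   # (b, n_classes)

model = ConvBiLSTM()
dummy = torch.randn(2, 5, 1, 64, 64)   # 2 sequences of 5 grayscale frames
print(model(dummy).shape)              # torch.Size([2, 2])
```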

Article
Comparison of Various Nitrogen and Water Dual Stress Effects for Predicting Relative Water Content and Nitrogen Content in Maize Plants through Hyperspectral Imaging
AI 2023, 4(3), 692-705; https://doi.org/10.3390/ai4030036 - 18 Aug 2023
Abstract
Water and nitrogen (N) are major factors in plant growth and agricultural production. However, these are often confounded and produce overlapping symptoms of plant stress. The objective of this study is to verify whether the different levels of N treatment influence water status prediction and vice versa with hyperspectral modeling. We cultivated 108 maize plants in a greenhouse under three-level N treatments in combination with three-level water treatments. Hyperspectral images were collected from those plants, then Relative Water Content (RWC), as well as N content, was measured as ground truth. A Partial Least Squares (PLS) regression analysis was used to build prediction models for RWC and N content. Then, their accuracy and robustness were compared according to the different N treatment datasets and different water treatment datasets, respectively. The results demonstrated that the PLS prediction for RWC using hyperspectral data was impacted by N stress difference (Ratio of Performance to Deviation; RPD from 0.87 to 2.27). Furthermore, the dataset with water and N dual stresses improved model accuracy and robustness (RPD from 1.69 to 2.64). Conversely, the PLS prediction for N content was found to be robust against water stress difference (RPD from 2.33 to 3.06). In conclusion, we suggest that water and N dual treatments can be helpful in building models with wide applicability and high accuracy for evaluating plant water status such as RWC.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
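For readers unfamiliar with the PLS/RPD workflow mentioned above, the following scikit-learn sketch shows how a PLS regression model can be fit to spectra and scored with the Ratio of Performance to Deviation. The data, band count, and component count are synthetic placeholders, not the paper's dataset or settings.

```python
# Illustrative only: PLS regression on synthetic "hyperspectral" reflectance
# to predict relative water content (RWC), evaluated with RMSE and RPD.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((108, 200))                 # 108 plants x 200 spectral bands (made up)
y = 60 + 30 * X[:, 50] - 20 * X[:, 120] + rng.normal(0, 2, 108)   # fake RWC (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)       # component count is an assumption
pls.fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()

rmse = np.sqrt(np.mean((y_te - pred) ** 2))
rpd = np.std(y_te, ddof=1) / rmse          # Ratio of Performance to Deviation
print(f"RMSE={rmse:.2f}  RPD={rpd:.2f}")   # RPD > 2 is usually considered reliable
```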

Article
Evaluation of an Arabic Chatbot Based on Extractive Question-Answering Transfer Learning and Language Transformers
AI 2023, 4(3), 667-691; https://doi.org/10.3390/ai4030035 - 16 Aug 2023
Abstract
Chatbots are programs with the ability to understand and respond to natural language in a way that is both informative and engaging. This study explored the current trends of using transformers and transfer learning techniques on Arabic chatbots. The proposed methods used various transformers and semantic embedding models from AraBERT, CAMeLBERT, AraElectra-SQuAD, and AraElectra (Generator/Discriminator). Two datasets were used for the evaluation: one with 398 questions, and the other with 1395 questions and 365,568 documents sourced from Arabic Wikipedia. Extensive experiments were conducted, evaluating both manually crafted questions and the entire set of questions by using confidence and similarity metrics. Our experimental results demonstrate that combining the power of transformer architecture with extractive chatbots can provide more accurate and contextually relevant answers to questions in Arabic. Specifically, our experimental results showed that the AraElectra-SQuAD model consistently outperformed other models. It achieved an average confidence score of 0.6422 and an average similarity score of 0.9773 on the first dataset, and an average confidence score of 0.6658 and similarity score of 0.9660 on the second dataset. The study concludes that the AraElectra-SQuAD showed remarkable performance, high confidence, and robustness, which highlights its potential for practical applications in natural language processing tasks for Arabic chatbots. The study suggests that language transformers can be further enhanced and used for various tasks, such as specialized chatbots, virtual assistants, and information retrieval systems for Arabic-speaking users.
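The extractive question-answering step described above can be prototyped with the Hugging Face pipeline API, as in the sketch below. The model identifier is an assumed public AraElectra-SQuAD checkpoint chosen for illustration; it is not necessarily the exact checkpoint the authors evaluated, and the confidence score it returns is the pipeline's own score, not the paper's metric.

```python
# Sketch of an extractive question-answering step with Hugging Face transformers.
# The model id below is an assumption (an AraElectra checkpoint fine-tuned on
# Arabic SQuAD); substitute whichever AraElectra-SQuAD checkpoint is intended.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ZeyadAhmed/AraElectra-Arabic-SQuADv2-QA",  # assumed public checkpoint
)

context = "تقع مدينة الرياض في وسط المملكة العربية السعودية وهي عاصمتها."
question = "ما هي عاصمة المملكة العربية السعودية؟"

result = qa(question=question, context=context)
print(result["answer"], result["score"])   # extracted span and its confidence score
```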

Review
Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare
AI 2023, 4(3), 652-666; https://doi.org/10.3390/ai4030034 - 10 Aug 2023
Abstract
Artificial Intelligence (AI) describes computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Examples of AI techniques are machine learning, neural networks, and deep learning. AI can be applied in many different areas, such as econometrics, biometry, e-commerce, and the automotive industry. In recent years, AI has found its way into healthcare as well, helping doctors make better decisions (“clinical decision support”), localizing tumors in magnetic resonance images, reading and analyzing reports written by radiologists and pathologists, and much more. However, AI has one big risk: it can be perceived as a “black box”, limiting trust in its reliability, which is a very big issue in an area in which a decision can mean life or death. As a result, the term Explainable Artificial Intelligence (XAI) has been gaining momentum. XAI tries to ensure that AI algorithms (and the resulting decisions) can be understood by humans. In this narrative review, we will have a look at some central concepts in XAI, describe several challenges around XAI in healthcare, and discuss whether it can really help healthcare to advance, for example, by increasing understanding and trust. Finally, alternatives to increase trust in AI are discussed, as well as future research possibilities in the area of XAI.

Review
Explainable Image Classification: The Journey So Far and the Road Ahead
AI 2023, 4(3), 620-651; https://doi.org/10.3390/ai4030033 - 01 Aug 2023
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in model explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained and allied learning-based classifiers, and applying neural architectural search techniques to minimize the accuracy–interpretability tradeoff. This survey paper provides a comprehensive overview of the state-of-the-art in XAI, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field.
(This article belongs to the Special Issue Interpretable and Explainable AI Applications)
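One of the simplest post-hoc attribution methods that surveys of explainable image classification typically cover is a gradient-based saliency map, sketched below in PyTorch. The model is instantiated without pretrained weights and the "image" is random noise; both are placeholders, and this is only a generic example of the technique, not anything from the survey itself.

```python
# Minimal gradient-based saliency sketch (a basic post-hoc attribution method);
# the untrained model and random "image" below are placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)   # use pretrained weights in practice
model.eval()

img = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in for a real image
logits = model(img)
score = logits[0, logits.argmax()]      # score of the predicted class
score.backward()

# Saliency: per-pixel magnitude of the gradient of the class score w.r.t. the input.
saliency = img.grad.abs().max(dim=1).values.squeeze(0)   # (224, 224) heatmap
print(saliency.shape)
```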

Article
Evaluating Deep Learning Techniques for Blind Image Super-Resolution within a High-Scale Multi-Domain Perspective
AI 2023, 4(3), 598-619; https://doi.org/10.3390/ai4030032 - 01 Aug 2023
Abstract
Although several solutions and experiments addressing image super-resolution (SR), boosted by deep learning (DL), have been conducted recently, they do not usually design evaluations with high scaling factors. Moreover, the datasets are generally benchmarks which do not truly encompass significant diversity of domains to properly evaluate the techniques. It is also interesting to remark that blind SR is attractive for real-world scenarios since it is based on the idea that the degradation process is unknown, and, hence, techniques in this context rely basically on low-resolution (LR) images. In this article, we present a high-scale (8×) experiment which evaluates five recent DL techniques tailored for blind image SR: Adaptive Pseudo Augmentation (APA), Blind Image SR with Spatially Variant Degradations (BlindSR), Deep Alternating Network (DAN), FastGAN, and Mixture of Experts Super-Resolution (MoESR). We consider 14 datasets from five different broader domains (Aerial, Fauna, Flora, Medical, and Satellite), and we also remark that some of the DL approaches were designed for single-image SR while others were not. Based on two no-reference metrics, NIQE and the transformer-based MANIQA score, MoESR can be regarded as the best solution, although the perceptual quality of the high-resolution (HR) images created by all the techniques still needs to improve.
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

Article
Applying Few-Shot Learning for In-the-Wild Camera-Trap Species Classification
AI 2023, 4(3), 574-597; https://doi.org/10.3390/ai4030031 - 31 Jul 2023
Abstract
Few-shot learning (FSL) describes the challenge of learning a new task using a minimum amount of labeled data, and we have observed significant progress made in this area. In this paper, we explore the effectiveness of the FSL theory by considering a real-world problem where labels are hard to obtain. To assist a large study on chimpanzee hunting activities, we aim to classify various animal species that appear in our in-the-wild camera traps located in Senegal. Using the philosophy of FSL, we aim to train an FSL network to learn to separate animal species using large public datasets and implement the network on our data with its novel species/classes and unseen environments, needing only to label a few images per new species. Here, we first discuss constraints and challenges caused by having in-the-wild uncurated data, which are often not addressed in benchmark FSL datasets. Considering these new challenges, we create two experiments and corresponding evaluation metrics to determine a network’s usefulness in a real-world implementation scenario. We then compare results from various FSL networks, and describe how factors may affect a network’s potential real-world usefulness. We consider network design factors such as distance metrics or extra pre-training, and examine their roles in a real-world implementation setting. We also consider additional factors such as support set selection and ease of implementation, which are usually ignored when a benchmark dataset has been established.
(This article belongs to the Special Issue Feature Papers for AI)
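A common distance-metric-based FSL baseline of the kind compared in such studies is nearest-prototype (ProtoNet-style) classification, sketched below. The embeddings are random stand-ins; in practice they would come from a pretrained feature extractor, and this is a generic illustration rather than any of the specific networks the paper evaluates.

```python
# Generic nearest-prototype (ProtoNet-style) few-shot classification sketch.
# Embeddings are random here; in practice they come from a feature extractor.
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim = 5, 3, 64                     # 5 species, 3 labeled images each

support = rng.normal(size=(n_way, k_shot, dim))   # few labeled embeddings per class
query = rng.normal(size=(10, dim))                # unlabeled camera-trap embeddings

prototypes = support.mean(axis=1)                 # one prototype per species (n_way, dim)
dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
pred = dists.argmin(axis=1)                       # nearest prototype = predicted species
print(pred)
```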

Article
Improving Alzheimer’s Disease and Brain Tumor Detection Using Deep Learning with Particle Swarm Optimization
AI 2023, 4(3), 551-573; https://doi.org/10.3390/ai4030030 - 28 Jul 2023
Abstract
Convolutional Neural Networks (CNNs) have exhibited remarkable potential in effectively tackling the intricate task of classifying MRI images, specifically in Alzheimer’s disease detection and brain tumor identification. While CNNs optimize their parameters automatically through training processes, finding the optimal values for these parameters can still be a challenging task due to the complexity of the search space and the potential for suboptimal results. Consequently, researchers often encounter difficulties determining the ideal parameter settings for CNNs. This challenge necessitates using trial-and-error methods or expert judgment, as the search for the best combination of parameters involves exploring a vast space of possibilities. Despite the automatic optimization during training, the process does not guarantee finding the globally optimal parameter values. Hence, researchers often rely on iterative experimentation and expert knowledge to fine-tune these parameters and maximize CNN performance. This poses a significant obstacle in developing real-world applications that leverage CNNs for MRI image analysis. This paper presents a new hybrid model that combines the Particle Swarm Optimization (PSO) algorithm with CNNs to enhance detection and classification capabilities. Our method utilizes the PSO algorithm to determine the optimal configuration of CNN hyper-parameters. Subsequently, these optimized parameters are applied to the CNN architectures for classification. As a result, our hybrid model exhibits improved prediction accuracy for brain diseases while reducing the loss function value. To evaluate the performance of our proposed model, we conducted experiments using three benchmark datasets. Two datasets were utilized for Alzheimer’s disease: the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and an international dataset from Kaggle. The third dataset focused on brain tumors. The experimental assessment demonstrated the superiority of our proposed model, achieving unprecedented accuracy rates of 98.50%, 98.83%, and 97.12% for the datasets mentioned earlier, respectively.
(This article belongs to the Special Issue Feature Papers for AI)
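The PSO-driven hyper-parameter search described above follows the standard particle swarm update rule, shown in the generic sketch below. The two searched quantities (learning-rate exponent and filter count), the bounds, and the closed-form fitness function are illustrative assumptions; in the paper the fitness would be the validation accuracy of a trained CNN.

```python
# Generic particle swarm optimization (PSO) over two CNN hyper-parameters.
# The fitness function is a placeholder standing in for CNN validation accuracy.
import numpy as np

rng = np.random.default_rng(0)

def fitness(p):                      # placeholder: replace with a real training run
    lr_exp, n_filters = p
    return -((lr_exp + 3.0) ** 2) - ((n_filters - 64.0) / 64.0) ** 2

n_particles, n_iter, dim = 10, 30, 2
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

pos = rng.uniform([-5.0, 8.0], [-1.0, 128.0], size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best learning rate ~ 10^%.2f, filters ~ %d" % (gbest[0], round(gbest[1])))
```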

Article
High-Performance and Lightweight AI Model for Robot Vacuum Cleaners with Low Bitwidth Strong Non-Uniform Quantization
AI 2023, 4(3), 531-550; https://doi.org/10.3390/ai4030029 - 27 Jul 2023
Abstract
Artificial intelligence (AI) plays a critical role in the operation of robot vacuum cleaners, enabling them to intelligently navigate to clean and avoid indoor obstacles. Due to limited computational resources, manufacturers must balance performance and cost. This necessitates the development of lightweight AI models that can achieve high performance. Traditional uniform weight quantization assigns the same number of levels to all weights, regardless of their distribution or importance. Consequently, this lack of adaptability may lead to sub-optimal quantization results, as the quantization levels do not align with the statistical properties of the weights. To address this challenge, in this work, we propose a new technique called low bitwidth strong non-uniform quantization, which largely reduces the memory footprint of AI models while maintaining high accuracy. Our proposed non-uniform quantization method, as opposed to traditional uniform quantization, aims to align with the actual weight distribution of well-trained neural network models. The proposed quantization scheme builds upon the observation of weight distribution characteristics in AI models and aims to leverage this knowledge to enhance the efficiency of neural network implementations. Additionally, we adjust the input image size to reduce the computational and memory demands of AI models. The goal is to identify an appropriate image size and its corresponding AI models that can be used in resource-constrained robot vacuum cleaners while still achieving acceptable accuracy on the object classification task. Experimental results indicate that when compared to the state-of-the-art AI models in the literature, the proposed AI model achieves a 2-fold decrease in memory usage from 15.51 MB down to 7.68 MB while maintaining the same accuracy of around 93%. In addition, the proposed non-uniform quantization model reduces memory usage by 20 times (from 15.51 MB down to 0.78 MB) with a slight accuracy drop of 3.11% (the classification accuracy is still above 90%). Thus, our proposed high-performance and lightweight AI model strikes an excellent balance between model complexity, classification accuracy, and computational resources for robot vacuum cleaners.
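To see why non-uniform quantization tracks a trained weight distribution better than uniform quantization, the sketch below compares 3-bit uniform levels with k-means-derived levels on a bell-shaped weight vector. The k-means scheme is only a stand-in for non-uniform quantization in general; the paper's "strong non-uniform" scheme may place its levels differently.

```python
# Sketch: 3-bit uniform vs. non-uniform (k-means) weight quantization, to show
# why matching levels to the weight distribution reduces quantization error.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=10_000)          # bell-shaped "trained" weights
n_levels = 8                                    # 3-bit quantization

# Uniform: equally spaced levels over the weight range.
levels = np.linspace(w.min(), w.max(), n_levels)
w_uniform = levels[np.abs(w[:, None] - levels[None, :]).argmin(axis=1)]

# Non-uniform: levels are cluster centers, so they concentrate where weights do.
km = KMeans(n_clusters=n_levels, n_init=10, random_state=0).fit(w.reshape(-1, 1))
w_kmeans = km.cluster_centers_[km.labels_].ravel()

print("uniform MSE    :", np.mean((w - w_uniform) ** 2))
print("non-uniform MSE:", np.mean((w - w_kmeans) ** 2))   # noticeably smaller
```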

Article
Federated Learning for IoT Intrusion Detection
AI 2023, 4(3), 509-530; https://doi.org/10.3390/ai4030028 - 24 Jul 2023
Abstract
The number of Internet of Things (IoT) devices has increased considerably in the past few years, resulting in a large growth of cyber attacks on IoT infrastructure. As part of a defense-in-depth approach to cybersecurity, intrusion detection systems (IDSs) have acquired a key role in attempting to detect malicious activities efficiently. Most modern approaches to IDS in IoT are based on machine learning (ML) techniques. The majority of these are centralized, which implies the sharing of data from source devices to a central server for classification. This presents potentially crucial issues related to privacy of user data as well as challenges in data transfers due to their volumes. In this article, we evaluate the use of federated learning (FL) as a method to implement intrusion detection in IoT environments. FL is an alternative, distributed method to centralized ML models, which has seen a surge of interest in IoT intrusion detection recently. In our implementation, we evaluate FL using a shallow artificial neural network (ANN) as the shared model and federated averaging (FedAvg) as the aggregation algorithm. The experiments are completed on the ToN_IoT and CICIDS2017 datasets in binary and multiclass classification. Classification is performed by the distributed devices using their own data. No sharing of data occurs among participants, maintaining data privacy. When compared against a centralized approach, results have shown that a collaborative FL IDS can be an efficient alternative, in terms of accuracy, precision, recall and F1-score, making it a viable option as an IoT IDS. Additionally, with these results as baseline, we have evaluated alternative aggregation algorithms, namely FedAvgM, FedAdam and FedAdagrad, in the same setting by using the Flower FL framework. The results from the evaluation show that, in our scenario, FedAvg and FedAvgM tend to perform better compared to the two adaptive algorithms, FedAdam and FedAdagrad.
(This article belongs to the Special Issue Feature Papers for AI)
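The FedAvg aggregation step at the heart of the setup above is simple: clients train locally on their own data and the server averages their parameters, weighted by client data size. The NumPy sketch below shows that idea on a toy linear model; the paper uses a shallow ANN and the Flower framework, neither of which is reproduced here, and the toy data are invented.

```python
# Bare-bones FedAvg sketch on a toy linear model: each client runs local SGD,
# then the server averages parameters weighted by client dataset size.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):                       # each IoT device keeps its own data
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(0, 0.1, n)
    return X, y

clients = [make_client(n) for n in (50, 120, 80)]

def local_update(w, X, y, lr=0.05, steps=20):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for rnd in range(10):                     # federated rounds
    local_ws, sizes = [], []
    for X, y in clients:                  # no raw data leaves the device
        local_ws.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    w_global = np.average(np.stack(local_ws), axis=0,
                          weights=np.array(sizes, dtype=float))   # FedAvg step

print("global weights:", w_global)        # should approach [2, -1]
```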

Article
Training Artificial Neural Networks Using a Global Optimization Method That Utilizes Neural Networks
AI 2023, 4(3), 491-508; https://doi.org/10.3390/ai4030027 - 20 Jul 2023
Abstract
Perhaps one of the best-known machine learning models is the artificial neural network, in which a number of parameters must be adjusted to learn a wide range of practical problems from areas such as physics, chemistry, and medicine. Such problems can be reduced to pattern recognition problems and then modeled with artificial neural networks, whether they are classification or regression problems. To achieve their goal, neural networks must be trained by appropriately adjusting their parameters using global optimization methods. In this work, the application of a recent global minimization technique is suggested for the adjustment of neural network parameters. In this technique, an approximation of the objective function to be minimized is created using artificial neural networks, and sampling is then performed from the approximation function rather than the original one. Therefore, in the present work, the parameters of artificial neural networks are learned using other neural networks. The new training method was tested on a series of well-known problems, a comparative study was conducted against other neural network parameter tuning techniques, and the results were more than promising. Across the classification and regression datasets used for comparison, the proposed technique showed a significant performance difference over the other techniques, starting at about 30% for classification datasets and reaching 50% for regression problems. However, because the proposed technique presupposes the use of global optimization techniques involving artificial neural networks, it may require significantly higher execution time than other techniques.
(This article belongs to the Special Issue Feature Papers for AI)
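The surrogate idea described above, building a neural-network approximation of an expensive objective and sampling from the approximation instead of the original, can be sketched as below with scikit-learn. The test objective, surrogate size, and screening strategy are placeholder assumptions, not the authors' exact algorithm.

```python
# Sketch of surrogate-assisted global optimization: fit a small neural network
# to sampled values of an expensive objective, screen many candidates on the
# cheap surrogate, and evaluate only the most promising ones on the real objective.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def objective(x):                               # stands in for the expensive objective
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1) + 10 * x.shape[-1]

X_seen = rng.uniform(-5, 5, size=(200, 2))      # initial random samples
y_seen = objective(X_seen)

for _ in range(5):                              # a few surrogate-guided rounds
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(X_seen, y_seen)
    candidates = rng.uniform(-5, 5, size=(5000, 2))
    shortlist = candidates[np.argsort(surrogate.predict(candidates))[:20]]
    X_seen = np.vstack([X_seen, shortlist])     # evaluate only the shortlist
    y_seen = np.concatenate([y_seen, objective(shortlist)])

best = X_seen[np.argmin(y_seen)]
print("best point found:", best, "value:", y_seen.min())
```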

Commentary
Predictive Analytics with a Transdisciplinary Framework in Promoting Patient-Centric Care of Polychronic Conditions: Trends, Challenges, and Solutions
AI 2023, 4(3), 482-490; https://doi.org/10.3390/ai4030026 - 13 Jul 2023
Abstract
Context. This commentary is based on an innovative approach to the development of predictive analytics. It is centered on the development of predictive models for varying stages of chronic disease by integrating all types of datasets, adding various new features to a theoretically driven data warehouse, creating purpose-specific prediction models, and integrating multi-criteria predictions of chronic disease progression based on a biomedical evolutionary learning platform. After merging across-center databases based on the risk factors identified from modeling the predictors of chronic disease progression, the collaborative investigators could conduct multi-center verification of the predictive model and further develop a clinical decision support system coupled with visualization of a shared decision-making feature for patient care. The Study Problem. The success of health services management research is dependent upon the stability of pattern detection and the usefulness of nosological classification formulated from big-data-to-knowledge research on chronic conditions. However, longitudinal observations with multiple waves of predictors and outcomes are needed to capture the evolution of polychronic conditions. Motivation. The transitional probabilities could be estimated from big-data analysis with further verification. Simulation or predictive models could then generate a useful explanatory pathogenesis of the end-stage-disorder or outcomes. Hence, the clinical decision support system for patient-centered interventions could be systematically designed and executed. Methodology. A customized algorithm for polychronic conditions coupled with constraints-oriented reasoning approaches is suggested. Based on theoretical specifications of causal inquiries, we could mitigate the effects of multiple confounding factors in conducting evaluation research on the determinants of patient care outcomes. This is what we consider the mechanism for avoiding black-box expression in the formulation of predictive analytics. The remaining task is to gather new data to verify the practical utility of the proposed and validated predictive equation(s). More specifically, this includes two approaches guiding future research on chronic disease and care management: (1) To develop a biomedical evolutionary learning platform to predict the risk of polychronic conditions at various stages, especially for predicting the micro- and macro-cardiovascular complications experienced by patients with Type 2 diabetes for multidisciplinary care; and (2) to formulate appropriate prescriptive intervention services, such as patient-centered care management interventions for a high-risk group of patients with polychronic conditions. Conclusions. The commentary has identified trends, challenges, and solutions in conducting innovative AI-based healthcare research that can improve understandings of disease-state transitions from diabetes to other chronic polychronic conditions. Hence, better predictive models could be further formulated to expand from inductive (problem solving) to deductive (theory based and hypothesis testing) inquiries in care management research.

Article
A Robust Vehicle Detection Model for LiDAR Sensor Using Simulation Data and Transfer Learning Methods
AI 2023, 4(2), 461-481; https://doi.org/10.3390/ai4020025 - 01 Jun 2023
Abstract
Vehicle detection in parking areas provides the spatial and temporal utilisation of parking spaces. Parking observations are typically performed manually, limiting the temporal resolution due to the high labour cost. This paper uses simulated data and transfer learning to build a robust real-world model for vehicle detection and classification from single-beam LiDAR of a roadside parking scenario. The paper presents a synthetically augmented transfer learning approach for LiDAR-based vehicle detection and the implementation of synthetic LiDAR data. The synthetically augmented transfer learning method was used to supplement the small real-world data set and allow the development of data-handling techniques. In addition, adding the synthetically augmented transfer learning method increases the robustness and overall accuracy of the model. Experiments show that the method can be used for fast deployment of the model for vehicle detection using a LiDAR sensor.
(This article belongs to the Section AI Systems: Theory and Applications)
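The transfer-learning step described above, pretraining on plentiful synthetic samples and fine-tuning on a small real-world set, is sketched below in PyTorch. The 64-dimensional feature vectors, the two-layer model, and the made-up labels are placeholder assumptions standing in for whatever representation is extracted from single-beam LiDAR scans; this is not the paper's actual pipeline.

```python
# Transfer-learning sketch: pretrain on a large synthetic set, then freeze the
# backbone and fine-tune only the classification head on a small real set.
import torch
import torch.nn as nn

def make_data(n):                                   # placeholder datasets
    X = torch.randn(n, 64)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()        # 2 classes: vehicle / no vehicle
    return X, y

backbone = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
head = nn.Linear(32, 2)
model = nn.Sequential(backbone, head)

def train(model, X, y, params, epochs=50):
    opt = torch.optim.Adam(params, lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

X_syn, y_syn = make_data(5000)                      # large synthetic set
train(model, X_syn, y_syn, model.parameters())      # pretraining

for p in backbone.parameters():                     # freeze the backbone
    p.requires_grad = False

X_real, y_real = make_data(100)                     # small real-world set
train(model, X_real, y_real, head.parameters())     # fine-tune the head only

with torch.no_grad():
    acc = (model(X_real).argmax(1) == y_real).float().mean()
print(f"accuracy on the small real set: {acc:.2f}")
```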

Review
Machine-Learning-Based Prediction Modelling in Primary Care: State-of-the-Art Review
AI 2023, 4(2), 437-460; https://doi.org/10.3390/ai4020024 - 23 May 2023
Abstract
Primary care has the potential to be transformed by artificial intelligence (AI) and, in particular, machine learning (ML). This review summarizes the potential of ML and its subsets in influencing two domains of primary care: pre-operative care and screening. ML can be utilized in pre-operative treatment to forecast postoperative results and assist physicians in selecting surgical interventions. Clinicians can modify their strategy to reduce risk and enhance outcomes using ML algorithms to examine patient data and discover factors that increase the risk of worsened health outcomes. ML can also enhance the precision and effectiveness of screening tests. Healthcare professionals can identify diseases at an early and curable stage by using ML models to examine medical images and diagnostic modalities and to spot patterns that may suggest disease or anomalies. Before the onset of symptoms, ML can be used to identify people at an increased risk of developing specific disorders or diseases. ML algorithms can assess patient data such as medical history, genetics, and lifestyle factors to identify those at higher risk. This enables targeted interventions such as lifestyle adjustments or early screening. In general, using ML in primary care offers the potential to enhance patient outcomes, reduce healthcare costs, and boost productivity.
