Search Results (2,651)

Search Parameters:
Journal = Algorithms

Article
A Mathematical Study on a Fractional-Order SEIR Mpox Model: Analysis and Vaccination Influence
Algorithms 2023, 16(9), 418; https://doi.org/10.3390/a16090418 - 01 Sep 2023
Abstract
This paper establishes a novel fractional-order version of a recently expanded form of the Susceptible-Exposed-Infectious-Recovered (SEIR) Mpox model. The model is investigated by deriving several significant findings concerning its stability and the impact of vaccination. In particular, we analyze the fractional-order Mpox model in terms of its invariant region, boundedness of solutions, equilibria, basic reproductive number, and its elasticity. Assuming an effective vaccine, we study the progression and dynamics of the Mpox disease under various vaccination-ratio scenarios through the proposed fractional-order Mpox model. Accordingly, several numerical findings of the proposed model are depicted with the use of two numerical methods: the Fractional Euler Method (FEM) and the Modified Fractional Euler Method (MFEM). These findings demonstrate the influence of the fractional-order values, coupled with the vaccination rate, on the dynamics of the established disease model.
(This article belongs to the Special Issue Mathematical Models and Their Applications IV)
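The Fractional Euler Method named in the abstract has a standard textbook form for a Caputo fractional derivative of order α: y_{n+1} = y_n + h^α/Γ(α+1) · f(t_n, y_n). As a hedged illustration (not the authors' code, and for a scalar equation rather than the full SEIR system), a minimal sketch:

```python
import math

def fractional_euler(f, y0, t0, t_end, h, alpha):
    """Fractional Euler Method (FEM) for a Caputo fractional ODE
    D^alpha y(t) = f(t, y): y_{n+1} = y_n + h^alpha / Gamma(alpha + 1) * f(t_n, y_n).
    For alpha = 1 this reduces to the classical explicit Euler scheme."""
    step = h ** alpha / math.gamma(alpha + 1.0)
    t, y = t0, y0
    ts, ys = [t], [y]
    while t < t_end - 1e-12:
        y = y + step * f(t, y)   # one fractional Euler step
        t += h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Example: D^alpha y = -y, y(0) = 1; for alpha = 1 the exact solution is exp(-t).
ts, ys = fractional_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1e-3, 1.0)
```

For α = 1 the step factor is h/Γ(2) = h, so the sketch recovers the classical explicit Euler scheme, which is a quick sanity check on the implementation.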
Show Figures

Figure 1

Article
An Optimization Precise Model of Stroke Data to Improve Stroke Prediction
Algorithms 2023, 16(9), 417; https://doi.org/10.3390/a16090417 - 01 Sep 2023
Abstract
Stroke is a major public health issue with significant economic consequences. This study aims to enhance stroke prediction by addressing imbalanced datasets and algorithmic bias. Our research focuses on accurately and precisely detecting the possibility of stroke to aid prevention, and tackles the often-overlooked problem of imbalanced datasets in the healthcare literature. Our study focuses on predicting stroke in a general context rather than specific subtypes, a scoping choice that enhances the transparency and impact of our findings. We construct an optimization model and describe an effective methodology and algorithms for machine learning classification that accommodate missing data and class imbalance. Our models outperform previous efforts in stroke prediction, demonstrating higher sensitivity, specificity, accuracy, and precision. Data quality and preprocessing play a crucial role in developing reliable models. The proposed algorithm using SVMs achieves 98% accuracy and a 97% recall score. In-depth data analysis and advanced machine learning techniques improve stroke prediction. This research highlights the value of data-oriented approaches, leading to enhanced accuracy and understanding of stroke risk factors. These methods can be applied to other medical domains, benefiting patient care and public health outcomes. By incorporating our findings, the efficiency and effectiveness of the public health system can be improved.
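The metrics this abstract reports (sensitivity, specificity, accuracy, precision) all derive from the binary confusion matrix. A minimal sketch, with illustrative counts rather than the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity (recall), specificity, accuracy, and precision
    computed from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    return sensitivity, specificity, accuracy, precision

# Illustrative counts only (not from the study):
sens, spec, acc, prec = binary_metrics(tp=97, fp=2, tn=98, fn=3)
```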

Article
Discrete versus Continuous Algorithms in Dynamics of Affective Decision Making
Algorithms 2023, 16(9), 416; https://doi.org/10.3390/a16090416 - 29 Aug 2023
Abstract
The dynamics of affective decision making is considered for an intelligent network composed of agents with different types of memory: long-term and short-term. The analysis is based on probabilistic affective decision theory, which takes into account both the rational utility of alternatives and their emotional attractiveness. The objective of this paper is to compare two multistep operational algorithms for the intelligent network: one based on discrete dynamics and the other on continuous dynamics. Numerical analysis shows that, depending on the network parameters, the characteristic probabilities for continuous and discrete operation can exhibit either close or drastically different behavior. Thus, theoretical predictions can differ substantially depending on which algorithm is employed, which precludes a uniquely defined description of practical problems. This finding is important for understanding which of the algorithms is more appropriate for the correct analysis of decision-making tasks. A discussion is given, revealing that the discrete operation seems more realistic for describing intelligent networks as well as affective artificial intelligence.
(This article belongs to the Topic Complex Networks and Social Networks)

Article
Predicting Online Item-Choice Behavior: A Shape-Restricted Regression Approach
Algorithms 2023, 16(9), 415; https://doi.org/10.3390/a16090415 - 29 Aug 2023
Abstract
This paper examines the relationship between user pageview (PV) histories and item-choice behavior on an e-commerce website. We focus on PV sequences, which represent time series of the number of PVs for each user–item pair. We propose a shape-restricted optimization model that accurately estimates item-choice probabilities for all possible PV sequences. This model imposes monotonicity constraints on item-choice probabilities by exploiting partial orders for PV sequences defined by the recency and frequency of a user's previous PVs. To improve the computational efficiency of our optimization model, we devise efficient algorithms for eliminating all redundant constraints according to the transitivity of the partial orders. Experimental results using real-world clickstream data demonstrate that our method achieves higher prediction performance than a state-of-the-art optimization model and common machine learning methods.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
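The constraint-elimination step described in this abstract exploits transitivity: if the item-choice probability is constrained to be monotone along order pairs (a, b) and (b, c), the direct constraint (a, c) adds nothing. A hedged sketch of this idea as a transitive reduction over explicit order pairs (the paper's own algorithms are more specialized):

```python
from collections import defaultdict

def transitive_reduction(pairs):
    """Remove order constraints implied by transitivity: a pair (a, c) is
    redundant if c is still reachable from a without using edge (a, c)."""
    succ = defaultdict(set)
    for a, b in pairs:
        succ[a].add(b)

    def reachable(a, c, skip):
        # DFS from a toward c, ignoring the edge `skip`.
        stack, seen = [a], set()
        while stack:
            node = stack.pop()
            for nxt in succ[node]:
                if (node, nxt) == skip:
                    continue
                if nxt == c:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    return {(a, b) for a, b in pairs if not reachable(a, b, skip=(a, b))}

# The chain 1 <= 2 <= 3 makes the direct pair (1, 3) redundant:
kept = transitive_reduction({(1, 2), (2, 3), (1, 3)})
```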

Article
A Pattern Recognition Analysis of Vessel Trajectories
Algorithms 2023, 16(9), 414; https://doi.org/10.3390/a16090414 - 29 Aug 2023
Abstract
The automatic identification system (AIS) facilitates the monitoring of ship movements and provides essential input parameters for traffic safety. Previous studies have employed AIS data to detect behavioral anomalies and classify vessel types using supervised and unsupervised algorithms, including deep learning techniques. The approach proposed in this work focuses on recognizing vessel types through the “Take One Class at a Time” (TOCAT) classification strategy. This approach pivots on a collection of adaptive models rather than a single intricate algorithm. Using radar data, these models are trained on aspects such as identifiers, velocity, and heading; positional data are purposefully excluded to counteract the inconsistencies stemming from route variations and irregular sampling frequencies. Using the given data, we achieved a mean accuracy of 83% on a 6-class classification task.
(This article belongs to the Special Issue Machine Learning for Pattern Recognition)

Article
Enhancing Metaheuristic Optimization: A Novel Nature-Inspired Hybrid Approach Incorporating Selected Pseudorandom Number Generators
Algorithms 2023, 16(9), 413; https://doi.org/10.3390/a16090413 - 28 Aug 2023
Abstract
In this paper, a hybrid nature-inspired metaheuristic algorithm based on the Genetic Algorithm and the African Buffalo Optimization is proposed. The hybrid approach adaptively switches between the Genetic Algorithm and the African Buffalo Optimization during the optimization process, leveraging their respective strengths to improve performance. To improve randomness, the hybrid approach uses two high-quality pseudorandom number generators: the 64-bit and 32-bit versions of the SIMD-Oriented Fast Mersenne Twister. The effectiveness of the hybrid algorithm is evaluated on the NP-hard Container Relocation Problem, focusing on a test set of restricted Container Relocation Problems of higher complexity. The results show that the hybrid algorithm outperforms the individual Genetic Algorithm and African Buffalo Optimization, which use standard pseudorandom number generators. The adaptive switching method allows the algorithm to adapt to different optimization problems and to mitigate issues such as premature convergence and entrapment in local optima. Moreover, the importance of pseudorandom number generator selection in metaheuristic algorithms is highlighted, as it directly affects the optimization results: powerful pseudorandom number generators reduce the probability of premature convergence and entrapment in local optima, leading to better optimization results. Overall, the research demonstrates the potential of hybrid metaheuristic approaches for solving complex optimization problems, making them relevant for both scientific research and practical applications.
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)
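The adaptive switch described here can be sketched as alternating between two operators whenever the best cost stagnates, each operator drawing from its own seeded generator stream. The sketch below is purely illustrative: Python's stdlib generator is the scalar MT19937 Mersenne Twister, not the SIMD-Oriented Fast Mersenne Twister used in the paper, and both operators are simplified stand-ins rather than the actual GA and ABO updates:

```python
import random

# Two independent Mersenne Twister streams (Python's MT19937 stands in for
# the paper's SIMD-Oriented Fast Mersenne Twister, which is not in stdlib).
rng_ga, rng_abo = random.Random(42), random.Random(1337)

def optimize(evaluate, init, iters=200, patience=10):
    """Hypothetical adaptive switch: run one operator until the best cost
    stagnates for `patience` iterations, then hand over to the other."""
    def ga_step(x, rng):          # stand-in mutation operator
        return [v + rng.gauss(0.0, 0.1) for v in x]

    def abo_step(x, best, rng):   # stand-in move toward the best solution
        return [v + rng.uniform(0.0, 1.0) * (b - v) for v, b in zip(x, best)]

    best, best_cost, stall, use_ga = init, evaluate(init), 0, True
    x = init
    for _ in range(iters):
        x = ga_step(x, rng_ga) if use_ga else abo_step(x, best, rng_abo)
        c = evaluate(x)
        if c < best_cost:
            best, best_cost, stall = x, c, 0
        else:
            stall += 1
        if stall >= patience:     # stagnation detected: switch operator
            use_ga, stall = not use_ga, 0
    return best, best_cost

# Minimize the sphere function from a poor starting point:
best, cost = optimize(lambda x: sum(v * v for v in x), [5.0, -5.0])
```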

Article
You Are Not Alone: Towards Cleaning Robot Navigation in Shared Environments through Deep Reinforcement Learning
Algorithms 2023, 16(9), 412; https://doi.org/10.3390/a16090412 - 28 Aug 2023
Abstract
For mobile cleaning robot navigation, it is crucial not only to base motion decisions on the ego agent's capabilities but also to take into account other agents in the shared environment. Therefore, in this paper, we propose a deep reinforcement learning (DRL)-based approach for learning a motion policy conditioned not only on ego observations of the environment, but also on incoming information about other agents. First, we extend a replay buffer to collect state observations on all agents at the scene and create a simulation setting from which to gather the training samples for the DRL policy. Next, we express the incoming agent information in each agent's frame of reference, making it translation and rotation invariant. We propose a neural network architecture with edge embedding layers that allows for the extraction of incoming information from a dynamic range of agents, so the approach generalizes to settings with a variable number of agents at the scene. Simulation results show that the introduction of edge layers improves navigation policies in shared environments and outperforms other state-of-the-art DRL motion policy methods.
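Expressing incoming agent information in each agent's frame of reference is, in 2D, the standard translate-then-rotate transform; a minimal sketch (illustrative, with forward taken as the local x-axis):

```python
import math

def to_ego_frame(ego_xy, ego_heading, other_xy):
    """Express another agent's world position in the ego agent's frame:
    translate by the ego position, then rotate by -heading. The result is
    invariant to rigid motions applied to both agents together."""
    dx = other_xy[0] - ego_xy[0]
    dy = other_xy[1] - ego_xy[1]
    c, s = math.cos(-ego_heading), math.sin(-ego_heading)
    return (c * dx - s * dy, s * dx + c * dy)

# An agent 1 m directly ahead of an ego facing +y appears at (1, 0) in the
# ego frame (forward is the local x-axis):
local = to_ego_frame((2.0, 2.0), math.pi / 2, (2.0, 3.0))
```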

Article
End-to-End Approach for Autonomous Driving: A Supervised Learning Method Using Computer Vision Algorithms for Dataset Creation
Algorithms 2023, 16(9), 411; https://doi.org/10.3390/a16090411 - 28 Aug 2023
Abstract
This paper presents a solution for an autonomously driven vehicle (a robotic car) based on artificial intelligence using a supervised learning method. A scaled-down robotic car containing only one camera as a sensor was developed to participate in the RoboCup Portuguese Open Autonomous Driving League competition; this study is based solely on the development of this robotic car, and the results presented are from this competition alone. Teams usually solve the competition problem by relying on computer vision algorithms, and no research could be found on neural network model-based assistance for vehicle control, a technique that is commonly used in general autonomous driving and the subject of a growing amount of research. Training a neural network requires a large number of labelled images, which are difficult to obtain. To address this problem, a graphical simulator was used with an environment containing the track and the robot/car to extract images for the dataset. A classical computer vision algorithm developed by the authors processes the image data to extract relevant information about the environment and uses it to determine the optimal direction for the vehicle to follow on the track, which is then associated with the respective image grab. Several training runs were carried out with the created dataset to reach the final neural network model; tests were performed within a simulator, and the effectiveness of the proposed approach was additionally demonstrated through experimental results on two real robotic cars, which performed better than expected. The system proved very successful in steering the robotic car on a road-like track, and the agent's performance increased with the use of supervised learning methods. With computer vision algorithms alone, the system completed an average of 23 laps around the track before going off-track, whereas with assistance from the neural network model the system never left the track.

Article
Neural-Network-Assisted Finite Difference Discretization for Numerical Solution of Partial Differential Equations
Algorithms 2023, 16(9), 410; https://doi.org/10.3390/a16090410 - 28 Aug 2023
Abstract
A neural-network-assisted numerical method is proposed for the solution of Laplace and Poisson problems. Finite differences are applied to approximate the spatial Laplacian operator on nonuniform grids. For this, a neural network is trained to compute the corresponding coefficients for general quadrilateral meshes. Depending on the position of a given grid point x0 and its neighbors, we face a nonlinear optimization problem to obtain the finite difference coefficients at x0. This computing step is executed with an artificial neural network, so that for any geometric setup of the neighboring grid points we immediately obtain the corresponding coefficients. The construction of an appropriate training data set, based on the solution of overdetermined linear systems, is also discussed. The method was experimentally validated on a number of numerical tests. As expected, it delivers a fast and reliable algorithm for solving Poisson problems.
(This article belongs to the Special Issue Numerical Optimization and Algorithms)

Article
RooTri: A Simple and Robust Function to Approximate the Intersection Points of a 3D Scalar Field with an Arbitrarily Oriented Plane in MATLAB
Algorithms 2023, 16(9), 409; https://doi.org/10.3390/a16090409 - 27 Aug 2023
Abstract
With the function RooTri(), we present a simple and robust calculation method for approximating the intersection points of a scalar field, given as an unstructured point cloud, with a plane oriented arbitrarily in space. The point cloud is approximated by a surface consisting of triangles whose edges are used for computing the intersection points. MATLAB's contourc() function is taken as a reference. Our experiments show that contourc() produces outliers that deviate significantly from the defined nominal value, while the quality of the results produced by RooTri() increases with finer resolution of the examined grid.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
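The core edge-interpolation step such a function relies on can be sketched as follows: over each triangle the field is linear, so the level set crosses an edge wherever the endpoint values straddle the level, at a linearly interpolated point. This is an illustrative reconstruction, not the published RooTri() code:

```python
def triangle_level_crossings(verts, values, level):
    """Points where a scalar field, linear over a triangle, equals `level`:
    for each edge whose endpoint values straddle `level`, interpolate
    linearly along the edge. For an arbitrarily oriented plane, use the
    signed distance to the plane as the per-vertex value and set level = 0."""
    pts = []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        vi, vj = values[i], values[j]
        if (vi - level) * (vj - level) < 0:   # edge straddles the level
            t = (level - vi) / (vj - vi)      # interpolation parameter
            (xi, yi), (xj, yj) = verts[i], verts[j]
            pts.append((xi + t * (xj - xi), yi + t * (yj - yi)))
    return pts

# Triangle with values 0, 1, 0 at its corners, cut at level 0.5:
pts = triangle_level_crossings([(0, 0), (1, 0), (0, 1)], [0.0, 1.0, 0.0], 0.5)
```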

Article
A Hybrid Simulation and Reinforcement Learning Algorithm for Enhancing Efficiency in Warehouse Operations
Algorithms 2023, 16(9), 408; https://doi.org/10.3390/a16090408 - 27 Aug 2023
Abstract
The use of simulation and reinforcement learning can be viewed as a flexible approach to aid managerial decision-making, particularly in the face of growing complexity in manufacturing and logistic systems. Efficient supply chains heavily rely on streamlined warehouse operations, and therefore a well-informed storage location assignment policy is crucial for their improvement. The traditional methods found in the literature for tackling the storage location assignment problem have certain drawbacks, including the omission of stochastic process variability or the neglect of interaction between various warehouse workers. In this context, we explore the possibilities of combining simulation with reinforcement learning to develop effective mechanisms that allow for the quick acquisition of information about a complex environment, the processing of that information, and decision-making about the best storage location assignment. To test these concepts, we make use of the FlexSim commercial simulator.
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)

Article
A Novel Machine-Learning Approach to Predict Stress-Responsive Genes in Arabidopsis
Algorithms 2023, 16(9), 407; https://doi.org/10.3390/a16090407 - 27 Aug 2023
Abstract
This study proposes a hybrid gene selection method to identify and predict key genes in Arabidopsis associated with various stresses (including salt, heat, cold, high-light, and flagellin), aiming to enhance crop tolerance. An open-source microarray dataset (GSE41935) comprising 207 samples and 30,380 genes was analyzed using several machine learning tools, including the synthetic minority oversampling technique (SMOTE), information gain (IG), ReliefF, and the least absolute shrinkage and selection operator (LASSO), along with various classifiers (BayesNet, logistic, multilayer perceptron, sequential minimal optimization (SMO), and random forest). We identified 439 differentially expressed genes (DEGs), of which only three were down-regulated (AT3G20810, AT1G31680, and AT1G30250). The performance of the top 20 genes selected by IG and ReliefF was evaluated using the classifiers mentioned above to classify stressed versus non-stressed samples. The random forest algorithm outperformed the others, with accuracies of 97.91% and 98.51% for IG and ReliefF, respectively. Additionally, 42 genes were identified from all 30,380 genes using LASSO regression. The top 20 genes from each feature selection method were analyzed to determine three common genes (AT5G44050, AT2G47180, and AT1G70700), which formed a three-gene signature. The efficiency of these three genes was evaluated using the random forest and XGBoost algorithms, and further validation was performed using an independent RNA-seq dataset with random forest. These gene signatures can be exploited in plant breeding to improve stress tolerance in a variety of crops.
(This article belongs to the Special Issue Machine Learning Algorithms in Natural Science)
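Of the tools this abstract lists, SMOTE is the simplest to sketch: it synthesizes a new minority-class sample by interpolating between a minority point and one of its nearest minority neighbors. A minimal illustrative version (not the study's implementation):

```python
import random

def smote_sample(minority, k=2, rng=random.Random(0)):
    """Minimal SMOTE step: pick a minority point, one of its k nearest
    minority neighbors, and return a random point on the segment between
    them -- a new synthetic minority sample."""
    x = rng.choice(minority)
    neighbors = sorted(
        (p for p in minority if p is not x),
        key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
    )[:k]
    nb = rng.choice(neighbors)
    lam = rng.random()                       # interpolation factor in [0, 1)
    return tuple(a + lam * (b - a) for a, b in zip(x, nb))

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
synthetic = smote_sample(minority)
```

Because the synthetic point is a convex combination of two minority samples, it always lies on a segment inside the minority class's convex hull.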

Article
A Hybrid Discrete Memetic Algorithm for Solving Flow-Shop Scheduling Problems
Algorithms 2023, 16(9), 406; https://doi.org/10.3390/a16090406 - 26 Aug 2023
Abstract
Flow-shop scheduling problems are classic examples of multi-resource and multi-operation scheduling problems in which the objective is to minimize the makespan. Because of the high complexity and intractability of the problem, apart from some exceptional cases there are no explicit algorithms for finding the optimal permutation in multi-machine environments. Therefore, different heuristic approaches, including evolutionary and memetic algorithms, are used to obtain the solution, or at least a close enough approximation of the optimum. This paper proposes a novel approach: a combination of two rather efficient such heuristics, the discrete bacterial memetic evolutionary algorithm (DBMEA) proposed earlier by our group and a conveniently modified heuristic, the Monte Carlo tree method. Their nested combination yields a new algorithm, the hybrid discrete bacterial memetic evolutionary algorithm (HDBMEA), which was extensively tested on the Taillard benchmark data set. Our results have been compared against all other important approaches published in the literature, and we found that this novel compound method produces good results overall and, in some cases, even better approximations of the optimum than any of the solutions proposed so far.
(This article belongs to the Special Issue Hybrid Intelligent Algorithms)

Article
Generation of Achievable Three-Dimensional Trajectories for Autonomous Wheeled Vehicles via Tracking Differentiators
Algorithms 2023, 16(9), 405; https://doi.org/10.3390/a16090405 - 25 Aug 2023
Abstract
Planning an achievable trajectory for a mobile robot usually consists of two steps: (i) finding a path in the form of a sequence of discrete waypoints and (ii) transforming this sequence into a continuous and smooth curve. To solve the second problem, this paper proposes algorithms for automatic dynamic smoothing of the primary path using a tracking differentiator with sigmoid corrective actions. Algorithms for setting the gains of the differentiator are developed, considering a set of design constraints on velocity, acceleration, and jerk for various mobile robots. When tracking a non-smooth primary path, the output variables of the differentiator generate smooth trajectories implementable by a mechanical plant. It is shown that a tracking differentiator with a different number of blocks also generates derivatives of the smoothed trajectory of any required order, taking the given constraints into account. Unlike standard analytical methods of polynomial smoothing, the proposed algorithm has a low computational load and is easily implemented in real time on an on-board computer. In addition, simple methods for modeling a safety corridor are proposed, taking into account the dimensions of the vehicle when planning within a polygon containing stationary obstacles. Numerical simulation results confirming the performance of the developed algorithms are presented.
(This article belongs to the Special Issue Machine Learning Algorithms for Distributed Autonomous Vehicles)
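A second-order tracking differentiator with sigmoid corrective actions can be sketched as follows; the gains, the tanh shaping, and the velocity bound are all illustrative stand-ins for the paper's design, chosen only to show how a non-smooth reference is turned into a smooth trajectory with a bounded derivative:

```python
import math

def track(reference, h=0.01, k1=40.0, k2=12.0, v_max=2.0):
    """Second-order tracking differentiator with sigmoid (tanh) corrective
    actions: x1 tracks the possibly non-smooth reference, x2 is a smooth,
    bounded estimate of its derivative (|x2| <= v_max is enforced).
    Gains and structure are illustrative, not the paper's exact design."""
    x1, x2 = reference[0], 0.0
    out = []
    for r in reference:
        e = x1 - r                            # tracking error
        x1 += h * x2                          # integrate position
        x2 += h * (-k1 * math.tanh(3.0 * e) - k2 * math.tanh(x2 / v_max))
        x2 = max(-v_max, min(v_max, x2))      # enforce velocity constraint
        out.append((x1, x2))
    return out

# Smooth a unit step: x1 ramps to 1 while |x2| never exceeds v_max.
path = [0.0] * 10 + [1.0] * 190
smoothed = track(path)
```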

Article
Vector Control of PMSM Using TD3 Reinforcement Learning Algorithm
Algorithms 2023, 16(9), 404; https://doi.org/10.3390/a16090404 - 24 Aug 2023
Abstract
Permanent magnet synchronous motor (PMSM) drive systems are commonly utilized in mobile electric drive systems due to their high efficiency, high power density, and low maintenance cost. To reduce the tracking error of the permanent magnet synchronous motor, a reinforcement learning (RL) control algorithm based on the twin delayed deep deterministic policy gradient (TD3) algorithm is proposed. The physical modeling of the PMSM is carried out in Simulink, and the current controller regulating the id and iq axes in the current loop is replaced by a reinforcement learning controller. The optimal control network parameters were obtained through simulation learning, and the DDPG, BP, and LQG algorithms were simulated and compared under the same conditions. In the experimental part, the trained RL network was compiled into C code using a rapid control prototyping workflow and then downloaded to the controller for testing. The measured output signal is consistent with the simulation results, which shows that the algorithm can significantly reduce the tracking error under variable motor speed, giving the system a fast response.
(This article belongs to the Special Issue Algorithms in Evolutionary Reinforcement Learning)
