Search Results (890)

Search Parameters:
Journal = Computers

Article
Torus-Connected Toroids: An Efficient Topology for Interconnection Networks
Computers 2023, 12(9), 173; https://doi.org/10.3390/computers12090173 - 29 Aug 2023
Abstract
Recent supercomputers comprise hundreds of thousands of compute nodes, and sometimes millions; as such, they are massively parallel systems. Node interconnection is thus critical to maximising computing performance, and the torus topology has emerged as a popular solution to this crucial issue. This is the case, for example, for the interconnection network of the Fujitsu Fugaku, which was ranked world no. 1 until May 2022 and is world no. 2 at the time of writing. The number of dimensions used by the network topology of such torus-based interconnects stays rather low: it is three for the Fujitsu Fugaku's interconnect. As a result, the arity of the underlying torus topology must be greatly increased to connect the numerous compute nodes involved, eventually at the cost of a higher network diameter. To avoid such a dramatic rise in diameter, topologies can also combine several layers: such interconnects are called hierarchical interconnection networks (HINs). In this paper, which extends an earlier study, we propose a novel interconnect topology for massively parallel systems, torus-connected toroids (TCT). Its advantage over existing topologies is that while it retains the torus topology for its desirable properties, it combines it with an additional layer, toroids, to significantly lower the network diameter. We evaluate our proposal both theoretically and empirically and quantitatively compare it to conventional approaches, which the TCT topology is shown to outperform. Full article
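The diameter pressure described above can be illustrated with a short sketch (not from the paper; the TCT construction itself is not detailed in the abstract): in a k-ary n-dimensional torus, wrap-around links mean each dimension contributes at most ⌊k/2⌋ hops to a shortest path, so a low-dimensional torus connecting millions of nodes needs a large arity k and hence a large diameter.

```python
# Diameter of a k-ary n-dimensional torus: each of the n dimensions
# contributes at most floor(k / 2) hops thanks to the wrap-around links.
def torus_diameter(k: int, n: int) -> int:
    return n * (k // 2)

# A 3-D torus needs arity k = 100 to reach a million nodes (k**3),
# which already yields a diameter of 150 hops:
nodes, diameter = 100 ** 3, torus_diameter(100, 3)
print(nodes, diameter)  # 1000000 150
```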

Article
A New Linear Model for the Calculation of Routing Metrics in 802.11s Using ns-3 and RStudio
Computers 2023, 12(9), 172; https://doi.org/10.3390/computers12090172 - 28 Aug 2023
Abstract
Wireless mesh networks (WMNs) offer a pragmatic, cost-effective solution for provisioning ubiquitous broadband internet access and diverse telecommunication systems. The conceptual underpinning of mesh networks finds application not only in IEEE networks but also in 3GPP networks such as LTE and the low-power wide area networks (LPWANs) tailored for the burgeoning Internet of Things (IoT) landscape. IEEE 802.11s is the well-known de facto standard for WMNs; it defines the hybrid wireless mesh protocol (HWMP) as a layer-2 routing protocol and the airtime link metric (ALM). In this intricate landscape, artificial intelligence (AI) plays a prominent role in industry, particularly within the technology and telecommunication realms. This study presents a novel methodology for the computation of routing metrics, specifically the ALM. The methodology employs the network simulator ns-3 and RStudio as a statistical computing environment for data analysis. The former enabled the creation of scripts that elicit a variety of WMN scenarios, from which information is gathered and stored in databases. The latter (RStudio) takes this information, and at this point two linear predictions are supported: the first uses linear models (lm) and the second employs generalized linear models (glm). To conclude the process, statistical tests are applied to the original model as well as to the newly suggested ones. This work contributes in two substantial ways. The first is a methodological tool for the metric calculation of the HWMP protocol of the IEEE 802.11s standard, using lm and glm for the selection and validation of the model regressors; at this stage, the ANOVA and stepwise tools of RStudio are used. The second is a linear predictor that improves the WMN's performance as an a priori mechanism before use of the ns-3 simulator; the ANCOVA tool of RStudio is employed here. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
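As a rough illustration of the metric being modeled, the 802.11s airtime link metric has the form c_a = (O + Bt/r) · 1/(1 − ef). The sketch below is a simplified rendering of that formula, not the paper's predictor; the overhead and rate values in the usage example are placeholders.

```python
def airtime_link_metric(overhead_s: float, bt_bits: int, rate_bps: float, ef: float) -> float:
    """Airtime cost c_a = (O + Bt / r) * 1 / (1 - ef): O is channel access plus
    protocol overhead in seconds, Bt the test frame length in bits, r the link
    data rate in bit/s and ef the measured frame error rate."""
    return (overhead_s + bt_bits / rate_bps) / (1.0 - ef)

# A lossier link (higher ef) costs more airtime and is penalised by HWMP
# path selection (placeholder overhead/rate values):
good = airtime_link_metric(0.000335, 8192, 54e6, 0.05)
bad = airtime_link_metric(0.000335, 8192, 54e6, 0.50)
print(bad > good)  # True
```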

Article
Securing Financial Transactions with a Robust Algorithm: Preventing Double-Spending Attacks
Computers 2023, 12(9), 171; https://doi.org/10.3390/computers12090171 - 28 Aug 2023
Abstract
A zero-confirmation transaction is a transaction that has not yet been confirmed on the blockchain and is not yet part of it. The network propagates zero-confirmation transactions quickly, but they are not secured against double-spending attacks. In this study, the proposed method secures zero-confirmation transactions by using the Secure Hash Algorithm 512 (SHA-512) within the elliptic curve digital signature algorithm (ECDSA), instead of SHA-256, to generate a cryptographic identity for the transactions. The results show that SHA-512 offers better throughput than SHA-256 while also having a larger hash size, and that SHA-512 is more secure than SHA-256. Full article
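The hash-size difference between the two algorithms is easy to see with Python's standard hashlib; this illustrates the two hash functions only, not the paper's ECDSA pipeline.

```python
import hashlib

msg = b"zero-confirmation transaction payload"
d256 = hashlib.sha256(msg).hexdigest()
d512 = hashlib.sha512(msg).hexdigest()
# SHA-256 yields a 256-bit digest (64 hex chars); SHA-512 a 512-bit one.
print(len(d256) * 4, len(d512) * 4)  # 256 512
```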

Article
Evaluating User Satisfaction Using Deep-Learning-Based Sentiment Analysis for Social Media Data in Saudi Arabia’s Telecommunication Sector
Computers 2023, 12(9), 170; https://doi.org/10.3390/computers12090170 - 26 Aug 2023
Abstract
Social media has become a common means to convey opinions and express satisfaction or dissatisfaction with a service or product. In the Kingdom of Saudi Arabia specifically, most social media users share positive and negative opinions about services and products, especially communication services, which are among the most important services citizens use to communicate with the world. This research aimed to analyse and measure user satisfaction with the services provided by the Saudi Telecom Company (STC), Mobily, and Zain. This type of sentiment analysis is an important measure used to make business decisions that increase customer loyalty and satisfaction. In this study, the authors developed advanced methods based on deep learning (DL) to analyse and reveal the percentage of customer satisfaction using the publicly available AraCust dataset. Several DL models were utilised, including long short-term memory (LSTM), gated recurrent unit (GRU), and BiLSTM. The LSTM model achieved the highest performance in text classification, with a 98.04% training accuracy and a 97.03% test score. The study addresses a key challenge facing telecommunications companies: customer dissatisfaction with the provided services influences customers' decisions. Full article
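As a reminder of the gating mechanism behind the LSTM models used in studies like this one, a scalar toy LSTM cell step can be sketched as follows. This is illustrative only; the study's actual models operate on learned word embeddings with vector-valued gates and biases.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM cell step. w maps gate name -> (input weight,
    recurrent weight); biases omitted for brevity."""
    f = sigmoid(w["wf"][0] * x + w["wf"][1] * h_prev)          # forget gate
    i = sigmoid(w["wi"][0] * x + w["wi"][1] * h_prev)          # input gate
    c_tilde = math.tanh(w["wc"][0] * x + w["wc"][1] * h_prev)  # candidate state
    c = f * c_prev + i * c_tilde                               # new cell state
    o = sigmoid(w["wo"][0] * x + w["wo"][1] * h_prev)          # output gate
    h = o * math.tanh(c)                                       # new hidden state
    return h, c
```

Stacking this step over a token sequence and feeding the final hidden state to a classifier layer is the usual sentiment-classification recipe.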

Article
Digital Competence of Higher Education Teachers at a Distance Learning University in Portugal
Computers 2023, 12(9), 169; https://doi.org/10.3390/computers12090169 - 24 Aug 2023
Abstract
The Digital Education Action Plan (2021–2027) launched by the European Commission aims to revolutionize education systems, prioritizing the development of a robust digital education ecosystem and the enhancement of teachers’ digital transformation skills. This study focuses on Universidade Aberta, Portugal, to identify the strengths and weaknesses of teachers’ digital skills within the Digital Competence Framework for Educators (DigCompEdu). Using a quantitative approach, the research utilized the DigCompEdu CheckIn self-assessment questionnaire, validated for the Portuguese population, to evaluate teachers’ perceptions of their digital competences. A total of 118 teachers participated in the assessment. Findings revealed that the teachers exhibited a notably high overall level of digital competence, positioned at the intersection of B2 (Expert) and C1 (Leader) on the DigCompEdu scale. However, specific areas for improvement were identified, particularly in Digital Technologies Resources and Assessment, the core pedagogical components of DigCompEdu, which displayed comparatively lower proficiency levels. To ensure continuous progress and alignment with the Digital Education Action Plan’s strategic priorities, targeted teacher training initiatives should focus on enhancing competences related to Digital Technologies Resources and Assessment. Full article

Article
Kids Surfing the Web: A Comparative Study in Portugal
Computers 2023, 12(9), 168; https://doi.org/10.3390/computers12090168 - 23 Aug 2023
Abstract
The conditions for safe Internet access and the development of skills enabling full participation in online environments are recognized in the Council of Europe’s strategy for child rights, from 2022. The guarantee of this right has implications for experiences inside and outside the school context. Therefore, this study aims to compare the perceptions of students from different educational levels, who participated in a digital storytelling workshop, regarding online safety, searching habits, and digital competences. Data were collected through a questionnaire survey completed by 84 Portuguese students from elementary and secondary schools. A non-parametric multivariate analysis of variance was used to identify differences as children advanced across educational stages. The results revealed that secondary students tended to spend more time online and demonstrated more advanced search skills. Interestingly, the youngest children exhibited higher competences in creating games and practicing safety measures regarding online postings. These findings emphasize the importance of schools, in a joint action with the educational community, including parents, teachers and students, in developing a coordinated and vertically integrated approach to digital education that considers the children’s current knowledge, attitudes, and skills as a starting point for pedagogical intervention. Full article
Article
Brain Pathology Classification of MR Images Using Machine Learning Techniques
Computers 2023, 12(8), 167; https://doi.org/10.3390/computers12080167 - 19 Aug 2023
Abstract
A brain tumor is essentially a collection of aberrant tissues, so it is crucial to classify brain tumors using MRI before beginning therapy. Tumor segmentation and classification from brain MRI scans using machine learning techniques are widely recognized as challenging and important tasks. The potential applications of machine learning in diagnostics, preoperative planning, and postoperative evaluations are substantial, and accurate determination of the tumor's location on a brain MRI is of paramount importance. The advancement of precise machine learning classifiers and other technologies will enable doctors to detect malignancies without requiring invasive procedures. The steps involved in detecting a brain tumor and measuring its size and form are pre-processing, skull stripping, and tumor segmentation. Because CNN models eventually overfit the large number of training images used to train them, this study uses a deep CNN with transfer learning. A CNN-based ReLU architecture and an SVM with fused features retrieved via HOG and LBP are used to classify brain MRI tumors (glioma or meningioma). The method's efficacy is measured in terms of precision, recall, F-measure, and accuracy. This study showed that the accuracy of the SVM with combined LBP and HOG features is 97%, and that of the deep CNN is 98%. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)

Article
Enhancing Data Security: A Cutting-Edge Approach Utilizing Protein Chains in Cryptography and Steganography
Computers 2023, 12(8), 166; https://doi.org/10.3390/computers12080166 - 19 Aug 2023
Abstract
Nowadays, with the increase in cyber-attacks, hacking, and data theft, maintaining data security and confidentiality is of paramount importance. Several techniques are used in cryptography and steganography to keep information safe during its transfer between two parties, without interference from an unauthorized third party. This paper proposes a modern approach to cryptography and steganography based on exploiting a new environment: protein bases and protein chains, used to encrypt and hide sensitive data. The protein bases are used to form a cipher key whose length is twice that of the data to be encrypted. During encryption, the plain data and the cipher key are represented in several forms, including hexadecimal and binary, and several arithmetic operations are performed on them; logic gates are also used to increase the randomness of the encrypted data. The protein chains serve as a cover to hide the encrypted data. Hiding inside the protein bases is performed in a sophisticated manner that is undetectable by statistical analysis methods: each byte is fragmented into three groups of bits in a special order, and each group is embedded in one specific protein base allocated to that group only, depending on bit classifications previously stored in special databases. Each byte of the encrypted data is thus hidden in three protein bases, and these bases are distributed randomly over the protein chain according to an equation designed for this purpose. The advantages of the proposed algorithms are that they are fast at encrypting and hiding data, scalable (i.e., insensitive to the size of the plain data), and lossless. The experiments showed that the proposed cryptography algorithm outperforms the most recent algorithms, with entropy and correlation values reaching 7.99941 and −0.6778, respectively, and the proposed steganography algorithm has the highest payload (2.666) among five well-known hiding algorithms that use DNA sequences as the data cover. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
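The byte-fragmentation step can be sketched as follows; the 3/3/2 bit split is an assumption for illustration, since the abstract does not specify the exact grouping or the base-assignment databases.

```python
def split_byte(b: int):
    # Assumed 3/3/2 split: three bit groups, each destined for one
    # dedicated protein base (hypothetical grouping, for illustration).
    return (b >> 5) & 0b111, (b >> 2) & 0b111, b & 0b11

def join_byte(g1: int, g2: int, g3: int) -> int:
    return (g1 << 5) | (g2 << 2) | g3

# The split round-trips every byte losslessly, matching the "lossless" claim:
assert all(join_byte(*split_byte(b)) == b for b in range(256))
```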

Article
Pm2.5 Time Series Imputation with Deep Learning and Interpolation
Computers 2023, 12(8), 165; https://doi.org/10.3390/computers12080165 - 16 Aug 2023
Abstract
Commonly, regression for time series imputation has been implemented directly through regression models and statistical, machine learning, and deep learning techniques. In this work, a novel approach is proposed based on a classification model that determines the class of the NA value, from which one of two types of interpolation is applied: polynomial or flipped polynomial. An hourly PM2.5 time series from Ilo City in southern Peru was chosen as a study case. The results show that for gaps of one NA value, the proposal in most cases outperforms techniques such as ARIMA, LSTM, BiLSTM, GRU, and BiGRU; on average, in terms of R2, the proposal exceeds the implemented benchmark models by between 2.4341% and 19.96%. Supported by these results, the proposal constitutes a good alternative for short-gap imputation in PM2.5 time series. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
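A minimal sketch of single-gap polynomial interpolation, in plain Lagrange form; the paper's classifier and its polynomial/flipped-polynomial variants are not reproduced here, and the series below is a toy example.

```python
def lagrange_impute(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = float(yi)
        for xj in xs[:i] + xs[i + 1:]:
            term *= (x - xj) / (xi - xj)
        total += term
    return total

# An hourly PM2.5-like series with one missing (NA) value at index 3:
series = [0, 1, 4, None, 16, 25]
xs, ys = zip(*[(i, v) for i, v in enumerate(series) if v is not None])
print(lagrange_impute(xs, ys, 3))  # ~9.0: exact for this quadratic toy series
```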

Article
Requirement Change Prediction Model for Small Software Systems
Computers 2023, 12(8), 164; https://doi.org/10.3390/computers12080164 - 14 Aug 2023
Abstract
The software industry plays a vital role in driving technological advancements. Software projects are complex and consist of many components, so change is unavoidable in these projects. Changes in software requirements must be predicted early to preserve resources, since late changes can lead to project failure. This work focuses on small-scale software systems in which requirements change gradually. It provides a probabilistic prediction model that predicts the probability of changes in software requirement specifications. The first part of the work analyses changes in software requirements due to certain variables, drawing on stakeholders, developers, and experts via a questionnaire. The proposed model then incorporates their knowledge into a Bayesian network as conditional probabilities of independent and dependent variables, and uses the variable elimination method to obtain the posterior probability of revisions in the software requirement document. The model was evaluated by sensitivity analysis and comparison methods. For a given dataset, the proposed model computed the probability of low-state revisions as 0.42 and of high-state revisions as 0.45. The results show that the proposed approach predicts changes in the requirements document accurately, outperforming existing models. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
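The variable-elimination step can be illustrated on a toy two-node network; all probabilities below are made-up placeholders, since the paper's network structure and CPTs come from its questionnaire study.

```python
# Hypothetical two-node network: ProjectComplexity -> RequirementChange.
# All probabilities are illustrative placeholders, not the paper's CPTs.
p_complex = {True: 0.3, False: 0.7}          # prior P(Complexity)
p_change_given = {True: 0.45, False: 0.42}   # P(Change = high | Complexity)

# Variable elimination here reduces to summing out Complexity:
p_change = sum(p_complex[c] * p_change_given[c] for c in (True, False))
print(round(p_change, 4))  # 0.429
```

With more variables, variable elimination repeats this sum-out step factor by factor instead of enumerating the full joint distribution.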

Article
Timing and Performance Metrics for TWR-K70F120M Device
Computers 2023, 12(8), 163; https://doi.org/10.3390/computers12080163 - 14 Aug 2023
Abstract
Currently, single-board computers (SBCs) are sufficiently powerful to run real-time operating systems (RTOSs) and applications. The purpose of this research was to investigate the timing performance of an NXP TWR-K70F120M device running μClinux on concurrently executing tasks with real-time features and constraints, and to provide new technical data not yet available in the literature. Towards this goal, a custom-built multithreaded application with specific compute-intensive sorting and matrix operations was developed and used to obtain measurements for specific timing metrics, including task execution time, thread waiting time, and response time. In this way, this research extends the literature by documenting performance results on these timing metrics. The performance of the device was additionally benchmarked and validated against commonly used platforms, the Raspberry Pi 4 and BeagleBone AI SBCs. The experimental results showed that the device stands up well in terms of both timing and efficiency metrics: execution times were lower than on the other platforms by approximately 56% with two threads and by 29% with 32-thread configurations. The outcomes could be of practical value to companies intending to use such low-cost embedded devices to develop reliable real-time industrial applications. Full article
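The timing metrics named above (per-task execution time and end-to-end response time) can be sketched in a portable way; this is a generic illustration, not the study's μClinux benchmark, and the workload size is arbitrary.

```python
import threading
import time

def timed(task, results, key):
    start = time.perf_counter()
    task()
    results[key] = time.perf_counter() - start      # per-thread execution time

def sort_task():
    sorted(range(50_000, 0, -1))                    # compute-bound sorting workload

results = {}
threads = [threading.Thread(target=timed, args=(sort_task, results, i))
           for i in range(2)]
wall_start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
response_time = time.perf_counter() - wall_start    # end-to-end response time
```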

Article
Face Detection Using a Capsule Network for Driver Monitoring Application
Computers 2023, 12(8), 161; https://doi.org/10.3390/computers12080161 - 12 Aug 2023
Abstract
Bus driver distraction and cognitive load lead to higher accident risk. Driver distraction sources and complex physical and psychological effects must be recognized and analyzed in real-world driving conditions to reduce risk and enhance overall road safety. The implementation of a camera-based system utilizing computer vision for face recognition emerges as a highly viable and effective driver monitoring approach applicable in public transport. Reliable, accurate, and unnoticeable software solutions need to be developed to reach the appropriate robustness of the system. The reliability of data recording depends mainly on external factors, such as vibration, camera lens contamination, lighting conditions, and other optical performance degradations. The current study introduces Capsule Networks (CapsNets) for image processing and face detection tasks. The authors’ goal is to create a fast and accurate system compared to state-of-the-art Neural Network (NN) algorithms. Based on the seven tests completed, the authors’ solution outperformed the other networks in terms of performance degradation in six out of seven cases. The results show that the applied capsule-based solution performs well, and the degradation in efficiency is noticeably smaller than for the presented convolutional neural networks when adversarial attack methods are used. From an application standpoint, ensuring the security and effectiveness of an image-based driver monitoring system relies heavily on the mitigation of disruptive occurrences, commonly referred to as “image distractions,” which represent attacks on the neural network. Full article

Article
Stochastic Modeling for Intelligent Software-Defined Vehicular Networks: A Survey
Computers 2023, 12(8), 162; https://doi.org/10.3390/computers12080162 - 12 Aug 2023
Abstract
Digital twins and the Internet of Things (IoT) have gained significant research attention in recent years due to their potential advantages in various domains, and vehicular ad hoc networks (VANETs) are one such application. VANETs can provide a wide range of services for passengers and drivers, including safety, convenience, and information. The dynamic nature of these environments poses several challenges, including intermittent connectivity, quality of service (QoS), and heterogeneous applications. Combining intelligent technologies and software-defined networking (SDN) with VANETs, termed intelligent software-defined vehicular networks (iSDVNs), addresses these challenges. Several studies have been published in this context, and we summarize their benefits and limitations. We also survey stochastic modeling and performance analysis for iSDVNs and the uses of machine-learning algorithms through digital twin networks (DTNs), which are also part of iSDVNs. We first present a taxonomy of SDVN architectures based on their modes of operation. Next, we survey and classify the state-of-the-art iSDVN routing protocols, stochastic computations, and resource allocations. As SDN evolves, its complexity increases, posing a significant challenge to efficient network management; digital twins offer a promising solution to these challenges. This paper explores the relationship between digital twins and SDN and proposes a novel approach to improve network management in SDN environments by extending digital twin capabilities. We analyze the pitfalls of state-of-the-art iSDVN protocols and compare them in tables. Finally, we summarize several challenges faced by current iSDVNs and possible future directions for making iSDVNs autonomous. Full article

Review
Medical Image Encryption: A Comprehensive Review
Computers 2023, 12(8), 160; https://doi.org/10.3390/computers12080160 - 11 Aug 2023
Abstract
In medical information systems, image data can be considered crucial information. As imaging technology and methods for analyzing medical images advance, there will be a greater wealth of data available for study. Hence, protecting those images is essential. Image encryption methods are crucial in multimedia applications for ensuring the security and authenticity of digital images. Recently, the encryption of medical images has garnered significant attention from academics due to concerns about the safety of medical communication. Advanced approaches, such as e-health, smart health, and telemedicine applications, are employed in the medical profession. This has highlighted the issue that medical images are often produced and shared online, necessitating protection against unauthorized use. Full article

Article
Genetic Approach to Improve Cryptographic Properties of Balanced Boolean Functions Using Bent Functions
Computers 2023, 12(8), 159; https://doi.org/10.3390/computers12080159 - 9 Aug 2023
Abstract
Recently, balanced Boolean functions with an even number n of variables achieving very good autocorrelation properties have been obtained for 12 ≤ n ≤ 26. These functions attain a maximum absolute value in the autocorrelation spectrum (without considering the zero point) of less than 2^(n/2) and are found by a heuristic search algorithm based on the design method of an infinite class of such functions for a higher number of variables. Here, we consider balanced Boolean functions that are closest to the bent functions in terms of Hamming distance and run a genetic algorithm to efficiently optimize their cryptographic properties, which provides better absolute indicator values for all of those values of n for the first time. We also observe that, among our results, the functions for 16 ≤ n ≤ 26 have nonlinearity greater than 2^(n−1) − 2^(n/2). In the process, our search strategy produces balanced Boolean functions with the best-known nonlinearity for 8 ≤ n ≤ 16. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
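Nonlinearity, the quantity bounded above, is the distance to the nearest affine function and is computed from the Walsh-Hadamard spectrum as NL(f) = 2^(n−1) − max|W_f|/2. A small sketch of this standard textbook computation (not the paper's genetic algorithm):

```python
def walsh_hadamard(signs):
    """Fast Walsh-Hadamard transform of a +/-1 sequence of length 2^n."""
    f = list(signs)
    h = 1
    while h < len(f):
        for i in range(0, len(f), h * 2):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

def nonlinearity(truth_table):
    """NL(f) = 2^(n-1) - max|W_f| / 2 for a Boolean function given as 0/1 bits."""
    signs = [(-1) ** b for b in truth_table]
    return (len(truth_table) - max(abs(w) for w in walsh_hadamard(signs))) // 2

# The 2-variable AND function x1*x2 is bent, with nonlinearity 1:
print(nonlinearity([0, 0, 0, 1]))  # 1
```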
