Search Results (3,360)

Search Parameters:
Journal = Information

Article
GSCEU-Net: An End-to-End Lightweight Skin Lesion Segmentation Model with Feature Fusion Based on U-Net Enhancements
Information 2023, 14(9), 486; https://doi.org/10.3390/info14090486 - 01 Sep 2023
Abstract
Accurate segmentation of lesions can provide strong evidence for early skin cancer diagnosis by doctors, enabling timely treatment of patients and effectively reducing cancer mortality rates. In recent years, some deep learning models have utilized complex modules to improve their performance for skin disease image segmentation. However, limited computational resources have hindered their practical application in clinical environments. To address this challenge, this paper proposes a lightweight model, named GSCEU-Net, which achieves superior skin lesion segmentation performance at a lower cost. GSCEU-Net is based on the U-Net architecture with additional enhancements. Firstly, the partial convolution (PConv) module proposed by the FasterNet model is modified into a Separate Convolution (SConv) module, which enables channel segmentation paths of different scales. Secondly, a newly designed Ghost SConv (GSC) module is incorporated into the model’s backbone, where the SConv module is aided by a Multi-Layer Perceptron (MLP) and the output path residuals from the Ghost module. Finally, the Efficient Channel Attention (ECA) mechanism is incorporated at different levels into the decoding part of the model. The segmentation performance of the proposed model is evaluated on two public datasets (ISIC2018 and PH2) and a private dataset. Compared to U-Net, the proposed model achieves an IoU improvement of 0.0261 points and a DSC improvement of 0.0164 points, while reducing the parameter count by a factor of 190 and the computational complexity by a factor of 170. Compared to other existing segmentation models, the proposed GSCEU-Net model also demonstrates superiority, along with an advanced balance between the number of parameters, complexity, and segmentation performance. Full article
(This article belongs to the Special Issue Computer Vision for Biomedical Image Processing)
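For readers unfamiliar with the Efficient Channel Attention (ECA) mechanism the abstract mentions, the following is a minimal PyTorch sketch of a generic ECA block (global average pooling followed by a 1-D convolution across channels). It is not the authors' GSCEU-Net code, and the kernel size k_size=3 is an assumption.

import torch
import torch.nn as nn

class ECA(nn.Module):
    """Generic Efficient Channel Attention block: a 1-D convolution over pooled channel descriptors."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> per-channel descriptor of shape (N, C, 1, 1)
        y = self.avg_pool(x)
        # Treat the channel dimension as a 1-D sequence so neighbouring channels interact.
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        # Re-weight the input feature map channel-wise.
        return x * self.sigmoid(y)

print(ECA()(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 32, 32])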

Article
Social Media Analytics on Russia–Ukraine Cyber War with Natural Language Processing: Perspectives and Challenges
Information 2023, 14(9), 485; https://doi.org/10.3390/info14090485 - 31 Aug 2023
Abstract
Utilizing social media data is imperative in comprehending critical insights on the Russia–Ukraine cyber conflict due to their unparalleled capacity to provide real-time information dissemination, thereby enabling the timely tracking and analysis of cyber incidents. The vast array of user-generated content on these platforms, ranging from eyewitness accounts to multimedia evidence, serves as an invaluable resource for corroborating and contextualizing cyber attacks, facilitating the attribution of malicious actors. Furthermore, social media data afford unique access to public sentiment, the propagation of propaganda, and emerging narratives, offering profound insights into the effectiveness of information operations and shaping counter-messaging strategies. However, hardly any studies on the Russia–Ukraine cyber war have harnessed social media analytics. This paper presents a comprehensive analysis of the crucial role of social-media-based cyber intelligence in understanding Russia’s cyber threats during the ongoing Russo–Ukrainian conflict. It introduces an innovative multidimensional cyber intelligence framework and utilizes Twitter data to generate cyber intelligence reports. By leveraging advanced monitoring tools and NLP algorithms, such as language detection, translation, sentiment analysis, term frequency–inverse document frequency (TF-IDF), latent Dirichlet allocation (LDA), Porter stemming, n-grams, and others, this study automatically generated cyber intelligence for Russia and Ukraine. Using 37,386 tweets originating from 30,706 users in 54 languages from 13 October 2022 to 6 April 2023, this paper reports the first detailed multilingual analysis of the Russia–Ukraine cyber crisis in four cyber dimensions (geopolitical and socioeconomic; targeted victim; psychological and societal; and national priority and concerns). It also highlights the challenges faced in harnessing reliable social-media-based cyber intelligence. Full article
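As a rough illustration of the kind of TF-IDF weighting and LDA topic modelling the abstract lists, here is a small scikit-learn sketch on placeholder tweets; the corpus, topic count, and library choice are assumptions and do not reproduce the study's pipeline.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [  # placeholder tweets, not the study's data
    "DDoS attack reported against a Ukrainian bank website",
    "Phishing campaign targets energy sector employees",
    "Wiper malware sample attributed to a state-backed group",
    "Power grid operators warn of renewed cyber attacks",
]

# TF-IDF term weighting highlights distinctive vocabulary per tweet.
tfidf_matrix = TfidfVectorizer(stop_words="english").fit_transform(tweets)

# LDA topic modelling operates on raw term counts.
counts = CountVectorizer(stop_words="english").fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)
print(doc_topics.round(2))   # per-tweet topic mixture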

Editorial
Social Media Mining and Analysis: A Brief Review of Recent Challenges
Information 2023, 14(9), 484; https://doi.org/10.3390/info14090484 - 31 Aug 2023
Viewed by 144
Abstract
Social media platforms are a type of web-based application built on the conceptual and technical underpinnings of Web 2 [...] Full article
(This article belongs to the Special Issue Recent Advances in Social Media Mining and Analysis)
Review
Linked Data Interfaces: A Survey
Information 2023, 14(9), 483; https://doi.org/10.3390/info14090483 - 30 Aug 2023
Viewed by 139
Abstract
In the era of big data, linked data interfaces play a critical role in enabling access to and management of large-scale, heterogeneous datasets. This survey investigates forty-seven interfaces developed by the semantic web community in the context of the Web of Linked Data, displaying information about general topics and digital library contents. The interfaces are classified based on their interaction paradigm, the type of information they display, and the complexity reduction strategies they employ. The main purpose is to categorize the large number of available tools so that comparing them becomes feasible and valuable. The analysis reveals that most interfaces use a hybrid interaction paradigm combining browsing, searching, and displaying information in lists or tables. Complexity reduction strategies, such as faceted search and summary visualization, are also identified. Emerging trends in linked data interfaces focus on user-centric design and advancements in semantic annotation methods, leveraging machine learning techniques for data enrichment and retrieval. Additionally, an interactive platform is provided to explore and compare data on the analyzed tools. Overall, there is no one-size-fits-all solution for developing linked data interfaces, and tailoring the interaction paradigm and complexity reduction strategies to specific user needs is essential. Full article
(This article belongs to the Special Issue Multidimensional Data Structures and Big Data Management)
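Many of the surveyed linked data interfaces sit on top of SPARQL endpoints; as a hedged illustration of how such an endpoint is queried programmatically, the sketch below uses SPARQLWrapper against DBpedia. Both the endpoint and the query are illustrative assumptions, not tools evaluated in the survey.

from SPARQLWrapper import SPARQLWrapper, JSON

# Query a public Linked Data endpoint (DBpedia) for a few labelled resources.
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?lang ?label WHERE {
        ?lang a dbo:ProgrammingLanguage ;
              rdfs:label ?label .
        FILTER (lang(?label) = "en")
    } LIMIT 5
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])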

Article
Unveiling the Confirmation Factors of Information System Quality on Continuance Intention towards Online Cryptocurrency Exchanges: The Extension of the Expectation Confirmation Model
Information 2023, 14(9), 482; https://doi.org/10.3390/info14090482 - 29 Aug 2023
Viewed by 118
Abstract
This study draws on the Expectation Confirmation Model and the Information System Success Model to evaluate the influence of perceived usefulness and satisfaction on continuance intention towards online cryptocurrency exchanges. To this end, it deconstructs the “confirmation” component of the information system continuous use model into three different components: confirmation of information quality, confirmation of system quality, and confirmation of service quality, to investigate the factors that influence the intention to use online cryptocurrency exchanges continuously. This research used a questionnaire methodology, with data collected from 248 users of cryptocurrency platforms. The study found that perceived usefulness and satisfaction significantly correlated with continuance intention. Furthermore, information quality, system quality, and service quality significantly correlated with perceived usefulness and satisfaction. Finally, perceived usefulness was found to be significantly correlated with satisfaction. Full article
(This article belongs to the Special Issue Blockchain, Technology and Its Application)

Article
Formal Template-Based Generation of Attack–Defence Trees for Automated Security Analysis
Information 2023, 14(9), 481; https://doi.org/10.3390/info14090481 - 29 Aug 2023
Viewed by 143
Abstract
Systems that integrate cyber and physical aspects to create cyber-physical systems (CPS) are becoming increasingly complex, but demonstrating the security of CPS is hard and security is frequently compromised. These compromises can lead to safety failures, putting lives at risk. Attack Defense Trees with sequential conjunction (ADS) are an approach to identifying attacks on a system and identifying the interaction between attacks and the defenses that are present within the CPS. We present a semantic model for ADS and propose a methodology for generating ADS automatically. The methodology takes as input a CPS system model and a library of templates of attacks and defenses. We demonstrate and validate the effectiveness of the ADS generation methodology using an example from the automotive domain. Full article
(This article belongs to the Special Issue Automotive System Security: Recent Advances and Challenges)
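To make the notion of an attack–defence tree with sequential conjunction more concrete, here is a minimal, illustrative Python sketch of such a tree and a Boolean success check; it is not the paper's semantic model or generation methodology, and the automotive leaves are invented examples.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    kind: str = "leaf"          # "leaf", "AND", "OR", "SAND"
    children: List["Node"] = field(default_factory=list)
    defended: bool = False       # True if a defence counters this leaf

def attack_succeeds(node: Node) -> bool:
    if node.kind == "leaf":
        return not node.defended
    results = [attack_succeeds(c) for c in node.children]
    if node.kind == "OR":
        return any(results)
    # AND and SAND both need every child; SAND additionally fixes the order,
    # which this simple Boolean evaluation does not model.
    return all(results)

tree = Node("compromise ECU", "OR", [
    Node("spoof CAN frames", "SAND", [
        Node("gain bus access"),
        Node("inject frames", defended=True),   # defence: message authentication
    ]),
    Node("exploit OTA update"),
])
print(attack_succeeds(tree))   # True: the undefended OTA path remains open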

Article
The Impact of Digital Business on Energy Efficiency in EU Countries
Information 2023, 14(9), 480; https://doi.org/10.3390/info14090480 - 29 Aug 2023
Viewed by 162
Abstract
Digital business plays a crucial role in driving energy efficiency and sustainability by enabling innovative solutions such as smart grid technologies, data analytics for energy optimization, and remote monitoring and control systems. Through digitalization, businesses can streamline processes, minimize energy waste, and make informed decisions that lead to more efficient resource utilization and reduced environmental impact. This paper analyzes the nature of digital business’s impact on energy efficiency in order to outline relevant instruments for unleashing EU countries’ potential to attain sustainable development. The study applies the panel-corrected standard errors technique to estimate the effect of digital business on energy efficiency for the EU countries in 2011–2020. The findings show that digital business has a significant negative effect on energy intensity, implying that increased digital business leads to decreased energy intensity. Additionally, digital business practices positively contribute to reducing CO2 emissions and promoting renewable energy, although the impact on final energy consumption varies across different indicators. The findings underscore the significance of integrating digital business practices to improve energy efficiency, lower energy intensity, and advance the adoption of renewable energy sources within the EU. Policymakers and businesses should prioritize the adoption of digital technologies and e-commerce strategies to facilitate sustainable energy transitions and accomplish environmental objectives. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data Applications)
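As a sketch of what a panel regression of energy intensity on digital business activity can look like in code, the snippet below uses the linearmodels package on made-up data; clustered standard errors stand in for the paper's panel-corrected standard errors (PCSE), so this is an approximation of the method, not a reproduction.

import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS   # assumption: linearmodels, not the authors' toolchain

# Toy panel: 5 countries x 10 years of made-up data, mimicking the 2011-2020 EU setup.
rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product([[f"C{i}" for i in range(5)], range(2011, 2021)],
                                 names=["country", "year"])
df = pd.DataFrame({"digital_business": rng.normal(size=50)}, index=idx)
df["energy_intensity"] = -0.3 * df["digital_business"] + rng.normal(scale=0.5, size=50)

# Country fixed effects; clustered standard errors approximate, but do not implement, PCSE.
model = PanelOLS.from_formula("energy_intensity ~ digital_business + EntityEffects", data=df)
print(model.fit(cov_type="clustered", cluster_entity=True).summary)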

Review
Considerations, Advances, and Challenges Associated with the Use of Specific Emitter Identification in the Security of Internet of Things Deployments: A Survey
Information 2023, 14(9), 479; https://doi.org/10.3390/info14090479 - 29 Aug 2023
Viewed by 161
Abstract
Initially introduced almost thirty years ago for the express purpose of providing electronic warfare systems with the capability to detect, characterize, and identify radar emitters, Specific Emitter Identification (SEI) has recently received a lot of attention within the research community as a physical layer technique for securing Internet of Things (IoT) deployments. This attention is largely due to SEI’s demonstrated success in passively and uniquely identifying wireless emitters using traditional machine learning and the success of Deep Learning (DL) within the natural language processing and computer vision areas. SEI exploits distinct and unintentional features present within an emitter’s transmitted signals. These distinctive and unintentional features are attributed to slight manufacturing and assembly variations within and between the components, sub-systems, and systems comprising an emitter’s Radio Frequency (RF) front end. Although sufficient to facilitate SEI, these features do not hinder normal operations such as detection, channel estimation, timing, and demodulation. However, despite the plethora of SEI publications, SEI has remained largely an academic endeavor, focused mainly on proof-of-concept demonstrations, with little to no use in operational networks for various reasons. The focus of this survey is a review of SEI publications from the perspective of its use as a practical, effective, and usable IoT security mechanism; thus, we use IoT requirements and constraints (e.g., wireless standard, nature of their deployment) as a lens through which each reviewed paper is analyzed. Previous surveys have not taken such an approach and have only used IoT as motivation, a setting, or a context. In this survey, we consider operating conditions, SEI threats, SEI at scale, publicly available data sets, and SEI considerations that are dictated by the fact that it is to be employed by IoT devices or IoT infrastructure. Full article
(This article belongs to the Special Issue IoT-Based Systems for Safe and Secure Smart Cities)
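The following toy sketch conveys the core SEI idea described above: unintentional, emitter-specific RF impairments (here, a simulated carrier frequency offset) leave statistical fingerprints in received IQ samples that an ordinary classifier can learn. The offsets and features are assumptions for illustration only; real SEI feature sets are far richer.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def burst(freq_offset, n=1024):
    # Complex baseband burst with an emitter-specific frequency offset plus noise.
    k = np.arange(n)
    return np.exp(2j * np.pi * freq_offset * k) + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

def features(iq):
    phase = np.unwrap(np.angle(iq))
    return [np.abs(iq).std(), phase.std(), np.mean(np.diff(phase))]

offsets = {0: 1e-3, 1: 2e-3}               # two "emitters" with distinct impairments
X = [features(burst(offsets[label])) for label in [0, 1] * 100]
y = [0, 1] * 100
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))                      # near-perfect separation on this toy data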

Protocol
A Subjective Logical Framework-Based Trust Model for Wormhole Attack Detection and Mitigation in Low-Power and Lossy (RPL) IoT-Networks
Information 2023, 14(9), 478; https://doi.org/10.3390/info14090478 - 29 Aug 2023
Viewed by 245
Abstract
The increasing use of wireless communication and IoT devices has raised concerns about security, particularly with regard to attacks on the Routing Protocol for Low-Power and Lossy Networks (RPL), such as the wormhole attack. In this study, the authors build on PCC-RPL (Parental Change Control RPL), which prevents unsolicited parent changes among communicating nodes in IoT networks by utilizing the concept of trust. The aim of this study is to make the RPL protocol more secure by using a Subjective Logic Framework-based trust model to detect and mitigate a wormhole attack. The study evaluates the designed trust-based framework, known as SLF-RPL (Subjective Logical Framework-Routing Protocol for Low-Power and Lossy Networks), over various key parameters, i.e., low energy consumption, packet loss ratio, and attack detection rate. The experiments were conducted using a Contiki OS-based Cooja network simulator with 30, 60, and 90 nodes and a 1:10 malicious node ratio, and the results were compared with the existing PCC-RPL protocol. The results show that the proposed SLF-RPL framework achieves higher energy efficiency at the node level (0.0504 J to 0.0728 J out of 1 J, versus 0.065 J to 0.0963 J for PCC-RPL), a packet loss ratio decreased by 16% at the node level, and an increased attack detection rate at the network level from 0.42 to 0.55 in comparison with PCC-RPL. Full article
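For context on the subjective-logic machinery behind such a trust model, here is a minimal sketch of a binomial opinion (belief, disbelief, uncertainty, base rate) and its projected probability, using the standard evidence mapping with a prior weight of 2; the probe counts and thresholding idea are illustrative assumptions, not the SLF-RPL implementation.

from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def expected_probability(self) -> float:
        # Projected probability E = b + a * u, usable as a trust score.
        return self.belief + self.base_rate * self.uncertainty

# A candidate parent that answered r = 8 of 10 probes; s = 2 failures.
# Standard evidence-to-opinion mapping: b = r/(r+s+2), d = s/(r+s+2), u = 2/(r+s+2).
r, s = 8, 2
op = Opinion(r / (r + s + 2), s / (r + s + 2), 2 / (r + s + 2))
print(round(op.expected_probability(), 3))   # 0.75; flag the parent if this drops below a threshold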

Article
A Secure and Privacy-Preserving Blockchain-Based XAI-Justice System
Information 2023, 14(9), 477; https://doi.org/10.3390/info14090477 - 28 Aug 2023
Viewed by 200
Abstract
Pursuing “intelligent justice” necessitates an impartial, productive, and technologically driven methodology for judicial determinations. This scholarly composition proposes a framework that harnesses Artificial Intelligence (AI) innovations such as Natural Language Processing (NLP), ChatGPT, ontological alignment, and the semantic web, in conjunction with blockchain and privacy techniques, to examine, deduce, and proffer recommendations for the administration of justice. Specifically, through the integration of blockchain technology, the system affords a secure and transparent infrastructure for the management of legal documentation and transactions while preserving data confidentiality. Privacy approaches, including differential privacy and homomorphic encryption techniques, are further employed to safeguard sensitive data and uphold discretion. The advantages of the suggested framework encompass heightened efficiency and expediency, diminished error propensity, a more uniform approach to judicial determinations, and augmented security and privacy. Additionally, by utilizing explainable AI methodologies, the ethical and legal ramifications of deploying intelligent algorithms and blockchain technologies within the legal domain are scrupulously contemplated, ensuring a secure, efficient, and transparent justice system that concurrently protects sensitive information and upholds privacy. Full article
(This article belongs to the Special Issue Machine Learning for the Blockchain)
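As a concrete example of one privacy technique the framework cites, the sketch below applies the Laplace mechanism for epsilon-differential privacy to a toy count query; the query, sensitivity, and epsilon value are assumptions and this is not the paper's system.

import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float, rng) -> float:
    # Add Laplace noise scaled to sensitivity/epsilon to protect the underlying records.
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(42)
case_count = 128            # e.g., number of case records matching a query
private_count = laplace_mechanism(case_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(private_count)        # noisy count safe to release under the chosen epsilon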

Article
Assessment of SDN Controllers in Wireless Environment Using a Multi-Criteria Technique
Information 2023, 14(9), 476; https://doi.org/10.3390/info14090476 - 28 Aug 2023
Viewed by 137
Abstract
Software-defined network (SDN) technology can offer wireless networks the advantages of simplified control and network management. This SDN subdomain technology is called the software-defined wireless network (SDWN). In this study, the performance of four controllers in an SDWN environment is assessed, since the controller is the most significant component of the entire network. Using the Mininet-WiFi platform, the performance of each controller is evaluated in terms of throughput, latency, jitter, and packet loss. Moreover, a multi-criteria evaluation is introduced and applied to provide a fair comparison between SDWNs. This study provides an appropriate configuration of SDWNs that is useful for network engineering and can be used for SDWN performance optimization. Full article
(This article belongs to the Section Wireless Technologies)
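A weighted-sum score over normalized metrics is one simple way to combine throughput, latency, jitter, and packet loss into a single ranking; the sketch below shows that idea on invented figures and weights, and is not necessarily the multi-criteria technique applied in the paper.

import numpy as np

controllers = ["A", "B", "C", "D"]
# Columns: throughput (higher is better), latency, jitter, packet loss (lower is better). Made-up values.
metrics = np.array([
    [95.0, 4.0, 1.2, 0.5],
    [90.0, 3.5, 1.0, 0.8],
    [85.0, 5.0, 1.5, 0.3],
    [92.0, 4.5, 1.1, 0.6],
])
benefit = np.array([True, False, False, False])
# Normalize benefit criteria against the column maximum, cost criteria against the column minimum.
norm = np.where(benefit, metrics / metrics.max(axis=0), metrics.min(axis=0) / metrics)
weights = np.array([0.4, 0.3, 0.2, 0.1])   # assumed criterion weights
scores = norm @ weights
print(dict(zip(controllers, scores.round(3))))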

Article
A Novel Hardware Architecture for Enhancing the Keccak Hash Function in FPGA Devices
Information 2023, 14(9), 475; https://doi.org/10.3390/info14090475 - 28 Aug 2023
Viewed by 220
Abstract
Hash functions are an essential mechanism in today’s world of information security. It is common practice to utilize them for storing and verifying passwords, developing pseudo-random sequences, and deriving keys for various applications, including military, online commerce, banking, healthcare management, and the Internet of Things (IoT). Among the cryptographic hash algorithms, the Keccak hash function (also known as SHA-3) stands out for its excellent hardware performance and resistance to current cryptanalysis approaches compared to algorithms such as SHA-1 and SHA-2. However, there is always a need for hardware enhancements to increase the throughput rate and decrease area consumption. This study specifically focuses on enhancing the throughput rate of the Keccak hash algorithm by presenting a novel architecture that delivers efficient outcomes. This novel architecture achieved impressive throughput rates on Field-Programmable Gate Array (FPGA) devices with the Virtex-5, Virtex-6, and Virtex-7 models. The highest throughput rates obtained were 26.151 Gbps, 33.084 Gbps, and 38.043 Gbps, respectively. Additionally, the research paper includes a comparative analysis of the proposed approach with recently published methods, showing throughput rate improvements of more than 11.37% on Virtex-5, 10.49% on Virtex-6, and 11.47% on Virtex-7. This comparison allows for a comprehensive evaluation of the novel architecture’s performance and effectiveness in relation to existing methodologies. Full article
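For intuition about how such throughput figures arise, the sketch below does the standard back-of-the-envelope calculation (rate bits x clock frequency / cycles per block) with assumed numbers, and uses Python's hashlib as a functional SHA-3 reference; it does not reflect the paper's architecture or synthesis results.

import hashlib

# Assumed figures, not the paper's results: SHA3-256 rate r = 1088 bits,
# a 300 MHz post-synthesis clock, and one Keccak-f round per cycle (24 cycles per block).
block_bits = 1088
f_max_hz = 300e6
cycles_per_block = 24
print(f"{block_bits * f_max_hz / cycles_per_block / 1e9:.2f} Gbps")   # estimated core throughput

# Functional software reference for the same algorithm:
print(hashlib.sha3_256(b"message").hexdigest())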

Article
Analyzing Sentiments Regarding ChatGPT Using Novel BERT: A Machine Learning Approach
Information 2023, 14(9), 474; https://doi.org/10.3390/info14090474 - 25 Aug 2023
Viewed by 329
Abstract
Chatbots are AI-powered programs designed to replicate human conversation. They are capable of performing a wide range of tasks, including answering questions, offering directions, controlling smart home thermostats, and playing music, among other functions. ChatGPT is a popular AI-based chatbot that generates meaningful responses to queries, aiding people in learning. While some individuals support ChatGPT, others view it as a disruptive tool in the field of education. Discussions about this tool can be found across different social media platforms. Analyzing the sentiment of such social media data, which comprises people’s opinions, is crucial for assessing public sentiment regarding the success and shortcomings of such tools. This study performs sentiment analysis and topic modeling on ChatGPT-related tweets, which the authors extracted from Twitter using ChatGPT hashtags and in which users share their reviews and opinions about ChatGPT. The Latent Dirichlet Allocation (LDA) approach is employed to identify the most frequently discussed topics in these tweets. For the sentiment analysis, a deep transformer-based Bidirectional Encoder Representations from Transformers (BERT) model with three dense layers of neural networks is proposed. Additionally, machine and deep learning models with fine-tuned parameters are utilized for a comparative analysis. Experimental results demonstrate the superior performance of the proposed BERT model, achieving an accuracy of 96.49%. Full article
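As a quick, hedged stand-in for the paper's fine-tuned BERT-plus-dense-layers classifier, the sketch below scores a couple of invented ChatGPT tweets with an off-the-shelf Hugging Face sentiment pipeline; the default checkpoint and the example tweets are assumptions.

from transformers import pipeline

sentiment = pipeline("sentiment-analysis")   # downloads a default pretrained checkpoint
tweets = [
    "ChatGPT helped me understand recursion in minutes, brilliant tool.",
    "ChatGPT keeps inventing citations, I cannot trust it for coursework.",
]
for tweet, result in zip(tweets, sentiment(tweets)):
    print(result["label"], round(result["score"], 3), "-", tweet)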

Article
A Deep Neural Network for Working Memory Load Prediction from EEG Ensemble Empirical Mode Decomposition
Information 2023, 14(9), 473; https://doi.org/10.3390/info14090473 - 25 Aug 2023
Viewed by 326
Abstract
Mild Cognitive Impairment (MCI) and Alzheimer’s Disease (AD) are frequently associated with working memory (WM) dysfunction, which is also observed in various neural psychiatric disorders, including depression, schizophrenia, and ADHD. Early detection of WM dysfunction is essential to predict the onset of MCI and AD. Artificial Intelligence (AI)-based algorithms are increasingly used to identify biomarkers for detecting subtle changes in loaded WM. This paper presents an approach using electroencephalograms (EEG), time-frequency signal processing, and a Deep Neural Network (DNN) to predict WM load in normal and MCI-diagnosed subjects. EEG signals were recorded using an EEG cap during working memory tasks, including block tapping and N-back visuospatial interfaces. The data were bandpass-filtered, and independent components analysis was used to select the best electrode channels. The Ensemble Empirical Mode Decomposition (EEMD) algorithm was then applied to the EEG signals to obtain the time-frequency Intrinsic Mode Functions (IMFs). The EEMD and DNN methods perform better than traditional machine learning methods as well as Convolutional Neural Networks (CNN) for the prediction of WM load. Prediction accuracies were consistently higher for both normal and MCI subjects, averaging 97.62%. The average Kappa score for normal subjects was 94.98% and 92.49% for subjects with MCI. Subjects with MCI showed higher values for beta and alpha oscillations in the frontal region than normal subjects. The average power spectral density of the IMFs showed that the IMFs (p = 0.0469 for normal subjects and p = 0.0145 for subjects with MCI) are robust and reliable features for WM load prediction. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
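To illustrate the EEMD step described in the abstract, here is a short sketch that decomposes a synthetic two-tone signal into intrinsic mode functions with the PyEMD package; the library, signal, and trial count are assumptions rather than the authors' toolchain.

import numpy as np
from PyEMD import EEMD   # pip package "EMD-signal"; an assumption, not the authors' pipeline

# Toy "EEG-like" signal: a 10 Hz and a 3 Hz component plus noise.
t = np.linspace(0, 2, 512)
signal = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
          + 0.1 * np.random.default_rng(0).normal(size=t.size))

eemd = EEMD(trials=50)           # number of noise-assisted ensemble trials
imfs = eemd.eemd(signal, t)      # rows are IMFs, ordered from fast to slow oscillations
print(imfs.shape)                # (n_imfs, 512); IMF band powers can then feed a classifier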

Article
Graph-Based Extractive Text Summarization Sentence Scoring Scheme for Big Data Applications
Information 2023, 14(9), 472; https://doi.org/10.3390/info14090472 - 22 Aug 2023
Viewed by 412
Abstract
The recent advancements in big data and natural language processing (NLP) have necessitated proficient text mining (TM) schemes that can interpret and analyze voluminous textual data. Text summarization (TS) acts as an essential pillar within recommendation engines. Despite the prevalent use of abstractive techniques in TS, an anticipated shift towards a graph-based extractive TS (ETS) scheme is becoming apparent. Such models, although simpler and less resource-intensive, are key in assessing reviews and feedback on products or services. Nonetheless, current methodologies have not fully resolved concerns surrounding complexity, adaptability, and computational demands. Thus, we propose our scheme, GETS, utilizing a graph-based model to forge connections among words and sentences through statistical procedures. The structure encompasses a post-processing stage that includes graph-based sentence clustering. Employing the Apache Spark framework, the scheme is designed for parallel execution, making it adaptable to real-world applications. For evaluation, we selected 500 documents from the WikiHow and Opinosis datasets, categorized them into five classes, and applied the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) measures ROUGE-1, ROUGE-2, and ROUGE-L for comparison. The results include recall scores of 0.3942, 0.0952, and 0.3436 for ROUGE-1, ROUGE-2, and ROUGE-L, respectively (when using the clustered approach). Through a comparison with existing models such as BERTEXT (with 3-gram, 4-gram) and MATCHSUM, our scheme has demonstrated notable improvements, substantiating its applicability and effectiveness in real-world scenarios. Full article
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)
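The sketch below shows a generic graph-based extractive summarizer in the TextRank spirit (sentences as nodes, TF-IDF cosine similarity as edge weights, PageRank for scoring); it is not the GETS scheme, omits the Spark parallelization and clustering stage, and uses invented review sentences.

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The battery drains quickly under heavy use.",
    "Battery life is the most common complaint in the reviews.",
    "The screen, however, receives consistent praise.",
    "Several reviewers mention the bright and sharp screen.",
]
# Build a weighted sentence graph from pairwise TF-IDF cosine similarities.
sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
graph = nx.from_numpy_array(sim)
scores = nx.pagerank(graph)
# Keep the two highest-scoring sentences, in original order, as the extractive summary.
top = sorted(scores, key=scores.get, reverse=True)[:2]
for i in sorted(top):
    print(sentences[i])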
