Masters Theses
Browsing Masters Theses by Department "Computer Science"

A COMPARISON OF SERIAL VERSUS PARALLEL ALGORITHMS FOR ENERGY CONSUMPTION IN WIRELESS SENSOR NETWORKS (Middle Tennessee State University, 2016-03-24)
Reavis, Gregg; Gu, Yi; Pettey, Chrisila; Yoo, Sung. Computer Science.
The majority of low-end sensors in wireless sensor networks (WSNs) operate on batteries, which either cannot be replaced or are not practical to replace. It is therefore important to measure the total energy consumption of a WSN in order to minimize power consumption and maximize network lifespan. Many researchers have devoted their efforts to this area, and their work shows that a heterogeneous network offers a better solution for prolonging network lifespan. To the best of our knowledge, however, the algorithms for minimizing energy consumption have all been implemented serially. In this work, we propose a parallel programming approach for minimizing the energy consumption and maximizing the lifespan of WSNs. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed parallel approach over an existing serial algorithm and confirm that a parallel solution provides faster results.
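
The abstract does not reproduce the thesis's energy model, so the following minimal Python sketch shows only the general shape of a serial-versus-parallel comparison, assuming the standard first-order radio model; the constants, function names, and simulated distances are hypothetical.

```python
import numpy as np
from multiprocessing import Pool

# First-order radio model constants (hypothetical values chosen for
# illustration; the thesis does not state its energy model here).
E_ELEC = 50e-9       # electronics energy per bit (J/bit)
E_AMP = 100e-12      # free-space amplifier energy (J/bit/m^2)
PACKET_BITS = 4000   # bits per transmitted packet

def transmit_energy(distance):
    """Energy to transmit one packet over `distance` meters."""
    return PACKET_BITS * (E_ELEC + E_AMP * distance ** 2)

def total_energy_serial(distances):
    """Evaluate every node's transmission cost in a single process."""
    return sum(transmit_energy(d) for d in distances)

def total_energy_parallel(distances, workers=4):
    """Spread the per-node evaluations across worker processes."""
    with Pool(workers) as pool:
        return sum(pool.map(transmit_energy, distances))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dists = rng.uniform(5.0, 100.0, 100_000)  # simulated node-to-sink distances
    # Both approaches agree on the total; only the work distribution differs.
    assert np.isclose(total_energy_serial(dists), total_energy_parallel(dists))
```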

A Computational Electrostatic Modeling Pipeline for Comparing pH-dependent gp120-CD4 Interactions in Founder and Chronic HIV Strains (Middle Tennessee State University, 2017-03-24)
Howton, Jonathan; Phillips, Joshua; Barbosa, Sal; Wright, Stephen. Computer Science.
Though the Human Immunodeficiency Virus has been studied for several decades, a consistently …

Biologically Inspired Task Abstraction and Generalization Models of Working Memory (Middle Tennessee State University, 2017-10-27)
Jovanovich, Michael P.; Phillips, Joshua; Li, Cen; Barbosa, Sal. Computer Science.
We first present a model of working memory that affords generalization. Stimuli are separated so that filler representations flow through the model according to the state of gates, which open or close in response to role signals; this affords an action-selection network the ability to learn responses to fillers that are independent of the roles in which the fillers were encountered. Next, we present n-task learning, an extension of temporal-difference learning that allows multiple policies to be formed around a common set of sensory inputs. To allow state inputs to take on multiple values, they are joined with an arbitrary input called an abstract task representation. Task performance is shown to converge to optimal for a dynamic categorization problem in which input features are identical across all tasks.
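
The abstract describes joining each state input with an abstract task representation so that one learner can hold several policies. A minimal tabular sketch of that idea follows, assuming a plain TD(0)/Q-learning reduction; the constants and names are hypothetical stand-ins for the thesis's network model.

```python
import random
from collections import defaultdict

# Tabular reduction of the n-task idea: the state given to the learner is
# the observation joined with an abstract task representation (here just a
# task index). Identical observations can then carry different values
# under different tasks.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = (0, 1)
Q = defaultdict(float)  # Q[((task, obs), action)] -> estimated value

def select_action(task, obs):
    """Epsilon-greedy choice over the task-augmented state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[((task, obs), a)])

def td_update(task, obs, action, reward, next_obs):
    """TD(0) update; `task` disambiguates otherwise identical observations,
    letting separate policies form over the same sensory inputs."""
    best_next = max(Q[((task, next_obs), a)] for a in ACTIONS)
    key = ((task, obs), action)
    Q[key] += ALPHA * (reward + GAMMA * best_next - Q[key])
```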

Computationally Accelerated Papyrology (Middle Tennessee State University, 2015-03-25)
Williams, Alex C.; Carroll, Hyrum; Wallin, John; Li, Cen. Computer Science.
Papyrologists transcribe and identify papyrus fragments in order to enrich modern lives by better understanding the linguistics, culture, and literature of the ancient world. In practice, these tasks are extremely challenging and slow due to the limited amount of information preserved in each papyrus fragment (i.e., due to deterioration). For example, since their discovery in the late 19th century, only 10% of the more than 500,000 fragments in the Oxyrhynchus papyri collection have been given preliminary identifications.

ENHANCED THROUGHPUT FOR WORKFLOW SCHEDULING USING PARALLELISM COMPUTATION AND INFORMED SEARCH (Middle Tennessee State University, 2017-07-16)
Tang, Kaite; Gu, Yi; Phillips, Joshua; Li, Cen; Sarkar, Medha. Computer Science.
Next-generation e-science is producing a huge amount of data that must be processed through different steps by geographically distributed scientists and users; such processing can be modeled as a Directed Acyclic Graph (DAG)-structured computing workflow. Many big-data science applications, especially streaming applications with complex DAG-structured workflows, require a smooth dataflow to guarantee Quality of Service (QoS). Even with the ever-increasing computing power available in High Performance Computing (HPC) environments, e.g., parallel processing on a PC cluster, the execution of such high-demand streaming applications may still take hours or even days. Therefore, supporting and optimizing the performance of such scientific workflows in wide-area networks, especially in Grid and Cloud environments, is crucial to the success of collaborative scientific discovery.
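
For a streaming workflow, steady-state throughput is bounded by the slowest stage of the pipeline rather than by the sum of stage costs, which is why scheduling focuses on the bottleneck. The sketch below illustrates that bound on a hypothetical four-node workflow; the node names and costs are invented, and the thesis's actual cost model and informed-search scheduler are not reproduced.

```python
from graphlib import TopologicalSorter

# Hypothetical toy workflow: node -> processing time per data unit (seconds).
cost = {"ingest": 0.5, "filter": 0.2, "analyze": 1.5, "render": 0.4}
deps = {"filter": {"ingest"}, "analyze": {"filter"}, "render": {"analyze"}}

order = list(TopologicalSorter(deps).static_order())  # valid execution order
bottleneck = max(cost[n] for n in order)              # slowest pipeline stage

# In steady-state streaming, each stage works on a different data unit,
# so sustainable throughput is limited by the bottleneck stage alone.
print(f"execution order: {order}")
print(f"max sustainable throughput: {1.0 / bottleneck:.2f} units/s")
```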

Image Cryptography with Chaos: A Survey of Existing Methods and a Proposed Implementation Incorporating Fluid Dynamics Approaches (Middle Tennessee State University, 2016-10-27)
Hammock, Gary L.; Phillips, Joshua; Gu, Yi; Pettey, Chrisila. Computer Science.
Cryptography has uses in everyday applications ranging from e-commerce transactions to secure communications. Current research explores encrypting images in their native two-dimensional form. To do this, deterministic chaos maps have been studied for the operations required to transform a plaintext image into a ciphertext encrypted image and vice versa. This research implements existing bio-inspired and cellular automata image encryption techniques and shows that the bio-inspired approach outperforms the cellular automata approach; a previously undiscovered weakness in the cellular automata approach is also highlighted. This research also explores a novel application of analogies from the field of Computational Fluid Dynamics to generate deterministic chaos. A cryptanalysis GUI was developed to show quantitatively that the proposed technique is superior to both the bio-inspired and cellular automata techniques, using metrics including luminance histograms, pixel covariant dependence, chi-squared tests, and information entropy.
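
To make the "deterministic chaos map" idea concrete, here is a minimal sketch of the textbook approach many surveyed schemes build on: iterate the logistic map to produce a keystream and XOR it with the pixels. This is not the thesis's fluid-dynamics generator, and the parameters below are illustrative only.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    """Generate n pseudo-random bytes by iterating x -> r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(image):
    """XOR each pixel with the chaotic keystream; decryption is identical
    because XOR with the same keystream is an involution."""
    flat = image.ravel()
    stream = logistic_keystream(flat.size)
    return (flat ^ stream).reshape(image.shape)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
assert np.array_equal(encrypt(encrypt(img)), img)          # round-trip check
```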

Preserving Relative Dimension Rankings in the Presence of Noise Using the Box-Counting Algorithm (Middle Tennessee State University, 2016-06-23)
Murphy, Michael Colin; Phillips, Joshua; Seo, Suk; Green, Lisa. Computer Science.
Fractal dimension is a number that describes the degree of self-similarity, or "complexity", of a particular geometry. In digital image processing, fractal dimension is often used to provide quantitative comparisons between digital images. The Box-Counting Algorithm is one of the more widely used methods for estimating fractal dimension, although it has been shown to be highly sensitive to digital filtering and noise. This research investigates the variability in fractal dimension estimates obtained from the Box-Counting Algorithm as noise is applied to an image. In the case of increasing uniform noise, three distinct relationships emerge between dimensional estimates and their variability. It is then shown how these relationships may be leveraged to improve relative rankings among dimensional estimates when using the Box-Counting Algorithm.
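
The Box-Counting Algorithm itself is compact: cover the image with boxes of decreasing size, count the boxes that touch the set, and take the slope of log(count) against log(1/size). A minimal sketch for a binary image follows, assuming dyadic box sizes and a least-squares fit; the thesis's noise-handling refinements are not shown.

```python
import numpy as np

def box_count(img, size):
    """Count size x size boxes containing at least one foreground pixel."""
    h, w = img.shape
    blocks = img[: h - h % size, : w - w % size]
    blocks = blocks.reshape(h // size, size, w // size, size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(img):
    """Slope of log(count) vs log(1/size) over dyadic box sizes."""
    sizes = 2 ** np.arange(1, int(np.log2(min(img.shape))))
    counts = np.array([box_count(img, int(s)) for s in sizes])
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Sanity check: a filled plane should score near dimension 2.
plane = np.ones((256, 256), dtype=bool)
print(f"estimated dimension: {fractal_dimension(plane):.2f}")
```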

RUNTIME VERIFICATION OF STATE DIAGRAM FOR ROBOTICS (Middle Tennessee State University, 2017-12-01)
Harvin, Taylor N.; Dong, Zhijiang; Li, Cen; Barbosa, Salvador. Computer Science.
It is critical to develop trustworthy cyber-physical systems (CPS), such as unmanned aerial vehicles and robotic systems. However, developing trustworthy systems is challenging due to complicated system behavior and unknown, or even hostile, external environments that are in general unstable. The problem is compounded by the error detection and handling code integrated into the system to react to unknown events or exceptions. To facilitate the development of trustworthy CPS, we propose a framework that allows developers to easily monitor system behavior at runtime. The framework is built around runtime verification tools and can detect any deviation from the system behavior specified in state diagrams. One benefit of our framework is that it separates the monitoring code from the system code that implements the required functionality, yielding a cleaner, more modular system. A case study of a Lego EV3 robot is conducted to evaluate the framework.
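
The core check such a framework performs can be pictured simply: track the current state of the diagram and reject any observed event that has no outgoing transition. The sketch below is a hand-rolled illustration with a hypothetical line-following diagram, not the runtime verification tooling the thesis builds on.

```python
class StateDiagramMonitor:
    """Checks observed events against the transitions of a state diagram."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def observe(self, event):
        """Advance on a valid transition; flag a deviation otherwise."""
        key = (self.state, event)
        if key not in self.transitions:
            raise RuntimeError(f"violation: {event!r} not allowed in {self.state!r}")
        self.state = self.transitions[key]

# Toy diagram for a line-following robot (hypothetical states and events).
monitor = StateDiagramMonitor("idle", {
    ("idle", "start"): "driving",
    ("driving", "line_lost"): "searching",
    ("searching", "line_found"): "driving",
    ("driving", "stop"): "idle",
})
for evt in ["start", "line_lost", "line_found", "stop"]:
    monitor.observe(evt)  # each event is checked as the system runs
```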

SQL INJECTION VULNERABILITY DETECTION IN WEB APPLICATIONS (Middle Tennessee State University, 2014-03-24)
York, Jason; Dong, Zhijiang; Li, Cen; Yoo, Jungsoon. Computer Science.
Security is an essential requirement of most web applications, which typically access sensitive data such as personal information and financial records. Leaking such sensitive data could cause huge financial losses and hurt the reputation of the organization. However, studies have shown that security vulnerabilities are common in web applications due to increased pressure on budgets and timelines, as well as a lack of security training. The goal of this project is to detect one specific kind of security vulnerability, SQL injection, in web applications by exploring source code. The developed tool is easy to use and provides enough flexibility to handle different database extensions.
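
Source-level detection of SQL injection typically looks for queries assembled from untrusted strings rather than passed as bound parameters. The sketch below shows that idea with two crude regex heuristics; the thesis's tool is more thorough, and these patterns are illustrative only.

```python
import re

# Heuristic patterns: a database call site, and signs that the query string
# is built by concatenation or interpolation instead of parameter binding.
SQL_CALL = re.compile(r"\b(execute|query)\s*\(", re.IGNORECASE)
TAINTED = re.compile(r"""(\+\s*\w+|%\s*\(|\.format\(|f["'])""")

def scan(source):
    """Yield (line_number, line) pairs that look like injectable queries."""
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SQL_CALL.search(line) and TAINTED.search(line):
            yield lineno, line.strip()

example = '''
cursor.execute("SELECT * FROM users WHERE name = '" + name + "'")
cursor.execute("SELECT * FROM users WHERE name = %s", (name,))
'''
for lineno, line in scan(example):
    print(f"line {lineno}: possible SQL injection: {line}")
```

Only the concatenated query is flagged; the parameterized variant passes, which is the distinction such a scanner exists to enforce.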

Statistical Optimization of Training Data for Semi-Supervised Text Document Clustering (Middle Tennessee State University, 2017-06-22)
Newbold, Cody Renae; Phillips, Joshua; Pettey, Chrisila; Li, Cen. Computer Science.
Unsupervised machine learning algorithms offer no assurance that their results are accurate or useful. In particular, text document clustering algorithms such as Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA) give no guarantee that documents are clustered in a manner similar to how human readers would group them. Using a semi-supervised approach to text document clustering, we show that the selection of training data can be statistically optimized using LDA and LSA. In this method, a human reader categorizes a percentage of the data as an analysis step, and the partially labeled data is then fed into bootstrap training and testing steps. Mutual information is used to discover which documents are better for training, and the algorithm then performs a post-processing step using the optimized training set. The results show that mutual information values are higher when the statistically optimized training set is used, indicating that human-like performance is better achieved with optimized training data.
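
The scoring step described above can be illustrated end to end in a few lines: cluster documents with an LSA-style pipeline, then measure the mutual information between the cluster assignments and a reader's labels. The corpus, labels, and parameters below are hypothetical stand-ins; the thesis's bootstrap selection over LDA and LSA is not reproduced.

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import mutual_info_score

# Toy corpus with two human-assigned categories (sports vs. politics).
docs = [
    "the team won the game in overtime",
    "the striker scored a late goal",
    "the senate passed the budget bill",
    "voters head to the polls tuesday",
]
human_labels = [0, 0, 1, 1]  # reader-assigned categories

# LSA-style pipeline: TF-IDF, truncated SVD, then k-means clustering.
tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)

# Higher mutual information means the clustering agrees more with the
# human reader; this score drives which documents to keep for training.
print(f"mutual information: {mutual_info_score(human_labels, clusters):.3f}")
```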