Theses

  • Signal Analysis In The Ambiguity Domain
    Time-Frequency Distributions (TFDs) are regarded as one of the most powerful tools for the analysis of time-varying signals. Although a variety of TFDs have been proposed, most designs target good visualization, and limited work is available for characterization applications. In this work, the characteristics of the ambiguity domain (AD) are suitably exploited to obtain a novel automated analysis scheme that preserves the inherent TF connection during non-stationary (NS) signal processing. Following this, an energy-based discriminative set of feature vectors is proposed to facilitate efficient characterization of a given time-varying input. This scheme is motivated by the fact that, although the interfering (or cross-) terms plague the representation, they carry important signal-interaction information that can be investigated for time-varying signal analysis. Having assessed the suitability of this domain for NS signal analysis, a new formulation for obtaining the AD transformation is introduced. Number-theoretic concepts, specifically the even-ordered Ramanujan Sums (RS), are used to obtain the proposed transform function. A detailed investigation of this novel class of functions, and a comparison to the classical approach, reveals the many benefits of the RS-modified AD functions: inherent sparsity of representation, dimensionality reduction, and robustness to noise. The next contribution of this work is the proposal of kernel modifications in the AD for obtaining a high-resolution distribution with good time localization, motivated by the existing trade-off between TF resolution and interfering-term reduction in TF distributions. Certain variants of TF kernels are proposed in the AD. In addition, kernels derived from the concept of learning machines are introduced for discriminative characterization of NS signals. Following this, two novel AD-based schemes are introduced for neurological disorder discrimination using gait and for pathological speech detection. Performance evaluation of these AD-based schemes with a linear classifier resulted in maximum overall classification accuracies of 93.1% and 97.5% for the gait and pathological speech applications, respectively, obtained under a rigorous leave-one-out validation strategy. These results further confirm the potential of the proposed schemes for efficient information extraction from real-life signals.
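    The ambiguity domain this abstract builds on can be illustrated with a minimal numpy sketch of the classical discrete narrowband ambiguity function (this reproduces only the standard transform; the RS-modified functions and kernels proposed in the thesis are not shown):

```python
import numpy as np

def ambiguity(s):
    """Discrete narrowband ambiguity function magnitude |A[lag, doppler]|."""
    N = len(s)
    A = np.zeros((N, N), dtype=complex)
    for k in range(N):                      # lag index tau
        r = np.roll(s, -k) * np.conj(s)     # circular lag product s[n+k] s*[n]
        A[k] = np.fft.fft(r)                # FFT over time n -> Doppler axis
    return np.abs(A)

# For a pure tone, all energy sits on the zero-Doppler axis.
N = 64
n = np.arange(N)
tone = np.exp(2j * np.pi * 8 * n / N)       # exactly periodic in the window
A = ambiguity(tone)
```

    For a single component the representation concentrates along the Doppler origin; cross-terms between multiple components appear away from it, which is the signal-interaction information the thesis exploits for characterization.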
    Signal analysis of sleep electrooculogram
    Measures of sleep physiology, not obvious to the human eye, may provide important clues to disease states and responses to therapy. A significant amount of eye movement data is not attended to clinically in routine sleep studies because these recordings are long, about six to eight hours in duration, and are mixed with many unknown artifacts, usually produced by EEG signals or other activities. This research describes how eye movements differed in depressed patients who used antidepressant medications compared to those who did not. The goal is to track the effects of antidepressant medications on sleep eye movements. Clinically used SSRIs such as Prozac (Fluoxetine), Celexa (Citalopram), and Zoloft (Sertraline), and the SNRI Effexor (Venlafaxine), have been considered in this study to assess possible connections between eye movements recorded during sleep and serotonin activity. The novelty of this research is in the assessment of sleep eye movement in order to track the antidepressant medications' effect on the brain through EOG channels. EOG analysis is valuable because it is noninvasive, and this research looks for findings that are invisible to the eyes of professional clinicians. This thesis focuses on quantifying sleep eye movements with two techniques: autoregressive modeling and wavelet analysis. Eye movement detection software (EMDS) of more than 1500 lines was developed for detecting sleep eye movements. AR coefficients were derived from the sleep eye movements of patients who were exposed to antidepressant medications and those who were not, and were then classified by means of linear discriminant analysis. For the wavelet analysis, discrete wavelet coefficients were similarly used to classify the sleep eye movements of patients who were exposed to medication and those who were not.
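    The AR-coefficient-plus-linear-discriminant pipeline described above can be sketched in pure numpy; the signals, model order, and two-class structure below are hypothetical stand-ins, not the thesis's EOG data:

```python
import numpy as np

def ar_coeffs(x, order=4):
    """Least-squares AR(p) fit: x[n] ~ sum_k a[k] * x[n-k]."""
    X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

rng = np.random.default_rng(0)

def make_signal(pole, n=500):
    """Synthetic AR(1) process standing in for an eye-movement segment."""
    e, x = rng.standard_normal(n), np.zeros(n)
    for i in range(1, n):
        x[i] = pole * x[i - 1] + e[i]
    return x

# Two hypothetical groups ("medicated" vs. "not") with different dynamics.
feats = np.array([ar_coeffs(make_signal(p)) for p in [0.9] * 20 + [0.3] * 20])
labels = np.array([0] * 20 + [1] * 20)

# Minimal Fisher linear discriminant on the AR coefficients.
m0, m1 = feats[labels == 0].mean(0), feats[labels == 1].mean(0)
Sw = np.cov(feats[labels == 0].T) + np.cov(feats[labels == 1].T)
w = np.linalg.solve(Sw, m1 - m0)          # Fisher direction
thresh = w @ (m0 + m1) / 2
pred = (feats @ w > thresh).astype(int)
acc = (pred == labels).mean()
```

    With well-separated dynamics the AR coefficients cluster by group and the linear discriminant separates them easily; real EOG segments are of course far noisier.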
    Signal processing for transmitted-reference ultra wideband system
    This thesis focuses on the coexistence of transmitted-reference ultra wideband (TR-UWB) systems with IEEE 802.11a WLAN systems. TR-UWB systems can relax the difficult synchronization requirements and can provide a simple receiver architecture that gathers the energy from many resolvable multipath components. However, TR-UWB systems are susceptible to interference from other wireless systems. In this thesis, TR-UWB system performance is studied in the presence of strong IEEE 802.11a WLAN interference in both AWGN and IEEE channel models. In order to reduce the effects of interference both by and into UWB signals, we propose a new method that combines a multi-carrier type transmission pulse with wavelet analysis and notch filtering. Using wavelet analysis, the spectral density of the transmitted UWB signal around the interfering band is reduced to 60 dB below the peak. With the modified TR-UWB receiver, the TR-UWB system shows performance improvement in the presence of strong IEEE 802.11a interference in both AWGN and IEEE channel models. The proposed method can be used for the coexistence of different wireless systems with UWB systems.
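    The idea of suppressing a narrowband interfering band in a wideband signal can be illustrated with a plain frequency-domain notch in numpy (this is not the thesis's wavelet-based pulse shaping; the band edges, sample rate, and signal are hypothetical):

```python
import numpy as np

fs = 2000.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
uwb_like = rng.standard_normal(t.size)        # wideband stand-in for a UWB waveform
interferer_band = (240.0, 260.0)              # hypothetical WLAN-like narrow band (Hz)

S = np.fft.rfft(uwb_like)
f = np.fft.rfftfreq(t.size, 1 / fs)
notch = (f >= interferer_band[0]) & (f <= interferer_band[1])
S_notched = np.where(notch, 0.0, S)           # remove energy in the interfering band
shaped = np.fft.irfft(S_notched, t.size)

in_band_before = np.abs(S[notch]).max()
in_band_after = np.abs(np.fft.rfft(shaped)[notch]).max()
```

    Shaping the transmit spectrum this way reduces mutual interference with the narrowband system at the cost of a small loss of in-band UWB energy.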
    Signal processing for ubiquitous biometric systems
    This work presents two hardware-independent and ubiquitous biometric solutions that can significantly improve security for computer- and telephone-related applications. Firstly, for computer security, a GMM-based keystroke verification method is proposed along with the up-up keystroke latency (UUKL) feature, used here for the first time. This method verifies the identity of users based on their typing pattern and achieved a FAR of 5.1%, an FRR of 6.5%, and an EER of 5.8% for a database of 41 users. Due to many inconsistencies in previous works, a new keystroke protocol has also been proposed; it makes a number of recommendations on how to improve the performance, reliability, and accuracy of any keystroke recognition system. Secondly, a GMM-based text-independent speaker identification scheme is proposed that utilizes novel spectral features for better speaker discrimination. Based on 100 users from the TIMIT database, these features achieved an identification error of 1.22% by incorporating information about the source of the speech signal, a 6% improvement over MFCC-based features.
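    A stripped-down version of likelihood-based keystroke verification might look as follows. A single diagonal-covariance Gaussian per user stands in for the GMM, and the UUKL-style latency values are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical up-up latencies (ms) for a 3-key sequence.
genuine_train = rng.normal([120, 150, 110], 10, size=(40, 3))
genuine_test  = rng.normal([120, 150, 110], 10, size=(20, 3))
impostor_test = rng.normal([170, 100, 160], 10, size=(20, 3))

# "Train" the user model: diagonal Gaussian over the enrollment samples.
mu, var = genuine_train.mean(0), genuine_train.var(0) + 1e-6

def log_lik(x):
    """Per-sample log-likelihood under the diagonal Gaussian user model."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var).sum(axis=1)

# Accept a claim when the log-likelihood clears a threshold set on training data.
thr = np.percentile(log_lik(genuine_train), 5)
far = (log_lik(impostor_test) >= thr).mean()   # impostors wrongly accepted
frr = (log_lik(genuine_test) < thr).mean()     # genuine users wrongly rejected
```

    Moving the threshold trades FAR against FRR; the EER quoted in the abstract is the operating point where the two error rates meet.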
    Signal processing techniques for multimedia information security
    The digital representation of multimedia and the Internet allow for the effortless unauthorized duplication, transmission, and wide distribution of copyrighted multimedia content. Content providers are faced with the challenge of how to protect their electronic content. Fingerprinting and watermarking are two techniques that help identify content that is copied and distributed illegally. This thesis presents a novel algorithm for each of these two content protection techniques. In fingerprinting, a novel algorithm that models fingerprints using Gaussian mixtures is developed for both audio and video signals. Simulation studies are used to evaluate the effectiveness of the algorithm in generating fingerprints that show high discrimination among different fingerprints while remaining invariant under different distortions of the same fingerprint. In the proposed watermarking scheme, linear chirps are used as watermark messages. The watermark is embedded and detected by spread-spectrum watermarking. At the receiver, a post-processing tool represents the retrieved watermark in a time-frequency distribution and uses a line detection algorithm to detect the watermark. The robustness of the watermark is demonstrated by extracting it after different image processing operations performed using a third-party evaluation tool called Checkmark.
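    Spread-spectrum embedding and detection of a linear-chirp watermark can be sketched with a toy correlation detector (the thesis detects the chirp as a line in a time-frequency distribution instead; all parameters here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4096
t = np.arange(N) / N

# Watermark message: linear chirp, instantaneous frequency sweeping f0 -> f1.
f0, f1 = 50.0, 400.0
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2))

pn = rng.choice([-1.0, 1.0], size=N)   # spreading sequence (shared secret)
host = rng.standard_normal(N) * 5.0    # stand-in for host-signal coefficients
alpha = 0.5                            # embedding strength
marked = host + alpha * pn * chirp     # spread-spectrum embedding

# Detection: despread with the same PN sequence, then correlate with the chirp.
score_true = (marked * pn) @ chirp / N
score_null = (host * pn) @ chirp / N   # same detector on the unmarked signal
```

    Despreading turns the embedded term back into alpha * chirp, so the correlation score jumps by roughly alpha/2 relative to the unmarked case, while the host signal contributes only zero-mean noise.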
    Signal processing techniques for operator independent Doppler ultrasound : potential for use in transcranial Doppler ultrasound
    2D Doppler ultrasound can be used for continuous monitoring of vasospasm. However, Doppler ultrasound suffers from operator dependence, requiring a skilled ultrasonographer to make Doppler angle corrections. The aim of this research is to minimize the need for dedicated ultrasonographers in Doppler ultrasound monitoring of cerebral vasospasms. In this thesis, three studies, involving a steady-flow phantom, a pulsatile-flow phantom, and the in vivo human internal carotid artery (ICA), were completed with the use of 3D Doppler ultrasound. The 3D vascular structure of the phantom and ICA was obtained using binary skeletonization of 3D power Doppler images. The vascular structure was used in combination with angle-independent pulsed-wave Doppler to reconstruct the temporal blood velocity profiles at various parts of the vasculature. The results indicate that Doppler angle corrections can be minimized with the use of 3D Doppler ultrasound, and operator-independent monitoring of blood flow is possible.
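    The angle sensitivity motivating this work follows directly from the standard Doppler equation, v = c * f_d / (2 * f0 * cos(theta)). A small numeric example (hypothetical shift, carrier, and angles) shows how a misjudged insonation angle distorts the velocity estimate:

```python
import numpy as np

def doppler_velocity(f_shift, f0, theta_deg, c=1540.0):
    """Blood velocity (m/s) from Doppler shift; c is the speed of sound in tissue."""
    return c * f_shift / (2 * f0 * np.cos(np.radians(theta_deg)))

# A 1 kHz shift on a 2 MHz carrier at a 60-degree insonation angle:
v = doppler_velocity(1e3, 2e6, 60.0)          # ~0.77 m/s

# Same measurement, but the operator assumes 70 degrees instead of 60:
v_wrong = doppler_velocity(1e3, 2e6, 70.0)
angle_error_pct = 100 * (v_wrong - v) / v
```

    A 10-degree angle error at this geometry inflates the velocity estimate by tens of percent, which is why removing manual angle correction (as 3D Doppler allows) matters clinically.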
    Simple and effective QoS schemes supporting VoIP traffic in IEEE 802.11 network
    This thesis presents several simple and effective QoS schemes aimed at supporting VoIP in IEEE 802.11 networks. The proposed schemes are called dynamic backoff adjustment (DBA), high-priority DCF (HP-DCF), adaptive DCF (ADCF), and direct-priority DCF (DP-DCF). All four schemes utilize the contention mechanism and are compatible with the IEEE 802.11 standard. The first three schemes use a dynamic adjustment approach to improve the delay of high-priority traffic in infrastructure networks, and the last scheme is designed for use in ad-hoc networks. From the simulation results, the DBA scheme shows better bandwidth utilization than DCF and provides better differentiated-service performance than EDCF, whereas the HP-DCF and ADCF schemes give better protection of the real-time traffic classes from excessive best-effort traffic load in the infrastructure mode. Moreover, the DP-DCF scheme also shows its ability to protect the real-time traffic class in the ad-hoc environment.
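    The contention-window intuition behind such priority schemes can be shown with a toy calculation: a traffic class drawing its backoff from a smaller window accesses the channel sooner on average. This illustrates plain CW-based prioritization only, not the specific DBA/HP-DCF/ADCF/DP-DCF mechanisms:

```python
import numpy as np

rng = np.random.default_rng(4)
slot_us = 9.0  # IEEE 802.11a slot time in microseconds

def mean_backoff(cw, n_trials=10000):
    """Mean initial backoff delay for a uniform slot draw in [0, cw]."""
    slots = rng.integers(0, cw + 1, size=n_trials)
    return slots.mean() * slot_us

low_priority = mean_backoff(31)    # DCF-like CW_min for best-effort traffic
high_priority = mean_backoff(7)    # smaller window -> earlier channel access
```

    Under contention the smaller window also raises collision probability, which is exactly the trade-off a dynamic adjustment scheme has to manage as load changes.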
    Simplified method of analysis of adjacent precast “deck free” concrete box beams used in accelerated bridge construction
    Prefabrication is a common method in bridge construction since it provides controlled environmental conditions and long-term durability. Adjacent precast box beams are placed side by side with 15 mm gaps, and their top flanges are connected with longitudinal shear keys poured on-site to assist in truck-load distribution. Since the concrete-filled joints provide transverse shear rigidity, load transfer from one beam to another takes place through transverse shear. A parametric study is conducted to investigate the accuracy of the CHBDC simplified method of analysis for shear-connected beams when applied to adjacent box beams. A 3D finite-element analysis was conducted on a wide range of box beams to obtain their magnification factors for moment and shear when subjected to truck loading. The obtained results were correlated with the CHBDC, and more reliable simplified equations for the distribution factors were developed. Special attention was given to the limitations of the CHBDC simplified method and how it can be revised to include adjacent box beams.
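    A moment magnification (distribution) factor of the kind studied here is simply the peak per-beam moment normalized by the equal-share moment across the cross-section; the per-beam moments below are hypothetical illustration values, not FEA results from the thesis:

```python
import numpy as np

# Hypothetical midspan moments (kN*m) in 6 adjacent box beams under one truck load case.
beam_moments = np.array([310.0, 280.0, 235.0, 190.0, 150.0, 120.0])

n = beam_moments.size
equal_share = beam_moments.sum() / n   # moment per beam if load were shared equally
F_m = beam_moments.max() / equal_share # moment magnification (distribution) factor
```

    A factor above 1.0 quantifies how much the most heavily loaded beam exceeds the equal-share assumption, which is what the simplified code equations aim to predict without a full 3D model.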
    Simulating public interest: the issue of the public voice in the fee-for-carriage debate
    "One of the most fractious Canadian Radio-television and Telecommunications Commission (CRTC, or the Commission) policy hearings on record has recently come to a close. This was no run-of-the-mill, watch-the-paint-dry policy hearing. Tempers and passions flared as two industry titans, over-the-air (OTA) broadcasters, such as CTV and Canwest Global, and broadcast distribution undertakings (BDUs) such as Shaw Communications, Bell Canada and Rogers Inc. fought the battle of their lives over an issue called fee-for-carriage (FFC). The media covered the issues day in and day out. Canadians bombarded the CRTC with close to 200,000 comments and the Government of Canada forced the CRTC to hold an additional hearing just to address the impact the decision could have on the public. With extensive media coverage and uncharacteristically active public participation, could this public policy process be deemed 'democracy in action'? This paper will argue that this is not the case. Through a discourse analysis of the debate within two distinctly differentiated public spheres -- 1) the battling media campaigns and 2) the CRTC public hearings in November and December of 2009 -- this paper will show that the public's ability to define its own interest, using its own voice, is tarnished to such a severe degree that this policy process fails"--From Introduction (page 3).
    Simulation Of A Convective Loop For The NTED™ Low Energy House
    The Nested Thermal Envelope Design (NTED™) is an innovative low energy house design that incorporates two thermal envelopes to create a core and perimeter zone. The perimeter acts as a thermal buffer zone, where heat loss from the core and solar gain in the perimeter is recovered to the core via an inter-zone heat pump. In order to optimize heat recovery from the perimeter and minimize temperature stratification, a complete loop is formed around the core living space, through which air may flow in a convective loop. A simplified convective loop was modelled with a commercial CFD software package. Simulations show the convective loop distributes solar gains and reduces temperature stratification in the perimeter. The location of the heat pump in the convective loop was found to affect the DOP by up to 21%.
    Simulation analysis of a multi-product flexible assembly line
    This thesis involves the use of simulation models to evaluate and optimize existing manufacturing assembly lines for electrical components. The goal of the simulation is primarily to mimic the existing production scenario in order to identify problematic areas such as bottleneck operations, conveyor speeds limiting production, and factors inhibiting the performance of the resources. This simulation project uses a combination of the AweSim® software and logic programming in MS Visual Basic®. Through coding, the logic of the flow of parts is demonstrated through steps such as intermittent conveyors with single- and double-part flow. The models include line selection rules with restrictions that affect the makespan, mean flow time, and resource utilization. Using different scenarios, conclusions and recommendations are made on modifications to the existing production line in order to improve makespan and mean flow time.
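    At its simplest, bottleneck identification in a serial line reduces to comparing station cycle times: steady-state throughput is capped by the slowest station, and every other station's utilization follows. The closed-form check below is only a sanity baseline (the thesis uses discrete-event simulation in AweSim; station names and times here are hypothetical):

```python
# Hypothetical cycle times (seconds per part) for three serial stations.
cycle_times = {"insert": 12.0, "solder": 20.0, "inspect": 15.0}

bottleneck = max(cycle_times, key=cycle_times.get)
throughput = 3600.0 / cycle_times[bottleneck]            # parts per hour
utilization = {s: ct / cycle_times[bottleneck] for s, ct in cycle_times.items()}
```

    Discrete-event simulation is needed precisely where this closed form breaks down: finite buffers, conveyor speeds, blocking/starvation, and line selection rules all shift the effective bottleneck.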
    Simulation models of current density imaging in studying cardiac states
    Electro-mechanical disorders in cardiac function result in arrhythmias. Due to the non-stationary nature of these arrhythmias and the lethality associated with certain types of arrhythmias, they are challenging to study. Most existing studies are limited in that they capture only surface or intracardiac electrical activity, through the use of either electrical or optical mapping. One way of studying current pathways inside and through biological tissues is Magnetic Resonance Imaging (MRI) based Low Frequency Current Density Imaging (LFCDI). For the first time, CDI was used to study ex-vivo beating hearts in different cardiac states. However, this approach involves heavy logistical and procedural complexity; hence, it would be beneficial to adapt existing electrophysiological computer models to investigate and simulate current density maps specific to studying cardiac function. To achieve this, the proposed work presents an approach to model current density maps in 3D and study the current distributions in different electrophysiological states of the heart. The structure and fiber orientation of the heart used in this study were extracted using MRI-based Diffusion Tensor Imaging. The monodomain and bidomain Aliev-Panfilov electrophysiological models were used for CDI modeling, and the results indicate that different states were distinguishable using the range and correlation of simulated current density maps. The results obtained through modeling were corroborated with actual experimental CDI data from porcine hearts. Individually and comparatively, the experimental and simulation results for the various states show the same trend in terms of variations (trend correlation coefficients ≥ 0.98) and state correlations (trend correlation coefficients ≥ 0.89). 
The results also show that the root mean square (RMS) error in average range ratios between the bidomain CDI model results and real CDI data is 0.1972, and the RMS error in state correlations between the bidomain CDI model results and real CDI data is 0.2833. These results indicate that, as expected, the proposed bidomain simulation of CDI corroborates well with experimental data and can serve as a valuable tool for studying lethal cardiac arrhythmias under simulation conditions that would otherwise be difficult or impossible in a real-world experimental setup.
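    The two agreement metrics quoted above, RMS error and trend correlation, are straightforward to compute; the state-wise values below are hypothetical placeholders, not the thesis's data:

```python
import numpy as np

# Hypothetical average range ratios across four cardiac states: model vs. experiment.
model = np.array([1.00, 0.85, 0.60, 0.42])
exper = np.array([0.95, 0.80, 0.55, 0.50])

rms_error = np.sqrt(np.mean((model - exper) ** 2))   # pointwise disagreement
trend_corr = np.corrcoef(model, exper)[0, 1]         # do both follow the same trend?
```

    The two metrics answer different questions: RMS error measures absolute agreement per state, while the trend correlation (the quantity reported as ≥ 0.98 above) asks only whether model and experiment order the states the same way.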