Compute Ontario Research Day 2014
Recent advances in the search for complementary sequences
Ilias Kotsireas, Wilfrid Laurier University
PIRSA:14050040
We will present recent developments in the search for complementary sequences, namely new theoretical and algorithmic progress. SHARCNET resources are used quite heavily in this project.
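The abstract does not define its subject, so for readers unfamiliar with the area: complementary sequences (Golay pairs being the classic example) are pairs of ±1 sequences whose aperiodic autocorrelations cancel at every nonzero shift. A minimal sketch of that defining property, illustrative only and not the search algorithms of the talk:

```python
# Golay complementary pair: two {+1,-1} sequences whose aperiodic
# autocorrelations sum to zero at every nonzero shift.
# (Illustrative sketch only -- not the authors' constructions.)

def aperiodic_autocorrelation(seq, shift):
    """Sum of seq[i] * seq[i + shift] over all valid i."""
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

def is_complementary_pair(a, b):
    n = len(a)
    return all(
        aperiodic_autocorrelation(a, s) + aperiodic_autocorrelation(b, s) == 0
        for s in range(1, n)
    )

# The length-2 Golay pair, and a length-4 pair obtained by the standard
# concatenation construction (A|B, A|-B):
a, b = [1, 1], [1, -1]
assert is_complementary_pair(a, b)
assert is_complementary_pair(a + b, a + [-x for x in b])
```

The concatenation step doubles the length while preserving the property, which is one reason long complementary sequences exist at all.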
Simulating the Capture and Translocation of Rigid fd Viruses through a Nanopore
Hendrick de Haan, University of Ontario Institute of Technology
PIRSA:14050036
The passage of long biological molecules from one side of a membrane to the other through a nanoscale hole has been the subject of intense research in recent years. Motivated by the possibility of new sequencing technologies, the focus of this work has been studying the translocation of DNA across biological and synthetic membranes. In this talk I will present results from a joint experimental-simulation study examining the translocation of rod-like fd viruses through a nanopore. While DNA is relatively flexible, the fd virus has a persistence length over twice its contour length and is thus effectively stiff. In principle, translocation in this rod-like limit is much easier to model. However, I will show that experimental results for the distribution of translocation times exhibit significant deviations from the expected result. I will present a model for fd translocation that was developed to probe these results. Simulations based on this model yield insight into previously unclear experimental results, including (i) details of how the polymer is captured by the pore at different external fields, (ii) a correlation between the translocation time and the conformation at capture, and (iii) sources of the increased dispersion in the translocation time distributions.
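The authors' Langevin model of rigid rods is not spelled out in the abstract. As a much simpler stand-in, a one-dimensional biased random walk for the translocation coordinate already shows why first-passage (translocation) times are broadly dispersed even under a constant driving field. All parameters below are illustrative, not fitted to fd data:

```python
# Toy 1D drift-diffusion picture of translocation: the translocation
# coordinate takes biased +/-1 steps from 0 (ejected) to N (translocated).
# This is NOT the authors' rigid-rod model -- only a minimal illustration
# of the intrinsic spread in first-passage times.
import random

def translocation_time(n_monomers=100, p_forward=0.55, rng=random):
    """Steps for the coordinate to first reach n_monomers.

    Failed attempts (walker ejected back to 0) are restarted, mimicking
    a molecule that escapes the pore and is recaptured later."""
    while True:
        x, t = 1, 0
        while 0 < x < n_monomers:
            x += 1 if rng.random() < p_forward else -1
            t += 1
        if x == n_monomers:
            return t

random.seed(0)
times = [translocation_time() for _ in range(200)]
mean = sum(times) / len(times)
rel_std = (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5 / mean
print(f"mean time ~ {mean:.0f} steps, relative spread ~ {rel_std:.2f}")
```

Even this memoryless model produces a wide translocation-time distribution; the talk's point is that the measured dispersion exceeds even such expectations, which is what the detailed simulations probe.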
New insights into polymer-induced drag reduction in turbulent flows
PIRSA:14050041
Polymer additives are known to cause significant reduction in turbulent friction drag and to reduce the energy dissipation rate of fluid transport. This effect is, however, bounded by a universal upper limit, the maximum drag reduction (MDR) asymptote, which does not change with polymer properties. Understanding MDR remains an important unsolved problem in the areas of turbulence and non-Newtonian fluid mechanics. Dynamical trajectories on the boundary in state space between laminar and turbulent plane channel flow (edge states) are computed for Newtonian and viscoelastic fluids. Viscoelasticity has a negligible effect on the properties of these solutions, and, at least at low Reynolds number, their mean velocity profiles correspond closely to experimental observations for polymer solutions in the MDR regime. These results confirm the existence of weak turbulence states that cannot be suppressed by polymer additives, explaining the fact that there is an upper limit for polymer-induced drag reduction.
HPC Application in Large Eddy Simulation of Fuel Spray / Air Jet interaction
PIRSA:14050046
Along with the development of computational resources, computational fluid dynamics (CFD) has evolved toward resolving the finest length scales and smallest time scales of the flow. Direct numerical simulation (DNS) resolves the finest flow scales, known as Kolmogorov length scales, which are responsible for the dissipation of the energy transferred from the large and intermediate length scales. However, DNS is computationally costly and demands very powerful resources which are not widely available to this day. Large eddy simulation (LES) is a more feasible tool: it resolves the large flow scales and models the sub-grid scales using Reynolds-averaged modelling. High-performance computing tools make it possible to perform high-fidelity large eddy simulations which reasonably (almost twelve times the Kolmogorov length scale) resolve the flow structures.

In the present study, large eddy simulation is utilized to simulate the interaction of a high-speed compressible round air jet with a group of sprays injected from a six-hole nozzle injector into the shear layer of the air jet. Fuel sprays are injected at 10 and 15 MPa injection pressures into jet cross flows of 125 and 215 m/s. Simulations are performed using 64 processors and 240 GB of memory. The focus of the study is on spray atomization assisted by the air jet cross-flow. Consequent processes of fuel/air mixing are also investigated by focusing on the role of vortical structures resolved by the large eddy simulation.
Solving initial-boundary value problems without numerical differentiation
PIRSA:14050042
The numerical solution of nonlinear partial differential equations with nontrivial boundary conditions is central to many areas of modelling. When high accuracy is required, (pseudo)spectral methods are usually the first choice. Typically, in this approach we search, in every time step, for the pre-image under a linear operator which represents a combination of spatial derivatives along with the boundary conditions. This operator can be quite ill-conditioned: on a basis of Chebyshev polynomials, for instance, the condition number increases algebraically with the number of basis functions. I will present an alternative method, based on recent work by Viswanath and Tobasco, which avoids numerical differentiation entirely through the use of Green's functions. I will demonstrate this method on the Kuramoto-Sivashinsky equation with fixed boundary conditions.
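The conditioning claim is easy to check numerically. The sketch below builds the standard Chebyshev collocation differentiation matrix (the textbook construction from Trefethen's "Spectral Methods in MATLAB", which the abstract does not describe) and prints the condition number of the second-derivative operator with Dirichlet boundary conditions as the number of points grows:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix on the n+1 Gauss-Lobatto points
    (standard construction, following Trefethen)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    # off-diagonal entries c_i / (c_j (x_i - x_j)); eye() dodges the diagonal
    D = np.outer(c, 1.0 / c) / (np.subtract.outer(x, x) + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
    return D, x

# Condition number of the Dirichlet second-derivative operator
# (interior rows/columns only) versus basis size.
conds = []
for n in (16, 32, 64):
    D, x = cheb(n)
    D2 = (D @ D)[1:-1, 1:-1]
    conds.append(np.linalg.cond(D2))
    print(f"n = {n:3d}   cond = {conds[-1]:.2e}")
```

The condition number grows roughly like n^4, consistent with the abstract's "algebraically with the number of basis functions".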
Modelling Surface Driven Flows in the Ocean
Eric Bembenek, University of Waterloo
PIRSA:14050047
Buoyancy-driven flows at the top of the ocean or bottom of the atmosphere are inherently different from the interior dynamics. One idealized model that has recently become very popular for these surface flows with strong rotation is Surface Quasi-Geostrophic (SQG) dynamics. This model is appropriate for large-scale dynamics and assumes the motion is in near geostrophic and hydrostatic balance. Many numerical simulations of SQG have shown that vortices are frequently generated at very small scales, scales that are well beyond the SQG limits. In this talk we examine the dynamics of a rotating three-dimensional elliptic vortex in both the SQG model and a more general, and much more complicated, primitive equation model. In order to compute high-resolution solutions to the three-dimensional primitive equations we make use of SHARCNET resources. We find that in the case of strong rotation (small Rossby number) we confirm the predictions from SQG. With weaker rotation (moderate Rossby number) we see the non-SQG effects that arise and find that the regime where SQG can be appropriate can be very limited. We conclude that some of the predictions that arise from the SQG model might not be very accurate in idealizing geophysical flows at the surface.
Biological graph dissimilarity characterization using graph theory
PIRSA:14050043
Many biological data sets and relationships can be modeled as graphs. Understanding how the structure of these graphs relates to biological function is essential for understanding the underlying mechanisms of disease and for aiding drug discovery. Vertices of biological graphs represent individual entities such as genes and proteins; edges represent the relationship between two cellular components, such as physical and functional interactions. A challenging problem in the post-genomic era is graph comparison, as these graphs are large, typed, complex and evolving. Comparing graph structures helps to gain insights into the underlying signaling mechanisms and treatments for complex diseases. With technological advancement, biological data will continue to grow, and so will the size and complexity of graphs.

Large graph comparisons are computationally intensive as they involve the subgraph isomorphism problem, which is NP-complete. Therefore, graph comparison algorithms need to be efficient and scalable, and be able to systematically capture biologically meaningful graph structure differences. Efficient graph comparison algorithms are necessary for many types of biological graphs, e.g. protein-protein interaction, drug-target, microRNA-gene, gene-regulatory and co-expression graphs. Furthermore, graph comparison algorithms are extremely useful for many applications, such as comparing graphs characterizing different diseases, different cancer subtypes, or different drug treatment responses. There are two main categories of graph properties used for comparing biological graphs: global graph properties and local graph properties. Global graph properties describe the overall graph, while local graph properties focus on local structures of the graph. Our objective is to develop an efficient, scalable graph comparison algorithm such that graph structure differences between any two states can be obtained systematically. We achieve this objective in two steps. First, we propose an algorithm such that graph structure differences are systematically obtained, and we verify that the differences are biologically meaningful. Then, we develop a heuristic to improve upon the proposed algorithm in terms of efficiency and scalability.

While our approaches are generic, we apply them to non-small cell lung cancer data sets, which are used to construct normal and tumor co-expression graphs. Global graph properties do not contain the detail needed to capture the structural characteristics of biological graphs, so we use a local property: graphlets. Graphlets are all non-isomorphic connected induced subgraphs on a specific number of vertices; by definition, they capture all the local structures on that number of vertices. Results showed that our graphlet approach returns graph structure differences between normal and tumor conditions that correspond to biological knowledge. We then introduce a heuristic to identify areas that are likely to differ between the normal and tumor graphs and perform graph comparisons on the identified areas only. The heuristic achieved interesting results that were successfully validated in vitro.
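The abstract does not state which graphlet sizes were used. The smallest case, the two connected graphlets on three vertices (the 2-path and the triangle), already illustrates the counting idea; the function and toy graph below are purely illustrative:

```python
# Count the two connected 3-vertex graphlets (2-paths and triangles)
# over all induced 3-vertex subgraphs of a small undirected graph.
# Illustrative sketch only; the study uses larger graphlets and real
# co-expression graphs.
from itertools import combinations

def count_3node_graphlets(edges):
    """Return (#2-paths, #triangles) among induced 3-vertex subgraphs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    paths = triangles = 0
    for a, b, c in combinations(sorted(adj), 3):
        m = sum(1 for x, y in ((a, b), (a, c), (b, c)) if y in adj[x])
        if m == 2:
            paths += 1
        elif m == 3:
            triangles += 1
    return paths, triangles

# A triangle with one pendant vertex attached:
print(count_3node_graphlets([(1, 2), (2, 3), (1, 3), (3, 4)]))  # prints (2, 1)
```

Comparing such counts (per vertex, per graphlet type) between two graphs is what lets structural differences be localized without solving full subgraph isomorphism on the whole graph.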
Designing Electroencephalographic (EEG) analysis software with HPC in mind: Focus on a modular submission interface and flexible data annotation
PIRSA:14050044
Electroencephalography (EEG) is a method for measuring brain activity by recording electrical fields at the scalp surface. Although it has the highest temporal resolution among brain imaging techniques, it has low spatial resolution and is very sensitive to various forms of noise (e.g. movement artifacts, electrical sources in the environment, impedance artifacts, and various biological artifacts typically generated by muscle activation). Substantial progress in the implementation of new signal processing and statistical strategies for EEG data analysis is currently changing the specificity with which EEG researchers can interpret their data. Because EEG studies can produce large data sets (e.g. 100 participants, each contributing an EEG recording that consists of 130+ recording channels for 1 hour at a common sampling rate of 500 Hz or 1000 Hz) and the new processing strategies are computationally intensive (e.g. Independent Component Analysis (ICA) and bootstrapping), the computation time involved is not feasible for many research situations. Thus these advanced methods are often not used due to computation limitations, even though there is no information-based downside to their outcome. In this talk I present two software extensions being developed at the Brock University Lifespan Research Center for integration with the leading open-source EEG analysis software platform EEGLab (developed at the Swartz Center for Computational Neuroscience, UCSD). The first is a modular interface for submitting unsupervised procedures to a compute cluster, and the second is a flexible offline visualization tool that allows for the interactive annotation of extensive unsupervised processing. These software extensions, together with resources such as SHARCNET, can remove the computation constraints of advanced data processing from EEG research labs.
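Of the compute-heavy steps the abstract names, bootstrapping is the simplest to illustrate. Below is a minimal percentile-bootstrap confidence interval for a mean; this is not code from the EEGLab extensions, and in a real EEG pipeline this resampling would be repeated across channels, time points, and participants, which is exactly what motivates cluster submission:

```python
# Percentile bootstrap CI for a sample mean -- a toy version of the kind
# of resampling statistic the abstract calls computationally intensive.
# (Illustrative only; not part of the described EEGLab extensions.)
import random

def bootstrap_ci_mean(data, n_boot=2000, alpha=0.05, rng=random):
    """95% (by default) percentile bootstrap CI for the mean of data."""
    means = sorted(
        sum(rng.choice(data) for _ in data) / len(data)  # one resampled mean
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(50)]       # fake amplitudes
print(bootstrap_ci_mean(data))
```

One such interval is cheap; the cost explodes only when it is computed for every channel-by-time cell of every participant, hence the value of an unsupervised cluster submission interface.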
HPC in Quantum Gravity
Sebastian Steinhaus, Friedrich Schiller University Jena
PIRSA:14050037
Applications of numerical simulations to quantum gravity have so far been largely neglected, yet they possess remarkable potential for learning more about the theory. For approaches that attempt to construct quantum spacetime from fundamental microscopical building blocks, e.g. spin foam models, the collective behaviour involving many building blocks is unexplored. Therefore we numerically simulate the collective dynamics of many of these building blocks using coarse graining techniques, i.e. tensor network renormalization, and uncover a rich structure of fixed points with extended phases and phase transitions. Ref.: arXiv:1312.0905 [gr-qc]
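Tensor network methods of this kind encode a partition function as a contraction of identical local tensors, which coarse graining then iterates. A minimal sketch of the "building block" step using the 2D Ising model rather than spin foams (this is standard textbook material, not the model of the talk): build the local tensor from bond Boltzmann weights and verify the contraction on a 2x2 torus against a brute-force spin sum.

```python
# Partition function of the 2D Ising model as a tensor network:
# one 4-index tensor per site, one index per bond. Verified against
# brute force on a 2x2 torus. (Illustrative; not spin foam models.)
import numpy as np
from itertools import product

K = 0.4                                                # coupling J / (k_B T)
W = np.exp(K * np.array([[1.0, -1.0], [-1.0, 1.0]]))   # bond Boltzmann weights

# Split each bond symmetrically: W = Q @ Q, with Q the matrix square
# root (W is positive definite for K > 0).
w, V = np.linalg.eigh(W)
Q = V @ np.diag(np.sqrt(w)) @ V.T

# Local building block: sum over the site spin s of four half-bonds.
T = np.einsum('si,sj,sk,sl->ijkl', Q, Q, Q, Q)

# Contract four tensors on a 2x2 torus (each of the 8 bond indices is
# shared by exactly two tensors) to get the partition function.
Z_tn = np.einsum('fbea,hagb,edfc,gchd->', T, T, T, T)

# Brute force: on the 2x2 torus every nearest-neighbour pair is joined
# by two bonds, hence the factor 2 in the energy.
Z_exact = sum(
    np.exp(K * 2 * (s00 * s01 + s10 * s11 + s00 * s10 + s01 * s11))
    for s00, s01, s10, s11 in product([1, -1], repeat=4)
)
print(Z_tn, Z_exact)
```

Tensor network renormalization then repeatedly replaces blocks of such tensors by a single truncated tensor; the fixed points of that map are what the abstract's phase structure refers to.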
Predicting New Graphene-Boron Nitride 2D Nano-Materials: Structure, Electron Bands, Optical Response and Vibrations
PIRSA:14050048
The goal of this research is to investigate theoretically the possibility of creating graphene-based semiconducting 2D heterosystems that allow tailoring of the band gap and creating states inside the gap on demand. Such systems are created in our computational experiment by depositing graphene on a layer of hexagonal boron nitride and adding hydrogen on top and bottom of the systems to passivate the dangling bonds and create covalent bonding between the layers of the system of interest. Apart from the atomic structure, the thermal stability of the heterosystems and their optical and vibrational properties were also studied. In this research, four different bilayers and their properties are presented.
Testing Discontinuous Galerkin Methods in the Einstein Toolkit for Numerical Relativity
Jonah Miller, Los Alamos National Laboratory
PIRSA:14050045
Discontinuous Galerkin finite element (DGFE) methods combine advantages of both finite-difference and finite-element approaches. These methods scale extremely well, and they have been very successful in computational fluid dynamics. As such, we would like to transpose them to the domain of relativistic astrophysics. Recently we have implemented DGFE methods in the Einstein Toolkit, a large numerical relativity codebase used by hundreds of scientists around the world. However, before DGFE methods can be used in production simulations, we must ensure that our implementation is up to the efficiency and accuracy standards of a production codebase. Here we detail our efforts to test our implementation using the Apples with Apples tests (cf. arXiv:gr-qc/0305023 and arXiv:0709.3559). We briefly introduce DGFE methods, explain the Apples with Apples tests and our rationale for using them, and discuss results.
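For readers new to the method: the DG idea in one dimension is a piecewise-polynomial solution on each element, coupled to its neighbours only through a numerical flux at the element faces. A minimal P1 (linear) upwind DG solver for the advection equation u_t + u_x = 0, unrelated to the Einstein Toolkit implementation and purely illustrative:

```python
# P1 upwind discontinuous Galerkin for u_t + u_x = 0 on [0,1], periodic.
# Each element carries a linear polynomial; elements communicate only
# through the upwind flux at faces. (Toy sketch, not the Einstein Toolkit.)
import numpy as np

def dg_advection(n_elem=32, t_final=1.0):
    """Max-norm error after advecting sin(2*pi*x) for t_final
    (the exact solution returns to its initial profile at t = 1)."""
    h = 1.0 / n_elem
    x = np.stack([np.arange(n_elem) * h, (np.arange(n_elem) + 1) * h], axis=1)
    u = np.sin(2 * np.pi * x)                                # nodal values
    Minv = (2.0 / h) * np.array([[2.0, -1.0], [-1.0, 2.0]])  # inverse P1 mass matrix
    A = np.array([[-0.5, -0.5], [0.5, 0.5]])                 # A_ij = int phi_i' phi_j dx

    def rhs(u):
        ustar = np.roll(u[:, 1], 1)     # upwind trace at each left face
        flux = np.empty_like(u)
        flux[:, 0] = -ustar             # face term at the left boundary
        flux[:, 1] = u[:, 1]            # face term at the right boundary
        return (u @ A.T - flux) @ Minv.T

    t, dt = 0.0, 0.1 * h
    while t < t_final - 1e-12:          # classic RK4 time stepping
        s = min(dt, t_final - t)
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * s * k1)
        k3 = rhs(u + 0.5 * s * k2)
        k4 = rhs(u + s * k3)
        u = u + (s / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += s
    return float(np.max(np.abs(u - np.sin(2 * np.pi * x))))

print(dg_advection())                   # small: the scheme is O(h^2) for P1
```

The element-local mass and stiffness matrices are why DG scales so well: all volume work is embarrassingly parallel, and only the face fluxes require neighbour communication.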