
Data Center Building O

Conference Papers



2021

Enabling Electronic Structure-Based Ab-Initio Molecular Dynamics Simulations with Hundreds of Millions of Atoms

R. Schade, T. Kenter, H. Elgabarty, M. Lass, O. Schütt, A. Lazzaro, H. Pabst, S. Mohr, J. Hutter, T. Kühne, C. Plessl, 2021

We push the boundaries of electronic structure-based ab-initio molecular dynamics (AIMD) beyond 100 million atoms. This scale is otherwise barely reachable with classical force-field methods or novel neural network and machine learning potentials. We achieve this breakthrough by combining innovations in linear-scaling AIMD, efficient and approximate sparse linear algebra, low and mixed-precision floating-point computation on GPUs, and a compensation scheme for the errors introduced by numerical approximations. The core of our work is the non-orthogonalized local submatrix (NOLSM) method, which scales very favorably to massively parallel computing systems and translates large sparse matrix operations into highly parallel, dense matrix operations that are ideally suited to hardware accelerators. We demonstrate that the NOLSM method, which is at the center point of each AIMD step, is able to achieve a sustained performance of 324 PFLOP/s in mixed FP16/FP32 precision corresponding to an efficiency of 67.7% when running on 1536 NVIDIA A100 GPUs.


    Generating Physically Sound Training Data for Image Recognition of Additively Manufactured Parts

    T. Nickchen, S. Heindorf, G. Engels, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 1994-2002


    Towards Performance Characterization of FPGAs in Context of HPC using OpenCL Benchmarks

    M. Meyer, in: Proceedings of the 11th International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies, 2021


    2020

    A Runtime System for Finite Element Methods in a Partitioned Global Address Space

    S. Groth, D. Grünewald, J. Teich, F. Hannig, in: Proceedings of the 17th ACM International Conference on Computing Frontiers (CF '2020), ACM, 2020


    A Submatrix-Based Method for Approximate Matrix Function Evaluation in the Quantum Chemistry Code CP2K

    M. Lass, R. Schade, T. Kühne, C. Plessl, in: Proc. International Conference for High Performance Computing, Networking, Storage and Analysis (SC), IEEE Computer Society, 2020, pp. 1127-1140

    Electronic structure calculations based on density-functional theory (DFT) represent a significant part of today's HPC workloads and pose high demands on high-performance computing resources. To perform these quantum-mechanical DFT calculations on complex large-scale systems, so-called linear scaling methods instead of conventional cubic scaling methods are required. In this work, we take up the idea of the submatrix method and apply it to the DFT computations in the software package CP2K. For that purpose, we transform the underlying numeric operations on distributed, large, sparse matrices into computations on local, much smaller and nearly dense matrices. This allows us to exploit the full floating-point performance of modern CPUs and to make use of dedicated accelerator hardware, where performance has been limited by memory bandwidth before. We demonstrate both functionality and performance of our implementation and show how it can be accelerated with GPUs and FPGAs.
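
To illustrate the submatrix transformation described above, the following minimal NumPy sketch (illustrative only, not the CP2K implementation; function names, the threshold and the test matrix are invented) builds, for each column, the dense submatrix induced by that column's sparsity pattern, applies a matrix function to it, and copies the corresponding result column back:

import numpy as np

def submatrix_apply(A, matfunc, threshold=1e-12):
    """Approximate matfunc(A) column by column via small dense submatrices."""
    n = A.shape[0]
    result = np.zeros_like(A)
    for i in range(n):
        idx = np.flatnonzero(np.abs(A[:, i]) > threshold)   # sparsity pattern of column i
        sub = A[np.ix_(idx, idx)]                            # small, nearly dense submatrix
        fsub = matfunc(sub)                                  # dense evaluation, e.g. on an accelerator
        local_i = int(np.flatnonzero(idx == i)[0])           # position of column i inside the submatrix
        result[idx, i] = fsub[:, local_i]                    # keep only the column belonging to i
    return result

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 64
    A = np.diag(rng.uniform(2.0, 3.0, n))                    # diagonally dominant banded test matrix
    for k in (1, 2):
        off = rng.uniform(0.05, 0.1, n - k)
        A += np.diag(off, k) + np.diag(off, -k)
    approx = submatrix_apply(A, np.linalg.inv)               # approximate A^-1
    print("max abs error:", np.max(np.abs(approx - np.linalg.inv(A))))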


      Evaluating FPGA Accelerator Performance with a Parameterized OpenCL Adaptation of Selected Benchmarks of the HPCChallenge Benchmark Suite

      M. Meyer, T. Kenter, C. Plessl, in: 2020 IEEE/ACM International Workshop on Heterogeneous High-performance Reconfigurable Computing (H2RC), 2020

FPGAs have found increasing adoption in data center applications since a new generation of high-level tools has become available that noticeably reduces development time for FPGA accelerators and still provides high-quality results. There is, however, no high-level benchmark suite available that specifically enables a comparison of FPGA architectures, programming tools, and libraries for HPC applications. To fill this gap, we have developed an OpenCL-based open-source implementation of the HPCC benchmark suite for Xilinx and Intel FPGAs. This benchmark can serve to analyze the current capabilities of FPGA devices, cards, and development tool flows, track progress over time, and point out specific difficulties for FPGA acceleration in the HPC domain. Additionally, the benchmark documents proven performance optimization patterns. We will continue optimizing and porting the benchmark for new generations of FPGAs and design tools and encourage active participation to create a valuable tool for the community.


      2019

      OpenCL Implementation of Cannon's Matrix Multiplication Algorithm on Intel Stratix 10 FPGAs

      P. Gorlani, T. Kenter, C. Plessl, in: Proceedings of the International Conference on Field-Programmable Technology (FPT), IEEE, 2019

Stratix 10 FPGA cards have good potential for the acceleration of HPC workloads since the Stratix 10 product line introduces devices with a large number of DSP and memory blocks. High-level synthesis of OpenCL code can play a fundamental role for FPGAs in HPC because it allows different designs to be implemented with lower development effort than hand-optimized HDL. However, Stratix 10 cards are still hard to fully exploit using the Intel FPGA SDK for OpenCL. The implementation of designs with thousands of concurrent arithmetic operations often suffers from place-and-route problems that limit the maximum frequency or entirely prevent a successful synthesis. In order to overcome these issues for the implementation of matrix multiplication, we formulate Cannon's matrix multiplication algorithm with regard to its efficient synthesis within the FPGA logic. We obtain a two-level block algorithm, where the lower-level sub-matrices are multiplied using our Cannon's algorithm implementation. Following this design approach with multiple compute units, we are able to reach maximum frequencies close to and above 300 MHz with high utilization of DSP and memory blocks. This allows for performance results above 1 TeraFLOPS.
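
For reference, the shift-multiply-accumulate structure of Cannon's algorithm that the abstract builds on can be emulated serially in a few lines; the NumPy sketch below (illustrative only; the paper's OpenCL compute units and block sizes are not modeled) runs the classic algorithm on a logical p x p grid of blocks:

import numpy as np

def cannon_matmul(A, B, p):
    """Emulate Cannon's algorithm on a logical p x p grid of blocks (serial NumPy sketch)."""
    n = A.shape[0]
    assert n % p == 0, "matrix size must be divisible by the grid size"
    b = n // p
    Ab = [[A[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(p)] for i in range(p)]
    Bb = [[B[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(p)] for i in range(p)]
    Cb = [[np.zeros((b, b)) for _ in range(p)] for _ in range(p)]
    # Initial skew: shift block row i of A left by i, block column j of B up by j.
    Ab = [[Ab[i][(j + i) % p] for j in range(p)] for i in range(p)]
    Bb = [[Bb[(i + j) % p][j] for j in range(p)] for i in range(p)]
    # p rounds of local multiply-accumulate followed by single-position shifts.
    for _ in range(p):
        for i in range(p):
            for j in range(p):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        Ab = [[Ab[i][(j + 1) % p] for j in range(p)] for i in range(p)]   # shift A blocks left
        Bb = [[Bb[(i + 1) % p][j] for j in range(p)] for i in range(p)]   # shift B blocks up
    return np.block(Cb)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, B = rng.standard_normal((2, 8, 8))
    print(np.allclose(cannon_matmul(A, B, p=4), A @ B))   # True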


        2018

        A Data Structure for Planning Based Workload Management of Heterogeneous HPC Systems

        A. Keller, in: Proc. Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP), Springer, 2018, pp. 132-151

This paper describes a data structure and a heuristic to plan and map arbitrary resources in complex combinations while applying time dependent constraints. The approach is used in the planning based workload manager OpenCCS at the Paderborn Center for Parallel Computing (PC²) to operate heterogeneous clusters with up to 10000 cores. We also show performance results derived from four years of operation.


          A Massively Parallel Algorithm for the Approximate Calculation of Inverse p-th Roots of Large Sparse Matrices

          M. Lass, S. Mohr, H. Wiebeler, T. Kühne, C. Plessl, in: Proc. Platform for Advanced Scientific Computing (PASC) Conference, ACM, 2018

We present the submatrix method, a highly parallelizable method for the approximate calculation of inverse p-th roots of large sparse symmetric matrices which are required in different scientific applications. Following the idea of Approximate Computing, we allow imprecision in the final result in order to utilize the sparsity of the input matrix and to allow massively parallel execution. For an n x n matrix, the proposed algorithm allows the calculations to be distributed over n nodes with only little communication overhead. The result matrix exhibits the same sparsity pattern as the input matrix, allowing for efficient reuse of allocated data structures. We evaluate the algorithm with respect to the error that it introduces into calculated results, as well as its performance and scalability. We demonstrate that the error is relatively limited for well-conditioned matrices and that results are still valuable for error-resilient applications like preconditioning even for ill-conditioned matrices. We discuss the execution time and scaling of the algorithm on a theoretical level and present a distributed implementation of the algorithm using MPI and OpenMP. We demonstrate the scalability of this implementation by running it on a high-performance compute cluster comprising 1024 CPU cores, showing a speedup of 665x compared to single-threaded execution.
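
As a companion illustration of the dense kernel that each node would apply to its local submatrix, the following minimal NumPy sketch computes the inverse p-th root of a small symmetric positive definite block via eigendecomposition (illustrative only; the paper's distributed MPI/OpenMP implementation and its error analysis are not shown):

import numpy as np

def inv_pth_root(sub, p):
    """Inverse p-th root of a symmetric positive definite dense block via eigendecomposition."""
    w, V = np.linalg.eigh(sub)           # sub = V diag(w) V^T
    return (V * w**(-1.0 / p)) @ V.T     # V diag(w^{-1/p}) V^T

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    M = rng.standard_normal((6, 6))
    spd = M @ M.T + 6 * np.eye(6)        # well-conditioned SPD test block
    R = inv_pth_root(spd, p=2)           # R approximates spd^{-1/2}
    print(np.allclose(R @ spd @ R, np.eye(6)))   # True: R A R = I for p = 2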


            Automated Code Acceleration Targeting Heterogeneous OpenCL Devices

            H. Riebler, G.F. Vaz, T. Kenter, C. Plessl, in: Proc. ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), ACM, 2018


            OpenCL-based FPGA Design to Accelerate the Nodal Discontinuous Galerkin Method for Unstructured Meshes

            T. Kenter, G. Mahale, S. Alhaddad, Y. Grynko, C. Schmitt, A. Afzal, F. Hannig, J. Förstner, C. Plessl, in: Proc. Int. Symp. on Field-Programmable Custom Computing Machines (FCCM), IEEE, 2018

            The exploration of FPGAs as accelerators for scientific simulations has so far mostly been focused on small kernels of methods working on regular data structures, for example in the form of stencil computations for finite difference methods. In computational sciences, often more advanced methods are employed that promise better stability, convergence, locality and scaling. Unstructured meshes are shown to be more effective and more accurate, compared to regular grids, in representing computation domains of various shapes. Using unstructured meshes, the discontinuous Galerkin method preserves the ability to perform explicit local update operations for simulations in the time domain. In this work, we investigate FPGAs as target platform for an implementation of the nodal discontinuous Galerkin method to find time-domain solutions of Maxwell's equations in an unstructured mesh. When maximizing data reuse and fitting constant coefficients into suitably partitioned on-chip memory, high computational intensity allows us to implement and feed wide data paths with hundreds of floating point operators. By decoupling off-chip memory accesses from the computations, high memory bandwidth can be sustained, even for the irregular access pattern required by parts of the application. Using the Intel/Altera OpenCL SDK for FPGAs, we present different implementation variants for different polynomial orders of the method. In different phases of the algorithm, either computational or bandwidth limits of the Arria 10 platform are almost reached, thus outperforming a highly multithreaded CPU implementation by around 2x.


              2017

              Flexible FPGA design for FDTD using OpenCL

              T. Kenter, J. Förstner, C. Plessl, in: Proc. Int. Conf. on Field Programmable Logic and Applications (FPL), IEEE, 2017

Compared to classical HDL designs, generating FPGA designs with high-level synthesis from an OpenCL specification promises easier exploration of different design alternatives and, through ready-to-use infrastructure and common abstractions for host and memory interfaces, easier portability between different FPGA families. In this work, we evaluate the extent of this promise. To this end, we present a parameterized FDTD implementation for photonic microcavity simulations. Our design can trade off different forms of parallelism and works for two independent OpenCL-based FPGA design flows. Hence, we can target FPGAs from different vendors and different FPGA families. We describe how we used pre-processor macros to achieve this flexibility and to work around different shortcomings of the current tools. Choosing the right design configurations, we are able to present two extremely competitive solutions for very different FPGA targets, reaching up to 172 GFLOPS sustained performance. With the portability and flexibility demonstrated, code developers not only avoid vendor lock-in, but can even make best use of real trade-offs between different architectures.
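
The core update structure behind such an FDTD implementation is compact; the following one-dimensional NumPy sketch (normalized units, Courant number 1, soft Gaussian source; purely illustrative and far simpler than the paper's 2D/3D OpenCL kernels for microcavities) shows the staggered E/H leapfrog that the FPGA data paths pipeline:

import numpy as np

nx, steps = 200, 300
E = np.zeros(nx)                  # electric field on integer grid points
H = np.zeros(nx - 1)              # magnetic field on staggered half points
for n in range(steps):
    H += E[1:] - E[:-1]                               # update H from the spatial difference of E
    E[1:-1] += H[1:] - H[:-1]                         # update E from the spatial difference of H
    E[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)     # soft Gaussian source in the middle
print("field energy proxy:", float(np.sum(E**2) + np.sum(H**2)))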


                2016

                Confidentiality and Authenticity for Distributed Version Control Systems - A Mercurial Extension

                M. Lass, D. Leibenger, C. Sorge, in: Proc. 41st Conference on Local Computer Networks (LCN), IEEE, 2016

Version Control Systems (VCS) are a valuable tool for software development and document management. Both client/server and distributed (Peer-to-Peer) models exist, with the latter (e.g., Git and Mercurial) becoming increasingly popular. Their distributed nature introduces complications, especially concerning security: it is hard to control the dissemination of contents stored in distributed VCS as they rely on replication of complete repositories to any involved user. We overcome this issue by designing and implementing a concept for cryptography-enforced access control which is transparent to the user. Use of field-tested schemes (end-to-end encryption, digital signatures) allows for strong security, while adoption of convergent encryption and content-defined chunking retains storage efficiency. The concept is seamlessly integrated into Mercurial, respecting its distributed storage concept, to ensure practical usability and compatibility with existing deployments.
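
The two storage-efficiency techniques named above can be sketched in a few lines; the following example (purely illustrative, not the Mercurial extension's code) derives per-chunk convergent keys as hashes of content-defined chunks, so identical plaintext chunks yield identical keys and deduplicate; the actual cipher step and signatures are omitted:

import hashlib

def chunk_boundaries(data: bytes, window: int = 16, mask: int = 0xFF):
    """Yield chunk end offsets where a content-dependent checksum hits a boundary pattern."""
    checksum, start = 0, 0
    for i, byte in enumerate(data):
        checksum = (checksum * 31 + byte) & 0xFFFFFFFF
        if i - start + 1 >= window and (checksum & mask) == 0:
            yield i + 1
            start, checksum = i + 1, 0
    if start < len(data):
        yield len(data)

def convergent_chunks(data: bytes):
    """Split data into content-defined chunks and derive a convergent key per chunk."""
    start, out = 0, []
    for end in chunk_boundaries(data):
        chunk = data[start:end]
        out.append((hashlib.sha256(chunk).hexdigest(), chunk))   # key = H(chunk): identical chunks share keys
        start = end
    return out

if __name__ == "__main__":
    blob = b"example repository content " * 200
    keys = [key for key, _ in convergent_chunks(blob)]
    print(len(keys), "chunks,", len(set(keys)), "distinct convergent keys")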


                  Microdisk Cavity FDTD Simulation on FPGA using OpenCL

                  T. Kenter, C. Plessl, in: Proc. Workshop on Heterogeneous High-performance Reconfigurable Computing (H2RC), 2016


                  Multiobjective Optimal Control Methods for the Development of an Intelligent Cruise Control

                  M. Dellnitz, J. Eckstein, K. Flaßkamp, P. Friedel, C. Horenkamp, U. Köhler, S. Ober-Blöbaum, S. Peitz, S. Tiemeyer, in: Progress in Industrial Mathematics at ECMI, Springer International Publishing, 2016, pp. 633-641


                  Opportunities for deferring application partitioning and accelerator synthesis to runtime (extended abstract)

                  T. Kenter, G.F. Vaz, H. Riebler, C. Plessl, in: Workshop on Reconfigurable Computing (WRC), 2016


                  Performance-centric scheduling with task migration for a heterogeneous compute node in the data center

                  A. Lösch, T. Beisel, T. Kenter, C. Plessl, M. Platzner, in: Proceedings of the 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), EDA Consortium / IEEE, 2016, pp. 912-917

The use of heterogeneous computing resources, such as Graphics Processing Units or other specialized coprocessors, has become widespread in recent years because of their performance and energy efficiency advantages. Approaches for managing and scheduling tasks to heterogeneous resources are still subject to research. Although queuing systems have recently been extended to support accelerator resources, a general solution that manages heterogeneous resources at the operating system level to exploit a global view of the system state is still missing. In this paper we present a user space scheduler that enables task scheduling and migration on heterogeneous processing resources in Linux. Using run queues for available resources, we perform scheduling decisions based on the system state and on task characterization from earlier measurements. With a programming pattern that supports the integration of checkpoints into applications, we preempt tasks and migrate them between three very different compute resources. Considering static and dynamic workload scenarios, we show that this approach can gain up to 17% performance, on average 7%, by effectively avoiding idle resources. We demonstrate that a work-conserving strategy without migration is not a suitable alternative.
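
The scheduling idea can be pictured with a toy model; the sketch below (all names, resources and runtimes invented; not the actual Linux user space scheduler) keeps one run queue per resource, places tasks by predicted completion time from earlier runtime measurements, and migrates a checkpointed task when a resource falls idle:

from collections import deque

RESOURCES = ["cpu", "gpu", "fpga"]

class Task:
    def __init__(self, name, measured_runtime):
        self.name = name
        self.measured_runtime = measured_runtime   # seconds per resource, from earlier measurements
        self.checkpoint = 0.0                      # fraction of work already completed

queues = {r: deque() for r in RESOURCES}

def predicted_finish(resource, task):
    """Queue backlog on the resource plus the task's own measured runtime there."""
    backlog = sum(t.measured_runtime[resource] * (1 - t.checkpoint) for t in queues[resource])
    return backlog + task.measured_runtime[resource]

def schedule(task):
    best = min(RESOURCES, key=lambda r: predicted_finish(r, task))
    queues[best].append(task)
    return best

def migrate_if_idle():
    """Move the last queued task from the fullest queue to an idle resource."""
    idle = [r for r in RESOURCES if not queues[r]]
    busiest = max(RESOURCES, key=lambda r: len(queues[r]))
    if idle and len(queues[busiest]) > 1:
        task = queues[busiest].pop()               # would resume from task.checkpoint on the new resource
        queues[idle[0]].append(task)

if __name__ == "__main__":
    for i in range(4):
        t = Task(f"job{i}", {"cpu": 10.0, "gpu": 2.0 + i, "fpga": 4.0})
        print(t.name, "->", schedule(t))
    migrate_if_idle()
    print({r: [t.name for t in q] for r, q in queues.items()})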


                    Using Approximate Computing in Scientific Codes

                    M. Lass, T. Kühne, C. Plessl, in: Workshop on Approximate Computing (AC), 2016


                    Using Just-in-Time Code Generation for Transparent Resource Management in Heterogeneous Systems

                    H. Riebler, G.F. Vaz, C. Plessl, E.M.G. Trainiti, G.C. Durelli, C. Bolchini, in: Proc. HiPEAC Workshop on Reonfigurable Computing (WRC), 2016


                    Using Just-in-Time Code Generation for Transparent Resource Management in Heterogeneous Systems

H. Riebler, G.F. Vaz, C. Plessl, E.M.G. Trainiti, G.C. Durelli, E. Del Sozzo, M.D. Santambrogio, C. Bolchini, in: Proceedings of International Forum on Research and Technologies for Society and Industry (RTSI), IEEE, 2016, pp. 1-5

Hardware accelerators are becoming popular in academia and industry. To move one step further from the state-of-the-art multicore plus accelerator approaches, we present in this paper our innovative SAVEHSA architecture. It comprises a heterogeneous hardware platform with three different high-end accelerators attached over PCIe (GPGPU, FPGA and Intel MIC). Such systems can process parallel workloads very efficiently whilst being more energy efficient than regular CPU systems. To leverage the heterogeneity, the workload has to be distributed among the computing units in a way that each unit is well-suited for the assigned task, and executable code must be available. To tackle this problem we present two software components; the first can perform resource allocation at runtime while respecting system and application goals (in terms of throughput, energy, latency, etc.) and the second is able to analyze an application and generate executable code for an accelerator at runtime. We demonstrate the first proof-of-concept implementation of our framework on the heterogeneous platform, discuss different runtime policies and measure the introduced overheads.


                      2015

                      Easy-to-Use On-The-Fly Binary Program Acceleration on Many-Cores

                      M. Damschen, C. Plessl, in: Proceedings of the 5th International Workshop on Adaptive Self-tuning Computing Systems (ADAPT), 2015

This paper introduces Binary Acceleration At Runtime (BAAR), an easy-to-use on-the-fly binary acceleration mechanism which aims to tackle the problem of enabling existent software to automatically utilize accelerators at runtime. BAAR is based on the LLVM Compiler Infrastructure and has a client-server architecture. The client runs the program to be accelerated in an environment which allows program analysis and profiling. Program parts which are identified as suitable for the available accelerator are exported and sent to the server. The server optimizes these program parts for the accelerator and provides RPC execution for the client. The client transforms its program to utilize accelerated execution on the server for offloaded program parts. We evaluate our work with a proof-of-concept implementation of BAAR that uses an Intel Xeon Phi 5110P as the acceleration target and performs automatic offloading, parallelization and vectorization of suitable program parts. The practicality of BAAR for real-world examples is shown based on a study of stencil codes. Our results show a speedup of up to 4 without any developer-provided hints and 5.77 with hints over the same code compiled with the Intel Compiler at optimization level O2 and running on an Intel Xeon E5-2670 machine. Based on our insights gained during implementation and evaluation we outline future directions of research, e.g., offloading more fine-granular program parts than functions, a more sophisticated communication mechanism or introducing on-stack-replacement.


                      Improving Packet Processing Performance in the ATLAS FELIX Project – Analysis and Optimization of a Memory-Bounded Algorithm

                      J. Schumacher, J. T. Anderson, A. Borga, H. Boterenbrood, H. Chen, K. Chen, G. Drake, D. Francis, B. Gorini, F. Lanni, G. Lehmann-Miotto, L. Levinson, J. Narevicius, C. Plessl, A. Roich, S. Ryu, F. P. Schreuder, W. Vandelli, J. Vermeulen, J. Zhang, in: Proc. Int. Conf. on Distributed Event-Based Systems (DEBS), ACM, 2015


                      Transparent offloading of computational hotspots from binary code to Xeon Phi

                      M. Damschen, H. Riebler, G.F. Vaz, C. Plessl, in: Proceedings of the 2015 Conference on Design, Automation and Test in Europe (DATE), EDA Consortium / IEEE, 2015, pp. 1078-1083

                      In this paper, we study how binary applications can be transparently accelerated with novel heterogeneous computing resources without requiring any manual porting or developer-provided hints. Our work is based on Binary Acceleration At Runtime (BAAR), our previously introduced binary acceleration mechanism that uses the LLVM Compiler Infrastructure. BAAR is designed as a client-server architecture. The client runs the program to be accelerated in an environment, which allows program analysis and profiling and identifies and extracts suitable program parts to be offloaded. The server compiles and optimizes these offloaded program parts for the accelerator and offers access to these functions to the client with a remote procedure call (RPC) interface. Our previous work proved the feasibility of our approach, but also showed that communication time and overheads limit the granularity of functions that can be meaningfully offloaded. In this work, we motivate the importance of a lightweight, high-performance communication between server and client and present a communication mechanism based on the Message Passing Interface (MPI). We evaluate our approach by using an Intel Xeon Phi 5110P as the acceleration target and show that the communication overhead can be reduced from 40% to 10%, thus enabling even small hotspots to benefit from offloading to an accelerator.


                        2014

                        Deferring Accelerator Offloading Decisions to Application Runtime

                        G.F. Vaz, H. Riebler, T. Kenter, C. Plessl, in: Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig), IEEE, 2014, pp. 1-8

Reconfigurable architectures provide an opportunity to accelerate a wide range of applications, frequently by exploiting data parallelism, where the same operations are homogeneously executed on a (large) set of data. However, when the sequential code is executed on a host CPU and only data-parallel loops are executed on an FPGA coprocessor, a sufficiently large number of loop iterations (trip counts) is required, such that the control- and data-transfer overheads to the coprocessor can be amortized. However, the trip count of large data-parallel loops is frequently not known at compile time, but only at runtime just before entering a loop. Therefore, we propose to generate code both for the CPU and the coprocessor, and to defer the decision where to execute the appropriate code to the runtime of the application, when the trip count of the loop can be determined. We demonstrate how an LLVM compiler based toolflow can automatically insert appropriate decision blocks into the application code. Analyzing popular benchmark suites, we show that this kind of runtime decision is often applicable. The practical feasibility of our approach is demonstrated by a toolflow that automatically identifies loops suitable for vectorization and generates code for the FPGA coprocessor of a Convey HC-1. The toolflow adds decisions based on a comparison of the runtime-computed trip counts to thresholds for specific loops and also includes support to move just the required data to the coprocessor. We evaluate the integrated toolflow with characteristic loops executed on different input data sizes.
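
The inserted decision block can be pictured as follows; this minimal sketch (threshold value and function names invented; the actual toolflow emits LLVM IR, not Python) compares the trip count known just before loop entry against a threshold that amortizes the transfer overhead and picks the CPU or coprocessor version accordingly:

TRIP_COUNT_THRESHOLD = 4096   # hypothetical break-even point for offloading

def loop_cpu(data):
    return [x * x for x in data]

def loop_coprocessor(data):
    # Stand-in for "copy data to the coprocessor, run the vectorized loop, copy results back".
    return [x * x for x in data]

def run_loop(data):
    trip_count = len(data)                       # known just before entering the loop
    if trip_count >= TRIP_COUNT_THRESHOLD:
        return loop_coprocessor(data)            # offload: transfer cost is amortized
    return loop_cpu(data)                        # stay on the host CPU

if __name__ == "__main__":
    print(sum(run_loop(list(range(100)))))       # small trip count: CPU path
    print(sum(run_loop(list(range(100_000)))))   # large trip count: coprocessor path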


                          Kernel-Centric Acceleration of High Accuracy Stereo-Matching

                          T. Kenter, H. Schmitz, C. Plessl, in: Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig), IEEE, 2014, pp. 1-8

Stereo-matching algorithms have recently received a lot of attention from the FPGA acceleration community. Presented solutions range from simple, very resource-efficient systems with modest matching quality for small embedded systems to sophisticated algorithms with several processing steps, implemented on big FPGAs. In order to achieve high throughput, most implementations strongly focus on pipelining and data reuse between different computation steps. This approach leads to high efficiency, but limits the supported computation patterns, and due to the high integration of the implementation, adaptations to the algorithm are difficult. In this work, we present a stereo-matching implementation that starts by offloading individual kernels from the CPU to the FPGA. Between subsequent compute steps on the FPGA, data is stored off-chip in on-board memory of the FPGA accelerator card. This enables us to accelerate the AD-census algorithm with cross-based aggregation and scanline optimization for the first time without algorithmic changes and for up to full HD image dimensions. Analyzing throughput and bandwidth requirements, we outline some trade-offs that are involved with this approach, compared to tighter integration of more kernel loops into one design.


                            Numerical Simulation of the Damping Behavior of Particle-Filled Hollow Spheres

                            T. Steinle, J. Vrabec, A. Walther, in: Proc. Modeling, Simulation and Optimization of Complex Processes (HPSC), Springer International Publishing, 2014, pp. 233-243

                            In light of an increasing awareness of environmental challenges, extensive research is underway to develop new light-weight materials. A problem arising with these materials is their increased response to vibration. This can be solved using a new composite material that contains embedded hollow spheres that are partially filled with particles. Progress on the adaptation of molecular dynamics towards a particle-based numerical simulation of this material is reported. This includes the treatment of specific boundary conditions and the adaption of the force computation. First results are presented that showcase the damping properties of such particle-filled spheres in a bouncing experiment.


                              On Semeai Detection in Monte-Carlo Go

                              T. Graf, L. Schaefers, M. Platzner, in: Proc. Conf. on Computers and Games (CG), Springer, 2014, pp. 14-25


                              Partitioning and Vectorizing Binary Applications for a Reconfigurable Vector Computer

                              T. Kenter, G.F. Vaz, C. Plessl, in: Proceedings of the International Symposium on Reconfigurable Computing: Architectures, Tools, and Applications (ARC), Springer International Publishing, 2014, pp. 144-155

In order to leverage the use of reconfigurable architectures in general-purpose computing, quick and automated methods to find suitable accelerator designs are required. We tackle this challenge in both regards. In order to avoid long synthesis times, we target a vector coprocessor, implemented on the FPGAs of a Convey HC-1. Previous studies showed that existing tools were not able to accelerate a real-world application with low effort. We present a toolflow to automatically identify suitable loops for vectorization, generate a corresponding hardware/software bipartition, and generate coprocessor code. Where applicable, we leverage outer-loop vectorization. We evaluate our tools with a set of characteristic loops, systematically analyzing different dependency and data layout properties.


                                Reconstructing AES Key Schedules from Decayed Memory with FPGAs

                                H. Riebler, T. Kenter, C. Plessl, C. Sorge, in: Proceedings of Field-Programmable Custom Computing Machines (FCCM), IEEE, 2014, pp. 222-229

In this paper, we study how AES key schedules can be reconstructed from decayed memory. This is a crucial and time-consuming operation when trying to break encryption systems with cold-boot attacks. In software, the reconstruction of the AES master key can be performed using a recursive, branch-and-bound tree-search algorithm that exploits redundancies in the key schedule for constraining the search space. In this work, we investigate how this branch-and-bound algorithm can be accelerated with FPGAs. We translated the recursive search procedure to a state machine with an explicit stack for each recursion level and created optimized datapaths to accelerate in particular the processing of the most frequently accessed tree levels. We support two different decay models, of which especially the more realistic non-idealized asymmetric decay model causes very high runtimes in software. Our implementation on a Maxeler dataflow computing system outperforms a software implementation for this model by up to 27x, which makes cold-boot attacks against AES practical even for high error rates.
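
The recursion-to-stack transformation mentioned above is generic; the following sketch (a toy cost function, not the AES key schedule constraints or the Maxeler datapaths) shows a recursive branch-and-bound tree search rewritten as a loop over an explicit stack, the form that maps onto a state machine with one stack entry per recursion level:

def branch_and_bound(depth, lower_bound, best_known):
    """Find the 0/1 assignment of `depth` bits minimizing `lower_bound`, pruning against the incumbent."""
    best_value, best_path = best_known, None
    stack = [((), 0)]                              # explicit stack of (partial assignment, bound)
    while stack:                                   # state machine: pop, prune, expand
        path, bound = stack.pop()
        if bound >= best_value:
            continue                               # prune: this subtree cannot beat the incumbent
        if len(path) == depth:
            best_value, best_path = bound, path    # complete assignment: new incumbent
            continue
        for bit in (0, 1):                         # expand both children of the node
            child = path + (bit,)
            stack.append((child, lower_bound(child)))
    return best_value, best_path

if __name__ == "__main__":
    target = (1, 0, 1, 1, 0, 1)
    def mismatches(path):                          # toy, monotone lower bound: prefix mismatches
        return sum(p != t for p, t in zip(path, target))
    print(branch_and_bound(len(target), mismatches, best_known=len(target) + 1))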


                                  Runtime Resource Management in Heterogeneous System Architectures: The SAVE Approach

                                  G. C. Durelli, M. Pogliani, A. Miele, C. Plessl, H. Riebler, G.F. Vaz, M. D. Santambrogio, C. Bolchini, in: Proc. Int. Symp. on Parallel and Distributed Processing with Applications (ISPA), IEEE, 2014, pp. 142-149


                                  SAVE: Towards efficient resource management in heterogeneous system architectures

                                  G. C. Durelli, M. Copolla, K. Djafarian, G. Koranaros, A. Miele, M. Paolino, O. Pell, C. Plessl, M. D. Santambrogio, C. Bolchini, in: Proc. Int. Conf. on Reconfigurable Computing: Architectures, Tools and Applications (ARC), Springer, 2014


                                  2013

                                  Distributing Storage in Cloud Environments

                                  P. Berenbrink, A. Brinkmann, T. Friedetzky, D. Meister, L. Nagel, in: Proc. Int. Symp. on Parallel and Distributed Processing Workshops (IPDPSW), IEEE, 2013


                                  File Recipe Compression in Data Deduplication Systems

                                  D. Meister, A. Brinkmann, T. Süß, in: Proc. USENIX Conference on File and Storage Technologies (FAST), USENIX Association, 2013, pp. 175-182


                                  FPGA Implementation of a Second-Order Convolutive Blind Signal Separation Algorithm

                                  S. Kasap, S. Redif, in: Proc. IEEE Signal Processing and Communications Conf. (SUI), IEEE, 2013


                                  FPGA-accelerated Key Search for Cold-Boot Attacks against AES

                                  H. Riebler, T. Kenter, C. Sorge, C. Plessl, in: Proceedings of the International Conference on Field-Programmable Technology (FPT), IEEE, 2013, pp. 386-389

Cold-boot attacks exploit the fact that DRAM contents are not immediately lost when a PC is powered off. Instead the contents decay rather slowly, in particular if the DRAM chips are cooled to low temperatures. This effect opens an attack vector on cryptographic applications that keep decrypted keys in DRAM. An attacker with access to the target computer can reboot it or remove the RAM modules and quickly copy the RAM contents to non-volatile memory. By exploiting the known cryptographic structure of the cipher and layout of the key data in memory, in our application an AES key schedule with redundancy, the resulting memory image can be searched for sections that could correspond to decayed cryptographic keys; then, the attacker can attempt to reconstruct the original key. However, the runtime of these algorithms grows rapidly with increasing memory image size, error rate and complexity of the bit error model, which limits the practicability of the approach. In this work, we study how the algorithm for key search can be accelerated with custom computing machines. We present an FPGA-based architecture on a Maxeler dataflow computing system that outperforms a software implementation by up to 205x, which significantly improves the practicability of cold-boot attacks against AES.
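
The search primitive implied above can be illustrated with an idealized asymmetric decay model, where bits only decay from 1 to 0; the sketch below (invented example bytes, not the paper's FPGA architecture or its non-idealized model) scans a memory image for windows that are consistent with a candidate value and scores them by the number of decayed bits:

def consistent(original: int, decayed: int) -> bool:
    """True if `decayed` can result from `original` when bits may only decay from 1 to 0."""
    return decayed & ~original == 0

def decay_count(original: int, decayed: int) -> int:
    """Number of bits that must have decayed to turn `original` into `decayed`."""
    return bin(original & ~decayed).count("1")

def scan(image: bytes, candidate: bytes):
    """Yield (offset, decayed-bit count) for every window consistent with `candidate`."""
    for off in range(len(image) - len(candidate) + 1):
        window = image[off:off + len(candidate)]
        if all(consistent(c, w) for c, w in zip(candidate, window)):
            yield off, sum(decay_count(c, w) for c, w in zip(candidate, window))

if __name__ == "__main__":
    candidate = bytes([0b10110110, 0b01101101])
    image = bytes([0xFF, 0b10110100, 0b01001101, 0x00, 0x00])   # decayed copy of the candidate at offset 1
    # Offset 1 needs only 2 decayed bits; the all-zero tail also matches, but with a high decay count.
    print(list(scan(image, candidate)))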


                                    MCD: Overcoming the Data Download Bottleneck in Data Centers

                                    J. Kaiser, D. Meister, V. Gottfried, A. Brinkmann, in: Proc. IEEE Int. Conf. on Networking, Architecture and Storage (NAS), IEEE Computer Society, 2013, pp. 88-97


                                    On-The-Fly Computing: A Novel Paradigm for Individualized IT Services

                                    M. Happe, P. Kling, C. Plessl, M. Platzner, F. Meyer auf der Heide, in: Proceedings of the 9th IEEE Workshop on Software Technology for Future embedded and Ubiquitous Systems (SEUS), IEEE, 2013

                                    In this paper we introduce “On-The-Fly Computing”, our vision of future IT services that will be provided by assembling modular software components available on world-wide markets. After suitable components have been found, they are automatically integrated, configured and brought to execution in an On-The-Fly Compute Center. We envision that these future compute centers will continue to leverage three current trends in large scale computing which are an increasing amount of parallel processing, a trend to use heterogeneous computing resources, and—in the light of rising energy cost—energy-efficiency as a primary goal in the design and operation of computing systems. In this paper, we point out three research challenges and our current work in these areas.


                                      Parallel Macro Pipelining on the Intel SCC Many-Core Computer

                                      T. Suess, A. Schoenrock, S. Meisner, C. Plessl, in: Proc. Int. Symp. on Parallel and Distributed Processing Workshops (IPDPSW), IEEE Computer Society, 2013, pp. 64-73


                                      2012

                                      A Data Driven Science Gateway for Computational Workflows

                                      R. Grunzke, G. Birkenheuer, D. Blunk, S. Breuers, A. Brinkmann, S. Gesing, S. Herres-Pawlis, O. Kohlbacher, J. Krüger, M. Kruse, R. Müller-Pfefferkorn, P. Schäfer, B. Schuller, T. Steinke, A. Zink, in: Proc. UNICORE Summit, 2012


                                      A Science Gateway Getting Ready for Serving the International Molecular Simulation Community

                                      S. Gesing, S. Herres-Pawlis, G. Birkenheuer, A. Brinkmann, R. Grunzke, P. Kacsuk, O. Kohlbacher, M. Kozlovszky, J. Krüger, R. Müller-Pfefferkorn, P. Schäfer, T. Steinke, in: Proceedings of Science, 2012


                                      A Study on Data Deduplication in HPC Storage Systems

                                      D. Meister, J. Kaiser, A. Brinkmann, M. Kuhn, J. Kunkel, T. Cortes, in: Proc. Int. Conf. on Supercomputing (SC), IEEE Computer Society, 2012, pp. 7:1-7:11


                                      Comparison of Bayesian Move Prediction Systems for Computer Go

                                      M. Wistuba, L. Schaefers, M. Platzner, in: Proc. IEEE Conf. on Computational Intelligence and Games (CIG), IEEE, 2012, pp. 91-99


                                      Convey Vector Personalities – FPGA Acceleration with an OpenMP-like Effort?

                                      B. Meyer, J. Schumacher, C. Plessl, J. Förstner, in: Proc. Int. Conf. on Field Programmable Logic and Applications (FPL), IEEE, 2012, pp. 189-196

Although the benefits of FPGAs for accelerating scientific codes are widely acknowledged, the use of FPGA accelerators in scientific computing is not widespread because reaping these benefits requires knowledge of hardware design methods and tools that is typically not available to domain scientists. A promising but hardly investigated approach is to develop tool flows that keep the common languages for scientific code (C, C++, and Fortran) and allow the developer to augment the source code with OpenMP-like directives for instructing the compiler which parts of the application shall be offloaded to the FPGA accelerator. In this work we study whether the promise of effective FPGA acceleration with an OpenMP-like programming effort can actually be held. Our target system is the Convey HC-1 reconfigurable computer for which an OpenMP-like programming environment exists. As case study we use an application from computational nanophotonics. Our results show that a developer without previous FPGA experience could create an FPGA-accelerated application that is competitive to an optimized OpenMP-parallelized CPU version running on a two-socket quad-core server. Finally, we discuss our experiences with this tool flow and the Convey HC-1 from a productivity and economic point of view.


                                        Design of an exact data deduplication cluster

                                        J. Kaiser, D. Meister, A. Brinkmann, S. Effert, in: Proc. Symp. on Mass Storage Systems and Technologies (MSST), IEEE, 2012, pp. 1-12


                                        Eight Ways to put your FPGA on Fire – A Systematic Study of Heat Generators

                                        M. Happe, H. Hangmann, A. Agne, C. Plessl, in: Proceedings of the International Conference on Reconfigurable Computing and FPGAs (ReConFig), IEEE, 2012, pp. 1-8

Due to the continuously shrinking device structures and increasing densities of FPGAs, thermal aspects have become the new focus for many research projects over the last years. Most researchers rely on temperature simulations to evaluate their novel thermal management techniques. However, the accuracy of the simulations is to some extent questionable and they require a high computational effort if a detailed thermal model is used. For experimental evaluation of real-world temperature management methods, often synthetic heat sources are employed. In this paper we therefore investigated whether we can create significant rises in temperature on modern FPGAs to enable future evaluation of thermal management techniques based on experiments rather than simulations. To this end, we have developed eight different heat-generating cores that use different subsets of the FPGA resources. Our experimental results show that, according to the built-in thermal diode of our Xilinx Virtex-5 FPGA, we can increase the chip temperature by 134 °C in less than 12 minutes by only utilizing about 21% of the slices.


                                          ESB: Ext2 Split Block Device

                                          J. Kaiser, D. Meister, T. Hartung, A. Brinkmann, in: Proc. IEEE Int. Conf. on Parallel and Distributed Systems (ICPADS), IEEE, 2012, pp. 181-188


                                          Exploration of Ring Oscillator Design Space for Temperature Measurements on FPGAs

                                          C. Rüthing, M. Happe, A. Agne, C. Plessl, in: Proceedings of the International Conference on Field Programmable Logic and Applications (FPL), IEEE, 2012, pp. 559-562

While numerous publications have presented ring oscillator designs for temperature measurements, a detailed study of the ring oscillator's design space is still missing. In this work, we introduce metrics for comparing the performance and area efficiency of ring oscillators and a methodology for determining these metrics. As a result, we present a systematic study of the design space for ring oscillators for a Xilinx Virtex-5 platform FPGA.


                                            FPGA implementation of a second-order convolutive blind signal separation algorithm

                                            S. Kasap, S. Redif, in: Int. Architecture and Engineering Symp. (ARCHENG), 2012


                                            FPGA-based design and implementation of an approximate polynomial matrix EVD algorithm

                                            S. Kasap, S. Redif, in: Proc. Int. Conf. on Field Programmable Technology (ICFPT), IEEE Computer Society, 2012, pp. 135-140


                                            Generic User Management for Science Gateways via Virtual Organizations

                                            T. Schlemmer, R. Grunzke, S. Gesing, J. Krüger, G. Birkenheuer, R. Müller-Pfefferkorn, O. Kohlbacher, in: Proc. EGI Technical Forum, 2012


                                            Hardware/Software Platform for Self-aware Compute Nodes

                                            M. Happe, A. Agne, C. Plessl, M. Platzner, in: Proceedings of the Workshop on Self-Awareness in Reconfigurable Computing Systems (SRCS), 2012, pp. 8-9

Today's design and operation principles and methods do not scale well with future reconfigurable computing systems due to an increased complexity in system architectures and applications, run-time dynamics and corresponding requirements. Hence, novel design and operation principles and methods are needed that possibly break drastically with the static ones we have built into our systems and the fixed abstraction layers we have cherished over the last decades. Thus, we propose a HW/SW platform that collects and maintains information about its state and progress, which enables the system to reason about its behavior (self-awareness) and utilizes its knowledge to effectively and autonomously adapt its behavior to changing requirements (self-expression). To enable self-awareness, our compute nodes collect information using a variety of sensors, i.e. performance counters and thermal diodes, and use internal self-awareness models that process this information. For self-awareness, on-line learning is crucial such that the node learns and continuously updates its models at run-time to react to changing conditions. To enable self-expression, we break with the classic design-time abstraction layers of hardware, operating system and software. In contrast, our system is able to vertically migrate functionalities between the layers at run-time to exploit trade-offs between abstraction and optimization. This paper presents a heterogeneous multi-core architecture that enables self-awareness and self-expression, an operating system for our proposed hardware/software platform and a novel self-expression method.


                                              One Phase Commit: A Low Overhead Atomic Commitment Protocol for Scalable Metadata Services

                                              G. Congiu, M. Grawinkel, S. Narasimhamurthy, A. Brinkmann, in: Proc. Workshop on Interfaces and Architectures for Scientific Data Storage (IASDS), IEEE, 2012, pp. 16-24


                                              Pragma based parallelization - Trading hardware efficiency for ease of use?

                                              T. Kenter, C. Plessl, H. Schmitz, in: Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig), IEEE, 2012, pp. 1-8

One major obstacle to widespread FPGA usage in general-purpose computing is the development tool flow that requires much higher effort than for pure software solutions. Convey Computer promises a solution to this problem for their HC-1 platform, where the FPGAs are configured to run as a vector processor and the software source code can be annotated with pragmas that guide an automated vectorization process. We investigate this approach for a stereo matching algorithm that has abundant parallelism and a number of different computational patterns. We note that for this case study the automated vectorization in its current state doesn’t hold its productivity promise. However, we also show that using the Vector Personality can yield significant speedups compared to CPU implementations in two of three investigated phases of the algorithm. Those speedups don’t match custom FPGA implementations, but can come with much reduced development effort.


                                                Programming and Scheduling Model for Supporting Heterogeneous Accelerators in Linux

                                                T. Beisel, T. Wiersema, C. Plessl, A. Brinkmann, in: Proc. Workshop on Computer Architecture and Operating System Co-design (CAOS), 2012


                                                The MoSGrid Community From National to International Scale

                                                S. Gesing, S. Herres-Pawlis, G. Birkenheuer, A. Brinkmann, R. Grunzke, P. Kacsuk, O. Kohlbacher, M. Kozlovszky, J. Krüger, R. Müller-Pfefferkorn, P. Schäfer, T. Steinke, in: Proc. EGI Community Forum, 2012


                                                Towards Dynamic Scripted pNFS Layouts

                                                M. Grawinkel, T. Süß, G. Best, I. Popov, A. Brinkmann, in: Proc. Parallel Data Storage Workshop (PDSW), IEEE, 2012, pp. 13-17


                                                Turning control flow graphs into function calls: Code generation for heterogeneous architectures

                                                P. Barrio, C. Carreras, R. Sierra, T. Kenter, C. Plessl, in: Proceedings of the International Conference on High Performance Computing and Simulation (HPCS), IEEE, 2012, pp. 559-565

Heterogeneous machines are gaining momentum in the High Performance Computing field due to their theoretical speedups and power-consumption advantages. In practice, while some applications meet the performance expectations, heterogeneous architectures still require a tremendous effort from the application developers. This work presents a code generation method to port codes to heterogeneous platforms, based on transformations of the control flow into function calls. The results show that the cost of the function-call mechanism is affordable for the tested HPC kernels. The complete toolchain, based on the LLVM compiler infrastructure, is fully automated once the sequential specification is provided.
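
The underlying transformation can be pictured with a toy dispatcher; in the sketch below (illustrative only, not the LLVM-based toolchain), each basic block becomes a function that mutates shared state and returns the name of its successor, so every control-flow edge turns into an ordinary function return and call:

def block_entry(state):
    state["i"], state["acc"] = 0, 0
    return "loop_head"

def block_loop_head(state):
    return "loop_body" if state["i"] < state["n"] else "exit"

def block_loop_body(state):
    state["acc"] += state["i"] * state["i"]
    state["i"] += 1
    return "loop_head"

BLOCKS = {"entry": block_entry, "loop_head": block_loop_head, "loop_body": block_loop_body}

def run_cfg(state, start="entry"):
    block = start
    while block != "exit":
        block = BLOCKS[block](state)   # every control-flow edge is a function return value
    return state

if __name__ == "__main__":
    print(run_cfg({"n": 10})["acc"])   # sum of squares 0..9 -> 285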


                                                  2011

                                                  A Science Gateway for Molecular Simulations

                                                  S. Gesing, P. Kacsuk, M. Kozlovszky, G. Birkenheuer, D. Blunk, S. Breuers, A. Brinkmann, G. Fels, R. Grunzke, S. Herres-Pawlis, J. Krüger, L. Packschies, R. Müller-Pfefferkorn, P. Schäfer, T. Steinke, A. Szikszay Fabri, K. Warzecha, M. Wewior, O. Kohlbacher, in: Proc. EGI User Forum, 2011, pp. 94-95


                                                  An Energy-Aware SaaS Stack

                                                  O. Niehörster, A. Keller, A. Brinkmann, in: Proc. Int. Meeting of the IEEE Int. Symp. on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), 2011

We present a multi-agent system on top of the IaaS layer consisting of a scheduler agent and multiple worker agents. Each job is controlled by an autonomous worker agent, which is equipped with application-specific knowledge (e.g., performance functions) allowing it to estimate the type and number of necessary resources. During runtime, the worker agent monitors the job and adapts its resources to ensure the specified quality of service, even in noisy clouds where the job instances are influenced by other jobs. All worker agents interact with the scheduler agent, which takes care of limited resources and performs cost-aware scheduling by assigning jobs to times with low energy costs. The whole architecture is self-optimizing and able to use public or private clouds.
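
As a toy picture of the worker agent's use of performance functions (all numbers, names and the scaling model below are invented), this sketch chooses the smallest allocation that meets a deadline and scales up when monitoring shows the job is falling behind:

def performance_function(vms: int, work_units: float = 1000.0) -> float:
    """Estimated runtime in minutes for a given VM count (imperfect scaling assumed)."""
    return work_units / (vms * 0.9)            # 90% parallel efficiency assumed

def estimate_vms(deadline_min: float, max_vms: int = 64) -> int:
    for vms in range(1, max_vms + 1):
        if performance_function(vms) <= deadline_min:
            return vms
    return max_vms

def adapt(current_vms: int, observed_progress: float, elapsed_min: float,
          deadline_min: float) -> int:
    """Scale up when the extrapolated completion time would miss the deadline."""
    projected_total = elapsed_min / max(observed_progress, 1e-9)
    return current_vms + 1 if projected_total > deadline_min else current_vms

if __name__ == "__main__":
    vms = estimate_vms(deadline_min=60.0)
    print("initial allocation:", vms, "VMs")
    # Monitoring shows only 30% done at the halfway point: the agent scales up.
    print("after adaptation:", adapt(vms, observed_progress=0.3, elapsed_min=30.0,
                                     deadline_min=60.0), "VMs")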


                                                    Autonomic Resource Management Handling Delayed Configuration Effects

                                                    O. Niehörster, A. Brinkmann, in: Proc. IEEE Int. Conf. on Cloud Computing Technology and Science (CloudCom), IEEE Computer Society, 2011, pp. 138-145


                                                    Autonomic Resource Management with Support Vector Machines

                                                    O. Niehörster, J. Simon, A. Brinkmann, A. Krieger, in: Proc. IEEE/ACM Int. Conf. on Grid Computing (GRID), IEEE Computer Society, 2011, pp. 157-164


                                                    Cooperative multitasking for heterogeneous accelerators in the Linux Completely Fair Scheduler

                                                    T. Beisel, T. Wiersema, C. Plessl, A. Brinkmann, in: Proc. Int. Conf. on Application-Specific Systems, Architectures, and Processors (ASAP), IEEE Computer Society, 2011, pp. 223-226


                                                    Estimation and Partitioning for CPU-Accelerator Architectures

                                                    T. Kenter, C. Plessl, M. Platzner, M. Kauschke, in: Intel European Research and Innovation Conference, 2011


                                                    Evaluation of Applied Intra-Disk Redundancy Schemes to Improve Single Disk Reliability

                                                    M. Grawinkel, T. Schäfer, A. Brinkmann, J. Hagemeyer, M. Porrmann, in: Proc. Int. Symp. on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), IEEE Computer Society, 2011, pp. 297-306


                                                    Granular Security for a Science Gateway in Structural Bioinformatics

                                                    S. Gesing, R. Grunzke, Balaskó, G. Birkenheuer, D. Blunk, S. Breuers, A. Brinkmann, G. Fels, S. Herres-Pawlis, P. Kacsuk, M. Kozlovszky, J. Krüger, L. Packschies, P. Schäfer, B. Schuller, J. Schuster, T. Steinke, A. Szikszay Fabri, M. Wewior, R. Müller-Pfefferkorn, O. Kohlbacher, in: Proc. Int. Workshop on Scientific Gateways (IWSG), Consorzio COMETA, 2011


                                                    Just-in-time Instruction Set Extension – Feasibility and Limitations for an FPGA-based Reconfigurable ASIP Architecture

                                                    M. Grad, C. Plessl, in: Proc. Reconfigurable Architectures Workshop (RAW), IEEE Computer Society, 2011, pp. 278-285


                                                    Lonestar: An Energy-Aware Disk Based Long-Term Archival Storage System

                                                    M. Grawinkel, M. Pargmann, H. Dömer, A. Brinkmann, in: Proc. IEEE Int. Conf. on Parallel and Distributed Systems (ICPADS), IEEE, 2011, pp. 380-387


                                                    Measuring and Predicting Temperature Distributions on FPGAs at Run-Time

                                                    M. Happe, A. Agne, C. Plessl, in: Proceedings of the 2011 International Conference on Reconfigurable Computing and FPGAs (ReConFig), IEEE, 2011, pp. 55-60

In the next decades, hybrid multi-cores will be the predominant architecture for reconfigurable FPGA-based systems. Temperature-aware thread mapping strategies are key for providing dependability in such systems. These strategies rely on measuring the temperature distribution and predicting the thermal behavior of the system when there are changes to the hardware and software running on the FPGA. While there are a number of tools that use thermal models to predict temperature distributions at design time, these tools lack the flexibility to autonomously adjust to changing FPGA configurations. To address this problem we propose a temperature-aware system that empowers FPGA-based reconfigurable multi-cores to autonomously predict the on-chip temperature distribution for pro-active thread remapping. Our system obtains temperature measurements through a self-calibrating grid of sensors and uses area-constrained heat-generating circuits in order to generate spatial and temporal temperature gradients. The generated temperature variations are then used to learn the free parameters of the system's thermal model. The system thus acquires an understanding of its own thermal characteristics. We implemented an FPGA system containing a net of 144 temperature sensors on a Xilinx Virtex-6 LX240T FPGA that is aware of its thermal model. Finally, we show that the temperature predictions vary by less than 0.72 °C on average compared to the measured temperature distributions at run-time.
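
The calibration step can be illustrated with an invented linear steady-state model, where each sensor's temperature is ambient plus a weighted sum of the heat generators' power levels and the weights are the free parameters learned from measurements; this NumPy sketch fits them with least squares (the paper's actual thermal model and sensor data differ):

import numpy as np

rng = np.random.default_rng(3)
n_heaters, n_sensors, n_runs = 4, 6, 40

true_influence = rng.uniform(0.5, 3.0, (n_sensors, n_heaters))   # degrees C per unit power (synthetic ground truth)
ambient = 35.0

# Calibration: activate heat-generating circuits at varying levels and record the sensor grid.
powers = rng.uniform(0.0, 1.0, (n_runs, n_heaters))
temps = ambient + powers @ true_influence.T + rng.normal(0.0, 0.1, (n_runs, n_sensors))

# Fit [influence | ambient] per sensor from the calibration runs using an all-ones column.
design = np.hstack([powers, np.ones((n_runs, 1))])
coeffs, *_ = np.linalg.lstsq(design, temps, rcond=None)
learned_influence, learned_ambient = coeffs[:-1].T, coeffs[-1]

# Predict the temperature distribution for a new activation pattern.
new_pattern = np.array([1.0, 0.0, 0.5, 0.25])
prediction = learned_ambient + new_pattern @ learned_influence.T
truth = ambient + new_pattern @ true_influence.T
print("max prediction error [degrees C]:", np.max(np.abs(prediction - truth)))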


                                                      MoSGrid: Progress of Workflow driven Chemical Simulations

                                                      G. Birkenheuer, D. Blunk, S. Breuers, A. Brinkmann, G. Fels, S. Gesing, R. Grunzke, S. Herres-Pawlis, O. Kohlbacher, J. Krüger, U. Lang, L. Packschies, R. Müller-Pfefferkorn, P. Schäfer, J. Schuster, T. Steinke, K. Warzecha, M. Wewior, in: Proc. of Grid Workflow Workshop (GWW), 2011


                                                      Parallel Monte-Carlo Tree Search for HPC Systems

                                                      T. Graf, U. Lorenz, M. Platzner, L. Schaefers, in: Proc. European Conf. on Parallel Processing (Euro-Par), Springer, 2011


                                                      Performance Estimation Framework for Automated Exploration of CPU-Accelerator Architectures

                                                      T. Kenter, M. Platzner, C. Plessl, M. Kauschke, in: Proc. Int. Symp. on Field-Programmable Gate Arrays (FPGA), ACM, 2011, pp. 177-180


                                                      Reliable and Randomized Data Distribution Strategies for Large Scale Storage Systems

                                                      A. Miranda, S. Effert, Y. Kang, E. Miller, A. Brinkmann, T. Cortes, in: Proc. Int. Conf. on High Performance Computing (HIPC), IEEE Computer Society, 2011, pp. 1-10


                                                      Request Load Balancing for Highly Skewed Traffic in P2P Networks

                                                      A. Brinkmann, Y. Gao, M. Korzeniowski, D. Meister, in: Proc. IEEE Int. Conf. on Networking, Architecture and Storage (NAS), IEEE, 2011, pp. 53-62


                                                      Rule Based Mapping of Virtual Machines in Clouds

                                                      C. Kleineweber, A. Keller, O. Niehörster, A. Brinkmann, in: Proc. Int. Conf. on Parallel, Distributed and Network-Based Computing (PDP), 2011

Infrastructure as a Service providers use virtualization to abstract their hardware and to create a dynamic data center. Virtualization enables the consolidation of virtual machines as well as their migration to other hosts during runtime. Each provider has its own strategy to efficiently operate a data center. We present a rule-based mapping algorithm for VMs, which is able to automatically adapt the mapping between VMs and physical hosts. It offers an interface where policies can be defined and combined in a generic way. The algorithm performs the initial mapping at request time as well as a remapping during runtime. It deals with policy and infrastructure changes. We extended the open source IaaS solution Eucalyptus and evaluated it with typical policies: maximizing the compute performance and VM locality to achieve high performance, and minimizing energy consumption. The evaluation was done on state-of-the-art servers in our own data center and by simulations using a workload of the Parallel Workload Archive. The results show that our algorithm performs well in dynamic data center environments.
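
The generic policy interface can be sketched as scoring functions over (VM, host) pairs combined by weights (all policies, weights and host data below are invented and this is not the Eucalyptus extension itself); running the mapper again on the current state corresponds to remapping:

def policy_performance(vm, host):
    return host["free_cores"] - vm["cores"]           # prefer hosts with headroom

def policy_consolidation(vm, host):
    return 1.0 if host["running_vms"] > 0 else 0.0    # prefer already-active hosts to save energy

POLICIES = [(1.0, policy_performance), (5.0, policy_consolidation)]   # (weight, rule) pairs

def feasible(vm, host):
    return host["free_cores"] >= vm["cores"]

def place(vm, hosts):
    """Map the VM to the feasible host with the highest combined policy score."""
    candidates = [h for h in hosts if feasible(vm, h)]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: sum(w * rule(vm, h) for w, rule in POLICIES))
    best["free_cores"] -= vm["cores"]
    best["running_vms"] += 1
    return best["name"]

if __name__ == "__main__":
    hosts = [{"name": "host-a", "free_cores": 16, "running_vms": 0},
             {"name": "host-b", "free_cores": 8, "running_vms": 1}]
    for cores in (4, 4, 8):
        print(f"{cores}-core VM ->", place({"cores": cores}, hosts))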


                                                        Transformation of scientific algorithms to parallel computing code: subdomain support in a MPI-multi-GPU backend

                                                        B. Meyer, C. Plessl, J. Förstner, in: Symp. on Application Accelerators in High Performance Computing (SAAHPC), IEEE Computer Society, 2011, pp. 60-63


                                                        2010

                                                        An Open Source Circuit Library with Benchmarking Facilities

                                                        M. Grad, C. Plessl, in: Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA), CSREA Press, 2010, pp. 144-150


                                                        Balls into Bins with Related Random Choices

                                                        P. Berenbrink, A. Brinkmann, T. Friedetzky, L. Nagel, in: Proc. Int. Symp. on Parallelism in Algorithms and Architectures (SPAA), ACM, 2010, pp. 100-105


                                                        Balls into Non-uniform Bins

                                                        P. Berenbrink, A. Brinkmann, T. Friedetzky, L. Nagel, in: Proc. Int. Symp. on Parallel and Distributed Processing (IPDPS), IEEE, 2010, pp. 1-10


                                                        Configurable Processor Architectures: History and Trends

                                                        D. Andrews, C. Plessl, in: Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA), CSREA Press, 2010, pp. 165


                                                        dedupv1: Improving Deduplication Throughput using Solid State Drives (SSD)

                                                        D. Meister, A. Brinkmann, in: Proc. Symp. on Mass Storage Systems and Technologies (MSST), IEEE Computer Society, 2010, pp. 1-6


                                                        Enforcing SLAs in Scientific Clouds

                                                        O. Niehörster, A. Brinkmann, G. Fels, J. Krüger, J. Simon, in: Proc. Int. Conf. on Cluster Computing (CLUSTER), IEEE, 2010, pp. 178-187


                                                        Grid-Workflows in Molecular Science

                                                        G. Birkenheuer, S. Breuers, A. Brinkmann, D. Blunk, G. Fels, S. Gesing, S. Herres-Pawlis, O. Kohlbacher, J. Krüger, L. Packschies, in: Proc. of Grid Workflow Workshop (GWW), Gesellschaft für Informatik (GI), 2010, pp. 177-184


                                                        hashFS: Applying Hashing to Optimized File Systems for Small File Reads

                                                        P.H. Lensing, D. Meister, A. Brinkmann, in: Proc. Int. Worksh. on Storage Network Architecture and Parallel I/Os (SNAPI), IEEE, 2010, pp. 33-42


                                                        Non-intrusive Virtualization Management Using libvirt

                                                        M. Bolte, M. Sievers, G. Birkenheuer, O. Niehörster, A. Brinkmann, in: Proc. Design, Automation and Test in Europe Conf. (DATE), EDA Consortium, 2010


                                                        Performance Estimation for the Exploration of CPU-Accelerator Architectures

                                                        T. Kenter, M. Platzner, C. Plessl, M. Kauschke, in: Proc. Workshop on Architectural Research Prototyping (WARP), International Symposium on Computer Architecture (ISCA), 2010


                                                        Pruning the Design Space for Just-In-Time Processor Customization

                                                        M. Grad, C. Plessl, in: Proc. Int. Conf. on ReConFigurable Computing and FPGAs (ReConFig), IEEE Computer Society, 2010, pp. 67-72


                                                        Reconfigurable Nodes for Future Networks

                                                        A. Keller, B. Plattner, E. Lübbers, M. Platzner, C. Plessl, in: Proc. IEEE Globecom Workshop on Network of the Future (FutureNet), IEEE, 2010, pp. 372-376


                                                        Reliability Analysis of Declustered-Parity RAID 6 with Disk Scrubbing and Considering Irrecoverable Read Errors

                                                        Y. Gao, D. Meister, A. Brinkmann, in: Proc. IEEE Int. Conf. on Networking, Architecture and Storage (NAS), IEEE, 2010, pp. 126-134


                                                        Risk Aware Overbooking for Commercial Grids

                                                        G. Birkenheuer, A. Brinkmann, H. Karl, in: Job Scheduling Strategies for Parallel Processing - 15th International Workshop, JSSPP 2010, Atlanta, GA, USA, April 23, 2010, Revised Selected Papers, 2010, pp. 51-76


                                                        Rupeas: Ruby Powered Event Analysis DSL

                                                        M. Woehrle, C. Plessl, L. Thiele, in: Proc. Int. Conf. Networked Sensing Systems (INSS), IEEE, 2010, pp. 245-248


                                                        SkewCCC+: A Heterogeneous Distributed Hash Table

                                                        M. Bienkowski, A. Brinkmann, M. Klonowski, M. Korzeniowski, in: Proceedings of the 14th International Conference On Principles Of Distributed Systems (Opodis), Springer, 2010


                                                        The MoSGrid Gaussian Portlet - Technologies for the Implementation of Portlets for Molecular Simulations

                                                        M. Wewior, L. Packschies, D. Blunk, D. Wickeroth, K. Warzecha, S. Herres-Pawlis, S. Gesing, S. Breuers, J. Krüger, G. Birkenheuer, U. Lang, in: Proc. Int. Workshop on Scientific Gateways (IWSG), Consorzio COMETA, 2010, pp. 39-43


                                                        Towards Adaptive Networking for Embedded Devices based on Reconfigurable Hardware

                                                        E. Lübbers, M. Platzner, C. Plessl, A. Keller, B. Plattner, in: Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA), CSREA Press, 2010, pp. 225-231


                                                        Using Shared Library Interposing for Transparent Acceleration in Systems with Heterogeneous Hardware Accelerators

                                                        T. Beisel, M. Niekamp, C. Plessl, in: Proc. Int. Conf. on Application-Specific Systems, Architectures, and Processors (ASAP), IEEE Computer Society, 2010, pp. 65-72


                                                        Workflow Interoperability in a Grid Portal for Molecular Simulations

                                                        S. Gesing, I. Marton, G. Birkenheuer, B. Schuller, R. Grunzke, J. Krüger, S. Breuers, D. Blunk, G. Fels, L. Packschies, A. Brinkmann, O. Kohlbacher, M. Kozlovszky, in: Proc. Int. Workshop on Scientific Gateways (IWSG), Consorzio COMETA, 2010, pp. 44-48


                                                        2009

                                                        An Accelerator for k-th Nearest Neighbor Thinning Based on the IMORC Infrastructure

                                                        T. Schumacher, C. Plessl, M. Platzner, in: Proc. Int. Conf. on Field Programmable Logic and Applications (FPL), IEEE, 2009, pp. 338-344


                                                        An Orchestration as a Service Infrastructure using Grid Technologies and WS-BPEL

                                                        A. Höing, G. Scherp, S. Gudenkauf, D. Meister, A. Brinkmann, in: Proc. Int. Conf. on Service Oriented Computing (ICSOC), Springer, 2009, pp. 301-315


                                                        Communication Performance Characterization for Reconfigurable Accelerator Design on the XD1000

                                                        T. Schumacher, T. Süß, C. Plessl, M. Platzner, in: Proc. Int. Conf. on ReConFigurable Computing and FPGAs (ReConFig), IEEE Computer Society, 2009, pp. 119-124


                                                        Connecting Communities on the Meta-Scheduling Level: The DGSI Approach!

                                                        G. Birkenheuer, A. Carlson, A. Fölling, M. Högqvist, A. Hoheisel, A. Papaspyrou, K. Rieger, B. Schott, W. Ziegler, in: Proc. Cracow Grid Workshop (CGW), 2009, pp. 96-103


                                                        EvoCaches: Application-specific Adaptation of Cache Mapping

                                                        P. Kaufmann, C. Plessl, M. Platzner, in: Proc. NASA/ESA Conference on Adaptive Hardware and Systems (AHS), IEEE Computer Society, 2009, pp. 11-18

In this work we present EvoCache, a novel approach for implementing application-specific caches. The key innovation of EvoCache is to make the function that maps memory addresses from the CPU address space to cache indices programmable. We support arbitrary Boolean mapping functions that are implemented within a small reconfigurable logic fabric. For finding suitable cache mapping functions we rely on techniques from the evolvable hardware domain and utilize an evolutionary optimization procedure. We evaluate the use of EvoCache in an embedded processor for two specific applications (JPEG and BZIP2 compression) with respect to execution time, cache miss rate and energy consumption. We show that the evolvable hardware approach for optimizing the cache functions not only significantly improves the cache performance for the training data used during optimization, but that the evolved mapping functions also generalize very well. Compared to a conventional cache architecture, EvoCache applied to test data achieves a reduction in execution time of up to 14.31% for JPEG (10.98% for BZIP2) and a reduction in energy consumption of 16.43% for JPEG (10.70% for BZIP2). We also discuss the integration of EvoCache into the operating system and show that the area and delay overheads introduced by EvoCache are acceptable.
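
To make the idea of evolving an address-to-index mapping concrete, here is a deliberately simplified sketch. It is not the EvoCache implementation: instead of arbitrary Boolean functions in a reconfigurable fabric it restricts each index bit to an XOR of selected address bits, and it uses a plain (1+1) evolution strategy that minimizes the miss rate of a single-word, direct-mapped cache over an address trace. All names, parameters, and the example trace are assumptions.

```python
# Hypothetical, simplified sketch of evolving a cache index mapping.
import random

ADDR_BITS, INDEX_BITS = 32, 6          # 64-line, single-word, direct-mapped cache

def random_mapping():
    # One address-bit selection mask per cache index bit.
    return [random.getrandbits(ADDR_BITS) for _ in range(INDEX_BITS)]

def cache_index(mapping, addr):
    # Index bit i is the parity (XOR) of the address bits selected by mapping[i].
    return sum((bin(addr & mask).count("1") & 1) << i
               for i, mask in enumerate(mapping))

def miss_rate(mapping, trace):
    lines = [None] * (1 << INDEX_BITS)  # the full address serves as the tag here
    misses = 0
    for addr in trace:
        idx = cache_index(mapping, addr)
        if lines[idx] != addr:
            misses += 1
            lines[idx] = addr
    return misses / len(trace)

def evolve(trace, generations=200):
    # (1+1) evolution strategy: flip one selected address bit, keep the child
    # if its miss rate is at least as good as the parent's.
    best = random_mapping()
    best_fit = miss_rate(best, trace)
    for _ in range(generations):
        child = list(best)
        child[random.randrange(INDEX_BITS)] ^= 1 << random.randrange(ADDR_BITS)
        fit = miss_rate(child, trace)
        if fit <= best_fit:
            best, best_fit = child, fit
    return best, best_fit

# Example: a strided pattern that a conventional low-bit index handles badly;
# the search may find a mapping that spreads these addresses over the sets.
trace = [i * 4096 for i in range(64)] * 4
print(evolve(trace)[1])
```

A real fitness evaluation would use traces from the target application, and the restriction to XORs of address bits is only to keep the sketch short; the abstract's arbitrary Boolean mapping functions span a much larger search space.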


