Data Center Building O

HARP Cluster

After a beta testing phase with selected participants, the HARP cluster was opened on August 18, 2017 to a broad community of researchers within the Intel Hardware Accelerator Research Program (HARP). At peak times, the HARP cluster had several hundred registered users and executed tens of thousands of jobs per month. After several years of operation, the service has been terminated.

Technical description

  • System: Xeon+FPGA
  • Processors: 14-core Broadwell CPU
  • FPGAs: Arria 10 GX 1150
  • Main memory: 64 GiB per node, shared memory hierarchy between CPU and FPGA
  • Partitions (one per development stack):
      • OpenCL: 4 nodes
      • Intel AAL: 3 nodes
      • Intel OPAE: 3 nodes
  • Workload manager: Slurm

Selected Publications using the HARP Cluster



Transparent Control Flow Transfer between CPU and Accelerators for HPC

D. Granhão, J.C. Canas Ferreira, Electronics (2021), 406

Heterogeneous platforms with FPGAs have started to be employed in the High-Performance Computing (HPC) field to improve performance and overall efficiency. These platforms allow the use of specialized hardware to accelerate software applications, but require the software to be adapted in what can be a prolonged and complex process. The main goal of this work is to describe and evaluate mechanisms that can transparently transfer the control flow between CPU and FPGA within the scope of HPC. Combining such a mechanism with transparent software profiling and accelerator configuration could lead to an automatic way of accelerating regular applications. In this work, a mechanism based on the ptrace system call is proposed, and its performance on the Intel Xeon+FPGA platform is evaluated. The feasibility of the proposed approach is demonstrated by a working prototype that performs the transparent control flow transfer of any function call to a matching hardware accelerator. This approach is more general than shared library interposition at the cost of a small time overhead in each accelerator use (about 1.3 ms in the prototype implementation).
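
For readers unfamiliar with the underlying mechanism, the following minimal C++ sketch (an illustration of the generic ptrace workflow, not code from the paper) shows how a parent process traces a child, waits for it to stop, and inspects its registers on Linux/x86-64. An actual control-flow transfer would additionally rewrite the register and stack state to redirect the intercepted call to an accelerator proxy; that step is omitted here.

    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        pid_t child = fork();
        if (child == 0) {
            // Child: ask to be traced, then replace itself with a program to observe.
            ptrace(PTRACE_TRACEME, 0, nullptr, nullptr);
            execlp("ls", "ls", (char *)nullptr);
            _exit(1);                               // only reached if exec fails
        }
        // Parent: the child stops after execve; read its register file.
        int status = 0;
        waitpid(child, &status, 0);
        user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, nullptr, &regs);
        std::printf("child stopped, rip = 0x%llx\n", (unsigned long long)regs.rip);
        // A transparent offload mechanism would rewrite regs here before resuming.
        ptrace(PTRACE_CONT, child, nullptr, nullptr);
        waitpid(child, &status, 0);
        return 0;
    }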


OpenCL-based FPGA Accelerator for Semi-Global Approximate String Matching Using Diagonal Bit-Vectors

D. Castells-Rufas, S. Marco-Sola, Q. Aguado-Puig, A. Espinosa-Morales, J.C. Moure, L. Alvarez, M. Moreto, in: 2021 31st International Conference on Field-Programmable Logic and Applications (FPL), IEEE, 2021

An FPGA accelerator for the computation of the semi-global Levenshtein distance between a pattern and a reference text is presented. The accelerator provides an important benefit to reduce the execution time of read-mappers used in short-read genomic sequencing. Previous attempts to solve the same problem in FPGA use the Myers algorithm following a column approach to compute the dynamic programming table. We use an approach based on diagonals that allows for some resource savings while maintaining a very high throughput of 1 alignment per clock cycle. The design is implemented in OpenCL and tested on two FPGA accelerators. The maximum performance obtained is 91.5 MPairs/s for 100 × 120 sequences and 47 MPairs/s for 300 × 360 sequences, the highest ever reported for this problem.
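
As background for the approach described above, the sketch below gives the classic column-wise Myers bit-vector recurrence for semi-global (pattern-search) Levenshtein distance on a CPU, for patterns of up to 64 characters. It is only a functional reference; the paper's contribution is a diagonal formulation of this recurrence suited to FPGA pipelining, which is not reproduced here.

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <string>

    // Minimum edit distance of `pattern` against any substring of `text`
    // (semi-global alignment: gaps at the ends of the text are free).
    int myers_semiglobal(const std::string& pattern, const std::string& text) {
        const int m = static_cast<int>(pattern.size());
        if (m == 0 || m > 64) return -1;                    // single-word sketch only

        uint64_t Peq[256] = {0};                            // per-character match masks
        for (int i = 0; i < m; ++i)
            Peq[static_cast<unsigned char>(pattern[i])] |= 1ULL << i;

        uint64_t Pv = ~0ULL, Mv = 0;                        // vertical +1 / -1 delta vectors
        int score = m, best = m;
        const uint64_t high = 1ULL << (m - 1);

        for (unsigned char c : text) {
            uint64_t Eq = Peq[c];
            uint64_t Xv = Eq | Mv;
            uint64_t Xh = (((Eq & Pv) + Pv) ^ Pv) | Eq;
            uint64_t Ph = Mv | ~(Xh | Pv);
            uint64_t Mh = Pv & Xh;
            if (Ph & high) ++score;                         // track the last DP row
            else if (Mh & high) --score;
            Ph <<= 1;
            Mh <<= 1;
            Pv = Mh | ~(Xv | Ph);
            Mv = Ph & Xv;
            best = std::min(best, score);
        }
        return best;
    }

    int main() {
        std::cout << myers_semiglobal("ACGT", "TTACGTTT") << "\n";   // prints 0
    }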


Exploration of FPGA-Based Hardware Designs for QR Decomposition for Solving Stiff ODE Numerical Methods Using the HARP Hybrid Architecture

C. Alberto Oliveira de Souza Junior, J. Bispo, J.M.P. Cardoso, P.C. Diniz, E. Marques, Electronics (2020), 843

In this article, we focus on the acceleration of a chemical reaction simulation that relies on a system of stiff ordinary differential equations (ODEs), targeting heterogeneous computing systems with CPUs and field-programmable gate arrays (FPGAs). Specifically, we target an essential kernel of the coupled chemistry aerosol-tracer transport model to the Brazilian developments on the regional atmospheric modeling system (CCATT-BRAMS). We focus on a linear solve step using the QR factorization based on the modified Gram-Schmidt method as the basis of the ODE solver in this application. We target the Intel hardware accelerator research program (HARP) architecture with the OpenCL programming environment for these early experiments. Our design exploration reveals a hardware design that is up to 4 times faster than the original iterative Jacobi method used in this solver. Still, even with hardware support, the overall performance of our QR-based hardware is lower than its original software version.
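
The linear-algebra kernel at the core of this work, QR factorization by modified Gram-Schmidt, is summarized by the short double-precision C++ reference below. This is a generic textbook formulation (assuming A has full column rank), not the authors' OpenCL/FPGA design.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Modified Gram-Schmidt QR of an m x n matrix A (column-major): A = Q R.
    // On return, A holds Q and R holds the n x n upper-triangular factor.
    void mgs_qr(int m, int n, std::vector<double>& A, std::vector<double>& R) {
        R.assign(static_cast<size_t>(n) * n, 0.0);
        for (int k = 0; k < n; ++k) {
            double nrm = 0.0;
            for (int i = 0; i < m; ++i) nrm += A[k * m + i] * A[k * m + i];
            nrm = std::sqrt(nrm);
            R[k * n + k] = nrm;
            for (int i = 0; i < m; ++i) A[k * m + i] /= nrm;       // q_k = a_k / r_kk
            for (int j = k + 1; j < n; ++j) {                      // orthogonalize the rest
                double dot = 0.0;
                for (int i = 0; i < m; ++i) dot += A[k * m + i] * A[j * m + i];
                R[j * n + k] = dot;                                // r_kj
                for (int i = 0; i < m; ++i) A[j * m + i] -= dot * A[k * m + i];
            }
        }
    }

    int main() {
        // 3x2 example with columns (1,1,0) and (1,0,1).
        std::vector<double> A = {1, 1, 0, 1, 0, 1}, R;
        mgs_qr(3, 2, A, R);
        std::printf("R = [%.3f %.3f; 0 %.3f]\n", R[0], R[2], R[3]);
    }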


Combining Multiple Optimized FPGA-based Pulsar Search Modules Using OpenCL

H. Wang, P. Thiagaraj, O. Sinnen, Journal of Astronomical Instrumentation (2019), 1950008

Field-Programmable Gate Arrays (FPGAs) are widely used in the central signal processing design of the Square Kilometer Array (SKA) as hardware accelerators. The frequency domain acceleration search (FDAS) module is an important part of the SKA1-MID pulsar search engine. To develop for hardware that is yet to be finalized, for cross-discipline interoperability, and to achieve fast prototyping, OpenCL is employed as a high-level FPGA synthesis approach to create the sub-modules of FDAS. The FT convolution and the harmonic-summing sub-modules, plus some other minor sub-modules, are elements of the FDAS module that have previously been well optimized separately. In this paper, we explore the design space of combining well-optimized designs, dealing with the ensuing need to trade off and compromise. Pipeline computing is employed to handle multiple input arrays at high speed. The hardware target is to employ multiple high-end FPGAs to process the combined FDAS module. The results show interesting consequences, where the best individual solutions are not necessarily the best solutions for the speed of a pipeline in which FPGA resources and memory bandwidth need to be shared. By applying multiple buffering techniques to the pipeline, the combined FDAS module can achieve up to 2× speedup over implementations without pipeline computing. We perform an extensive experimental evaluation on multiple high-end FPGA cards hosted in a workstation and compare to a technology-comparable mid-range GPU.


Mapping a Guided Image Filter on the HARP Reconfigurable Architecture Using OpenCL

T. Faict, E.H. D’Hollander, B. Goossens, Algorithms (2019), 149

Intel recently introduced the Heterogeneous Architecture Research Platform, HARP. In this platform, the Central Processing Unit and a Field-Programmable Gate Array are connected through a high-bandwidth, low-latency interconnect and both share DRAM memory. For this platform, Open Computing Language (OpenCL), a High-Level Synthesis (HLS) language, is made available. By making use of HLS, a faster design cycle can be achieved compared to programming in a traditional hardware description language. This, however, comes at the cost of having less control over the hardware implementation. We will investigate how OpenCL can be applied to implement a real-time guided image filter on the HARP platform. In the first phase, the performance-critical parameters of the OpenCL programming model are defined using several specialized benchmarks. In a second phase, the guided image filter algorithm is implemented using the insights gained in the first phase. Both a floating-point and a fixed-point implementation were developed for this algorithm, based on a sliding window implementation. This resulted in a maximum floating-point performance of 135 GFLOPS, a maximum fixed-point performance of 430 GOPS and a throughput of HD color images at 74 frames per second.
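
For orientation, the guided image filter itself (He et al.) can be written down in a few lines; the C++ sketch below uses a naive box filter over a grayscale image and is only a plain reference, not the sliding-window OpenCL/FPGA design evaluated in the paper.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    using Image = std::vector<float>;   // row-major, size w*h

    // Mean over a (2r+1)x(2r+1) window, with replication at the image borders.
    static Image box_mean(const Image& src, int w, int h, int r) {
        Image dst(src.size());
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float sum = 0.0f; int cnt = 0;
                for (int dy = -r; dy <= r; ++dy)
                    for (int dx = -r; dx <= r; ++dx) {
                        int yy = std::clamp(y + dy, 0, h - 1);
                        int xx = std::clamp(x + dx, 0, w - 1);
                        sum += src[yy * w + xx]; ++cnt;
                    }
                dst[y * w + x] = sum / cnt;
            }
        return dst;
    }

    // q = mean(a)*I + mean(b), with a = cov(I,p)/(var(I)+eps), b = mean(p) - a*mean(I).
    Image guided_filter(const Image& I, const Image& p, int w, int h, int r, float eps) {
        Image Ip(I.size()), II(I.size());
        for (size_t i = 0; i < I.size(); ++i) { Ip[i] = I[i] * p[i]; II[i] = I[i] * I[i]; }
        Image mI = box_mean(I, w, h, r), mp = box_mean(p, w, h, r);
        Image mIp = box_mean(Ip, w, h, r), mII = box_mean(II, w, h, r);
        Image a(I.size()), b(I.size());
        for (size_t i = 0; i < I.size(); ++i) {
            float var = mII[i] - mI[i] * mI[i];
            float cov = mIp[i] - mI[i] * mp[i];
            a[i] = cov / (var + eps);
            b[i] = mp[i] - a[i] * mI[i];
        }
        Image ma = box_mean(a, w, h, r), mb = box_mean(b, w, h, r), q(I.size());
        for (size_t i = 0; i < I.size(); ++i) q[i] = ma[i] * I[i] + mb[i];
        return q;
    }

    int main() {
        const int w = 4, h = 4;
        Image I(w * h, 0.5f), p = I;
        Image q = guided_filter(I, p, w, h, 1, 1e-3f);
        std::printf("q[0] = %f\n", q[0]);   // a constant image stays (nearly) constant
    }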


Parallel multiprocessing and scheduling on the heterogeneous Xeon+FPGA platform

A. Rodríguez, A. Navarro, R. Asenjo, F. Corbera, R. Gran, D. Suárez, J. Nunez-Yanez, The Journal of Supercomputing (2019)

Heterogeneous computing that exploits simultaneous co-processing with different device types has been shown to be effective at both increasing performance and reducing energy consumption. In this paper, we extend a scheduling framework encapsulated in a high-level C++ template, previously developed for heterogeneous chips comprising CPU and GPU cores, to new high-performance platforms for the data center that include a cache-coherent FPGA fabric and many-core CPU resources. Our goal is to evaluate the suitability of our framework for these new FPGA-based platforms, identifying performance benefits and limitations. We target the state-of-the-art HARP processor, which includes 14 high-end Xeon-class cores tightly coupled to an FPGA device located in the same package. We select eight benchmarks from the high-performance computing domain that have been ported and optimized for this heterogeneous platform. The results show that a dynamic and adaptive scheduler that exploits simultaneous processing among the devices can improve performance by up to a factor of 8× compared to the best alternative solutions that only use the CPU cores or the FPGA fabric. Moreover, our proposal achieves up to 15% and 37% improvement compared to the best heterogeneous solutions found with dynamic and static schedulers, respectively.
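
The core scheduling idea can be illustrated with a toy C++ program in which two workers dynamically claim chunks of an iteration space from a shared atomic counter, so the faster device naturally ends up processing more work. This is only an illustration of dynamic chunk scheduling; the authors' template framework, its API, and the actual FPGA dispatch path are not reproduced here (the second worker is just another CPU thread).

    #include <algorithm>
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        const int n = 1'000'000, chunk = 10'000;
        std::vector<float> data(n, 1.0f);
        std::atomic<int> next{0};

        auto worker = [&](const char* name) {
            int taken = 0;
            for (;;) {
                int begin = next.fetch_add(chunk);                 // claim the next chunk
                if (begin >= n) break;
                int end = std::min(begin + chunk, n);
                for (int i = begin; i < end; ++i) data[i] *= 2.0f; // the "kernel"
                taken += end - begin;
            }
            std::printf("%s processed %d iterations\n", name, taken);
        };

        std::thread cpu(worker, "CPU worker");
        std::thread fpga(worker, "simulated FPGA worker");
        cpu.join();
        fpga.join();
        return 0;
    }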


Energy Efficient Parallel K-Means Clustering for an Intel® Hybrid Multi-Chip Package

M.A. Souza, L.A. Maciel, P.H. Penna, H.C. Freitas, in: 2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), 2019

FPGA devices have been proving to be good candidates to accelerate applications from different research topics. For instance, machine learning applications such as K-Means clustering usually rely on large amounts of data to be processed, and, despite the performance offered by other architectures, FPGAs can offer better energy efficiency. With that in mind, Intel has launched a platform that integrates a multicore and an FPGA in the same package, enabling low-latency and coherent fine-grained data offload. In this paper, we present a parallel implementation of the K-Means clustering algorithm for this novel platform, using the OpenCL language, and compare it against other platforms. We found that the CPU+FPGA platform was more energy efficient than the CPU-only approach by 70.71% to 85.92%, with the Standard and Tiny input sizes respectively, and a performance improvement of up to 68.21% was obtained with the Tiny input size. Furthermore, it was up to 7.2× more energy efficient than an Intel® Xeon Phi™, 21.5× more than a cluster of Raspberry Pi boards, and 3.8× more than the low-power MPPA-256 architecture when the Standard input size was used.
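
As a reference for the clustering kernel being accelerated, a plain C++ version of Lloyd's K-Means algorithm is sketched below; it mirrors the assignment and update steps but none of the OpenCL/FPGA specifics or the energy measurements reported in the paper.

    #include <cstdio>
    #include <limits>
    #include <vector>

    struct Point { float x, y; };

    // Lloyd's algorithm: alternate nearest-centroid assignment and centroid update.
    void kmeans(const std::vector<Point>& pts, std::vector<Point>& centroids,
                std::vector<int>& label, int iters) {
        const int k = static_cast<int>(centroids.size());
        label.assign(pts.size(), 0);
        for (int it = 0; it < iters; ++it) {
            for (size_t i = 0; i < pts.size(); ++i) {              // assignment step
                float best = std::numeric_limits<float>::max();
                for (int c = 0; c < k; ++c) {
                    float dx = pts[i].x - centroids[c].x, dy = pts[i].y - centroids[c].y;
                    float d = dx * dx + dy * dy;
                    if (d < best) { best = d; label[i] = c; }
                }
            }
            std::vector<Point> sum(k, Point{0.0f, 0.0f});          // update step
            std::vector<int> cnt(k, 0);
            for (size_t i = 0; i < pts.size(); ++i) {
                sum[label[i]].x += pts[i].x; sum[label[i]].y += pts[i].y; ++cnt[label[i]];
            }
            for (int c = 0; c < k; ++c)
                if (cnt[c] > 0) centroids[c] = {sum[c].x / cnt[c], sum[c].y / cnt[c]};
        }
    }

    int main() {
        std::vector<Point> pts = {{0, 0}, {0, 1}, {10, 10}, {10, 11}};
        std::vector<Point> centroids = {{1, 1}, {9, 9}};
        std::vector<int> label;
        kmeans(pts, centroids, label, 10);
        std::printf("centroid 0 = (%.1f, %.1f)\n", centroids[0].x, centroids[0].y);
    }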


Automatic Mapping of the Sum-Product Network Inference Problem to FPGA-Based Accelerators

L. Sommer, J. Oppermann, A. Molina, C. Binnig, K. Kersting, A. Koch, in: 2018 IEEE 36th International Conference on Computer Design (ICCD), 2019

In recent years, FPGAs have been successfully employed for the implementation of efficient, application-specific accelerators for a wide range of machine learning tasks. In this work, we consider probabilistic models, namely (Mixed) Sum-Product Networks (SPNs), a deep architecture that can provide tractable inference for multivariate distributions over mixed data sources. We develop a fully pipelined FPGA accelerator architecture, including a pipelined interface to external memory, for inference in (mixed) SPNs. To meet the precision constraints of SPNs, all computations are conducted using double-precision floating-point arithmetic. Starting from an input description, the custom FPGA accelerator is synthesized fully automatically by our tool flow. To the best of our knowledge, this work is the first approach to offload the SPN inference problem to FPGA-based accelerators. Our evaluation shows that the SPN inference problem benefits from offloading to our pipelined FPGA accelerator architecture.


Constructing Concurrent Data Structures on FPGA with Channels

H. Yan, Z. Li, L. Liu, S. Yin, S. Wei, in: Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2019

The performance of High-Level Synthesis (HLS) applications with irregular data structures is limited by the imperative programming paradigm of C/C++. In this paper, we show that constructing concurrent data structures with channels, a programming construct derived from the CSP (communicating sequential processes) paradigm, is an effective approach to improve the performance of these applications. We evaluate concurrent data structures for FPGA by synthesizing a K-means clustering algorithm on the Intel HARP2 platform. A fully pipelined KMC processing element can be synthesized from OpenCL with the help of a SPSC (single-producer-single-consumer) queue and stack built from channels, achieving 15.2x speedup over a sequential baseline. The number of processing elements can be scaled up by leveraging a MPMC (multiple-producer-multiple-consumer) stack with work distribution for dynamic load balancing. Evaluation shows that an additional 3.5x speedup can be achieved when 4 processing elements are instantiated. These results show that concurrent data structures built with channels have great potential for improving the parallelism of HLS applications. We hope that our study will stimulate further research into the potential of channel-based HLS.
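
A CPU-side analogue of the channel-based SPSC queue described above is the classic lock-free single-producer/single-consumer ring buffer; the C++ sketch below is a generic textbook construction given only for orientation, not the authors' OpenCL code.

    #include <atomic>
    #include <cstddef>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Lock-free SPSC ring buffer: one producer thread calls push, one consumer pops.
    template <typename T>
    class SpscQueue {
    public:
        explicit SpscQueue(size_t capacity) : buf_(capacity + 1) {}

        bool push(const T& v) {
            size_t head = head_.load(std::memory_order_relaxed);
            size_t next = (head + 1) % buf_.size();
            if (next == tail_.load(std::memory_order_acquire)) return false;   // full
            buf_[head] = v;
            head_.store(next, std::memory_order_release);
            return true;
        }

        bool pop(T& v) {
            size_t tail = tail_.load(std::memory_order_relaxed);
            if (tail == head_.load(std::memory_order_acquire)) return false;   // empty
            v = buf_[tail];
            tail_.store((tail + 1) % buf_.size(), std::memory_order_release);
            return true;
        }

    private:
        std::vector<T> buf_;
        std::atomic<size_t> head_{0}, tail_{0};
    };

    int main() {
        SpscQueue<int> q(1024);
        std::thread producer([&] { for (int i = 0; i < 100000; ++i) while (!q.push(i)) {} });
        long long sum = 0;
        for (int received = 0; received < 100000;) {
            int v;
            if (q.pop(v)) { sum += v; ++received; }
        }
        producer.join();
        std::printf("sum = %lld\n", sum);
    }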


FPGA-Accelerated Optimistic Concurrency Control for Transactional Memory

Z. Li, L. Liu, Y. Deng, J. Wang, Z. Liu, S. Yin, S. Wei, in: Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019

Transactional Memory (TM) has been considered as a promising alternative to existing synchronization operations, which are often the largest stumbling block to unleashing parallelism of applications. Efficient implementations of TM, however, are challenging due to the tension between lowering performance overhead and avoiding unnecessary aborts. In this paper, we present Reachability-based Optimistic Concurrency Control for Transactional Memory (ROCoCoTM), a novel scheme which offloads concurrency control (CC) algorithms, the central building blocks of TM systems, to reconfigurable hardware. To reduce the abort rate, an innovative formalization of mainstream CC algorithms is developed to reveal a common restriction that leads to unnecessary aborts. This restriction is resolved by the ROCoCo algorithm with a centralized validation phase, which can be efficiently pipelined in hardware. Thanks to a high-performance offloading engine implemented in reconfigurable hardware, the ROCoCo algorithm results in decreased abort rates and reduced performance overhead. The whole system is implemented on Intel's HARP2 platform and evaluated with the STAMP benchmark suite. Experiments show 1.55x and 8.05x geomean speedup over TinySTM and an HTM based on Intel TSX, respectively. Given the fast-growing deployment of commodity CPU-FPGA platforms, ROCoCoTM paves the way for software programmers to exploit heterogeneous computing resources with a high-level transactional abstraction to effectively extract the parallelism in modern applications.


Breaking the Synchronization Bottleneck with Reconfigurable Transactional Execution

Z. Li, L. Liu, Y. Deng, S. Yin, S. Wei, IEEE Computer Architecture Letters (2018), pp. 147-150

The advent of FPGA-based hybrid architectures offers the opportunity of customizing memory subsystems to enhance overall system performance. However, it is not straightforward to design efficient FPGA circuits for emerging FPGA applications such as in-memory databases and graph analytics, which depend heavily on concurrent data structures (CDSs). The highly dynamic behavior of CDSs has to be orchestrated by synchronization primitives for correct execution, and these primitives induce overwhelming memory traffic for synchronization on FPGAs. This paper proposes a novel method for systematically exploring and exploiting the memory-level parallelism (MLP) of CDSs by transactional execution on FPGAs. Inspired by the idea that the semantics of transactions can be implemented in a more efficient and scalable manner on FPGAs than on CPUs, we propose a transaction-based reconfigurable runtime system for capturing the MLP of CDSs. Experiments on linked lists and skip lists show that our approach achieves 5.18x and 1.55x throughput improvement on average over lock-based FPGA implementations and over optimized CDS algorithms on a state-of-the-art multi-core CPU, respectively.


Automatic Offloading of Cluster Accelerators

C. Ceissler, R. Nepomuceno, M. Pereira, G. Araujo, in: 2018 IEEE 26th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2018

The sheer amount of computing resources required to run modern cloud workloads has put a lot of pressure on the design of power-efficient cluster nodes. To address this problem, Intel (HARP) and Microsoft (Catapult) have proposed CPU-FPGA integrated architectures that can deliver efficient power-performance executions. Unfortunately, the integration of FPGA acceleration modules into software is a challenging endeavor that does not have a seamless programming model. This paper proposes HardCloud (www.hardcloud.org), an extension of the OpenMP 4.X standard that eases the task of offloading FPGA modules to cluster accelerators.
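
For orientation, the snippet below shows a standard OpenMP 4.x target offload of a simple loop in C++; it illustrates only the base directive syntax that an OpenMP 4.X extension would build on, and does not use any HardCloud-specific clauses.

    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
        float *pa = a.data(), *pb = b.data(), *pc = c.data();

        // Offload the loop to an available accelerator device; OpenMP falls back
        // to the host if no device is present.
        #pragma omp target teams distribute parallel for \
                map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
        for (int i = 0; i < n; ++i)
            pc[i] = pa[i] + pb[i];

        std::printf("c[0] = %f\n", pc[0]);
        return 0;
    }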


A Case Study in Using OpenCL on FPGAs: Creating an Open-Source Accelerator of the AutoDock Molecular Docking Software

L. Solis-Vasquez, A. Koch, in: FSP Workshop 2018; Fifth International Workshop on FPGAs for Software Programmers, 2018, pp. 1-10

In recent years, OpenCL has been increasingly adopted as it enables software programmers to harness the performance and power efficiency of FPGAs. Despite simplifying the FPGA programming challenge, achieving high performance and energy efficiency with OpenCL is still a difficult task. In order to further contribute to the advancement of OpenCL usage for FPGAs, we utilize a realistic application scenario as our case study: the AutoDock molecular docking software. While OpenCL has proven its effectiveness in accelerating molecular docking on GPUs, for FPGA-based AutoDock accelerators it struggles with difficult design patterns. Besides complex multiple-producers to single-consumer datapaths, these include time-intensive loops with variable runtimes. Therefore, this work presents the design and optimization steps for implementing AutoDock in OpenCL targeting an Arria 10 FPGA, as well as a corresponding execution-runtime and energy-efficiency evaluation. Applying these techniques improved the performance of the initial OpenCL implementation for FPGAs by three orders of magnitude, with the final version of the code now yielding speed-ups of up to ~2.7x and energy-efficiency gains of up to ~1.8x over the original serial AutoDock version executing on a current-generation CPU.


