
Data Center Building O

Axel Keller

Contact

Paderborn Center for Parallel Computing (PC2)

Technical Employee - System Administrator

Phone: +49 5251 60-1723
Fax: +49 5251 60-1714
Office: X0.122

Publications



2018

A Data Structure for Planning Based Workload Management of Heterogeneous HPC Systems

A. Keller, in: Proc. Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP), Springer, 2018, pp. 132-151

This paper describes a data structure and a heuristic to plan and map arbitrary resources in complex combinations while applying time-dependent constraints. The approach is used in the planning-based workload manager OpenCCS at the Paderborn Center for Parallel Computing (PC²) to operate heterogeneous clusters with up to 10,000 cores. We also show performance results derived from four years of operation.
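
To give a flavor of the planning idea sketched in the abstract, the following minimal Python sketch stores, per resource, a timeline of reservations and searches for the earliest start time at which a request of a given duration fits. All names (Reservation, ResourceTimeline, earliest_start) are invented for illustration; the actual OpenCCS data structures and heuristics are considerably more elaborate.

    # Minimal sketch of a planning (rather than queuing) data structure:
    # each resource keeps a timeline of reservations, and a new request is
    # placed into the earliest gap that is long enough. Hypothetical names;
    # not the actual OpenCCS implementation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Reservation:
        start: int   # start time, e.g. seconds since epoch
        end: int     # end time

    @dataclass
    class ResourceTimeline:
        reservations: List[Reservation] = field(default_factory=list)

        def earliest_start(self, duration: int, not_before: int) -> int:
            """Return the earliest start >= not_before where 'duration' fits."""
            candidate = not_before
            for r in sorted(self.reservations, key=lambda r: r.start):
                if candidate + duration <= r.start:
                    break                      # gap before this reservation is large enough
                candidate = max(candidate, r.end)
            return candidate

        def reserve(self, start: int, duration: int) -> None:
            self.reservations.append(Reservation(start, start + duration))

    # Usage: plan a 2-hour job on a node that is already busy from t=0 to t=3600.
    node = ResourceTimeline([Reservation(0, 3600)])
    t = node.earliest_start(duration=7200, not_before=0)   # -> 3600
    node.reserve(t, 7200)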


2012

Cost-aware and SLO Fulfilling Software as a Service

O. Niehörster, J. Simon, A. Brinkmann, A. Keller, J. Krüger, Journal of Grid Computing (2012), 10(3), pp. 553-577

Virtualization technology makes data centers more dynamic and easier to administrate. Today, cloud providers offer customers access to complex applications running on virtualized hardware. Nevertheless, large virtualized data centers become stochastic environments, and the simplification on the user side leads to many challenges for the provider, who has to find cost-efficient configurations and deal with dynamic environments to ensure service level objectives (SLOs). We introduce a software solution that reduces the degree of human intervention needed to manage clouds. It is designed as a multi-agent system (MAS) and placed on top of the Infrastructure as a Service (IaaS) layer. Worker agents allocate resources, configure applications, check the feasibility of requests, and generate cost estimates. They are equipped with application-specific knowledge that allows them to estimate the type and number of necessary resources. During runtime, a worker agent monitors its job and adapts its resources to ensure the specified quality of service, even in noisy clouds where job instances are influenced by other jobs. The worker agents interact with a scheduler agent, which takes care of limited resources and performs cost-aware scheduling by assigning jobs to times with low costs. The whole architecture is self-optimizing and able to use public or private clouds. Building a private cloud requires finding a mapping of virtual machines (VMs) to hosts. We present a rule-based mapping algorithm for VMs. It offers an interface where policies can be defined and combined in a generic way. The algorithm performs the initial mapping at request time as well as a remapping during runtime, and it deals with policy and infrastructure changes. An energy-aware scheduler and the availability of cheap resources provided by a spot market are analyzed. We evaluated our approach by building an SaaS stack that assigns resources according to an energy function and ensures the SLOs of two different applications, a brokerage system and a high-performance computing software. Experiments were performed on a real cloud system and in simulations.
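
The cost-aware part of such a scheduler can be illustrated with a toy calculation: given an hourly price forecast and a job duration, pick the start hour that minimizes the summed cost while still meeting the deadline. The helper below is an invented sketch of that idea, not the scheduler agent described in the paper.

    # Toy cost-aware slot selection: choose the start hour that minimizes the
    # summed (e.g., energy or spot) price over the job's duration while still
    # finishing by the deadline. Invented helper, not the paper's scheduler.

    def cheapest_start(prices, duration_h, deadline_h):
        """prices: price per hour, indexed by hour; returns (start_hour, cost)."""
        best = None
        for start in range(0, deadline_h - duration_h + 1):
            cost = sum(prices[start:start + duration_h])
            if best is None or cost < best[1]:
                best = (start, cost)
        return best

    # A 3-hour job that must finish within 8 hours, given an hourly price curve.
    prices = [30, 28, 25, 10, 9, 12, 27, 29]
    print(cheapest_start(prices, duration_h=3, deadline_h=8))   # -> (3, 31)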


2011

An Energy-Aware SaaS Stack

O. Niehörster, A. Keller, A. Brinkmann, in: Proc. Int. Meeting of the IEEE Int. Symp. on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), 2011

We present a multi-agent system on top of the IaaS layer consisting of a scheduler agent and multiple worker agents. Each job is controlled by an autonomous worker agent, which is equipped with application-specific knowledge (e.g., performance functions) allowing it to estimate the type and number of necessary resources. During runtime, the worker agent monitors the job and adapts its resources to ensure the specified quality of service, even in noisy clouds where job instances are influenced by other jobs. All worker agents interact with the scheduler agent, which takes care of limited resources and performs cost-aware scheduling by assigning jobs to times with low energy costs. The whole architecture is self-optimizing and able to use public or private clouds.
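
The feedback loop of such a worker agent can be pictured roughly as follows: the agent compares the measured progress of its job against the quality-of-service target and scales its resources up or down. This is only an illustrative sketch with assumed interfaces (the job object, measure_throughput, and set_vm_count are hypothetical), not the agents implemented in the paper.

    # Illustrative control loop of a worker agent: monitor the job and adapt
    # its resources so that the agreed quality of service is kept, even if
    # co-located jobs in a "noisy" cloud slow it down. All interfaces are
    # hypothetical placeholders.

    import time

    def worker_agent_loop(job, target_throughput, min_vms=1, max_vms=16,
                          interval_s=60):
        vms = job.vm_count()
        while not job.finished():
            measured = job.measure_throughput()      # e.g., tasks per minute
            if measured < 0.9 * target_throughput and vms < max_vms:
                vms += 1                              # falling behind: scale out
            elif measured > 1.2 * target_throughput and vms > min_vms:
                vms -= 1                              # ahead of target: save cost
            job.set_vm_count(vms)
            time.sleep(interval_s)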


Rule Based Mapping of Virtual Machines in Clouds

C. Kleineweber, A. Keller, O. Niehörster, A. Brinkmann, in: Proc. Int. Conf. on Parallel, Distributed and Network-Based Computing (PDP), 2011

Infrastructure as a Service providers use virtualization to abstract their hardware and to create a dynamic data center. Virtualization enables the consolidation of virtual machines as well as their migration to other hosts during runtime. Each provider has its own strategy to operate a data center efficiently. We present a rule-based mapping algorithm for VMs, which is able to automatically adapt the mapping between VMs and physical hosts. It offers an interface where policies can be defined and combined in a generic way. The algorithm performs the initial mapping at request time as well as a remapping during runtime, and it deals with policy and infrastructure changes. We extended the open-source IaaS solution Eucalyptus and evaluated it with typical policies: maximizing compute performance and VM locality to achieve high performance, and minimizing energy consumption. The evaluation was done on state-of-the-art servers in our own data center and by simulations using a workload from the Parallel Workload Archive. The results show that our algorithm performs well in dynamic data center environments.
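
A rule-based mapper of this kind can be imagined as a set of scoring policies that are combined into a single ranking of candidate hosts. The sketch below uses invented names and data shapes (it is not the Eucalyptus extension from the paper); it only shows the generic idea of pluggable, weighted policies.

    # Sketch of generic, combinable placement policies: each policy scores a
    # (vm, host) pair, and the mapper picks the feasible host with the best
    # weighted score. Hypothetical interfaces, not the paper's implementation.

    def fits(vm, host):
        return (vm["cores"] <= host["free_cores"] and
                vm["mem_gb"] <= host["free_mem_gb"])

    def policy_consolidate(vm, host):
        # Prefer already-loaded hosts so idle ones can be powered down.
        return host["used_cores"]

    def policy_performance(vm, host):
        # Prefer hosts with many free cores to reduce contention.
        return host["free_cores"]

    def map_vm(vm, hosts, policies):
        """policies: list of (policy_function, weight) pairs."""
        candidates = [h for h in hosts if fits(vm, h)]
        if not candidates:
            return None
        return max(candidates,
                   key=lambda h: sum(w * p(vm, h) for p, w in policies))

    # Example: weight consolidation twice as strongly as raw performance.
    hosts = [
        {"name": "n1", "free_cores": 8,  "free_mem_gb": 32, "used_cores": 8},
        {"name": "n2", "free_cores": 16, "free_mem_gb": 64, "used_cores": 0},
    ]
    vm = {"cores": 4, "mem_gb": 8}
    best = map_vm(vm, hosts, [(policy_consolidate, 2.0), (policy_performance, 1.0)])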


Virtualized HPC: a contradiction in terms?

G. Birkenheuer, A. Brinkmann, J. Kaiser, A. Keller, M. Keller, C. Kleineweber, C. Konersmann, O. Niehörster, T. Schäfer, J. Simon, M. Wilhelm, Software: Practice and Experience (2011)

System virtualization has become the enabling technology to manage the increasing number of different applications inside data centers. The abstraction from the underlying hardware and the provision of multiple virtual machines (VM) on a single physical server have led to a consolidation and more efficient usage of physical servers. The abstraction from the hardware also eases the provision of applications on different data centers, as applied in several cloud computing environments. In this case, the application need not adapt to the environment of the cloud computing provider, but can travel around with its own VM image, including its own operating system and libraries. System virtualization and cloud computing could also be very attractive in the context of high-performance computing (HPC). Today, HPC centers have to cope with both the management of the infrastructure and the applications. Virtualization technology would enable these centers to focus on the infrastructure, while the users, collaborating inside their virtual organizations (VOs), would be able to provide the software. Nevertheless, there seems to be a contradiction between HPC and cloud computing, as there are very few successful approaches to virtualizing HPC centers. This work discusses the underlying reasons, including management and performance aspects, and presents solutions to overcome the contradiction, including a set of new libraries. The viability of the presented approach is shown by evaluating a selected parallel, scientific application in a virtualized HPC environment.


2008

Enhancing SLA Provisioning by Utilizing Profit-Oriented Fault Tolerance

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Int. Conf. on Parallel and Distributed Computing and Systems (PDCS), 2008, pp. 212-218


Germany, Belgium, France, and Back Again: Job Migration using Globus

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Int. Conf. on Grid Computing and Applications (GCA), 2008


Implementation of Virtual Execution Environments for improving SLA-compliant Job Migration in Grids

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Int. Workshop on Scheduling and Resource Management for Parallel and Distributed Systems, 2008

Commercial Grid users demand contractually fixed QoS levels. Service Level Agreements (SLAs) are powerful instruments for describing such contracts. SLA-aware resource management is the foundation for realizing SLA contracts within the Grid. OpenCCS is such an SLA-aware RMS; it uses transparent checkpointing to cope with resource outages. It generates a compatibility profile for each checkpoint dataset, so that the job can be resumed even on other resources within the Grid. However, only a small number of Grid resources comply with such a profile. This paper describes the concept of virtual execution environments and how they increase the number of potential migration targets. The paper also describes how these virtual execution environments have been implemented within the OpenCCS resource management system.


Increasing Fault-tolerance by Introducing Virtual Execution Environments.

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, 2008


Job Migration and Fault Tolerance in SLA-aware Resource Management Systems

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Int. Conf. on Grid and Pervasive Computing (GPC), 2008, pp. 43-48

Contractually fixed service quality levels are mandatory prerequisites for attracting the commercial user to Grid environments. Service Level Agreements (SLAs) are powerful instruments for describing obligations and expectations in such a business relationship. At the level of local resource management systems, checkpointing and restart is an important instrument for realizing fault tolerance and SLA awareness. This paper highlights the concepts of migrating such checkpoint datasets to achieve the goal of SLA compliant job execution.


Paderborn, Belgien, Frankreich und zurück

M. Hovestadt, A. Keller, K. Voss, 2008


Quality Assurance of Grid Service Provisioning by Risk Aware Managing of Resource Failures

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Int. Conf. on Risks and Security of Internet and Systems, 2008


Virtual Execution Environments and the Negotiation of Service Level Agreements in Grid Systems

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Int. DMTF Academic Alliance Workshop on Systems and Virtualization Management: Standards and New Technologies, 2008

Service Level Agreements (SLAs) are of focal importance if commercial customers are to be attracted to the Grid. An SLA-aware resource management system has already been realized that is able to fulfill the SLAs of jobs even in the case of resource failures. To this end, it is able to migrate checkpointed jobs over the Grid. Here, virtual execution environments allow the number of potential migration targets to be increased significantly. In this paper we outline the concept of such virtual execution environments and focus on the SLA negotiation aspects.


Virtual Execution Environments for ensuring SLA-compliant Job Migration in Grids

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Int. Conf. on Services Computing (SCC), 2008

OpenCCS is an SLA-aware resource management system which uses transparent checkpointing of applications and migration of checkpoint datasets to ensure SLA compliance even in the case of resource outages. Migration of checkpoints presumes a high degree of compatibility between the source and target resource. Hence, even in large Grid systems, only a small number of resources are eligible migration targets. This short paper describes the concept of virtual execution environments and how they increase the number of potential migration targets. It also outlines an implementation within OpenCCS.


2007

Planning-based Scheduling for SLA-awareness and Grid Integration

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Workshop of the UK PLANNING AND SCHEDULING Special Interest Group (PlanSIG), 2007

Service level agreements (SLAs) are powerful instruments for describing all obligations and expectations in a business relationship. They are of focal importance for deploying Grid technology in commercial applications. The EC-funded project HPC4U (Highly Predictable Clusters for Internet Grids) aimed at introducing SLA-awareness in local resource management systems, while the EC-funded project AssessGrid introduced the notion of risk, which is associated with every business contract. This paper highlights the concept of planning-based resource management and describes the SLA-aware scheduler developed and used in these projects.


Transparent Cross Border Migration of Parallel Multi Node Applications

D. Battré, M. Hovestadt, O. Kao, A. Keller, K. Voss, in: Proc. Cracow Grid Workshop, Academic Computer Center CYFRNET, 2007, pp. 334-341


2006

Provision of Fault Tolerance with Grid-enabled and SLA-aware Resource Management Systems

F. Heine, M. Hovestadt, O. Kao, A. Keller, in: Parallel Computing: Current and Future Issues of High End Computing, 2006, pp. 113-120


The Virtual Resource Manager: Local Autonomy versus QoS Guarantees for Grid Applications

L. Burchard, F. Heine, H. Heiss, M. Hovestadt, O. Kao, A. Keller, B. Linnert, J. Schneider, in: Future Generation Grids, 2006, pp. 83-98

In this paper, we describe the architecture of the virtual resource manager VRM, a management system designed to reside on top of local resource management systems for cluster computers and other kinds of resources. The most important feature of the VRM is its capability to handle quality-of-service (QoS) guarantees and service-level agreements (SLAs). The particular emphasis of the paper is on the various opportunities to deal with local autonomy for resource management systems not supporting SLAs. As local administrators may not want to hand over complete control to the Grid management, it is necessary to define strategies that deal with this issue. Local autonomy should be retained as much as possible while providing reliability and QoS guarantees for Grid applications, e.g., specified as SLAs.


2005

A Quality-of-Service Architecture for Future Grid Computing Applications.

L. Burchard, F. Heine, M. Hovestadt, O. Kao, A. Keller, B. Linnert, in: Proc. IEEE Int. Parallel & Distributed Processing Symposium (IPDPS), 2005, pp. 132a-132a

Next-generation grid applications demand grid middleware with a flexible negotiation mechanism supporting various kinds of quality-of-service (QoS) guarantees. In this context, a QoS guarantee covers simultaneous allocations of various kinds of resources, such as processor runtime, storage capacity, or network bandwidth, which are specified in the form of service level agreements (SLAs). Currently, a gap exists between the capabilities of grid middleware and the underlying resource management systems concerning their support for QoS and SLA negotiation. In this paper we present an approach which closes this gap. Introducing the architecture of the virtual resource manager, we highlight its main QoS management features such as run-time responsibility, co-allocation, and fault tolerance.


SLA-aware Job Migration in Grid Environments

F. Heine, M. Hovestadt, O. Kao, A. Keller, in: Grid Computing: New Frontiers of High Performance Computing, 2005, pp. 185-201

Grid Computing promises efficient sharing of world-wide distributed resources, ranging from hardware, software, and expert knowledge to special I/O devices. However, although the main Grid mechanisms are already developed or are currently being addressed by tremendous research effort, the Grid environment still suffers from low acceptance in different user communities. Besides difficulties regarding intuitive and comfortable resource access, various problems related to reliability and Quality-of-Service exist when using the Grid. Users should be able to rely on their jobs having a certain priority at the remote Grid site and being finished by the agreed time, regardless of any provider problems. Therefore, QoS issues have to be considered not only in the Grid middleware but also in the local resource management systems at the Grid sites. However, most of the currently used resource management systems are not suitable for SLAs, as they support neither resource reservation nor mechanisms for job checkpointing and migration. The latter are mandatory for Grid providers as a rescue anchor in case of system failures or system overload. This paper focuses on SLA-aware job migration and presents work being performed in the EU-supported project HPC4U.


2004

An Architecture for SLA-aware Resource Management

L. Burchard, H. Heiss, M. Hovestadt, O. Kao, A. Keller, B. Linnert, in: Proceedings of the GI-Meeting on Operating Systems, 2004


SLA-aware Job Migration in Grid Environments

O. Kao, M. Hovestadt, A. Keller, in: Proc. Advanced Research Workshop on High Perfomance Computing: Technology and Applications, 2004


Virtual Resource Manager: An Architecture for SLA-aware Resource Management

L. Burchard, M. Hovestadt, O. Kao, A. Keller, B. Linnert, in: Proc. Int. Symposium on Cluster Computing and the Grid (CCGRID), 2004

The next generation Grid will demand the Grid middleware to provide flexibility, transparency, and reliability. This implies the application of service level agreements to guarantee a negotiated level of quality of service. These requirements also affect the local resource management systems providing resources for the Grid. Here, a gap between these demands and the features of today's resource management systems becomes apparent. In this paper we present an approach which closes this gap. Introducing the architecture of the virtual resource manager, we highlight its main features of runtime responsibility, resource virtualization, information hiding, autonomy provision, and smooth integration of existing resource management system installations.


2003

Scheduling in HPC Resource Management Systems: Queuing vs. Planning

M. Hovestadt, O. Kao, A. Keller, A. Streit, in: Proc. Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP), 2003, pp. 1-20

Nearly all existing HPC systems are operated by resource management systems based on the queuing approach. With the increasing acceptance of grid middleware like Globus, new requirements for the underlying local resource management systems arise. Features like advanced reservation or quality of service are needed to implement high-level functions like co-allocation. However, it is difficult to realize these features with a resource management system based on the queuing concept, since it considers only the present resource usage. In this paper we present an approach which closes this gap. By assigning start times to each resource request, a complete schedule is planned, and advanced reservations become easily possible. Based on this planning approach, functions like diffuse requests, automatic duration extension, or service level agreements are described. We think they are useful to increase the usability, acceptance, and performance of HPC machines. In the second part of this paper we present a planning-based resource management system which already covers some of the mentioned features.
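
To make the contrast concrete: a queuing system only decides which waiting job to dispatch onto currently free nodes, whereas a planning system assigns a start time to every known request. The toy sketch below plans requests given as (nodes, duration) tuples on a homogeneous cluster in submission order; it is a deliberately simplified illustration, not the scheduler presented in the paper.

    # Toy planning scheduler: every request (nodes, duration) immediately gets
    # a planned start time, so advanced reservations and SLA checks become
    # simple lookups in the schedule. Homogeneous cluster, no backfilling.

    import heapq

    def plan(requests, total_nodes):
        """requests: list of (needed_nodes, duration); returns planned start times."""
        busy = []             # min-heap of (release_time, nodes) for planned allocations
        free = total_nodes
        now = 0
        starts = []
        for needed, duration in requests:        # plan in submission order
            while free < needed:                 # advance virtual time until nodes free up
                release_time, nodes = heapq.heappop(busy)
                now = max(now, release_time)
                free += nodes
            starts.append(now)
            free -= needed
            heapq.heappush(busy, (now + duration, needed))
        return starts

    # Three requests on a 4-node cluster, given as (nodes, duration).
    print(plan([(2, 100), (4, 50), (1, 10)], total_nodes=4))   # -> [0, 100, 150]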


2001

Anatomy of a Resource Management System for HPC Clusters

A. Keller, A. Reinefeld, Annual Review of Scalable Computing (2001), 3, pp. 1-31

Workstation clusters are often not only used for high-throughput computing in time-sharing mode but also for running complex parallel jobs in space-sharing mode. This poses several difficulties to the resource management system, which must be able to reserve computing resources for exclusive use and also to determine an optimal process mapping for a given system topology. On the basis of our CCS software, we describe the anatomy of a modern resource management system. Like Codine, Condor, and LSF, CCS provides mechanisms for the user-friendly system access and management of clusters. But unlike them, CCS is targeted at the effective support of space-sharing parallel computers and even metacomputers. Among other features, CCS provides a versatile resource description facility, topology-based process mapping, pluggable schedulers, and hooks to metacomputer management.


Early Experiences with the EGrid Testbed

J. Gehring, A. Keller, A. Reinefeld, A. Streit, in: Proc. Int. Symposium on Cluster Computing and the Grid (CCGRID), 2001, pp. 130-137

The Testbed and Applications working group of the European Grid Forum (EGrid) is actively building and experimenting with a grid infrastructure connecting several research-based supercomputing sites located in Europe. The paper reports on our first feasibility study: running a self-migrating version of the Cactus simulation code across the European grid testbed, including "live" remote data visualization and steering from different demonstration booths at Supercomputing 2000, in Dallas, TX. We report on the problems that had to be resolved for this endeavour and identify open research challenges for building production-grade grid environments.


Lessons Learned While Operating Two Large SCI Clusters

A. Keller, A. Krawinkel, in: Proc. Int. Symposium on Cluster Computing and the Grid (CCGRID), 2001, pp. 303-310

The availability of commodity high-performance components for workstations and networks made it possible to build up large, PC-based compute clusters at modest cost. These clusters seem to be a realistic alternative to proprietary, massively parallel systems with respect to the price/performance ratio. However, from the administration point of view, those systems are often still solely a collection of autonomous nodes connected by a fast short area network. Therefore, aiming at providing all users with the best possible performance in daily work, a lot of work has to be done before obtaining the expected result. The paper describes the problem areas we had to cope with during the integration of two large SCI clusters (one with 64 and one with 192 processors) into the environment of the Paderborn Center for Parallel Computing.


2000

RsdEditor: A Graphical User Interface for Specifying Metacomputer Components

R. Baraglia, A. Keller, D. Laforenza, A. Reinefeld, in: Proc. Heterogenous Computing Workshop HCW at IPDPS, 2000, pp. 336-348

RsdEditor is a graphical user interface which produces specifications of computational resources. It is used in the RSD (Resource and Service Description) environment for specifying, registering, requesting and accessing resources and services in a metacomputer. RsdEditor was designed to be used by the administrators and users of metacomputing environments. At the administrator level, the GUI is used to describe the available computing and networking components of a metacomputer. At the user level, RsdEditor can be used to specify which characteristics of the computational resources are needed to execute a meta-application. This paper is organized as follows: it first introduces RsdEditor. It then briefly describes the RSD environment, and finally, it highlights various features and implementation issues of RsdEditor.


1999

Managing Clusters of Geographically Distributed High-Performance Computers

M. Brune, J. Gehring, A. Keller, A. Reinefeld, Concurrency: Practice and Experience (1999), 11(15), pp. 887-911

We present a software system for the management of geographically distributed high-performance computers. It consists of three components:
1. The Computing Center Software (CCS) is a vendor-independent resource management software for local HPC systems. It controls the mapping and scheduling of interactive and batch jobs on massively parallel systems.
2. The Resource and Service Description (RSD) is used by CCS for specifying and mapping hardware and software components of (meta-)computing environments. It has a graphical user interface, a textual representation, and an object-oriented API.
3. The Service Coordination Layer (SCL) coordinates the cooperative use of resources in autonomous computing sites. It negotiates between the applications' requirements and the available system services.


Multi-User System Management on SCI Cluster

M. Brune, A. Keller, A. Reinefeld, in: SCI - Scalable Coherent Interface: Architecture and Software for High Performance Compute Clusters, 1999, pp. 443-460

The growing maturity of hardware and software components has tempted researchers to build very large SCI clusters with several hundred processors that are operated as high-performance compute servers in multi-user mode. In this chapter, we present the Computing Center Software (CCS), a resource management software for user access and system administration of high-performance compute clusters. It has been in day-to-day use since 1992 on various parallel systems and has recently been adapted to the management of SCI clusters. CCS provides pluggable schedulers, optimal space partitioning for multiple users, reliable user access, and powerful tools for specifying resources and services by means of a specification language and a graphical user interface. After a brief introduction in the remainder of this section, we describe the CCS system architecture and the characteristics of its resource description facilities.


Resource Management for High-Performance PC Clusters

M. Brune, A. Keller, A. Reinefeld, in: Proc. Int. Conf. on High-Performance Computing and Networking (HPCN), 1999, pp. 270-280

With the recent availability of cost-effective network cards for the PCI bus, researchers have been tempted to build up large compute clusters with standard PCs. Many of them are operated with workstation cluster management software in high-throughput or single-user mode. For very large clusters with more than 100 PEs, however, it becomes necessary to implement a full-fledged resource management software that allows partitioning the system for multi-user access. In this paper, we present our Computing Center Software (CCS), which was originally designed for managing massively parallel high-performance computers and has now been adapted to modern workstation clusters. It provides:
- partitioning of exclusive and non-exclusive resources,
- hardware-independent scheduling of interactive and batch jobs,
- open, extensible interfaces to other resource management systems,
- a high degree of reliability.


Specifying Resources and Services in Metacomputing Systems

M. Brune, J. Gehring, A. Keller, A. Reinefeld, in: High-Performance Cluster Computing: Architecture and Systems, 1999, pp. 186-200

With a steadily increasing number of services, metacomputing is now gaining importance in science and industry. Virtual organizations, autonomous agents, mobile computing services, and high-performance client–server applications are among the many examples of metacomputing services. For all of them, resource description plays a major role in organizing access, use, and administration of the computing components and software services. We present a generic Resource and Service Description (RSD) for specifying the hardware and software components of (meta-) computing environments. Its graphical interface allows metacomputer users to specify their resource requests. Its textual counterpart gives service providers the necessary flexibility to specify topology and properties of the available system and software resources. Finally, its internal object-oriented representation is used to link different resource management systems and service tools. With these three representations, our generic RSD approach is a key component for building metacomputer environments.


1998

CCS Resource Management in Networked HPC Systems

A. Keller, A. Reinefeld, in: Proc. Heterogenous Computing Workshop (HCW) at IPPS, 1998, pp. 44-56

CCS is a resource management system for parallel high-performance computers. At the user level, CCS provides vendor-independent access to parallel systems. At the system administrator level, CCS offers tools for controlling (i.e., specifying, configuring, and scheduling) the system components that are operated in a computing center; hence the name "Computing Center Software". CCS provides: hardware-independent scheduling of interactive and batch jobs; partitioning of exclusive and non-exclusive resources; open, extensible interfaces to other resource management systems; a high degree of reliability (e.g., automatic restart of crashed daemons); and fault tolerance in the case of network breakdowns. The authors describe CCS as one important component for the access, job distribution, and administration of networked HPC systems in a metacomputing environment.


RSD - Resource and Service Description

M. Brune, J. Gehring, A. Keller, A. Reinefeld, in: Proc. Int. Conf. on High-Performance Computing Systems (HPCS), 1998

RSD (Resource and Service Description) is a scheme for specifying resources and services in complex heterogeneous computing systems and metacomputing environments. At the system administrator level, RSD is used to specify the available system components, such as the number of nodes, their interconnection topology, CPU speeds, and available software packages. At the user level, a GUI provides a comfortable, high-level interface for specifying system requests. A textual editor can be used for defining repetitive and recursive structures. This gives service providers the necessary flexibility for fine-grained specification of system topologies, interconnection networks, and system- and software-dependent properties. All these representations are mapped onto a single, coherent internal object-oriented resource representation. Dynamic aspects (like network performance, availability of compute nodes, and compute node loads) are traced at runtime and included in the resource description to allow for optimal process mapping and dynamic task load balancing at the metacomputer level. This is done in a self-organizing way, with human system operators becoming involved only when new hardware or software components are installed.
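
The flavor of such a hierarchical, attribute-based description can be illustrated with a small object model: nodes carry attributes, may contain sub-resources, and a request is matched against the tree. The class and attribute names below are invented for illustration and do not reproduce the actual RSD syntax or API.

    # Illustrative object model for a hierarchical resource description:
    # each node has attributes and optional children, and a request is a set
    # of attribute constraints matched against the tree. Invented names only;
    # this is not the real RSD representation.

    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class ResourceNode:
        name: str
        attributes: Dict[str, Any] = field(default_factory=dict)
        children: List["ResourceNode"] = field(default_factory=list)

        def find(self, constraints: Dict[str, Any]) -> List["ResourceNode"]:
            """Return all nodes in the subtree whose attributes satisfy the request."""
            hits = []
            if all(self.attributes.get(k) == v for k, v in constraints.items()):
                hits.append(self)
            for child in self.children:
                hits.extend(child.find(constraints))
            return hits

    # A cluster with two node types, described recursively.
    cluster = ResourceNode("cluster", {"interconnect": "SCI"},
        [ResourceNode(f"node{i}", {"cpus": 2, "os": "Linux"}) for i in range(4)] +
        [ResourceNode(f"fat{i}", {"cpus": 8, "os": "Linux"}) for i in range(2)])

    # User-level request: "any Linux node with 8 CPUs".
    matches = cluster.find({"os": "Linux", "cpus": 8})   # -> the two "fat" nodes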


Specifying Resources and Services in Metacomputing Environments

M. Brune, J. Gehring, A. Keller, B. Monien, Parallel Computing (1998), 24, pp. 1751-1776

With a steadily increasing number of services, metacomputing is now gaining importance in science and industry. Virtual organizations, autonomous agents, mobile computing services, and high-performance client–server applications are among the many examples of metacomputing services. For all of them, resource description plays a major role in organizing access, use, and administration of the computing components and software services. We present a generic Resource and Service Description (RSD) for specifying the hardware and software components of (meta-) computing environments. Its graphical interface allows metacomputer users to specify their resource requests. Its textual counterpart gives service providers the necessary flexibility to specify topology and properties of the available system and software resources. Finally, its internal object-oriented representation is used to link different resource management systems and service tools. With these three representations, our generic RSD approach is a key component for building metacomputer environments.


1997

A Closer Step towards Management of Metacomputing-Resources

M. Brune, C. Hellmann, A. Keller, in: Proc. Workshop Hypercomputing at the ITG/GI-Conference Architektur von Rechensystemen, 1997


